Not yet; I saw an internal presentation on it, but I don't know the communication plan.
Feel free to check with @apxltd directly... email or slack seem to be best ;)
Option 1. That digest references the blob which represents your manifest.
According to Docker's Content Digest Docs, option 2 (the Docker-Content-Digest header) does not reference a blob; it's just a hash of the response itself.
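To illustrate the distinction, here's a minimal sketch of how such a content digest is computed (the function name is mine, not from any Docker library): it's simply the sha256 of the exact bytes served, prefixed with the algorithm name.

```python
import hashlib

def manifest_digest(response_body: bytes) -> str:
    """Sketch: a Docker-style content digest is the sha256 of the exact bytes served."""
    return "sha256:" + hashlib.sha256(response_body).hexdigest()

# Note: hashing a re-serialized copy of the JSON would give a different digest;
# only the byte-for-byte response body matches.
body = b'{"schemaVersion": 2}'
print(manifest_digest(body))
```

This is also why option 2's header matches option 1's digest only when the response body is the exact manifest bytes.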
We made and tested several changes to the installer a while back, but it's not something we regularly test/verify.
Please share what you find works! Thanks.
In ProGet 5.3, we plan to have a couple of tabs on each Tag (i.e. container image) that would provide this info: Metadata (a key/value list of a bunch of stuff) and Layers (details about each of the image's layers).
That might help, but otherwise, we have retention policies which are designed to clean up old and unused images. We'll also have a way to detect which images are actually being used :)
As @apxltd mentioned, we've got a whole bunch planned for ProGet 5.3.
I've logged this to our internal project document, and if it's easy to implement in ProGet 5.2 (I can't imagine it wouldn't be), we'll log it as a bug and ship it in a maintenance release.
Do note, this is not an IMAGE description, it's a REPOSITORY (i.e. a collection of images with the same name, like MyCoolContainerApp) description; so this means the description will be there on all images/tags in the repository.
You're right, I guess that's showing the "layers" instead of the "tags"; I think it should be showing container registries separately (they're not really feeds), but that's how it's represented behind the scenes now.
Anyways we are working on ProGet 5.3 now; there's a whole bunch of container improvements coming, so I've noted this on our internal project document, to make sure we get a better display for container registries.
Hello;
This error indicates that nuget.org is having some kind of networking/performance problems, and not responding to that request. NuGet.org is owned/maintained by Microsoft, so there's really nothing you can do, aside from wait for the problem to go away on their end.
Great question!
The answer is, unfortunately, buried in the Formal Specifications. But long story short, you'll want to wrap the Get-Asset operation in a with executionPolicy = always block.
For more information, note that there are three modes of execution:
So what's happening is that Get-Asset will never run in a Collect pass, whereas Ensure-DscResource will always run in a Collect pass (but only in Collection mode). By forcing Get-Asset to always execute, it will run even in the Collect pass.
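For example, forcing the operation to always execute might look something like this (a sketch only; the Get-Asset arguments here are placeholders, not the exact signature):

```
# Get-Asset normally skips the Collect pass; forcing it to always execute
# means the file is already present when Ensure-DscResource collects drift.
with executionPolicy = always
{
    Get-Asset
    (
        Asset: my-asset.zip,
        To: $WorkingDirectory
    );
}
```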
By the way: I would love to find a way to properly document the answer to this, so users don't get frustrated; any suggestions on where to edit the contents?
Nice OtterScript :)
This will work, the variables won't "leak over" or anything like that.
I think you want to use the $Eval function. Note that the grave apostrophe (`) is an escape character.
set $BuildMaster_Test_1 = Test;
set $Number = 1;
Log-Debug `$BuildMaster_Test_$Number;
Log-Debug $Eval(`$BuildMaster_Test_$Number);
So the output would be:
$BuildMaster_Test_1
Test
We've got some major container improvements coming in ProGet 5.3, and will revamp our product; hopefully we'll be able to present this pretty soon!
I think, once you see what we have planned, you'll want to change/improve your workflows to simplify things, and this may not even be necessary... anyways, stay tuned.
The ProGet Dockerfile is based on the latest stable version of mono; so with every maintenance release, it's whatever the latest mono version is at the time.
Hi Ali,
Sure, it would just be like $Variable2$Variable1 or ${Variable 2}${Variable 1}.
Check out documentation on Strings & Values in OtterScript to learn more.
The user running the install/upgrade needs to have db_owner rights on the ProGet database; otherwise, the installer will give a database access error.
@abm_4780 said in Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool:
Regarding using nginx, not exactly sure how this would help with Mono's network handling.
I really don't remember the details; it was a while ago. It had something to do with keep-alive connections, perhaps? It made no sense at all, but nginx fixed it. Later on, mono fixed whatever bug caused it.
Hello; you should be able to get the error message from Admin > Diagnostic Center, inside of ProGet. Hopefully that will give some insight as to where the underlying problem is...
The error message is "cvs.badguy does not have the Feeds_AddPackage privilege", which means that the api key you've configured does not have that privilege. Please add it.
@patchlings said in ProGet in docker/linux hanging after using all memory:
Most "linux container progets" run mono.exe right?
Actually, all of them run mono.exe. We've had some people build a WINE-version of our products as a container, but I'm not sure if that's any better...
Great, thanks for letting us know; the installer crashes when trying to read/parse the URL reservation (new CommonSecurityDescriptor() from the stack trace) to search if there's a conflicting one for the port you selected.
It shouldn't be possible, but it clearly is happening. So I guess we will add a try/catch around that.
We don't document how to set up the nginx proxy, but it's a fairly common setup, and the way to support HTTPS on Linux.
Yes, our plan is to move to .NET5 as Microsoft comes closer to releasing that and it's proven stable (likely next year).
Hello; we haven't had any other users report this, so I'm afraid I don't have any idea how to help. It certainly sounds like a memory leak, and it's most definitely a mono-specific bug; unfortunately these are extremely hard to track down, and sometimes are even platform specific (i.e. depending on host operating system).
I would make sure to upgrade to the latest version of the container image.
If you're not already using SQL Server for Linux (and you're using Postgres), then switch to SQL Server.
I would simplify configuration if you have a lot of connectors, etc.
Try putting an nginx proxy in front of it.
Once we have a clue about where the mono-bug is, we can at least consider ways to work-around it.
Hello; sorry on the slow reply, we still don't get notifications on replies to old posts... we may block replying to them, but in the meantime...
I think ProGet does support the deletion endpoint now (PG-1632), but just for manifests. Is there an official DELETE tag API?
To simplify the import/export options, BuildMaster 6.2 only supports backing up / restoring to a "Package Source" (i.e. a ProGet feed); we may add support for using a disk-based package source instead, but for now it's only a ProGet Universal feed.
BuildMaster 6.1.5 lets you back-up to a feed URL.
Note that BuildMaster 6.1.25 also has "Package Sources" (as a preview feature), which you can use to back-up all of your applications if you'd like.
This error is related to URL reservations; sometimes this happens when programs interact with the url registry.
You can use netsh http show urlacl to help identify where the problems are, and netsh http delete urlacl <bad-url> to try to remove them.
Here are some links that might be helpful:
Please let us know what you find!
Here's the current state of this feed type:
We did a pretty deep dive into PHP/Composer packages a while back, and our conclusion was that they were very difficult to implement due to the way they tightly integrate with git repositories.
However, we did this assessment without any user partners, and we know next to nothing about PHP, so it could be we misunderstood or looked at the wrong things. Maybe not everyone uses the tight git-repository integration? Hard to say. This is why we partner with customers now.
Since then, there haven't been too many requests for it, and we have no idea what the level of interest is. Please add to QA#2690 if you've got some insight.
You're the first person to inquire about it in over two years... but that same document talks about how we partner with users, and I'd encourage you to check out the RPM Thread -- we've got some great user partners in that!
Hello; this is a sign of network connectivity being overloaded.
Ultimately your best bet is to use load balancing; see How to Prevent Server Overload in ProGet to learn more.
But I've heard that putting an nginx reverse proxy in front of the Linux container helps (due to some poor network handling/bugs in Mono's code), or moving to the Windows/IIS stack.
Oh I see! Thanks; that would be a nice place to put it; we have a lot of links on that page, and are trying to reorganize it...

BuildMaster is licensed per user, so if the same group of users will be using these instances, then you can use the same key.
In any case, import/export is also in the free edition. What version are you using? It should be on the Admin page.
@PhilippeC sorry about that, but it should be available now;
It's a very exciting release, but we really wanted to roll the upgrades out slowly, and there was an inconsistency in the Hub's upgrade availability logic (for installation) and BuildMaster 6.1's logic (for notification).
Don't forget to check out the upgrade notes - https://inedo.com/support/kb/1766/buildmaster-6-2-upgrade-notes
Hello; the agents don't currently support this, though this is something that we've considered for a long time -- some of our key customers have requested this as well.
However, we've developed some interesting technical alternatives that make the pull-based agents largely moot (at least according to the folks who requested it originally); for example Romp and universal packages allow the client to self-install, or at least have an in-house BuildMaster or Otter instance that can manage installations based on packages.
Hello; this was fixed in GitHub-1.4.3 extension, but as a work-around you can just set BaseUrl in Admin > Advanced Settings
HTTP should be about same speed as FTP; you'll most certainly need to use chunked uploads.
But Asset directories don't support drop folders, and we don't have a reindexing function for asset directories. So unfortunately there's no supported way to handle this.
You might be able to "hack" something by going to the database and filesystem directly, but we obviously can't recommend it.
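The "chunked uploads" mentioned above basically mean splitting the file into fixed-size parts and sending them one at a time. A minimal sketch of the chunking half (the function name and chunk size are my own; each chunk would then be sent as its own HTTP request, and the exact upload endpoint depends on the server's API):

```python
import io

def iter_chunks(stream, chunk_size=8 * 1024 * 1024):
    """Yield fixed-size chunks from a binary stream; the last chunk may be smaller."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Example: a 20-byte buffer split into 8-byte chunks
sizes = [len(c) for c in iter_chunks(io.BytesIO(b"x" * 20), chunk_size=8)]
print(sizes)  # -> [8, 8, 4]
```

The advantage over a single large request is that a failed part can be retried without re-sending the whole file.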
I think so; that's a Postgres error message.
We deprecated Postgres a long while back, and new features aren't tested against Postgres database code.
Interesting; I know it's not ideal, but it works, and it may only be a slight inconvenience at best, since not many search for packages from the UI, I think.
https://github.com/Inedo/inedo-docs/blob/master/ProGet/feeds/pypi.md
If anyone finds more issues with this, please let us know and we can consider investing in a proper fix.
What user principal are you running the Inedo Agent under?
The default is LOCAL SYSTEM.
Unfortunately we didn't totally understand that detail when implementing the feed, either... so it's not trivial to fix.
We'd like to gauge the impact of not changing it; aside from this search oddity, were there any other problems? Are packages not installing?
I'm not very familiar with PyPI packages, but I know there are some oddities with - and _, and that they are sometimes supposed to be treated the same, and sometimes not. We don't totally understand all the rules, to be honest (even after reading the PEP 503 specification).
In this case, the package is actually websocket_client, not websocket-client.
See: https://pypi.org/project/websocket_client/
When you search for websocket_client in ProGet, it shows up, as expected.
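For reference, PEP 503 does define a normalization rule: names are compared case-insensitively, with runs of -, _, and . collapsed to a single -. A quick sketch of that rule (under which websocket_client and websocket-client would refer to the same project):

```python
import re

def normalize(name: str) -> str:
    """PEP 503 name normalization: lowercase, with runs of -, _, . collapsed to '-'."""
    return re.sub(r"[-_.]+", "-", name).lower()

print(normalize("websocket_client"))                                   # -> websocket-client
print(normalize("websocket_client") == normalize("websocket-client"))  # -> True
```

Whether a given client or server applies this normalization consistently is a separate question, which is part of why these oddities show up.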
Hello; I think this may have already been addressed in PG-1477, which was shipped in ProGet 5.2.0.
Can you try upgrading to latest version and try again?
Hi;
It's hard to say why this is happening; if you don't see it in the UI, you shouldn't see it in Visual Studio.
It could be related to cached packages in Visual Studio. The best way to diagnose this would be to attach Fiddler or Wireshark, so that Visual Studio is going through that, and then monitor the exact queries that Visual Studio is sending to ProGet, and find out if any of them are actually returning those packages. If so, then please share the details and we can try to investigate.
Otherwise clear all of your local NuGet caches.
Best,
Alana
I can't say why it's like that, probably because NuGet.org didn't have it a long time ago?
One important note: changing this could break a lot of the tools we do support and integrate with -- including older versions of the NuGet client that NuGet.org no longer supports.
Can you bring this up with the Dependabot team? It should be a totally trivial fix on their end, and it's not necessary at all for them to use it.
Hello;
What are you trying to do?
This definitely won't work for several reasons.
Ensure-File is an "Ensure" operation and will detect configuration drift and can optionally create the file if it doesn't exist; but in this case, it would be a 0-byte file named test.zip, which isn't a valid zip
Extract-ZipFile is an "Execute-only" operation, which means it will run only within a block (e.g. { ... }) if drift was detected (i.e. the file didn't exist in the operation before); but if that were the case, it would fail because it's not a valid zip file.
You probably want to use an Orchestration plan for this.
Please read through this eBook to get a feeling for how configuration and orchestration plans work: https://inedo.com/support/resources/ebooks/windows-first-iac-cca
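As a rough sketch of what the orchestration-plan approach might look like (the operation arguments here are placeholders, not verified signatures):

```
# In an orchestration plan, execute-only operations always run, so the
# download-then-extract sequence is straightforward: fetch the real zip
# first, then extract it.
Get-Asset
(
    Asset: test.zip,
    To: $WorkingDirectory
);

Extract-ZipFile
(
    Name: test.zip,
    Directory: $WorkingDirectory
);
```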
Hi there;
ProGet is quite optimized as far as software goes (check out ProGet Case Studies to see the scale that enterprises use it at), and a lot of our users have switched from competing tools and seen massive performance gains. But you need to put it on proper hardware.
Keep in mind that NuGet tools were designed to operate against NuGet.org, which is run on a massive server farm that serves primarily static content. You are making 1000's of requests, and each request to ProGet is comparatively expensive, because it proxies those requests to NuGet.org (assuming you have a connector configured), makes database connections, checks permissions, vulnerabilities, etc.
Each request to the server can result in several subsequent network requests... and it sounds like your ProGet server is a Win10 desktop... there's just no way it's going to keep up with developers hammering it with more powerful workstations.
Check out ProGet Free to ProGet Enterprise to see the performance recommendations we have, and other reasons organizations upgrade.
Hello; unfortunately we hit a few snags in getting the environment and PoC code running (a bit more variety in R/CRAN packages), and then we ran out of budgeted time :(
But it's still definitely on our roadmap and we're going to take another stab at it in November.
Hi @gravufo, sorry on the delay; this fell through the cracks on prioritization, but it's been merged just now and will be shipped in ProGet 5.2.15, shipping next Friday.
Thanks!
So to be clear, is this in a "configuration plan" or an "orchestration plan"? Can you give some more context as to how you're trying to use this (i.e. why do you want to unzip)? Can you share the full plan?
Hello; I've updated the documentation to clarify this, but it's available starting in ProGet 5.2.9. So, you'll need to upgrade to enable it :)
hi; please share your OtterScript and we might be able to get a better idea :)
Hello; when you go to add a server (Servers page > Add) to Otter, just select "WSMan / Powershell" as the agent type, and then you'll be able to enter the details for a server that has PowerShell remoting enabled.
Does that help?
Hi; that message is coming from the operating system, and ProGet does not have any sort of limit (we have customers with terabytes of packages); perhaps you've configured it to a different drive, or something. I'd check there :)
Hello; for this, or really any other Dockerized web application, you can set up a reverse proxy. There are a ton of ways to do this, but I'd suggest searching for "How to Configure Nginx as an HTTPS Reverse Proxy" and seeing which option(s) make sense for your environment.
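For reference, a minimal nginx HTTPS reverse-proxy server block might look something like this; the hostname, certificate paths, and backend port are all placeholders for your environment:

```nginx
server {
    listen 443 ssl;
    server_name proget.example.com;

    # Placeholder certificate paths
    ssl_certificate     /etc/nginx/certs/proget.crt;
    ssl_certificate_key /etc/nginx/certs/proget.key;

    location / {
        # Forward to the container's plain-HTTP port (placeholder)
        proxy_pass http://localhost:8624;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Don't cap upload sizes, since package/image pushes can be large
        client_max_body_size 0;
    }
}
```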