TIL that PowerShell can use internal CLR generic reference type names like that! But really, please don't do that...
[System.Nullable``1[[System.Int32]]]
[Nullable[int]]
... much easier to read 
You can ignore this error; for whatever reason, the NuGet client unexpectedly terminated the connection, and as a result ProGet stopped writing bytes. Not really anything to worry about.
The diagnostic center isn't for proactive monitoring; it's more for diagnostic purposes. So unless users are reporting a problem, you don't need to check it.
Hi @jimbobmcgee ,
Thanks for all the details; we plan to review/investigate this via OT-518 in an upcoming maintenance release, likely within the next few two-week release cycles.
-- Dean
Hi @andreas-unverdorben_1551 ,
npmjs.org primarily serves static content and runs on massive server farms in Microsoft's datacenters.
Your ProGet server is much less powerful and does not serve static content. Not only is every request dynamic (authentication, authorization, vulnerability checking, license checking, etc.), but most requests (such as "what is the latest version of package X") need to be forwarded to npmjs.org and aggregated with local data.
So, a much less powerful server doing a lot more processing is going to be a little slower ;)
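To give a sense of why the aggregation step is expensive, here's a minimal sketch of what answering "latest version of X" across connectors involves (hypothetical function names; the real ProGet implementation is nothing this simple):

```python
# Sketch: a "latest version" query must hit every upstream connector,
# merge the results with local data, then pick the highest version.
# Function and package names here are made up for illustration.

def latest_version(package, connectors, local_versions):
    """Query each connector, merge with locally-known versions,
    and return the highest version seen."""
    all_versions = set(local_versions)
    for fetch in connectors:  # each call is a remote HTTP round-trip
        all_versions.update(fetch(package))
    # naive numeric sort; real package servers use full semver rules
    return max(all_versions, key=lambda v: tuple(int(p) for p in v.split(".")))

# Example: two upstreams plus a locally-cached version
npmjs = lambda pkg: ["1.0.0", "2.1.0"]
mirror = lambda pkg: ["2.0.0"]
print(latest_version("left-pad", [npmjs, mirror], ["1.5.0"]))  # → 2.1.0
```

Every one of those connector calls is a network round-trip that a static file server like npmjs.org simply never has to make.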
Running ProGet in a server cluster will certainly help.
Cheers,
Dean
Hi @mmaharjan_0067 ,
It sounds like you're on the right track with researching this; your reverse proxy is definitely "breaking things" somehow.
Based on what you wrote, it sounds like your reverse proxy is terminating the request because there's no output from the server after a while. The "no output" is expected, since assembling the upload takes quite some time, and that's likely where the "operation cancelled" would be coming from.
I would look there and see if you can adjust timeouts. As for pgutil, here's the code used to perform the multi-part upload:
https://github.com/Inedo/pgutil/blob/thousand/Inedo.ProGet/AssetDirectories/AssetDirectoryClient.cs#L197
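If your proxy happens to be nginx, for example, the relevant knobs look something like this (a sketch with hypothetical values; check your own proxy's documentation for the equivalents):

```nginx
# Allow long-running uploads to complete before the proxy gives up.
# Directive names are nginx's; other proxies have equivalents.
location / {
    proxy_pass           http://proget-server;
    proxy_read_timeout   600s;  # wait up to 10 min for a response
    proxy_send_timeout   600s;  # allow slow request bodies
    client_max_body_size 2g;    # permit large asset uploads
}
```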
-- Dean
Hi @kichikawa_2913,
We see multiple connectors pretty often, and it rarely presents a problem.
The main downside comes in the overhead of aggregation; for some queries, like "list all package versions", each connector needs to be queried and the results aggregated. So it could cause performance issues for very high-traffic feeds; at least, that's what we see on the support side of things.
However, if you plan on using a package-approval workflow, then it won't be a problem, as your approved-npm feed wouldn't have any connectors.
Hope that gives some insight,
Dean
@joel-shuman_8427 thanks for the heads up!
I just updated it
https://github.com/Inedo/inedo-docs/blob/master/CONTRIBUTING.md
Hi mwatt_5816,
BuildMaster does support "release-less" builds, though you may need to enable it under the application's Settings > Configure Build & Release Features > Set Release Usage to optional. That will allow you to create a build that's not associated with a release.
It's also possible to do "ad-hoc" builds (i.e. builds with no pipeline), but we don't make it easy to do in the UI because it's almost always a mistake (once you already have pipelines configured). So in your case, I think you should create a secondary pipeline for this purpose.
-- Dean
Hi @d-kimmich_0782 ,
This behavior is by design; the "publish date" in ProGet 2025 and earlier is whenever a package is added to a feed. This means that, even if a package was published to NuGet.org 3 years ago, the "publish date" will be whenever it was first cached.
However, in ProGet 2025.14 and later, you can change this behavior under "Admin > Advanced Settings > Use Connector Publish Date". This will be the default behavior in ProGet 2026.
This ties into a similar set of rules worth investigating, which we call Recently Published & Aged Rules:
https://docs.inedo.com/docs/proget/sca/policies#recently-published-aged-rules-proget-2026-preview
-- Dean
Hi @Nils-Nilsson,
Thanks for the report; this was a trivial fix and I just committed the change to PG-3138, which will be in the next maintenance release (Oct 24).
As an FYI, if you uncheck "Restrict viewing download statistics to Feed Administrators" on the Feed Permissions page, then the error shouldn't occur.
-- Dean
Hi @parthu-reddy ,
Thanks for the additional information. Thinking about it further, I suspect this was a temporary network outage. It could have been DNS related, or who knows what.
As for your configuration...
5000 is definitely too high; set this to 100-500 at most. If you're running a load-balanced cluster, this should be done at the load balancer instead.
I was incorrect about api.nuget.org; it's also used by the V3 API, not just V2, as I'd thought. So please disregard that.
It's unlikely metadata caching will help, but you could try it. That's a relatively short-lived cache meant for traffic bursts, and it's not really going to help with a network outage.
-- Dean
Hi @parthu-reddy,
Thanks for the detailed investigation notes. From what you've described, the behavior doesn't appear to be related to SQL Server locking. It's most certainly related to blocking/waiting on the 200+ outgoing connections to receive a response from api.nuget.org.
If api.nuget.org is running slow, then ProGet will run slow. There's really no way around this when you use connectors, as ProGet is effectively forwarding client requests.
Most likely, someone or some build server is making legacy/V2 NuGet API requests (they look like ODATA/SQL queries on the URL), and those are being forwarded. The V3 requests are just JSON files.
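For illustration, the two API styles are easy to tell apart in access logs; the request shapes look roughly like this (paths are representative examples, not exact ones from your server):

```
# legacy V2 (ODATA-style query in the URL):
GET /api/v2/FindPackagesById()?id='Newtonsoft.Json'&$orderby=Version

# V3 (plain JSON documents):
GET /v3/registration5-semver1/newtonsoft.json/index.json
```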
-- Dean
Hi @0xFFFFFFFF,
Thanks for the detailed write-up and explanation.
It looks trivial to add size and upload-time. Adding versions is probably simple, assuming it's just an array of distinct version numbers (we already have all the "packages" in context).
It's probably fine, but just to be safe we will do this in ProGet 2026, given some of the subtle behavioral changes you mentioned ("tools like uv to make use of heuristics to speed up downloads"). For someone with an overloaded server, that might "put it over the edge".
ProGet 2026 may be ready by the end of the month, so it's not so far away.
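For reference, the fields in question resemble the PEP 700 additions to the Simple API's JSON responses; a sketch of the shape (not ProGet's actual output, and with hashes/URLs elided):

```json
{
  "meta": { "api-version": "1.1" },
  "name": "example-package",
  "versions": ["1.0.0", "1.1.0"],
  "files": [
    {
      "filename": "example_package-1.1.0-py3-none-any.whl",
      "url": "...",
      "hashes": { "sha256": "..." },
      "size": 10342,
      "upload-time": "2024-05-01T12:00:00Z"
    }
  ]
}
```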
-- Dean
Hi @Nils-Nilsson ,
Thanks for the bug report; it seems the issue is on the "view" page, which is mistakenly editing the wrong data.
PG-3258 will fix this in the next maintenance release (April 17), but a prerelease is now available (inedo/proget:25.0.26-ci.6) if you'd like to try it.
-- Dean
@rcpa0 thanks for letting us know; we'll try to fix this the next time we make some changes to pgutil. It should only print something like "Version is not a valid semantic version."
Hi @geraldizo_0690 , unfortunately we didn't get a chance to review this for the last maintenance release, but it'll be in the next one (April 17) via PG-3257. It's also available via prerelease if you'd like (inedo/proget:25.0.26-ci.5)
-- Dean
Hi @davi-morris_9177 ,
I'm afraid this behavior is expected because kotlin-stdlib-2.3.20.jar is not in the downloaded index file. There is no file-listing API, which means there's otherwise no way to know what should be there.
The Maven client "blindly downloads" some files and "silently accepts" a 404 for others. Once a file has been downloaded (cached), ProGet knows about it. So I'm afraid you really need to download the artifacts first if you want to build a curated feed.
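If you do need to pre-populate the cache, the downloads are easy to script because the standard Maven repository layout makes artifact paths predictable. A minimal sketch (the feed URL is a placeholder, not a real endpoint):

```python
# Build the standard Maven repository path for an artifact, so it can
# be requested once and cached by the feed. The feed URL below is
# hypothetical; substitute your own ProGet feed endpoint.

def maven_path(group_id, artifact_id, version, ext="jar"):
    """In the standard Maven layout, groupId dots become directories:
    group/artifact/version/artifact-version.ext"""
    return "{}/{}/{}/{}-{}.{}".format(
        group_id.replace(".", "/"), artifact_id, version,
        artifact_id, version, ext)

print(maven_path("org.jetbrains.kotlin", "kotlin-stdlib", "2.3.20"))
# → org/jetbrains/kotlin/kotlin-stdlib/2.3.20/kotlin-stdlib-2.3.20.jar

# To warm the cache, request each artifact once (placeholder URL):
# urllib.request.urlopen(
#     "https://proget.example.com/maven2/curated/" + maven_path(...))
```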
-- Dean
Check out our Licenses for Non-production / Testing Environments for the full details.
But if it's a short-term migration-testing scenario, then using an existing or trial key is fine. For a long-term environment, a separate license is required.
-- Dean
Hi @adoran_4131 ,
Keep in mind that ProGet and Artifactory work differently; Artifactory is basically a "file server" that just does "blind proxying" of HTTP requests. That's why it doesn't matter what URLs you put in. ProGet, on the other hand, is a package server, and will index the remote repository first. That's where things are failing right now.
It's failing because the index file is not being found based on the input. This is what a Debian repository is supposed to look like:
```
dists/
  {distribution-name}/
    Release
    Release.gpg
    main/
      binary-amd64/
        Packages
        …
pool/
  main/
    …
```
So, the distribution-name is incorrect. I thought it might be binary based on the instructions, but these URLs both return a 404:
https://pkg.jenkins.io/debian-stable/dists/binary/Release
https://pkg.jenkins.io/debian-stable/dists/binary/InRelease
So, it must be something else; whatever apt sends by default, I guess?
Anyway, if you can find what distribution it should be, then it will work. Perhaps consider just doing something like this:
sudo apt update -o Debug::Acquire::http=true
That will show you the HTTP requests being made, and you can see exactly the URL for the Release file, which you can then use to reverse-engineer the distribution name.
Let us know what you find!
Hope that helps,
Steve
Hi @parthu-reddy ,
Sorry for the slow reply; check out the link I sent you via ticket! Just let us know when you upload the files and we'll take a look then!
Thanks.
-- Dean