Hi @steviecoaster ,
Nice suggestion; I made a small change and you can now do pgutil apikeys create feed --feed=* to create such a key.
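One note: depending on your shell, you may need to quote the asterisk so it isn't expanded as a glob, e.g.:
pgutil apikeys create feed --feed="*"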
Thanks,
Alana
Hi @steviecoaster , FYI - I just added a note to the license restriction page; it looks like we don't mention the restrictions on other APIs (delete, scan, etc.), but it's something we'll consider in the next round of docs refactoring. I know we do call it out on the main documentation pages though.
Hi @steviecoaster ,
The pgutil security features are brand-new API endpoints and we haven't had a chance to document them yet. Like other new API endpoints that make ProGet easier to manage, they are only available in paid editions.
Our product management philosophy is that core functionality (curating open-source packages from public repositories, centrally managing packages and containers) is available in the free edition, while other features are in paid editions.
Thanks,
Alana
Hi @udi-moshe_0021,
Thanks for clarifying.
In that case, someone may have disabled package caching on the feed, or the package was deleted using the "clear cache" button, a retention policy, or manually. That is the most likely scenario here.
There may have also been an error adding the package to the feed (file system error, etc), although those are very rare and would have been logged under Admin > Diagnostic Center.
Thanks,
Alana
Hi @udi-moshe_0021,
I'm afraid we don't have enough information to help troubleshoot, but I'll try to explain how ProGet works behind the scenes.
If you have package caching enabled (the default), then packages downloaded through ProGet will be automatically cached. They will still show up with the "remote" icon, but they will be stored on the ProGet server. You'll be able to tell that they're cached (instead of remote) because there will be a "delete cached package" option.
You can verify this behavior by simply downloading a package from the ProGet UI. It will cache the package. The same link/url is used by the npm client to download a package file.
So why is a package not cached then? The most likely case is that the npm client already had it internally cached, and thus it was never requested from the server. You may also have a scenario where cached packages are deleted with a retention policy.
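If you want to rule out npm's local cache, something like this should force a fresh download through ProGet (the package name and feed URL below are just placeholders):
# clear npm's local package cache so the next install must come from the registry
npm cache clean --force
# install against the ProGet feed to force the request through the proxy
npm install my-package --registry=https://proget.example.com/npm/MyFeed/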
Thanks,
Alana
Based on the stack trace, there's something wrong with the data being returned via the API from the remote connector. Specifically, one or more of the version objects being returned is missing a name or version property.
Based on the URL, the invalid data is at {repository-root-url}/browser-sync. Here's what it's supposed to look like, based on the public repository at least:
https://registry.npmjs.org/browser-sync
Thanks,
Alana
@pmsensi sure thing!
2025.16-rc.1 should be available now.
Here's how to download it:
https://docs.inedo.com/docs/installation/windows/inedo-hub/howto-install-prerelease-product-versions
Hi @pmsensi
Are you able to easily test a pre-release container? I wasn't sure if you're on Linux/Docker, but I just made a code change (PG-3164) that should show up soon as inedo/proget:25.0.16-ci.1
Let me know if you're able to try that -- if you're on Windows, I can push a package for the Inedo Hub to download as well.
Thanks,
Alana
Hi @parthu-reddy ,
This can occur in particular when there are issues with the database, such as outdated statistics or highly fragmented indexes. This script should help fix this:
https://proget.inedo.com/endpoints/Public/content/DefragmentIndexesWithRowCount.sql
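If it helps, here's roughly how you'd download and run it with sqlcmd (this assumes the default ProGet database name; adjust the server and database for your instance):
# download the script, then run it against the ProGet database
curl -O https://proget.inedo.com/endpoints/Public/content/DefragmentIndexesWithRowCount.sql
sqlcmd -S localhost -d ProGet -i DefragmentIndexesWithRowCount.sql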
It could be something else, but the query that page uses seems to be most affected by those.
Thanks,
Alana
Hi @parthu-reddy ,
This error means that PowerShellGallery.com is taking too long to respond to that query. You can try increasing the connector timeout; it's 10 seconds by default. Maybe try 20 or 30 seconds? Just a guess.
Unfortunately the PowerShellGallery seems to be in a state of abandonment these days and it performs really poorly. It's pretty buggy too.
I would consider using a multi-feed, package-approval process to pull packages so you don't have to rely on the gallery's API.
Cheers,
Alana
Hi @jw,
Unfortunately there's no easy way to guess which name is "correct", so sometimes the "wrong" name gets de-duplicated. This also should have no real side-effect, except perhaps seeing the "wrong" casing in some places.
However, as you noticed, the name is overwritten when a package is added to a feed. So, if jquery is the package name stored in the database, that record will be updated to jQuery upon upload of a package.
This doesn't seem to impact many packages at all.
Thanks,
Alana
Hi @pmsensi,
Nice find. So we can't make it a configurable value, but we can try finding something that works with pypi.org, which is the main requirement.
According to the example posted in their docs, this is how a Simple API client might construct its Accept header:
# Construct our list of acceptable content types, we want to prefer
# that we get a v1 response serialized using JSON, however we also
# can support a v1 response serialized using HTML. For compatibility
# we also request text/html, but we prefer it least of all since we
# don't know if it's actually a Simple API response, or just some
# random HTML page that we've gotten due to a misconfiguration.
CONTENT_TYPES = [
"application/vnd.pypi.simple.v1+json",
"application/vnd.pypi.simple.v1+html;q=0.2",
"text/html;q=0.01", # For legacy compatibility
]
ACCEPT = ", ".join(CONTENT_TYPES)
So I guess, could you try that header?
application/vnd.pypi.simple.v1+json, application/vnd.pypi.simple.v1+html;q=0.2, text/html;q=0.01
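As a quick sanity check outside of ProGet, you can see how pypi.org responds to that header with curl (the requests project is just an arbitrary example):
curl -H "Accept: application/vnd.pypi.simple.v1+json, application/vnd.pypi.simple.v1+html;q=0.2, text/html;q=0.01" https://pypi.org/simple/requests/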
Thanks,
Alana
Hi @jw ,
Thanks for confirming that; we were able to identify the bug -- this time it was SQL Server-specific.
This is fixed via PG-3163, which we're shipping in this week's maintenance release. You'll still need to run the de-duplication afterward, however.
Thanks,
Alana
Hi @andreas_9392 ,
That configuration is not supported and will not work; you'll need to configure https://proget.mycompany.com/ or use a port.
Thanks,
Alana
@wechselberg-nisboerge_3629 great news, thanks! Well it'll be in the upcoming release (2025.13) in that case :)
@wechselberg-nisboerge_3629 can you check it out again? Should be there now :)
Hi @wechselberg-nisboerge_3629 ,
Thanks for sharing that; sadly, I'm still at a total loss here.
But I did make a change that I think should work, or at least give us a different error.... can you try upgrading to inedo/proget:25.0.14-ci.7?
The change is in that build. Of course, you can easily downgrade later.
Thanks,
Alana
Hi @Anthony ,
When you use SHCall, it's translated into a remote SSH command that includes all arguments inline on the shell. Basically something like ssh user@host bash -c '...'
However, there is an OS-enforced limit on how long this command can be, typically between ~32K and ~64K characters. It looks like you're hitting that limit exactly, and you may be able to see it with getconf ARG_MAX. Note that you would also get this error if you ran ssh user@host bash -c 'echo "Really long....."'.
So bottom line -- this is an OS/SSH limit. To work around it, you can just write out $arg to a file and have your script read in that file.
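A rough shell sketch of the idea (the file path is just an example):
# check the OS-enforced limit on the target server
getconf ARG_MAX
# write the large value to a file instead of passing it inline on the command line...
printf '%s' "$arg" > /tmp/myjob-arg.txt
# ...then have the remote script read it back in
arg=$(cat /tmp/myjob-arg.txt)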
Thanks,
Alana
Thanks @wechselberg-nisboerge_3629, exactly what I was looking for!
Can you provide some more information about this image? Basically I'm trying to find the layer / mediatype / size. I believe these commands will do it:
docker image inspect vl-dev-repo.ki.lan/sst-coco-oci-prod/sub-coco-cli:test --format '{{json .RootFS.Layers}}' | jq .
docker history vl-dev-repo.ki.lan/sst-coco-oci-prod/sub-coco-cli:test
Thanks,
Alana
Hi @koksime-yap_5909 ,
[1] I would use localhost to reduce network traffic; a lot of the time, "loopback" connections are handled in software and never make it to the network hardware
[2] Yes, multiple copies of the package are stored
In general, you should use data de-duplication anyway. Even without self-connectors, we've seen a 90% reduction in space due to the nature of what's stored (i.e. nearly identical versions).
Thanks,
Alana
@wechselberg-nisboerge_3629 the main thing I'm looking for is the HTTP access logs - we only have 1.5 entries so far (PATCH-finish, PUT-start, PUT-finish), so seeing more would be really helpful.
What's odd is seeing the "retrying..."
@wechselberg-nisboerge_3629 thanks for confirming!
Any chance you can get more entries from the container log? It'd be really helpful to see more requests going back/forth. This is just such a strange behavior given the seeming simplicity of your image.
Also, there should be an option in ProGet 2025.12 to enable web logging (Admin > HTTPS Logging); it's a brand-new feature, but it writes logs to a log file.
Hi @mayorovp_3701 ,
Thanks for confirming; we will try to get this fixed in the next maintenance release via PG-3139 -- the underlying issue is most likely a race condition on that trigger I mentioned, so we're going to fix it by adding an advisory lock.
It will not remove the duplicate content, but in theory deleting the image will.
Thanks,
Alana
Hi @koksime-yap_5909 ,
I'm afraid this is a known limitation with Maven feeds; we made the assumption that package authors would follow the bare-minimum of Maven versioning: packages start with letters, versions start with numbers.
The only examples we found that were counter to that were 20+ year old artifacts, however, we've since learned that authors still mistakenly use these incorrect versions.
Unfortunately, supporting these types of versions requires a complex/risky change.
Maven is a file-based API and the client just GETs/PUTs files. However, ProGet is not a file server, so we need to actually parse the URLs to figure out which artifact/package the files refer to. In this case, we parse package-alpha-0.1 as package-alpha (version 0.1), not package (version alpha-0.1). Hence why it's not working.
If these are your internal packages, the easiest solution is to follow the standard:
https://docs.oracle.com/middleware/1212/core/MAVEN/maven_version.htm
Thanks,
Alana
Hi @wechselberg-nisboerge_3629 ,
This is definitely a strange error; are you using PostgreSQL by chance?
I'm seeing 53babe930602: Retrying... a few times. Is this consistently happening with this layer? Is there anything special about it (big, small, etc)?
Thanks,
Alana
@Sigve-opedal_6476 great news! Thanks for the update
Actually, the version you're using (2025.8) has a known regression relating to these PackageIds that we already fixed via PG-3097. I didn't realize it until you told me the version and I found that issue.
Anyway, please try the latest version of ProGet 2025; it should work.
Thanks,
Alana
Hi @parthu-reddy ,
I'm not sure what version you're using, but the latest version has a checkbox on the reindex function to delete duplicate ids/names when running the reindex job.
I would try that -- note you may have to run it twice. This issue is fairly complicated, and it's hard to fix without working against exports/backups of user databases.
Thanks,
Alana
Hi @d-mischke_3966 ,
I don't really know... based on the URL it does look like V1 to me, but the response looks like the V2 API (it's an OData/RSS style). I don't know what V1 looks like; it's before all of our times :)
Anyway, I would ask them to investigate the issue. V2 has been deprecated for over 5 years now, so we don't want to make changes, especially to work around a "bad" third-party feed.
Thanks,
Alana
Hi @jw ,
Whoops, sorry, I keep forgetting - it shows up differently on our end. Let me know if you'd like me to update your email on the forums so you can log in with your company account. It's fine either way, but we might forget again -- it shows up as free/community user on our dashboard.
Anyway, in that case, sure, we can prioritize this for you!
I guess in the end you guys need to sort out the question of whether you want to support partial updates.
Since you're the first user who's requested this... we'll go with what you suggested. That makes sense to me. I just made this (PG-3137) since it was trivial:
curl -X POST -H "Content-Type: application/json" -d "{\"id\": 1, \"code\": \"0BSD\", \"title\": \"Zero BSD\"}" http://localhost:8624/api/licenses/update
curl -X POST -H "Content-Type: application/json" -d "{\"id\": 1, \"code\": \"0BSD-X\", \"title\": \"XXXZero BSD\"}" http://localhost:8624/api/licenses/update
curl -X POST -H "Content-Type: application/json" -d "{\"id\": 1, \"code\": \"0BSD\", \"title\": \"Zero BSD\", \"spdx\": [\"0BSD\", \"0BSD1\"]}" http://localhost:8624/api/licenses/update
curl -X POST -H "Content-Type: application/json" -d "{\"id\": 1, \"code\": \"0BSD\", \"title\": \"Zero BSD\", \"spdx\": []}" http://localhost:8624/api/licenses/update
The hardest part, by far, was getting the curl commands figured out.
Thanks,
Alana
Hi @d-mischke_3966,
When you say a "v1 NuGet feed", do you really mean the ancient version (i.e. used from 2010 to 2015)?
I don't think ProGet has ever supported that if so.
Ultimately it sounds like there is a problem with the API though -- 01.01.0001 01:00:00 is not a valid date.
Thanks,
Alana
This one is tricky and I'm afraid I cannot reproduce this error either.... but I think I can see how it's possible.
First off, it's definitely possible for two processes to attempt to add the same manifest to the feed at the same time, and I might expect that during a parallel run like you describe.
However, there is a data-integrity constraint (trigger) designed to prevent this from happening. This trigger should yield an "Image_Digest must be unique across the containing feed" error.
But, looking at the trigger code for PostgreSQL, it looks like we aren't using an advisory lock to completely prevent a race condition.
Can you confirm that you're using PostgreSQL?
Thanks,
Alana
Hi @frei_zs ,
We are currently working on PG-3110 to add support for "v4 signatures" and intend to release that soon (along with better support for public repositories) in an upcoming maintenance release.
Unfortunately it's not trivial, as the underlying cryptography library (Bouncy Castle) does not support it, so we have to reimplement signing -- the good news is that it seems to work so far, and is much faster.
Thanks,
Alana
Hi @fabrice-mejean ,
It's no longer possible to query this information from the database.
As you've noticed, ProGet now uses a version range (i.e. AffectedVersions_Text) to determine whether a package is vulnerable or not. So instead of 4.2.3, it's now 4.2.3-4.2.8 or [4.2.*) or something like that.
Unfortunately it's not practical/feasible to parse this information unless you were to rewrite a substantial amount of ecosystem-specific parsing logic - this would be basically impossible to do in a SQL query.
Instead, you'll need to use the upcoming pgutil packages metadata command to see what vulnerabilities a particular package has. You can also use Notifiers to address newly discovered vulnerabilities.
Thanks,
Alana
Hi @koksime-yap_5909 ,
Not necessarily - it really depends on the API query.
If the query is like "give me a list of all versions of MyPackage", then ProGet will need to aggregate local packages and connector packages to produce that list.
If the query is "give me the metadata for MyPackage-1.3.1", then the first source that returns a result is used.
In practice, NuGet asks for "all versions" a lot. So you'll get a lot of queries.
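To make that concrete, here's roughly what those two query shapes look like against nuget.org's v3 API (ProGet feeds expose equivalent endpoints; newtonsoft.json is just an example):
# "all versions" style query - ProGet must merge local and connector results
curl https://api.nuget.org/v3-flatcontainer/newtonsoft.json/index.json
# specific-version metadata - the first source that returns a result wins
curl https://api.nuget.org/v3/registration5-semver1/newtonsoft.json/13.0.3.json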
Thanks,
Alana
@james-woods_8996 we haven't released the extension yet, so it won't be in any builds of the product. We are waiting on someone to test/verify that it works
Hi @thomas_3037 ,
The specific issue was already fixed; I'm going to close it because whatever you're experiencing is different, and the "history" in this long, winding thread will only make it harder to understand.
Please "start from scratch" as if you never read this.
Thanks,
Alana
They are ordered by name - so if order is important to you, I guess you could always do 10-nuget.org, 20-abc.net, or whatever.
Thanks,
Alana
Hi @Stholm ,
I assume you saw the Terraform Modules documentation in ProGet?
While updating the Support for Terraform Backends to link to this discussion, I noticed we had some internal notes. So I'll transfer them here:
This would require implementing both the Provider Registry Protocol (for first-party plugins) and the Provider Network Mirror Protocol (for connectors). Both seem relatively simple, though there appear to be some complexities involving signature files.
In either case, we ought not to package these because they are quite large. For example, the hashicorp/aws provider for Windows is just a zip file with a single 628MB .exe. They also have no metadata whatsoever that's returned from the API. One option is just to store these as manifest-less packages. For example, hashicorp/aws packages could be pkg:tfprovider/hashicorp@5.75.0?os=windows&arch=amd64. This would be two purls in one feed, which might not work, so it might require a new feed.
Don't ask me what that all means; I'm just the copy/paster.
But based on my read of that, it sounds like a big effort (i.e. a new feed type) to try to fit a round peg in a square hole. And honestly your homebuilt solution might work better.
I think we'd need to see how much of a demand there is in the offline/air-gapped Terraform userbase for this. But feel free to add more thoughts as you have them.
Thanks,
Alana
Hi @james-woods_8996 ,
ProGet uses the AWS SDK for .NET, so I can't imagine environment variables would have any impact. I have no idea what those mean or do, but there's probably a way to configure those in the SDK.
That said, another user is currently testing a change for Oracle Cloud Infrastructure, which also seems to be giving some kind of hash-related error.
Perhaps it'll work for you? AWS v3.1.4-RC.1 is published to our prerelease feed. You can download and manually install, or update to use the prerelease feed:
https://docs.inedo.com/docs/proget/administration/extensions#manual-installation
After installing the extension, the "Disable Payload Signing" option will show up on the advanced tab - and that property will be forwarded to the Put Request. In theory that will work, at least according to the one post from above.
One other thing to test would be uploading Assets (i.e. creating an Asset Directory) via the Web UI. That is the easiest way to do multi-part upload testing.
If it doesn't work, then we can try to research how else to change the extension to get it working.
Thanks,
Alana
Can you clarify what you've tried to date, and the issues you've faced?
You can silently install the Inedo Agent, or even use a Manual Installation process if you'd prefer.
Ultimately it's a standard Windows service, and you can change the account from LOCAL SYSTEM (which we recommend, by the way) to another account using sc.exe or other tools.
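For example, with sc.exe (the service name and account below are placeholders; check the actual service name in services.msc first):
# note: sc.exe requires the space after obj= and password=
sc.exe config InedoAgentSvc obj= "MYDOMAIN\svc-agent" password= "MyPassword"
sc.exe stop InedoAgentSvc
sc.exe start InedoAgentSvc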
Thanks,
Alana
Hi @jw ,
Technically it's doable, though it's not trivial due to the number of places the change would need to be made and tested... ProGet API, pgutil, docs.
The code/title change itself looks trivial (i.e. just pass in External_Id and Title_Text to the call to Licenses_UpdateLicenseData), though I'm not totally clear what to do about the other one. What does pgutil send in? Null? []? Etc.
As a free/community user, this isn't all that easy to prioritize... but if you could do the heavy lifting on the docs and pgutil (i.e. submit a PR), and give us a script or set of pgutil commands that we can just run against a local instance... I'm like 95% sure I can make the API change in 5 minutes.
Thanks,
Alana
Hi @jlarionov_2030 , easy change! We will not require a valid license for Settings API Endpoint going forward; PG-3133 will ship in next maintenance release, scheduled for Friday.
Hi @tobe-burce_3659 ,
We do not support deleting or modifying any of the contents under the program directory; they will simply return when you upgrade and it may cause other problems.
Instead, please create an exception; we are aware of the vulnerabilities in libraries that PostgreSQL uses and can assure you that they are false positives and will have no impact on ProGet... even if you were using PostgreSQL.
Using virus/malware tools to scan/monitor ProGet's operation causes lots of problems, as these tools interfere with file operations and cause big headaches.
Thanks,
Alana
Hi @Sigve-opedal_6476 ,
No idea why it wouldn't work, but I would look at something like ProxyMan or Wireshark to capture HTTP traffic, and see what requests are different.
You should see a pattern of requests that work, and a pattern that doesn't.
Maybe the client is requesting some other file that you aren't uploading? I don't think there's an API or HTTP header... I think it's all basic GET requests. But that will tell you the delta.
Thanks,
Alana
Hi @fabrice-mejean ,
I definitely understand where you're coming from.... both commands basically work off the assets file, which is generated at build time.
But your workflow is not common... the standard for SBOM generation is post-build. Doing the check pre-build requires that packages.lock.json is used, which not many projects do... it's hard for us to advocate this workflow when most users don't care about saving time at this stage.
I know we could add a "switch" or something to pgutil, but we learned "the hard way" that adding lots of complex alternative/branching paths to pgscan made for very difficult to maintain/understand code, so we want to keep the utility as simple as possible.
Thanks,
Alana
Hi @pmsensi,
Correct -- it'll be whatever data is on the "Dependencies" tab in ProGet, which is basically whatever is in the manifest file (.nuspec, etc).
Thanks,
Alana
Hi @fabrice-mejean @pmsensi ,
We've got this spec'd out and on the roadmap now as PG-3126! It'll come through a maintenance release, along with pgutil security commands for configuring users, groups, and tasks.
The target is 2025.13, which is planned for October 24. I don't know if we'll hit that target, but that's what we're aiming for.
Please check out the specs on PG-3126; I think it captures what you're looking for, which is basically an expanded metadata object that includes compliance data, detected licenses, and vulnerabilities.
Thanks,
Alana
Hi @fabrice-mejean,
Using packages.lock.json seemed to make the most sense to us too, but ultimately we decided not to use it for a few reasons.
First and foremost, none of the other .NET SBOM-generators seemed to use the packages.lock.json file. That's usually a sign that there's a "good reason" for us not to either.
From our perspective, pgutil builds scan is intended to be used in a CI environment, where dotnet build is run anyway and the assets file is already present. We don't have a use-case for an alternative workflow, where a build is not actually run.
In addition, packages.lock.json files are still pretty niche and not widely used. You have to "go out of your way" to use them, and <PackageReference ... /> is by far the most common approach. It might be worth monitoring Issue #658 at CycloneDX/cyclonedx-dotnet to see if anyone picks it up there.
Technically it's not all that complex to do, but it adds complexity and confusion... especially since most users will not be familiar with the differences between the lock and asset file. So it's not a good fit for pgutil builds scan.
HOWEVER, you could probably ask ChatGPT to write a trivial PowerShell script that "transforms" a lock file into a minimal SBOM document, and tweak it for what you want in ProGet. That same script could just upload the file to ProGet, or use pgutil as well.
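For instance, here's an untested jq sketch of the same idea; the field names assume the standard packages.lock.json layout, and the output is a bare-bones CycloneDX document:
# build a minimal CycloneDX BOM from the resolved packages in the lock file
jq '{ bomFormat: "CycloneDX", specVersion: "1.4", version: 1,
      components: ([ .dependencies[] | to_entries[]
        | select(.value.resolved != null)
        | { type: "library", name: .key, version: .value.resolved,
            purl: ("pkg:nuget/" + .key + "@" + .value.resolved) } ]
        | unique_by(.purl)) }' packages.lock.json > bom.json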
Thanks,
Alana
Hi @layfield_8963 ,
Thanks, that makes sense -- since you've already got a personal repo going, I think it makes sense to stick with that for now. If other users are interested, we can explore it.
We publish pgutil pretty regularly, so we'd need to automate updating that repository, and that's just one more script to write and one more thing to break later :)
Thanks,
Alana