Hi @scusson_9923 ,
Looks like this is a bug in not overriding the job/execution status; the force normal statement should make it "green" and a normal execution status. Anyway we'll get it fixed via OT-524.
Cheers,
Steve
Hi @adoran_4131 ,
It looks like the 404 error is occurring while trying to download the Release file (i.e. the index) for the repository. The file is being downloaded from this URL:
{connector-url}/dists/{distro}/Release
And that URL is returning a 404. So make sure you are entering the correct distro in the connector.
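For example (using made-up values), you can check what the upstream actually publishes:

curl -I https://deb.example.com/debian/dists/bookworm/Release

If that returns 404 for your connector URL and distro, the distro name doesn't match what the remote repository publishes under dists/.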
Thanks,
Steve
The "Signature ... was created after the --not-after date" message is coming from sqv (Sequoia-PGP verifier), which newer versions of APT use for signature verification.
It almost always indicates a system clock problem on the affected machine, not a repository problem, and often means "The system clock is behind the signature creation time."
So bottom line, I would check the clocks to make sure they are accurate.
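For example, on a systemd-based machine, a quick way to check and fix this:

timedatectl status              # compare "Universal time" against an accurate clock
sudo timedatectl set-ntp true   # enable NTP sync if it's off

(These are just the common case; any method of correcting the clock will do.)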
Thanks,
Steve
Hi @scusson_9923 ,
The message is expected, but you should see scriptExists: false written at the end, and a Normal status (i.e. green) for the execution.
Is that not the case?
Thanks,
Steve
Hi @scusson_9923 ,
Sorry for the slow reply; I wanted to test this but didn't get a chance, so I figured I'd just share it now (it should work):
set $scriptExists = true;

# attempt to download the asset; Get-Asset will raise an error if it doesn't exist
try
{
    Get-Asset FooBar.ps1
    (
        Overwrite: true,
        Type: Script,
        To: D:\temp\FooBar.ps1
    );
}
catch
{
    set $scriptExists = false;
    force normal;   # override the error so the execution stays green
}

Log-Information scriptExists: $scriptExists;
Cheers,
Steve
Hi @v-makkenze_6348 ,
This is a regression introduced by ProGet 2025.20's changes to malicious package handling. It's not intentional, and only the specific versions should be blocked (8.10.1, 9.1.1, 10.1.6, 10.1.7).
We'll get it fixed via PG-3227 in the next maintenance release (scheduled for this Friday, but we may do a pre-release sooner). For now, your best bet is to roll back to ProGet 2025.19.
Thanks,
Steve
Hi @daniel-pardo_5658 ,
This behavior is expected; the UI is meant for creating basic, case-insensitive archives.
As for the permissions... file metadata (including owner, execute permissions, etc.) is stored within the filesystem (or as metadata in a zip file)... so once you transmit a file, that information is irrevocably lost.
Best to upload a package file.
Cheers,
Steve
Hi @daniel-pardo_5658 ,
Thanks for the suggestion; Universal Packages already support tags in the package manifest file: https://docs.inedo.com/docs/proget/feeds/universal/universal-packages#manifest
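For example, a upack.json manifest with tags might look something like this (the group/name/version are made up; see the linked docs for the full property list):

{
  "group": "acme/utils",
  "name": "MyTool",
  "version": "1.0.0",
  "tags": ["internal", "cli"]
}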
Otherwise, if you're referring to "tagging" a package already added to a feed - that's a hard pass :)
The reason is that a package is designed to be self-contained (i.e. all the metadata about the package is stored within the package) and cryptographically sealed (i.e. so you can't edit/mutate a package). Tags break these, as they apply semantic metadata outside of the package.
These have caused big issues in ecosystems that have tried them (like npm) -- but long story short, there's a good reason they don't exist, and there's most certainly a better way to accomplish what you're trying to do :)
Cheers,
Steve
Hi @geraldizo_0690,
Thanks for the pointers -- as an FYI, these settings would have to be a feed-level setting, but the drop-downs would be the same.
Here's the code we use to generate the Release file -- I'm not sure what those other header values do, but we'd probably just want to add the two you suggested.
What do you think?
I suspect this will be a quick, opt-in change!
private void WriteReleaseFile(Stream output)
{
    using var writer = new StreamWriter(output, InedoLib.UTF8Encoding, leaveOpen: true) { NewLine = "\n" };
    writer.WriteLine($"Suite: {this.Distro}");
    writer.WriteLine($"Codename: {this.Distro}");
    writer.WriteLine(FormattableString.Invariant($"Date: {this.Generated:ddd', 'dd' 'MMM' 'yyyy' 'HH':'mm':'ss' UTC'}"));
    // NotAutomatic: yes <-- add here
    // ButAutomaticUpgrades: yes <-- add here
    writer.WriteLine($"Architectures: {string.Join(' ', this.indexes.Select(i => i.Architecture).Distinct(StringComparer.OrdinalIgnoreCase))}");
    writer.WriteLine($"Components: {string.Join(' ', this.indexes.Select(i => i.Component).Distinct(StringComparer.OrdinalIgnoreCase))}");

    var desc = FeedCache.GetFeed(this.feedId)?.Feed_Description;
    if (!string.IsNullOrWhiteSpace(desc))
        writer.WriteLine($"Description: {desc.ReplaceLineEndings(" ")}");

    writeHashes("MD5Sum:", i => i.MD5);
    writeHashes("SHA1:", i => i.SHA1);
    writeHashes("SHA256:", i => i.SHA256);
    writeHashes("SHA512:", i => i.SHA512);

    void writeHashes(string name, Func<IndexHashData, byte[]> getHash)
    {
        writer.WriteLine(name);
        foreach (var i in this.indexes)
        {
            writer.WriteLine($" {Convert.ToHexString(getHash(i.Uncompressed)).ToLowerInvariant()} {i.Uncompressed.Length,16} {i.Component}/binary-{i.Architecture}/Packages");
            writer.WriteLine($" {Convert.ToHexString(getHash(i.GZip)).ToLowerInvariant()} {i.GZip.Length,16} {i.Component}/binary-{i.Architecture}/Packages.gz");
        }
    }
}
It sounds like you're on the right track with troubleshooting; the issue is definitely on the server side in this case, so I asked ChatGPT. Who knows if any of this is accurate, but...
This is a very common situation with older versions of Bitbucket Server (especially pre-6.x / pre-7.x era, but even up to some 7.x versions in certain setups).
The REST API (e.g. /rest/api/1.0/...) and the Git Smart HTTP protocol (/scm/.../info/refs, /git-upload-pack, etc.) are handled by different authentication filters in Bitbucket Server. Most likely you're using a Personal Access Token / HTTP Access Token (the most frequent cause in older versions). In many Bitbucket Server versions (especially ≤ 7.17–7.21), HTTP access tokens were designed mainly for the REST API and did not work reliably (or at all) for Git over HTTPS in many cases.
As a workaround, you need to use a real username + password (or username + app password if 2FA is on) for Git operations.
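For example (host and credentials are placeholders), you can test the Git smart HTTP endpoint directly, outside of ProGet:

curl -u builduser:password "https://bitbucket.corp/scm/PROJ/repo.git/info/refs?service=git-upload-pack"

If a token gets a 401 here while a real password works, that would confirm the theory above.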
We've seen similar in really old versions of ADO, GitHub, etc., where API tokens wouldn't work for Git.
Anyway, I would try that -- at least from the curl side of things. And maybe upgrading will help as well. If it works, then you'll likely only be able to use a Generic Git repository with a real username/password -- and just create a special builds user which effectively acts like an API key.
Cheers,
Steve
Hi @Stephen-Schaff,
It seems pretty easy to add these up and display them on the screen! I suppose the "hard part" is the UI...
A "Total" line doesn't seem to look right. And it seems like too little information to put in one of those info-boxes. "Total Size: XXXX MB" at the bottom just looks incomplete.
Any suggestions? I'm struggling a bit to see how it could be displayed without looking a little out of place... and since it was your idea I figured I'd ask ;)
Thanks,
Steve
Hi @geraldizo_0690,
I'm not all that familiar with Debian/APT... but I briefly researched this, and it seems to involve adding values like these at the top of the Release file:
NotAutomatic: yes
ButAutomaticUpgrades: yes
Is that really it? And this setting would impact the entire feed... but have no real relation/impact to connectors or packages?
If that's the case, how would you envision configuring this? I'm thinking on the Feed Properties page, but perhaps as a checkbox? How do other products/tools do it in your experience?
Thanks,
Steve
Thanks for the feedback. Based on what you described, it sounds like...
You were able to confirm this with the "Generic Git Repository" also not working. If you were to do a curl -I -u USERNAME:APIKEY https://.../.git you would most certainly get a 401 response as well.
Anyway, that's where I would start -- try to figure out why the Git API is not accepting the credentials. It's most likely related to permissions on the key, but it's really hard to say... just a guess.
Thanks,
Steve
Hi @Julian-huebner_9077 ,
Here's some information on file storage paths:
https://docs.inedo.com/docs/proget/feeds/feed-overview/proget-feed-storage
Long story short, if you modify Storage.PackagesRootPath under Admin > Advanced Settings and move your files as needed, then it should work just fine.
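For example (paths are hypothetical), moving storage to D:\ProGetPackages on Windows might look like:

robocopy C:\ProgramData\ProGet\Packages D:\ProGetPackages /E

...then set Storage.PackagesRootPath to D:\ProGetPackages under Admin > Advanced Settings and restart the service.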
Thanks,
Steve
Hi @kquinn_2909 ,
We haven't forgotten about this; the issue is trying to figure out steps to reproduce it based on the information we have... considering it works on our test instance and all. We may consider putting some more debugging code in, though figuring out how to expose that in this context is a little challenging.
Just as a sanity check though, do you have a project that doesn't have a "space" in the name? I want to make sure this isn't something as simple as WebProjects%20Replicator vs WebProjects_Replicator.
The other idea is authentication/authorization, though I would imagine you would get an error accessing the project instead of no builds.
Thanks,
Steve
"if the resolved version that npm i underscore chose was released in the blocking period, the npm command would 400?"
If you have "Block Noncompliant Packages" enabled (which we generally don't recommend) and you have a rule that new packages are complaint, then the npm command would most certainly give some kind error.
You will probably see a 400 code, but I don't think it will display the message that's sent by ProGet (i.e. "package blocked due to...")? The real issue comes with a large dependency tree, and it'll be hard to know what exactly the issue is.
As such, we recommend running pgutil builds scan/audit in your CI/CD pipelines instead of blocking. This will produce a much easier-to-understand report and even allow you to bypass reported issues on a case-by-case basis.
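For example, a CI step might look something like this (the project name/version are made up, and the exact flags may vary by pgutil version -- check its built-in help):

pgutil builds scan --project=MyWebApp --version=1.4.2
pgutil builds audit --project=MyWebApp --version=1.4.2

A failed audit lists the noncompliant packages by name, instead of a cryptic 400 buried in a restore step.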
Thanks,
Steve
Hi @sigurd-hansen_7559 ,
Thanks for pointing me to the repository; I was able to reproduce this, and it will be fixed via PG-3194 in the next maintenance release (scheduled for Friday).
Thanks,
Steve
Hi @toseb82171_2602,
In Docker, images must have a namespace. When they don't, the Docker client will transparently prepend library/ to the image name. In general, this behavior is not desirable, and it's recommended to use library/python explicitly or, in a ProGet context, myproget.corp/mydockerfeed/library/python.
That said, there is a setting on the connector that may help resolve such images, but they can be problematic even with that.
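For example (the ProGet hostname/feed are placeholders):

docker pull python
# ...is silently rewritten by the client to:
docker pull library/python
# ...whereas this is explicit and unambiguous:
docker pull myproget.corp/mydockerfeed/library/python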
As for extending the trial, no problem - you can actually do this yourself on my.inedo.com, on the day of expiry. Of course, please contact us if you run into any issues or have licensing questions.
Thanks,
Steve
Hi @nachtmahr,
Yes, that's what I would recommend doing -- using the internal storage path. Just make sure NOT to select the option to delete the files, and make sure to select "search subdirectories" so everything will be imported.
Cheers,
Steve
Hi @nachtmahr ,
There is no "clone" method per se, but you can accomplish this by bulk-importing packages into a new feed: https://docs.inedo.com/docs/proget/feeds/feed-overview/proget-bulk-import
Cheers,
Steve
Hi @tim-vanryzin_8423 ,
Great question on developer experience! We're very curious to learn that ourselves, so please let us know as you implement it.
First and foremost, if you haven't already, check out the Recently Published & Aged Packages rules blog article to see how this works and our current advice. FYI - we are likely going to change the best practices guidance in 2026 to discourage download blocking.
From an API/technical standpoint, it's simply not possible to "hide" the fact that 1.12.15 is the latest version. So, if you have a connector, ProGet will report the latest version as reported by the connector.
But even if it were technically possible, there's simply no great developer experience here. Keep in mind that most developers never look at the ProGet UI -- they configure things once and forget about it.
So really, it's just a question of when you want the developer to find out they can't use Xyz-1.12.15. Here are the general options:
- Run pgutil builds audit on your build server, and you can see if the packages are noncompliant; this is where we are shifting our advice, as a failed build at that stage will be so much more obvious than a 400 buried in a package restore step.
Ultimately this requires training developers to use lock files and not always get latest. That's why we are shifting to pgutil builds audit -- it's almost self-training. When their builds fail, they will see the reason clearly and should be able to adjust their code/configuration to not use a noncompliant version.
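For example, this is what lock-file-based installs look like in practice:

npm ci         # installs exactly what package-lock.json pins; no surprise versions
npm install    # re-resolves semver ranges and may pull in a newer, noncompliant version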
-- Dean
Thanks for clarifying! I'll be honest, I had no idea where you were getting configuration files in the first place... and I forgot that was ever a thing. Sorry about that.
Anyway, I don't think this has worked in years, and it's most definitely not something we'd recommend today. We'll try to track it down and remove it from the docs/help text.
For your use case (automated installation) just use pgutil settings to set that value.
Thanks,
Steve
Hi @henderkes,
I'm not familiar with module streams, but if you tried it in ProGet and it didn't work then likely not? It sounds like a different API/endpoint, but I haven't researched it at all.
However, based on how you described it ("removed from Fedora 39"), it sounds like one of those "good but old" technologies that don't make sense for us to implement.
Thanks,
Alana
Hi @pmsensi ,
You'd need to run through a proxy like Fiddler or ProxyMan, which can capture the outbound / inbound traffic. Or if you provide us with a URL/API key we might be able to play around and attach a debugger. You could always create a free trial or something and set it up that way.
First thing that comes to mind is that your endpoint is incorrect and you may need to "play around" with the URL. Typically it's like this:
- Your endpoint will be like https://server.corp/path/my-pypi
- ProGet will request https://server.corp/path/my-pypi/simple (i.e. /simple is appended) unless you specify otherwise on Advanced
- The "Simple" endpoint is just a list of HTML like <a href="...">; that's what ProGet/connectors/pypi follow.
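To sanity-check the endpoint outside of ProGet, you can point pip directly at the simple index (URL carried over from the example above):

pip install --index-url https://server.corp/path/my-pypi/simple/ some-package

Or just open that /simple/ URL in a browser; you should see the HTML link list.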
Thanks,
Steve
Hi @ayatsenko_3635, @dubrsl_1715,
Thanks for the feedback and continued discussion.
Our Composer feed is relatively new, so we are open to exploring new ideas. One option might be to do an "import" of packages into ProGet, similar to how we handle connectors in Terraform feeds. But that's something that could also be done with a script, so we'd want to see what that looks like as a prototype.
That said, we definitely can't implement "content pointers" to Git repositories. The Git-based "package" model is not only outdated but it has several "fatal" Software Supply Chain problems.
The biggest problem, by far, is that package content is hosted by a third party (i.e. GitHub) and managed by another third party (i.e. the repository owner). At any time, a package author or the host can simply delete the repository and cause a major downstream impact - like the infamous left-pad incident in npm.
This is why other package ecosystems have adopted "read-only" package repositories and disallow deletes (except for rare cases like abuse). Once you upload a package to npmjs, nuget, rubygems, etc. -- it's permanently there, and users can always rely on that being the case.
That's simply not possible with Git-based "packages". The "content pointer" must be periodically updated, such as when the author decides to move the Git repo to GitLab, Gitea, etc. Now you no longer have a read-only package repository, but one that must be editable. Who edits? Are they tracked? What edits are allowed? Etc.
There are several other issues, especially with private/organizational usage, like the fact that there's no reasonable way to QA/test packages (i.e. compared to using a pre-release/repackaging workflow) and that committing/publishing are coupled (i.e. you tag a commit to publish it). This makes governance impractical.
And that's not to mention the fact that there's no real way to "cache" or "privatize" third-party remote Git repositories. Unless, of course, you mirror them into your own Git server... which is technically challenging, especially if the author changes hosts.
We first investigated Composer feeds years ago, but they didn't have package files at the time -- only Git repository pointers. Like Rust/Cargo (which used to be Git-based, and still technically supports Git-based packages), we anticipate this same maturity coming to Packagist/Composer as well.
So while we certainly understand that this is a workflow shift, it's a natural evolution/maturation of package management. Likewise, it took decades for the development world to shift from "file shares" and legacy source control to "Git", but that's what everyone has standardized on.
That's the world ProGet operates in, so it might be a bit "too far ahead", or we might be misreading the tea leaves -- but a "package mindset" is a requirement for using ProGet. But hopefully we can make that easier, possibly by exploring "package imports" or something like that.
Cheers,
Steve
Hi @frank-benson_4606 ,
Thanks for clarifying, that makes sense.
I'm afraid that ProGet does not "crawl" the parent artifacts for metadata; we had considered it, but it's rather challenging to do from an engineering standpoint, difficult to present crawler errors, and fairly uncommon.
Thanks,
Steve
Hi @dubrsl_1715 ,
Thanks for the clear explanation; I'm afraid your best bet here is to "take the plunge" and adopt package management best practices. A key tenet being a "self-contained archive file with a manifest" that is read-only and "cryptographically sealed" (i.e. hashed).
A "pointer to a Git commit" seems convenient at first, but there's a good reason that many ecosystems (including Composer) have moved away from it -- ultimately it's just not scalable and doesn't handle team workflows very effectively.
This will likely involve updating your composer.lock files and also the way you publish packages. In general, your process for creating a new version of a PHP package should start with updating the composer.json file with a version number. As you get more mature, you can get into pre-release versioning and repackaging.
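For example, a minimal composer.json for a self-contained, versioned package might look like this (the name/version are made up):

{
  "name": "acme/billing-lib",
  "version": "1.4.0",
  "require": {
    "php": ">=8.1"
  }
}

From there, you'd publish an archive of that project to your feed rather than pointing consumers at a Git commit.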
Hope that helps,
Steve
Hi @frank-benson_4606,
I looked into this a bit closer now.
Looking at the commons-io-2.14.0.pom, there is no Licenses element specified. The pom should have that, and it'd be nice if the package authors added it; if you requested that via a pull request or issue on their GitHub, I'm sure they would. In any case, that's why it's not showing in ProGet.
This is why you see the unknown license detected, and that means you have to click "Assign License Type to Package" for ProGet to associate the package/license. I assume that you did that on 2.14.0, and selected Apache-2.0.
By default, that selection only applies to the specific version, and if you wanted it to apply to all versions of commons-io (including future ones not yet published) you'd need to click on the "Apply to all versions".
If you navigate to SCA > Licenses, and click on Apache-2.0, you can see the assignment to the package under the "Purls" tab. It would show: pkg:maven/commons-io/commons-io@2.14.0 for the version you selected.
You will need to either do this for all versions or decide if you want to add an entry to the Package Name tab (i.e. pkg:maven/commons-io/commons-io) under the Apache-2.0 license definition.
Thanks,
Steve
Hi @sneh-patel_0294 ,
What I mean is, in your browser, open multiple tabs -- one for /administration/cluster on each node in the cluster, bypassing the load balancer. All nodes should show "green" for that.
The one that shows "red" still has the wrong encryption key. Modify the encryption key, and restart the servicies, and reload the tab, and it should work fine.
Thanks,
Steve
@sneh-patel_0294 to restart the services, you can do so from the Inedo Hub or Windows Services (look for INEDOPROGETSVC and INEDOPROGETWEBSVC). If you're still using IIS, make sure to restart the app pool as well.
Hi @sneh-patel_0294 ,
This message means that the decryption keys across machines are different, which results in exactly the behavior you describe (403s, logouts):
https://docs.inedo.com/docs/installation/configuration-files
I know you mentioned you already checked, so there's likely a typo, miscopy, looking at the wrong folder, etc. Note that the path is %PROGRAMDATA%\Inedo\SharedConfig\ProGet.config, as opposed to C:\ProgramData\Inedo\... - on some machines, the program data folder is stored in a different location.
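For reference, the shared config is a small XML file along these lines (values here are placeholders); the EncryptionKey value is what must match exactly on every node:

<InedoAppConfig>
  <ConnectionString>...</ConnectionString>
  <EncryptionKey>0123456789abcdef0123456789abcdef</EncryptionKey>
</InedoAppConfig>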
I would also make sure to restart the service/web as well. To test, you can load that page on all nodes; you should not see "Encryption key decryption failure" when refreshing.
Hope that helps,
Steve
Hi @frank-benson_4606 ,
This appears to be a known issue (PG-3153) that causes certain URL-based licenses to not be detected; it will be fixed in 2025.15, releasing this Friday.
If you're using Docker, you can try upgrading to inedo/proget:25.0.15-ci.4, which should have that fix in it.
Thanks,
Steve
Hi @k-lis_1147,
Sorry for the slow reply; we didn't get a chance to investigate in the last release, but it was on the list for this week. That being said, it was an easy fix (we didn't anticipate it being a copy/paste fix) -- we'll get it in via PG-3160 in this week's maintenance release.
Thanks,
Steve
Hi @frank-benson_4606,
Whoops, it looks like that was kept in the documentation by mistake; I just removed it now.
We had planned that feature way back in ProGet 2023, but it was never implemented. That feature -- as well as some of the more advanced license compliance ideas we had -- has since left our roadmap due to a total lack of interest from end-users.
The main reason for the lack of interest is that pgutil builds audit lists all packages and licenses, and most users found that to be sufficient. They just hand that list to the legal team, who creates the amendment. So perhaps that will suffice in your use case as well.
Let us know if not, always open to hearing more.
Thanks,
Steve
Hi @frank-benson_4606,
ProGet's license detection generally requires that a package is cached or local to ProGet. When you visit the package page, a request is made to download the metadata from the remote connector, which is why you can see the license in that case.
Hope that helps to troubleshoot. That being said, a prerelease version of 2025.15 is available should you be interested.
Thanks,
Steve
That vulnerability is in our database as PGV-2228003, and it shows up when I view that package:

If you can provide more details about what you mean by "the report is telling us no vulnerabilities are detected", I can investigate further.
Thanks,
Steve
Hi @yaakov-smith_7984 ,
This behavior is expected and by design. "Deprecation" and "Unlisted" are server-side metadata (i.e. stored in the remote repository, not the package itself), and once a package is brought into a different server (i.e. ProGet), it's "disconnected" from the other server.
That being said, there is a feature in ProGet that can routinely "sync" this server-side metadata.
This feature obviously comes with some performance costs, though you'd really have to enable it to see if that has any impact on operation.
Another approach is to use a retention policy that deletes cached packages older than 90 days.
Thanks,
Steve
Hi @frei_zs,
Based on the fact that unpackaging/repackaging it works, there's definitely something "wrong" with the original package file.
Debian uses a tarfile format, and there are several "buggy" tarfile writers that don't get the format quite right. If I remember correctly, some ancient versions of dpkg wrote these files incorrectly. Some tarfile readers account for these errors while others (perhaps including ours) do not.
We may be able to run the file through a debugger and give more details, but if this is a one-off or rare circumstance, then I would just repackage it and not worry about it.
Thanks,
Steve
Hi @jw ,
Did you try this on a new instance, or did you discover this on your (older) instance?
This was a known issue through several versions of ProGet 2025, and it impacts mostly SCA as you noticed. However, the vuln updater has since been fixed, so it shouldn't be continuing.
The "feed reindex" function can also merge/fix these duplicate names. They should be detected during a "feed integrity" check, and show as a "warning".
Thanks,
Hi @koksime-yap_5909 ,
I'm afraid we can't provide much clearer guidance than that, as there are so many factors involved that make predicting performance basically impossible. For example, the feed types you're using, your CI server configuration, how often developers are rebuilding, etc.
The article you found is actually what we send users who experience symptoms of server overload, to help understand where it comes from and how to prevent it. As the article mentions, the biggest bottleneck is network traffic during peak usage - there's only so much that a single network card can handle, and scaling CPU/RAM doesn't really help.
This is where load-balancing comes in. The main downside is complexity/cost, which is why most customers start with a single instance. It can take quite a while for a tool like ProGet to be fully onboarded across teams, so performance problems likely won't happen at first.
Hope that helps, let us know if you have any other questions!
Thanks,
Steve
Hi @koksime-yap_5909 ,
Good catch; that is most definitely a bug. I just checked, and it's isolated to assets - packages and Docker images work as expected.
This will be fixed in the upcoming maintenance release via PG-3150; it's shipping Friday, but we can provide a pre-release if you're interested in testing earlier.
Thanks,
Steve
In the event that the artifact has not been downloaded (i.e. the last download date is "null"), the publish date will be considered instead. So if you set "90 days", an artifact that has never been downloaded will be deleted, at the earliest, 90 days from publication.
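In other words, the eligibility check works something like this (a simplified C# sketch, not ProGet's actual code):

// Illustrative sketch of the retention rule described above
static bool IsEligibleForDeletion(DateTime? lastDownloaded, DateTime published, int retentionDays, DateTime utcNow)
{
    // if the artifact was never downloaded, fall back to its publish date
    var referenceDate = lastDownloaded ?? published;
    return referenceDate <= utcNow.AddDays(-retentionDays);
}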
Thanks,
Steve
The command will recreate the user, restore administrative privileges, etc. It's safe to run - and you'll ultimately be left with an Admin/Admin user that you can log in as.
On ProGet 2025, the command is proget or proget.exe. We should update the docs for sure.
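On a typical Windows install, that looks something like this (the install path is an assumption; adjust for your machine):

cd "C:\Program Files\ProGet\Service"
proget.exe resetadminpassword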
Thanks,
Steve
Hi @Sigve-opedal_6476 ,
There are some known issues that we intend to fix with PG-3144 in the next maintenance release (scheduled for Friday). This will likely be resolved then.
The inedo/proget:25.0.14-ci.10 container should have these changes in it, if you'd like to try it out sooner.
Thanks,
Steve
@yakobseval_2238 thanks for letting us know, I just updated it!
Hi @k-lis_1147,
Based on what you described, it should show up.
Can you confirm what feed type you're using, and whether or not you're using PostgreSQL (the default for ProGet 2025)?
I just discovered a bug (PG-3145) that impacts PostgreSQL (probably all feeds) and certain feed types on SQL Server (Maven), causing that information not to display on that page.
Easy fix, but just want to double-check
Thanks,
Steve
Hi @koksime-yap_5909 ,
If you ever get "locked out" of an Inedo product, either due to misconfiguration or a lost password, you can restore the default Admin/Admin account and reenable Built-in User Sign-on by using ProGet.exe resetadminpassword
Here's more information on this procedure:
https://docs.inedo.com/docs/installation/security-ldap-active-directory/various-ldap-troubleshooting
Thanks,
Steve
Hi @tayl7973_1825 ,
Thanks for the feedback; this is all a relatively new space, so we're in the process of building best practices / advice as well as tools to help teams solve these problems.
"Right now, based on your suggestion, it sounds like the workflow would require us to manually identify which applications depend on a vulnerable library, notify each owning team..."
You are correct - the SCA Builds & Projects functionality is designed to "provide that link" between specific package versions and specific builds of applications. The builds are a moving target, as they may or may not be active/deployed.
The "Project" in ProGet is not intended to the "source of truth" about the project itself, but be sort of sync'd with the truth (e.g. like an Application in BuildMaster). That's why there's a "stages timeline" for builds in PRoGet.
"...hope it fits within their priorities, and then track remediation through individual tickets."
Our advice here is to think of it more like, "advise them of the identified security risk and unavailability of the impacted library they are using". Ultimately it should be up to the team (their product owner) to evaluate the risk you identified and mitigate it. For example, TeamLunchDecider1000 can probably live with a security risk, but let the team decide.
Once you've removed the library from ProGet, they can't use it anymore and it's "no longer your problem" to worry about or track through tickets.
"Ideally, we were hoping our package management system — since it already governs distribution and security controls — could act as that “one stop shop” to track and visualize which applications still rely on a vulnerable version alongside its assigned severity rating."
ProGet already provides visibility into consumers through SCA, and you can already see how OSS Vulnerabilities impact builds.
HOWEVER, our core advice here is to not try to establish your own in-house "vulnerability database" for the in-house libraries in your organization. Even large orgs (2000+ developers) won't do that.
Instead, it's a simple binary decision: PULL or KEEP the library. If you PULL, then notify consumers it's unavailable going forward and let them decide how to mitigate.
That approach is superior to OSS Vulnerability workflows, but it's obviously not possible for OSS library authors to do.
Cheers,
Steve