Hi @caterina ,
Thanks for checking; I was able to reproduce this, and it seems to be a regression. We'll get it fixed via PG-2795 in next week's maintenance release. Thanks for letting us know.
Thanks,
Alana
Hi @caterina,
I'm not seeing any changes in 2024.14 that would cause this behavior, so I'm not sure what it could be offhand.
The first thing I would check is the connector filtering; can you try that out? It's under Admin > Manage Feed. There might be something in there.
You may also see some recent connector errors under Admin > Diagnostic Center.
Thanks,
Alana
Hi @jw ,
This doesn't seem like a trivial change, due to the way those pages work, so I'll add it to the "wishlist" - we've got a lot of other ProGet 2025 roadmap stuff prioritized ahead of it for the time being :)
Cheers,
Alana
Hi @dan-brown_0128 ,
It doesn't look like there's been much interest in this so far (we haven't heard any other requests for it), but I wanted to mention that Terraform repositories are planned and something we hope to accomplish in the coming months.
Cheers,
Alana
Hi @dan-brown_0128,
Thanks, we'll definitely keep this in mind when we explore updating/expanding the feature!
In the meantime, I think there might be a better workflow for you to consider. We wouldn't recommend using "download blocking" for application packages like this.
The reason is that the status could change, and when you go to deploy the application, it will probably fail in an inconvenient place, and the person doing the deployment won't quite get why there's a random error/crash.
Instead, how about using pgutil builds audit as a means to achieve a similar goal? You would run it as a deployment precheck, and the tool would output errors if the compliance status of any packages were not acceptable. This would be a much more intuitive failure, and you could run it early in the process.
I'm not quite sure if pgutil builds audit is suitable today, but I know it's on our shortlist to get working, along with HOWTO guides for using it.
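Something like this, as a sketch only - the project/build values are placeholders, and the exact pgutil option names and exit-code behavior should be verified against pgutil's built-in help:
# rough precheck step, assuming a non-zero exit code when the build is non-compliant
pgutil builds audit --project=MyApplication --build=1.2.3
if [ $? -ne 0 ]; then
  echo "Build did not pass compliance checks; aborting deployment." >&2
  exit 1
fi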
Cheers,
Alana
Hi @jw ,
This can be accomplished with a script using the Set Package Status API; there are a couple of scripts there that should get you going.
Cheers,
Alana
@sebastian said in Problem with PGV-22381O7 (tree-kill1.2.2 incorrectly flagged vulnerable):
There should be special treatment for withdrawn vulnerabilities within ProGet. Maybe not deleting them (because I'm pretty sure there will be cases where I will be looking at a package and think "I swear this one had a vulnerability, but now I can't find it?"), but maybe auto-assess them with a special status.
That's what we were worried about as well - having them disappear. Perhaps we just delete the ones without assessments, and if you set a withdrawn vulnerability back to unassessed, it gets deleted.
Hi @dan-brown_0128 ,
Thanks for sharing!
#1 and #2 will be pretty easy UI changes - and I'm almost certain there's already a URL for Builds, but it's just not linked.
#3 is complicated, since we do not model package consumers (which would allow you to say "show me all packages that use X"), nor do we have the concept of "application packages" (e.g. MyCorp.Package is also a project), so we'd have to really think about how to support this use case.
As an FYI, our team is currently pretty heads-down on feeds (new Maven feed, then Rust/Cargo, Terraform, PHP/Composer, C++/Conan), PostgreSQL migration, and a few other things, but our rough plan is to resume working on SCA stuff in Q1.
If these are a particular blocker for rolling out the feature, we can consider shifting focus.
Hi @sebastian,
Thanks for sharing this...
I don't really know the answer, but I searched for "tree-kill" on Inedo Security Labs, and found three results:
We can see that PGV-22381O7 does say it "affects tree-kill (npm), versions (all)", so that's where the data in ProGet is coming from. My guess is that it's a data update/aggregation problem, maybe related to the Withdrawn status?
There is apparently some use case for all as a version, and I guess that's exactly what you ran into? A lot of Red Hat/Linux system packages have "all versions" listed and get updated later.
Anyway, ISL is managed by a different team, so I'll submit an internal request to review it. It doesn't seem urgent - just inconvenient/incorrect and easy to work around in ProGet - but let me know if I misread that.
As for "Withdrawn" vulnerabilities... we're open to ideas for what to change in ProGet 2025. There used to be just a handful, but there are a lot more now. Our original plan was to just delete them from ProGet, but instead we just showed the icon. Maybe we should delete them.
Thanks,
Alana
Hi @caterina ,
Can you try downloading the latest version of Inedo Hub from https://my.inedo.com/ ?
That should allow you to install the older version. You can also download an offline installer for the ProGet 2023 version you'd like.
Thanks,
Alana
Hi @steviecoaster ,
I'm not really a PowerShell guru, but I think you'll want to do ...
$base64Salt = [System.Convert]::ToBase64String($saltBytes)
... and then pass that in.
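In other words, something along these lines (a minimal sketch; it assumes you're generating the salt bytes yourself, and the 10-byte length is just an arbitrary example):
# generate some random salt bytes and encode them as base64
$saltBytes = [byte[]]::new(10)
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($saltBytes)
$base64Salt = [System.Convert]::ToBase64String($saltBytes)
# ...then pass $base64Salt in as the salt value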
Hope that does it!
Alana
Hi @chris-cantrell_1211 ,
We do not support nor recommend this type of configuration for a couple reasons.
First, it doesn't change the attack surface, since the APIs are already the "weakest link". The web site uses cryptographically-secured authentication tickets with anti-CSRF protection; the API just requires an API key to access.
Second, it's confusing to end users who are trying to troubleshoot why some URLs aren't accessible. The API may provide them with a link (for example, to a vulnerability in a package), and then it will give some kind of error because the page is blocked. This causes everyone a headache.
Obviously you can use a lot of tools to block/allow access to URLs - just not ProGet itself.
Cheers,
Alana
Hi @tamir-dahan_7908 ,
Either you're making a typo somewhere, or there is something odd about your Docker/server configuration that is preventing a local network connection. I'm afraid I'm not a Docker expert, so I don't know what else to look for -- it could be specialized security configuration you have or a monitoring tool that's blocking things.
If you have a Docker expert on your team, I would check with them to get some help. Ultimately, ProGet is just an ordinary .NET application making a connection to an ordinary SQL Server that's on the same local network (i.e. inedo-sql)... it's a super-common use case and just works out of the box.
The most common issue is a typo (i.e. inedo-sql in one command and inedosql in another), but since this is such a common use case, if you search "Docker" and "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible", you'll find a ton of results with lots of different things to try as well.
If you're new to Docker, I would suggest installing with Inedo Hub - it's a lot easier to use.
Cheers,
Alana
The error message coming from ProGet is this:
Unhandled exception: Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
So basically it's a network error, and the ProGet container can't talk to the SQL Server container. I'm not an expert on troubleshooting Docker, but my "guess" would be that the network is different between the containers.
This is specified by the --net argument, and I know I've mistyped that a few times. Here's the script I use to get up and running on Docker:
docker run --name inedo-sql \
-e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=«YourStrong!Passw0rd»' \
-e 'MSSQL_PID=Express' --net=inedo --restart=unless-stopped \
-d mcr.microsoft.com/mssql/server:2019-latest
docker exec -it inedo-sql /opt/mssql-tools/bin/sqlcmd \
-S localhost -U SA -P '«YourStrong!Passw0rd»' \
-Q 'CREATE DATABASE [ProGet] COLLATE SQL_Latin1_General_CP1_CI_AS'
docker run -d --name=proget --restart=unless-stopped \
-v proget-packages:/var/proget/packages -p 80:80 --net=inedo \
-e PROGET_SQL_CONNECTION_STRING='Data Source=inedo-sql; Initial Catalog=ProGet; User ID=sa; Password=«YourStrong!Passw0rd»' \
proget.inedo.com/productimages/inedo/proget:latest
Hope that helps,
Alana
Hi @jw ,
This error should not be logged, and you can ignore it; we'll get it fixed in an upcoming maintenance release of ProGet via PG-2740.
Cheers,
Alana
Hi @stefan-seeland_4753 ,
I'm not totally sure what you mean by that... but to clarify, the API is technically called "NuGet API V2 (OData)", and it could technically be used by anything, but it's probably just used by the NuGet client or by a script/tool someone wrote to query the feed.
There is no more information available than that, so you'd need to disable the feature on the feed (and therefore cause an error on the client) or use some kind of access monitoring tool to see where the query is coming from (IP address, etc.).
Cheers,
Alana
Hi @jw ,
This will be fixed via PG-2739 in the next maintenance release (scheduled for Friday).
If you want to fix it right away, you can download the .sql file that I attached to the linked issue above and run it against the database. Then the delete will work. Upgrading on top of the patch is fine, but if you downgrade, the patch code is overwritten.
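For example, if you save the attached file as PG-2739-patch.sql (that name is just a placeholder), you could apply it with sqlcmd:
# -E uses Windows authentication; swap for -U/-P and adjust the server name as needed
sqlcmd -S localhost -d ProGet -E -i PG-2739-patch.sql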
Cheers,
Alana
Hi @caterina ,
This issue looks very familiar, and I'm almost certain it's a bug we discovered and fixed while testing ProGet 2024 prior to release. Basically, the npm scope was not considered for vulnerability searches during build analysis.
This should not happen in ProGet 2024.
Thanks,
Alana
Hi @matt-wood_5559,
Our solution for Maven builds is to leverage CycloneDX to generate the SBOM, and then upload that SBOM to ProGet: https://docs.inedo.com/docs/proget-sca-java
We had considered reproducing that functionality, but the only way to get dependency information from a Maven project is to create a Maven plugin and "watch" the build as it happens --- and that's already what CycloneDX does very effectively.
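For reference, the generation step is usually just one Maven command (the standard CycloneDX plugin goal; output path may vary by configuration):
mvn org.cyclonedx:cyclonedx-maven-plugin:makeAggregateBom
# produces target/bom.xml (and bom.json), which you then upload to your ProGet
# project/build as described in https://docs.inedo.com/docs/proget-sca-java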
If you can think of ways to make it easier to work with pgutil, we're very open to that :)
Thanks,
Alana
Hi @matt-wood_5559,
I think I understand what you're asking - basically, you'd like to create a feed where some users see one set of packages, but other users see a different set?
This is definitely not possible; it's simply not something ProGet does from a design standpoint, i.e. "file-system type" granular permissions.
While it might seem convenient or nice to give users "just a single URL" to access, it ends up making things much more complicated to configure/use/maintain. Basically, some users will get random "package not found" errors while others will build fine. It'd be very confusing and a big headache.
Instead, it's best to educate users on the different feeds/repositories, and help them use and request access as needed.
Hope that helps,
Alana
Hi @matt-wood_5559,
I might need a little more information / screenshots of what you're looking at here....
There could be a bug/oversight, etc. Perhaps you could walk through the steps / show screenshots of what you're seeing? If it's on Maven Central, we can then repro and take a look.
Thanks,
Alana
Hi @matt-wood_5559,
A Maven package's license is determined by the license field in the POM:
https://maven.apache.org/pom.html#Licenses
There are unfortunately no real standards here, and the author can put in anything from an SPDX code (which is recommended, by the way) to a string like "Apache license" (which probably means Apache 2.0, but who knows?).
We would rather not guess what the author might have intended, so ProGet only detects licenses with SPDX codes and then lets you decide how to assign codes to the other license strings as you come across packages that don't follow SPDX.
However, once you start associating those strings with licenses, it will work for future packages.
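For reference, here's what that POM block looks like when the author does use an SPDX code (values are illustrative):
<licenses>
  <license>
    <name>Apache-2.0</name>
    <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>
  </license>
</licenses>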
Thanks,
Alana
Hi @matt-wood_5559 ,
You should be able to upload any file type. There is a dropdown on the upload page, but that's just for the popular types and is kind of an example. Typically the files are uploaded via Maven/CLI anyway, and not the UI.
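For example, a one-off file can be pushed with Maven's deploy:deploy-file goal (the feed URL and repositoryId below are placeholders for your environment and settings.xml entry):
mvn deploy:deploy-file -Dfile=my-library.jar \
  -DgroupId=com.example -DartifactId=my-library -Dversion=1.0.0 \
  -Dpackaging=jar -DrepositoryId=proget \
  -Durl=https://proget.example.com/maven2/private-maven/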
Thanks,
Alana
@forbzie22_0253 setting the proxy programmatically is not something we currently support or document, but if you're dedicated you could use the Native API or the newly-developed Settings API:
https://github.com/Inedo/pgutil/blob/thousand/pgutil/Settings/ListCommand.cs
It's configured via values in the Configuration table.
@francesco-campanella_3733 thanks for continuing to research this - we just haven't had a chance to look further. Did you try editing the feed settings (especially the Feed Features, which would save a new feed configuration), and then re-saving again? We will fix the bug; I would just like to be able to reproduce it so we know for sure what's causing it :)
@apxltd heard this from a user on the support forums today...
We also came across this issue: https://github.com/dotnet/SqlClient/issues/2378, which suggests it might have started with the recent 5.2.x versions of Microsoft.Data.SqlClient (I did confirm that this is the version ProGet is using).
Maybe we can downgrade to 5.1 and see if it helps at all
Hi @jw ,
It's unlikely we will want to add/change this; what you're describing isn't a supported use case, and adding a "second URL" type field that would be empty on nearly every license would be confusing.
Our general recommendation for dealing with non-OSS licenses has been to create a code like DEVEXPRESS or ASPOSE, and treat them like all the others.
If there's more user demand for this particular use case, we'll definitely reconsider. For now, the licenses are confusing enough :)
Thanks,
Alana
Hi @jw ,
This is not possible; ProGet only stores that portion of the URL and uses the URL fragment for license detection. This is important, because then users won't have to specify every variation of a license URL that packages might present.
Thanks,
Alana
When the license information is invalid, ProGet will give an error on most API URLs (I think a 400, but I'm not totally sure) and will redirect to the license info page on web page URLs.
ProGet will attempt to auto-activate the license key if there is an activation problem, so that may delay the first request.
The best way to check for license validity is to visit the site or use the API; if it fails with "invalid license", then you know it's invalid.
The /health page should also display this, I believe.
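For example (the server URL is a placeholder, and the exact JSON property names may differ, so check the output on your own instance):
# the /health endpoint returns JSON that includes license/application status
curl -s https://proget.example.com/health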
Thanks,
Alana
@forbzie22_0253 when you select "Use Windows Proxy Setting", then ProGet will use the proxy that's configured for the operating system. You can select/set this on the Proxy Settings page.
@francesco-campanella_3733 sorry, not seeing the issue yet... one more column
Can you run this?
SELECT Feed_Id, FeedType_Name, FeedConfiguration_Xml FROM Feeds
A screenshot is okay; we can figure out the XML from the post above if needed.
You can use Feed_Name instead of Feed_Id, it's up to you -- I just want some kind of reference in case I want to ask you to try something in the UI.
Hi @francesco-campanella_3733 ,
Based on the error message, I'm thinking that one of the feeds has some kind of legacy/odd configuration, and that's causing an error in the API (which is why the library and pgutil aren't working either).
Without looking at your database, I think the easiest way to fix it is to go to the Manage Feed page, and then edit/save the Feed Features or NuGet settings.
If we saw the results of SELECT * FROM [Feeds], and in particular the FeedConfiguration_Xml column, we could know for sure.
Thanks,
Alana
Hi @fabio-xodo_3872 ,
This is not captured in an output variable, but you can control which exit code means success or failure:
Exec MyProcess.exe
(
SuccessExitCode: "> 0"
);
Another alternative is to write a PowerShell script that captures the output as a variable, if you need to do logic based on multiple codes.
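If you go that route, a minimal sketch using the MyProcess.exe example above might look like this (mirroring the "> 0 means success" convention):
& .\MyProcess.exe
$exitCode = $LASTEXITCODE
if ($exitCode -gt 0) {
    Write-Host "Exit code $exitCode - treating as success"
}
else {
    Write-Error "Exit code $exitCode - treating as failure"
}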
We could always add support for capturing it as an output variable as well.
Hope that helps,
Alana
Hi @sebastian,
I took a quick look, but it's not a simple cherry-pick; this is a bug in the code that does policy analysis, and there were enough changes in that area between ProGet 2023 and ProGet 2024 to make it a bit risky / time-consuming to bring over...
Thanks,
Alana
Just wanted to give a brief update to the issue from @mortenf_3736 that we discussed via the support ticket.
We were never able to get to the bottom of it, but it was entirely related to running on ACA.
| Container Setup | VM Setup |
|---|---|
| Azure Container Apps - running 4 minimum replicas, max 8 replicas, each with 3 cores and 6GB memory | Two D4 (4 cores, 16GB memory) VMs with an internal load balancer in front of them |
| Mapped storage from an Azure Storage Account v2 file share | Managed Premium shared disk, connected to a fail-over cluster and shared between both VMs |
| Azure SQL Database with 4 dedicated cores | Azure SQL Database Serverless, scaling between 0.5 and 4 cores |
This doesn't seem to be related to Linux/Docker/Kubernetes, as we have several high-traffic users on Kubernetes clusters without issues like this. However, we have seen a handful of Azure-related problems over the years that manifested in ProGet:
So we believe the issue is the Azure platform itself, similar to the hardware/software glitches above that we've uncovered over the years.
We're doing our best to research/identify issues, and Inedo/ProGet users aren't the only ones experiencing pain like this. Consider this report from an Azure "big data" user:
I have suffered from chronic socket exceptions in multiple Azure platforms - just as everyone else is describing. The main pattern I've noticed is that they happen within Microsoft's proprietary VNET components (private endpoints). They are particularly severe in multi-tenant environments where several customers can be hosted on the same hardware (or even in containers within the same VM).
The problems are related to bugs in Microsoft's software-defined networking components (SDN proxies like "private endpoints" or "managed private endpoints"). I will typically experience these SDN bugs in a large "wave" that impacts me for an hour and then goes away. The problems have been trending worse over the past year, and I've opened many tickets (Power BI, ADF, and Synapse Spark).
Other Azure users (who are much more technical than we are) have confirmed that there are indeed severe issues with their SDN infrastructure. Microsoft does appear to be aware of these endemic issues with their platform, and for the time being we simply cannot recommend using Azure's container services for anything that will have any kind of load.
Hope that gives some insight in case anyone stumbles across this thread.
Alana
The error message is incorrect; it should say Required property missing: packagePermissions.
Since we don't document that one very well yet, I'd make sure it works with pgutil, and consider capturing the traffic. Then, if you have an example we can post on the docs page, we'd be happy to add it to the docs.
Here is the client code as an FYI:
https://github.com/Inedo/pgutil/blob/thousand/pgutil/ApiKeys/Create/FeedCommand.cs
Thanks,
Alana
Hi @forbzie22_0253 ,
Some of this is discussed on some recently published docs, but let me summarize some key points.
In general, we recommend using pgutil for programmatic access to ProGet, but the HTTP endpoints may be more appropriate for working with structured data and advanced integrations.
If you're using .NET/C#, you may find the Inedo.ProGet library helpful:
https://docs.inedo.com/docs/proget-reference-api#net-library-nuget-package
Overall, we are working towards aligning our HTTP Endpoints and developing new ones using a pgutil-first approach. This means we will prioritize the CLI experience by creating intuitive and self-documenting commands. Then we will use existing HTTP Endpoints and develop new ones to fit those commands.
Thanks,
Alana
Hi @parthu-reddy ,
ProGet 2024.7-rc.1 is now available for your testing/verification
Here is information on how to use it:
https://docs.inedo.com/docs/howto-install-prerelease-product-versions
Please let us know the results!
I'm afraid we don't have enough information to troubleshoot the problem. The use case you describe (using ProGet as an offline cache) is very common, so it might be some kind of misconfiguration on the npm side or something; I don't really know.
I would check the npm logs, try to monitor HTTP traffic, etc.
If you can "see" the package in ProGet as a remote package, and then download it from the web page, then the npm client can do the same. The npm API doesn't distinguish between local, cached, and remote packages, so the package is pulled regardless.
Best,
Alana
Hi @forbzie22_0253,
We don't document the specifics of the Native API, so I'm not sure. Your best bet is to study the underlying stored procedure and see what's going on there - that's what we do when we need to use those methods.
That said, these Native API methods have been removed in ProGet 2024 in favor of Create API Key:
https://docs.inedo.com/docs/proget-api-apikeys-create
Best,
Alana
Hi @forbzie22_0253 ,
If you're receiving a Manual Activation dialog, then it means ProGet cannot connect to our activation server. That is a requirement in ProGet Free, and it's not possible to work around.
I'm not sure what your use case is, but since you mentioned "several servers", I wanted to clarify that ProGet Free is restricted from connecting to another ProGet server. They basically need to be standalone and not part of the same system.
There is a ProGet Enterprise for Edge Computing edition that has an activation-less model.
Thanks,
Alana
Hi @parthu-reddy ,
I created a new npm feed and had no problem downloading those package versions via npmjs.org.
As I mentioned, ProGet will not issue network errors, so it's definitely something else that's interfering. Hopefully you can find some information from Wireshark.
FYI - in the screenshot you're sharing, I see that most of the requests are going to registry.npmjs.org, and not to your ProGet server.
Thanks,
Alana
Hi @parthu-reddy ,
Unfortunately this will be difficult to troubleshoot.
As the NPM error indicates, this error is related to network connectivity. I'm afraid that neither the npm client nor the ProGet server is able to troubleshoot network problems, so you'll need to use something like Wireshark or another traffic monitoring tool to discover why you're getting that error.
ProGet does not produce network-level errors like that, which means something else is causing the error - most likely a load balancer, firewall, etc.
There is also no reason that package version 1.3.1 or 1.3.0 would fail to download - but if that's indeed what's happening, then it's likely related to whatever device is issuing the network error. I have no explanation for what that would be.
As for the Azure screenshot, it's showing a 200 (success) message, which means that ProGet is not giving any kind of error - at least not all the time.
Thanks,
Alana
Hi @parthu-reddy,
Basically you need to upgrade your npm client.
The underlying issue is that your npm client is attempting to use the now-deprecated "quick audit" API. Here is information about this API endpoint:
https://docs.npmjs.com/cli/v10/commands/npm-audit#quick-audit-endpoint
ProGet does not implement this deprecated endpoint and it's unlikely we ever will, since it's only used by old versions of the npm client.
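If upgrading isn't immediately possible, the older client can also be told to skip the audit call entirely:
# skip the audit call for a single install
npm install --no-audit
# or disable it in the client configuration
npm config set audit false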
Thanks,
Alana
Hi @edward-a-peng_7759 ,
There are no plans for a "generic" OCI Registry feed type. However, if you can help us understand what value this could add, we can consider building one.
A generic OCI registry seems to be a "dumb" file system that's built around "dumb" cloud storage like S3, etc. What I mean is, it's just files with no context. There's no real visibility into what's stored in an OCI registry - it's just a place to store and access unnamed files via digests.
ProGet is a "smart" package and container system, and stores OCI-based container images (i.e. Docker images) in a Docker feed, and Helm charts in a Helm feed. There are so many advantages to this "smart" system vs a "dumb" registry:
A "dumb" file system obviously couldn't offer any of this - and is one of the reasons people prefer ProGet over ECR, ACR, GCR, etc.
So far as I can tell, it doesn't matter to client tools whether you use ProGet Feeds or a generic OCI registry -- everything works the same with regards to helm, docker, kubernetes, and other supported tools.
Are there any advantages to having Helm charts being stored or accessed in a different manner?
Thanks,
Alana
You could do this via the API; it would involve first querying the package versions, and then, for each version returned, setting the package status. I think it'd be a "relatively easy" script to write - so if you create one that can do it, please share :)
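As a very rough sketch of what that might look like in PowerShell - the feed/package names and API key are placeholders, and the endpoint shapes, query parameters, and body fields are assumptions from memory, so verify them against the Packages API docs before using:
$baseUrl = "https://proget.example.com"
$headers = @{ "X-ApiKey" = "«api-key»" }

# query all versions of the package in the feed (assumed endpoint shape)
$versions = Invoke-RestMethod "$baseUrl/api/packages/MyNuGetFeed/versions?name=MyCorp.MyPackage" -Headers $headers

# then set the desired status on each version (unlisting, as an illustrative example)
foreach ($v in $versions) {
    Invoke-RestMethod "$baseUrl/api/packages/MyNuGetFeed/status?name=MyCorp.MyPackage&version=$($v.version)" `
        -Method Post -Headers $headers -ContentType "application/json" `
        -Body (@{ listed = $false } | ConvertTo-Json)
}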
Hope that helps,
Alana
Hi @forbzie22_0253 ,
This refers to Delete Package, which was introduced in ProGet 2023:
https://docs.inedo.com/docs/proget-api-packages-delete
We don't really document/track which Native APIs change between releases, so I'm not really sure: https://docs.inedo.com/docs/proget-api-http#native-api-endpoints
You can do a comparison on the /api/reference page.
Thanks,
Alana
Hi @mortenf_3736 ,
I also mentioned this in your ticket, but the issue you're experiencing is a bit different. In Pawal's case, it's a different error message (though both are related to the database), and it was happening in both ProGet 2023 and ProGet 2024 (yours happened only after the upgrade). In addition, his error was happening randomly (high or low traffic), whereas yours seems tied to high traffic.
You're also running on ACA with auto-scaling, and seem to have a very high occurrence of container stops/starts. Anyway, we will continue to troubleshoot your issue in that ticket.
Thanks,
Alana
I looked at this one more closely, and it's behaving as expected.
"any error" means AgentError, CollectionError, or RemediationError. There are many other statuses, and the page doesn't filter on all of them (including NoRoles or Unknown).
The status logic is complex; if you're curious, here it is:
string getStatus()
{
if (!server.Active_Indicator)
return Disabled;
if (server.HasNullAgent())
return Unknown;
if (!server.HasLocalAgent())
{
if (server.AgentStatus_Code == Domains.AgentStatusCode.Error)
return AgentError;
if (server.AgentStatus_Code == Domains.AgentStatusCode.Updating)
return AgentUpdating;
if (server.AgentStatus_Code == Domains.AgentStatusCode.Unknown)
return Unknown;
}
if (server.RoutineConfigurationUsage_Code == Domains.ServerRoutineConfigurationUsage.None)
return NoCollection;
if (!server.HasRoles_Indicator)
return NoRoles;
if (server.LatestCollection_Execution_Id == null)
return Unknown;
if (server.LatestCollection_ExecutionRunState_Code == Domains.ExecutionRunState.Executing)
return Collecting;
if (server.LatestCollection_ExecutionStatus_Code == Domains.ExecutionStatus.Error)
return CollectionError;
if (server.LatestRemediation_ExecutionRunState_Code == Domains.ExecutionRunState.Executing)
return Collecting;
if (server.PendingRemediation_Indicator)
return PendingRemediation;
if (server.LatestRemediation_ExecutionStatus_Code == Domains.ExecutionStatus.Error)
return RemediationError;
if (server.ConfigurationState_Code == Domains.ConfigurationState.Current)
return Current;
if (server.ConfigurationState_Code == Domains.ConfigurationState.Drifted)
return Drifted;
return Unknown;
}
Of course it could be improved, but perhaps another day 
Best,
Alana
FYI - I was able to reproduce and fix this, at least the resource resolution portion.
Unfortunately you will still need to have variables named $Commit and $Repository, but that's a much more complex problem to solve....
This works:
set $Commit = gitlab-vishab;
set $Repository = mast;
Git::Checkout-Code;
Or you could also just set system-scoped variables with the same names.
This will go in the next maintenance release of Otter, which is scheduled for May 31, but we can make a pre-release if you'd like.