Hi @michal-roszak_0767 ,
Just as an update, we've added this to our roadmap planning for ProGet 2026.
We will begin that process later this year, after ProGet 2025 has been released (see road to ProGet 2025).
Cheers,
Alana
Hi @carsten_1879 ,
Can you try using buildmaster:24.0.5 instead of latest? There was a change in the Git library, and I'm wondering if this is a regression on some platforms.
Thanks,
Alana
Hi @pmsensi,
I'm sorry, I'm not sure. The HTTP_1_1_REQUIRED error indicates that an intermediary (e.g., proxy or load balancer) is rejecting HTTP/2 requests and requiring HTTP/1.1.
So, I would check with the proxies or load balancers between Kaniko (running in your GitLab pipeline) and ProGet. I suspect it's interfering with HTTP/2 traffic, causing the server to fall back to HTTP/1.1.
Alternatively, you can try forcing Kaniko to use HTTP/1.1 instead of HTTP/2. I don't know how to do that offhand, but there may be an --insecure flag that would also work.
Thanks,
Alana
This error means there's some kind of problem with the format of the blob file. Tar is a very finicky format, so we'd need to be able to recreate this in a lab/debug setting.
If you can create a simple repro case then we'd be happy to investigate further.
As for vulnerabilities, each version of ProGet ships with an offline database. So you should be fine as long as you upgrade semi-regularly.
Cheers,
Alana
@michal-roszak_0767 currently maven2 is a workaround, so it will continue to work. In ProGet 2025, Maven will create the expected feed type.
This is what I meant by the Security API -- a combination of HTTP endpoints and pgutil commands. We will consider them for our ProGet 2026 roadmap.
So until then, you'll need to use the Native API; at this time, the Native API is the only option.
You can also check out @steviecoaster 's PowerShell module, which has some commands that can help: https://github.com/steviecoaster/InedoOps
Thanks,
Alana
It looks like you're trying to do a GET request, which is why it's not working. You need to POST an array of package identifiers like this:
POST /api/sca/audit-package-vulns
[
    {
        "name": "NuGet.CommandLine",
        "version": "6.8.0",
        "type": "nuget"
    }
]
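For example, here's a rough Python sketch of that call (the server URL and API key are placeholders for your own; the key is passed in the X-ApiKey header):

```python
import json
import urllib.request

# Placeholder values -- replace with your ProGet URL and API key.
PROGET_URL = "https://proget.example.com"
API_KEY = "abc123"

# The array of package identifiers to audit, as shown above.
packages = [
    {"name": "NuGet.CommandLine", "version": "6.8.0", "type": "nuget"},
]
body = json.dumps(packages).encode("utf-8")

def audit_package_vulns():
    """POST the package list to the audit endpoint and return the parsed JSON result."""
    req = urllib.request.Request(
        f"{PROGET_URL}/api/sca/audit-package-vulns",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json", "X-ApiKey": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```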
Cheers,
Alana
Just an FYI, we discussed creating/documenting/etc. a new security API (users, groups, permissions) for our ProGet 2025 roadmap, but decided against fitting it in. We'll reevaluate when it comes time to plan out ProGet 2026!
Thanks,
Alana
Ah, that must have been an oversight; it's a trivial change, but best we don't change it until ProGet 2025.
I think you can use maven2 as the type for now; it's the internal identifier. Later on, we will make it maven and mavenlegacy.
Thanks,
Alana
Hi @itadmin_9894 ,
It doesn't seem to be documented, but we do have a pgutil packages audit command:
$> pgutil packages audit --help
Description:
Analyzes a package for compliance issues
Usage:
pgutil packages audit [options]
Options:
--package=<package> (REQUIRED) Name (and group where applicable) of package
--version=<version> (REQUIRED) Version of package
Common Options (packages):
--api-key=<api-key> ProGet API key used to authorize access
--feed=<feed> Name of feed in ProGet
--no-connectors Only include local (non-connector) package data in results
--password=<password> ProGet user password used to authorize access
--source=<source> Named source or URL of ProGet
--username=<username> ProGet user name used to authorize access
-?, --help Show help and usage information
Examples:
$> pgutil packages audit --feed=approved-nuget --package=Newtonsoft.Json --version=12.0.3
$> pgutil packages audit --feed=public-npm --package=@babel/runtime --version=7.25.0
$> pgutil packages audit --feed=private-pypi --package=Django --version=5.0.6 --filename=Django-5.0.6.tar.gz
Cheers,
Alana
Hi @michal-roszak_0767 ,
This is a third-party / community plugin and we have no plans to maintain it ourselves. I hear it still works, but is a bit outdated.
HOWEVER, the former owner very recently "handed it over" to another community member. So this may mean we'll see some "new life" or changes -- I think they're still figuring out how to actually publish a new version, or something like that:
https://github.com/jenkinsci/inedo-proget-plugin
Thanks,
Alana
@michal-roszak_0767 only what's shown there is currently supported.
I saw that @zs-dahe is looking for the RetentionActive setting as well (see https://forums.inedo.com/topic/5341/retention-activation-via-api), so it would probably be easiest to do these all at once: I'm almost certain this is done at the ProGet level, not pgutil.
Can you open a new thread w/ the requested fields/properties? This will make it much easier for people searching in the future :)
Sounds like a plan, @c4buildmasters_2588 !
In case you haven't seen it, it's "relatively easy" to load a custom extension:
https://docs.inedo.com/docs/inedosdk/extending-inedo-tools-using-the-sdk/inedosdk-extending-creating
For testing this change, I would just build/pack it with a higher version number than the official AWS extension - if all looks good on our end (i.e. it won't break existing installs, etc.), then we'll likely be able to accept a pull request and publish an official new version. Then you can delete your custom one.
Thanks,
Alana
Hi @michal-roszak_0767 ,
We do not have a non-Native API to assign privileges at this time, but otherwise our documented APIs are here: https://docs.inedo.com/docs/proget/api
Here is the info about the Native API:
https://docs.inedo.com/docs/proget/api/http-requests#native-api-endpoints
Thanks,
Alana
Hi @c4buildmasters_2588 ,
I haven't heard of MinIO before, but they claim to be "S3 Compatible Storage" so maybe it'll work?
I also don't know what "virtual path support" or "path style" means, but I can say that several users have had success with other S3-compatible services, while other services have had bugs and/or aren't actually S3-compatible. Best I can say, give it a shot and see what happens.
You can see how we use the AWS SDK and what our code looks like over here:
https://github.com/Inedo/inedox-aws/blob/master/AWS/InedoExtension/FileSystems/S3FileSystem.cs
No idea what to enter in any of the fields, but I suppose you would just enter the CustomServiceUrl ("Advanced" tab).
If you get it working, let us know - it would be nice to have a follow-up post here in case anyone else searches for it :)
Cheers,
Alana
Hi @michal-roszak_0767 ,
I assume you mean, why don't we recommend using the Native API?
We don't recommend using the Native API because we do not document usage and the methods are subject to removal or change, even in maintenance versions.
In this case, we do not have an alternative API for getting security privileges, so the only option is Security_GetPrivileges if you want to do that programmatically.
Thanks,
Alana
Hi @zs-dahe ,
This isn't currently supported via the API / pgutil feeds update command. Let us know if you'd like us to consider this a feature request and we could likely add it in an upcoming maintenance release :)
Cheers,
Alana
Hi @michal-roszak_0767 ,
The first thing that comes to mind is Windows Integrated Authentication.
Curl does not support that, but if you use PowerShell with -UseDefaultCredentials it will work. You can also bypass integrated authentication by appending ?bypassIntegrated=false to the URL.
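If you're scripting this, here's a small sketch (in Python, just as an illustration) of appending that parameter without clobbering any existing query string; the URL is a placeholder:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def add_bypass_param(url: str) -> str:
    """Append bypassIntegrated=false to a URL, preserving any existing query string."""
    parts = urlsplit(url)
    query = parse_qsl(parts.query)
    query.append(("bypassIntegrated", "false"))
    return urlunsplit(parts._replace(query=urlencode(query)))

print(add_bypass_param("https://proget.example.com/nuget/feed/index.json"))
# -> https://proget.example.com/nuget/feed/index.json?bypassIntegrated=false
```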
Cheers,
Alana
Hi @mmaharjan_0067 ,
The pgutil assets list command is intended for interactive use and should not be parsed; its output could change in future versions.
Instead, you can just call the endpoint directly, which will contain the details you're looking for:
https://docs.inedo.com/docs/proget/api/assets/folders/list
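As a rough sketch, calling that endpoint from Python might look like this (this assumes the folder-listing path is /endpoints/«directory»/dir/«path» with an X-ApiKey header -- check the docs page above for your version; all names below are placeholders):

```python
import json
import urllib.request

def list_folder(base_url, directory, folder, api_key):
    """Request a folder listing as JSON and return the parsed item list."""
    req = urllib.request.Request(
        f"{base_url}/endpoints/{directory}/dir/{folder}",
        headers={"Accept": "application/json", "X-ApiKey": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def item_names(items):
    """Pull just the names out of the listing items."""
    return [item["name"] for item in items]
```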
Cheers,
Alana
Sorry for the slow reply; I tried to quickly reproduce it but it worked as expected -- I must have done something wrong. Then I realized we can just ask for your analysis logs.
On this particular package, can you do a "Re-Analyze" (in the drop-down button) and then share the logs from that? That will help us identify exactly what's going on.
But overall you are correct... once an assessment expires, it should be treated as if it were unassessed, which in your case would mean a noncompliant / blocked package.
Thanks,
Alana
Hi @parthu-reddy ,
If you can provide me with some step-by-step instructions (a reproduction case), then I can see if there's a bug in ProGet. However, we can't really change the "license file is embedded in the package file" technical requirement.
That said... using custom licenses for blocking packages is definitely inappropriate. Please do not do that. It will cause you headaches and probably business disruptions later. There are already tools to prevent users from downloading packages from ProGet; this is not how you want to do it.
The easiest solution here is to align the security team's understanding/expectations with reality. You don't want to try to configure ProGet in unrealistic ways that will lead to actual problems/risks.
I suspect the security team is conflating "vulnerable packages" with "malware and viruses", so it'd be best to take this opportunity to educate them on how packages / ProGet works.
Thanks,
Alana
Thanks @daniel-pardo_5658 , I was able to reproduce it.
It seems to work when you have additional metadata fields. Anyway, we'll get it fixed via PG-2935 in the next maintenance release. As a workaround, you can just download the package, edit the upack.json, and re-upload it.
Cheers,
Alana
Hi @parthu-reddy ,
This hasn't changed; to "Set Package Status", you need to first Pull the Package to the feed. From there, you can add a compliance rule override of Always Block or Always Allow Downloads.
Thanks,
Alana
Hi @scusson_9923 ,
Thanks for clarifying. You're right, the result is not available as a variable. Instead, the Operation will fail, which means you'd want to handle this via a try/catch.
try
{
InedoCore::Exec
(
FileName: pwsh,
WorkingDirectory: C:\,
Arguments: test.ps1,
ErrorOutputLogLevel: Error,
OutputLogLevel: Information,
SuccessExitCode: == 0
);
... operations for success...
}
catch
{
... operations upon failure ...
}
Hope that helps,
Alana
Hi @parthu-reddy,
I'm not sure about the specifics of how you've configured this, but in general, the "first download not blocked behavior" is to be expected with certain types of license checking.
Depending on how the author configured the license, ProGet cannot detect a license without the package file... so until the package has been added to the feed in ProGet (via caching), it's considered "Undetected". In your Policy, you have that as "Warn", so it won't be blocked.
It's not technically feasible to handle this any other way, as ProGet streams the file it's downloading from a remote source to the user while also adding it to the feed for caching.
Thanks,
Alana
Hi @scusson_9923 ,
Can you clarify what update you're looking for?
Are you looking for help on how to capture a failure and test on it?
Thanks,
Alana
Hi @daniel-pardo_5658 ,
Unfortunately I'm not able to reproduce the error; I think it has something to do with your upack.json file. Can you share that?
Thanks,
Alana
Hi @pratik-kumar_8939 ,
Unfortunately Nexus has a very limited API, which is why the import isn't working (404 error - not found). Basically, it doesn't implement the "catalog API" that allows all artifacts/packages to be listed.
However, you can use a file-system / drop-path import after you extract the files from Nexus.
Thanks,
Alana
Hi @infrastrcture_6303 ,
We've had one other user report this, and unfortunately it's a bit complicated behind the scenes. ProGet 6 allowed npm packages to be uploaded without a @ in the scope, which meant that if you used a non-npm client (your own script, etc.), you could have "bad data" in the database.
We did not anticipate this data when doing the ProGet 2023 data migration, so the bad data was brought over.
We don't have a means of fixing it, and it would take a nontrivial amount of time to do that - so as a free user, we can't really put in that time. We do plan on a cleanup task in a few months that will take care of this automatically.
If you're comfortable with SQL, you may be able to fix it yourself (that's what the other user did); it involves figuring out some queries around the PackageNameIds table.
Another alternative is to set up a new instance of ProGet 2024 and then just import the packages from disk.
Thanks,
Alana
Hi @rel_0477 ,
I just want to confirm... is the error that you're not able to view the files on the View Files tab in the UI? The file uploaded fine otherwise?
Are you able to download/pull packages through the connector to Shippyard?
Thanks,
Alana
Hi @jfullmer_7346 ,
I was able to reproduce this, and we will fix it in an upcoming maintenance release (PG-2920). This came from a regression in 2024.28, with some other NuGet feed changes.
Cheers,
Alana
Hi @pratik-kumar_8939 ,
ProGet does not require two different feeds for this, and we would generally recommend putting them in the same feed. If you wish to have the artifacts in two different feeds, then you can just create different feeds and name them how you'd like.
Thanks,
Alana
If the connector is not showing up in the UI, then it's not in ProGet's database.
When you use ProGet to delete connectors (either with API or UI), then it will be logged as a "Connector Deleted" event under Admin > Event Log. There is no other way that the ProGet software can remove a connector record in the database.
If you don't see anything in the event log, then it's not being deleted through ProGet. It's possible that someone or some script has direct access to the database and is removing it that way. Or, perhaps, the database is being restored somehow.
Cheers,
Alana
Hi @itadmin_9894 ,
It sounds like there's a version mismatch between your database and the web server - basically, the web server has different code than the database. I would just try to downgrade/upgrade from the Inedo Hub and then the code will be there.
Cheers,
Alana
Do you mean that the connector is no longer associated with the feed? Is the connector deleted altogether? Or is that you don't see packages when you load the feed?
We don't document this, but @steviecoaster recently figured it out and published a pretty cool library: https://github.com/steviecoaster/InedoOps
It may do what you already need, so I'd check that out!
But if not you should be able to find the answers in
https://github.com/steviecoaster/InedoOps/blob/main/source/public/Security/Users/Set-ProGetUserPassword.ps1
As a note, Users_CreateOrUpdateUser is just a stored procedure, so you could also peek at the code to see what it's doing behind the scenes. Groups is just <groups><group>A</group></groups>
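If you're building that groups blob in a script, here's a small sketch in Python (the function name is just for illustration; the XML shape is the one noted above):

```python
import xml.etree.ElementTree as ET

def groups_xml(group_names):
    """Build the <groups><group>...</group></groups> blob that the
    Users_CreateOrUpdateUser stored procedure expects."""
    root = ET.Element("groups")
    for name in group_names:
        ET.SubElement(root, "group").text = name
    return ET.tostring(root, encoding="unicode")

print(groups_xml(["A"]))
# -> <groups><group>A</group></groups>
```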
Hi @janne-aho_4082,
This is most certainly related to heavy usage, even though it might not seem that way at first; the connectors are basically a load multiplier. Every request to ProGet is forwarded to each connector via the network, and when you have self-connectors, that's a double burden.
See How to Prevent Server Overload in ProGet to learn more.
Keep in mind that an npm restore will do thousands of simultaneous requests, often multiple for each package. This is to check the latest version, vulnerabilities, etc. So you end up with more network traffic than a single server can handle - more RAM/CPU will not help.
This is most commonly seen as SQL Server connection issues, since SQL Server also uses network traffic. The best solution is to use network load balancing and multiple nodes.
Otherwise, you have to reduce traffic. Splitting feeds may not help, because the client will then just hit all those feeds at the same time. The "connector metadata caching" can significantly reduce network traffic, but it comes at the cost of outdated packages. You may "see" a package on npmjs (or another feed), but the query is cached so it won't be available for minutes.
Since you're on Linux, I would just use nginx to throttle/rate-limit ProGet. The problem is peak traffic, so start with a limit of around 200 requests, then go up from there.
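As a rough sketch of what that might look like (zone name, rate, burst, and upstream address are all placeholders -- tune them to your setup and wherever ProGet actually listens):

```nginx
# Hypothetical rate-limit sketch; tune rate/burst for your peak traffic.
limit_req_zone $binary_remote_addr zone=proget_limit:10m rate=200r/s;

server {
    listen 80;

    location / {
        # Queue short bursts instead of rejecting them outright.
        limit_req zone=proget_limit burst=100 nodelay;
        proxy_pass http://127.0.0.1:8624;  # placeholder upstream for ProGet
    }
}
```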
Cheers,
Alana
Hi @udi-moshe_0021 ,
That's what we'd have to do... but it's not a trivial code change on our end. And then we'd have to test, document, and support it when it doesn't work as expected.
So it's probably best to just use a script to do the import.
Thanks,
Alana
hi @udi-moshe_0021,
I believe that rpm feeds will support package upload, so you should be able to already use that in the latest version of ProGet 2024.
However, I don't think we can add this feature for Debian. The reason is, when you upload a package to a Debian repository, you must specify a "component" it belongs to. This is not available/determinable from just the file name... it's server-side metadata, basically. So you'd have to use a script that pushes the packages.
Cheers,
Alana
Hi @layfield_8963 ,
I would check under Admin > Diagnostic Center for errors, as well as your browser console.
I would also use the pgutil assets upload command to work directly with the API and see if you can get any clues on what the underlying error is:
https://docs.inedo.com/docs/proget/reference-api/proget-api-assets/file-endpoints/proget-api-assets-files-upload
Most commonly, it's an antivirus/indexing tool that is locking/blocking the file, but the error message you see will help identify it further.
Cheers,
Alana
Hi @arose_5538 ,
Looks like this was indeed a regression in 2024.23 as a result of upgrading the JSON library we were using... I guess it's a lot more strict.
Of course, upack is also wrong, but for whatever reason it worked before. Anyway, it's an easy fix, and will be fixed in the next maintenance release (scheduled Feb 7) of ProGet via PG-2884.
In the meantime, you can just downgrade to 2024.22.
And I checked: pgutil 2.1.0 will be released soon :)
Cheers,
Alana
@steviecoaster great news, glad you got it all working
@kc_2466 there are some 500 errors, so please check Admin > Diagnostic Center to see what those are about. Those should each be logged.
Hi @scusson_9923 ,
The Invalid cast from 'System.String' to 'Inedo.Data.VarBinaryInput' error isn't related to the content of the file; it's just a problem with wiring up the API to the database. Basically a bug.
I don't know why it works on my machine, but not in your instance. It's one of those "deep in the code" things that we'd have to investigate.
Maybe try upgrading to the latest version of Otter? I suspect there was a library upgrade/fix that might make this work.
Thanks,
Alana
Hi @scusson_9923 ,
This code should work for a file on disk; it's the same as before, but reads the bytes from the file...
Invoke-WebRequest -Method Post -Uri "http://otter.localhost/api/json/Rafts_CreateOrUpdateRaftItem" -Body @{
API_Key = "abc123"
Raft_Id = 1
RaftItemType_Code = 4
RaftItem_Name = "mypath/myscript.ps1"
ModifiedOn_Date = Get-Date
ModifiedBy_User_Name = "API"
Content_Bytes = [System.Convert]::ToBase64String([IO.File]::ReadAllBytes("c:\myfile.txt"))
}
@steviecoaster great, hopefully we'll get something figured out :)
Hi @scusson_9923 ,
Is this the latest version of Otter? It "worked on my machine" without an issue, so I wonder if something has changed.
Are you able to upload the .yaml file the way you want in the UI? What type shows up when you do that?
The RaftItemType_Code for Text is 7.
Alana
@kc_2466 great news, thanks for letting us know.
If you don't see any violations recorded, then the banner should go away soon. It's cached.
hi @steviecoaster ,
Thanks for explaining; this is something we will consider on our roadmap planning, but it's currently a "free user" request which is difficult to prioritize with so many other requests from paid users.
HOWEVER, if we had a tech/marketing partnership, that would allow us to prioritize this differently. That's above my paygrade, but I can definitely make a strong case internally... is that something you would want to pursue? I suspect it'd end up with our CEOs chatting and figuring something out, heh.
In the meantime, the Native API will definitely work to automate everything you're trying to do. You can also just do some basic SQL commands to insert stuff in the database, which would be easier.
Alana
Hi @kc_2466
License violations are recorded from requests that are not local. You can clear recorded violations by clicking "change" on the license key and then save. No need to actually change it.
This is the logic used to determine if a request is local:
public bool IsLocal
{
    get
    {
        var connection = this.NativeRequest.HttpContext.Connection;
        if (connection.RemoteIpAddress != null)
        {
            if (connection.LocalIpAddress != null)
                return connection.RemoteIpAddress.Equals(connection.LocalIpAddress);
            else
                return IPAddress.IsLoopback(connection.RemoteIpAddress);
        }

        if (connection.RemoteIpAddress == null && connection.LocalIpAddress == null)
            return true;

        return false;
    }
}
So if you are continuing to see violations, it means you need to make sure that the local/inbound IPs are the same or the inbound request is loopback (127.0.0.1).
This may require some configuration in your nginx container.
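If it helps to test your setup, here's the same check sketched in Python -- a direct translation of the C# logic above, using the stdlib ipaddress module:

```python
import ipaddress
from typing import Optional

def is_local(remote_ip: Optional[str], local_ip: Optional[str]) -> bool:
    """Mirror of the IsLocal logic above: a request is local when the remote
    address equals the local address, or is loopback, or neither is known."""
    if remote_ip is not None:
        if local_ip is not None:
            return ipaddress.ip_address(remote_ip) == ipaddress.ip_address(local_ip)
        return ipaddress.ip_address(remote_ip).is_loopback
    if remote_ip is None and local_ip is None:
        return True
    return False

print(is_local("127.0.0.1", None))      # True: loopback
print(is_local("10.0.0.5", "10.0.0.9")) # False: different addresses
```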
Cheers,
Alana