@daniel-scati looks like there was a redirect problem, but this is the method to try:
https://docs.inedo.com/docs/proget-api-packages-query-latest
So basically this:
GET /api/packages/MyDebianFeed/latest?name=pacomalarmprocessor
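For example, with curl (the server URL and API key here are placeholders for your instance):

```shell
# Query the latest version of a package in a feed; adjust host/key to your instance
curl -H "X-ApiKey: $PROGET_API_KEY" \
  "https://proget.corp.local/api/packages/MyDebianFeed/latest?name=pacomalarmprocessor"
```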
ProGet dynamically generates these indexes based on an aggregation of locally stored packages and connector results on each request, so caching doesn't make a lot of sense.
npmjs.org, on the other hand, needs to only update indexes when a new version is uploaded, so the cache duration can be a long time.
Thanks,
Steve
Hi @pbinnell_2355 ,
It looks like you have Windows Integrated Authentication enabled. curl does not support this; with PowerShell you would need to add -UseDefaultCredentials
Thanks,
Steve
Hi @jw ,
I haven't investigated this yet, but I assume that the results are the same in the UI? That's all just pulling data from the database, so I would presume so.
Could you find the relevant parts of the analysis logs? That would make this much easier for us to debug.
Thanks,
Steve
ProGet does not set cache headers for npm requests, so this behavior is expected.
Thanks,
Steve
@jw thanks for clarifying! We'll get the error fixed, but these would not show up in the export, since they are not build packages.
Hi @jw,
[1] Based on the stack trace, I think the issue is that one of the SBOM documents you uploaded has a Component with a null/missing Purl field. Obviously this shouldn't error, but that's what the error must be, looking at the code. If you can confirm it, that'd be great.
[2] ProGet is considered the "source of truth", so a new SBOM document will be generated based on the build packages. That SBOM will then be augmented with some information from the original SBOM(s), such as component "Pedigree", "Description", etc.
[3] Thanks, we'll try to play with CSS to improve this down the line.
Thanks,
Steve
@artur-wisniowski_4029 thanks for the troubleshooting! We'll investigate/fix this via PG-2671; we're going to target this Friday's maintenance release
The issue sounds like it's related to LDAP configuration (i.e. slow queries to your LDAP/AD server), but it's hard to say. This wouldn't behave any differently in IIS vs. IWS.
The first thing I would try is disabling LDAP/AD, and Windows Integrated Authentication (if you have it enabled). If the server is still slow, then I would try http:// instead of https://.
Once you've identified where the slowness is coming from, we can address it. The most common issue with LDAP is recursive/nested group searches - especially when there are like thousands of groups and everyone's a member of something.
I would "play" with your LDAP settings and try to isolate why it's so slow.
Thanks,
Steve
Ultimately this is going to involve training for your developers. Just like instituting a code review process will be new and uncomfortable at first, a package review process will be the same. Developers will not like it and they will complain.
However, 99% of the time, developers will be fine using the approved feed. 1% of the time (when they want to test a new package or upgrade), they will use the unapproved feed. They'll just need to learn how to switch package sources (it's a drop-down in Visual Studio) and then learn not to commit these package references.
My advice is to make it incumbent upon developers to not commit code/configuration that depends on unapproved packages. If they do, then it will "break the build" because the packages aren't available. This is an expected behavior - it would be like if a developer decided to upgrade to .NET9-beta.
"Don't break the build" is a common mantra on development teams, and it means developers can't commit code changes that will do that. Just extend that to using unapproved packages.
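For the source-switching piece, the command-line equivalent of the Visual Studio drop-down would be registering both feeds as NuGet sources (the feed URLs here are hypothetical):

```shell
# Register both feeds; developers then choose the source per install/restore
dotnet nuget add source "https://proget.corp.local/nuget/approved/v3/index.json" --name approved
dotnet nuget add source "https://proget.corp.local/nuget/unapproved/v3/index.json" --name unapproved
```

A package can then be pulled from the unapproved feed explicitly, e.g. `dotnet add package MyPackage --source unapproved`.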
Hi @jw ,
We added a compliance property via PG-2658 in the next maintenance release.
It basically shows what's in the database (which is also what the page in the UI does):
writer.WritePropertyName("compliance");
writer.WriteStartObject();
writer.WriteString("result", Domains.PackageAnalysisResults.GetName(package.Result_Code));
if (package.Detail_Text is not null)
    writer.WriteString("detail", package.Detail_Text);
if (package.Analysis_Date.HasValue)
    writer.WriteString("date", package.Analysis_Date.Value);
writer.WriteEndObject();
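For reference, the JSON fragment that code produces would look something like this (the property names come from the code above; the particular values are hypothetical):

```python
import json

# Hypothetical response fragment -- property names match the writer code above,
# but these particular values are invented for illustration.
doc = json.loads("""
{
  "compliance": {
    "result": "Compliant",
    "detail": "no rules matched",
    "date": "2024-05-01T12:00:00Z"
  }
}
""")

compliance = doc["compliance"]
print(compliance["result"])
```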
I think you can rely on result=Inconclusive meaning the package isn't in ProGet. That's all we use the status for now, but in the future it might be used for something else. A result=Error means that our code crashed, and you shouldn't ever see that.
We'll definitely consider doing something other than a single result string down the line, but for now this was the easiest :)
Thanks,
Steve
Hi @pbinnell_2355 ,
The normal workflow for a two-feed package approval is to generally have developers use the approved feed but allow them to use the unapproved feed when they need to use a new package or version.
However, this shouldn't be their default development style. If they want to use packages that aren't approved, they can request approval for the package(s) they want to use.
This obviously slows down development, but so does code review. And it's a tradeoff in general.
You can use bulk promotion if you'd like. Go to the Packages page, select "Cached" from the package type filter, then select the packages you wish to promote.
Hope that helps,
Steve
@rick-edwards_9161 that is correct, these will only be developed for ProGet 2024
Hi @pbinnell_2355 ,
It sounds like you've built a kind of package approval process?
https://blog.inedo.com/nuget/package-approval-workflow/
If that's the case, you'll need to promote the packages developers need to feed "B" or ask the developers to not use unapproved packages.
Thanks,
Steve
Hi @rick-edwards_9161 ,
There is a corresponding API, but we haven't documented it yet.
For now, you have to "reverse engineer" the code (ProGetClient.cs):
public async IAsyncEnumerable<VulnerabilityInfo> AuditPackagesForVulnerabilitiesAsync(IReadOnlyList<PackageVersionIdentifier> packages, [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    ArgumentNullException.ThrowIfNull(packages);

    using var response = await this.http.PostAsJsonAsync("api/sca/audit-package-vulns", packages, ProGetApiJsonContext.Default.IReadOnlyListPackageVersionIdentifier, cancellationToken).ConfigureAwait(false);
    await CheckResponseAsync(response, cancellationToken).ConfigureAwait(false);

    using var stream = await response.Content.ReadAsStreamAsync(cancellationToken).ConfigureAwait(false);
    await foreach (var v in JsonSerializer.DeserializeAsyncEnumerable(stream, ProGetApiJsonContext.Default.VulnerabilityInfo, cancellationToken).ConfigureAwait(false))
        yield return v!;
}
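To sketch what a raw call looks like: the client POSTs a JSON array of package identifiers to api/sca/audit-package-vulns and streams back a JSON array of vulnerabilities. The field names below (name, version, type) are my assumption -- the actual PackageVersionIdentifier shape isn't shown in this snippet:

```python
import json

# Hypothetical payload -- the real PackageVersionIdentifier property names may differ.
packages = [
    {"name": "Newtonsoft.Json", "version": "12.0.3", "type": "nuget"},
    {"name": "lodash", "version": "4.17.21", "type": "npm"},
]

# This JSON body would be POSTed to <your-proget>/api/sca/audit-package-vulns;
# the response is a streamed JSON array of vulnerability objects.
body = json.dumps(packages)
print(body)
```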
We do plan to document all this in the coming weeks/months.
Thanks,
Steve
Hi @dan-brown_0128 ,
Sorry that we missed your reply. It must have been closed by mistake on our dashboard or something.
You're correct - this does require a database cleanup. As it so happens, we do have a "duplicates clean-up" script available, but it's intended for ProGet 2024. It's extraordinarily complicated, as you can see:
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb
If you're able to send us a copy of your database (we can share a secure link for you in EDO-10419, just let us know), then we can review and think of the best plan to fix it. That will be either running the script in ProGet 2023 or upgrading then running the script.
Let us know your thoughts
Thank you,
Steve
@jw we'll definitely keep this in mind; it doesn't look trivial based on our usage of the marked library
Personally, I always try to keep the Diagnostic Center clean and empty, so when new issues show up I can easily spot and address them. Sifting through messages that are basically spam, without being able to filter or ignore them costs me more of my time that I would like to invest for monitoring.
We do not recommend using the Diagnostic Center for proactive monitoring. It's only intended as a tool for troubleshooting things like connector or 500 errors that you / end-users encounter.
There are a lot of non-problem errors and warnings logged that aren't worth the time to even look at.
Thanks for clarifying @jw !
I understand how this can be annoying, but I don't think we want to change the 404 error logging for this issue in particular. Open to ideas if you have them.
As an FYI..
el.innerHTML = marked(el.textContent);
If there's a way to suppress relative URLs in the marked library somehow, we could probably add it if you know how!
@jw thanks for the bug report! This will be fixed via PG-2650 in the next maintenance release. It should say "License Id" instead - and it will then redirect to the Edit page after, which is almost as convenient as using the same dialog but won't require rewriting the page ;)
Hi @jw,
This is expected behavior, as 404 errors are logged for non-API requests, and that's a relative URL. Is README.assets a kind of documented standard? It might just be intended for GitHub?
Thanks,
Steve
Hey @rick-edwards_9161 ,
Yes - this would be easiest to do with the pgutil vulns audit command, which we're still working on documenting.
Description:
  List vulnerabilities associated with a package or project file

Usage:
  pgutil vulns audit [options]

Options:
  --input=<input>        Project to audit for vulnerable packages
  --package=<package>    Name of package to audit for vulnerabilities
  --type=<type>          Type of package to audit for vulnerabilities
                         Valid values: apk, deb, maven, nuget, conda, cran, helm, npm, pypi, rpm, gem
  --version=<version>    Version of package to audit for vulnerabilities
  -?, --help             Show help and usage information
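For example, auditing a single package might look like this (the package name/version are just placeholders):

```shell
pgutil vulns audit --package=Newtonsoft.Json --version=12.0.3 --type=nuget
```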
See Getting started with pgutil to learn more.
Hi @rick-kramer_9238 ,
I wouldn't recommend upgrading to ProGet 2024 solely for that reason, as it's a major release and if there were issues, it'd be easier to isolate Integrated Web Server vs Regression.
Here are the upgrade notes:
https://docs.inedo.com/docs/proget-upgrade-2024
That said, if you need to rollback to ProGet 2023, you can do so without restoring the database by simply using the Inedo Hub. While there are database schema changes, they are all backwards-compatible with ProGet 2023, which means you can safely rollback your ProGet installation if there's a showstopper bug, and then upgrade later.
However, you should backup your database as an extra precaution anyway.
Thank you,
Steve
Hi @rmusick_7875,
I'm afraid API endpoints are not customizable, and we do not support "reverse proxy" or otherwise rewriting the URLs. Hopefully this will be a good chance to make your endpoint URL easier to configure/change - this will be important, as you may wish to move to multiple feeds, etc.
Good luck,
Steve
Hi @greg-swiderski_0221 ,
It looks like there is an "Object reference not set to an instance of an object" error that's occurring while trying to connect:
2024-04-30 09:52:41,341 8256 [WARN ] - Unable to connect to source 'http://localhost:8624/nuget/approved-choco/':
Object reference not set to an instance of an object.
That error is presumably coming from the Chocolatey client (choco), and unfortunately there's no way to know what it means. Most likely, it's an "error reporting an error" message, but it's hard to say.
You could use some HTTP monitoring software (Fiddler Classic, Wireshark) and see if ProGet is returning an error of some kind... but even if so, choco should report that error.
I would check with the Chocolatey team on this one.
Thanks,
Steve
@philippe-camelio_3885 I just cloned the issue for Otter (OT-509), so should be an easy fix. Maybe this will help w/ data sync issues as well!
Hi @jw ,
Thanks - we modified the script and tested it against your database backup:
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb
That said, your duplicate data should not cause any problems. The only feed packages this seems to modify in your database are Microsoft.NetCore.App.Runtime.win-x86-8.0.0 and Microsoft.NetCore.App.Runtime.win-x64-8.0.0. Both are cached.
However, given that the script had a bug, we probably won't put it in 2024.2; we'd much rather share the script on a case-by-case basis to fix problems until we're confident they will be resolved, then decide how to have users repair the data.
Thank you,
Steve
@philippe-camelio_3885 this is related to SQL Server timeouts/performance issues
Unfortunately we don't have much to go on beyond the information provided, and about the only suggestion I would have is to increase hardware/CPU on the SQL Server, perform database maintenance (see advanced maintenance), and finally use SQL Server performance monitoring tools to find where the problems are.
There may be some queries that can be optimized or missing indexes. If you can find any of those, please let us know and we can explore adding those optimizations to the product.
@jw the script should also work in ProGet 2023
That error implies that data in the FeedPackageVersion table didn't get cleaned up, as it should have been in this portion:
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb#file-gistfile1-txt-L449
Unfortunately this is difficult to debug on its own, and it will be impossible to debug w/o the database itself, but if you're comfortable with SQL feel free to modify or tweak it. It's not an easy script :(
We haven't run this against all customer databases yet, but it's on our list this week. If you haven't sent us your database already, please do :)
Hi @v-makkenze_6348 ,
Just an FYI, we do have a duplicates clean-up script available, and ran it against your database with no issues.
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb
We haven't yet decided how to roll it out, but if you do run into more package analyzer issues like this (particularly on the SBOM side), it might help.
Thank you,
Steve
Looks like this is a bug when servers have list-type configuration variables; we'll get this fixed in an upcoming maintenance release via BM-3944.
Otherwise I can't think of a work-around that would be any simpler than editing the impacted servers.
Thank you,
Steve
A Full Server Check will occur once per hour by default - does that also crash? You can also trigger one by going to Admin > Service.
Also, can you give more detail about what you mean by crashing? There is no "app pool" on Linux/Docker (it doesn't use IIS), and the web application isn't responsible for running the server check (that's the Service application). So it's unclear what the crash would be.
Thank you,
Steve
Hi @jw ,
So far as we can tell, these sorts of errors only occur when there is a 4-part NuGet version ending in 0 that was added in a previous version. For example, if you added 4.2.1.0 in a previous version, it could yield an analysis error when trying to view the package as 4.2.1 (i.e. how NuGet requests it).
The duplicate package names were a problem in ProGet 2023 as well, but we haven't seen any crashes due to that yet - just weird results in the SBOM analysis.
So long story short, I don't think you need to run the script.
However, if you upgrade and run into these issues, then you can run it:
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb
If that still doesn't work, then rollback to ProGet 2023 and restore the database. And hopefully send it to us again so we can investigate the issue :)
Thank you,
Steve
Hi @jw ,
We have a data-cleanup script but did not ship it with ProGet 2024.1; we are considering it for ProGet 2024.2.
The above error occurred because of a 4-part version that ended in .0, but if you just delete that package from the feed and add it back again, it should work. Can you try that?
We can provide you the data clean-up script if you want to try it. We ran it against several customer databases and it seems to work okay, cleaning up a lot of "bad" versions and casing.
Thank you,
Steve
Hi @dan-brown_0128 ,
Oh, that's really weird --- I must have misunderstood from the get-go. I'm not sure how that scenario is possible; at least in ProGet 2023 and 2024, I wasn't able to reproduce this under any circumstances.
Perhaps it was a bug in an import from a long while ago. But re-reading your first post, it sounds like you've worked around it?
Otherwise I think it would take a kind of data-cleanup script to handle.
Thanks,
Steve
hi @dan-brown_0128,
ProGet simply does not support case-sensitive npm package names. This is a 10+ year old design decision (back from ProGet v3), which we made because npm packages were always supposed to be lower-cased: https://docs.npmjs.com/creating-a-package-json-file#required-name-and-version-fields
JSONPath is an invalid package that entered the public registry due to an npmjs.org bug. It's an old package -- the latest version, 0.1.2, is 8+ years old -- and the package page says that it's "moved to jsonpath-plus to avoid npm problems in dealing with upper-case packages."
ProGet permits these packages, but there are just going to be quirks if you have them in your feed. For example, if you have both jsonpath-1.1.1 and JSONPath-0.1.2 in your feed, then they will generally show up as jsonpath, but in some API queries you may see JSONPath.
We understand this is different than how the npmjs.org registry behaves, but as I mentioned, this was a "day 1" choice that we can't change today. No one's mentioned this as an issue until now.
If npm audit is crashing because JSONPath is sometimes returned in results, that sounds like a bug in npm audit to me? We obviously can't add support for case-sensitive npm packages nor change how ProGet returns data in npm API calls just to work around an npm audit bug like this.
The easiest solution is to stop using JSONPath and remove it from your feed, which it sounds like you already have. It's a really old package, and there are only a handful like this on the public registry.
Best,
Steve
@dan-brown_0128 I understand
So you'd either have to upgrade (where we fixed the code) or reassess it to Low so npm audit won't crash. I suppose you could also patch npm audit so it doesn't crash.
Hi @dan-brown_0128 ,
Thanks for the detailed analysis; we'll get this fixed via PG-2636 in 2024.1 (due later today).
You should be able to assess it as "Low" to work-around it in the meantime. FYI, here is the code that is used to map a vulnerability to an npm vulnerability:
// assessed
if (vuln.Severity_Code != null)
{
    return vuln.Severity_Code switch
    {
        Domains.AssessmentSeverityCodes.Error => npmAuditSeverity.critical,
        Domains.AssessmentSeverityCodes.Warning => npmAuditSeverity.high,
        Domains.AssessmentSeverityCodes.NotApplicable => npmAuditSeverity.info,
        Domains.AssessmentSeverityCodes.Custom => npmAuditSeverity.info,
        _ => npmAuditSeverity.info
    };
}
// unassessed and has a Severity Score
else if (!string.IsNullOrWhiteSpace(vuln.ScoreSeverity_Text))
{
    return vuln.ScoreSeverity_Text switch
    {
        nameof(CVSSRange.Low) => npmAuditSeverity.low,
        nameof(CVSSRange.Medium) => npmAuditSeverity.moderate,
        nameof(CVSSRange.High) => npmAuditSeverity.high,
        nameof(CVSSRange.Critical) => npmAuditSeverity.critical,
        _ => npmAuditSeverity.info,
    };
}
return npmAuditSeverity.none;
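In table form, the mapping above works out to something like this (a Python sketch of the C# logic, for illustration only):

```python
# Assessed packages map from the assessment severity code; unassessed
# packages map from the CVSS score range; otherwise the severity is "none".
ASSESSED = {
    "Error": "critical",
    "Warning": "high",
    "NotApplicable": "info",
    "Custom": "info",
}

UNASSESSED = {
    "Low": "low",
    "Medium": "moderate",
    "High": "high",
    "Critical": "critical",
}

def npm_audit_severity(severity_code=None, score_severity=None):
    """Mirror of the C# logic: assessed severity wins, then CVSS range, else none."""
    if severity_code is not None:
        return ASSESSED.get(severity_code, "info")
    if score_severity:
        return UNASSESSED.get(score_severity, "info")
    return "none"
```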
Thank you,
Steve
Hi @dan-brown_0128,
I understand that npm is currently case insensitive, but there was a brief period when it wasn't. This led to several duplicate packages being published.
I don't believe this is possible anymore? Either way, ProGet is case insensitive for npm packages, which means that JSONPath and jsonpath are considered the same package. If JSONPath came first, then that's what the package name is, I guess? It doesn't seem to cause any issues, except with npm audit?
This is a rare case, so I would just not worry about it. We don't recommend npm audit; you should use pgutil audit anyway - it's more comprehensive, covers licenses, all that.
Thanks for the inquiry; I've updated our Other feed types docs with a link to this thread.
At first glance, it looks like a CocoaPod itself is just a basic text file that acts as a "pointer" to a GitHub repository. The pod client uses the git client to "download" files and tags for versioning. In other words, there is no package file.
A CocoaPod repository is a Git repository, and the pod client seems to just use git client to commit/push files to the repo. In other words, there is no API for ProGet to implement.
Here is what a CocoaPods repo looks like (note how it's just a bunch of files in a Git repo):
https://github.com/CocoaPods/Specs/tree/master/Specs
To make a "private repo", you basically just create a Git repository:
https://guides.cocoapods.org/making/private-cocoapods
This means that ultimately, to implement a CocoaPods repository, your only option is to create a private Git repository? That's my initial assessment at least.
And of course, we have no plans to add Git source code hosting to ProGet :)
From here, I recommend asking the developers to research this a bit more, and maybe contribute their thoughts? It just doesn't seem like iOS devs use packages - it's all very open-source and just GitHub repositories.
Steve
@daniel-scati we'll also get this fixed via PG-2635 in an upcoming maintenance release (hopefully 2024.2), which is targeted for next Friday.
Hi @daniel-scati , thanks for the analysis!
We'll get this fixed via PG-2635 in an upcoming maintenance release (hopefully 2024.2), which is targeted for next Friday.
ProGet works as a "Private Docker Registry" which seems to be different than a "Docker Hub Mirror".
Last time we researched Docker Hub Mirrors, they seemed to be primarily intended to provide images to certain geographic regions (like China) where Docker Hub content would otherwise be restricted. They could also be used to set up a "local mirror" of Docker Hub, but in all cases, it seemed to basically just redirect traffic from the default docker.io URL (or whatever) - so it wasn't intended to be used as a "Private Docker Registry".
In any case, Mirrors don't seem to be a good fit for ProGet; instead, if you wish to use nginx, we would advise "privatizing" and "locking" images using semantic tags, so that you can be assured that corp.local/images/nginx is a tested/safe image with tags you control.
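A sketch of that "privatize and lock" flow, using hypothetical registry/feed names:

```shell
# Pull a known-good upstream image, re-tag it under your private registry,
# and push it so builds reference only the tag you control.
docker pull nginx:1.25.4
docker tag nginx:1.25.4 corp.local/images/nginx:1.25.4
docker push corp.local/images/nginx:1.25.4
```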
Best,
Steve
@jw excellent, thanks for the finds! I also added this to our "final touches" for ProGet 2024 to address all these-- all pretty easy fixes I think
Hi @jw ,
I'm afraid this is a bit too granular for us now, but it's something we can consider re-evaluating down the line, especially as we will likely want to add specialized permissions for projects, policies, etc. We expect that will happen later in the year, after ProGet 2024's new features get more adoption. We'll see if anyone else requests package-level permissions, etc.
As for the Advanced pages, I put a note in ProGet 2024's final touches to address that. Honestly I thought those were only in Debug builds, but clearly not... thanks for reporting!
Cheers,
Steve
Hi @proget-markus-koban_7308 ,
It's highly unlikely we would consider implementing anything Keycloak-specific, but if it's something that SAML supports - and something done by the major providers like Azure, Ping ID, Okta, etc. - we'd definitely consider it. We just don't know much about it.
We haven't done any further research since that post and we likely won't do any further research on our own, since only one user asked in a few years (and they ended up not needing it anyway).
If this is something you'd be interested in exploring, it'd be best to collaborate and help us bridge the gap between SAML and ProGet.
Here's some relevant questions/discussion from that topic:
I'm not so familiar with SAML behind the scenes... do you know how "SAML group claims" work? For example...
- Is it something that comes back in the XML response, or does it require a separate request?
- What do the "group claims" look like? Like a list of human-readable group names?
And then most importantly... what should ProGet do with such claims upon receipt? Treat the user as if they're in the group (kind of like LDAP groups), and allow permissions to be assigned against that group (like LDAP, but without searching)?
The hardest part is going to be figuring out how to set this up in a SAML provider, document it, etc.
Thanks,
Steve
No idea I'm afraid; there's clearly some issue with unexpected data coming from your ProGet server that's not being validated.
Can you share the results of /health (the first API call) and then /api/management/feeds/list (using the API token you specified)?
With that we can hopefully spot something.
Thanks,
Steve
Hi @bbalavikram ,
The framework option may not do what you think; first and foremost, 4.5 is not a valid option for framework. You need to use a "target framework moniker" that's defined here:
https://learn.microsoft.com/en-us/dotnet/standard/frameworks
But keep in mind that the framework moniker must also be in your project file. The framework argument for dotnet simply selects which of the frameworks in your project file to build. It's really only useful for multi-targeted builds, which you probably don't have.
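To illustrate: if the project file declares multiple target frameworks, the -f option just picks one of them (the project file name and monikers here are hypothetical):

```shell
# The csproj would contain e.g.: <TargetFrameworks>net48;net8.0</TargetFrameworks>
dotnet build MyProject.csproj -f net48
```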
It's possible that dotnet simply will not work with your project. This is unfortunately the case with many old projects. You can continue to "play" with your csproj files to try to get it to work (note: you can run the same dotnet commands on your workstation).
If you can't get it to work, then you'll need to use MSBuild::Build-Project or DevEnv::Build. We do not have script templates for these, but you can convert your script template to OtterScript and then try modifying the script that way.
Here is some information on build scripts:
https://docs.inedo.com/docs/buildmaster-platforms-dotnet#creating-build-scripts
Best,
Steve
Hi @jw ,
We'll add this as an "if time" item on our ProGet 2024 roadmap... and hopefully it's as simple as just setting that flag. We'll update as we get closer to the final release.
Cheers,
Steve
Hi @bbalavikram ,
This error message is coming from the dotnet command-line tool, and I think it has something to do with an old/legacy project file format. If you were to run the same command on your workstation, you would get the same error.
From here, I would compare/contrast the .csproj files in the broken projects, and see if you can figure out what's the difference.
Note that if you search "root element is visualstudioproject expected project", you'll see a lot of people have a similar error, but their solution is also to do similar things - i.e. edit the file and fix the format.
Once you fix the project file, if you check the code back into Git, it should work the next time.
Best,
Steve