Hi @arkady-karasin_6391 ,
Under Admin > Diagnostic Center, can you locate the error message and share the full stack trace?
Thanks,
Steve
@parthu-reddy but let us know if you'd like to use a patch/prerelease. We can ship it shortly after 2025.6 is released, and it'd likely be the only fix in that version.
Hi @parthu-reddy,
That's generally our plan, and we'll fix it via PG-2702. It's currently scheduled for 2025.7, due on June 14.
Thanks,
Steve
Hi @parthu-reddy ,
It looks like the version field on Maven artifacts is limited to 50 characters, and 2.14.7-sbt-a1b0ffbb8f64bb820f4f84a0c07a0c0964507493 is 51 characters. This field has been limited since "day one" (many years now), so it's unrelated to the upgrade.
The only workaround is to use a smaller version number (or download the artifact and re-add it without that long hash or something).
Unfortunately this isn't an easy change, as that is used in a primary key. We will have to research and let you know.
Thanks,
Steve
@chris-blyth_5143 this means that the database and application code are out of sync somehow. That's unusual, and is often the result of restoring the database but not the application code, doing a manual installation, etc.
I would just uninstall everything, make sure all the components are gone (except the config file, of course), then reinstall and point to the same database. The database points to the file share, so it should work upon installation again.
In that case please just upgrade and it should be resolved :)
Hi @sebastian ,
This was fixed in ProGet 2024 via PG-2630 (FIX: Dual License Packages should show as compliant if one or more licenses are compliant). It was a bug in the implementation of policies, so it wouldn't work in ProGet 2023 either.
So this should get fixed once you upgrade :)
Thanks,
Steve
Hi @Darren-Gipson_6156, sounds like you've found the right tools to configure the integration.
Here's some more information about the Advanced settings:
https://docs.inedo.com/docs/various-ldap-v4-advanced
The DOMAIN\username situation is a little complex. The DOMAIN part is considered a NetBIOS alias, and needs to be mapped to a domain to search (like domain.com). An LDAP query is then constructed based on that. So in other words, you can't search directly for DOMAIN\username in a search like that.
Try adding a NetBIOS alias mapping in the advanced settings, like DOMAIN=domain.com; that might allow you to log in.
Hi @jw ,
FYI - We just wanted to clarify what "inconclusive" meant - this was a "late" change on our end, and we realized the documentation wasn't very clear. Here is how we describe it now:
Inconclusive Analysis
A build package (and thus a build as a whole) can have an "inconclusive" compliance status. This will occur when two conditions are met:
- A rule would cause the build package to be Noncompliant, such as Undetected Licenses = Noncompliant or Deprecated = Noncompliant
- The package is not cached or otherwise pulled to ProGet, which means ProGet doesn't have enough information about the package to perform an analysis
You can resolve this by pulling or downloading (i.e. caching) the package in a feed in ProGet, or by not defining rules that require server-based metadata. For example, vulnerability-based rules can be checked without the package, but deprecation or license detection cannot.
The analysis message is incorrect, however; it should be "Package is Warn because of Package Status is unknown, No license detected."
Thanks,
Steve
Hi @sebastian ,
Thanks for all of the details, this is indeed a regression. We'll get this fixed via PG-2679 in the upcoming maintenance release, ideally later this week (Friday).
Thanks,
Steve
@daniel-scati looks like there was a redirect problem, but this is the method to try:
https://docs.inedo.com/docs/proget-api-packages-query-latest
So basically this:
GET /api/packages/MyDebianFeed/latest?name=pacomalarmprocessor
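For example, with curl (server and API key here are hypothetical):
curl -H "X-ApiKey: <your-key>" "https://proget.example.com/api/packages/MyDebianFeed/latest?name=pacomalarmprocessor"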
ProGet dynamically generates these indexes based on an aggregation of locally stored packages and connector results on each request, so caching doesn't make a lot of sense.
npmjs.org, on the other hand, needs to only update indexes when a new version is uploaded, so the cache duration can be a long time.
Thanks,
Steve
Hi @pbinnell_2355 ,
It looks like you have Windows Integrated Authentication enabled. curl does not support this; with PowerShell, you would need to add -UseDefaultCredentials.
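For example (the feed URL here is hypothetical):
Invoke-RestMethod "https://proget.example.com/nuget/MyFeed/v3/index.json" -UseDefaultCredentials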
Thanks,
Steve
Hi @jw ,
I haven't investigated this yet, but I assume that the results are the same in the UI? That's all just pulling data from the database, so I would presume so.
Could you find the relevant parts of the analysis logs? That makes debugging much easier for us.
Thanks,
Steve
ProGet does not set cache headers for npm requests, so this behavior is expected.
Thanks,
Steve
@jw thanks for clarifying! We'll get the error fixed, but these would not show up in the export, since they are not build packages in that case.
Hi @jw,
[1] Based on the stack trace, I think the issue is that one of the SBOM documents you uploaded has a Component with a null/missing Purl field. Obviously this shouldn't error, but that's what it must be, looking at the code. If you can confirm it, that'd be great.
[2] ProGet is considered the "source of truth", so a new SBOM document will be generated based on the build packages. That SBOM will then be augmented with some information from the original SBOM(s), such as component "Pedigree", "Description", etc.
[3] Thanks, we'll try to play with CSS to improve this down the line.
Thanks,
Steve
@artur-wisniowski_4029 thanks for the troubleshooting! We'll investigate/fix this via PG-2671; we're going to target this Friday's maintenance release
The issue sounds like it's related to LDAP configuration (i.e. slow queries to your LDAP/AD server), but it's hard to say. This wouldn't behave any differently in IIS vs. IWS.
The first thing I would try is disabling LDAP/AD and Windows Integrated Authentication (if you have it enabled). If the server is still slow, then I would try http:// instead of https://.
Once you've identified where the slowness is coming from, we can address it. The most common issue with LDAP is recursive/nested group searches - especially when there are like thousands of groups and everyone's a member of something.
I would "play" with your LDAP settings and try to isolate why it's so slow.
Thanks,
Steve
Ultimately this is going to involve training for your developers. Just like instituting a code review process will be new and uncomfortable at first, a package review process will be the same. Developers will not like it and they will complain.
However, 99% of the time, developers will be fine using the approved feed. 1% of the time (when they want to test a new package or upgrade), they will use the unapproved feed. They'll just need to learn how to switch package sources (it's a drop-down in Visual Studio) and then learn not to commit these package references.
My advice is to make it incumbent upon developers to not commit code/configuration that depends on unapproved packages. If they do, it will "break the build" because the packages aren't available. This is expected behavior - it would be like a developer deciding to upgrade to .NET9-beta.
"Don't break the build" is a common mantra on development teams, and it means developers can't commit code changes that will do that. Just extend that to using unapproved packages.
Hi @jw ,
We added a compliance property via PG-2658; it will be in the next maintenance release.
It basically shows what's in the database (which is also what the page in the UI does):
writer.WritePropertyName("compliance");
writer.WriteStartObject();

// result is the stored analysis result (e.g. Compliant, Noncompliant, Inconclusive)
writer.WriteString("result", Domains.PackageAnalysisResults.GetName(package.Result_Code));

// detail and date are only written when present in the database
if (package.Detail_Text is not null)
    writer.WriteString("detail", package.Detail_Text);
if (package.Analysis_Date.HasValue)
    writer.WriteString("date", package.Analysis_Date.Value);

writer.WriteEndObject();
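For reference, that emits something like this (the values below are illustrative):

"compliance": {
    "result": "Inconclusive",
    "detail": "Package is not cached or pulled to ProGet",
    "date": "2024-05-01T14:30:00Z"
}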
I think you can rely on result=Inconclusive meaning the package isn't in ProGet. That's all we use the status for now, but in the future it might be used for something else. A result=Error means that our code crashed, and you shouldn't ever see that.
We'll definitely consider doing something other than a single result string down the line, but for now this was the easiest :)
Thanks,
Steve
Hi @pbinnell_2355 ,
The normal workflow for a two-feed package approval is to generally have developers use the approved feed but allow them to use the unapproved feed when they need to use a new package or version.
However, this shouldn't be their default development style. If they want to use packages that aren't approved, they can request approval for the package(s) they want to use.
This obviously slows down development, but so does code review; it's a tradeoff in general.
You can use bulk promotion if you'd like. Go to the Packages page, select "Cached" from the package type filter, then select the packages you wish to promote.
Hope that helps,
Steve
@rick-edwards_9161 that is correct, these will only be developed for ProGet 2024
Hi @pbinnell_2355 ,
It sounds like you've built a kind of package approval process?
https://blog.inedo.com/nuget/package-approval-workflow/
If that's the case, you'll need to promote the packages developers need to feed "B" or ask the developers to not use unapproved packages.
Thanks,
Steve
Hi @rick-edwards_9161 ,
There is a corresponding API, but we haven't documented it yet.
For now, you have to "reverse engineer" the code (ProGetClient.cs):
public async IAsyncEnumerable<VulnerabilityInfo> AuditPackagesForVulnerabilitiesAsync(IReadOnlyList<PackageVersionIdentifier> packages, [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    ArgumentNullException.ThrowIfNull(packages);

    using var response = await this.http.PostAsJsonAsync("api/sca/audit-package-vulns", packages, ProGetApiJsonContext.Default.IReadOnlyListPackageVersionIdentifier, cancellationToken).ConfigureAwait(false);
    await CheckResponseAsync(response, cancellationToken).ConfigureAwait(false);

    using var stream = await response.Content.ReadAsStreamAsync(cancellationToken).ConfigureAwait(false);
    await foreach (var v in JsonSerializer.DeserializeAsyncEnumerable(stream, ProGetApiJsonContext.Default.VulnerabilityInfo, cancellationToken).ConfigureAwait(false))
        yield return v!;
}
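A rough usage sketch, assuming client is a constructed ProGetClient instance (the PackageVersionIdentifier constructor shown is illustrative; check the actual type in the client library):

var packages = new List<PackageVersionIdentifier>
{
    new("nuget", "Newtonsoft.Json", "12.0.1") // illustrative; actual shape may differ
};

await foreach (var vuln in client.AuditPackagesForVulnerabilitiesAsync(packages))
    Console.WriteLine(vuln); // each VulnerabilityInfo as it streams back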
We do plan to document all this in the coming weeks/months.
Thanks,
Steve
Hi @dan-brown_0128 ,
Sorry that we missed your reply - it must have been closed by mistake on our dashboard or something.
You're correct - this does require a database cleanup. As it so happens, we do have a "duplicates clean-up" script available, but it's intended for ProGet 2024. It's extraordinarily complicated, as you can see:
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb
If you're able to send us a copy of your database (we can share a secure link for you in EDO-10419, just let us know), then we can review and think of the best plan to fix it. That will be either running the script in ProGet 2023 or upgrading then running the script.
Let us know your thoughts
Thank you,
Steve
@jw we'll definitely keep this in mind; it doesn't look trivial based on our usage of the marked library.
Personally, I always try to keep the Diagnostic Center clean and empty, so when new issues show up I can easily spot and address them. Sifting through messages that are basically spam, without being able to filter or ignore them, costs me more time than I would like to invest in monitoring.
We do not recommend using the Diagnostic Center for proactive monitoring. It's only intended as a tool for troubleshooting things like connector or 500 errors that you / end-users encounter.
There are a lot of non-problem errors and warnings logged that aren't worth the time to even look at.
Thanks for clarifying @jw !
I understand how this can be annoying, but I don't think we want to change the 404-not-logged behavior for this issue in particular. Open to ideas if you have them.
As an FYI:
el.innerHTML = marked(el.textContent);
If there's a way to suppress relative URLs in the marked library somehow, we could probably add it if you know how!
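One possibility might be a renderer override; here's a rough sketch, assuming marked v4-style renderer overrides (where returning false falls back to the default rendering):

// sketch only - assumes marked v4-style renderer overrides via marked.use
marked.use({
    renderer: {
        link(href, title, text) {
            // drop relative links and render just the link text
            if (href && !/^https?:\/\//i.test(href))
                return text;
            return false; // fall back to default link rendering
        }
    }
});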
@jw thanks for the bug report! This will be fixed via PG-2650 in the next maintenance release. It should say "License Id" instead - and it will then redirect to the Edit page after, which is almost as convenient as using the same dialog but won't require rewriting the page ;)
Hi @jw,
This is expected behavior, as 404 errors are logged for non-API requests, and that's a relative URL. Is README.assets a kind of documented standard? It might just be intended for GitHub?
Thanks,
Steve
Hey @rick-edwards_9161 ,
Yes - this would be easiest to do with the pgutil vulns audit command, which we're still working on documenting.
Description:
List vulnerabilities associated with a package or project file
Usage:
pgutil vulns audit [options]
Options:
--input=<input> Project to audit for vulnerable packages
--package=<package> Name of package to audit for vulnerabilities
--type=<type> Type of package to audit for vulnerabilities
Valid values: apk, deb, maven, nuget, conda, cran, helm, npm, pypi, rpm, gem
--version=<version> Version of package to audit for vulnerabilities
-?, --help Show help and usage information
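For example, to audit a single package (the name/version here are just illustrative):
pgutil vulns audit --type=nuget --package=Newtonsoft.Json --version=12.0.1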
See Getting started with pgutil to learn more.
Hi @rick-kramer_9238 ,
I wouldn't recommend upgrading to ProGet 2024 solely for that reason; it's a major release, and if there were issues, it'd be easier to isolate whether they stem from the Integrated Web Server or a regression.
Here are the upgrade notes:
https://docs.inedo.com/docs/proget-upgrade-2024
That said, if you need to rollback to ProGet 2023, you can do so without restoring the database by simply using the Inedo Hub. While there are database schema changes, they are all backwards-compatible with ProGet 2023, which means you can safely rollback your ProGet installation if there's a showstopper bug, and then upgrade later.
However, you should backup your database as an extra precaution anyway.
Thank you,
Steve
Hi @rmusick_7875,
I'm afraid API endpoints are not customizable, and we do not support doing "reverse proxy" or otherwise rewriting the URLs. Hopefully this will be a good chance to make the endpoint URL easier to configure/change on your end - this will be important, as you may wish to move to multiple feeds, etc.
Good luck,
Steve
Hi @greg-swiderski_0221 ,
It looks like there is an "Object reference not set to an instance of an object" error that's occurring while trying to connect:
2024-04-30 09:52:41,341 8256 [WARN ] - Unable to connect to source 'http://localhost:8624/nuget/approved-choco/':
Object reference not set to an instance of an object.
That error is presumably coming from the Chocolatey client (choco), and unfortunately there's no way to know what it means. Most likely, it's an "error reporting an error" message, but it's hard to say.
You could use some HTTP monitoring software (Fiddler Classic, Wireshark) to see if ProGet is returning an error of some kind... though even if so, choco should report that error.
I would check with the Chocolatey team on this one.
Thanks,
Steve
@philippe-camelio_3885 I just cloned the issue for Otter (OT-509), so it should be an easy fix. Maybe this will help w/ data sync issues as well!
Hi @jw ,
Thanks - we modified the script and tested it against your database backup:
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb
That said, your duplicate data should not cause any problems. The only feed packages this seems to modify in your database are Microsoft.NetCore.App.Runtime.win-x86-8.0.0 and Microsoft.NetCore.App.Runtime.win-x64-8.0.0. Both are cached.
However, given that it had a bug, we probably won't put this script in 2024.2; we'd much rather share the script on a case-by-case basis to fix problems until we're confident they will be resolved, and then decide how to have users repair the data.
Thank you,
Steve
@philippe-camelio_3885 this is related to SQL Server timeouts/performance issues
Unfortunately we don't have any other information available from what's provided. About the only suggestion I have is to increase hardware/CPU on the SQL Server, perform database maintenance (see advanced maintenance), and use SQL Server performance monitoring tools to find where the problems are.
There may be some queries that can be optimized or missing indexes. If you can find any of those, please let us know and we can explore adding those optimizations to the product.
@jw the script should also work in ProGet 2023
That error implies that data in FeedPackageVersion didn't get cleaned up, as that should have been done by this portion:
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb#file-gistfile1-txt-L449
Unfortunately this is difficult to debug on its own, and it will be impossible to debug w/o the database itself, but if you're comfortable with SQL, feel free to modify or tweak it. It's not an easy script :(
We haven't run this against all customer databases yet, but it's on our list for this week. If you haven't sent us your database already, please do :)
Hi @v-makkenze_6348 ,
Just an FYI, we do have a duplicates clean-up script available, and ran it against your database with no issues.
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb
We haven't yet decided how to roll it out, but if you do run into more package analyzer issues like this (particularly on the SBOM side), it might help.
Thank you,
Steve
Looks like this is a bug when servers have list-type configuration variables; we'll get this fixed in an upcoming maintenance release via BM-3944.
Otherwise, I can't think of a work-around that would be any simpler than editing the impacted servers.
Thank you,
Steve
A Full Server Check will occur once per hour by default - does that also crash? You can also trigger one by going to Admin > Service.
Also, can you give more detail about what you mean by crashing? There is no "app pool" on Linux/Docker (it doesn't use IIS), and the web application isn't responsible for running the server check (that's the Service application), so it's unclear what the crash would be.
Thank you,
Steve
Hi @jw ,
So far as we can tell, these sorts of errors only occur when there is a 4-part NuGet version ending in 0 that was added in a previous version. For example, if you added 4.2.1.0 in a previous version, it could yield an analysis error when trying to view the package as 4.2.1 (i.e. how NuGet requests it).
The duplicate package names were a problem in ProGet 2023 as well, but we haven't seen any crashes due to that yet - just weird results in the SBOM analysis.
So long story short, I don't think you need to run the script.
However, if you upgrade and run into these issues, then you can run it:
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb
If that still doesn't work, then rollback to ProGet 2023 and restore the database. And hopefully send it to us again so we can investigate the issue :)
Thank you,
Steve
Hi @jw ,
We have a data-cleanup script but did not ship it with ProGet 2024.1; we are considering it for ProGet 2024.2.
The above error occurred because of a 4-part version that ended in .0; if you just delete that from the feed and add it back again, it should work. Can you try that?
We can provide you the data clean-up script if you want to try it. We ran it against several customer databases and it seems to work okay, cleaning up a lot of "bad" versions and casing.
Thank you,
Steve
Hi @dan-brown_0128 ,
Oh, that's really weird - I must have misunderstood from the get-go. I'm not sure how that scenario is possible; at least in ProGet 2023 and 2024, I wasn't able to reproduce it under any circumstances.
Perhaps it was a bug in an import from a long while ago. But re-reading your first post, it sounds like you've worked around it?
Otherwise, I think it would take a kind of data-cleanup script to handle.
Thanks,
Steve
Hi @dan-brown_0128,
ProGet simply does not support case-sensitive npm package names. This is a 10+ year old design decision (back from ProGet v3), which we made because npm packages were always supposed to be lowercase: https://docs.npmjs.com/creating-a-package-json-file#required-name-and-version-fields
JSONPath is an invalid package that entered the public registry due to an npmjs.org bug. It's an old package -- the latest version, 0.1.2, is 8+ years old -- and according to the package page, it has "moved to jsonpath-plus to avoid npm problems in dealing with upper-case packages."
ProGet permits these packages, but there are just going to be quirks if you have them in your feed. For example, if you have both jsonpath-1.1.1 and JSONPath-0.1.2 in your feed, they will generally show up as jsonpath, but in some API queries you may see JSONPath.
We understand this is different than how the npmjs.org registry behaves, but as I mentioned, this was a "day 1" choice that we can't change today. No one's mentioned this as an issue until now.
If npm audit is crashing because JSONPath is sometimes returned in results, that sounds like a bug in npm audit to me? We obviously can't add support for case-sensitive npm packages nor change how ProGet returns data in npm API calls just to work around an npm audit bug like this.
The easiest solution is to stop using JSONPath and remove it from your feed, which it sounds like you already have. It's a really old package, and there are only a handful like this on the public registry.
Best,
Steve
@dan-brown_0128 I understand
So you'd either have to upgrade (where we fixed the code) or reassess it to Low so npm audit won't crash. I suppose you could also patch npm audit so it doesn't crash.
Hi @dan-brown_0128 ,
Thanks for the detailed analysis; we'll get this fixed via PG-2636 in 2024.1 (due later today).
You should be able to assess it as "Low" to work around it in the meantime. FYI, here is the code that maps a vulnerability to an npm audit severity:
// assessed
if (vuln.Severity_Code != null)
{
    return vuln.Severity_Code switch
    {
        Domains.AssessmentSeverityCodes.Error => npmAuditSeverity.critical,
        Domains.AssessmentSeverityCodes.Warning => npmAuditSeverity.high,
        Domains.AssessmentSeverityCodes.NotApplicable => npmAuditSeverity.info,
        Domains.AssessmentSeverityCodes.Custom => npmAuditSeverity.info,
        _ => npmAuditSeverity.info
    };
}
// unassessed and has a Severity Score
else if (!string.IsNullOrWhiteSpace(vuln.ScoreSeverity_Text))
{
    return vuln.ScoreSeverity_Text switch
    {
        nameof(CVSSRange.Low) => npmAuditSeverity.low,
        nameof(CVSSRange.Medium) => npmAuditSeverity.moderate,
        nameof(CVSSRange.High) => npmAuditSeverity.high,
        nameof(CVSSRange.Critical) => npmAuditSeverity.critical,
        _ => npmAuditSeverity.info,
    };
}

// unassessed with no score
return npmAuditSeverity.none;
Thank you,
Steve
Hi @dan-brown_0128,
I understand that npm is currently case insensitive, but there was a brief period when it wasn't. This led to several duplicate packages being published.
I don't believe this is possible anymore? Either way, ProGet is case insensitive for npm packages, which means that JSONPath and jsonpath are considered the same package. If JSONPath came first, then that's what the package name is returned as in results, I guess?
It doesn't seem to cause any issues, except with npm audit?
This is a rare case, so I would just not worry about it. We don't recommend npm audit; you should use pgutil audit anyway. It's more comprehensive, covers licenses, all that.
Thanks for the inquiry; I've updated our Other feed types docs with a link to this thread.
On first glance, it looks like a CocoaPod itself is just a basic text file that acts as a "pointer" to a GitHub repository. The pod client uses the git client to "download" files and tags for versioning. In other words, there is no package file.
A CocoaPods repository is a Git repository, and the pod client seems to just use the git client to commit/push files to the repo. In other words, there is no API for ProGet to implement.
Here is what a CocoaPods repo looks like (note how it's just a bunch of files in a Git repo):
https://github.com/CocoaPods/Specs/tree/master/Specs
To make a "private repo", you basically just create a Git repository:
https://guides.cocoapods.org/making/private-cocoapods
This means that ultimately, to implement a CocoaPods repository, your only option is to create a private Git repository? That's my initial assessment at least.
And of course, we have no plans to add Git source code hosting to ProGet :)
From here, I recommend asking the developers to research this a bit more, and maybe contribute their thoughts? It just doesn't seem like iOS devs use package files - it's all very open-source and just GitHub repositories.
Steve
@daniel-scati we'll also get this fixed via PG-2635 in an upcoming maintenance release (hopefully 2024.2), which is targeted for next Friday.