@nathan-wilcox_0355 great to know!
And if you happen to have a shareable script, by all means post the results here - we'd love to share it on the docs, to help other users see a real-world use case
Hi @w-repinski_1472,
Unfortunately integrating with Clair v4 cannot be done with only a new plug-in / extension. It requires substantial changes to the vulnerability feature/module in ProGet, so it's something we would have to consider in a major version like ProGet 2024.
Thanks,
Steve
Hi @cole-bagshaw_3056 ,
The web interface and API are powered by the same program (the ProGet web server application), so if the UI is accessible so would the API, as you noticed.
In this case, this error message is coming from your reverse proxy (NGINX); I would check with your configuration there, as something is misconfigured.
The first and most obvious thing I would check is the hostname/port of your URLs. It's possible that you're accessing different hosts/domains. This is controlled by the X-Forwarded headers.
Hope that points you in the right direction
Cheers,
Steve
@sebastian it looks like the "set package status" option only appears for Local Packages...
When testing this, I noticed that there was a bug with the release analyzer; basically it's a constraint that wasn't properly added. We'll fix this via PG-2428, but you can just run this script - and deprecated packages should show up. At least on my machine :)
ALTER TABLE [ProjectReleaseIssues23]
DROP CONSTRAINT [CK__ProjectReleaseIssues23__IssueType_Code]
ALTER TABLE [ProjectReleaseIssues23]
-- [V]ulerability, [M]issing Package, [O]utdated Package, [L]icensing Issue, [D]eprecated
ADD CONSTRAINT [CK__ProjectReleaseIssues23__IssueType_Code]
CHECK ([IssueType_Code] IN ('V', 'M', 'O', 'L', 'D'))
Is there a way to list all packages that are cached by ProGet?
That API will allow you to list all packages in a feed; you'd basically want to query NuGet for each of those packages.
Hi @sebastian ,
This behavior is "intentional" but not ideal; we simply add the package file to the feed but don't set any server-side metadata properties.
This seems like something we should change, so I logged PG-2426 to address this; perhaps we'll get it in the next maintenance release. It seems relatively simple, but I didn't look too deeply.
Best,
Steve
Hi @jw ,
Ah, I can see how that would happen; I logged this as PG-2425 and we'll try to get it fixed in the upcoming maintenance release.
Steve
Hi @vishal_2561 ,
Can you provide us with a full log/stack trace? That's an unusual error and we'd like to get some more context on where it's coming from.
Thanks,
Steve
Hi @sebastian ,
In ProGet 2023, deprecation information will be shown on the package page and through the API:
The information is passed through connectors and can be set on local packages on the package status page:
They should also show up as issues in SCA reports.
However, as you noticed, it is server-side metadata (like listing status, download count, package owners, etc.), and we don't have any mechanism to "sync" server-side metadata from connectors at this time. That may come in ProGet 2024 but it's not trivial to generalize.
However, you could probably write a script that uses the new Common Package API to easily retrieve packages from a feed, check their status on nuget.org, and update deprecated info.
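To illustrate the "check their status on nuget.org" half, here's a rough Python sketch that reads deprecation info out of a nuget.org registration index. The response shape is based on the documented NuGet v3 registration API (with inlined pages); the Common Package API call to update ProGet is omitted, and you should verify the shape against the live API.

```python
# Sketch: find the deprecation info for a given package version in a
# NuGet v3 registration index (assuming pages are inlined, as they are
# for small packages). Fetching over HTTP is left out for brevity.
def find_deprecation(reg_index: dict, version: str):
    """Return the catalogEntry 'deprecation' object for `version`, or None."""
    for page in reg_index.get("items", []):
        for leaf in page.get("items", []):
            entry = leaf.get("catalogEntry", {})
            if entry.get("version") == version:
                return entry.get("deprecation")  # None when not deprecated
    return None

# Abridged example of the registration index shape:
sample = {
    "items": [{
        "items": [{
            "catalogEntry": {
                "id": "Foo",
                "version": "1.0.0",
                "deprecation": {
                    "reasons": ["Legacy"],
                    "alternatePackage": {"id": "Bar", "range": "*"},
                },
            }
        }]
    }]
}

print(find_deprecation(sample, "1.0.0")["reasons"])  # ['Legacy']
print(find_deprecation(sample, "2.0.0"))             # None
```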
Best,
Steve
Hi @caterina ,
This behavior is intentional, but not ideal. It should only affect navigation - such as the "Usage & Statistics" page on a package, or the "List Projects" page which has a "Latest Release" column. As long as the Release is active, you'll still see new issues come up.
That's really what determines whether a release is scanned or not - whether it's Active.
The reason for this... an SCA Release's "Release Number" is a free-form field, which means there are no sorting rules. So we can't practically/easily determine what the "highest" number is. Instead, we just use the order in which it was created for display purposes.
Thanks,
Steve
Hi @Justinvolved,
That error happens when HostName is not set:
https://github.com/Inedo/inedox-ftp/blob/master/FTP/InedoExtension/Operations/FtpOperationBase.cs#L120
HostName should be mapped from the Resource, but unfortunately it doesn't look like we have validation code in there to indicate if there's an invalid Resource name; it just silently fails:
https://github.com/Inedo/inedox-ftp/blob/master/FTP/InedoExtension/Operations/FtpOperationBase.cs#L160
So, I'm guessing that the resource isn't being retrieved properly; can you try putting global:: in front of the ResourceName? Since you mentioned it's global...
Cheers,
Steve
@vishal_2561 please restart the BuildMaster service and wait a few minutes for the servers to update; if there are still issues, please post details to a new topic :)
@hwittenborn it's generally in C:\ProgramData\Inedo\SharedConfig\ProGet.config; here's information about where to find the configuration file:
https://docs.inedo.com/docs/installation-configuration-files
It's just sample code; you would need to write a C# program (or whatever language you'd like) that follows that same logic to decrypt the content stored in SecretKeys using AES128.
I'm not entirely sure how SecretKeys values are persisted, but I think it's either base64 or hex literals.
Hi @pariv_0352,
Thanks for clarifying; looking closer, ProGet requires that X-Forwarded-Host is simply a hostname. You're right, there is no "standard" for this, but that's what ProGet does for the time being... and if the input is invalid, then you get the error you're seeing.
I would change your reverse-proxy header configuration to:
X-Forwarded-Host: www.testdomain.com
X-Forwarded-Port: 82
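For example, in an NGINX reverse-proxy configuration that could look like the following (the upstream address is a placeholder for your setup; the header values match the ones above):

```nginx
location / {
    proxy_pass http://proget-server:8624;
    # ProGet expects a bare hostname here -- no scheme, port, or path
    proxy_set_header X-Forwarded-Host www.testdomain.com;
    proxy_set_header X-Forwarded-Port 82;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```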
Hope that helps,
Steve
If you're looking for nuget.org-specific metadata, I recommend querying nuget.org directly; of course, if you need to work around internet access issues, you could configure a special feed/connector with no caching.
But if you're looking for the latest version of a package, the registration API is your best choice. That's what Visual Studio (the NuGet client) does for every package and dependency, every time a restore happens.
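As a sketch of that, each page in a registration index advertises an "upper" version bound, and pages are returned in ascending version order; a minimal Python example of picking the highest version (this ignores listed/prerelease filtering, which a real client also applies):

```python
# Sketch: pick the highest version from a NuGet v3 registration index
# by reading each page's "upper" bound. Assumes the documented page
# ordering; HTTP fetching is omitted.
def highest_version(reg_index: dict) -> str:
    uppers = [page["upper"] for page in reg_index.get("items", [])]
    # Pages are in ascending version order, so the last page's
    # "upper" is the highest version overall.
    return uppers[-1]

sample = {"items": [{"upper": "1.5.2"}, {"upper": "2.3.0"}]}
print(highest_version(sample))  # 2.3.0
```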
Hi @jw,
While we added the published-by column to the database, it seems that it's not being populated properly in all cases; we'll get it fixed via PG-2413.
Cheers,
Steve
Hi @aivanov_3749 ,
I'm afraid the reply is the same :(
ProGet 2023 is effectively a new database architecture entirely, and while we tested every possible scenario we could imagine (as well as dozens of customer databases), some regressions are to be expected. It's also possible that there was a bug or edge case in the old retention rules, and the packages that should have been deleted weren't.
Upgrading to ProGet 2023 will automatically disable all retention rules on all feeds, and you'll be prompted to attempt a dry run before re-enabling them. The best way to troubleshoot retention rules deleting unexpected packages is to use the "dry run" feature. This will let you tweak the rules, and find which setting is behaving unexpectedly.
If you can let us know specifics or provide those execution logs, we will definitely do our best to identify the underlying cause.
Thanks,
Steve
In an ideal environment, when a user is logged into a domain-joined Windows workstation, then Visual Studio or Edge/Chrome should never prompt the user when WIA is enabled. This applies to ProGet, or any other site/webapp that uses WIA.
However, there are many things that can go wrong, and cause WIA to break. Even something as simple as an out-of-sync clock on a workstation. We've written some docs that try to explain how WIA works and give some tips on how to troubleshoot the issue:
https://docs.inedo.com/docs/various-ldap-troubleshooting#integrated-authentication-not-working
My personal opinion is that WIA was designed for a time before password managers and when everyone worked in an office without VPN. You may find it just not worthwhile to use.
NOTE: you can still use your domain credentials (i.e. Active Directory / LDAP), but users will just be required to enter them into ProGet. They can use an API key inside of Visual Studio.
Cheers,
Steve
@v-makkenze_6348 said in Reporting & Software Composition Analysis (SCA) shows many unresolved Issues:
I repackaged the Owin package but didn't realize that that would break all my builds as the dll's are now in a 1.0.0 folder where all the project files expect them in the 1.0 folder.
I guess this would work if the projects are in sdk project format but most of them are not.
Unfortunately, a consequence of those quirky versions. Hopefully it won't be too bad to update those projects/references with a bit of search/replace :)
Hi @sebastian,
could you tell me what you mean by disabling the SCA feature in the Feed Features? I don't think I've seen this option in the Feed Features
This is new to ProGet 2023, and you can find it under the Manage Feed page:
Also: there are at least two mechanisms in ProGet to block/allow package downloads: license filters and package filters (in the feed's connector settings). What happens when you combine those filters? Is a package always blocked when it is blocked by one mechanism and allowed by the other? What happens if we'd set the default license filter rule to "Block downloads by default" and allow packages like Microsoft.* in the Nuget connector? Could Microsoft.* packages without a known license be downloaded or would they be blocked?
A package can be blocked due to vulnerabilities, licenses, connector filters, or package filter rules (i.e. white source). Any one of those will block a download, so I think in your case "Microsoft.* packages without a known license" would be blocked.
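In other words, the mechanisms don't override each other - a download goes through only if every mechanism allows it. Conceptually (a simplification for illustration, not ProGet's actual code):

```python
# Simplified model of how independent block mechanisms combine:
# any single "block" wins, regardless of what the others allow.
def download_allowed(vulnerability_ok: bool, license_ok: bool,
                     connector_filter_ok: bool, package_filter_ok: bool) -> bool:
    return all([vulnerability_ok, license_ok,
                connector_filter_ok, package_filter_ok])

# "Microsoft.*" allowed by the connector filter, but the license is
# unknown and licenses default to block -> still blocked.
print(download_allowed(True, False, True, True))  # False
print(download_allowed(True, True, True, True))   # True
```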
This can be overridden at a package level, FYI:
Cheers,
Steve
Hi @v-makkenze_6348 ,
Unfortunately it's a bit difficult to troubleshoot what happened with the information provided...
The best way to troubleshoot retention rules deleting unexpected packages is to use the "dry run" feature. This will let you tweak the rules, and find which setting is behaving unexpectedly.
FYI, retention rules do not consider package statistics ("download history".. i.e. records of individual downloads) but instead use "last download date" and "download count" (metadata fields on the package version). If you delete a package, and then re-add it, the "download count" would effectively reset to zero, but the "download history" records would still remain.
Hope that helps,
Steve
@guyk thanks for letting us know; unfortunately this would require a substantial change to the way we handle authentication on connectors; if we see more of this down the line we'll definitely consider it further :)
@sebastian said in Reporting & Software Composition Analysis (SCA) shows many unresolved Issues:
I just noticed that the fix seems to be offered only for "short" versions (i.e. 1.0 to 1.0.0) but not for "long" versions (i.e. 1.0.0.0 to 1.0.0). Is this intended? I think that in cases where the last version part is 0, long versions could be auto-fixed the same way as short version.
A four-part version is not considered a "quirky version" (it's still supported by NuGet), but for some reason the NuGet client/API will occasionally drop the last 0 (e.g. 1.0.0.0 -> 1.0.0), but not always (e.g. 2.1.0.0 isn't dropped?). So we didn't bother with figuring out the rules when displaying that helper-dialog.
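For reference, NuGet's documented version normalization drops the fourth (revision) segment when it is zero; a sketch of just that one rule (as noted above, clients don't seem to apply it consistently, so treat this as the documented behavior rather than what you'll always observe):

```python
def normalize_nuget_version(version: str) -> str:
    """Drop a trailing '.0' revision from a four-part version, per
    NuGet's documented normalization; other versions pass through.
    (Full normalization also trims leading zeros, etc. -- omitted here.)"""
    parts = version.split(".")
    if len(parts) == 4 and parts[3] == "0":
        return ".".join(parts[:3])
    return version

print(normalize_nuget_version("1.0.0.0"))  # 1.0.0
print(normalize_nuget_version("2.1.0.1"))  # 2.1.0.1
```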
[1] Packages with packageid:// type licenses are still reported as "Unknown License". According to PG-2381 this should have been fixed in 2023.7, but it seems that the problem still persists. When I look at the package's page, the (manually applied) license is displayed correctly, but the SCA report still does not recognize it.
Can you create a new thread/ticket for this, with some specific repro instructions/packages (or attach an SBOM so we can very easily recreate it)? This could be related to PG-2405, but we'd want to see some specific examples of packages to test.
[2] We have a certain license type which is allowed in some feeds and blocked in other feeds. We do this to make sure that packages with that license are downloaded from the "correct" feed. This has worked fine so far. However, starting with ProGet 2023, all packages with that specific license show up as issues in our SCA reports. How can we get rid of that? Manually resolving those issues is not an option, as we are talking about ~100 affected packages on a project with daily builds.
This was actually how ProGet 2022 was supposed to work: if a package download would be blocked in at least one feed, then an issue will be created. The reason for this is that pgscan (or an SBOM) won't know/specify the feed the package is being used from.
The solution we have is to disable the "SCA feature" on the Feed Features. Would that work? We're open to other ideas, but you can see the problem we have... which feed should the analysis use? Etc.
// FYI: might be worth opening a new topic for this one, since it's a different issue as well
Thanks @sebastian, that's pretty much it :)
The underlying issue is that Visual Studio (NuGet) is referencing 1.0.0 while the actual package uses a quirky version 1.0. ProGet does not fully support quirky versions, and the SCA Feature will not try to resolve those differences.
If you have a "quirky version" of a NuGet package, ProGet 2023 will prompt you to fix it:
In the case of the above, I just created a blank NuGet feed and downloaded "Owin 1.0.0". Then Owin 1.0 appeared in the feed. Anyway, once you fix the quirky versions it should work fine.
Cheers,
Steve
Hi @dionc_5568 ,
Great question; I've updated the documentation as follows:
Usage (CLI/tool)
Execute pgscan with the identify command. For example, to generate an SBOM and submit the dependencies of v1.0.0 of the MyLibrary project to ProGet:
pgscan identify --input=MyLibrary.csproj --proget-url=https://proget.local --version=1.0.0
Note that the identify command requires ProGet 2022 and later. If you're using ProGet 6.0, you'll need to use the now-deprecated publish command; see the old version of this README to learn how.
Hopefully that makes it clear. But yes, please just use identify. The publish command uses a different API that's much slower and will be removed in later versions of ProGet.
Please check the Web.BaseUrl property under advanced settings; when that is set, the X-Forwarded headers will not be used.
Thanks,
Steve
@guyk there were some changes to containers with the implicit library changes in ProGet 2023; however, we didn't encounter this error when testing.
Can you confirm that this works in ProGet 2022, but not 2023? What architecture are you using? Sometimes it's related to so-called "fat manifests".
Hi @jimbobmcgee,
I haven't seen this option in too many tools that use SSH; actually, it's the first time I've even heard of this as an option. But I'm not an SSH expert by any means, so no idea if this is common.
We use libssh2, and I have no idea if it's technically possible. We use the libssh2_userauth_password_ex(IntPtr session, IntPtr username, uint username_len, IntPtr password, uint password_len, IntPtr passwd_change_cb) method to authenticate.
Unless it's absolutely trivial to change (like a simple flag on libssh2 or something), it probably doesn't make sense for us to invest in this feature... unless it came from a paid user trying to solve a specific problem/use case that we could work together on.
SSH is already difficult to support/maintain, so this would add more complexity to testing, debugging, documentation, etc... and we've got enough of that already heh
Cheers,
Steve
I haven't seen that error before.
It's coming from the .NET cryptography library, and when searching for the error message ("The key contents do not contain a PEM, the content is malformed, or the key does not match the certificate."), I'm not getting any hints on specific "gotchas" or ways to resolve the error.
This is where the error is coming from:
https://learn.microsoft.com/en-us/dotnet/api/system.security.cryptography.x509certificates.x509certificate2.createfromencryptedpem?view=net-6.0
So I think the issue must be that the PEM is "invalid" - at least according to the library we're using. I wish I had more information on that, but perhaps you can try a different way to generate it, or try a different way to configure HTTPS.
Cheers,
Steve
We don't maintain the Jenkins extension, but I wonder if you entered an API Key? It looks blank on that screen...
I'm really not sure what the error means otherwise ...
Hi @sebastian ,
[1] That definitely doesn't sound right; that didn't happen when we tested, so we'll have to check that out, it could be a bug...
[2] 274/566 seems awfully high; several do not have scores, but since we have to compute the score ourselves with equations like these, it's very possible that the underlying data isn't formatted perfectly or there's a bug somewhere -- can you share the examples you found so we can investigate?
[3] This is expected; it seems that many (or most) vulnerabilities in the database do not have a conspicuous CVE number (perhaps they're not CVEs??), and in those cases, the descriptions are very thorough... it's a huge dataset so we're still learning what's in it.
Cheers,
Steve
Hi @rie_6529 ,
It's hard to say; one possibility is that your server is overloaded. If you have multiple build servers, multiple connectors, etc., then it's like a denial of service attack. Under Admin > Advanced Settings, there is a setting called Web.ConcurrentRequestLimit; I would configure that to 500.
Next, I would investigate SQL Server, and see where/what queries are taking a ton of time.
With the new indexing system in ProGet 2023, it's possible we missed a SQL index or something. We tested with absolutely massive datasets, but it's hard to say.
Cheers,
Steve
Hi @reincarnator247_4909 , please submit a ticket for this with as many details as you can (types of error messages, configuration, etc.), so we can properly review and give advice. It's very case-by-case.
Hi @rie_6529 ,
Did you allow the data migration to complete?
That can take a significant amount of time if you have a lot of packages (which you likely do).
That's the only thing that would cause an issue like that which I can think of. Here's the full upgrade notes:
https://docs.inedo.com/docs/proget-upgrade-2023
Steve
Hi @sebastian,
Thanks for clarifying that :)
[1] auto-assess should work just fine; that would be our recommendation anyway
[2] this is what we thought too, but there were just so few that this would have worked on that we gave up
So in that case, you can just enable PGVC, enable download blocking on the feed, and then you'll get all the new PGVC vulnerabilities added to the system after running a Vulnerability Download scheduled job.
If you delete the OSSIndex source, then all the vulnerabilities/assessments will be deleted.
There was a very long-standing bug where ProGet wouldn't update or delete a vulnerability if it was updated/deleted at the source. We fixed that in 2023.
Now I can't say for certain if that's what happened here... but we noticed that some similar erroneous vulnerabilities -- like a vulnerability with a mangled title or some other data entry problem -- disappeared after a nightly scan.
FYI - expiration dates won't delete the assessments; they'll just be considered invalid
Hope that helps!
Hi @jw ,
The version number is a bit buried in the logs I believe, but it sounds like things are working now... and it's too much of a guessing game to figure out what might have happened now.
Cheers,
Steve
Hi @sebastian ,
We actually used your set to analyze this - and we just couldn't come up with a solution.
The PGVC found more total vulnerabilities than OSS Index did, but without doing some really complex code or machine learning or something, we couldn't figure out a simple way to reconcile the two datasets.
Many of the OSS Index vulnerabilities didn't list a CVE number, and the titles and descriptions were different - but it seemed like they were talking about the same problem in the same package.
We gave up after that. Open to ideas for sure!
Thanks,
Steve
Hi @cshipley_6136 ,
Since it sounds like there's some logs/sensitive info... I just submitted a ticket on your behalf (EDO-9257), so we'll work to troubleshoot from there!
Cheers,
Steve
Nothing new I'm afraid; you're the second user who's inquired about this so far.
We do have several customers using Terraform and ProGet, either using universal packages (like @jeff-miles_5073 suggested) or using Asset Directories. We asked them a while back if they had interest in a "proper" Private Registry, and the response was that even if it were available... they likely wouldn't use it, because it would involve changing their system, and they saw no benefit to a feed.
We know basically nothing about Terraform and have done no research into the costs/complexity of implementing a registry. Do other products/vendors support this?
The first "red flag" on my end is this:
Terraform can use versioned modules from any service that implements the registry API. The Terraform open source project does not provide a server implementation, but we welcome community members to create their own private registries by following the published protocol.
In our experience, documentation is almost always outdated and inaccurate. So this means lots of reverse-engineering of API protocols and lots of debugging.
Not saying it's impossible, but we'd definitely need community support to make it happen. Check out how the rpm feeds came to be - if it's something you can prototype in Asset Directories (for example), that could go a long way in making it a first-class feed in ProGet :)
Cheers,
Steve
Hi @msimkin_1572 , good idea! I just added a small note at the bottom, where we specify the manifest. To be honest, I didn't even know that was possible
Hi @itsoftware_2704 ,
We don't have instructions specific to Docker containers, but you basically just need to run the ProGet.Service.exe resetadminpassword command on the container, and then restart the container.
Here is more information on what this does:
https://docs.inedo.com/docs/installation-security-ldap-active-directory
I'm not very well versed in Docker, so I don't know the exact way to run that command - but in the interest of giving you an answer right away, I wanted to share this. Hoping you know how to though :)
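If it helps as a starting point, it might look something like the following with docker exec; the container name and the path to the service binary inside the image are guesses, so verify them against your image before running:

```shell
# Both the container name ("proget") and the binary path are assumptions.
docker exec -it proget /usr/local/proget/service/ProGet.Service resetadminpassword
docker restart proget
```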
Feel free to share how you ran the command, in case someone else runs into this thread!
Cheers,
Steve
Thanks for confirming that; that is really strange. I just tried it myself and got the same result.
The technical answer is that PowerShell gallery is sorting the feed alphabetically, and is only returning the top 100 entries. Of those entries, all but two are unlisted. So that's all you see.
Not a great UX, so hopefully it goes away (i.e. they return the default sort order by downloads), or maybe we can sort differently ourselves. We're not too keen to change this, since it's the "very sensitive" legacy NuGet v2 API, but perhaps in v2023 we can alter this.
Cheers,
Steve
When you add a connector to your feed, it will display the packages from those connectors. There's no way to "remove" these packages, since they're on the PowerShell Gallery. But you can always add "connector filters" to show a subset of packages (by name).
The PowerShell Gallery recently changed the API sort order to show packages alphabetically (instead of by download count), which is why you see those packages first.
Cheers,
Steve
Hi @jwest_6990
What Windows server version is Otter installed on? That might help us narrow this down.
We have seen that error if RSAT is not enabled:
https://learn.microsoft.com/en-us/troubleshoot/windows-server/system-management-components/remote-server-administration-tools
Cheers,
Steve
Hi @e-rotteveel_1850 ,
Thanks for all the details here! Very helpful, especially since we know very little about Conda.
In our code, we have a WriteChannelData and a WriteRepoDataAsync method, which write out these files on demand using the Newtonsoft.Json library.
So, I just specified a StringEscapeHandling of EscapeNonAscii, which will escape all non-ASCII characters in string properties. I don't think that will be a problem.
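For illustration, Python's standard json module has the same idea built in (Newtonsoft's StringEscapeHandling.EscapeNonAscii behaves analogously in our C# code):

```python
import json

# ensure_ascii=True (the default) escapes any non-ASCII characters as
# \uXXXX, keeping the emitted file safe for ASCII-only consumers.
print(json.dumps({"name": "café"}, ensure_ascii=True))   # {"name": "caf\u00e9"}
print(json.dumps({"name": "café"}, ensure_ascii=False))  # {"name": "café"}
```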
The change is PG-2295, and it will ship in the next maintenance release (Friday, Mar 10). If you'd like to try it in a prerelease, just let me know and I can promote our CI build so you can use it sooner.
Cheers,
Steve
Hi @priyanka-m_4184 ,
That message usually indicates a network problem; are you seeing any issues on the ProGet side of things?
Can you try pushing a package with PowerShell directly instead of using upack.exe? That way you can see an error more clearly, hopefully.
Cheers,
Steve
Hi @OtterFanboy ,
I was able to identify/fix this as BM-3818; it looks like the issue was that purging a deployed build from a deployed release causes the overview page to crash.
You can work around this by finding that release on the main "Releases" tab and purging it too.
Cheers,
Steve
@OtterFanboy forgot to mention, we did plan to build a first-class Gitea integration, so you could browse your repositories (just like GitHub etc.), but it didn't make it in time. It's still on our list!
Hi @OtterFanboy ,
Thanks for all the details :)
Looks like this is a regression/bug with browsing Generic Git repositories. That's a brand-new 2022 feature, and we're still working out kinks with it. We'll investigate and get it fixed ASAP!
Stay tuned :)
Cheers,
Steve
We haven't specifically tested with OpenLDAP (I think), but the integration does work with other providers. One compatibility issue seems to be that sAMAccountName isn't the Username property on all LDAP servers.
However, in v4 of the LDAP provider, you can now customize these queries. Have you tried this yet? https://docs.inedo.com/docs/en/various-ldap-v4-advanced
In any case, we'd be happy to work with you on getting OpenLDAP working. Just let us know what specific issues you're having, and we can look into patching, etc.
Of course, you're welcome to try modifying the code yourself... and if you want to try, I would start by forking the InedoCore extension and then using a custom build of that (just use a version number higher than the published one). Make sure to delete that custom version once we accept the pull request or publish a version with identical changes.
If you want a totally custom extension, just make one with a different name.
We don't have instructions for custom extensions with Docker, but it follows the same process as Windows (just restart container instead of AppPool/Service): https://docs.inedo.com/docs/proget-administration-extensions#manual-installation
Cheers,
Steve