Hi @v-makkenze_6348,
thanks for the report, I was able to reproduce it with that particular package. We plan to resolve this via PG-2587 in the next maintenance release scheduled for next Friday.
Hi @dan-brown_0128 ,
Yes, but only once you've enabled ProGet 2024 Vulnerability Preview features (available in ProGet 2023.29+).
Thanks,
Steve
Hi @bbalavikram ,
Sorry about that! How strange... it looks like there was a weird issue with your MyInedo account, and it was missing some internal data. This led to an error in generating the key with your email address.
I corrected your account, so please try again (either from BuildMaster or MyInedo).
And let us know if you have any questions about BuildMaster too -- happy to help!
Best,
Steve
Hi @v-makkenze_6348,
We've rewritten this to not use MERGE, which should help; we plan to ship PG-2583 in the upcoming maintenance release on Mar 1.
Cheers,
Steve
Hi @davidroberts63 ,
That sounds like a neat use case :)
The ProGet Health API will report the version of a ProGet instance, and you can use the Universal Package API - List Packages Endpoint to query our products feed for the ProGet package.
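If it helps, here's a quick PowerShell sketch (the /health URL is standard, but the property name is from memory - double-check it against your instance's actual response):
# Query the ProGet Health API and read the version
$health = Invoke-RestMethod "https://proget.example.com/health"
$health.ReleaseNumber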
Cheers,
Steve
Hi @c-schuette_7781 ,
This error is occurring on the remote server (i.e. nuget.devexpress.com). This error can happen when a server is overloaded... so you're basically doing a DoS on DevExpress's server. You'll need to try again or contact DevExpress for help.
I believe that DevExpress wrote their own, custom NuGet server. We've had several issues with it in the past. While talking to them, you should also suggest they switch to ProGet ISV Edition like some other component vendors do.
Best,
Steve
Hi @c-schuette_7781,
I have a NuGet "default" feed that is connected to the standard nuget.org feed and also includes connectors to three "local" NuGet feeds.
So basically, you're doing a Denial of Service attack against your ProGet server ;)
When the NuGet client makes that FindPackagesById() request, ProGet now needs to make four separate web requests (nuget.org plus the three other feeds). Considering that the NuGet client makes 100's of simultaneous requests for different packages, you're going to run into errors like this - especially with multiple builds (multiple sets of 100's of requests per second).
If you want to handle this level of traffic, you need to use load balancing.
Otherwise, you need to reduce traffic: switch to the NuGet v3 API, use connector metadata caching, reduce the number of connectors, set a Web.ConcurrentRequestLimit in Admin > Advanced Settings, etc.
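For example, switching a client to the v3 API is usually just a matter of pointing it at the feed's v3 endpoint (server and feed name below are placeholders):
# Register the feed's NuGet v3 endpoint with the client
nuget sources add -Name proget -Source https://proget.example.com/nuget/default/v3/index.json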
Adding an entry to the hosts file should not cause a "blocked" connection, so it sounds like there's definitely something strange going on with your machine's configuration. I'm not sure how to troubleshoot this further; self-connectors work fine in our testing of Free Edition, and other users don't have an issue.
If it helps, here's the code that ProGet uses to determine if a connection is local:
public bool IsLocal
{
    get
    {
        var connection = this.NativeRequest.HttpContext.Connection;
        if (connection.RemoteIpAddress != null)
        {
            // if there's a local address, local means the two addresses match;
            // otherwise, local means the remote address is a loopback (127.0.0.1)
            if (connection.LocalIpAddress != null)
                return connection.RemoteIpAddress.Equals(connection.LocalIpAddress);
            else
                return IPAddress.IsLoopback(connection.RemoteIpAddress);
        }

        // no address information at all (e.g. an in-process request) is treated as local
        if (connection.RemoteIpAddress == null && connection.LocalIpAddress == null)
            return true;

        return false;
    }
}
I would explore the hosts file issue; the fact that a loopback (127.0.0.1) entry wouldn't work sounds like there was some kind of data entry error/typo in your hosts file, but it's hard to say.
Best,
Steve
This behavior is expected, though a little confusing. The blue info box explains it a little bit, but if you add a second PGVC vulnerability source, then you'll see two entries for PGVC in your list. Those are separate sources that point to the same database. It's not recommended, and only acts as a work-around to allow for different assessments for different feeds.
What are you trying to accomplish? If it's just basic vulnerability scanning, then I recommend doing the following:
Hope that helps,
Steve
Hi @m-webster_0049 ,
The first thing I would try to troubleshoot this is switching to a very basic API key like hello. That just eliminates any typos, spacing, etc.
Next, I would try specifying the API key via the X-ApiKey header (see docs) - just to see if you get a different error. It's possible there is a regression somewhere.
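For example, a quick PowerShell sketch (the URL is just a placeholder for whatever endpoint you're calling):
# Pass the API key via the X-ApiKey header instead of the query string
Invoke-RestMethod "https://proget.example.com/api/..." -Headers @{ "X-ApiKey" = "hello" }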
Best,
Steve
Can you share a screenshot of your Admin > Vulnerability Sources screen? It looks like you have three vulnerability sources configured.
Note that we no longer recommend using OSS Index; instead, just have (one) PGVC enabled.
Thanks,
Steve
Hello,
That makes sense; there are a few threads with this same issue, so you may want to search and follow some of the troubleshooting steps.
But basically, ProGet checks for local requests using HttpRequest.IsLocal, which basically just looks for 127.0.0.1. If it's not local, then a license violation is recorded.
Try using 127.0.0.1 for your connectors; or, if that's not possible and your server doesn't resolve proget.xxxx.com as 127.0.0.1, you may need to add an /etc/hosts entry for proget.xxxx.com so that requests will come across as local.
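For reference, the hosts file format puts the IP address first, then the hostname:
127.0.0.1    proget.xxxx.com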
Cheers,
Steve
Hi @jw ,
There is one other setting, under SCA > Vulnerabilities > Download Blocking. Try setting that; then you may also need to run Package Analysis again.
Let us know -- we can try to add a few more hints/clues in the UI to make this less confusing, at least as a temporary measure before tying this together better in the back-end.
Thanks,
Steve
Hi @jw ,
One thing to check --- is "vulnerability blocking" enabled on the nuget-proxy feed? That's currently how SCA Projects know whether vulnerability issues should be raised.
Thanks,
Steve
I see a few issues here...
First, the URL you're using is not correct; the easiest way to find the URL is by clicking the "download package" link in the ProGet UI. It will look like this: /nuget/your-feed-name/package/your-package-name/your-version-number
Second, you're downloading a file - so you want to use a PowerShell command like this:
# Download the package file to disk
$url = "https://myprogetserver/nuget/mynuggets/package/mypackage/1.0.0"
$destination = "c:\mypackages\mypackage-1.0.0.nupkg"
Invoke-WebRequest -Uri $url -OutFile $destination
Best,
Steve
@jw thanks for the bug report!
We'll get this fixed in an upcoming maintenance release via PG-2491 :)
Hi @carl-westman_8110 ,
This is likely due to some authentication or other configuration issue with Azure Blob storage. You will see the specific error on the ProGet server, logged under Admin > Diagnostic Center.
Best,
Steve
Hi @avoisin_5738 ,
The error message means that an invalid zip file was received in the request, so the file can't even be opened. I don't know how Klondike works.
If you're totally sure that you're uploading the nupkg file, I would try opening it as a zip file (like rename it to .zip, use 7zip, etc.). I would expect a similar error.
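If you'd rather check it programmatically, here's a quick PowerShell sketch (the path is just an example):
# Try opening the .nupkg as a zip archive; an invalid file will throw here
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::OpenRead("C:\packages\MyPackage.1.0.0.nupkg").Dispose()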
If it's a valid zip file, I would upload it via the UI; if that works, it means your script has some kind of issue - corrupting the stream, not sending the complete file, etc.
Steve
Hi @avoisin_5738 ,
The error "End of Central Directory record could not be found" basically means that the file is not a valid ZIP file. The most common case for this is pushing the wrong file (.nuspec
instead of .nupkg
, or a dll or .psm file). There are some other rare cases where the stream can be corrupted on the way to ProGet, but that's not common.
Hope that helps,
Steve
Hi @k_2363,
We cannot get the endpoint ApiKeys_CreateOrUpdateApiKey to work. It seems that the JSON requires ApiKeyTasks_Table (IEnumerable`1). Unfortunately we cannot find what we have to provide here. If I look at the stored procedure, it seems that this cannot be filled with an API request.
Hmm, it looks like you may not be able to use table-valued parameters via JSON. I don't know how easy that will be to add support for; one option is to just do direct INSERT statements into the ApiKeys and ApiKeyTasks tables. I'm aware of at least one other user that does that, since it was significantly easier to do a single statement that joined on another table in another database on the same SQL Server.
It's not ideal, but this is a pretty rare use case.
Would that work?
Thanks,
Steve
Hi @k_2363,
However your explanation for [3] doesn't seem to be right in our case. We're using the 'Pull to ProGet' button to download packages from an Azure DevOps Artifacts feed to our ProGet feed; however, when the package hasn't been downloaded yet it shows in the Feed with an antenna icon.
Actually this won't work with the Common Package API; this API works only with local packages, and does not query connectors. So instead, you'll need to download the package file from the NuGet endpoint (which does query connectors).
You can find the download URL by looking in the front-end and generating a file URL from that. But it's basically like this for NuGet/Chocolatey packages: /nuget/<feed_name>/package/<package_name>/<package_version>
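For example, a quick sketch (server, feed, and package names are placeholders):
# Download via the NuGet endpoint, which does query connectors
Invoke-WebRequest -Uri "https://proget.example.com/nuget/my-feed/package/MyPackage/1.2.3" -OutFile "MyPackage.1.2.3.nupkg"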
Best,
Steve
Hi @w-repinski_1472 ,
Based on your initial usage, I think SQL Server Express will suffice. The ProGet database is basically a package metadata index; it essentially stores the package name, version number, and other metadata, including the manifest file (.nuspec, package.json, etc.). It's maybe a few KB per package, and you'd need 100k's of packages to even reach 1GB of metadata storage.
In your information you state that network connections are the bottleneck. I don't understand this completely in times when we have 100G cards; maybe I don't understand the scale at which ProGet is used in other companies.
The issue is with the number of connections, and a single server struggling with 100's of expensive queries/requests per second. Running "nuget restore" or "npm install" will hammer the repository with 1000's of simultaneous requests, and many of those need to go to nuget.org or npmjs.org to be resolved. When you have multiple users and multiple build servers running these kinds of restores, then you run into load issues.
At about 50 users, a load-balanced / high-availability cluster starts to make sense. After 250 users, sticking to just a single server doesn't make a lot of sense (cost of downtime is expensive). Once you need a server cluster, then upgrading SQL Server would probably make sense.
There's a big cost difference between a single server and a server cluster - in part the ProGet licensing fees, but also managing a server cluster is more complicated. Some organizations prefer to start with high-availability right away rather than worry about upgrading later.
Hope that helps clarify!
Best,
Steve
Hi @cimen-eray1_6870 ,
Great questions; there's no problem having a second instance with ProGet Free Edition.
The relevant restriction is that you can't use a Connector in ProGet Free Edition to connect to another instance of ProGet (either another Free Edition or your paid edition).
Hopefully you can use your Maven feed as a proof of concept for implementing it in the main instance. Good luck!
Cheers,
Steve
Hi @brett-polivka ,
It looks like you've got something configured incorrectly; the endpoint should be something like:
http://<redacted>/pypi/maglabs/simple
Cheers,
Steve
Hi @priyanka-m_4184 ,
It sounds like you have package statistics enabled; as you can see, this table gets really big over several years.
If you aren't using this data and don't care about it, then just run TRUNCATE TABLE PackageDownloads and disable the feature.
Another big table is often EventOccurrences, but usually that's much smaller.
Here is a query that will purge data from those tables before 2023:
DECLARE @DELETED INT = 1

WHILE (@DELETED > 0)
BEGIN
    BEGIN TRANSACTION

    -- PURGE OLD DATA
    DELETE TOP (10000) [PackageDownloads]
     WHERE [Download_Date] < '2023-01-01'
    SET @DELETED = @@ROWCOUNT

    -- PURGE OLD EVENTS
    DELETE TOP (10000) [EventOccurrences]
     WHERE [Occurrence_Date] < '2023-01-01'
    SET @DELETED = @DELETED + @@ROWCOUNT

    COMMIT
    CHECKPOINT
END
Best,
Steve
Hi @justin-zollar_1098 ,
First, it looks like debug-level logging is enabled, so I would definitely disable that under Admin > Advanced Settings > Diagnostics.MinimumLogLevel. It should be 20.
The most common reason for a SQL Timeout (i.e. if you google the problem) is a SQL query that is taking too long. That shouldn't happen in Otter, but it sometimes does, especially when there is a lot of data and some non-optimized queries.
A SQL Timeout when starting the Otter Service is unusual, and it may not be related to SQL queries.
The first thing I would check... are these queries actually taking that long to run in the database? You can use a tool like SQL Server Profiler or resource monitor, which will show you what's going on. You can then try running those queries directly against the database, and see if they're also taking an eternity.
It's very possible that SQL Server isn't the issue at all. It could be network related - and we've even seen some bad Windows updates trigger some strange side-effects to the Windows wait handles.
Best,
Steve
Hi @jerome-virot_4088 ,
Linux does not support UNC paths, so you'll need to mount the appropriate machine and drive to a local directory under the Linux OS. Once this has been done, you can then map the volume in your Docker container, and configure the Drop Path in ProGet.
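For example, a rough sketch for a Windows share (server, share name, and credentials are placeholders):
# Mount the share to a local directory, then map it into the container
sudo mount -t cifs //fileserver/drop-share /mnt/proget-drop -o username=svc-proget,password=...
docker run ... -v /mnt/proget-drop:/proget-drop ...
Then set the Drop Path in ProGet to /proget-drop.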
Best,
Steve
Hi Justin,
The Inedo Agent Service is generally something that you'd run on a remote server; if it's crashing on start-up, then the error message would be in the Windows Event Log. The most likely reason is insufficient permissions or an invalid configuration.
The error message that you're sharing is from the Otter Web application, and it's happening while trying to view the "Admin > Diagnostic Center". That's an unrelated problem... but it's also a bit unusual, as there shouldn't be more than 1000 entries in that table.
The first thing I would investigate is the data in the underlying table. You can just run SELECT * FROM [LogMessages] ORDER BY [LogMessage_Id] DESC, and peek at what's there.
That won't help with the agent, but it will help troubleshoot other issues. There definitely shouldn't be a timeout there.
Cheers,
Steve
Hi @jfullmer_7346 ,
The ProGet Service (and WebApp if using IIS) will crash when the database is no longer accessible. Based on the error messages, that's exactly the case. The "good news" is that it isn't ProGet related, so that at least gives you one less place to look.
It looks like you're using the SQL Server on the same machine ("shared memory provider"), but I'm not totally sure. If that's the case, then my guess is that the SQL Server is crashing; you'd have to check SQL Server's event/error logs for that. It's very rare for SQL Server to crash, and I'd be worried that it's a sign of hardware failure.
Beyond that, I don't have any specific tips/tricks on researching SQL Server connectivity problems, but if you search ... you'll find lots of advice all over the place, since this could impact pretty much any software that uses SQL Server.
Good luck and let us know what you find!
Thanks,
Steve
@nathan-wilcox_0355 great to know!
And if you happen to have a shareable script, by all means post the results here - we'd love to share it in the docs, to help other users see some real-world use cases.
Hi @w-repinski_1472,
Unfortunately integrating with Clair v4 cannot be done with only a new plug-in / extension. It requires substantial changes to the vulnerability feature/module in ProGet, so it's something we would have to consider in a major version like ProGet 2024.
Thanks,
Steve
Hi @cole-bagshaw_3056 ,
The web interface and API are powered by the same program (the ProGet web server application), so if the UI is accessible, the API would be as well - as you noticed.
In this case, the error message is coming from your reverse proxy (NGINX); I would check your configuration there, as something is misconfigured.
The first and most obvious thing I would check is the hostname/port of your URLs. It's possible that you're accessing different hosts/domains. This is controlled by the X-Forwarded headers.
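For example, in NGINX the forwarding headers are typically set like this (a sketch - adapt to your existing config):
# pass the original host and scheme through to ProGet
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;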
Hope that points you in the right direction.
Cheers,
Steve
@sebastian it looks like the "set package status" option only appears for Local Packages...
When testing this, I noticed that there was a bug with the release analyzer; basically it's a constraint that wasn't properly added. We'll fix this via PG-2428, but you can just run this script - and deprecated packages should show up. At least on my machine :)
ALTER TABLE [ProjectReleaseIssues23]
    DROP CONSTRAINT [CK__ProjectReleaseIssues23__IssueType_Code]

-- [V]ulnerability, [M]issing Package, [O]utdated Package, [L]icensing Issue, [D]eprecated
ALTER TABLE [ProjectReleaseIssues23]
    ADD CONSTRAINT [CK__ProjectReleaseIssues23__IssueType_Code]
        CHECK ([IssueType_Code] IN ('V', 'M', 'O', 'L', 'D'))
Is there a way to list all packages that are cached by ProGet?
That API will allow you to list all packages in a feed; you'd basically want to query NuGet for each of those packages.
Hi @sebastian ,
This behavior is "intentional" but not ideal; we simply add the package file to the feed but don't set any server-side metadata properties.
This seems like something we should change, so I logged PG-2426 to address this; perhaps we'll get it in the next maintenance release. It seems relatively simple, but I didn't look too deeply.
Best,
Steve
Hi @jw ,
Ah, I can see how that would happen; I logged this as PG-2425 and we'll try to get it fixed in the upcoming maintenance release.
Steve
Hi @vishal_2561 ,
Can you provide us with a full log/stack trace? That's an unusual error and we'd like to get some more context on where it's coming from.
Thanks,
Steve
Hi @sebastian ,
In ProGet 2023, deprecation information will be shown on the package page and through the API.
The information is passed through connectors, and can be set on local packages via the package status page.
They should also show up as issues in SCA reports.
However, as you noticed, it is server-side metadata (like listing status, download count, package owners, etc.), and we don't have any mechanism to "sync" server-side metadata from connectors at this time. That may come in ProGet 2024 but it's not trivial to generalize.
However, you could probably write a script that uses the new Common Package API to easily retrieve packages from a feed, check their status on nuget.org, and update the deprecation info.
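A rough PowerShell sketch of that idea (NOTE: the ProGet endpoint path and field names below are assumptions from memory - verify them against the Common Packages API documentation before relying on this):
# List local packages in a feed, then check nuget.org's registration data for deprecation
$packages = Invoke-RestMethod "https://proget.example.com/api/packages/my-feed/versions" -Headers @{ "X-ApiKey" = "..." }
foreach ($p in $packages) {
    $reg = Invoke-RestMethod ("https://api.nuget.org/v3/registration5-semver1/{0}/index.json" -f $p.name.ToLower())
    # inspect $reg's pages for catalogEntry.deprecation, then update the package status in ProGet
}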
Best,
Steve
Hi @caterina ,
This behavior is intentional, but not ideal. It should only affect navigation - such as the "Usage & Statistics" page on a package, or the "List Projects" page, which has a "Latest Release" column. As long as the Release is active, you'll still see new issues come up.
That's really what determines whether a release is scanned or not - Active or not.
The reason for this... an SCA Release's "Release Number" is a free-form field, which means there are no sorting rules. So we can't practically/easily determine what the "highest" number is. Instead, we just use the order in which releases were created for display purposes.
Thanks,
Steve
Hi @Justinvolved,
That error happens when HostName is not set:
https://github.com/Inedo/inedox-ftp/blob/master/FTP/InedoExtension/Operations/FtpOperationBase.cs#L120
HostName should be mapped from the Resource, but unfortunately it doesn't look like we have validation code in there to indicate if there's an invalid Resource name; it just silently fails:
https://github.com/Inedo/inedox-ftp/blob/master/FTP/InedoExtension/Operations/FtpOperationBase.cs#L160
So, I'm guessing that the resource isn't being retrieved properly; can you try putting global:: in front of the ResourceName? Since you mentioned it's global...
Cheers,
Steve
@vishal_2561 please restart the BuildMaster service and wait a few minutes for the servers to update; if there are still issues, please post details to a new topic :)
@hwittenborn it's generally in C:\ProgramData\Inedo\SharedConfig\ProGet.config; here's information about where to find the configuration file:
https://docs.inedo.com/docs/installation-configuration-files
It's just sample code; you would need to write a C# program (or whatever language you'd like) that follows that same logic to decrypt the content stored in SecretKeys using AES128.
I'm not entirely sure how SecretKeys are persisted, but I think it's either base64 or hex literals.
Hi @pariv_0352,
Thanks for clarifying; looking closer, ProGet requires that X-Forwarded-Host is simply a hostname. You're right, there is no "standard" for this, but that's what ProGet does for the time being... and if the input is invalid, then you get the error you saw.
I would change your reverse-proxy header configuration to:
X-Forwarded-Host: www.testdomain.com
X-Forwarded-Port: 82
Hope that helps,
Steve
If you're looking for nuget.org-specific metadata, I recommend querying nuget.org directly; of course if you need to work-around internet access issues, you could configure a special feed/connector with no caching.
But if you're looking for latest version of a package, the registration API is your best choice. That's what Visual Studio (NuGet client) does for every package and dependency, every time a restore happens.
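For example, a quick PowerShell sketch against nuget.org's registration API (the package id is just an example; note that this endpoint includes prerelease versions):
# Pages in the registration index are ordered by version range;
# the last page's "upper" field is the highest version
$reg = Invoke-RestMethod "https://api.nuget.org/v3/registration5-semver1/newtonsoft.json/index.json"
$reg.items[-1].upper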
Hi @jw,
While we added the published-by column to the database, it seems that it's not being populated properly in all cases; we'll get it fixed via PG-2413.
Cheers,
Steve
Hi @aivanov_3749 ,
I'm afraid the reply is the same :(
ProGet 2023 is effectively a new database architecture entirely, and while we tested every possible scenario we could imagine (as well as dozens of customer databases), some regressions are to be expected. It's also possible that there was a bug or edge case in the old retention rules, and the packages that should have been deleted weren't.
Upgrading to ProGet 2023 will automatically disable all retention rules on all feeds, and you'll be prompted to attempt a dry run before re-enabling them. The best way to troubleshoot retention rules deleting unexpected packages is to use the "dry run" feature. This will let you tweak the rules, and find which setting is behaving unexpectedly.
If you can let us know specifics or provide those execution logs, we will definitely do our best to identify the underlying cause.
Thanks,
Steve
In an ideal environment, when a user is logged into a domain-joined Windows workstation and WIA is enabled, Visual Studio or Edge/Chrome should never prompt the user for credentials. This applies to ProGet, or any other site/webapp that uses WIA.
However, there are many things that can go wrong, and cause WIA to break. Even something as simple as an out-of-sync clock on a workstation. We've written some docs that try to explain how WIA works and give some tips on how to troubleshoot the issue:
https://docs.inedo.com/docs/various-ldap-troubleshooting#integrated-authentication-not-working
My personal opinion is that WIA was designed for a time before password managers and when everyone worked in an office without VPN. You may find it just not worthwhile to use.
NOTE: you can still use your domain credentials (i.e. Active Directory / LDAP), but users will just be required to enter them into ProGet. They can use an API key inside of Visual Studio.
Cheers,
Steve
@v-makkenze_6348 said in Reporting & Software Composition Analysis (SCA) shows many unresolved Issues:
I repackaged the Owin package but didn't realize that that would break all my builds, as the DLLs are now in a 1.0.0 folder where all the project files expect them in the 1.0 folder.
I guess this would work if the projects are in SDK project format but most of them are not.
Unfortunately, that's a consequence of those quirky versions. Hopefully it won't be too bad to update those projects/references with a bit of search/replace :)
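For reference, in old-style (non-SDK) projects the package folder name embeds the literal version string, so the search/replace on reference hint paths would look something like this (paths are illustrative):
<!-- before: the quirky two-part version -->
<HintPath>..\packages\Owin.1.0\lib\net40\Owin.dll</HintPath>
<!-- after repackaging as 1.0.0 -->
<HintPath>..\packages\Owin.1.0.0\lib\net40\Owin.dll</HintPath>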
Hi @sebastian,
Could you tell me what you mean by disabling the SCA feature in the Feed Features? I don't think I've seen this option in the Feed Features.
This is new to ProGet 2023, and you can find it under the Manage Feed page.
Also: there are at least two mechanisms in ProGet to block/allow package downloads: license filters and package filters (in the feed's connector settings). What happens when you combine those filters? Is a package always blocked when it is blocked by one mechanism and allowed by the other? What happens if we'd set the default license filter rule to "Block downloads by default" and allow packages like Microsoft.* in the Nuget connector? Could Microsoft.* packages without a known license be downloaded or would they be blocked?
A package can be blocked due to vulnerabilities, licenses, connector filters, or package filter rules (i.e. whitelisting). Any one of those will block a download, so I think in your case "Microsoft.* packages without a known license" would be blocked.
This can be overridden at a package level, FYI.
Cheers,
Steve
Hi @v-makkenze_6348 ,
Unfortunately it's a bit difficult to troubleshoot what happened with the information provided...
The best way to troubleshoot retention rules deleting unexpected packages is to use the "dry run" feature. This will let you tweak the rules, and find which setting is behaving unexpectedly.
FYI, retention rules do not consider package statistics ("download history", i.e. records of individual downloads) but instead use "last download date" and "download count" (metadata fields on the package version). If you delete a package and then re-add it, the "download count" would effectively reset to zero, but the "download history" records would still remain.
Hope that helps,
Steve