Hi @mail_6495 ,
Looks like this was a regression with API Key Authentication; the uploader control improperly required an API key. This will be fixed via PG-2104 in this Friday's maintenance release.
Cheers,
Alana
Hi @mcascone
Looking closer, it doesn't appear that https://services.gradle.org/distributions/ is a Maven repository after all (no folder structure, missing metadata XML files)? It looks like just a regular web page (HTML) with links to files that can be downloaded (i.e. there's no API).
This seems like something you should make an asset directory for (but obviously a connector wouldn't be possible, since there's no API). They probably just prepend distributionUrl to a known file name, like gradle-7.3-bin.zip?
The error is definitely related to the SSL/HTTPS connection from Java (Gradle) to IIS (ProGet). It's certainly something you need to configure in Java, but I'm afraid I have no idea how to do that -- it does seem to be a common question people ask about (found on Stack Overflow -- https://stackoverflow.com/questions/9210514/unable-to-find-valid-certification-path-to-requested-target-error-even-after-c)
After you fix that, you could probably use an asset directory. Please let us know, it would be nice to document!
Cheers.
Alana
Hi @mcascone ,
I'm almost certain that you can just set up a Maven feed/connector for this purpose -- please let us know, I'd love to update the docs to clarify.
You probably won't be able to "see" the packages by searching (this requires an index that many repos don't have), only by navigating to artifacts directly.
Cheers,
Alana
Hi @janne-aho_4082 ,
would it be possible to cache the authentication request and LDAP response for a short time
That definitely seems possible, but that's the sort of thing we'd want to implement in the "v4" of this directory provider (not as a patch in a maintenance release). I meant to link to that last time, but here it is: https://docs.inedo.com/docs/installation-security-ldap-active-directory#ldap-ad-user-directories-versions --- but v4 is still a ways out.
switching from account credentials to api keys wouldn't happen over night
We definitely recommend this path going forward, particularly from a security standpoint: there's generally a smaller attack surface if an API key gets leaked (compared to LDAP credentials).
Hi @janne-aho_4082 ,
Looking at your CEIP sessions, there are a lot of factors at play.
The biggest issue is that your LDAP response is incredibly slow. We can see that a basic query to [1] find a user is taking 500-900ms, and a query to [2] find user groups is taking upwards of 7500ms. This is compounded by thousands of incoming requests, thousands of outgoing requests, relatively slow download times, and minimum hardware requirements. This all yields different/unpredictable performance, which is why you're seeing such varying results.
All told, it looks like ~70% of the time is going to LDAP queries (each request does the find user query), ~18% is going to outbound connections, and ~8% is going to the database (most to the "get package metadata" procedure).
There are a few "overload" points, where the OS is spending more time managing multiple things than actually doing them; increasing CPUs ought to help.
So, at this point, I would recommend:
switching from LDAP credentials to Feed API keys (either a username:password key or a "Personal API Key")
This should yield a significant performance improvement overall. We can consider new ways of caching things in v4 of this directory provider.... but if you have this kind of latency on your LDAP queries, it's best to just use Feed API keys...
Alana
Hi @mcascone ,
The ProGet Jenkins Plugin is designed for creating and publishing universal packages, so it won't work for assets.
The Asset Directory API is really simple though, and a simple PUT with curl or Invoke-WebRequest will do the trick. Hopefully that's easy enough to implement :)
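For example, here's a minimal sketch in PowerShell -- the server name, asset directory name, path, and API key below are all placeholders you'd swap for your own:

# Upload a file to an asset directory via a simple HTTP PUT (all names/keys are placeholders)
$apiKey = "your-api-key"
Invoke-WebRequest -Method Put `
    -Uri "https://proget.example.com/endpoints/my-assets/content/tools/my-tool-1.0.0.zip" `
    -Headers @{ "X-ApiKey" = $apiKey } `
    -InFile ".\my-tool-1.0.0.zip"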
Cheers,
Alana
Hi @moriah-morgan_0490 ,
We are working on documenting this all much better, so thank you for bringing it up. But the scenario you describe (using Otter as a script repository/execution center) is definitely possible and is something we are actively working on improving and making easier.
Otter can pass in variables, read your existing Comment-Based Help, and you can then build Job Templates around the variables. We have a tutorial about that here: https://docs.inedo.com/docs/otter-create-a-gui-for-scripts-with-input-forms
As for Secure Credentials, no problem. Behind the scenes, this is handled through the $PSCredential function in OtterScript -- and now that I write this, I think we should add support for this to Job Templates.
Anyways, after uploading a script named MyScriptThatHasCredentials.ps1 to Otter, and creating a SecureCredential in Otter named defaultAdminAccount, you would just need to write a "wrapper" in OtterScript for it:
PSCall MyScriptThatHasCredentials
(
    User: $PSCredential(defaultAdminAccount)
);
Do you want the Otter Service and/or Inedo Agent to run as a gMSA? Sure, there's no problem as long as there's access: https://inedo.com/support/kb/1077/running-as-a-windows-domain-account
Cheers,
Alana
@janne-aho_4082 thanks!
The timing might be okay then.
npmjs.org will most certainly be faster -- not just because they have a massive server farm compared to you, but because their content is static and unauthenticated.
ProGet content isn't static -- it also needs to proxy most requests (like "what is the latest version of this package?") to the connectors. Turning on metadata caching in the connector will help, but I would still expect slower response times.
@janne-aho_4082 great, thanks!
Do you know what the old times were? I really don't know if 2-3 minutes for installing 1400 packages is unreasonable... that doesn't sound so bad to me, but I don't know.
If it's easy to try the older version, we can try to compare CEIP data on both.
Oh and the easiest way to find your CEIP data is from the server/machine name... but it's probably best to submit to the EDO-8231 ticket since it's perhaps sensitive data.
@janne-aho_4082 I'm not really sure what always-auth does, but my guess is that it first tries a request with no authorization, receives a 401, then re-sends the request with an authorization header. Either way, my guess is that it's unrelated; that initial 401 should be really quick if anonymous access isn't enabled.
rc.4 seems to only have PG-2094 and PG-2098... both unrelated to LDAP, but pretty minor. And you'll now have a "copy" button on the console :)
Hi @albert-pender_6390 ,
This is an internal Windows error, and happens when another process (usually a UI window) has an open handle to a hive within the Windows Registry. It's a long-standing bug/issue with COM+ services (which Active Directory uses), and is not really ProGet-specific.
It's a side-effect of the ProGet upgrade process, which often stops/starts Windows services and IIS application pools. Ultimately restarting will fix it (as you've noticed), but changing "Load User Profile" to "true" on the application pool is also known to fix it.
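If you'd rather script that setting than click through IIS Manager, something like this should do it (assuming your application pool is named "ProGet"):

# Set "Load User Profile" to true on the application pool (the pool name is an assumption)
& "$env:windir\system32\inetsrv\appcmd.exe" set apppool "ProGet" /processModel.loadUserProfile:true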
Best,
Alana
@paul_6112 well, as it turns out... this was actually trivial to fix 
It will make it in 7.0.19, scheduled for Feb-25th
Hi @galaxyunfold ,
The "Timeout expired" errors are indeed a result of database or network connectivity issues. It's possible to create connector loops (A ->B -> C -> A) that will yield this behavior as well.
The "server too busy" is an internal IIS error, and it can be much more complicated. It's rarely related to load, and is more related to performing an operation during an application pool recycle. Frequently crashing application pools will see this error frequently.
There are a lot of factors that determine load, and how you configure ProGet (especially with connectors and metadata caching) makes a big difference. But in general, it starts to make sense at around 50 engineers. At 250+ engineers, it makes sense not to go load-balanced / high-availability.
Here is some more information: https://blog.inedo.com/proget-free-to-proget-enterprise
Cheers,
Alana
Hi @paul_6112 ,
Just FYI: selecting the "Application:" dropdown isn't refreshing/cascading to the list of releases or builds.
As a workaround, you can select Application, then hit the refresh button in your browser. This is a nontrivial update, but one we'll get fixed via BM-3777 in an upcoming release.
Cheers,
Alana
Hi @galaxyunfold ,
Based on the symptoms you're describing, it sounds like the problem is load-related. How many developers/machines are using this instance of ProGet?
When you have a handful of engineers doing package restores with tools like npm, it's similar to a "DDoS" on the server -- the npm client tool makes hundreds of simultaneous requests to the server. And the server then has to make database connections, and often connections out to npmjs.org, etc. The network queues get overloaded, and then you get symptoms like this.
See How to Prevent Server Overload in ProGet to learn more.
Ultimately, load-related issues come from a lack of network resources, not CPU/RAM. You can reduce connections (throttle end-users, remove connectors, etc.), but the best bet is going with a high-availability / load-balanced configuration.
I would also recommend upgrading, as there's been a lot of performance improvements in the 4-5 years since ProGet v4 was released.
Alana
Hi @p-boeren_9744 ,
I added support for npm packages to treat "SEE LICENSE IN" as an embedded license file via PG-2085.
It now looks like this, and blocks/allows the package accordingly:
(screenshot)
This will be released in this week's upcoming maintenance release.
Cheers,
Alana
Hi @dustin-davis_2758 ,
I'm not really sure - I'm not familiar enough with ADO Docker Compose to help :/
The error is occurring because the container repository (image) name is incorrect; it should be something like proget.initech.com/feedName/initech/repositoryName
Generally you put this in your docker-compose.yml file, like this:
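Something along these lines -- a minimal sketch, with the service name and tag made up:

# docker-compose.yml; the image name must include the ProGet host and feed
services:
  myapp:
    image: proget.initech.com/feedName/initech/repositoryName:latest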
So that would be the first place I would look. If you have the proper image name in there, then I guess ADO might be doing something different?
Let us know what you find!
Cheers,
Alana
hello @nmorissette_3673 ,
I can't think of anything in ProGet that could yield this behavior (especially for one particular package), and I can't reproduce it with that package. So this is tricky to debug.
Please try reproducing with a fresh new feed:
1. Create a new feed (e.g. nuuget) and add a connector to NuGet.org
2. Browse to /feeds/nuuget/Puma.Security.Rules/2.4.7
3. Download the package via /nuget/nuuget/package/Puma.Security.Rules/2.4.7
If that works, then there's some difference between the two feeds.
If it doesn't work, it's likely something between ProGet and NuGet.org (which would be weird, but maybe a content filter/proxy).
Let us know what you find!
Cheers,
Alana
Hi @nmorissette_3673 ,
That's odd, but I wonder if the package file was deleted from disk, and it's a cached package?
If that's the case, you should see a very specific message about it, like "Could not find a part of the path 'c:\LocalDev\ProGet\PackageStore.nugetv2\F1\Puma.Security.Rules\Puma.Security.Rules.2.4.7.0.nupkg'.".
Otherwise, here's what I did to reproduce:
1. Create a new feed (e.g. nuuget) and add a connector to NuGet.org
2. Browse to /feeds/nuuget/Puma.Security.Rules/2.4.7
3. Download the package via /nuget/nuuget/package/Puma.Security.Rules/2.4.7
Of course, it's no problem. If I delete the package on disk, then I get a 404 error.
If I "delete cached package" from the Web UI, and then download again it's fine.
Hope this helps...
Cheers,
Alana
Well, that's an interesting way to specify an embedded license file. I don't know if that's a convention or specification, but that seems to be a new way of handling it. It's kind of documented now, which is good: https://docs.npmjs.com/cli/v8/configuring-npm/package-json#license
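For reference, it's just a license field in package.json that points at a file shipped inside the package (the package name/version here are made up):

{
  "name": "my-package",
  "version": "1.0.0",
  "license": "SEE LICENSE IN LICENSE.txt"
}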
Anyways, we already handle this for NuGet packages using a URL convention like this:
packageid://Aspose.Words/21.9.0
package://Aspose.Words/21.9.0/License\Aspose_End-User-License-Agreement.txt
Those will be defaulted in the fields if the package specifies no license or a file license.
Not sure if it works now for npm packages, but it'd be relatively easy to adopt that convention, and then suggest it when the "license" field starts with "SEE LICENSE IN"...
Anyways we'll investigate this and update in a day or two.
Cheers,
Alana
@araxnid_6067 thanks, glad it worked! I'll work to update the documentation about this topic :)
Hi @cronventis ,
Great find - that seems to explain what we're seeing: containerd reports on containers differently than dockerd. So, we'll just search for container images based on ContainerConfigBlob_Digest OR Image_Digest.
This change was trivial, and will be in the next maintenance release (or available as a prerelease upon request) as PG-2081 - scheduled release date is Feb 11.
Cheers,
Alana
Hello,
Are you doing an API call using PowerShell or something to delete packages? Did this happen after a recent upgrade to ProGet v6?
This post may help: https://forums.inedo.com/topic/3418/upgrading-from-5-to-6-causes-api-key-to-stop-working/2
We can definitely consider adding the API-key authentication back - we didn't realize it worked in the first place :)
Please let us know if this is the issue.
Cheers,
Alana
@araxnid_6067 is it in the [DockerBlobs] table? If not, then ProGet doesn't know about it, and it's safe to delete.
Otherwise, it might still be referenced by a manifest, but ProGet doesn't have that relation in the database.
You'd have to parse [ManifestJson_Bytes] to find out. If you're comfortable with SQL, you could do a "hack" query to convert that column to VARCHAR, then use OPENJSON or a LIKE query to search all manifests for that digest.
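For example, something like this -- just a sketch, and note that [DockerImages] is a stand-in for whatever table actually holds the [ManifestJson_Bytes] column:

-- Find manifests that reference a given digest (table name and digest are placeholders)
SELECT *
  FROM [DockerImages]
 WHERE CAST([ManifestJson_Bytes] AS VARCHAR(MAX)) LIKE '%sha256:0123abcd%'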
However, that's what ProGet does during feed cleanup.
@Stephen-Schaff I'm afraid I don't... it's a bit tricky to use, since you need to request a bearer token first and then send that in a header value.
https://docs.docker.com/registry/spec/auth/token/#how-to-authenticate
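To sketch the flow in PowerShell -- the token URL, service, and scope below are placeholders; the real values come from the WWW-Authenticate header of the initial 401 response:

# 1. Request a bearer token (URL/service/scope are placeholders from the WWW-Authenticate header)
$creds = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("api:your-api-key"))
$token = (Invoke-RestMethod "https://proget.example.com/v2/_auth?service=proget.example.com&scope=repository:myfeed/myimage:pull" `
    -Headers @{ Authorization = "Basic $creds" }).token
# 2. Send the token as a Bearer header on the actual registry API request
Invoke-RestMethod "https://proget.example.com/v2/myfeed/myimage/manifests/latest" `
    -Headers @{ Authorization = "Bearer $token" }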
Thanks @mcascone , I also added this to our "Promotion / Repackaging Visibility & Permissions Rethinking" task - sounds like something we can consider :)
Hi @mcascone ,
I admit this can be confusing and unintuitive; these features were added separately over time, and they weren't originally designed for how they're used today. We need to rethink/redesign this based on the use cases.
I'm going to add this thread under the "promotion/repackaging workflows" topic for our next major version of ProGet. Once we know what we want to do, we may be able to implement some changes as a preview feature in v6.
FYI, this is exactly how the big API Key changes and feed/package usage instructions came about!
https://forums.inedo.com/topic/3204/proget-feature-request-api-key-admin-per-user
So stay tuned :)
Hi @Stephen-Schaff ,
The API Key changes in ProGet v6 involved changing some of the authentication code, so seeing bugs/regressions where a connected system (build/CI server) reports authentication errors is not unexpected.
Based on the error message you're sending, it looks like you were using an X-ApiKey header to authenticate to the Docker registry API. That actually wasn't supposed to be supported before (Docker API requires token-based authentication), and must have only worked because of a bug / unclear specification in our old authentication code...
So the options from here:
We can consider adding/documenting support for using the X-ApiKey header in the Docker API, but it's not possible at the moment....
Hi @cronventis, just wanted to let you know that this is complicated, and it's not something we can quickly debug/diagnose.
Based on our analysis, the data being returned from your Kubernetes API is different than our instance, and the instances we've seen in the field. Our instance's API is returning the configuration digest, but it looks like your instance is returning the manifest digest.
Which one is correct? Why is your instance doing that? Why is ours doing this? It's a mess 
Code-wise, it would be a trivial fix in ProGet to make. Basically we just change this...
var data = await new DB.Context(false).ContainerUsage_GetUsageAsync(Feed_Id: this.FeedId, Image_Id: this.Image.ContainerConfigBlob_Digest);
... to this...
var data = await new DB.Context(false).ContainerUsage_GetUsageAsync(Feed_Id: this.FeedId, Image_Id: this.Image.Image_Digest);
... except that would break our instance and the others that return configuration digests.
We're tempted to "munge" the data results (basically just concatenate both database resultsets), but it would be really nice to know (1) which is correct and (2) why one instance does one thing.
Anyways that's our latest thought. Do you have any insight into this? This is just so bizarre.
Well, we'll keep thinking about it on our end as we have time. Just wanted to give you a sitrep.
Cheers,
Alana
Hi @araxnid_6067 ,
This behavior is expected, and it's handled via Garbage Collection for Docker Registries:
Unlike packages, a Docker image is not self-contained: it is a reference to a manifest blob, which in turn references a number of layer blobs. These layer blobs may be referenced by other manifests in the registry, which means that you can't simply delete referenced layer blobs when deleting a manifest blob.
This is where garbage collection comes in; it's the process of removing blobs from the package store when they are no longer referenced by a manifest. ProGet performs garbage collection on Docker registries through the "FeedCleanUp" scheduled job.
So basically, it will get deleted when the corresponding FeedCleanUp job runs. It defaults to every night, and you can see the logs on the Admin > Manage Feed page.
Cheers,
Alana
Hi @colin_0011 , that certainly is an odd issue!
We've never seen it before, but it's coming from the library we're using (libgit2). I don't really know what it means or what's causing it (is it the number of files in the repository, etc.), but I have a few ideas:
1. Upgrade to the latest version of BuildMaster
2. Clear out the Git workspaces directory (C:\ProgramData\BuildMaster\Temp\Service\GitWorkspaces)
3. Use git.exe instead of the built-in library
You can do #3 by setting the GitExePath parameter on the operation, or configuring a $DefaultGitExePath variable at the server or system level in BuildMaster; this will force all Git source control operations to use the CLI instead of the built-in library.
It's possible the bug was already fixed in a newer version of the library. What version of BuildMaster are you using?
Hi @robert_3065 ,
Glad it's working!
Good point about the error message; it's in a kind of general place, so I just replaced that unhelpful base64 decoding message with this (via PG-2069):
string userPassString;
try
{
    userPassString = context.Request.ContentEncoding.GetString(Convert.FromBase64String(authHeader.Substring("Basic ".Length)));
}
catch (FormatException)
{
    throw new HttpException(400, "Invalid Basic credential (expected base64 of username:password)");
}
Not the perfect solution, but better than before!
Cheers,
Alana
Hi @robert_3065,
Based on this, I think the _auth token in your .npmrc file isn't correct. That's the token sent to http://OURSERVER/npm/npm_internal/, and it's supposed to be base64-encoded, in api:apikey or user:password format.
Here's some more information about it:
https://docs.inedo.com/docs/proget-feeds-npm#npm-token-authentication
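If it helps, here's a one-liner sketch to generate that value (substitute your real API key or username:password):

# Produce the base64 value for the _auth setting in .npmrc (the key is a placeholder)
[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("api:your-api-key"))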
npm auth isn't so intuitive, unfortunately :/
Alana
@Stephen-Schaff great to hear! And I guess another way to do it would be enabling/disabling the semver restrictions on the feed
Let us know if it keeps happening and whether you can find a pattern - we'll see if we can identify what might be the cause of it
@mcascone
existing connection forcibly closed
I'm afraid this is more of the same; there's some sort of network policy that's blocking this connection. It could be the way your laptop is configured, but maybe it's also happening on the HTTPS/SSL level? Anyways, the remote server (not proget.inedo.com, but some intermediate) is disconnecting at some point.
The ProGet-5.3.43.upack would really only be useful for manual installation; but it also might be bad/corrupt/incomplete. You could try unzipping it to see.
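A .upack file is just a zip archive, so something like this works (paths are placeholders):

# Copy to a .zip extension so Expand-Archive accepts it, then unpack and inspect
Copy-Item .\ProGet-5.3.43.upack .\ProGet-5.3.43.zip
Expand-Archive .\ProGet-5.3.43.zip -DestinationPath .\proget-upack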
Oh.... I probably should have said it before, but we have premade, single-exe offline installers for specific versions of ProGet: https://my.inedo.com/downloads/installers
Here is some more information about them; https://docs.inedo.com/docs/desktophub-offline
These 403 errors are all coming from your proxy server (firewall); unfortunately we/you have no visibility on that.
But it's clear that some requests are allowed, and others aren't. Maybe it doesn't like requests that download a .upack file. Maybe it doesn't like the agent header. Maybe it tries to scan/verify contents with a virus check? It's a total guess.
From here, best bet is to check with IT to see if they can inspect the firewall/proxy logs.
@Stephen-Schaff said "Any ideas on how I can get the latest tag to be auto applied?"
the "virtual" tags are recomputed when a tag is added, so if you can try tagging your image 1.0.2 (or something), and then deleting that tag, you should see1, 1.0, and latest all applied to that image
Hi @mcascone,
Our products are built with .NET 4.5.2, which uses the Windows certificate store for its trust chain.
I suspect that ZScaler is replacing the certificate, and that's causing a trust problem. Maybe you can try installing the ZScaler certificate directly in the store; there are also some registry tweaks / hacks that might make it work. Unfortunately I don't have any specifics on what you can try.
You should see the same errors if you log in as the service account user and try to visit the site in IE or Edge. PowerShell would also exhibit the same errors.
In any case, I would search for something like "ZScaler certificate TLS error Windows" and hopefully find some specific things to try...
Best,
Alana
Hi @marc-ledent_9164 ,
We're currently investigating this.
FYI: Based on the URL, I think there isn't a valid license... which would make sense if you just set up the instance. Having no license triggers an automatic redirect to /administration/licensing to address it. However, that page is marked as "no license required"... so it shouldn't be redirecting.
Please stay tuned.
@hwittenborn thanks for letting me know -- that was going to be my next suggestion. It's very possible the key had an unexpected combination of properties/permissions that didn't translate to the v6 internal model
But easy enough to recreate in a case like that :)
@hwittenborn this can sometimes be tricky to get working...
Did this work in ProGet v5? We made a major change to the API keys in v6, so knowing whether it's a regression would help track this down
@shaun-d-scott_2657 Inedo products do not use log4j, so it's not an issue for ProGet :)
See more information here: https://blog.inedo.com/log4shell-high-severity-vulnerabilities
@darkbasics_6739 thanks, of course that's not the problem then
We're still investigating this, and it's quite odd that the extension isn't getting transferred. The workaround you're doing is fine to at least evaluate/test, but we'll most certainly get this fixed ASAP
@v-makkenze_6348 thanks for the additional information
ProGet does not cache npm package lists/indexes. It's always generated from the information in the database, so if you pushed it - then it's in the database. You should be able to see it in the ProGet UI right after publishing.
It's very possible that the npm client is doing some sort of caching as well, but isn't doing that caching for authenticated requests. I'm not familiar enough with it, but that's my guess.
You could use a tool like Fiddler to verify / see what requests npm is actually making.
Hi @mike_4027 ,
I would definitely recommend upgrading; 5.1 is a couple major versions behind.
The issue is most certainly related to the connector; when you disable it, do you get a near-immediate 404? Under Admin > Diagnostic Center, you may see connector errors/warnings as well.
Once you confirm it's the connector, then the next best thing to do would be to monitor the traffic between ProGet and the internet. This involves setting up a Proxy (Fiddler works nicely), and then having ProGet connect to that proxy (Admin > Proxy). You should be able to identify corresponding requests, and maybe we'll see something there!
Let us know what you find,
Alana
Hi @darkbasics_6739 ,
That's really weird, but I can see how that error might happen now. It's been addressed as OT-446, and will be fixed in the next maintenance release.
We're still not sure why the Scripting extension isn't coming over. We think it might be due to a timeout of sorts, since the file is now pretty large -- though obviously that should throw a different error.... Is there a lot of bandwidth between the servers?
Cheers,
Alana
@v-makkenze_6348 it looks like you've enabled caching, which means that ProGet's responses won't be newly generated. Please disable this :)
@sbindra_9387 you can enter the activation code in ProGet when you click the activate button
Here is the instruction for manual activation:
https://docs.inedo.com/docs/myinedo-activating-a-license-key
Hi @patrick-groess_2616 ,
The /symbols/<feed-name> URL is only for Visual Studio's Symbol Location setting, and is used to download symbols. Don't use it with nuget.exe; that's what gives that error.
You need to use /nuget/<feed-name> to push packages (including symbol packages) with nuget.exe.
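For example (the server name, feed name, package, and key are placeholders):

nuget.exe push MyPackage.1.0.0.nupkg -Source https://proget.example.com/nuget/internal-nuget/ -ApiKey your-api-key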
Thanks,
Alana