Thanks for letting us know, @hwittenborn
We must not have the http --> https redirect set up on our DR server; we'll try to get that fixed soon
The sort order for feeds is not configurable in the ProGet UI, but some tools (like Visual Studio) do allow for sorting.
Otherwise, we haven't had any other requests for improving the ProGet UI for Chocolatey feeds (especially not from customers/paid users, that we're aware of), but if we hear more requests we'll definitely consider improvements like this.
Hi @alin-kovacs_4228 ,
We don't maintain documentation for that version, I'm afraid; the Native API is available, and you can find it in the software at /reference/api.
However, your best bet would be to work with our team to figure out what you need to do, and get some help migrating to a newer version :)
Cheers,
Alana
The high-availability / cluster configuration can be a little tricky... but glad that changing it worked.
The message "More than half of the servers are in an error state, consider restarting the BuildMaster service." must be a cached error message? We try to detect if there's a major problem with the service / agents, and then trigger that message.
Can you try to restart the BuildMaster service on each of the nodes, and see if it goes away?
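If it helps, here's one way to do that from an elevated PowerShell prompt on each node (this assumes the default service name, INEDOBMSVC):

```powershell
# Restart the BuildMaster service; if the name differs on your install,
# Get-Service *inedo* will show what it's actually called
Restart-Service INEDOBMSVC
```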
Cheers,
Alana
Hi @rmusick_7875 ,
No problem, happy to help.
[1] I'm afraid it's not really possible to sort the packages; they're displayed in the order that's returned from the remote feed (i.e. chocolatey.org), and sorting isn't possible when requesting results. Typically they come back sorted by popularity, but sometimes by recently updated; it's not predictable, I'm afraid. Internally, ProGet sorts a feed by recently updated.
[2] We used to do this, but it created a lot of confusion, because many package titles look like they are a package ID. It was especially confusing for packages with a title like Initech.Utils but an ID of InitechUtil2 or something. NuGet.org also made a similar change.
We may consider changing the behavior for Chocolatey feeds.
Hi @marc-ledent_9164 , sorry for the slow reply; I wasn't so familiar with OpenShift, so I wanted to research a little.
First, I think your Service.MessengerEndpoint should be tcp://*:4242, because you don't know which node will be active. It might be buildmaster-0, but it might not.
What's also unclear is whether you need to "open" or otherwise map port 4242. I'm thinking the service messenger is working on the node that can connect to itself, but the nodes aren't communicating over the internal network.
Cheers,
Alana
Hi @marc-ledent_9164 ,
BuildMaster doesn't use a reports directory... I'm guessing someone configured that to store reports generated from custom queries, using something like SQL Reporting? Or maybe PDFs/screenshots for auditing?
Cheers,
Alana
Hello @bkohler_1524,
Because packages are immutable, you need to decide on future compatibility at the time you create the package. This is where Semantic Versioning dependencies can really help.
Following the rules of SemVer, you can say that "ProductA 1.0.0 has a dependency on ProductB [1.1.0,2.0.0)"; what that means is that you can use every version of ProductB from 1.1.0 up to (but not including) 2.0.0.
This allows you to make as many minor versions of ProductB as you like (such as 1.200.0) before "breaking" compatibility.
If these version numbers are business/marketing-driven, and it's impossible to change their mind on versioning, you can always maintain an internal version number -- this is what many companies (such as Microsoft) do.
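Just to illustrate that range logic, here's a quick sketch using PowerShell's [version] type (only an illustration; this isn't how NuGet itself evaluates ranges):

```powershell
# Does a candidate version satisfy the range [1.1.0, 2.0.0)?
$candidate = [version]"1.200.0"
($candidate -ge [version]"1.1.0") -and ($candidate -lt [version]"2.0.0")   # True
# 2.0.0 itself is excluded by the upper bound:
([version]"2.0.0" -lt [version]"2.0.0")                                    # False
```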
Hi @jon-benson ,
We don't have a tutorial for that, I'm afraid :(
However, you're on the right track -- it does involve using the Advanced Properties on the User Directory to specify the domain controller host, credentials, and so on.
Since you mentioned you have purchased / will purchase ProGet, let me reach out to my colleague Gene; he is our Customer Advocacy Manager and often sets up appointments to help with onboarding/configuration for new users. This could be a chance to get some assistance with that.
Thanks,
Alana
Hi @jon-benson
Sorry for the mix-up / confusion. I'll try to clarify a few points.
LDAP/AD integration is only available in paid editions of ProGet (Basic, Enterprise), and you can integrate with Azure AD using LDAP. This requires you to type your Azure AD username/password into the ProGet login page.
SAML/Single Sign-On is only available in ProGet Enterprise edition. This allows you to sign in to ProGet without typing your Azure AD username/password. Also, this is no longer a preview/beta feature (the docs were just incorrect; I've now updated them).
For ProGet v2022 (aka 6.1), we are developing an improved Security Management user interface. It's basically the same, but just easier to work with. It's available as a preview feature, and that preview is enabled by default on new installations. We just added SAML support to this new interface, so on 6.0.12 you should be able to use it.
Hope this helps clarify,
Alana
Hi @nuno-ildefonso_8876 ,
Thanks for that information; this will be fixed in the next maintenance release (BM-3788), later this week (on Friday).
Hi @pariv_0352 ,
Are you able to see the results of the "DockerGarbageCollection" job?
This is actually what's responsible for deleting those images, and it runs nightly by default.
Let me share the code for it; if you already understand the database structure, then hopefully it will help you identify why it's not working, and what you might be able to look at in the logs to help troubleshoot:
```csharp
[ScheduledTaskProperties(
    ScheduledTaskTypes.DockerGarbageCollection,
    "Deletes unreferenced Docker blobs.")]
public sealed class DockerGarbageCollectionTask : ScheduledTaskBase
{
    public override Task ExecuteAsync(ScheduledTaskContext context) => this.GarbageCollectAsync();

    private async Task GarbageCollectAsync()
    {
        using var db = new DB.Context();
        this.PercentComplete = 0;

        var usedBlobs = new HashSet<DockerDigest>();
        this.LogDebug("Gathering list of all rooted blobs...");
        foreach (var image in db.DockerImages_GetImages(Feed_Id: null))
        {
            if (image.ContainerConfigBlob_Id.HasValue)
                usedBlobs.Add(image.ContainerConfigBlobDigest);

            DockerManifest manifest;
            try
            {
                manifest = new DockerManifest(image.ManifestJson_Bytes);
            }
            catch (Exception ex)
            {
                this.LogError($"Image {image.Image_Digest} has invalid manifest: {ex.Message}");
                continue;
            }

            // "Fat images" do not have blobs as layers.
            if (manifest.Layers == null)
                continue;

            foreach (var l in manifest.Layers)
                usedBlobs.Add(l.Digest);
        }

        this.LogDebug($"Found total of {usedBlobs.Count} rooted blobs; finding unreferenced blobs...");
        var allBlobs = (await db.DockerBlobs_GetBlobsAsync(Feed_Id: null))
            .Where(b => b.Feed_Id == null)
            .Select(b => DockerDigest.Parse(b.Blob_Digest));
        var unreferencedBlobs = allBlobs
            .Where(d => !usedBlobs.Contains(d))
            .ToList();
        this.LogDebug($"Found {unreferencedBlobs.Count} unreferenced blobs.");

        using var fileSystem = new DirectoryFileSystem(ProGetConfig.Storage.DockerBlobStorageLibrary);
        for (int i = 0; i < unreferencedBlobs.Count; i++)
        {
            this.PercentComplete = (i + 1) * 100 / unreferencedBlobs.Count;
            var digest = unreferencedBlobs[i];
            this.LogInformation($"Deleting blob {digest}...");
            if (!ProGetConfig.Feeds.RetentionDryRun)
            {
                await db.DockerBlobs_DeleteBlobAsync(Feed_Id: null, Blob_Digest: digest.ToString());
                await fileSystem.DeleteDockerBlobAsync(digest);
            }
        }
    }
}
```
Hi @NUt ,
LDAP / Active Directory integration is a paid feature, and should not be available in ProGet Free Edition. We don't intend to change that in our next version (v2022).
If you're able to log in or use it, then it's most certainly a bug in the new Security Management preview feature. Please don't rely on that, because it will probably not work in a newer version :)
Cheers,
Alana
@mcascone I don't think it's new; it's just used to specify the dependencies field in the manifest file. I'm thinking, perhaps, it might be similar/identical to the consumes field you added?
https://docs.inedo.com/docs/upack-universal-packages-manifest
The only thing Inedo tools use it for today is displaying information in ProGet, on the dependencies tab. That may or may not be helpful.
@ales-bahnik_2824 thanks; so that tells us the call is making it to ProGet OK, and there are no permissions issues. If there were permissions problems, you'd get an error like you did above.
Not sure what to do from here. I tried quickly to reproduce, and the package deletes fine:

```powershell
Invoke-RestMethod -Method Delete -Uri "http://proget.localhost/nuget/nugets/package/Newtonsoft.Json/13.0.1" -Headers @{"X-NuGet-ApiKey"="058fca3993cde88d771b142b876913a0a126f16b"}
```
The delete will not produce an error if the package doesn't exist, so maybe it was already deleted.
Otherwise I'm not sure how else to debug; the methods the API and web page call are exactly the same.
Hi @ales-bahnik_2824 , what happens if you provide an incorrect API key? Do you get an error message?
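For example, something like this (the same call as above, but with a bogus key) should come back with an HTTP error rather than succeeding:

```powershell
# If API-key authentication is actually being enforced, this should fail
# with an HTTP 4xx error instead of completing silently
Invoke-RestMethod -Method Delete -Uri "http://proget.localhost/nuget/nugets/package/Newtonsoft.Json/13.0.1" -Headers @{"X-NuGet-ApiKey"="not-a-real-key"}
```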
@kichikawa_2913 the best way to troubleshoot this would be to use a tool like Fiddler to compare/contrast the requests/responses from NuGet and ProGet. At this point, we're not sure why your ProGet is behaving differently than our ProGet, or NuGet.org. If you can share a .saz archive, we can try to look as well.
The registrations-gz/selenium.webdriver.chromedriver/index.json request is basically just asking for a list of versions. It's dynamically generated, and may not have the version NuGet wants/expects? It's really hard to guess...
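If you want to see exactly what your ProGet returns there, something like this may help (the hostname and feed name are placeholders for your installation, and you may need to add credentials if anonymous access is disabled):

```powershell
# Fetch the registration index and list the versions ProGet reports;
# if the pages aren't inlined, you'd need to fetch each page's @id too
$url = "https://proget.example.com/nuget/my-feed/v3/registrations-gz/selenium.webdriver.chromedriver/index.json"
$reg = Invoke-RestMethod $url
$reg.items.items.catalogEntry.version
```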
@kichikawa_2913 said in Package not found restoring from ProGet but works from nuget.org:
I wasn't seeing anything in the logs before but this is what I get now. There's an initial query that looks to return a 401 then a second query after the LDAP lookup that returns a 200.
That's considered an "authentication challenge"; in the first request, Visual Studio doesn't send any credentials (and the 401 is returned), so it retries with credentials (hence the 200). That isn't such an unusual pattern, and it seems to be operating fine.
What we're looking for is which requests in Visual Studio are giving results with no packages, etc. It should be obvious from the URL which package is being requested from ProGet.
Hi @kichikawa_2913,
Unfortunately I'm not able to reproduce this, and the packages are downloading just fine through ProGet, so it's a bit of guesswork as to where the issue could be.
Can you find errors on the ProGet-side around the time this is happening?
Can you isolate which specific requests are failing in ProGet? The easiest way to do this is with Fiddler (Visual Studio should automatically connect to it) -- and you can even share the failed requests as a .saz file so we can review as well...
Thanks,
Alana
Hi @ab-korneev_0401 ,
The npm API is very complicated behind the scenes, and third-party repositories often have bugs that cause odd behaviors when used via a connector. Unfortunately, it's really hard to say or figure out what's happening without debugging.
If you can provide us with reproduction instructions, then we can attach a debugger to ProGet and see where it's failing.
If http://upstream.address is sensitive or requires authentication, you can email the details to support at inedo dot com. Make sure to put [QA-797] in the title, and let us know when you've sent it so we can fish out the address.
Cheers,
Alana
Hi @Russell-Kahler_4399 ,
It looks like there was an error running the upgrade scripts to v4.7. That would have been a long time ago, so it's hard to guess what the cause could be, and it'd require a bit of analysis to figure out.
I'd recommend migrating to a new instance; you can use Feed Importers in the latest version to pull all the content from your existing instance.
It's possible to bypass the errors with `inedosql resolve-error --all`, but I really wouldn't recommend it, since it could lead to more problems.
Cheers,
Alana
@mcascone the drop path isn't really designed for this, and I wouldn't recommend that approach; it would have the effect of "overwriting" the packages, which may set new publish dates and cause server-side metadata (like package counts) to reset.
Also - if we change the path without changing anything else, will proget just start keeping new packages in the new path, and still be able to access the old path?
No; you'd get "package file not found" errors, since the disk path is always constructed from those values.
@mcascone said in Proget: retention policy for branches in a package:
This means mybranch and mybranch2 can be reduced to mybranch2.
Sorry but wouldn't this be the reverse:
*mybranch* will match *mybranch2*?
What I mean to say is... because it's an AND conditional, the *mybranch* is effectively ignored. Everything that matches *mybranch2* will also match *mybranch*, but the opposite isn't true; e.g. mybranch1 won't match both conditions.
@mcascone said in Proget: retention policy for branches in a package:
in this feed, delete matches of 'mybranch', except the latest 3 versions of those matches, which would only impact the versions matching mybranch and leave all other non-matches untouched;
Correct. And do note that you can run retention policies in dry-run mode, where nothing is deleted, to verify it's the behavior you want.
Hi @mcascone ,
Good question; the documentation isn't very clear. How does this sound? I just updated the docs :)
If you want to change the directories where your packages are stored, you'll also need to move/copy the contents from the current location to the new location. We generally recommend:
You can also keep ProGet online the entire time; this will just cause a number of "package file not found" errors if anyone tries to download the package before the transfer is complete.
Depending on how many package files you have, transferring may require a significant amount of time; you may not want ProGet to be offline or for users to experience errors during the process. In this case, we recommend first mirroring the files using a tool like robocopy /MIR a few times (just in case packages were uploaded during the initial copy), and then changing the settings in ProGet.
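For example (the paths here are made up; substitute your current and new storage paths):

```powershell
# Mirror the current package store to the new location; re-run it right
# before the cutover to catch any packages uploaded during the first pass
robocopy "D:\ProGet\Packages" "E:\NewStorage\Packages" /MIR
```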
Hi @mcascone!
To the "simple" question about the retention behavior, each retention rule starts by building a list of all packages. It loops over every package, and removes items from the list based on criteria you select. The packages not removed from the list are deleted. This ultimately has the effect of having everything be an "AND" in a single rule. This means *mybranch* and *mybranch2* can be reduced to*mybranch2*.
The rules run one after another. So the second rule would start with a new list, and elimate items based on what you checked off.
To the more complex question... why not just let the "dev" packages packages get messy? You can use a time- and/or usage-based rule. That might simplify things a lot. You can enable differential storage on Windows, which will reduce real space consumtion by like 90% or more.
Or maybe use a different feed? Just throwing ideas out :)
Cheers,
Alana
@can-oezkan_5440 thanks for letting us know what the issue was :)
Hopefully this is a trivial thing to fix in our code; we'll take a look and let you know!
@jan-primozic_9264 thanks for posting the update!
Please let us know if you can see a place for us to improve documentation :)
Hi @mcascone
Group names are optional with universal packages.
For example, we don't use them in this feed:
https://proget.inedo.com/feeds/BuildMasterTemplates
Unfortunately I'm not totally sure where the issue is, or how to troubleshoot the Jenkins plugin; it was created by the community, but I think it's possible to submit a pull request if there's an issue?
Perhaps this should check for a groupName, and not append the property if it's null?
Are there any errors on the ProGet side?
Thanks,
Alana
Hi @p-boeren_9744 ,
The documentation isn't very clear; I had to look this up in the code myself. If you set the allowed property, then a global rule is also created. Therefore, the following should work instead:
```json
{
  "licenseId": "package://@progress/kendo-react-grid/5.0.1/",
  "title": "package://@progress/kendo-react-grid/5.0.1/",
  "urls": [
    "package://@progress/kendo-react-grid/5.0.1/package/LICENSE.md"
  ],
  "allowedFeeds": ["NpmLicenseTest"]
}
```
Cheers,
Alana
Hi @bryan-ellis_2367 ,
I'm not an Azure DevOps expert, but last I checked, it's not possible to add NuGet package sources other than its own ADO Packages product or the public repositories. That may just refer to "upstream sources", but I'm not totally sure.
However, if you want to use the ADO Pipeline's built-in NuGet commands to publish packages, I guess you can set up a service connection using this?
Not totally sure -- but please let us know what you find :)
Cheers,
Alana
Hi @moriah-morgan_0490 ,
Glad to see the environment un-scoping worked! It's definitely possible to get environment-scoping of credentials to work... but it'd probably be best to confirm what you're looking to accomplish.
The main purpose of the environment-scoping is to enable limited management access to Otter. For example, users can edit/maintain all configuration except for production servers. Take a look at Multiple Environments per Server to see how the behavior works.
I assume you're running the Inedo Agent? We're still learning the exact privileges required ourselves.
We've seen systems restricted in very unexpected ways (and only in the field, of course, never our own environments), and they don't give any logical error messages. But we'd love to help you get this working, so we can document it.
Here's what we know so far:
Let us know what you find :)
Hi @mcascone,
We don't have that functionality in ProGet, but it should be pretty easy to do with a pair of Invoke-WebRequest PowerShell commands :)
You could probably parse/scrape the HTML and download / upload in bulk as well.
Please share the script if you end up writing it; it might be useful for other use cases as well!
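Something along these lines could be a starting point (just a sketch; the server URL, directory names, asset path, and API key are all placeholders):

```powershell
# Copy an asset between two asset directories using ProGet's Asset Directory
# API (GET/PUT against /endpoints/{directory}/content/{path})
$headers = @{ "X-ApiKey" = "<your-api-key>" }
$path = "folder/my-asset.zip"

Invoke-WebRequest "https://proget.example.com/endpoints/source-dir/content/$path" `
    -Headers $headers -OutFile "my-asset.zip"
Invoke-WebRequest "https://proget.example.com/endpoints/target-dir/content/$path" `
    -Method Put -InFile "my-asset.zip" -Headers $headers
```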
Cheers,
Alana
Hi @moriah-morgan_0490 ,
If you've created a Secure Credential (Username & Password) under Admin > Secure Credentials that's named TestCred2, then the OtterScript you presented should work. There are several challenges with getting impersonation to work on Windows in general (as I'm sure you've learned as a Windows admin), so we don't recommend using it unless you have to.
In your Otter configuration, my guess is that there is a problem with environment scoping; that can get a bit tricky - I would just set it to use (Any) environment for now.
As for "Secure" part of "Secure Credentials" - the password fields are stored as encrypted data, and you can only decrypt them with the Encryption Key (stored separately). They're also held in memory as securely as possible, and are "handed" to PowerShell as a PSCredential object.
The "Allow encrypted properties to be accessed" prevents OtterScript from exposing the passwords using the $CredentialProperty() function. For example, this would cause an error unless that was checked:
Log-Information $CredentialProperty(TestCred2, Password);
However, PowerShell has no such restriction surrounding credentials. For example, you can always just do this:

```powershell
$credtest = Get-Credential
Write-Host $credtest.GetNetworkCredential().Password
```
Microsoft's recommended way to handle this is to use Windows Authentication.
Our recommendation is to generally avoid passing important credentials to scripts (API keys, etc. seem fine)... but if you must, use a change process to ensure that you aren't running scripts that dump passwords like that.
Hope that helps,
Alana
Hi @mail_6495 ,
Looks like this was a regression with API Key Authentication; the uploader control improperly required an API key. This will be fixed via PG-2104 in this Friday's maintenance release.
Cheers,
Alana
Hi @mcascone
Looking closer, it doesn't appear that https://services.gradle.org/distributions/ is a Maven repository after all (no folder structure, missing metadata XML files)? It looks like just a regular web page (HTML) with links to files that can be downloaded (i.e. there's no API).
This seems like something you should make an asset directory for (though obviously a connector wouldn't be possible, since there's no API). They probably just prepend distributionUrl to a known file name, like gradle-7.3-bin.zip?
The error is definitely related to the SSL/HTTPS connection from Java (Gradle) to IIS (ProGet). It's certainly something you need to configure in Java, but I'm afraid I don't know exactly how -- it does seem to be a common question (see https://stackoverflow.com/questions/9210514/unable-to-find-valid-certification-path-to-requested-target-error-even-after-c).
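From what I understand, the usual fix on the Java side is to import the server's certificate into the JVM trust store that Gradle uses; here's a rough sketch (the alias, cert file, and trust-store path are assumptions, and the cacerts location varies by Java version):

```powershell
# Import the ProGet server's certificate into the default JVM trust store;
# "changeit" is the stock cacerts password
keytool -importcert -alias proget `
    -keystore "$env:JAVA_HOME\lib\security\cacerts" `
    -storepass changeit -file "C:\temp\proget.crt"
```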
After you fix that, you could probably make an asset directory. Please let us know how it goes; it would be nice to document!
Cheers,
Alana
Hi @mcascone ,
I'm almost certain that you can just set up a Maven feed/connector for this purpose -- please let us know, I'd love to update the docs to clarify.
You probably won't be able to "see" the packages via search (this requires an index that many repos don't have), only navigate to artifacts directly.
Cheers,
Alana
Hi @janne-aho_4082 ,
would it be possible to cache the authentication request and LDAP response for a short time
That definitely seems possible, but it's the sort of thing we'd want to implement in "v4" of this directory provider (not as a patch in a maintenance release). I meant to link to that last time, but here it is: https://docs.inedo.com/docs/installation-security-ldap-active-directory#ldap-ad-user-directories-versions --- but v4 is a little ways out.
switching from account credentials to api keys wouldn't happen over night
We definitely recommend this path going forward, particularly from a security standpoint: a leaked API key generally has a much smaller attack surface than leaked LDAP credentials.
Hi @janne-aho_4082 ,
Looking at your CEIP sessions, there are a lot of factors at play.
The biggest issue is that your LDAP responses are incredibly slow. We can see that a basic query to [1] find a user is taking 500-900ms, and a query to [2] find user groups is taking upwards of 7500ms. This is compounded by thousands of incoming requests, thousands of outgoing requests, relatively slow download times, and hardware at the minimum requirements. This all yields different/unpredictable performance, which is why you're seeing such varying results.
All told, it looks like ~70% of the time is going to LDAP queries (each request does the find user query), ~18% is going to outbound connections, and ~8% is going to the database (most to the "get package metadata" procedure).
There are a few "overload" points, where the OS is spending more time managing multiple things than actually doing those things; increasing CPUs ought to help.
So, at this point, I would recommend switching to Feed API keys (either a username:password key or a "Personal API Key"). This should yield a significant performance improvement overall. We can consider new ways of caching things in v4 of this directory provider... but with this kind of latency on your LDAP queries, it's best to just use Feed API keys.
Alana
Hi @mcascone ,
The ProGet Jenkins Plugin is designed for creating and publishing universal packages, so it won't work for assets.
The Asset Directory API is really simple though, and a simple PUT with curl or Invoke-WebRequest will do the trick. Hopefully that's easy enough to implement :)
Cheers,
Alana
Hi @moriah-morgan_0490 ,
We are working on documenting all of this much better, so thank you for bringing it up. The scenario you describe (using Otter as a script repository/execution center) is definitely possible, and something we are actively working on improving and making easier.
Otter can pass in variables, read your existing Comment-Based Help, and you can then build Job Templates around the variables. We have a tutorial about that here: https://docs.inedo.com/docs/otter-create-a-gui-for-scripts-with-input-forms
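For example, a script like this would get an input form generated from its help and param block (a hypothetical script; Restart-WebAppPool comes from the WebAdministration module):

```powershell
<#
.SYNOPSIS
Recycles an IIS application pool. Otter reads this comment-based help and
the param() block to build the job template's input form.
.PARAMETER PoolName
The name of the application pool to recycle.
#>
param(
    [Parameter(Mandatory = $true)]
    [string]$PoolName
)

Import-Module WebAdministration
Restart-WebAppPool -Name $PoolName
```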
As for Secure Credentials, no problem. Behind the scenes, this is handled through the $PSCredential function in OtterScript -- and now that I write this, I think we should add support for this to Job Templates.
Anyways, after uploading a script named MyScriptThatHasCredentials.ps1 to Otter, and creating a SecureCredential in Otter named defaultAdminAccount, you would just need to write a "wrapper" in OtterScript for it:
```
PSCall MyScriptThatHasCredentials
(
    User: $PSCredential(defaultAdminAccount)
);
```
Do you want the Otter Service and/or Inedo Agent to run as a gMSA? Sure, that's no problem as long as there's access; https://inedo.com/support/kb/1077/running-as-a-windows-domain-account
Cheers,
Alana
@janne-aho_4082 thanks!
The timing might be okay then.
npmjs.org will most certainly be faster; not just because they have a massive server farm compared to you, but because their content is static and unauthenticated.
ProGet content isn't static -- it also needs to proxy most requests to the connectors, because they answer questions like "what is the latest version of this package?" Turning on metadata caching on the connector will help, but I would still expect slower response times.
@janne-aho_4082 great, thanks!
Do you know what the old times were? I really don't know if 2-3 minutes for installing 1400 packages is unreasonable... that doesn't sound so bad to me, but I don't know.
If it's easy to try the older version, we can compare CEIP data on both.
Oh, and the easiest way to find your CEIP data is by the server/machine name... but it's probably best to submit it to the EDO-8231 ticket, since it's perhaps sensitive data.
@janne-aho_4082 I'm not really sure what always-auth does, but my guess is that it first tries a request with no authorization, receives a 401, then sends the authorization header. My guess is that it's unrelated; that initial 401 should be really quick if anonymous access isn't enabled.
rc.4 seems to only have PG-2094 and PG-2098... both unrelated to LDAP, and pretty minor. And you'll now have a "copy" button on the console :)
Hi @albert-pender_6390 ,
This is an internal Windows error, and happens when another process (usually a UI window) has an open session to a hive within the Windows Registry. It's a long-standing bug/issue with COM+ services (which Active Directory uses), and isn't really ProGet-specific.
It's a side effect of the ProGet upgrade process, which often stops/starts Windows services and IIS application pools. Ultimately, restarting will fix it (as you've noticed), but changing "Load User Profile" to "true" on the application pool is also known to fix it.
Best,
Alana
@paul_6112 well, as it turns out... this was actually trivial to fix!
It will make it into 7.0.19, scheduled for Feb 25th.
Hi @galaxyunfold ,
The "Timeout expired" errors are indeed a result of database or network connectivity issues. It's possible to create connector loops (A ->B -> C -> A) that will yield this behavior as well.
The "server too busy" is an internal IIS error, and it can be much more complicated. It's rarely related to load, and is more related to performing an operation during an application pool recycle. Frequently crashing application pools will see this error frequently.
There are a lot of factors that determine load, and how you configure ProGet (especially with connectors and metadata caching) makes a big difference. But in general, it starts to make sense at around 50 engineers. At 250+ engineers, it makes sense not to go load-balanced / high-availability.
Here is some more information: https://blog.inedo.com/proget-free-to-proget-enterprise
Cheers,
Alana
Hi @paul_6112 ,
Just an FYI that selecting the "Application" isn't refreshing/cascading to the list of releases or builds.
As a workaround, you can select Application, then hit the refresh button in your browser. This is a nontrivial update, but one we'll get fixed via BM-3777 in an upcoming release.
Cheers,
Alana
Hi @galaxyunfold ,
Based on the symptoms you're describing, it sounds like the problem is load-related. How many developers/machines are using this instance of ProGet?
When you have even a handful of engineers doing package restores with tools like npm, it's similar to a "DDoS" on the server -- the npm client tool makes hundreds of simultaneous requests, and the server then has to make database connections, and often connections out to npmjs.org, etc. The network queues get overloaded, and then you get symptoms like this.
See How to Prevent Server Overload in ProGet to learn more.
Ultimately, load-related issues come from a lack of network resources, not CPU/RAM. You can reduce connections (throttle end users, remove connectors, etc.), but the best bet is going with a high-availability / load-balanced configuration.
I would also recommend upgrading, as there have been a lot of performance improvements in the 4-5 years since ProGet v4 was released.
Alana