I've identified this and we'll get this fixed in the next maintenance release (shipping end of week, at the latest) as BM-3552

Posts made by atripp
-
RE: Not able to connect to YouTrack
We'll investigate this and get it working soon -- I think it's a UI-related bug with the new 6.2 changes to resource/credentials.
There's no configuration file to save -- but hopefully it will let you save, despite that error message? We use this version, and this extension internally as well.
-
RE: Understanding the API for NuGet Packages
The /api/json/NuGetPackages_GetPackages endpoint is a Native API endpoint; it wraps a stored procedure, and the easiest way to see exactly what data to pass into it (and how it behaves) is to check the database: see what the stored proc is doing, look at the underlying views, and try calling it.
For what you're looking to do, the NuGet API (/nuget/halo/Packages()) is probably what you want, and it uses connectors and the other configuration you've set up.
Why is it taking 11s? Maybe there's a bad connector you've configured. Those time out after 10 seconds by default.
-
RE: Not able to connect to YouTrack
What version of BuildMaster is this... 6.2.5? I believe it's just a UI problem with the editor...
Is it v1.0.1 of the YouTrack extension? This one still needs to be updated to use BuildMaster 6.2's Secure Resource + Secure Credentials.
-
RE: Support for R and CRAN
Thanks, noted :)
There was more demand for RPM/Yum packages, so we recently added those. Now we are focusing on ProGet 5.3, so perhaps after that we can reconsider this -- more community interest would go a long way... so if anyone else is reading this and interested, let us know.
-
RE: breaking forward slash inserted in downloadprogetpackage jenkins plugin
I'm not really sure; we don't maintain this plug-in (it's maintained by the community), but you might find the answer by poking around in the source code:
-
RE: Unauthorized - You must log in to perform this action
Unfortunately, the npm client does not support using Windows Integrated Authentication.
This means that, to get this working, you will need to create a second web site in IIS (pointing to the same directory) without Windows Authentication enabled.
-
RE: Retention rule quota is backwards. "Run only when a size is exceeded" deletes the specified amount of packages, instead of until feed is specified size.
No need to share your retention logs; another user submitted them via a ticket.
This will be fixed under PG-1671, scheduled for next release (two weeks from today).
-
RE: Helm 3 support
Hello; this is already a planned change (PG-1657), and it looks like it might make it into tomorrow's release!
-
RE: Retention rule quota is backwards. "Run only when a size is exceeded" deletes the specified amount of packages, instead of until feed is specified size.
Well, actually, now that I look closer (code shared below), maybe there is more to it. I see, in the last part of the code, some logic that seems to stop the deletion once the size trigger is met...
What do your retention logs say? That might give us some more info. I wonder if it's just stopping, like you say, after 20GB is met.
The code is pretty old, and maybe it's a bug that's gone unnoticed because, I suppose, over time, this would eventually reduce it down to 20GB or so...

```
private async Task RunRetentionRuleAsync(Tables.FeedRetentionRules_Extended rule)
{
    long feedSize = 0;
    if (rule.SizeTrigger_KBytes != null && !rule.SizeExclusive_Indicator)
    {
        this.LogDebug($"Rule has an inclusive size trigger of {rule.SizeTrigger_KBytes} KB.");
        this.LogInformation("Calculating feed size...");
        this.StatusMessage = "Calculating feed size...";
        feedSize = this.GetFeedSize();
        this.LogDebug($"Feed size is {feedSize / 1024} KB.");
        if ((feedSize / 1024) <= rule.SizeTrigger_KBytes.Value)
        {
            this.LogInformation("Feed is not taking up enough space to run rule. Skipping...");
            return;
        }
    }

    bool cachedOnly = rule.DeleteCached_Indicator;
    if (cachedOnly)
        this.LogDebug("Only delete cached packages.");

    bool prereleaseOnly = rule.DeletePrereleaseVersions_Indicator;
    if (prereleaseOnly)
        this.LogDebug("Only delete prerelease packages.");

    Regex keepRegex = null;
    if (!string.IsNullOrWhiteSpace(rule.KeepPackageIds_Csv))
    {
        this.LogDebug("Never delete packages that match " + rule.KeepPackageIds_Csv);
        keepRegex = BuildRegex(rule.KeepPackageIds_Csv);
    }

    Regex deleteRegex = null;
    if (!string.IsNullOrWhiteSpace(rule.DeletePackageIds_Csv))
    {
        this.LogDebug("Only delete packages that match " + rule.DeletePackageIds_Csv);
        deleteRegex = BuildRegex(rule.DeletePackageIds_Csv);
    }

    Regex keepVersionRegex = null;
    if (!string.IsNullOrWhiteSpace(rule.KeepVersions_Csv))
    {
        this.LogDebug("Never delete packages that match " + rule.KeepVersions_Csv);
        keepVersionRegex = BuildRegex(rule.KeepVersions_Csv);
    }

    Regex deleteVersionRegex = null;
    if (!string.IsNullOrWhiteSpace(rule.DeleteVersions_Csv))
    {
        this.LogDebug("Only delete packages that match " + rule.DeleteVersions_Csv);
        deleteVersionRegex = BuildRegex(rule.DeleteVersions_Csv);
    }

    bool lastUsedCheck = false;
    var keepSinceDate = default(DateTime);
    if (rule.KeepUsedWithin_Days != null)
    {
        keepSinceDate = DateTime.UtcNow.AddDays(-rule.KeepUsedWithin_Days.Value);
        lastUsedCheck = true;
        this.LogDebug($"Only delete packages that have not been requested in the last {rule.KeepUsedWithin_Days} days (since {keepSinceDate.ToLocalTime()})");
    }

    bool downloadCountCheck = false;
    int minDownloadCount = 0;
    if (rule.TriggerDownload_Count != null)
    {
        minDownloadCount = rule.TriggerDownload_Count.Value;
        downloadCountCheck = true;
        this.LogDebug($"Only delete packages that have been downloaded fewer than {minDownloadCount} times.");
    }

    if (rule.KeepVersions_Count != null)
        this.LogDebug($"Never delete the most recent {rule.KeepVersions_Count} versions of packages.");

    var matchingPackages = new Dictionary<string, List<TinyPackageVersion>>();
    var versionPool = new InstancePool<string>();

    this.LogInformation($"Finding packages that match retention rule {rule.Sequence_Number}...");
    this.StatusMessage = $"Finding packages that match retention rule {rule.Sequence_Number}...";

    foreach (var package in this.EnumeratePackages(cachedOnly, prereleaseOnly))
    {
        // skip noncached
        if (cachedOnly && !package.Cached)
            continue;
        // skip stable
        if (prereleaseOnly && !package.Prerelease)
            continue;
        // skip ids that match keep filter
        if (keepRegex != null && keepRegex.IsMatch(package.Id))
            continue;
        // skip ids that do not match delete filter
        if (deleteRegex != null && !deleteRegex.IsMatch(package.Id))
            continue;
        // skip versions that match keep filter
        if (keepVersionRegex != null && keepVersionRegex.IsMatch(package.Version))
            continue;
        // skip versions that do not match delete filter
        if (deleteVersionRegex != null && !deleteVersionRegex.IsMatch(package.Version))
            continue;
        // skip recently used packages
        if (lastUsedCheck && package.LastUsed >= keepSinceDate)
            continue;
        // skip packages that have been downloaded enough times
        if (downloadCountCheck && package.Downloads >= minDownloadCount)
            continue;

        List<TinyPackageVersion> versions;
        if (!matchingPackages.TryGetValue(package.Id, out versions))
        {
            versions = new List<TinyPackageVersion>(10);
            matchingPackages.Add(package.Id, versions);
        }

        versions.Add(new TinyPackageVersion(versionPool.Intern(package.Version), package.Size, package.Cached, package.Prerelease, package.Downloads, package.Extra));
    }

    int keepRecentVersionCount = rule.KeepVersions_Count ?? 0;
    Comparison<TinyPackageVersion> versionComparison = (p1, p2) => this.CompareVersions(p1.Version, p2.Version);

    foreach (var versions in matchingPackages.Values)
    {
        if (keepRecentVersionCount > 0 && versions.Count <= keepRecentVersionCount)
        {
            // make sure none of the versions are considered for deletion
            versions.Clear();
        }
        else
        {
            // sort from lowest to highest
            versions.Sort(versionComparison);
            if (keepRecentVersionCount > 0 && versions.Count >= keepRecentVersionCount)
            {
                // remove recent versions
                versions.RemoveRange(versions.Count - keepRecentVersionCount, keepRecentVersionCount);
            }
        }
    }

    if (rule.SizeTrigger_KBytes != null && rule.SizeExclusive_Indicator)
    {
        // finally have enough info to calculate matching size
        this.LogDebug($"Rule has an exclusive size trigger of {rule.SizeTrigger_KBytes} KB.");
        this.LogInformation("Calculating size of matching packages...");
        this.StatusMessage = "Calculating size of matching packages...";
        feedSize = matchingPackages.Values
            .SelectMany(v => v)
            .Sum(v => v.Size);
        this.LogDebug($"Size of matching packages is {feedSize / 1024} KB.");
        if ((feedSize / 1024) <= rule.SizeTrigger_KBytes.Value)
        {
            this.LogInformation("Matching packages are not taking up enough space to run rule. Skipping...");
            return;
        }
    }

    this.LogInformation("Getting count of matching packages...");
    this.StatusMessage = "Getting count of matching packages...";
    int matchCount = matchingPackages.Values.Sum(v => v.Count);
    this.LogDebug($"{matchCount} packages qualify for deletion under this rule.");

    var sortedMatches = from p in matchingPackages
                        from v in p.Value.Select((v2, i) => new { Id = p.Key, Version = v2, VersionIndex = i })
                        orderby v.Version.Cached descending, v.Version.Prerelease descending, v.VersionIndex
                        select v;

    this.LogInformation("Deleting matching packages...");
    this.StatusMessage = "Deleting matching packages...";
    if (this.retentionDryRun)
        this.LogDebug("Dry run mode is set; nothing will actually be deleted.");

    long kbToDelete = rule.SizeTrigger_KBytes ?? -1;
    long bytesDeleted = 0;
    int deletedCount = 0;

    foreach (var match in sortedMatches)
    {
        bytesDeleted += match.Version.Size;
        deletedCount++;
        this.LogDebug($"Deleting {match.Id} {match.Version.Version}...");
        if (this.retentionDryRun)
        {
            this.DryRunDeleted.Add((match.Id, match.Version));
        }
        else
        {
            try
            {
                await this.DeletePackageAsync(match.Id, match.Version);
            }
            catch (Exception ex)
            {
                this.LogWarning($"Could not delete {match.Id} {match.Version.Version}: {ex}");
            }
        }

        if (kbToDelete >= 0 && (bytesDeleted / 1024) >= kbToDelete)
        {
            this.LogDebug("Trigger size reached; stopping.");
            break;
        }
    }

    this.LogInformation($"Deleted {deletedCount} packages ({bytesDeleted / 1024} KB total).");
}
```
-
RE: Retention rule quota is backwards. "Run only when a size is exceeded" deletes the specified amount of packages, instead of until feed is specified size.
With a value of 20000, the retention rule will run only if there are at least ~20GB of packages. But how many packages/images actually get deleted... well, it really depends on the other rules.
Perhaps, after the run, the disk usage will still be more than 20GB (e.g. if no image tags match *alpha* or *beta*). Or perhaps it goes down to 0GB (because the feed is exclusively unused alpha/beta images).
Here is the code, for reference:
```
long feedSize = 0;
if (rule.SizeTrigger_KBytes != null && !rule.SizeExclusive_Indicator)
{
    this.LogDebug($"Rule has an inclusive size trigger of {rule.SizeTrigger_KBytes} KB.");
    this.LogInformation("Calculating feed size...");
    this.StatusMessage = "Calculating feed size...";
    feedSize = this.GetFeedSize();
    this.LogDebug($"Feed size is {feedSize / 1024} KB.");
    if ((feedSize / 1024) <= rule.SizeTrigger_KBytes.Value)
    {
        this.LogInformation("Feed is not taking up enough space to run rule. Skipping...");
        return;
    }
}
```
If you can think of a way to improve the documentation, please share it! We really want it to be clear so you don't have to waste time asking us or getting frustrated in the software :)
Maybe we can even link to this discussion in the docs page...
-
RE: Clean up Docker images
Not yet; I saw an internal presentation on it, but I don't know the communication plan.
Feel free to check with @apxltd directly... email or slack seem to be best ;)
-
RE: Docker: Need to verify which digest I need to remove a manifest related to a tag
Option 1. That digest references the blob which represents your manifest.
According to Docker's Content Digest docs, option 2 (the Docker-Content-Digest header) does not reference a blob; it's just a hash of the response itself.
-
RE: Does ProGet support Azure SQL databases?
We made and tested several changes to the installer a while back, but it's not something we regularly test/verify.
Please share what you find! Thanks.
-
RE: How to find out package disk space?
In ProGet 5.3, we plan to have a couple of tabs on each Tag (i.e. container image) that would provide this info: Metadata (a key/value list of a bunch of stuff), and Layers (details about each of the layers).
That might help, but otherwise, we have retention policies which are designed to clean up old and unused images. We'll also have a way to detect which images are actually being used :)
-
RE: [BUG - ProGet] Not able to remove container description
As @apxltd mentioned, we've got a whole bunch planned for ProGet 5.3.
I've logged this in our internal project document, and if it's easy to implement in ProGet 5.2 (I can't imagine it wouldn't be), we'll log it as a bug and ship it in a maintenance release.
Do note, this is not an IMAGE description, it's a REPOSITORY (i.e. a collection of images with the same name, like MyCoolContainerApp) description; this means the description will be there on all images/tags in the repository.
-
RE: [Question - ProGet] Are versions amount wrong ?
You're right, I guess that's showing the "layers" instead of the "tags"; I think it should be showing container registries separately (they're not really feeds), but that's how it's represented behind the scenes now.
Anyways, we are working on ProGet 5.3 now; there's a whole bunch of container improvements coming, so I've noted this on our internal project document to make sure we get a better display for container registries.
-
RE: 1 Warning, 1 Error: Connector Error, Unable to update cached data
Hello;
This error indicates that nuget.org is having some kind of networking/performance problems, and not responding to that request. NuGet.org is owned/maintained by Microsoft, so there's really nothing you can do, aside from wait for the problem to go away on their end.
-
RE: How to always execute Get-Asset in Role?
Great question!
The answer is, unfortunately, buried in the Formal Specifications. But long story short, you'll want to wrap the Get-Asset operation in a with executionPolicy = always block.
For more information, note that there are three modes of execution:
- Collect Only - only ICollectingOperation operations will run; if the operation is also an IComparingOperation, then drift may be indicated. All ensure operations implement both interfaces.
- Collect then Execute - a collection pass is performed as described above; if any drift is indicated, an execution pass is performed that runs:
  - operations that indicated drift
  - IExecutingOperation operations in the same scope as a drift-indicating operation that do not implement IComparingOperation (i.e., all execute actions)
  - operations with an execution policy of AlwaysExecute; this can only be set on a Context Setting Statement
- Execute Only - only IExecutingOperation operations will run; all ensure and execute operations implement this interface
So what's happening is that Get-Asset will never run in a Collect pass, whereas Ensure-DscResource will always run in a Collect pass (but only in Collection mode). By forcing Get-Asset to always execute, it will run even in the collect pass.
By the way: I would love to find a way to properly document the answer to this, so users don't get frustrated; any suggestions on where to edit the contents?
-
RE: Combine strings BuildMaster
Nice OtterScript :)
This will work, the variables won't "leak over" or anything like that.
-
RE: Combine strings BuildMaster
I think you want to use the $Eval function. Note that the grave apostrophe (`) is an escape character.

```
set $BuildMaster_Test_1 = Test;
set $Number = 1;
Log-Debug `$BuildMaster_Test_$Number;
Log-Debug $Eval(`$BuildMaster_Test_$Number);
```

So the output would be:

```
$BuildMaster_Test_1
Test
```
-
RE: Clean up Docker images
We've got some major container improvements coming in ProGet 5.3, and will revamp our product; hopefully we'll be able to present this pretty soon!
I think, once you see what we have planned, you'll want to change/improve your workflows to simplify things, and this may not even be necessary... anyways, stay tuned.
-
RE: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool
The ProGet Dockerfile is based on mono, which is the latest stable version; so, every maintenance release it's whatever the latest version is at the time.
-
RE: Combine strings BuildMaster
Hi Ali,
Sure, it would just be like $Variable2$Variable1 or ${Variable 2}${Variable 1}.
Check out the documentation on Strings & Values in OtterScript to learn more.
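For instance, a quick sketch (variable names and values are just for illustration):

```
set $Variable1 = World;
set $Variable2 = Hello;
Log-Debug $Variable2$Variable1;  # logs "HelloWorld"
```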
-
RE: Can proget be upgraded using an account that doesnt have db access?
The user running the install/upgrade needs to have dbowner rights on the ProGet database; otherwise, the installer will give a database access error.
-
RE: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool
@abm_4780 said in Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool:
Regarding using nginx, not exactly sure how this would help with Mono's network handling.
I really don't remember the details; it was a while ago. It had something to do with keep-alive connections, perhaps? It made no sense at all, but nginx fixed it. Later on, Mono fixed whatever bug caused it.
-
RE: docker pull from proget not working
Hello; you should be able to get the error message from Admin > Diagnostic Center, inside of ProGet. Hopefully that will give some insight as to where the underlying problem is...
-
RE: Error during package Upload
The error message is "cvs.badguy does not have the Feeds_AddPackage privilege", which means that the API key you've configured does not have that privilege. Please add it.
-
RE: ProGet in docker/linux hanging after using all memory
@patchlings said in ProGet in docker/linux hanging after using all memory:
Most "linux container progets" run mono.exe right?
Actually, all of them run mono.exe. We've had some people build a WINE version of our products as a container, but I'm not sure if that's any better...
-
RE: Cannot install Otter after uninstalling
Great, thanks for letting us know; the installer crashes when trying to read/parse the URL reservation (new CommonSecurityDescriptor() in the stack trace) to search for a conflicting one on the port you selected.
It shouldn't be possible, but it clearly is happening. So I guess we will add a try/catch around that.
-
RE: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool
We don't document how to set up the nginx proxy, but it's a fairly common setup, and the way to support HTTPS on Linux.
Yes, our plan is to move to .NET 5 as Microsoft comes closer to releasing it and it's proven stable (likely next year).
-
RE: ProGet in docker/linux hanging after using all memory
Hello; we haven't had any other users report this, so I'm afraid I don't have any ideas on how to help. It certainly sounds like a memory leak, and it's most definitely a mono-specific bug; unfortunately these are extremely hard to track down, and sometimes are even platform-specific (i.e. depending on the host operating system).
I would make sure to upgrade to the latest version of the container image.
If you're not already using SQL Server for Linux (and you're using Postgres), then switch to SQL Server.
I would simplify the configuration, e.g. if you have a lot of connectors.
Try putting an nginx proxy in front of it.
Once we have a clue about where the mono bug is, we can at least consider ways to work around it.
-
RE: Clean up Docker images
Hello; sorry for the slow reply; we still don't get notifications on replies to old posts... we may block replying to them, but in the meantime...
I think ProGet does support the deletion endpoint now (PG-1632), but just for manifests. Is there an official DELETE tag API?
-
RE: Import/Export Application from Buildmaster Enterprise to Buildmaster Free Version
To simplify the import/export options, BuildMaster 6.2 only supports backing up to / restoring from a "Package Source" (i.e. a ProGet feed); we may add support for using a disk-based package source instead, but for now it's only a ProGet Universal feed.
BuildMaster 6.1.5 lets you back-up to a feed URL.
Note that BuildMaster 6.1.25 also has "Package Sources" (as a preview feature), which you can use to back-up all of your applications if you'd like.
-
RE: Cannot install Otter after uninstalling
This error is related to URL reservations; sometimes this happens when programs interact with the url registry.
You can use netsh http show urlacl to help identify where the problems are, and netsh http delete urlacl <bad-url> to try to remove them.
Here are some links that might be helpful:
- https://serverfault.com/questions/822207/how-does-url-reservation-actually-work-in-windows-particularly-the-acls
- https://windowsserver.uservoice.com/forums/295071-management-tools/suggestions/36083521-wac-is-causing-503-the-service-is-unavailable-erro
Please let us know what you find!
-
RE: Composer/Packagist feeds
Here's the current state of this feed type:
We did a pretty deep dive into PHP/Composer packages a while back, and our conclusion was that they would be very difficult to implement due to the way they tightly integrate with git repositories.
However, we did this assessment without any user partners, and we know next to nothing about PHP, so it could be we misunderstood or looked at the wrong things. Maybe not everyone uses the tight git-repository integration? Hard to say. This is why we partner with customers now.
Since then, there haven't been too many requests for it, and we have no idea what the level of interest is. Please add to QA#2690 if you've got some insight. You're the first person to inquire about it in over two years... but that same document talks about how we partner with users, and I'd encourage you to check out the RPM Thread -- we've got some great user partners in that!
-
RE: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool
Hello; this is a sign that the network connection is being overloaded.
Ultimately your best bet is to use load balancing; see How to Prevent Server Overload in ProGet to learn more.
But I've heard that putting an nginx reverse proxy in front of the Linux container helps (due to some poor network handling/bugs in Mono's code), or moving to the Windows/IIS stack.
-
RE: Import/Export Application from Buildmaster Enterprise to Buildmaster Free Version
Oh I see! Thanks; that would be a nice place to put it; we have a lot of links on that page, and are trying to reorganize it...
-
RE: Import/Export Application from Buildmaster Enterprise to Buildmaster Free Version
BuildMaster is licensed per user, so if the same group of users will be using these instances, then you can use the same key.
In any case, import/export is also in the free edition. What version are you using? It should be on the Admin page.
-
RE: Upgrade to Buildmaster 6.2 not possible :(
@PhilippeC sorry about that, but it should be available now;
It's a very exciting release, but we really wanted to roll the upgrades out slowly, and there was an inconsistency between the Hub's upgrade-availability logic (for installation) and BuildMaster 6.1's logic (for notification).
Don't forget to check out the upgrade notes - https://inedo.com/support/kb/1766/buildmaster-6-2-upgrade-notes
-
RE: Polling Inedo Agents
Hello; the agents don't currently support this, though this is something that we've considered for a long time -- some of our key customers have requested this as well.
However, we've developed some interesting technical alternatives that make pull-based agents largely moot (at least according to the folks who requested it originally); for example, Romp and universal packages allow the client to self-install, or you can have an in-house BuildMaster or Otter instance that manages installations based on packages.
-
RE: Error attempting to set GitHub Build status
Hello; this was fixed in v1.4.3 of the GitHub extension, but as a workaround you can just set BaseUrl in Admin > Advanced Settings.
-
RE: Proget filesystem access
HTTP should be about the same speed as FTP; you'll almost certainly need to use chunked uploads.
But Asset directories don't support drop folders, and we don't have a reindexing function for asset directories. So unfortunately there's no supported way to handle this.
You might be able to "hack" something by going to the database and filesystem directly, but we obviously can't recommend it.
-
RE: Proget docker delete manifest api request fails
I think so; that's a Postgres error message.
We deprecated Postgres a long while back, and new features aren't tested against Postgres database code.
-
RE: PyPI package not shown in search results accessible via url
Interesting; I know it's not ideal, but it works, and it may be only a slight inconvenience, since I don't think many people search for packages from the UI.
https://github.com/Inedo/inedo-docs/blob/master/ProGet/feeds/pypi.md
If anyone finds more issues with this, please let us know and we can consider investing in a proper fix.
-
RE: Error attempting to Tag a Docker Image
What user principal are you running the Inedo Agent under? The default is LOCAL SYSTEM.
-
RE: PyPI package not shown in search results accessible via url
Unfortunately, we didn't totally understand that detail when implementing the feed, either... so unfortunately it's not trivial to fix.
We'd like to gauge the impact of not changing it; aside from this search oddity, were there any other problems? Are packages not installing?
-
RE: PyPI package not shown in search results accessible via url
I'm not very familiar with PyPI packages, but I know there are some oddities with - and _: they are sometimes supposed to be treated the same, and sometimes not. We don't totally understand all the rules, to be honest (even after reading the PEP 503 specification).
In this case, the package is actually websocket_client, not websocket-client.
See: https://pypi.org/project/websocket_client/
When you search for websocket_client in ProGet, it shows up, as expected.
-
RE: Maven Feed can't find maven-metadata.xml
Hello; I think this may have already been addressed in PG-1477, which shipped in ProGet 5.2.0.
Can you try upgrading to the latest version and trying again?
-
RE: ProGet return unlisted packages in visual studio
Hi;
It's hard to say why this is happening; if you don't see it in the UI, you shouldn't see it in Visual Studio.
It could be related to cached packages in Visual Studio. The best way to diagnose this would be to attach Fiddler or Wireshark, so that Visual Studio is going through that, and then monitor the exact queries that Visual Studio is sending to ProGet, and find what is actually returning those packages. If so, then please share the details and we can try to investigate.
Otherwise, clear all of your local NuGet caches.
Best,
Alana