    Inedo Community Forums

    Posts made by atripp

    • RE: Proget: docker login returns unauthorized

      It's really hard to say. This doesn't seem to be impacting others... we can't repro it... and I'm not sure what else it could be.

      If you can get us some very specific details about it, perhaps using some sort of Fiddler trace, then we can try to investigate further.

      posted in Support
      atripp
      atripp
    • RE: RPM upload fails - 42P01: missing FROM-clause entry for table "rpv"

      Is there an error in ProGet > Admin > Diagnostic Center that corresponds to this? Can you provide the full stack trace?

      posted in Support
      atripp
      atripp
    • RE: Update failed to Proget 5.1.23

      Hello;

      It's a problem with the database... there was probably a failed change script a while ago. But beyond that, diagnosing/troubleshooting isn't going to be trivial and probably isn't worth the time, especially since Postgres has been deprecated in 5.2 and won't be available at all in 5.3.

      So, I would recommend taking the opportunity to first move to SQL Server (i.e. install a new instance w/ the same version that uses SQL Server). You can use this guide to migrate feeds: https://inedo.com/support/kb/1168/proget-feed-migration

      Then, you can perform the upgrade to 5.2 on the new instance.
      You can migrate feed-by-feed.

      posted in Support
      atripp
      atripp
    • RE: `dotnet nuget push --skip-duplicate` does not work as expected

      We have a new major release coming up, 5.3. So perhaps, we'll just change the response there.

      And if it actually causes a problem, and others have trouble updating their own scripts, then we can consider adding a flag.

      posted in Support
      atripp
      atripp
    • RE: $PSCredential- round two

      @Jonathan-Engstrom said in $PSCredential- round two:

      Same code, same machine.

      Same user?

      @Jonathan-Engstrom said in $PSCredential- round two:

      it would help to understand why.

      FYI -- Otter doesn't format PowerShell; it parses the PowerShell script (using Microsoft's parser), looks for variable tokens, and "injects" a variable into the runtime if there's a matching variable.

      The interactive PowerShell host (which you're using, ps.exe) also does things differently. There are layers upon layers with Active Directory, so it'll take some trial and error to find out what's happening.
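
      To illustrate that injection behavior, here's a minimal sketch (the $ApiKey variable name is just made up for the example): Otter parses the PSExec script text, finds the $ApiKey token, and injects a matching PowerShell variable into the runtime.

      set $ApiKey = "abc123";
      PSExec >>
          # Otter sees the $ApiKey token below and injects a matching PowerShell variable at runtime
          Write-Host "Key is $ApiKey"
      >>;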

      posted in Support
      atripp
      atripp
    • RE: `dotnet nuget push --skip-duplicate` does not work as expected

      Overwriting packages in NuGet.org is totally impossible, but it's possible in ProGet if you have the correct privileges. This is why ProGet returns a 401 (not authorized). We're a bit hesitant to change this, but will consider it in a major release.

      To be honest, this flag on dotnet nuget push doesn't make much sense, even for NuGet.org. What's the use case? What sort of build process is used that doesn't generate new package version numbers on each build?

      We don't like encouraging poor workflows. For example, this is why we don't have an "easy package delete" function; it isn't something you should be doing often enough to need a quick delete, and if you are, you're probably doing it wrong.

      So hopefully you can help us understand this workflow and why it would make sense to use it?

      posted in Support
      atripp
      atripp
    • RE: NPM Connector to Azure DevOps

      Unfortunately we don't have any documentation specifically for Azure DevOps NPM feeds; they change too often for us to keep track. We did try/test it at one point, a while back, but our code for this feature hasn't substantially changed since then.

      It's supposed to be as simple as an empty username and a token (PAT) as your password. That's used to request a Bearer token from the npm API, which is then sent back in a header.

      I looked at their docs, and it says "username (can be anything except empty), PAT, and email". Not sure why they require a username. Do they look you up by email? Weird.

      Anyways, that's strange. So I guess I'd also try EMAIL + PAT. That should also work.

      posted in Support
      atripp
      atripp
    • RE: Getting windows service status into variable

      Great to hear!

      Well, of course, you could store the information about which services are active in BuildMaster. I would imagine it's not something that changes very often, so perhaps this is OK?

      One approach might be to make a server role called hdars-service and then add a variable to that role called $ActiveServer with the server name. You would set a pipeline target to run deploy-hdars against the hdars-service role, and in that plan, just do an if $ServerName == $ActiveServer check to decide whether to start/stop the service, or whatever.
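
      Just to illustrate that idea, a rough OtterScript sketch (only a sketch; $ActiveServer is the role variable described above, and "Hdars" is a made-up service name):

      if $ServerName == $ActiveServer
      {
          Log-Information "Active server - starting the service...";
          PSExec >> Start-Service Hdars >>;
      }
      else
      {
          Log-Information "Not the active server - stopping the service...";
          PSExec >> Stop-Service Hdars >>;
      }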

      Otherwise, I think you'd use PowerShell of some sort, whether $PSEval or making an OtterScript module with an output variable.

      posted in Support
      atripp
      atripp
    • RE: Nuget install prompting for credentials

      The first thing that comes to mind is Integrated Windows Authentication; you often won't be able to authenticate unless your computer is on the domain. This is a limitation/feature of Windows (i.e. it's by design to work this way).

      As a work-around, you can set up a second site that doesn't have IWA enabled, and point it to the same folder on disk.

      posted in Support
      atripp
      atripp
    • RE: NPM Connector to Azure DevOps

      Hello;

      I'm not really familiar with how Azure DevOps implements npm authentication, but it seems there's definitely a problem. The endpoint is not returning JSON, and it's giving a 400 error; my guess is that it's an empty response, so who knows what the error message is. I wonder if the URL is wrong? Are there AzureDevOps logs you can inspect?

      Otherwise it's hard to say where the problem is with Azure DevOps, because the behavior changes all the time. In the past, I've seen 400 errors come and go; sometimes it's their way of telling you that you've configured a token wrong, or they have a bug in their implementation of the npm API (this is common), so maybe try a different request? Both errors are happening while trying to index the connector, so try searching, etc.

      In any case, the easiest thing to do would be to replicate the way ProGet does authentication. Here's how we handle it in ProGet; first, the logic to determine if a bearer token should be used...

          protected override async Task<HttpWebRequest> CreateWebRequestAsync(string url)
          {
              var request = await base.CreateWebRequestAsync(url).ConfigureAwait(false);
      
              if (this.Password != null && (string.IsNullOrEmpty(this.UserName) || this.UserName.Contains('@')))
              {
                  var bearerToken = await this.BearerAuthToken.ValueAsync.ConfigureAwait(false);
                  if (bearerToken != null)
                  {
                      request.Headers.Set(HttpRequestHeader.Authorization, "Bearer " + bearerToken);
                  }
              }
      
              request.Accept = "application/json";
      
              return request;
          }
      

      And the bearer-token acquisition logic....

          private async Task<string> RequestBearerAuthTokenAsync()
          {
              if (string.IsNullOrEmpty(this.UserName))
              {
                  return AH.Unprotect(this.Password);
              }
      
              var request = await base.CreateWebRequestAsync(this.ResolveUrl("-/user/org.couchdb.user:" + Uri.EscapeDataString(this.UserName))).ConfigureAwait(false);
              request.Accept = "application/json";
              request.ContentType = "application/json";
              request.Method = "PUT";
      
              using (var requestStream = await request.GetRequestStreamAsync().ConfigureAwait(false))
              using (var writer = new StreamWriter(requestStream, InedoLib.UTF8Encoding))
              using (var jsonWriter = new JsonTextWriter(writer))
              {
                  jsonWriter.WriteStartObject();
                  jsonWriter.WritePropertyName("_id");
                  jsonWriter.WriteValue("org.couchdb.user:" + this.UserName);
                  jsonWriter.WritePropertyName("name");
                  jsonWriter.WriteValue(this.UserName);
                  jsonWriter.WritePropertyName("password");
                  jsonWriter.WriteValue(AH.Unprotect(this.Password));
                  jsonWriter.WritePropertyName("email");
                  jsonWriter.WriteValue(this.UserName);
                  jsonWriter.WritePropertyName("type");
                  jsonWriter.WriteValue("user");
                  jsonWriter.WritePropertyName("roles");
                  jsonWriter.WriteStartArray();
                  jsonWriter.WriteEndArray();
                  jsonWriter.WritePropertyName("date");
                  jsonWriter.WriteValue(DateTimeOffset.UtcNow);
                  jsonWriter.WriteEndObject();
              }
      
              using (var response = await request.GetResponseAsync().ConfigureAwait(false))
              using (var responseStream = response.GetResponseStream())
              using (var reader = new StreamReader(responseStream, InedoLib.UTF8Encoding))
              using (var jsonReader = new JsonTextReader(reader))
              {
                  var result = JObject.Load(jsonReader);
                  return (string)result.Property("token")?.Value;
              }
          }
      

      It doesn't look like the error is happening when requesting a token, just when making the main request. Hope this is a good start at least!

      posted in Support
      atripp
      atripp
    • RE: Feature Request: Please add inside each Role which servers have drifted, and which ones are compliant.

      Thanks for tracking those down; I updated the owner on them :)

      posted in Support
      atripp
      atripp
    • RE: Feature Request: Please add inside each Role which servers have drifted, and which ones are compliant.

      Yes; this was a result of the forums migration. If you find other posts we can change the owner.

      posted in Support
      atripp
      atripp
    • RE: Simplify running script assets in Otter Configurations (PSEnsure)

      It's designed to download a file to disk; so naming aside (pretend it's called Write-Asset), it's still writing a file to disk, and thus it won't run in the collect phase.

      As I mentioned, the feature wasn't designed to behave the way you'd like; most users will create/publish PowerShell module packages.

      What you're describing is nontrivial from a design perspective, and we'd like to really consider it for the next major release of Otter, which is going to focus on things like "Compliance as Code". Based on some examples you shared, and how I've seen your Otter usage, I think you'll find those a lot more useful. We'll be exploring this mid-Q2.

      In the meantime, I recommend just maintaining the assets on disk somehow, and importing them that way.

      posted in Support
      atripp
      atripp
    • RE: WinRM issues

      There haven't been any other feature requests for this, and we haven't researched the feasibility of doing it; however, we did add Impersonation and Process Isolation, both of which were in-demand features.

      posted in Support
      atripp
      atripp
    • RE: API Endpoint URL Errors: Could not establish trust relationship for the SSL/TLS secure channel.

      I don't know if it will be sufficient to provide access to only http://files.pythonhosted.org/ and https://pypi.org/ - it's very possible that the file hosting locations will change. This is the case on a lot of other package galleries, including NuGet.org.

      If ProGet can only download a portion of the files, then, there will probably be some strange errors. It's too difficult to generalize, so if you can provide us with a specific package and a specific reproduction situation, we'll be happy to try it.

      But, as the error says, ProGet only supports wheel/source packages; the legacy "egg" format (10+ years old, I think) is not supported. Your developers should be able to help convert it, and you can rehost the handful of "egg" packages as needed.

      As far as other errors you may see... don't think of the "Diagnostic Center" as a "checklist of things to fix"; it's there to help diagnose a problem... and unless you have one reported by a user, it's probably fine. Errors can come from so many sources (including temporary network outages, users typing in wrong URLs/passwords, etc).

      posted in Support
      atripp
      atripp
    • RE: API Endpoint URL Errors: Could not establish trust relationship for the SSL/TLS secure channel.

      This error means that the package file data (i.e. what is being returned by the URL that ProGet is instructed to download from) is invalid. So I'm thinking some sort of intermediary is blocking/rewriting these requests.

      Sometimes, I see firewalls/proxies inspecting the contents, and then displaying "this content is blocked by corporate firewall" instead. The proxy should return an error status code, but sometimes it's just a 200. So ProGet expects package data, but instead gets random HTML.

      You should be able to do this:

      1. Create PyPi Feed
      2. Add Connector to PyPi.org
      3. Pull package (I used girth since it was at the top of the list)

      If that's not working, then something is blocking the download from pypi.org.

      posted in Support
      atripp
      atripp
    • RE: Proget: docker login returns unauthorized

      Hello; this should get resolved in PG-1676, which is scheduled for the next maintenance release.

      posted in Support
      atripp
      atripp
    • RE: docker pull from proget not working

      Hello; this should get resolved in PG-1676, which is scheduled for the next maintenance release.

      posted in Support
      atripp
      atripp
    • RE: Simplify running script assets in Otter Configurations (PSEnsure)

      @Jonathan-Engstrom said in Simplify running script assets in Otter Configurations (PSEnsure):

      I am not sure why they are so different, or would be designed in such a manner to disallow this to work.

      The collect pass is "read only", and isn't supposed to change any server configuration at all. A Get-Asset operation always changes configuration (it always puts a file on disk), so by design, it doesn't run during the collect-pass.

      I know a lot of people use PowerShell module packages (i.e. things you install on a server), but I can see why using assets for this would be nice. It was never a design we had considered, which is why we'd need to modify the operations to do an automatic import.

      I guess the only alternative I can think of right now is to store the assets on disk, and use a sort of Ensure-Asset in another plan to do that. Then refer to it by path, like `c:\modulestore\myasset.ps1` or something.

      posted in Support
      atripp
      atripp
    • RE: Proget Docker Nuget creating extra empty folder with different case.

      While this is sub-optimal behavior, it's the first report of this issue/bug, and the impact seems relatively small; tracking it down and fixing it may not be trivial, and might even introduce a regression.

      Prior to back-up, how about running a script that just deletes the empty folders? If we get more reports of this, or hear about a wider impact, we'll absolutely investigate further.

      posted in Support
      atripp
      atripp
    • RE: Simplify running script assets in Otter Configurations (PSEnsure)

      Hi Jonathan,

      First, I could see how this could be useful as a first-class feature, perhaps as an option on a PSEnsure operation. But that's probably more complicated, and I think what you're doing may be close. If you can get it working, you could always write an OtterScript module that has a similar outcome. Something like this...

      call PSExecWithMods(>>
         powershell-here-that-uses-assets-as-modules
      >>);
      

      But before getting there, can you shed some more light on this?

      they refuse to load as PowerShell Modules sometimes for no apparent reasons

      Are you saying that a pattern like the sample script you shared sometimes works, and sometimes doesn't? If so, the first thing that comes to mind is the dual-pass nature of configuration plan executions; perhaps that explains it, but the code also feels a little "strange" to me.

      • Get-Asset will never run in the first pass (Collection), because it's an Execute-only operation
      • The configuration you have for PSEnsure will always cause drift because Collect returns "false" and the expected value is True

      If you always want the block to run in the Execution pass, then you could do this...

      with executionPolicy=always
      {
          Get-Asset $ModuleName.ps1
          (
              Type: Script    
          );
          set $ModuleNamePath = $PathCombine($WorkingDirectory, $ModuleName.ps1);
          PSExec >> 
              Import-Module -Name "$ModuleNamePath" -Verbose
              ....
          >>;
      }
      posted in Support
      atripp
      atripp
    • RE: API Endpoint URL Errors: Could not establish trust relationship for the SSL/TLS secure channel.

      Hello;

      Perhaps this is it? https://inedo.com/support/kb/1161/tls-v12-configuration-and-connection-errors

      If not, it must be some sort of internal connection problem. ProGet does not operate at the SSL/TLS level; that's all handled by the operating system. So hopefully the network admins can help diagnose it some more. I've seen problems with certificate servers and trust cause issues as well.

      posted in Support
      atripp
      atripp
    • RE: Migrating Otter and Git repository to new Server 2016 machine

      Hello;

      For this, we recommend just doing a Backup and Restore. Of course, Detach/Reattach is effectively the same thing, but don't forget the config.

      For the Git-based raft, the Database just contains the connection information to the Bitbucket repository... so it will just come over with the database.

      posted in Support
      atripp
      atripp
    • RE: docker pull from proget not working

      Hello; can you try updating ProGet? Actually, it seems 5.2.5 wasn't pushed/available until just a few days ago.

      posted in Support
      atripp
      atripp
    • RE: Python Conda Channels Support

      Thank you very much for the additional insight; it will go a long way in making a business case for investing resources in developing such a new feed type.

      This isn't on our short-term roadmap in any case, but we will review in the coming months; hopefully within this time other folks in the community will respond.

      posted in Support
      atripp
      atripp
    • RE: Creating a v6 application to monitor Azure DevOps repository and perform a simple build; can anyone help or offer suggestions?

      Not a basic question at all; we're always working to make BuildMaster as easy to use as possible, so getting feedback on how to improve our docs and tutorials is great :)

      If you haven't seen it already, we've recently updated our Azure DevOps Integration Documentation; there's a lot of content there, and we hope to continue to make it easier and include tutorials at some point.

      Basically though, I recommend you start with Create New Application > Azure DevOps CI/CD. That will create an application with the Azure DevOps CI/CD Template, and you can see how a "full CI/CD" would work. It should just build "out of the box", and use our publicly-available AzureDevOps repository.

      What you're describing seems like just a small portion of that. Once you've got the manual process working, it's just a matter of creating a Git repository monitor; you can configure that to monitor the branches in your AzureDevOps repository.

      https://docs.inedo.com/docs/buildmaster/ci-cd/continuous-integration/build-triggers/repository-monitors

      posted in Support
      atripp
      atripp
    • RE: Not able to connect to YouTrack

      I've identified this and we'll get this fixed in the next maintenance release (shipping end of week, at the latest) as BM-3552

      posted in Support
      atripp
      atripp
    • RE: Not able to connect to YouTrack

      We'll investigate this and get it working soon -- I think it's a UI-related bug with the new 6.2 changes to resource/credentials.

      There's no configuration file to save -- but hopefully it will let you save, despite that error message? We use this version, and this extension internally as well.

      posted in Support
      atripp
      atripp
    • RE: Understanding the API for NuGet Packages

      The /api/json/NuGetPackages_GetPackages endpoint is a Native API endpoint; it wraps a stored procedure, and the easiest way to see exactly what data to pass into it (and how it behaves) is to check the database, and see what the stored proc is doing, look at the underlying views, and try calling it.

      For what you're looking to do, the NuGet API (/nuget/halo/Packages()) is probably what you want, and it uses connectors and the other configuration you've setup.

      Why is it taking 11s? Maybe there's a bad connector you've configured. Those time out after 10 seconds by default.

      posted in Support
      atripp
      atripp
    • RE: Not able to connect to YouTrack

      What version of BuildMaster is this... 6.2.5? I believe it's just a UI problem with the editor...

      Is it v1.0.1 of the YouTrack extension? This one still needs to be updated to use BuildMaster 6.2's Secure Resource + Secure Credentials.

      posted in Support
      atripp
      atripp
    • RE: Support for R and CRAN

      Thanks, noted :)

      There was more demand for RPM/Yum packages, so we recently added those. Now we are focusing on ProGet 5.3, so perhaps after that we can reconsider this -- if we get more community interest, that will go a long way... so if anyone else is reading this and interested, let us know.

      posted in Support
      atripp
      atripp
    • RE: breaking forward slash inserted in downloadprogetpackage jenkins plugin

      I'm not really sure; we don't maintain this plug-in (it's by the community), but you might find the answer by poking around the source code:

      https://github.com/jenkinsci/inedo-proget-plugin

      posted in Support
      atripp
      atripp
    • RE: Unauthorized - You must log in to perform this action

      Unfortunately, the npm client does not support using Windows Integrated Authentication.

      This means that, to get this working, you will need to create a second web site in IIS (pointing to the same directory) without Windows Authentication enabled.

      posted in Support
      atripp
      atripp
    • RE: Retention rule quota is backwards. "Run only when a size is exceeded" deletes the specified amount of packages, instead of until feed is specified size.

      No need to share your retention logs; another user submitted them via a ticket.

      This will be fixed under PG-1671, scheduled for next release (two weeks from today).

      posted in Support
      atripp
      atripp
    • RE: Helm 3 support

      Hello; this is already a planned change (PG-1657), and it looks like it might make it into tomorrow's release!

      posted in Support
      atripp
      atripp
    • RE: Retention rule quota is backwards. "Run only when a size is exceeded" deletes the specified amount of packages, instead of until feed is specified size.

      Well, actually, now that I look closer (code shared below... ), maybe there is more to it. I see, in the last part of the code, some logic that seems to stop the deletion once the size trigger is met...🤔

      What do your retention logs say? I guess that might give us some more info. I wonder if it's just stopping, like you say, once the 20GB trigger is met.
      The code is pretty old, and maybe it's a bug that's gone unnoticed because, I suppose, over time this would eventually reduce things down to 20GB or so...

          private async Task RunRetentionRuleAsync(Tables.FeedRetentionRules_Extended rule)
          {
              long feedSize = 0;
              if (rule.SizeTrigger_KBytes != null && !rule.SizeExclusive_Indicator)
              {
                  this.LogDebug($"Rule has an inclusive size trigger of {rule.SizeTrigger_KBytes} KB.");
                  this.LogInformation("Calculating feed size...");
                  this.StatusMessage = "Calculating feed size...";
      
                  feedSize = this.GetFeedSize();
                  this.LogDebug($"Feed size is {feedSize / 1024} KB.");
      
                  if ((feedSize / 1024) <= rule.SizeTrigger_KBytes.Value)
                  {
                      this.LogInformation("Feed is not taking up enough space to run rule. Skipping...");
                      return;
                  }
              }
      
              bool cachedOnly = rule.DeleteCached_Indicator;
              if (cachedOnly)
                  this.LogDebug("Only delete cached packages.");
      
              bool prereleaseOnly = rule.DeletePrereleaseVersions_Indicator;
              if (prereleaseOnly)
                  this.LogDebug("Only delete prerelease packages.");
      
              Regex keepRegex = null;
              if (!string.IsNullOrWhiteSpace(rule.KeepPackageIds_Csv))
              {
                  this.LogDebug("Never delete packages that match " + rule.KeepPackageIds_Csv);
                  keepRegex = BuildRegex(rule.KeepPackageIds_Csv);
              }
      
              Regex deleteRegex = null;
              if (!string.IsNullOrWhiteSpace(rule.DeletePackageIds_Csv))
              {
                  this.LogDebug("Only delete packages that match " + rule.DeletePackageIds_Csv);
                  deleteRegex = BuildRegex(rule.DeletePackageIds_Csv);
              }
      
              Regex keepVersionRegex = null;
              if (!string.IsNullOrWhiteSpace(rule.KeepVersions_Csv))
              {
                  this.LogDebug("Never delete packages that match " + rule.KeepVersions_Csv);
                  keepVersionRegex = BuildRegex(rule.KeepVersions_Csv);
              }
      
              Regex deleteVersionRegex = null;
              if (!string.IsNullOrWhiteSpace(rule.DeleteVersions_Csv))
              {
                  this.LogDebug("Only delete packages that match " + rule.DeleteVersions_Csv);
                  deleteVersionRegex = BuildRegex(rule.DeleteVersions_Csv);
              }
      
              bool lastUsedCheck = false;
              var keepSinceDate = default(DateTime);
              if (rule.KeepUsedWithin_Days != null)
              {
                  keepSinceDate = DateTime.UtcNow.AddDays(-rule.KeepUsedWithin_Days.Value);
                  lastUsedCheck = true;
                  this.LogDebug($"Only delete packages that have not been requested in the last {rule.KeepUsedWithin_Days} days (since {keepSinceDate.ToLocalTime()})");
              }
      
              bool downloadCountCheck = false;
              int minDownloadCount = 0;
              if (rule.TriggerDownload_Count != null)
              {
                  minDownloadCount = rule.TriggerDownload_Count.Value;
                  downloadCountCheck = true;
                  this.LogDebug($"Only delete packages that have been downloaded fewer than {minDownloadCount} times.");
              }
      
              if (rule.KeepVersions_Count != null)
                  this.LogDebug($"Never delete the most recent {rule.KeepVersions_Count} versions of packages.");
      
              var matchingPackages = new Dictionary<string, List<TinyPackageVersion>>();
              var versionPool = new InstancePool<string>();
      
              this.LogInformation($"Finding packages that match retention rule {rule.Sequence_Number}...");
              this.StatusMessage = $"Finding packages that match retention rule {rule.Sequence_Number}...";
              foreach (var package in this.EnumeratePackages(cachedOnly, prereleaseOnly))
              {
                  // skip noncached
                  if (cachedOnly && !package.Cached)
                      continue;
      
                  // skip stable
                  if (prereleaseOnly && !package.Prerelease)
                      continue;
      
                  // skip ids that match keep filter
                  if (keepRegex != null && keepRegex.IsMatch(package.Id))
                      continue;
      
                  // skip ids that do not match delete filter
                  if (deleteRegex != null && !deleteRegex.IsMatch(package.Id))
                      continue;
      
                  // skip ids that match keep filter
                  if (keepVersionRegex != null && keepVersionRegex.IsMatch(package.Version))
                      continue;
      
                  // skip ids that do not match delete filter
                  if (deleteVersionRegex != null && !deleteVersionRegex.IsMatch(package.Version))
                      continue;
      
                  // skip recently used packages
                  if (lastUsedCheck && package.LastUsed >= keepSinceDate)
                      continue;
      
                  // skip packages that have been downloaded enough times
                  if (downloadCountCheck && package.Downloads >= minDownloadCount)
                      continue;
      
                  List<TinyPackageVersion> versions;
                  if (!matchingPackages.TryGetValue(package.Id, out versions))
                  {
                      versions = new List<TinyPackageVersion>(10);
                      matchingPackages.Add(package.Id, versions);
                  }
      
                  versions.Add(new TinyPackageVersion(versionPool.Intern(package.Version), package.Size, package.Cached, package.Prerelease, package.Downloads, package.Extra));
              }
      
              int keepRecentVersionCount = rule.KeepVersions_Count ?? 0;
      
              Comparison<TinyPackageVersion> versionComparison = (p1, p2) => this.CompareVersions(p1.Version, p2.Version);
              foreach (var versions in matchingPackages.Values)
              {
                  if (keepRecentVersionCount > 0 && versions.Count <= keepRecentVersionCount)
                  {
                      // make sure none of the versions are considered for deletion
                      versions.Clear();
                  }
                  else
                  {
                      // sort from lowest to highest
                      versions.Sort(versionComparison);
      
                      if (keepRecentVersionCount > 0 && versions.Count >= keepRecentVersionCount)
                      {
                          // remove recent versions
                          versions.RemoveRange(versions.Count - keepRecentVersionCount, keepRecentVersionCount);
                      }
                  }
              }
      
              if (rule.SizeTrigger_KBytes != null && rule.SizeExclusive_Indicator)
              {
                  // finally have enough info to calculate matching size
                  this.LogDebug($"Rule has an exclusive size trigger of {rule.SizeTrigger_KBytes} KB.");
                  this.LogInformation("Calculating size of matching packages...");
                  this.StatusMessage = "Calculating size of matching packages...";
      
                  feedSize = matchingPackages.Values
                      .SelectMany(v => v)
                      .Sum(v => v.Size);
      
                  this.LogDebug($"Size of matching packages is {feedSize / 1024} KB.");
      
                  if ((feedSize / 1024) <= rule.SizeTrigger_KBytes.Value)
                  {
                      this.LogInformation("Matching packages are not taking up enough space to run rule. Skipping...");
                      return;
                  }
              }
      
              this.LogInformation("Getting count of matching packages...");
              this.StatusMessage = "Getting count of matching packages...";
              int matchCount = matchingPackages.Values.Sum(v => v.Count);
      
              this.LogDebug($"{matchCount} packages qualify for deletion under this rule.");
      
              var sortedMatches = from p in matchingPackages
                                  from v in p.Value.Select((v2, i) => new { Id = p.Key, Version = v2, VersionIndex = i })
                                  orderby v.Version.Cached descending, v.Version.Prerelease descending, v.VersionIndex
                                  select v;
      
              this.LogInformation("Deleting matching packages...");
              this.StatusMessage = "Deleting matching packages...";
      
              if (this.retentionDryRun)
                  this.LogDebug("Dry run mode is set; nothing will actually be deleted.");
      
              long kbToDelete = rule.SizeTrigger_KBytes ?? -1;
              long bytesDeleted = 0;
              int deletedCount = 0;
      
              foreach (var match in sortedMatches)
              {
                  bytesDeleted += match.Version.Size;
                  deletedCount++;
                  this.LogDebug($"Deleting {match.Id} {match.Version.Version}...");
                  if (this.retentionDryRun)
                  {
                      this.DryRunDeleted.Add((match.Id, match.Version));
                  }
                  else
                  {
                      try
                      {
                          await this.DeletePackageAsync(match.Id, match.Version);
                      }
                      catch (Exception ex)
                      {
                          this.LogWarning($"Could not delete {match.Id} {match.Version.Version}: {ex}");
                      }
                  }
      
                  if (kbToDelete >= 0 && (bytesDeleted / 1024) >= kbToDelete)
                  {
                      this.LogDebug("Trigger size reached; stopping.");
                      break;
                  }
              }
      
              this.LogInformation($"Deleted {deletedCount} packages ({bytesDeleted / 1024} KB total).");
          }
      posted in Support
      atripp
      atripp
    • RE: Retention rule quota is backwards. "Run only when a size is exceeded" deletes the specified amount of packages, instead of until feed is specified size.

      With a value of 20000, the retention rule will run only if there are at least ~20GB of packages. But how many actual packages/images actually get deleted... well, it really depends on the other rules.

      Perhaps, after the run, the disk usage will still be more than 20GB (e.g. if no image tags match *alpha* or *beta*). Or perhaps it goes down to 0GB (because the feed is exclusively unused alpha/beta images).

      Here is the code, for reference:

              long feedSize = 0;
              if (rule.SizeTrigger_KBytes != null && !rule.SizeExclusive_Indicator)
              {
                  this.LogDebug($"Rule has an inclusive size trigger of {rule.SizeTrigger_KBytes} KB.");
                  this.LogInformation("Calculating feed size...");
                  this.StatusMessage = "Calculating feed size...";
      
                  feedSize = this.GetFeedSize();
                  this.LogDebug($"Feed size is {feedSize / 1024} KB.");
      
                  if ((feedSize / 1024) <= rule.SizeTrigger_KBytes.Value)
                  {
                      this.LogInformation("Feed is not taking up enough space to run rule. Skipping...");
                      return;
                  }
              }
      

      If you can think of a way to improve the documentation, please share it! We really want it to be clear so you don't have to waste time asking us or getting frustrated with the software :)

      Maybe we can even link to this discussion in the docs page...

      posted in Support
      atripp
      atripp
    • RE: Clean up Docker images

      Not yet; I saw an internal presentation on it, but I don't know the communication plan.

      Feel free to check with @apxltd directly... email or slack seem to be best ;)

      posted in Support
      atripp
      atripp
    • RE: Docker: Need to verify which digest I need to remove a manifest related to a tag

      Option 1. That digest references the blob which represents your manifest.

      According to Docker's Content Digest Docs, option 2 (the Docker-Content-Digest header) does not reference a blob, it's just a hash of the response itself.

      posted in Support
      atripp
      atripp
    • RE: Does ProGet support Azure SQL databases?

      We made and tested several changes to the installer a while back, but it's not something we regularly test/verify.

      Please share what you find works! Thanks.

      posted in Support
      atripp
      atripp
    • RE: How to find out package disk space?

      In ProGet 5.3, we plan to have a couple of tabs on each Tag (i.e. container image) that would provide this info: Metadata (a key/value listing of a bunch of stuff) and Layers (details about each of the image's layers).

      That might help, but otherwise, we have retention policies which are designed to clean up old and unused images. We'll also have a way to detect which images are actually being used :)

      posted in Support
      atripp
      atripp
    • RE: [BUG - ProGet] Not able to remove container description

      As @apxltd mentioned, we've got a whole bunch planned for ProGet 5.3.

      I've logged this to our internal project document, and if it's easy to implement in ProGet 5.2 (I can't imagine it wouldn't be), we'll log it as a bug and ship it in a maintenance release.

      Do note, this is not an IMAGE description, it's a REPOSITORY (i.e. a collection of images with the same name, like MyCoolContainerApp) description; so this means the description will be there on all images/tags in the repository.

      posted in Support
      atripp
      atripp
    • RE: [Question - ProGet] Are versions amount wrong ?

      You're right, I guess that's showing the "layers" instead of the "tags"; I think it should be showing container registries separately (they're not really feeds), but that's how it's represented behind the scenes now.

      Anyways we are working on ProGet 5.3 now; there's a whole bunch of container improvements coming, so I've noted this on our internal project document, to make sure we get a better display for container registries.

      posted in Support
      atripp
      atripp
    • RE: 1 Warning, 1 Error: Connector Error, Unable to update cached data

      Hello;

      This error indicates that nuget.org is having some kind of networking/performance problems, and not responding to that request. NuGet.org is owned/maintained by Microsoft, so there's really nothing you can do, aside from wait for the problem to go away on their end.

      posted in Support
      atripp
      atripp
    • RE: How to always execute Get-Asset in Role?

      Great question!

      The answer is, unfortunately, buried in the Formal Specifications. But long story short, you'll want to wrap the Get-Asset operation in a with executionPolicy = always block.

      For more information, note that there are three execution modes:

      • Collect Only - only ICollectingOperation operations will run; if the operation is an IComparingOperation, then drift may be indicated. All ensure operations implement both interfaces.
      • Collect then Execute - a collection pass is performed as described above; if any drift is indicated, an execution pass is performed that runs:
        • operations that indicated drift
        • IExecutingOperation operations in the same scope as a drift-indicating operation that do not implement IComparingOperation; this is all execute operations
        • operations with an execution policy of AlwaysExecute; this can only be set on a Context Setting Statement
      • Execute Only - only IExecutingOperation operations will run; all ensure and execute operations implement this interface

      So what's happening is that Get-Asset will never run in a Collect pass, whereas Ensure-DscResource will always run in a Collect pass (but only in Collection mode). By forcing Get-Asset to always execute, it will run even in the collect pass.
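
      For reference, the wrapped block would look something like this (just a minimal sketch; the asset name is a placeholder):

      with executionPolicy = always
      {
          Get-Asset MyScript.ps1
          (
              Type: Script
          );
      }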

      By the way: I would love to find a way to properly document the answer to this, so users don't get frustrated; any suggestions on where to edit the contents?

      posted in Support
      atripp
      atripp
    • RE: Combine strings BuildMaster

      Nice OtterScript :)

      This will work, the variables won't "leak over" or anything like that.

      posted in Support
      atripp
      atripp
    • RE: Combine strings BuildMaster

      I think you want to use the $Eval function. Note that the grave apostrophe (`) is an escape character.

      set $BuildMaster_Test_1 = Test;
      set $Number = 1;
      Log-Debug `$BuildMaster_Test_$Number;
      Log-Debug $Eval(`$BuildMaster_Test_$Number);
      

      So the output would be:

      $BuildMaster_Test_1
      Test
      posted in Support
      atripp
      atripp
    • RE: Clean up Docker images

      We've got some major container improvements coming in ProGet 5.3, and will revamp our product; hopefully we'll be able to present this pretty soon!

      I think, once you see what we have planned, you'll want to change/improve your workflows to simplify things, and this may not even be necessary... anyways, stay tuned.

      posted in Support
      atripp
      atripp
    • RE: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool

      The ProGet Dockerfile is based on the latest stable version of mono; so, with every maintenance release, it's whatever the latest mono version is at the time.

      posted in Support
      atripp
      atripp
    • RE: Combine strings BuildMaster

      Hi Ali,

      Sure, it would just be like $Variable2$Variable1 or ${Variable 2}${Variable 1}.

      Check out the documentation on Strings & Values in OtterScript to learn more.
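
      For example, a tiny sketch (the variable names are just made up):

      set $Variable1 = World;
      set ${Variable 2} = Hello;
      Log-Debug ${Variable 2}$Variable1;

      That should log HelloWorld.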

      posted in Support
      atripp
      atripp