    Inedo Community Forums

    Posts made by atripp

    • RE: Constant timeouts doing NuGet restore

      Unfortunately, this behavior is to be expected with "dotnet restore"; it's massively parallel, and will issue as many simultaneous requests as the build agent allows. Typically, that's more than the server can handle; the end result is that the network stack gets overloaded, which is why the server becomes unreachable.

      The reason is that each request to a ProGet feed can potentially open another request to each configured connector (maybe NuGet.org, maybe others), so if you are doing a lot of requests at once, you'll get a lot of network activity queuing up. SQL Server is also running on the network, so those queries just get added to the queue, and eventually you run out of connections.

      One way to solve this is by reducing network traffic (removing connectors to nuget.org, restricting the build agent if possible, etc.), but the best bet is to move to load balancing with ProGet Enterprise. See How to Prevent Server Overload in ProGet to learn more.

      Another option is to make sure you're not using --no-cache with the dotnet restore command. NuGet typically creates a local cache of downloaded packages, which helps alleviate some of the load on the ProGet server. Passing --no-cache bypasses that local cache and causes every restore to pull from the server.

      Another thing that might help is the --disable-parallel option for dotnet restore. That prevents restoring multiple projects in parallel, which will also reduce the load on ProGet; see the example below.
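      For example, a gentler restore invocation might look something like this (the feed URL is just a placeholder):

      dotnet restore --disable-parallel --source https://proget.example.com/nuget/internal-feed/v3/index.json

      Note the absence of --no-cache, so NuGet's local package cache is still used.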

      Fortunately and unfortunately, NuGet does a lot of parallel operations that can saturate a NuGet server. When you're restoring a lot of simultaneous builds, and solutions with a large number of projects, it can really affect the performance of a server.

      This is ultimately where load balancing will come in.

      posted in Support
    • RE: Permissions to all feeds Except one

      @Stephen-Schaff what was the restriction that you added? The "Publish Packages" and "View & Download Packages" tasks are made up of a collection of attributes, and those are what's tested. So you'd want to Restrict both of those.

      Restrictions override grants, but a more-specific grant will override a less-specific deny. In any case, a Grant at the system level with a Deny at the feed level should accomplish what you're doing.

      If you can share a screenshot, that might help us see it as well.

      posted in Support
    • RE: API Key "impersonate user" doesn't work when impersonating an LDAP user

      @scroak_6473 we could definitely try a screen share, but in a case like this (where we have no idea what's wrong), it's mostly digging through the code and trying to think of things that might give us more information. Currently, I'm at a loss... because the error you have shouldn't be happening, but it clearly is.

      So now, I have a new idea. I would like to eliminate Docker from the equation, as it handles the "api" username slightly differently than everywhere else. Plus, you can do this all in your browser.

      Can you try to visit a restricted (i.e. not anonymously viewable) NuGet endpoint using the "api" user name and a password?

      For example, it should look like /nuget/NuGetLibraries/v3/index.json; your browser should then prompt for a username/password.

      Depending on the result of this, we will explore different code paths, and then might need to add some more debugging code.

      Best,
      Alana

      posted in Support
    • RE: API Key "impersonate user" doesn't work when impersonating an LDAP user

      Hi Simon,

      That is strange; it's basically your browser "hiding" the underlying error. Sometimes that happens if the response body is too short... which could happen if the server got into some really bizarre state.

      I did find the logs you sent, but they're very random and don't really make sense; they're random ASP.NET errors, and we can't see the full situation. In general, 500 errors should be logged under ProGet > Admin; this will provide a stack trace showing what errors are happening.

      If you can't get to the admin page, then something is really wrong with the server. I would try restarting your container.

      Alana

      posted in Support
    • RE: API Key "impersonate user" doesn't work when impersonating an LDAP user

      @scroak_6473 is it the exact same message? Basically, the "api" user not found in the directory?

      posted in Support
    • RE: Anonymous user can see list of packages and containers

      @Stephen-Schaff thanks for the bug report! I verified that this can happen, depending on the user's permissions and which feeds they can/can't use --- but it seems like an easy enough fix that we can do via PG-1894 (targeted to the next release). The packages can't actually be viewed upon clicking, but it's a sub-optimal experience to show packages they can't see.

      posted in Support
    • RE: OTTER - Capture Logs from block execution and assign to variables ?

      @philippe-camelio_3885 oh, I see; you mean capture the output of a process or script execution into a variable or something.

      Definitely something to consider as an enhancement, I think. That wouldn't be too bad (though the variable could get huge, and runtime variables aren't really designed for large amounts of text like that).

      posted in Support
    • RE: Buildmaster - High CPU database since 6.2.22

      Thanks for the update; and the upcoming fixes will certainly make it so that purging is much more efficient on the manual execution side, just in case there's an "Explosion" of executions like this.

      So right now, my concern is that it's "logging every sync" (once per hour), due to a bug of some sort. Can you check what's getting logged for Infrastructure Sync? You should be able to see this under Admin > Executions; see if you can spot a pattern?

      No rush. The infra sync executions should clearly show a change history of what was updated on the infra side.

      posted in Support
    • RE: OTTER - Capture Logs from block execution and assign to variables ?

      @philippe-camelio_3885 said in OTTER - Capture Logs from block execution and assign to variables ?:

      The ANSIBLE::BM-Playbook module returns the log:

      By this, I assume you mean it writes to the Otter execution log, either via Log-Information or an executed process? You might have to do this via PSExec: write the logs to a text file, and parse it out using a regular expression...

      At this time, there's no way to read entries from the log during a live execution.

      Best,
      Alana

      posted in Support
    • RE: API Key "impersonate user" doesn't work when impersonating an LDAP user

      Thanks @scroak_6473; I found the email, and can see a lot of information from what you sent.

      I can clearly see the identical api challenge/response, and the different behaviors from ProGet.

      Unfortunately, I'm not able to reproduce the scenario on this end, using our own instance and a domain impersonated account. But I think that's because this issue may have already been fixed with PG-1859; would you be able to upgrade to 5.3.22 to confirm?

      posted in Support
    • RE: [Otter 3.0] Unable to configure Default Git Raft

      @Joshua_1353 I also got this message from GitHub;

      Basic authentication using a password to Git is deprecated and will soon no longer work. Visit https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/ for more information around suggested workarounds and removal dates.

      I wonder if it's disabled on your account, already 🤔

      posted in Support
    • RE: [Otter 3.0] Unable to configure Default Git Raft

      @Joshua_1353 Thanks, I was able to access it without a problem, and added a test script.

      But I made a mistake at first: I kept the default branch name of "master" in the Raft dialog, while on your repository the default branch is named "main".

      What is the configuration inside of Otter? If it says "master", then that would explain the error.

      posted in Support
    • RE: Config apt Proget feed like Apt-Cacher NG

      I'm not familiar with apt-cacher, but I'm guessing it's a kind of proxy/cache for APT packages?

      I'll note that connectors are not supported for Debian feeds at this time; only packages you publish yourself to that feed.

      posted in Support
    • RE: Connection reset while downloading npm packages

      @mathieu-belanger_6065 said in Connection reset while downloading npm packages:

      I am curious, would there be an impact on performance when "piping" connectors together? For example, internal feed A has a connector to internal feed B, which has a connector to internal feed C, which has a connector to npmjs.org?

      Connectors are accessed over HTTP. So assuming you have a "chain" like A --> B --> C --> npmjs.org (i.e. 3 different feeds and 3 different connectors), each request may yield 3 additional requests.

      So when your browser asks feed A for the package typescript@3.7.4, the following will happen:

      1. If the package is cached or local, the file is streamed to the browser
      2. Each connector (just B, in this case) is queried over HTTP for typescript@3.7.4
      3. For the first connector that returns a response, the response body is streamed to the browser

      Each connector follows the same logic. When ProGet (via a request to feed A) asks feed B for that package, the same logic is followed:

      1. If the package is cached or local, the file is streamed to the browser
      2. Each connector (just C, in this case) is queried over HTTP for typescript@3.7.4
      3. For the first connector that returns a response, the response body is streamed to the browser

      Continuing down the chain, when ProGet (via a request to feed B, via a request to feed A) asks feed C for that package, the same logic is followed:

      1. If the package is cached or local, the file is streamed to the browser
      2. Each connector (just npmjs.org, in this case) is queried over HTTP for typescript@3.7.4
      3. For the first connector that returns a response, the response body is streamed to the browser

      This is why caching is important, but also why chaining may not be a good solution for high-traffic npm developer libraries like typescript. The npm client basically does a DoS by requesting hundreds of packages at once; the same is true of nuget.exe.
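      To sketch that logic in code (illustrative only, not ProGet's actual implementation; the Feed type and Resolve method are made up for the example), every feed in the chain applies the same resolve-or-forward rule, which is where the request amplification comes from:

      using System.Collections.Generic;

      // illustrative sketch of connector chaining; not ProGet's actual code
      record Feed(string Name, byte[]? LocalOrCachedPackage, List<Feed> Connectors);

      static byte[]? Resolve(Feed feed, string package)
      {
          // 1. if the package is cached or local, it is streamed immediately
          if (feed.LocalOrCachedPackage != null)
              return feed.LocalOrCachedPackage;

          // 2. otherwise, each connector is an HTTP request to another feed,
          //    which repeats this same logic (A --> B --> C --> npmjs.org)
          foreach (var connector in feed.Connectors)
          {
              var body = Resolve(connector, package);
              // 3. the first connector that returns a response wins
              if (body != null)
                  return body;
          }
          return null;
      }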

      posted in Support
    • RE: Unable to Remove Bad Nuget Version from Feed

      @arozanski_1087 I don't think repackaging in the UI is the best option for this 🤔 In theory it should work, but we never designed or tested it for this scenario; it's more for changing pre-release versions.

      In theory, having both versions (1.0 and 1.0.0) in the feed should work; I would just delete all versions, then upload one from disk, then edit the file on disk (1.0 to 1.0.0), then upload the other.

      We heard of one other user doing that.

      If you can find "the trick", please do share, because you certainly won't be the last person using this ancient, broken package ;)

      posted in Support
    • RE: API Key "impersonate user" doesn't work when impersonating an LDAP user

      Hi Simon, you can send to support at inedo dot com. Please include [QA-473] in the subject, so we can find it easily :)

      posted in Support
    • RE: Unable to Remove Bad Nuget Version from Feed

      @arozanski_1087 ah, yes. Owin. It's most definitely a problem package 🙄

      https://www.nuget.org/packages/Owin

      There's not much we can do about this when you have a connector to NuGet.org on the same feed. The reason is this:

      • Depending on which NuGet.org API call you use (v2, v2+Semver, v3), the package will be reported from NuGet.org as either 1.0 or 1.0.0.
      • Depending on which NuGet client you use, and in which context, it will first request "1.0.0", then request "1.0" if that fails; or it might do it in the reverse order, depending on what's in the dependency file.
      • The .nuspec says 1.0, as you noticed

      NuGet stopped supporting this back in 2016, but unfortunately... the developers are using an 8+ year old package.

      If you don't use a NuGet.org connector on the feed, you can simply follow those manual repackaging steps I mentioned, and create your own Owin 1.0.0. The clients will still be able to get it, but may issue a 1.0 request first.

      posted in Support
    • RE: API call to list container images in a feed?

      The two API endpoints I can think of are:

      • /v2/_catalog returns all repository names ("container" names)
      • /v2/<repository-name>/tags/list returns all tags within specified repository

      Some more details are here: https://docs.docker.com/registry/spec/api/
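      For example, both endpoints can be queried with a couple of HTTP GETs; here's a quick C# sketch (the server URL, repository name, and API key are placeholders):

      using System;
      using System.Net.Http;
      using System.Net.Http.Headers;
      using System.Text;

      var client = new HttpClient { BaseAddress = new Uri("https://proget.example.com/") };
      client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
          "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("api:my-api-key")));

      // all repository ("container") names in the registry
      Console.WriteLine(await client.GetStringAsync("v2/_catalog"));

      // all tags within the specified repository
      Console.WriteLine(await client.GetStringAsync("v2/library/mycontainer/tags/list"));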

      posted in Support
    • RE: An error occurred in the web application: Invalid username or password.

      That URL is a bit strange, and I wouldn't expect it to work.

      What is /endpoints/Chocolatey/content/chocolatey.license.xml? If you visit it in your browser, what happens?

      posted in Support
    • RE: [Otter 3.0] Unable to configure Default Git Raft

      Ah, glad we can see the message now.

      Okay - so that error message is coming from within libgit2, and it basically just means there's an authentication error. The most common reason is that the username/password credentials for GitHub are invalid, but the git server (GitHub, in this case) won't give the reason -- it could also mean your account is locked, you're using a "username" instead of a "token", you don't have access to the branch, etc.

      If it works on your local computer but not in BuildMaster, then it means there's "something" in either your local repository or your global git configuration that's allowing it. Usually, this is stored git credentials, or even a plug-in.

      That being said, it could also be related to the credentials changes in Otter v3; the problem is we can't reproduce it here. Could we trouble you to do this?

      1. Create a repository on public GitHub.com (i.e. not a private repo, or on a private github instance)
      2. Try that, verify it works
      3. If it doesn't, give me (atripp) access, so we can test your repository

      With this, we can be looking at and working on the same repository, and at least figure out where the problem lies.

      posted in Support
    • RE: Unable to Remove Bad Nuget Version from Feed

      @arozanski_1087 thanks for clarifying...

      This will be a bit tricky to debug; there are some supported scenarios for the quirky version in the UI, but many over the API don't work (because the API requires semver).

      There could be other factors at play, like connectors or caching, or who knows... so could you set up a basic reproduction case that you could share/send to us?

      • create two basic packages (no contents; just the .nuspec file is okay) for 1.0 and 1.0.0 that basically mimic your real packages; your real packages are okay too, but this is in case you can't share them for proprietary reasons

      • create a new NuGet feed

      • try to reproduce on the new NuGet feed

      We've tried the above test, but don't experience the problem you describe... the packages can be deleted.

      posted in Support
    • RE: Unable to Remove Bad Nuget Version from Feed

      Hi @arozanski_1087 ,

      Can you navigate to the package(s) from the UI? If so, you should be able to delete it from the UI.... what errors are you seeing when you try to delete?

      If you have a package with a quirky version, the best bet is to download it, edit the .nuspec file in the package, delete the quirky package on the server, then republish it.

      Best,

      Alana

      posted in Support
    • RE: Buildmaster - High CPU database since 6.2.22

      @philippe-camelio_3885 thanks. Please keep us in the loop!

      As expected, that's a LOT of infrastructure sync executions. I wonder why. Are there frequent variable changes on your servers/roles?

      There's probably something off, where it's logging when it shouldn't. We can investigate that another time; in the meantime, the upcoming optimizations to pruning manual executions should make this go a lot faster next time.

      posted in Support
    • RE: Buildmaster - High CPU database since 6.2.22

      Thanks @philippe-camelio_3885

      So, the good news is we've identified the problem. There was just a huge number of manual executions happening for some reason, and the manual execution purging routine could never catch up. Changing those throttles wouldn't make a difference, I'm afraid, as none of them will trigger a manual execution...

      First, can you please share the results of this query, so we can see what created all of those?

      SELECT [ExecutionType_Name], COUNT(*) FROM [ManualExecutions] GROUP BY [ExecutionType_Name]

      That will tell us which Manual Executions are around, mostly so we can understand what they are. I suspect infrastructure sync.

      That being said... the first thing I'm noticing is that the report looks old: the number of rows is 164,125, which is the exact same number as before. So I'm thinking you didn't actually commit the transaction in the query I posted before? It included a ROLLBACK statement as a safety measure... that's my fault; I should have said to only run the DELETE if you were satisfied.

      Since the query seems okay (it reduced rows from 164K down to 1k), please run this:

      DELETE E
        FROM [Executions] E,
           (SELECT [Execution_Id], 
                   ROW_NUMBER() OVER(PARTITION BY [ExecutionMode_Code] ORDER BY [Execution_Id] DESC) [Row]
             FROM [Executions]
            WHERE [ExecutionMode_Code] IN ('R', 'M', 'T')) EE
      WHERE E.[Execution_Id] = EE.[Execution_Id]
        AND EE.[Row] > 1000
      

      From here, it should actually be fine...

      posted in Support
    • RE: Connection reset while downloading npm packages

      @mathieu-belanger_6065 thanks for all of the diagnostics and additional information. I think you're right: it's environment/network specific, and not related to ProGet.

      I would check the ProGet Diagnostic Center, under Admin as well.

      Otherwise, ProGet doesn't operate at the TCP level, but uses ASP.NET's network stack. There's really nothing special about how npm packages are handled compared with other packages, and we haven't heard of any other issues regarding this.

      For reference, here's code on how a package file is transmitted. Note that, if you're using connectors and the package isn't cached on ProGet, then each connector must be queried. This can yield quite a lot of network traffic.

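            // local or already-cached packages are streamed directly from feed storage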
                  if (metadata.IsLocal)
                  {
                      using (var stream = await feed.OpenPackageAsync(packageName, metadata.Version, OpenPackageOptions.DoNotUseConnectors))
                      {
                          await context.Response.TransmitStreamAsync(stream, "package.tgz", MediaTypeNames.Application.Octet);
                      }
                  }
                  else
                  {
                      var nameText = packageName.ToString();
      
                      var validConnectors = feed
                          .Connectors
                          .Where(c => c.IsPackageIncluded(nameText));
      
                      foreach (var connector in validConnectors)
                      {
                          var remoteMetadata = await connector.GetRemotePackageMetadataAsync(packageName.Scope, packageName.Name, metadata.Version.ToString());
                          if (remoteMetadata != null)
                          {
                              var tarballUrl = GetTarballUrl(remoteMetadata);
                              if (!string.IsNullOrEmpty(tarballUrl))
                              {
                                  var request = await connector.CreateWebRequestInternalAsync(tarballUrl);
                                  request.AutomaticDecompression = DecompressionMethods.None;
                                  using (var response = (HttpWebResponse)await request.GetResponseAsync())
                                  using (var responseStream = response.GetResponseStream())
                                  {
                                      context.Response.BufferOutput = false;
                                      context.Response.ContentType = MediaTypeNames.Application.Octet;
                                      context.Response.AppendHeader("Content-Length", response.ContentLength.ToString());
      
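                                      // cache the connector's response locally before relaying it to the client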
                                      if (feed.CacheConnectors)
                                      {
                                          using (var tempStream = TemporaryStream.Create(response.ContentLength))
                                          {
                                              await responseStream.CopyToAsync(tempStream);
                                              tempStream.Position = 0;
      
                                              try
                                              {
                                                  await feed.CachePackageAsync(tempStream);
                                              }
                                              catch
                                              {
                                              }
      
                                              tempStream.Position = 0;
                                              await tempStream.CopyToAsync(context.Response.OutputStream);
                                          }
                                      }
                                      else
                                      {
                                          await responseStream.CopyToAsync(context.Response.OutputStream);
                                      }
      
                                      return true;
                                  }
                              }
                          }
                      }
                  }
      
      posted in Support
    • RE: Buildmaster - High CPU database since 6.2.22

      The Execution_Configuration column of the ManualExecutions table will give a clue; it's XML, but if you expand the column, you'll see the name of the manual execution.

      It's only supposed to log if something changed, however...

      If there's a bug, one way to check would be to disable infrastructure sync, for the time being.

      posted in Support
    • RE: Buildmaster - High CPU database since 6.2.22

      If I'm understanding correctly, did your Manual Execution records go from 1,000 to 164,000 in just a few days? If so, that would explain a lot...

      These are the types of so-called Manual Executions:

      • Importing or Exporting Applications
      • Cloning and Applying Template to Applications
      • Sync of Issue Sources
      • Deploying Configuration file
      • Upgrading Inedo Agents
      • Sync infrastructure

      They are supposed to only occur on a manual basis, like when you trigger something from the UI so you can get logs. Or, in the case of sync infrastructure, whenever infrastructure changes.

      Any idea what all the manual executions could be?

      posted in Support
    • RE: [Otter]Server restart failed

      Hi @Adam1 ,

      The Restart-Server operation is performed on the server itself, using the Inedo Agent or PowerShell Agent.

      Behind the scenes, the agent will just use the advapi32.dll::InitiateShutdown Win32 API method, and that error string indicates that Windows returned ERROR_ACCESS_DENIED when attempting to initiate the shutdown. This is the same method that shutdown.exe uses behind the scenes.
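      For reference, here's a minimal sketch of that Win32 call (not the agent's actual code; just the API the error comes from). Note that the calling process also needs shutdown rights, which admin/system accounts have:

      using System;
      using System.Runtime.InteropServices;

      class RestartSketch
      {
          // advapi32's InitiateShutdown; returns a Win32 error code (0 = success)
          [DllImport("advapi32.dll", CharSet = CharSet.Unicode)]
          static extern uint InitiateShutdown(
              string machineName, string message,
              uint gracePeriodSeconds, uint shutdownFlags, uint reason);

          const uint SHUTDOWN_RESTART = 0x4;
          const uint ERROR_ACCESS_DENIED = 5;

          static void Main()
          {
              // restart the local machine (null = local) after a 30-second grace period
              var result = InitiateShutdown(null, "Restarting via agent", 30, SHUTDOWN_RESTART, 0);
              if (result == ERROR_ACCESS_DENIED)
                  Console.WriteLine("Access denied: run the agent as an admin/system account.");
          }
      }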

      So basically, just make sure that the agent process is running as an admin/system account.

      Best,
      Alana

      posted in Support
    • RE: Buildmaster - High CPU database since 6.2.22

      How often is this happening? It shows 102 executions were purged, and based on the I/O, a lot of logs were deleted... this can actually be quite resource-intensive, as there is a lot of log data.

      But this usually happens during off-hours, etc., so it shouldn't be disruptive.

      posted in Support
    • RE: OTTER 3.0 - Git Based-Raft ?

      @Joshua_1353 did this work in Otter v2?

      The "too many redirects/auth requests" is usually a kind of red herring, and refers to some sort of configuration problem (corrupt local repository, cached credentials, etc.). We'd need to see the whole stack trace --- but could you post it to a new Topic, so we can track it differently?

      I don't think it's related to v3; the reason it didn't show in v3 is that we just forgot to tag it properly after some code refactoring in Otter.

      posted in Support
    • RE: OTTER 3.0 - Git Based-Raft ?

      Thanks @Joshua_1353! Looks like this was a minor configuration change, where that particular repository type wouldn't load in Otter v3. I added a missing attribute and rebuilt, so now it's displayed in the list.

      Easy fix; just download the latest Git extension (1.10.1).

      posted in Support
    • RE: Functional differences between different "Feed Usage" options

      Hi @Stephen-Schaff_8186,

      Thanks for the clarifications! In fact, I wanted to learn some of this behavior myself, and here's what I discovered.

      I'm sharing the details because I think we should take the opportunity to clarify not only the docs, but also the UI, since it seems like this can be improved. It's a new concept in ProGet 5.3, primarily intended to guide the set-up of new feeds, so we haven't looked at it closely since first adding the feature.

      Feed Type Sets

      There are two sets of feed usage options, and which ones are displayed depends on whether the feed type is denoted as having a public gallery (HasPublicGallery).

      HasPublicGallery == true

      • "free/open source packages"
      • "private/internal packages"
      • "validated/promoted packages"
      • "mixed public/private packages"

      HasPublicGallery == false

      • "private/internal packages"
      • "validated/promoted packages"

      These all map to an enum: Mixed = 0, PrivateOnly = 1, PublicOnly = 2, Promoted = 3.
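      In code form, that presumably looks something like this (a sketch inferred from the values above; the real enum is internal to ProGet):

      // sketch of the feed usage enum, based on the mapping above
      public enum FeedUsage
      {
          Mixed = 0,        // "mixed public/private packages"
          PrivateOnly = 1,  // "private/internal packages"
          PublicOnly = 2,   // "free/open source packages"
          Promoted = 3      // "validated/promoted packages"
      }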

      HasPublicGallery

      The following feed types are denoted (internally) as having an official, public gallery: Chocolatey, Cran, Maven, Npm, NuGet, PowerShell, Pypi, RubyGems.

      • Helm and Docker are not on this list, perhaps because there's no official gallery? I'm not sure.
      • Debian and RPM are not on this list, because I don't think they support connectors

      Feed Type Behavior

      Almost all of the behavioral changes occur in the "out of box tutorial" that guides users through setup. Aside from that, here's the UI impact I found:

      FeedType == PublicOnly

      • On the list packages page (e.g. /feed/MyFeed):

        • the "package filter info" is displayed as "Unfiltered", even if no package filters are configured to bring visibility to the importance of package filters
        • the "vulnerability status" is displayed as "Not Scanned", even if vulnerability scanning is not configured
      • On the Package Versions page (e.g. /feed/MyFeed/MyPackage/versions):

        • the "vulnerability status" is displayed as "Not Scanned", even if no vulnerabilities are detected

      FeedType == PrivateOnly

      • The feed allows AllowUnknownLicenseDownloads, regardless of the global setting; this feels like a big behavioral change, but it makes sense: why would you apply license rules to your own packages, etc.
      • The Manage License Filter page displays an error.
      • On the Package Overview page (/feed/MyFeed/MyPackage/1.2.3), the license information box is not displayed
      • On the List Package Versions page (/feed/MyFeed/MyPackage/versions), the license information box is not displayed

      FeedType == Promoted

      • On the List Packages page (/feed/MyFeed), the Add Package button is disabled

      FeedType == Mixed

      No UI changes.

      Next Steps?

      Well, that's everything. Any opinions / suggestions?

      I'm not sure why the Add Package button is disabled. Of course, you can still use the API, or even navigate directly to the page. Perhaps a warning on the Add Package page would be better?

      Cheers,
      Alana

      posted in Support
    • RE: Proget 5.0.10 docker with MSSQL server

      This upgrade path isn't supported, and ProGet 5.0.1 does not work on SQL Server.

      Your best route for upgrade is ProGet 5.0 > ProGet 5.3. Then, migrate to ProGet for Linux.

      posted in Support
    • RE: ProGet - Use Connector filters like package search

      Hello;

      That search syntax is really only supported by the NuGet v3 API, I think; so, ProGet simply forwards the query on to that API and returns the results.

      But regardless, connector filters need to be applied after the remote feed returns results, because connector filter logic can be more complex than what is supported by the various feed APIs (you can allow Microsoft.* and Inedo.*, for example).
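      As a rough illustration of that post-filtering (illustrative code, not ProGet's implementation), a wildcard list like Microsoft.*, Inedo.* is easy to apply once the remote results are in hand:

      using System;
      using System.Linq;
      using System.Text.RegularExpressions;

      // illustrative only: apply connector-style wildcard filters to results
      // that the remote feed has already returned
      static bool IsAllowed(string id, string[] filters) =>
          filters.Any(f => Regex.IsMatch(
              id, "^" + Regex.Escape(f).Replace(@"\*", ".*") + "$", RegexOptions.IgnoreCase));

      var remoteResults = new[] { "Microsoft.Extensions.Logging", "Inedo.SDK", "Newtonsoft.Json" };
      var allowed = remoteResults.Where(id => IsAllowed(id, new[] { "Microsoft.*", "Inedo.*" }));
      // allowed -> Microsoft.Extensions.Logging, Inedo.SDK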

      More advanced connector filter options are definitely something we've considered, and we'd love to do things like "version: 3.*", for example. But it's a lot more complicated under the hood, and probably isn't even feasible given the nature of feeds.

      Alana

      posted in Support
    • RE: Buildmaster - High CPU database since 6.2.22

      My bad, can you try this instead? Basically we are trying to delete all "R, M, T" executions except the most recent 1000 of each type.

      This is what the code is doing now, just really inefficiently for some reason -- and the inefficiency seems to have caused a "backlog" of sorts.

      USE [BuildMaster]
      
      BEGIN TRANSACTION
      
      DELETE E
        FROM [Executions] E,
           (SELECT [Execution_Id], 
                   ROW_NUMBER() OVER(PARTITION BY [ExecutionMode_Code] ORDER BY [Execution_Id] DESC) [Row]
             FROM [Executions]
            WHERE [ExecutionMode_Code] IN ('R', 'M', 'T')) EE
      WHERE E.[Execution_Id] = EE.[Execution_Id]
        AND EE.[Row] > 1000
      
      SELECT [ExecutionMode_Code], COUNT(*) FROM [Executions] GROUP BY [ExecutionMode_Code]
      
      ROLLBACK
      
      posted in Support
    • RE: OTTER 3.0 / Fresh install Install with remote DB from InedoHub failed on non-US system

      Hello;

      We haven't seen this error in quite a long time, and I remember it was something we addressed ages ago. Internally, we still jokingly call it NT AUTORIDAD from this experience, because who would have guessed they localized those account names...

      That said, we didn't really change the installation process for Otter 3.0; it's really just a copy/paste of the ProGet and BuildMaster installation scripts. I don't know why we wouldn't have heard of it until now.

      It's possible this fix was never brought over to the Inedo Hub? Maybe... for now, we'd rather not mess with the installation scripts until we get more reports, so we'll just wait to hear more. If anyone else experiences this, in any product, please share it :)

      Alana

      posted in Support
    • RE: [ProGet] Manual database upgrade (docker, kubernetes)

      Thanks @viceice !!

      I think we're really close, actually. You're right: the service code (copied below) sets the version, so this wouldn't work anyway.

      However, I think if we just write out a script at build time that does something like EXEC Configuration_SetValue 'Internal.DbSchemaVersion', $VersionNumber... and then include that in SqlScripts.zip, it would always work.
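      Something like this hypothetical generated script, with the version number stamped in at build time ('5.3.22' is just an example value; the proc name is taken from the service code below):

      EXEC [Configuration_SetValue] 'Internal.DbSchemaVersion', '5.3.22'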

      Ultimately what I'd love to do is build a guide like the Docker Compose Installation Guide, but have it be the Kubernetes Installation Guide.

      If we get the version-number-setter script included in SqlScripts.zip, how close do you think we'll be to a Kubernetes install guide? Would it basically be the same as the Compose guide, but with k8s commands and k8s sample code instead?

              private static int UpdateDatabaseSchema()
              {
                  Console.WriteLine($"ProGet version is {typeof(Program).Assembly.GetName().Version}.");
                  var currentVersion = getCurrentVersion();
                  Console.WriteLine($"Current DB schema version is {currentVersion?.ToString() ?? "unknown"}.");
                  if (currentVersion == typeof(Program).Assembly.GetName().Version)
                      return 0;
      
                  using (var p = Process.Start(getStartInfo()))
                  {
                      p.WaitForExit();
      
                      if (p.ExitCode == 0)
                          setCurrentVersion();
      
                      return p.ExitCode;
                  }
      
                  static ProcessStartInfo getStartInfo()
                  {
      #if NET452
                      return new()
                      {
                          FileName = "mono",
                          Arguments = $"/usr/local/proget/db/inedosql.exe update /usr/local/proget/db/SqlScripts.zip --connection-string=\"{SharedConfig.ConnectionString}\""
                      };
      #else
                      return new()
                      {
                          FileName = "/usr/local/proget/db/inedosql",
                          ArgumentList =
                          {
                              "update",
                              "/usr/local/proget/db/SqlScripts.zip",
                              $"--connection-string={SharedConfig.ConnectionString}"
                          }
                      };
      #endif
                  }
      
                  static Version getCurrentVersion()
                  {
                      try
                      {
                          var s = DB.Configuration_GetValue("Internal.DbSchemaVersion")?.Value_Text;
                          Version.TryParse(s, out var v);
                          return v;
                      }
                      catch
                      {
                          return null;
                      }
                  }
      
                  static void setCurrentVersion()
                  {
                      try
                      {
                          DB.Configuration_SetValue("Internal.DbSchemaVersion", typeof(Program).Assembly.GetName().Version.ToString());
                      }
                      catch
                      {
                      }
                  }
              }
      
      posted in Support
    • RE: Buildmaster - High CPU database since 6.2.22

      Thanks Philippe!! That's helpful.

      Can you try this SQL?

      USE [BuildMaster]
      
      BEGIN TRANSACTION
      
      DELETE [Executions]
       WHERE [ExecutionMode_Code] IN ('R', 'M', 'T')
         AND ROW_NUMBER() OVER(PARTITION BY [ExecutionMode_Code] ORDER BY [Execution_Id] DESC) > 1000
      
      SELECT [ExecutionMode_Code], COUNT(*) FROM [Executions] GROUP BY [ExecutionMode_Code]
      
      ROLLBACK
      

      You should then see results like this:

      R 1000
      M 1000
      S 1560
      T 377
      B 999
      

      You can further inspect the tables, but this should do the trick. If the results look okay, then please run only the DELETE statement and then it will be fine.

      Can you let me know if it works? We'll incorporate this logic in BM-3659.

      posted in Support
    • RE: OTTER 3.0 - Agent type SSH

      Hello; thanks for reporting the bug! This was a UI regression: the HostName field was not being displayed on the edit form. It's already fixed in code (via OT-384), and we'll get a new release out quite soon, perhaps within a day or so.

      Thanks,
      Alana

      posted in Support
    • RE: Buildmaster - High CPU database since 6.2.22

      Hey @philippe-camelio_3885

      You may be looking at the wrong database... based on the Otter reference in your query.

      Here's a way to see executions by execution type: SELECT [ExecutionMode_Code], COUNT(*) FROM [BuildMaster]..[Executions] GROUP BY [ExecutionMode_Code]

      As for retention policies, you should be able to see the logs of those, and see what's being purged.

      Anyways, we'll figure it out... hang tight!

      posted in Support
    • RE: [ProGet] Manual database upgrade (docker, kubernetes)

      Hi @viceice , we don't have an official Kubernetes deployment for ProGet yet, but working with the community to figure out how to deploy this way is how we'll get there. Eventually, we'd like to offer ProGet Enterprise via Kubernetes and enable load balancing, etc.

      I'm not so familiar with Docker or Kubernetes, to be honest, but I can answer all of your questions so we can figure it out together.

      Is there any way to run the ProGet docker image to only upgrade the database and then exit?

      Not that I can think of. Here's how it works behind the scenes:

      • On ProGet (for Windows), the database is initialized/upgraded at install time using a tool called inedosql: https://docs.inedo.com/docs/proget/installation/installation-guide/manual#database

      • On ProGet for Linux, the ProGet Service is responsible for upgrading the database on startup using the same mechanism (i.e. inedosql). This was done to simplify the Dockerfile.

      I'm not sure what an InitContainer is, but based on the name, I guess it's a container that exists only to initialize a cluster? Technically we could do several things:

      • run inedosql on Linux
      • add a commandline argument to the ProGet service to terminate after doing a database upgrade

      I don't know what would be easier, or better. What do you think?
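      For reference, based on the paths in the service code you quoted, running the upgrade by hand (say, from an InitContainer) would presumably boil down to this one command (the connection string is a placeholder):

      /usr/local/proget/db/inedosql update /usr/local/proget/db/SqlScripts.zip --connection-string="Server=db;Database=ProGet;..."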

      posted in Support
    • RE: OTTER 3.0 - Git Based-Raft ?

      Git-based Rafts are part of the Git extension, so if you're not seeing them when creating a new Raft (or they've disappeared), then there is probably an extension load error.

      Do you see Git under Admin > Extensions?

      posted in Support
    • RE: Buildmaster - High CPU database since 6.2.22

      We can definitely try to diagnose what's going on. What are the types of Executions in that table? Manual? Build? Etc?

      posted in Support
    • RE: How to enable Semantic Versioning for Containers

      Hello; you should see a checkbox, like this.

      (screenshot showing the checkbox)

      However, due to a bug (now fixed via PG-1885), the checkbox wasn't being displayed; it was a license validation problem.

      It will be fixed in the next maintenance release, scheduled for later today.

      posted in Support
    • RE: Functional differences between different "Feed Usage" options

      Hello, great question!

      I hope I can answer your question by showing you what I changed in the documentation:

      Feed usage controls which tabs and messages are displayed in the user interface. For example, "Private/Internal packages" won't display the license filtering options, as you wouldn't create license usage restrictions for your own packages.

      Note that not all feeds have all of these Feed Usage options. Generally speaking, we don't recommend using mixed packages, as it will present all of the user interface options; most of them won't be relevant for packages you create (like license filtering or vulnerability scanning).

      posted in Support
    • RE: ProGet as ClickOnce publish target?

      I'm not so familiar with ClickOnce publish targets, but my understanding is that ClickOnce deploys to an ordinary web-based file directory, like a site in IIS that has "file browsing" enabled or something?

      If so, then I think an Asset Directory would work; it's kind of like that, and is meant for general files. Feeds require a special API to access.

      Let us know how it goes.

      posted in Support
    • RE: do I scale devops

      Hello; I'm not sure we can help here; this sounds like something more appropriate for the Azure DevOps community.

      posted in Support
    • RE: Multipart body length limit 134217728 exceeded.

      Hello;

      This is apparently a default limitation in .NET 5/Core; I'm not sure if it can be changed outside of the code, but I've logged a product change, PG-1876, to get this fixed in the next maintenance release.
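      For context, 134,217,728 bytes is the default value of ASP.NET Core's FormOptions.MultipartBodyLengthLimit. In an application where you control the startup code, it can be raised like this (shown only to illustrate where the limit lives; the fix for ProGet itself comes via PG-1876):

      using Microsoft.AspNetCore.Http.Features;

      var builder = WebApplication.CreateBuilder(args);

      // raise the default 128 MB multipart body limit (e.g. to 2 GB)
      builder.Services.Configure<FormOptions>(o =>
          o.MultipartBodyLengthLimit = 2L * 1024 * 1024 * 1024);

      var app = builder.Build();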

      Cheers,
      Alana

      posted in Support
    • RE: No Bulk Import for Maven Feed

      @andrew_5903 thanks, I'd like to update the docs! Did you end up writing a script to just call that for each Jar+Pom in your directory?

      posted in Support