    Posts made by atripp

    • RE: BM - Help needed for Git checkout

      Hi @philippe-camelio_3885 ,

      Hmm, good question; 8 minutes 20 seconds is quite a long time, so I'd guess there's a timeout going on? Maybe it's a prompt for input that's happening? I don't know why that would be the case, though.

      There are a lot of logs missing, so we can't really see where it's failing. Does it work if you check out to a Windows server (maybe localhost)?

      "I thought if the application is linked to the git repo, it is transparent and I don't need to add the info in the checkout function."

      The build is associated with a Commit/Branch/Repository. The operation will effectively default as follows:

          Git::Checkout-Code
          (
              From: $Repository,
              BranchOrCommit: $Commit
          );
      

      $Repository will just be the name of your connection; before running on the remote server, the URL/username/password will be extracted from the connection.

      Thanks,
      Alana

      posted in Support
    • RE: Out of memory errors after upgrading to 2023.15

      Hi @v-makkenze_6348 ,

      Thanks so much for narrowing that down! Thanks to that, I was able to find the issue and fix it.

      This will be fixed in ProGet 2023.16 (we will release it today or tomorrow), but I recommend applying the patch now.

      To patch, download the SQL Script attached to PG-2466 and then run it against your ProGet database.

      Cheers,
      Alana

      posted in Support
    • RE: Out of memory errors after upgrading to 2023.15

      Hi @v-makkenze_6348 ,

      Unfortunately this can be a really tricky issue to identify, as you'll need to figure out specifically what's causing these problems. It's often unrelated to ProGet, and could be caused by anything from Windows updates to low disk space.

      I'll try to ask a few questions and give some tips on how to narrow things down.

      The best place to start: what version did you upgrade from? If it was an earlier version of ProGet 2023, I would roll back; that will let you identify whether it is in fact related to the upgrade.

      The next thing I would try is disabling the ProGet Service; this is separate from the ProGet Web Service. The regular ProGet Service doesn't need to run for the Web Service (Web UI) to function.

      If the problem goes away, then I would restart the service but disable scheduled jobs (Admin > Scheduled Jobs) like feed cleanup, vulnerability download, package analyzer, etc.

      If the problem goes away again, then I would try to find out which job specifically is causing problems.

      You can also look at the execution logs (Admin > Execution Logs); if something is taking a really long time, that could be an indication of a problem.

      Thanks,
      Alana

      posted in Support
    • RE: OTTER / Docker - Move to a new server - lost connection to linux server - (Finally it is working fine !)

      Hi @philippe-camelio_3885 ,

      If the Encryption Key is okay, then you shouldn't have a problem viewing the "Secure Credentials" page; that has encrypted values. I would also expect a different error (some "invalid padding" or something) if it was a bad encryption key.

      I couldn't find the error "Invalid signature for supplied public key, or bad username/public key combination" in our codebase, which means it's coming from a library we're using. In this case, libssh2.

      And if that's the case, it usually means the problem is on the server side (i.e. the Linux server you're connecting to), and also that someone else might have had the same problem.

      Here's what I found on this page about debugging SSH:

      This error can be quite misleading. You'll see this if your server wanted two forms of authentication and you've only provided one.

      Hopefully that helps. You may find other help by searching for that same error. And if you discover the cause, please let us know what it is - so another future engineer can also discover the secret way to fix it ;)

      Thanks,
      Alana

      posted in Support
    • RE: OTTER / Docker - Move to a new server - lost connection to linux server - (Finally it is working fine !)

      Hi @philippe-camelio_3885,

      The first thing that comes to mind is that the encryption key wasn't moved/set correctly on the new instance; https://docs.inedo.com/docs/installation-linux-supported-environment-variables

      If this is the case, then I think you would also get errors browsing some pages that have encrypted data.

      Thanks,
      Alana

      posted in Support
    • RE: Create System API key via API

      Hi @ivan-magdolen_6846,

      It looks like you're on the right track with finding the XML for that key.

      Since you're automating the installation, and already have DBO access to the ProGet database, I would suggest just adding the key directly to the database using that stored procedure. You can also add/edit some values (like the license key) in the Configuration table if you want.

      Alternatively, I guess you could try Admin:Admin as the API key, since that account will be created by default. I'm not sure if it will work with the native API, however.

      Best,
      Alana

      posted in Support
    • RE: Helm Chart installation

      Hi @ccordova_8628 ,

      We have several customers running ProGet as a K8s cluster in production, and we've even added some "special" Kubernetes-only features like "Upgrading the Database Only (Optional)"... but we do not have an official Helm chart we can provide.

      This is because, unlike Windows/IIS and Docker/NGINX, we aren't very experienced at troubleshooting broken K8s clusters.

      If you have someone on staff who is a K8s expert, and you already maintain a K8s cluster for other applications, then you should be fine using K8s and creating a Helm chart.

      Hopefully it should be relatively easy to configure following our Linux documentation:

      • https://docs.inedo.com/docs/installation-linux-docker-guide
      • https://docs.inedo.com/docs/installation-upgrading-docker-containers

      Here is some community discussion on the matter:

      • https://forums.inedo.com/topic/3140/proget-manual-database-upgrade-docker-kubernetes

      If you end up creating a Helm chart, please do share, and we can consider it a community-provided chart :)

      Best,
      Alana

      posted in Support
    • RE: ProGet push and install package 403 forbidden

      Hi @4391728_4499 ,

      Since you've disabled "Anonymous" access to "View & Download Packages", then NuGet will also need to authenticate to the feed using Basic authentication (username/password) to view and push packages.

      You can do this with the username api and your API key as the password.

      You'll need to use nuget add source to configure the username/password:
      https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-nuget-add-source
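      For illustration, the credential ends up being sent as a standard Basic authentication header built from api:&lt;your-api-key&gt;. Here's a quick Python sketch of that mechanic (the key value is made up):

```python
import base64

# hypothetical values; substitute your actual ProGet API key
username = "api"
api_key = "my-proget-api-key"

# Basic auth: base64-encode "username:password" and prefix with "Basic "
token = base64.b64encode(f"{username}:{api_key}".encode()).decode()
auth_header = f"Basic {token}"
print(auth_header)
```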

      Best,
      Alana

      posted in Support
    • RE: Delete asset by age

      Hi @nathan-wilcox_0355 ,

      Thanks for clarifying; so basically I think you'll need to parse those version numbers instead of trying to rely on publish dates. I think this will require some kind of custom script to capture the deletion logic you need....

      You may want to consider using Universal Packages, which does let you keep the last X versions of a particular package. You could then use pre-release versioning as well, which typically is what something like a CI server would create.
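      If you do end up scripting it, the core logic might look something like this sketch (the asset names and keep-count are made up for illustration):

```python
# minimal sketch: keep only the newest 3 versions of each package,
# parsing the version out of hypothetical asset names like "app-1.2.10.zip"
from collections import defaultdict

assets = ["app-1.2.2.zip", "app-1.2.10.zip", "app-1.2.3.zip",
          "app-1.2.4.zip", "lib-0.9.1.zip"]
keep_last = 3

groups = defaultdict(list)
for name in assets:
    base, version = name.rsplit(".zip", 1)[0].rsplit("-", 1)
    # parse into an int tuple so versions sort numerically
    groups[base].append((tuple(int(p) for p in version.split(".")), name))

to_delete = []
for base, versions in groups.items():
    versions.sort()  # numeric sort, so 1.2.10 > 1.2.3
    to_delete += [n for _, n in versions[:-keep_last]]

print(to_delete)  # only "app" has more than 3 versions
```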

      Hope that helps,
      Alana

      posted in Support
    • RE: Delete asset by age

      Hi @nathan-wilcox_0355 ,

      Asset directory policies don't consider the creation date; instead, we would recommend setting a policy like "keep files downloaded in last 90 days".

      That will delete everything that hasn't been downloaded in the last 90 days.
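      In script form, that policy amounts to a simple cutoff check; this sketch uses fabricated dates and asset records just to show the idea:

```python
from datetime import datetime, timedelta, timezone

now = datetime(2023, 10, 1, tzinfo=timezone.utc)  # pretend "today"
cutoff = now - timedelta(days=90)

# hypothetical asset records: name -> last download time
assets = {
    "old-build.zip": datetime(2023, 1, 15, tzinfo=timezone.utc),
    "fresh-build.zip": datetime(2023, 9, 20, tzinfo=timezone.utc),
}

# everything not downloaded in the last 90 days is eligible for deletion
stale = [name for name, last_dl in assets.items() if last_dl < cutoff]
print(stale)
```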

      Cheers,
      Alana

      posted in Support
    • RE: Proposal - add Trivy support in server mode

      It's hard to say, because we haven't created our 2024 product roadmap yet :)

      You can see when past versions were released if you are curious:
      https://inedo.com/products/roadmap

      posted in Support
    • RE: pgscan: lockfileVersion 3 for npm dependencies not supported

      And I'm sure you noticed, but it looks like this was released :)

      posted in Support
    • RE: Proposal - add Trivy support in server mode

      Hi @w-repinski_1472 ,

      Thank you for the suggestion!

      We are considering developing our own container scanning solution, potentially in ProGet 2024, similar to ProGet Vulnerability Central (PGVC) but for containers.

      But in the meantime, you may be able to add this as a VulnerabilitySource similar to Clair:
      https://github.com/inedo/inedox-clair

      Cheers,
      Alana

      posted in Support
    • RE: pgscan: lockfileVersion 3 for npm dependencies not supported

      Thanks so much @shayde, on a quick glance the code looks good :)

      From here we can do the easy part early next week - internal review, merge, build, test, deploy!

      posted in Support
    • RE: Using IIS::Ensure-Site without removing bindings?

      Hi @Justinvolved ,

      Ah yes, getting all this modeling done sensibly is a challenge, and documenting it is a whole new pain 😫

      The main issue we're facing is that you can't create a site in IIS without a binding; the API will simply reject it and error. This means that if you use IIS::Ensure-Site to create a site but don't specify a binding, it will error. However, IIS::Ensure-Site can update a site with no problem.

      This is why we originally created the Bindings property. However, it's a bit challenging to use, and exhibits the behavior you describe: it "ensures" that the list matches whatever is in the site.

      Our current way of thinking is this:

      IIS::Ensure-Site MySite
      (
          AppPool: MyPool,
          Path: C:\Websites\MySite,
          BindingProtocol: http,
          BindingHostName: app.local,
          BindingPort: 80
      );
      
      IIS::Ensure-SiteBinding
      (
          Site: MySite,
          Protocol: https,
         ... ssl properties ...
      );
      

      Our "new" way of thinking is that it might make sense to allow IIS::Ensure-Site to have two sets of binding properties.

      IIS::Ensure-Site MySite
      (
          AppPool: MyPool,
          Path: C:\Websites\MySite,
          HttpHostName: app.local,
          HttpBindingPort: 80,
          HttpsBindingPort: 443,
          HttpsCertificateOrWhatever...
      );
      

      This seems to align with how most people want to set up a site in IIS (i.e. two bindings).

      Definitely open to your feedback!

      Cheers,
      Alana

      posted in Support
    • RE: PGVC: Blocked packages cannot be unblocked

      @sebastian thanks for confirming!

      I've added this as something to fix via PG-2441 and targeted it for 2023.14 (next Friday), but it's a lower-priority issue so it may get "bumped" to the next or a following release depending on other issues.

      posted in Support
    • RE: Using IIS::Ensure-Site without removing bindings?

      Hi @Justinvolved ,

      What properties are you setting?

      If you run Ensure-Site with the Bindings property, it will:

      • update the properties of the bindings specified if needed
      • delete the bindings not specified
      • add the ones that don't exist

      Note that you can specify a list of bindings in that property, so you could do this:

      IIS::Ensure-Site test
      (
          Path: E:\wwwroot\test,
          AppPool: testPool,
          Bindings: @(
              %(IPAddress: *, Port: 80, HostName: test.domain.local, Protocol: http),
              %(IPAddress: *, Port: 443, HostName: test.domain.local, Protocol: https)
          )
      );
      

      Thanks,
      Alana

      posted in Support
    • RE: Conda feed not generating repodata.json for win-64 subdir

      Hi @e-rotteveel_1850 ,

      Thanks for sharing the packages for this (and that other Conda issue); my wild guess is that it's related to your package metadata. But we'll use your packages, attach a debugger, and find out :)

      Please give us a few days to investigate/resolve this, and hopefully it'll be a very easy fix.

      posted in Support
    • RE: PGVC: Blocked packages cannot be unblocked

      @sebastian thanks, that's what I was hoping you could test :)

      Can you check something else: can you actually download the package by the URL directly (i.e. using what the API would do?)

      It should work, because I think this is just the UI incorrectly "double checking" against PGVC records. Not sure if I can explain it well, but maybe you can understand the code better...

      [screenshot]

      When you download a package, PGVC is first queried and then vulnerability records are added. Then, those added vulnerability records are checked against download rules.

      The records are not added while browsing a package, which is why we perform that second check. However, that second check should first cross-reference PGVC records against vulnerability records...

      Anyway, I wanted to confirm that was the issue - and then if so, we'll take a stab at fixing this.

      posted in Support
    • RE: ProGet 2023 Data Migration fails with database timeout

      Thanks so much @martijn_9956; after researching the matter further, it seems that MERGE can be a pretty buggy statement, and its behavior seems to vary based on the operating system, SQL Server SKU, patch version, and probably the phase of the moon.

      We will rewrite this procedure to use more straightforward UPDATE/INSERT/DELETE statements (like you did) via PG-2437, which will ship in the next maintenance release.

      posted in Support
    • RE: Could not create nuget feed using API

      @shenghong-pan_2297 that's a really old version of ProGet.

      I looked here to see all the changes:
      https://my.inedo.com/downloads/issues?Product=ProGet&FromVersion=5.2.8&Count=all

      I CTRL-F'd for API and found this:
      PG-1594 5.2.13 FIX: Feed Management API fails to handle feed type codes correctly

      I guess that's it?

      Anyway I would upgrade and hopefully the issue goes away. Note that we're 4 major versions later (5.3 > 6.0 > 2022 > 2023), so you're pretty far behind ;)

      posted in Support
    • RE: ProGet 2023 Data Migration fails with database timeout

      Thanks so much for the update @martijn_9956; even 2 minutes is a surprising amount of time for what should be a really basic insert/update 😲

      But I'm glad you could narrow it down to the proc; we can also take a stab at playing around with the procedure if you send us the script you're running.

      And just to confirm -- you were seeing that the @p2 temporary table was only around 1,658 rows, right? That's what I would expect based on the log messages. I know we tested this with 10k+ rows at least.

      posted in Support
    • RE: PGVC: Blocked packages cannot be unblocked

      Hi @sebastian ,

      That's really strange... I can't reproduce this, and I can't think why it would behave this way. But the logic is kinda complex. I don't think it has to do with a PGVC vs OSS vulnerability though.

      I did reproduce another bug...
      [screenshot]

      However, it's related to the "block" global rule it seems:
      [screenshot]

      When I change that to "allow" it works fine. I didn't experiment further, b/c I'd like to repro your specific bug and fix that as well.

      Any other input on how to repro would be appreciated; maybe try re-assessing to something else.

      Does it work if you override the block at the package level? You may have to pull the package to do that first.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet 2023 Data Migration fails with database timeout

      @martijn_9956 thanks, let us know what you find! That is really slow for Executions_AddLogEntry; it's just an insert I think.

      Can you check... what is the database recovery model for ProGet configured to? It should be SIMPLE. Not sure if that would make a difference or not, just a thought...

      posted in Support
    • RE: pgscan: lockfileVersion 3 for npm dependencies not supported

      Hi @caterina

      Sorry that issue fell off our radar; we're not great at keeping track of GitHub issues.

      We haven't noticed this issue in our testing/environment yet, but I'm guessing that's b/c we haven't gone to v9?

      Looking at the code, it seems easy enough to support? Just a matter of iterating packages instead of dependencies, perhaps?

      https://github.com/Inedo/pgscan/blob/master/Inedo.DependencyScan/NpmDependencyScanner.cs#L23
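      For reference, here's a rough sketch (with a made-up, abbreviated lockfile) of what iterating the packages map in lockfileVersion 3 looks like; this is just an illustration of the format, not pgscan's actual code:

```python
import json

# abbreviated, fabricated package-lock.json using lockfileVersion 3;
# entries live under a top-level "packages" map keyed by install path,
# rather than under "dependencies" as in older lockfile versions
lock_text = """
{
  "lockfileVersion": 3,
  "packages": {
    "": { "name": "my-app", "version": "1.0.0" },
    "node_modules/left-pad": { "version": "1.3.0" },
    "node_modules/@scope/util": { "version": "2.1.0" }
  }
}
"""

lock = json.loads(lock_text)
deps = []
for path, info in lock.get("packages", {}).items():
    if not path:
        continue  # "" is the root project itself, not a dependency
    name = path.split("node_modules/")[-1]  # handles nested paths too
    deps.append((name, info.get("version")))

print(deps)
```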

      The hardest part for us is getting this tested/verified. We don't have a "problem project" ourselves yet, so we have to repro, study, fix, test, etc.

      Any help here would be appreciated :)

      Thanks,
      Alana

      posted in Support
    • RE: ProGet 2023 Data Migration fails with database timeout

      Hi @martijn_9956 ,

      We've seen this happen in the field a few times, and it seems to be very specific to the SQL Server version, the hardware, or something like that. It's a bit of a mystery, because when we import a database backup, the migration happens really quickly. We even tried the same version of SQL Server. Another user restored the database to another SQL Server, and it worked fine.

      In ProGet 2023.13 (which you're using), we increased the feed migration connection timeout to 1000 seconds (up from 30 seconds), so the fact that this is still happening is totally bizarre. I wonder if you could help troubleshoot by seeing what's happening on the SQL Server side?

      Based on the log messages, the timeout is happening when executing FeedPackages_UpdatePackageData; this procedure takes a table-valued parameter with 1,658 rows (based on your data). Here is the C# code that invokes the query:

      await this.Db.FeedPackages_UpdatePackageDataAsync(
          Feed_Id: this.FeedId,
          Packages_Table: latestVersions.Select(
              p =>
              new TableTypes.FeedPackageEntry
              {
                  PackageName_Id = p.Key,
                  Latest_Package_Version = p.Value.LatestVersion?.ToNormalizedUniqueString(),
                  LatestStable_Package_Version = p.Value.LatestStableVersion?.ToNormalizedUniqueString(),
                  Total_Download_Count = p.Value.TotalDownloads
              }),
          DeleteMissing_Indicator: false
      ).ConfigureAwait(false);
      

      You can peek in SQL Server to see the code, but it's really just doing a straight-forward "upsert" into the FeedPackages table.

      If you attach SQL Profiler, you should be able to see exactly what's going on. The only rough idea we have is that there's something "wrong" with the way we're doing the upsert in FeedPackages_UpdatePackageDataAsync and some version of the query analyzer is tripping over it (but not reporting a deadlock?)

      Any insight would be appreciated, this one's a mystery for now 😑

      Thanks,
      Alana

      posted in Support
    • RE: Could not create nuget feed using API

      Hi @shenghong-pan_2297 ,

      What version of ProGet are you using? There is no ProGet 5.8 :)

      I tested this in ProGet 2023 and there are no issues.

      Alana

      posted in Support
    • RE: PGVC: Blocked packages cannot be unblocked

      Hi @sebastian ,

      What setting do you have for unassessed vulnerabilities? I.e. under SCA > Vulnerabilities > "Vulnerability Download Blocking Configuration" - I'd like to see the Global rule and any Feed-specific rules (if they exist).

      Also, the "Manage Vulnerability Sources" page is kind of confusing.

      Multiple vulnerability sources are definitely a little weird/confusing, esp if you're familiar w/ ProGet 6 and earlier...

      • when there are 0 PGVC sources, an "Enable" dialog is displayed; this adds a source
      • when there is 1 PGVC source, a "Disable" dialog is displayed; this deletes the source
      • otherwise, no dialog is shown and you see them all in the list

      Feeds still need to be associated with a vulnerability source, but we now call this association "download blocking".

      posted in Support
    • RE: ProGet: Handling of deprecated NuGet packages

      Hi @sebastian

      There is no plan to add user-configurable scheduled job capabilities to ProGet, and it's unlikely we would consider that since they are really hard to support. We do have our Otter product that's designed for that 😉

      However, in ProGet 2022, we considered a periodic "check" for packages in a feed against the source; the use case was "is a newer patch version available" - and if so, then an issue would be raised about using an out-of-date package. We obviously didn't implement that.

      But it seems we could take a similar approach and then also check for unlisting/deprecation as well. This might be something that comes up in our ProGet 2024 planning.

      But in either case, it still involves lots and lots of web calls to check each package against the source - so I would start with a script and see what you find out.

      Thanks,
      Alana

      posted in Support
    • RE: Workstation/Server with Dual Choco Feeds

      Hi @rmusick_7875 ,

      Unless you build your own GUI client, I don't think what you're doing is going to be possible or feasible to implement; dependencies need to be in the same feed.

      I suppose you could try "unlisting" the packages, but I don't know if the Chocolatey GUI client uses the Listed indicator to determine whether a package should be shown.

      Cheers,
      Alana

      posted in Support
    • RE: Link between SCA Project and Package

      Hi @dan-brown_0128 ,

      It's hard to say exactly what's going on without seeing the specifics, but I think I might know what's going on.

      In ProGet, Projects & Releases are not associated with feeds, only package IDs. This means that, if you have the same package in multiple feeds that have SCA features enabled, ProGet will pick one of those "at random" and link to it in the UI - and I guess this selection is wrong in your case? That is, if you navigate to another feed with that package, does it show the vulnerabilities you are seeking?

      If you disable the "SCA Feature" on the Feed Management page, then it should link correctly.

      Thanks,
      Alana

      posted in Support
    • RE: OT - running shell script displays an error message while it should not

      Hi @philippe-camelio_3885 ,

      When text is written to the stderr stream, Otter will interpret this as an error. Unfortunately, a few tools (including git) like to write to stderr even when there's no error, and will use the exit code to indicate an error instead.

      There are a handful of ways to deal with this:

      • set ErrorOutputLogLevel in the SHExec operation
      • use try/catch/force normal in OtterScript
      • modify your script to redirect output

      Redirecting is strongly recommended, and you can do it with 2>&1 in your script. Then you can test the tool's exit code and, on failure, write to the error stream yourself so Otter picks it up as an error.
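      To illustrate the difference (using a stand-in echo command rather than git), redirecting with 2>&1 keeps the text out of the stderr stream that Otter watches:

```python
import subprocess

# a stand-in for a tool like git: writes progress text to stderr, exits 0
noisy = "echo 'cloning...' 1>&2; exit 0"
# the fix: redirect stderr into stdout inside the script with 2>&1
quiet = "{ echo 'cloning...' 1>&2; exit 0; } 2>&1"

plain = subprocess.run(["sh", "-c", noisy], capture_output=True, text=True)
merged = subprocess.run(["sh", "-c", quiet], capture_output=True, text=True)

print(plain.stderr)   # Otter would log this stderr text as an error
print(merged.stderr)  # empty: the text now goes to stdout instead
```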

      Cheers,
      Alana

      posted in Support
    • RE: NuGet Upload Difference between Enterprise and Basic Edition

      Hi @richard-gagliano_3594 ,

      There's no difference between Enterprise and Basic edition with regards to feed behavior like this.

      I'm not totally certain of the inner workings of --skip-duplicate in NuGet, but I believe it simply suppresses/ignores errors related to pushing. You would have to review the HTTP traffic using a tool like Fiddler to be sure.

      I would check permissions in ProGet; it's most likely that your credentials (API key) on one server have the Feed_OverwritePackage permission, and therefore no error is thrown when you push the package.

      Hope that helps!

      Alana

      posted in Support
    • RE: BM - Editing OSCall Operation using Visual Editor not working anymore

      Hi @philippe-camelio_3885 , thanks for reporting this; it definitely looks like a regression of some kind. We'll get it fixed ASAP via BM-3860 -- but it sounds like you found a temporary work-around for now :)

      posted in Support
    • RE: BuildMaster Service logon account

      Hi @Justinvolved ,

      You should be able to see some kind of error message in the Windows Event Logs, but my guess is that the account doesn't have access to the BuildMaster SQL Server database; you'll need to grant that.

      Cheers,
      Alana

      posted in Support
    • RE: Build error - "Package source __ahempty not found."

      Hi @bardmorgan_7142 ,

      That error seems to be a UI-related error with the Build Script Template editor. Basically that __ahempty value is meant to trigger a validation warning to force you to pick a package source.

      When you edit the build script, what do you have set for the package source drop-downs? It should be a ProGet feed, at least for publishing. But it's not required.

      Also, if you "Edit As OtterScript", you should be able to see where the __ahempty is being added to the script, and hopefully fix it. Seeing that would help us debug it :)

      Cheers,
      Alana

      posted in Support
    • RE: How to download private GPG key of an APT repository

      Unfortunately I have no idea what format gpg is looking for, or how these could be used locally by Debian. We mostly know the API/repo format, not so much the client tooling.

      We use the BouncyCastle encryption library. We simply use that byte array like this:

      var keys = new PgpSecretKeyRingBundle(this.Data.SecretKeys);
      
      using (var output = new MemoryStream())
      {
          using (var armor = new ArmoredOutputStream(new UndisposableStream(output)))
          {
              if (!detached)
              {
                  armor.BeginClearText(HashAlgorithmTag.Sha512);
                  armor.Write(data, 0, data.Length);
                  armor.EndClearText();
              }
      
              foreach (PgpSecretKeyRing ring in keys.GetKeyRings())
              {
                  var key = ring.GetSecretKey();
                  var signer = new PgpV3SignatureGenerator(key.PublicKey.Algorithm, HashAlgorithmTag.Sha512);
                  signer.InitSign(PgpSignature.CanonicalTextDocument, key.ExtractPrivateKeyRaw(null));
                  signer.Update(data, 0, data.Length);
                  signer.Generate().Encode(armor);
              }
          }
      
          return output.ToArray().Where(b => b != '\r').ToArray();
      }
      

      Beyond that, I have no idea how they work. Probably not very helpful, but just FYI.

      posted in Support
    • RE: How to download private GPG key of an APT repository

      Hi @hwittenborn ,

      For Linux/Docker, it's passed-in as an environment variable:
      https://docs.inedo.com/docs/installation-linux-supported-environment-variables

      It's possible you don't have one, and in that case, the data won't be encrypted. Note that SecretKeys is just a byte[] stored as base64.

      Alana

      posted in Support
    • RE: How to download private GPG key of an APT repository

      @hwittenborn there isn't a way to download the keys once created; it's probably not something we'll add to the UI in the near future, since it doesn't seem trivial.

      The keys are indeed stored in the SecretKeys, but they're encrypted using the EncryptionKey stored in your ProGet configuration file.

      You can try to decrypt them with this method (note: use EncryptionMode.Aes128)

              /// <summary>
              /// Decrypts data using the specified encryption mode.
              /// </summary>
              /// <param name="data">The data to decrypt.</param>
              /// <param name="mode">Method used to decrypt the data.</param>
              /// <returns>Decrypted data.</returns>
              /// <exception cref="ArgumentNullException"><paramref name="data"/> is null.</exception>
              /// <exception cref="ArgumentOutOfRangeException"><paramref name="mode"/> is invalid.</exception>
              public byte[] Decrypt(byte[] data, EncryptionMode mode)
              {
                  if (data == null)
                      throw new ArgumentNullException(nameof(data));
      
                  byte[]? key = mode switch
                  {
                      EncryptionMode.Aes128 => this.Aes128Key ?? throw new InvalidOperationException("Cannot decrypt value; there is no legacy encryption key defined."),
                      EncryptionMode.Aes256 => this.Aes256Key ?? throw new InvalidOperationException("Cannot decrypt value; there is no AES256 encryption key defined."),
                      EncryptionMode.None => null,
                      _ => throw new ArgumentOutOfRangeException(nameof(mode))
                  };
      
                  if (key == null)
                      return data;
      
                  var iv = new byte[16];
                  Buffer.BlockCopy(data, 0, iv, 0, 8);
                  Buffer.BlockCopy(data, data.Length - 8, iv, 8, 8);
      
                  using var buffer = new MemoryStream(data.Length - 16);
                  buffer.Write(data, 8, data.Length - 16);
                  buffer.Position = 0;
      
                  using var aes = Aes.Create();
                  aes.Key = key;
                  aes.IV = iv;
                  aes.Padding = PaddingMode.PKCS7;
                  using var cryptoStream = new CryptoStream(buffer, aes.CreateDecryptor(), CryptoStreamMode.Read);
                  var output = new byte[SlimBinaryFormatter.ReadLength(cryptoStream)];
      
                  int bytesRead = cryptoStream.Read(output, 0, output.Length);
                  while (bytesRead < output.Length)
                  {
                      int n = cryptoStream.Read(output, bytesRead, output.Length - bytesRead);
                      if (n == 0)
                          throw new InvalidDataException("Cannot decrypt value; stream ended prematurely.");
                      bytesRead += n;
                  }
      
                  return output;
              }
      
      posted in Support
    • RE: Multiple apps based on the same git repo

      Hi @Justinvolved,

      You can create Global pipelines and scripts, which you can then use across applications.

      You can see this in action here, in one of our extensions:
      https://buildmaster.inedo.com/applications/18/scripts/all?global=False

      Click on "Global (shared)" to see all the scripts; the same option is on the Pipelines page.

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Bulk Package Import not recognizing directories

      Hi @cole-bagshaw_3056 ,

      That message is displayed when the framework (operating system) method Directory.Exists returns false. This means that ProGet cannot access that directory, typically due to permissions or other access problems. No detail is provided to us beyond that.

      Unfortunately I don't have any tips/troubleshooting ideas on why a user/application would not be able to "see" a directory.

      My guess is that it has something to do with your mounting configuration, and how you've mounted the volume in Docker. You may need to SSH into the container and see where the mapped volume actually is, etc. But that's just a guess...
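One thing worth knowing when troubleshooting this: Directory.Exists returns false both when the path is missing and when the process lacks permission to access it, so the same check from inside the container can help distinguish the two cases. A small Python sketch of the equivalent check (the drop path shown is hypothetical):

```python
import os
import tempfile

def can_see(path: str) -> bool:
    # Mirrors .NET Directory.Exists: returns False both for paths that
    # are missing and for paths the process cannot access.
    return os.path.isdir(path)

with tempfile.TemporaryDirectory() as d:
    assert can_see(d)   # visible while it exists
assert not can_see(d)   # gone after cleanup

# Inside the container, you'd check the configured drop path instead,
# e.g. can_see("/var/proget/packages") -- path name is made up here.
```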

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Docker image pull through connector fails

      Hi @guyk ,

      Can you bypass the squid proxy and go directly to ACR? I saw a blog post long ago, where someone said something about proxies being an issue:

      https://faultbucket.ca/2022/05/aks-image-pull-failed-from-proget/

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Proget 2023.7 deadlocks on Get for Cached Packages

      Hi @chuck-buford_5284 ,

      Thanks for letting me know; can you tell me which queries were deadlocked? We shouldn't see any deadlocks, but I suppose it's possible for SELECT * FROM [NuGetFeedPackageVersions_Extended] to deadlock on itself, depending on the query plan SQL Server uses.

      There are a few other things we can try, but we can't reproduce this at all, even in a test lab that's just hammering the database. Are you using SQL Server Express (i.e. what the Inedo Hub installs by default)? It should behave the same, of course...

      Cached packages go through that CreatePackage method, so it's basically the same code path as installing a package.

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Encode URI incorrectly cause GCR connector not working

      Hi @PMExtra ,

      Looks like this didn't make it to the 2023 codebase; I've just merged it in via PG-2388 (shipping this friday in ProGet 2023.8).

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: ProGet Release Retention Policies + API Delete

      @mness_8576 thanks! We definitely welcome feedback on the UI/UX - this is a new feature, so there's a lot of room to improve :)

      posted in Support
      atripp
      atripp
    • RE: Move the GitLab packages to Proget repo

      Hi @rochishgvv_4077 ,

      There is no way to continuously sync to a NuGet feed.

      You can create a "Connector" to your GitLab package registry, so that packages are always available on demand: https://docs.inedo.com/docs/proget-feeds-connector-overview

      You can download all the packages from a Connector using a Feed Downloader:
      https://docs.inedo.com/docs/proget-feed-importing

      You can configure GitLab to push to ProGet. We don't have any info on how to do that, however.

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: ProGet Release Retention Policies + API Delete

      Hi @mness_8576 ,

      For now, archiving is the way to do it. Looking at the code, it doesn't look like there's even an API to delete a release...

      Here is the code that BuildMaster uses, which clearly just sets the archive flag:

          /// <summary>
          /// Creates or updates the specified release with the specified data
          /// </summary>
          public async Task EnsureRelease(string projectName, string releaseNumber, string? releaseUrl, bool? active, CancellationToken cancellationToken = default)
          {
              using var response = await this.http.PostAsJsonAsync(
                  "api/sca/releases",
                  new
                  {
                      project = projectName,
                      version = releaseNumber,
                      url = releaseUrl,
                      active
                  },
                  cancellationToken
              ).ConfigureAwait(false);
      
              response.EnsureSuccessStatusCode();
          }
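Building on the snippet above, the request body is just a small JSON object, and setting active to false is what archives the release. A sketch of the payload in Python (the project name and version are made up; the field names mirror the anonymous object the C# code posts to api/sca/releases):

```python
import json

# Hypothetical values; "active": false archives the release.
payload = {
    "project": "MyApplication",
    "version": "1.2.3",
    "url": None,
    "active": False,
}
body = json.dumps(payload)
```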
      

      Otherwise, we don't have automated deletion or retention policies for archived SCA releases; they don't take up much space (relatively speaking), and we didn't want to commit to retention rules so early on in the feature.

      If they become a problem (UI, performance, etc.), it's easy enough to delete a bunch via SQL for the time being... and that'll help us learn how to create retention policies. Then we can add them to the API and all that :)

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Extension Loading error

      Hi @scroak_6473 ,

      I'm not sure what the issue is; the errors are very peculiar. I assume that everything works until you restore the database?

      After restoring the database, there are a few paths I would check under Admin > Advanced Settings:

      • Extensions.BuiltInExtensionsPath <--- should be C:\Program Files\ProGet\Extensions
      • Extensions.ExtensionsPath <--- should be C:\ProgramData\ProGet\Extensions
      • Extensions.CommonCachePath <-- should be C:\ProgramData\ProGet\ExtensionsCache
      • Extensions.UseNewExtensionLoader <--- should be checked

      After editing those, make sure to restart the website.
      posted in Support
      atripp
      atripp
    • RE: Proget 2023.7 deadlocks on Get for Cached Packages

      Hi @chuck-buford_5284 ,

      Thanks for sending that; it's exactly what I would have looked for in the file.

      How often are these coming up? What sort of hardware are you working with? Are you able to reproduce this consistently?

      The query pattern implies that there's heavy usage while uploading or deleting packages, and I know we spotted some potential issues earlier - but we wanted to wait to confirm something else.

      We have some optimized versions of FeedPackageVersions_DeletePackageVersion and FeedPackageVersions_CreateOrUpdatePackageVersion, but we didn't ship them just yet. Can you try them out? Just run the attached queries in https://inedo.myjetbrains.com/youtrack/issue/PG-2387

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Proget 2023.7 deadlocks on Get for Cached Packages

      Hi @chuck-buford_5284 ,

      I'm surprised to see these on v2023.7, but it's an issue we're working through (it's an entirely new indexing system).

      Can you provide us with your deadlock reports?

      It should be on your SQL Server, under Management > Extended Events > Sessions > system_health > package0.event_file. Then you can click on Filter (or CTRL-R) and add a filter for Field name = xml_deadlock_report.

      Ultimately what we're looking for are the XML files, specifically which two queries are deadlocking.

      Here are some screenshots on how to do that:
      https://www.mssqltips.com/sqlservertip/6430/monitor-deadlocks-in-sql-server-with-systemhealth-extended-events/

      Cheers,
      Alana

      posted in Support
      atripp
      atripp