    Inedo Community Forums

    Posts made by atripp

    • RE: ProGet 2025.12 Published date is default value on packages from connector feed

      Hi @d-mischke_3966,

      When you say a "v1 NuGet feed", do you really mean the ancient version (i.e. used from 2010 to 2015)?

      I don't think ProGet has ever supported that if so.

      Ultimately it sounds like there is a problem with the API though -- 01.01.0001 01:00:00 is not a valid date.
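
      For reference, that timestamp is essentially the minimum "unset" date value (the .NET equivalent is DateTime.MinValue; the 01:00:00 is likely just a UTC+1 offset in the display), which is why it reads as a placeholder rather than a real publish date. A quick illustration in Python:

      from datetime import datetime

      # The "default" publish date corresponds to the minimum representable timestamp
      print(datetime.min)  # 0001-01-01 00:00:00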

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Bug: duplicate docker manifests on connected feed when upstream tag updated

      Hi @mayorovp_3701

      This one is tricky and I'm afraid I cannot reproduce this error either.... but I think I can see how it's possible.

      First off, it's definitely possible for two processes to attempt to add the same manifest to the feed at the same time, and I might expect that during a parallel run like you describe.

      However, there is a data integrity constraint (trigger) designed to prevent this from happening. This trigger should yield an "Image_Digest must be unique across the containing feed" error.

      But, looking at the trigger code for PostgreSQL, it looks like we aren't using an advisory lock to completely prevent a race condition.
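
      To sketch the idea (this is not ProGet's code -- the key scheme and the omitted table handling are made up for illustration), an advisory transaction lock keyed on the feed and digest would make the second of two competing uploads wait instead of racing the uniqueness check:

      import hashlib
      import psycopg2  # assumes you can connect to the ProGet PostgreSQL database

      def add_manifest(conn, feed_id, image_digest):
          # Hypothetical scheme: derive a 64-bit advisory lock key from feed + digest
          key = int.from_bytes(
              hashlib.sha1(f"{feed_id}:{image_digest.lower()}".encode()).digest()[:8],
              "big", signed=True)
          with conn, conn.cursor() as cur:
              # Held until the transaction ends; a second caller with the same
              # feed/digest blocks here rather than racing the integrity trigger
              cur.execute("SELECT pg_advisory_xact_lock(%s)", (key,))
              # ... check for an existing manifest and insert only if missing ...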

      Can you confirm that you're using PostgreSQL?

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Signature Packet v3 is not considered secure

      Hi @frei_zs ,

      We are currently working on PG-3110 to add support for "v4 signatures" and intend to release that soon (along with better support for public repositories) in an upcoming maintenance release.

      Unfortunately it's not trivial, as the underlying cryptography library (Bouncy Castle) does not support it, so we have to reimplement signing -- the good news is that it seems to work so far, and is much faster.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Need help to request all package vulnerabilities in ProGet 2024 version

      Hi @fabrice-mejean ,

      It's no longer possible to query this information from the database.

      As you've noticed, ProGet now uses a version range (i.e. AffectedVersions_Text ) to determine whether a package is vulnerable or not. So instead of 4.2.3 it's now 4.2.3-4.2.8 or [4.2.*) or something like that.

      Unfortunately it's not practical/feasible to parse this information unless you were to rewrite a substantial amount of ecosystem-specific parsing logic - this would be basically impossible to do in a SQL query.
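
      To give a sense of why: even the simplest bracket-style range needs version-aware comparison, and every ecosystem has its own syntax. Here's a rough Python sketch (not ProGet's logic, and real NuGet ranges have more forms than this, such as [4.2.*)) of evaluating a range like [4.2.3,4.2.8):

      def parse_version(v):
          return tuple(int(p) for p in v.split("."))

      def in_range(version, range_text):
          lo_inclusive = range_text[0] == "["
          hi_inclusive = range_text[-1] == "]"
          lo, hi = range_text[1:-1].split(",")
          v = parse_version(version)
          if lo:
              lo_v = parse_version(lo)
              if v < lo_v or (v == lo_v and not lo_inclusive):
                  return False
          if hi:
              hi_v = parse_version(hi)
              if v > hi_v or (v == hi_v and not hi_inclusive):
                  return False
          return True

      print(in_range("4.2.5", "[4.2.3,4.2.8)"))  # True
      print(in_range("4.2.8", "[4.2.3,4.2.8)"))  # False -- exclusive upper bound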

      Instead, you'll need to use the upcoming pgutil packages metadata command to see what vulnerabilities a particular package has. You can also use Notifiers to address newly-discovered vulnerabilities.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: [ProGet] Connector Ordering / Precedence in ProGet

      Hi @koksime-yap_5909 ,

      Not necessarily - it really depends on the API query.

      If the query is like "give me a list of all versions of MyPackage", then ProGet will need to aggregate local packages and connector packages to produce that list.

      If the query is "give me the metadata for MyPackage-1.3.1", then the first source that returns a result is used.

      In practice, NuGet asks for "all versions" a lot. So you'll get a lot of queries.
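
      A rough sketch of the two patterns (hypothetical source objects, not ProGet's code; sources here would be the local feed followed by its connectors, in order):

      def all_versions(sources, package_id):
          versions = set()
          for source in sources:
              versions |= set(source.list_versions(package_id))  # aggregate every source
          return sorted(versions)

      def get_metadata(sources, package_id, version):
          for source in sources:
              metadata = source.get_metadata(package_id, version)
              if metadata is not None:
                  return metadata  # first source that returns a result wins
          return None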

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: ProGet - Unsupported Header when Uploading to Pure Storage S3

      @james-woods_8996 we haven't released the extension yet, so it won't be in any builds of the product. We are waiting on someone to test/verify that it works.

      posted in Support
      atripp
    • RE: proget 500 Internal server error when pushing to a proget docker feed

      Hi @thomas_3037 ,

      The specific issue was already fixed; I'm going to close it because whatever you're experiencing is different, and the "history" in this long, winding thread will only make it harder to understand.

      Please "start from the scratch" as if you never read this.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: [ProGet] Connector Ordering / Precedence in ProGet

      Hi @koksime-yap_5909,

      They are ordered by name - so if order is important to you, I guess you could always name them 10-nuget.org, 20-abc.net, or whatever.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Add support for Terraform Public Registry in ProGet (offline/air-gapped)

      Hi @Stholm ,

      I assume you saw the Terraform Modules documentation in ProGet?

      While updating the Support for Terraform Backends to link to this discussion, I noticed we had some internal notes. So I'll transfer them here:

      This would require implementing both the Provider Registry Protocol (for first-party plugins) and the Provider Network Mirror Protocol (for connectors). Both seem relatively simple, though there appear to be some complexities involving signature files.

      In either case, we ought not to package these because they are quite large. For example, the hashicorp\aws provider for Windows is just a zip file with a single 628MB .exe. They also have no metadata whatsoever returned from the API.

      One option is just to store these as manifest-less packages. For example, hashicorp/aws packages could be pkg:tfprovider/hashicorp@5.75.0?os=windows&arch=amd64. This would be two purls in one feed, which might not work, so it might require a new feed.

      Don't ask me what that all means, I'm just the copy/paster 😂

      But based on my read of that, it sounds like a big effort (i.e. a new feed type) to try to fit a round peg in a square hole. And honestly your homebuilt solution might work better.

      I think we'd need to see how much of a demand there is in the offline/air-gapped Terraform userbase for this. But feel free to add more thoughts as you have them.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: ProGet - Unsupported Header when Uploading to Pure Storage S3

      Hi @james-woods_8996 ,

      ProGet uses the AWS SDK for .NET, so I can't imagine environment variables would have any impact. I have no idea what those mean or do, but there's probably a way to configure those in the SDK.

      That said, another user is currently testing a change for Oracle Cloud Infrastructure, which seems to also be giving some kind of hash related error.

      Perhaps it'll work for you? AWS v3.1.4-RC.1 is published to our prerelease feed. You can download and manually install, or update to use the prerelease feed:

      https://docs.inedo.com/docs/proget/administration/extensions#manual-installation

      After installing the extension, the "Disable Payload Signing" option will show up on the advanced tab - and that property will be forwarded to the Put Request. In theory that will work, at least according to the one post from above.

      One other thing to test would be uploading Assets (i.e. creating an Asset Directory) via the Web UI. That is the easiest way to do multi-part upload testing.

      If it doesn't work, then we can try to research how else to change the extension to get it working.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Deploy inedo agent with gMSA

      Hi @philippe-camelio_3885,

      Can you clarify what you've tried to date, and the issues you've faced?

      You can silently install the Inedo Agent, or even use a Manual Installation process if you'd prefer.

      Ultimately it's a standard Windows service, and you can change the account from LOCAL SYSTEM (which we recommend, by the way) to another account using sc.exe or other tools.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: ProGet 2025.10: License Update API Issues

      Hi @jw ,

      Technically it's doable, though it's not trivial due to the number of places the change would need to be made and tested... ProGet API, pgutil, docs.

      The code/title change itself looks trivial (i.e. just pass in External_Id and Title_Text to the call to Licenses_UpdateLicenseData), though I'm not totally clear what to do about the other one. What does pgutil send in? Null? []? Etc.

      As a request from a free/community user, this isn't all that easy to prioritize... but if you could do the heavy lifting on the docs and pgutil (i.e. submit a PR), and give us a script or set of pgutil commands that we can just run against a local instance... I'm like 95% sure I can make the API change in 5 minutes.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Apply license key inside container

      Hi @jlarionov_2030 , easy change! We will no longer require a valid license for the Settings API endpoint going forward; PG-3133 will ship in the next maintenance release, scheduled for Friday.

      posted in Support
      atripp
    • RE: Deletion of C:\Program Files\ProGet\Service\postgres\bin directory

      Hi @tobe-burce_3659 ,

      We do not support deleting or modifying any of the contents under the program directory; they will simply return when you upgrade and it may cause other problems.

      Instead, please create an exception; we are aware of the vulnerabilities in the libraries that PostgreSQL uses and can assure you that they are false positives and will have no impact on ProGet... even if you were to be using PostgreSQL.

      Using virus/malware tools to scan/monitor ProGet's operation causes lots of problems, as these tools interfere with file operations and cause big headaches.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Support for gpg in rpm feed

      Hi @Sigve-opedal_6476 ,

      No idea why it wouldn't work, but I would look at something like ProxyMan or Wireshark to capture HTTP traffic, and see what requests are different.

      You should see a pattern of requests that work, and a pattern that doesn't.

      Maybe the client is requesting some other file that you aren't uploading? I don't think there's an API or HTTP header... I think it's all basic GET requests. But that will tell you the delta.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: pgutil doesn't support nuget lock files to generate sbom

      Hi @fabrice-mejean ,

      I definitely understand where you're coming from.... both commands basically work off the assets file, which is generated at build time.

      But your workflow is not common... the standard for SBOM generation is post-build. Doing pre-build checking requires that packages.lock.json is used, which not many projects do... it's hard for us to advocate this workflow when most users don't care about saving time at this stage.

      I know we could add a "switch" or something to pgutil, but we learned "the hard way" that adding lots of complex alternative/branching paths to pgscan made the code very difficult to maintain and understand, so we want to keep the utility as simple as possible.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Request for Creation of API for Package Auditing Before Dependency Restoration

      Hi @pmsensi,

      Correct -- it'll be whatever data is on the "Dependencies" tab in ProGet, which is basically whatever is in the manifest file (.nuspec, etc).

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Request for Creation of API for Package Auditing Before Dependency Restoration

      Hi @fabrice-mejean @pmsensi ,

      We've got this spec'd out and on the roadmap now as PG-3126! It'll come through a maintenance release, along with pgutil security commands for configuring users, groups, and tasks.

      The target is 2025.13, which is planned for October 24. I don't know if we'll hit that target, but that's what we're aiming for.

      Please check out the specs on PG-3126; I think it captures what you're looking for, which is basically an expanded metadata object that includes compliance data, detected licenses, and vulnerabilities.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: pgutil doesn't support nuget lock files to generate sbom

      Hi @fabrice-mejean,

      Using packages.lock.json seemed to make the most sense to us too, but ultimately we decided not to use it for a few reasons.

      First and foremost, none of the other .NET SBOM-generators seemed to use the packages.lock.json file. That's usually a sign that there's a "good reason" for us not to either.

      From our perspective, pgutil builds scan is intended to be used in a CI environment, where dotnet build is run anyway and the assets file is already present. We don't have a use-case for an alternative workflow, where a build is not actually run.

      In addition, packages.lock.json is still pretty niche and not widely used. You have to "go out of your way" to use it, and <PackageReference .../> is by far the most common approach. It might be worth monitoring Issue #658 at CycloneDX/cyclonedx-dotnet to see if anyone picks it up there.

      Technically it's not all that complex to do, but it adds complexity and confusion... especially since most users will not be familiar with the differences between the lock and asset file. So it's not a good fit for pgutil builds scan.

      HOWEVER, you could probably ask ChatGPT to write a trivial PowerShell script that "transforms" a lock file into a minimal SBOM document, and tweak it for what you want in ProGet. That same script could just upload the file to ProGet, or use pgutil as well.
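
      For example, here's a rough sketch of that idea in Python rather than PowerShell -- it reads the resolved versions out of packages.lock.json and emits a minimal CycloneDX-style document (treat the exact field mapping as an assumption to verify against your ProGet instance):

      import json, sys

      def lock_to_sbom(lock_path):
          with open(lock_path) as f:
              lock = json.load(f)
          components = {}
          # packages.lock.json groups dependencies by target framework
          for framework_deps in lock.get("dependencies", {}).values():
              for name, info in framework_deps.items():
                  version = info.get("resolved")
                  if version:
                      components[(name, version)] = {
                          "type": "library",
                          "name": name,
                          "version": version,
                          "purl": f"pkg:nuget/{name}@{version}",
                      }
          return {"bomFormat": "CycloneDX", "specVersion": "1.4", "version": 1,
                  "components": list(components.values())}

      if __name__ == "__main__":
          print(json.dumps(lock_to_sbom(sys.argv[1]), indent=2))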

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: pgutil brew package

      Hi @layfield_8963 ,

      Thanks that makes sense -- since you've already got a personal repo going, I think it makes sense to stick with that for now. If other users are interested, we can explore it.

      We publish pgutil pretty regularly, so we'd need to automate updating that repository, and that's just one more script to write and one more thing to break later :)

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Conan License detection issue

      Hi @it_9582 ,

      This is a known issue / UI quirk with Conan packages, and hopefully should only impact that one page in the UI.

      To be honest I don't quite get the issue, but it has something to do with the fact that a Conan package is actually a "set of packages that share a name and version". Each package in the set can define its own license file.

      The particular page was never really designed for "package sets" so the display is a little weird. It's a nontrivial effort to fix and would obviously impact all other package types, so it's not a priority at the moment.

      We would love to redo the UI at some point, so I think it'd make sense to do it then.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Support for gpg in rpm feed

      Hi @Sigve-opedal_6476 ,

      Thanks for clarifying; I should have mentioned that I know basically nothing about rpm except how a repository works 😅

      A gpgkey is just a file, right? I think you were on the right track with using an asset directory. I guess you would configure things like this, for an asset directory called gpg with the key stored at docker/gpg:

      gpgkey=https://myproget.mycompany.com/endpoints/gpg/content/docker/gpg
      

      There's really no "timestamp" on web-based files, the browser/protocol just does a GET and the bytes are returned. I can't imagine rpm is looking at a modified cache header.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Importing Conan Packages empty executions

      Hi @aristo_4359 ,

      I researched this a little further, and I'm afraid Conan Packages cannot be imported from Artifactory at this time; they do not behave like other repositories in Artifactory, which means it's a nontrivial effort to figure out how to get these imported using a different/alternative Artifactory API.

      We will clarify this in an upcoming ProGet release via PG-3122.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Support for gpg in rpm feed

      Hi @Sigve-opedal_6476,

      I'm not really sure what you mean by this request.

      An RPM repository is basically just a "dumb file server" with repodata.xml and a bunch of index.tar.gz files. The rpm client downloads these files and does all the gpg stuff.

      ProGet feeds implement an RPM repository and generate these indexes on the fly... but to the client, it seems like it's just downloading static files.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: pgutil brew package

      Hi @layfield_8963 ,

      About the only knowledge of "homebrew" I have is that it's some kind of thing on Mac, perhaps like apt or chocolatey. I think we'd normally be all about hosting a "cask" or "keg", but I don't think that's what you're asking 🍺😂

      It doesn't make sense for us to try supporting an ecosystem we know so little about. That's the same reason we never did a Chocolatey package on our own, but our friend @steviecoaster from Chocolatey created/maintains the proget package at chocolatey.org, and that has worked out just fine.

      Anyway if you just need "something simple" from us like accepting a (simple) pull request or editing a build/deploy script, we might be able to do that. But otherwise it won't make sense for us to invest in learning about brew and supporting it.

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: The ConnectionString property has not been initialized

      Hi @tyler_5201

      The underlying error is that there is no connection string, as you noticed.

      The connection string is stored in a file (/var/proget/database/.pgsqlconn) that should be accessible to the container. I haven't tested it, but if that file is missing or deleted, I suppose you might run into these issues.

      It should be created on startup of a new container, however. So it's kind of weird. I think you'll want to "play" with it a bit, since I'm thinking there's clearly something going on with your permissions.

      Note that the connection string can also be specified as an environment variable, but I don't think that applies here since you're trying to configure the embedded database:
      https://docs.inedo.com/docs/installation/linux/docker-guide#supported-environment-variables

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: npm connector returns 400

      @udi-moshe_0021 sounds like it was a temporary outage on npmjs.org or perhaps even your proxy server. I wouldn't worry about it if it's working now, since it's not something you could really control anyway.

      posted in Support
      atripp
    • RE: Getting 500, "Could not find stored procedure 'Security_GetRoles', but /health show no errors

      Hi @carl-westman_8110 ,

      The error message means that the database wasn't updated as per normal during the start-up process. It's hard to guess why, as we have special handling for that.

      It's likely that restarting the service would have fixed it, but downgrading and then upgrading would also have forced the update. Unfortunately it's hard to say at this point.

      Upgrading to 2025.10 should be fine.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Packages with Noncanonical Names errors on internalized packages

      Hi @jfullmer_7346,

      It's nothing you did.

      The underlying issue is that a bug in ProGet allowed WinSCP (ID=563) to be added to the PackageNameIds table; that should never have happened, since NuGet package names are case-insensitive. We've since fixed that bug.

      However, once you have a duplicate name, weird things happen since querying for "give me the PackageID for nuget://winscp" returns two results instead of one. So now when you query "give me the VersionID for (ID=563)-v6.5.3", a new entry is created.
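
      Here's a toy model of that behavior (hypothetical code, not ProGet's schema): if the name index accidentally keys on exact casing instead of a normalized name, the same package gets two IDs, and version lookups then inherit the split:

      name_ids = {}  # raw name -> id; the fix is to key on name.lower() instead

      def get_name_id(name):
          if name not in name_ids:             # case-sensitive lookup = the bug
              name_ids[name] = len(name_ids) + 1
          return name_ids[name]

      def get_version_id(name, version):
          return (get_name_id(name), version)  # duplicates fan out to versions too

      print(get_name_id("winscp"))              # 1
      print(get_name_id("WinSCP"))              # 2 -- duplicate entry, same package
      print(get_version_id("WinSCP", "6.5.3"))  # (2, '6.5.3') -- a second version row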

      This has been a very long-standing issue, but there aren't any major consequences to these "weird things" except odd casing in the UI and the health check now failing.

      But we're on it :)

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Packages with Noncanonical Names errors on internalized packages

      Hi @jfullmer_7346 ,

      I haven't had a chance to look into more details, but thanks for providing the results of the query.

      FYI - the PackageNameIds and PackageVersionIds tables are designed as a kind of "permanent, read-only record" -- once added, entries are not deleted or modified, even if all packages are deleted (i.e. from FeedPackageVersions). This is why the "duplicate name" is such a headache to deal with.

      That said, on a quick glance, we can see exactly where the error is coming from: there are duplicate versions (i.e. (ID=563)-v6.5.3 and (ID=562)-v6.5.3). So, when we try to deduplicate (ID=563) and (ID=562) (i.e. winscp and WinSCP), we get the error as expected.

      What's not expected is that those versions were not de-duplicated in the earlier pass. My guess is that it's related to winscp being in one feed and WinSCP being in the other -- we tried to be conservative, and keep the de-duplication to packages related to the feed.

      I'm thinking we just change that logic to "all packages of the feed type". Anyway, please stay tuned. We'll try to get it in the next maintenance release.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Packages with Noncanonical Names errors on internalized packages

      @jfullmer_7346 thanks for giving it a shot, we'll take a closer look!

      The "good news" is that the error message is a "sanity check" failure, so now have an idea of what's causing the error:

          -- Sanity Check (ensure there are no duplicate versions)
          IF EXISTS (
              SELECT * 
               FROM "PackageVersionIds" PV_D,
                    "PackageVersionIds" PV_C
              WHERE PV_D."PackageName_Id" = "@Duplicate_PackageName_Id"
                AND PV_C."PackageName_Id" = "@Canonical_PackageName_Id"
                AND PV_D."Package_Version" = PV_C."Package_Version"
                AND (   (PV_D."Qualifier_Text" IS NULL AND PV_C."Qualifier_Text" IS NULL)
                     OR (PV_D."Qualifier_Text" = PV_C."Qualifier_Text") )
             ) THEN RAISE EXCEPTION 'Cannot deduplicate given nameid'; RETURN; END IF;
      

      In this case, it's saying that there are "duplicate versions" remaining (i.e. WinSCP-1.0.0 and winscp-1.0.0). Those should have been de-duplicated earlier. I wonder if the PackageVersionIds_GetDuplicates() function is not returning the right results.

      I'm not sure what your experience w/ PostgreSQL is, but are you able to query the embedded database? If not, that's fine... it's not meant to be easy to query.

      Also, should the integrity check be taking 30 minutes?

      Maybe. The integrity check needs to verify file hashes, so that involves opening and streaming through all the files on disk. So when you have a lot of large packages, then it's gonna take a while.

      posted in Support
      atripp
    • RE: Unable to download io.r2dbc:r2dbc-bom:pom:Borca-SR2 from ProGet feed.

      Hi @bohdan-cech_2403,

      ProGet can handle most invalid Maven version numbers, but Borca-SR2 is really invalid and isn't currently supported. Our "name vs version" parsing requires that versions start with numbers and artifacts start with letters. This has been a Maven rule for 20+ years now.
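
      As a rough illustration of that rule (a sketch, not ProGet's actual parser): splitting a coordinate like r2dbc-bom-1.0.0.RELEASE works because the version segment starts with a digit, while Borca-SR2 never does:

      def split_artifact_version(file_base):
          parts = file_base.split("-")
          for i, part in enumerate(parts[1:], start=1):
              if part[:1].isdigit():  # versions must start with a number
                  return "-".join(parts[:i]), "-".join(parts[i:])
          return file_base, None      # no digit-leading segment: version not found

      print(split_artifact_version("r2dbc-bom-1.0.0.RELEASE"))  # ('r2dbc-bom', '1.0.0.RELEASE')
      print(split_artifact_version("r2dbc-bom-Borca-SR2"))      # ('r2dbc-bom-Borca-SR2', None)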

      It's nontrivial and quite risky to change our parsing logic, so it's not something we're keen on doing in a maintenance release. This scenario seems to be very rare, impacting ancient artifacts and a few cases where authors didn't use Maven to deploy the artifacts.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: maven Checksum validation failed, no checksums available

      @uli_2533 thanks for the additional info, great find in the source code too!!

      On our end, I was looking at the PUT code, where a 301 kind of made sense. I think it must have been some kind of regression on the GET request? Not sure why it didn't get noticed before, but it's a trivial fix.

      PG-3108 will be in the next maintenance release (Sep 19)... or if you want to try it now, it's in inedo/proget:25.0.10-ci.9 Docker image.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: maven Checksum validation failed, no checksums available

      Hi @bohdan-cech_2403 ,

      I'm not sure if that's the issue...

      Returning a 201 has been the behavior for as long as we've had the feed (even the old version of the feed). The official Maven client does not seem to complain or cause any error in our testing, and no other user has reported it as a problem.

      Any idea why it's happening "all of a sudden" for you? Is there a new version of Maven or something?

      FYI the PUT uploads for hash files are ignored and a 201 is always returned.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Machine ID changes after restart

      @jorgen-nilsson_1299 said in Machine ID changes after restart:

      What is the Machine ID based on and how can I trouble shoot this? Any way to set a static Machine ID?

      The Machine ID is based on the CPU Vendor ID, Machine Name (host name on Docker), and OS Version.
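
      To illustrate the implication (this is not Inedo's actual algorithm, just a sketch of the concept): presumably the ID is derived from those values, so if the host name or OS version changes between restarts, the Machine ID changes with it.

      import hashlib
      import platform

      def machine_id(cpu_vendor: str, machine_name: str, os_version: str) -> str:
          material = "|".join([cpu_vendor, machine_name, os_version])
          return hashlib.sha256(material.encode("utf-8")).hexdigest()[:16]

      # Hypothetical inputs -- any change to one of them yields a different ID
      print(machine_id("GenuineIntel", platform.node(), platform.version()))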

      Hopefully @felfert gave some advice on how to make sure those don't change.

      posted in Support
      atripp
    • RE: proget 500 Internal server error when pushing to a proget docker feed

      @pariv_0352 the code is not fixed in 25.0.9

      However, inedo/proget:25.0.10-ci.5 will have the new code that should prevent this error

      posted in Support
      atripp
    • RE: Packages with Noncanonical Names errors on internalized packages

      @jfullmer_7346 thanks! As an FYI...

      • Names and Versions are centrally indexed, and were intended to be "write-only" by design
      • NuGet does not have case-sensitive names, but some earlier bugs allowed duplicate names to be created
      • we are adding Duplicate Names (e.g. winscp and WinSCP) and Duplicate Versions (e.g. winscp-4.0.0 and WinSCP-4.0.0) checks to the feed integrity check
      • When re-indexing a feed, you'll get an option to de-duplicate names/versions, which will fix it across all feeds
      • Pulling or Publishing a package will update the casing of a centrally-indexed name
      • Also we're getting rid of the concept of "Noncanonical Names" altogether, since we've discovered many NuGet packages "change casing" at some version
      posted in Support
      atripp
    • RE: Proget 25.0.9

      Hi @pmsensi ,

      Thanks for the heads-up; looks like there was a replication issue with one of our edge nodes. It should be there now

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: proget - Enhancement request: show value of X-Forwarded-For header

      Thanks @felfert it was an easy add, so look out for PG-3106 in the next maintenance release!

      posted in Support
      atripp
    • RE: `pgutil assets metadata get` fails when filename contain spaces

      Hi @mmaharjan_0067

      The "unexpected argument" running without quotes is expected, but it works fine when I run with quotes. I'm afraid I can't reproduce this.

      I would check under Admin > Diagnostic Center to see if anything is logged. Alternatively, you may need to query the API directly by doing something like this:

      curl http://server:8624/endpoints/Test/metadata/metadata-test/v1/1%20-%20Normal.txt
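
      The %20 sequences are just standard URL encoding of the spaces in the file name, e.g.:

      from urllib.parse import quote

      print(quote("1 - Normal.txt"))  # 1%20-%20Normal.txt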
      

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: ERROR while migrating maven repository from Jfrog Artifactory.

      Hi @bohdan-cech_2403 ,

      Thanks for sharing that; I can confirm we received it, reproduced it, and fixed it (PG-3105). If you'd like to try it, you can get the fix in 25.0.10-ci.2 - otherwise it'll be in the next maintenance release (next Friday).

      @wechselberg-nisboerge_3629 FYI I entered the url/username/password into ProGet, but I got the message "Failed to find any registries." So, I tried with curl and I got this:

      $> curl "https://artifactory.REDACTED.com/artifactory/api/repositories?type=local" --user support
      Enter host password for user 'support':
      {
        "errors" : [ {
          "status" : 401,
          "message" : "Artifactory configured to accept only encrypted passwords but received a clear text password, getting the encrypted password can be done via the WebUI."
        } ]
      }
      

      I have no idea what explains the different behavior. Anyway, I logged into the portal on that URL and generated some kind of key, and it let me connect.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: proget - Enhancement request: show value of X-Forwarded-For header

      Hi @felfert,

      So far as I can tell, the IP isn't currently logged in these messages... I can see how that would be helpful.

      I can certainly do that (which would then show the X-Forwarded-For value when available), but I wanted to make sure I'm looking in the right place, because I don't see IP info there now.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: proget 500 Internal server error when pushing to a proget docker feed

      Hi @felfert ,

      As an update, we are planning on this pattern instead of row/table locking (PG-3104). It gives us a lot more control and makes it a lot easier to avoid deadlocks.

      I still can't reproduce the issue, but I see no reason this won't work.

      CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
      (
          "@Feed_Id" INT,
          "@Blob_Digest" VARCHAR(128),
          "@Blob_Size" BIGINT,
          "@MediaType_Name" VARCHAR(255) = NULL,
          "@Cached_Indicator" BOOLEAN = NULL,
          "@Download_Count" INT = NULL,
          "@DockerBlob_Id" INT = NULL
      )
      RETURNS INT
      LANGUAGE plpgsql
      AS $$
      BEGIN
      
      	-- avoid race condition when two procs call at exact same time
      	PERFORM PG_ADVISORY_XACT_LOCK(HASHTEXT(CONCAT_WS('DockerBlobs_CreateOrUpdateBlob', "@Feed_Id", LOWER("@Blob_Digest"))));
      
          SELECT "DockerBlob_Id"
            INTO "@DockerBlob_Id"
            FROM "DockerBlobs"
           WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
             AND "Blob_Digest" = "@Blob_Digest";
      
          WITH updated AS
          (
              UPDATE "DockerBlobs"
                 SET "Blob_Size" = "@Blob_Size",
                     "MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
                     "Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
               WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL)) 
                 AND "Blob_Digest" = "@Blob_Digest"
              RETURNING *
          )        
          INSERT INTO "DockerBlobs"
          (
              "Feed_Id",
              "Blob_Digest",
              "Download_Count",
              "Blob_Size",
              "MediaType_Name",
              "Cached_Indicator"
          )
          SELECT
              "@Feed_Id",
              "@Blob_Digest",
              COALESCE("@Download_Count", 0),
              "@Blob_Size",
              "@MediaType_Name",
              COALESCE("@Cached_Indicator", 'N')
          WHERE NOT EXISTS (SELECT * FROM updated)
          RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";
      
          RETURN "@DockerBlob_Id";
      
      END $$;
      
      posted in Support
      atripp
    • RE: proget 500 Internal server error when pushing to a proget docker feed

      @felfert amazing!! That script will come in handy when we need to help users patch their instance; we can also try to add something that allows you to patch via the UI as well!!

      posted in Support
      atripp
    • RE: proget 500 Internal server error when pushing to a proget docker feed

      @felfert thanks for confirming!!

      FYI the fix has not been applied to the code yet, but you can patch the stored procedure (painfully) as a workaround for now. We will try to find a better solution. The only thing I can imagine happening is that the PUT is happening immediately after the PATCH finishes, but before the client receives a 200 response. I have no idea though.

      We'll figure something out, now that we know where it is thanks to your help!!

      posted in Support
      atripp
    • RE: Proget maven migration

      Hi @parthu-reddy,

      ProGet 2025 supports existing Maven classic feeds; you should be able to migrate just as you were in ProGet 2024.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Not able to upload .spd files to proget assets

      Hi @parthu-reddy ,

      Since these are network-level errors, you would need to use a tool like Wireshark or another packet analyzer to troubleshoot these kind of connectivity failures.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: proget 500 Internal server error when pushing to a proget docker feed

      Did that, verified that the function actually has changed and did another test. Unfortunately this did not help, error was exactly the same like in my above wireshark dump.
      Or does one have to "compile" the function somehow after replacing? (I never dealt with SQL functions before and in general have very limited SQL knowledge.)

      Ah, that's a shame! We're kind of new to "patching" functions like this in PostgreSQL, but I think that should have worked to change the code. And the code change itself should have worked, too.

      If you don't mind, please try one other patch, where we select out the Blob_Id again at the end:

      CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
      (
          "@Feed_Id" INT,
          "@Blob_Digest" VARCHAR(128),
          "@Blob_Size" BIGINT,
          "@MediaType_Name" VARCHAR(255) = NULL,
          "@Cached_Indicator" BOOLEAN = NULL,
          "@Download_Count" INT = NULL,
          "@DockerBlob_Id" INT = NULL
      )
      RETURNS INT
      LANGUAGE plpgsql
      AS $$
      BEGIN
      
          SELECT "DockerBlob_Id"
            INTO "@DockerBlob_Id"
            FROM "DockerBlobs"
           WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
             AND "Blob_Digest" = "@Blob_Digest"
         FOR UPDATE;
      
          WITH updated AS
          (
              UPDATE "DockerBlobs"
                 SET "Blob_Size" = "@Blob_Size",
                     "MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
                     "Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
               WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL)) 
                 AND "Blob_Digest" = "@Blob_Digest"
              RETURNING *
          )        
          INSERT INTO "DockerBlobs"
          (
              "Feed_Id",
              "Blob_Digest",
              "Download_Count",
              "Blob_Size",
              "MediaType_Name",
              "Cached_Indicator"
          )
          SELECT
              "@Feed_Id",
              "@Blob_Digest",
              COALESCE("@Download_Count", 0),
              "@Blob_Size",
              "@MediaType_Name",
              COALESCE("@Cached_Indicator", 'N')
          WHERE NOT EXISTS (SELECT * FROM updated)
          RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";
      
          SELECT "DockerBlob_Id"
            INTO "@DockerBlob_Id"
            FROM "DockerBlobs"
           WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
             AND "Blob_Digest" = "@Blob_Digest"
      
          RETURN "@DockerBlob_Id";
      
      END $$;
      

      If this doesn't do the trick, I think we need to look a lot closer.

      posted in Support
      atripp
    • RE: proget 500 Internal server error when pushing to a proget docker feed

      Hi @inedo_1308 ,

      I forgot how it worked in the preview migration, but the connection string is stored in the database directory (/var/proget/database/.pgsqlconn).

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: proget 500 Internal server error when pushing to a proget docker feed

      @inedo_1308 sounds good!

      The code would almost certainly be the same, since it hasn't been updated since we did the PostgreSQL version of the script.

      So, I think it's a race condition, though I don't know how it would happen. However, if it's a race condition, then it should be solved with an UPDLOCK (or whatever) in PostgreSQL.

      1. SELECT finds no matching blob in the database (thus DockerBlob_Id is null)
      2. ... small delay ... meanwhile another process inserts the blob
      3. UPDATE finds the matching blob because it was just added (thus a row ends up in updated)
      4. INSERT does not run, because there is a row in updated
      5. A NULL DockerBlob_Id is returned

      If you're able to patch the procedure, could you add FOR UPDATE as follows? We are still relatively new to PostgreSQL, so I don't know if this is the right way to do it in this case.

      I think a second SELECT could also work, but I dunno.

      CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
      (
          "@Feed_Id" INT,
          "@Blob_Digest" VARCHAR(128),
          "@Blob_Size" BIGINT,
          "@MediaType_Name" VARCHAR(255) = NULL,
          "@Cached_Indicator" BOOLEAN = NULL,
          "@Download_Count" INT = NULL,
          "@DockerBlob_Id" INT = NULL
      )
      RETURNS INT
      LANGUAGE plpgsql
      AS $$
      BEGIN
      
          SELECT "DockerBlob_Id"
            INTO "@DockerBlob_Id"
            FROM "DockerBlobs"
           WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
             AND "Blob_Digest" = "@Blob_Digest"
         FOR UPDATE;
      
          WITH updated AS
          (
              UPDATE "DockerBlobs"
                 SET "Blob_Size" = "@Blob_Size",
                     "MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
                     "Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
               WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL)) 
                 AND "Blob_Digest" = "@Blob_Digest"
              RETURNING *
          )        
          INSERT INTO "DockerBlobs"
          (
              "Feed_Id",
              "Blob_Digest",
              "Download_Count",
              "Blob_Size",
              "MediaType_Name",
              "Cached_Indicator"
          )
          SELECT
              "@Feed_Id",
              "@Blob_Digest",
              COALESCE("@Download_Count", 0),
              "@Blob_Size",
              "@MediaType_Name",
              COALESCE("@Cached_Indicator", 'N')
          WHERE NOT EXISTS (SELECT * FROM updated)
          RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";
      
          RETURN "@DockerBlob_Id";
      
      END $$;
      
      posted in Support
      atripp