    Inedo Community Forums

    Posts made by atripp

    • RE: nginx: subfolder location setup

      Hi @andreas_9392 ,

      That configuration is not supported and will not work; you'll need to configure https://proget.mycompany.com/ or use a port.

      Thanks,
      Alana

      posted in Support
    • RE: HTTP 500 When pushing docker image

      @wechselberg-nisboerge_3629 great news, thanks! Well, it'll be in the upcoming release (2025.13) in that case :)

      posted in Support
    • RE: HTTP 500 When pushing docker image

      @wechselberg-nisboerge_3629 can you check it out again? Should be there now :)

      posted in Support
    • RE: HTTP 500 When pushing docker image

      Hi @wechselberg-nisboerge_3629 ,

      Thanks for sharing that; sadly I'm still at a total loss here 🙄

      But I did make a change that I think should work, or at least give us a different error.... can you try upgrading to inedo/proget:25.0.14-ci.7?

      The change is in that build. Of course, you can easily downgrade later.

      Thanks,
      Alana

      posted in Support
    • RE: [Buildmaster] - SshException: Unable to send channel request

      Hi @Anthony ,

      When you use SHCall, it's translated into a remote SSH command that includes all arguments inline on the shell. Basically something like ssh user@host bash -c '...'

      However, there is an OS-enforced limit on how long this can be, which is typically between ~32K and ~64K characters. It looks like you're hitting that limit exactly, and you may be able to see it with getconf ARG_MAX. Note that you would also get this error if you did ssh user@host bash -c 'echo "Really long....."'.

      So bottom line -- this is an OS/SSH limit. To work around it, you can just write out $arg to a file and have your script read in that file.
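
      As a plain-shell illustration of that workaround (the file names, paths, and script here are hypothetical, not BuildMaster specifics), the idea is:

      # write the long argument to a file and copy it to the server,
      # then have the remote script read the file instead of taking it inline
      printf '%s' "$arg" > /tmp/myargs.txt
      scp /tmp/myargs.txt user@host:/tmp/myargs.txt
      ssh user@host 'bash /path/to/myscript.sh /tmp/myargs.txt'

      # inside myscript.sh (hypothetical):
      #   arg=$(cat "$1")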

      Thanks,
      Alana

      posted in Support
    • RE: HTTP 500 When pushing docker image

      Thanks @wechselberg-nisboerge_3629, exactly what I was looking for!

      Can you provide some more information about this image? Basically I'm trying to find the layer / mediatype / size. I believe these commands will do it:

      docker image inspect vl-dev-repo.ki.lan/sst-coco-oci-prod/sub-coco-cli:test --format '{{json .RootFS.Layers}}' | jq .
      
      docker history vl-dev-repo.ki.lan/sst-coco-oci-prod/sub-coco-cli:test
      

      Thanks,
      Alana

      posted in Support
    • RE: [ProGet] Questions about configuring and behavior of self-connectors

      Hi @koksime-yap_5909 ,

      [1] I would do localhost, to reduce network traffic; a lot of the time, "loopback" connections are handled in software and never make it to the network hardware

      [2] Multiple copies of the package are stored

      In general, you should use data-deduplication anyway. Even without self-connectors, we've seen a 90% reduction in space due to the nature of what's stored (i.e. nearly identical versions).

      Thanks,
      Alana

      posted in Support
    • RE: HTTP 500 When pushing docker image

      @wechselberg-nisboerge_3629 the main thing I'm looking for is the HTTP access logs - we have 1.5 entries before (PATCH-finish, PUT-start, PUT-finish), so seeing more would be really helpful.

      What's odd is seeing the "retrying..."

      posted in Support
    • RE: HTTP 500 When pushing docker image

      @wechselberg-nisboerge_3629 thanks for confirming!

      Any chance you can get more entries from the container log? It'd be really helpful to see more requests going back/forth. This is just such a strange behavior given the seeming simplicity of your image.

      Also, there should be an option, in ProGet 2025.12, to enable web logging (Admin > HTTPS Logging); it's a brand new feature, but it writes logs to a log file.

      posted in Support
    • RE: Bug: duplicate docker manifests on connected feed when upstream tag updated

      Hi @mayorovp_3701 ,

      Thanks for confirming; we will try to get this fixed in the next maintenance release via PG-3139 -- the underlying issue is most likely a race condition on that trigger I mentioned, so we're going to fix it by adding an advisory lock.

      It will not remove the duplicate content, but in theory deleting the image will.

      Thanks,
      Alana

      posted in Support
    • RE: [ProGet] Unexpected redirect when accessing Maven package with non-standard version starting with a character

      Hi @koksime-yap_5909 ,

      I'm afraid this is a known limitation with Maven feeds; we made the assumption that package authors would follow the bare-minimum of Maven versioning: packages start with letters, versions start with numbers.

      The only examples we found that were counter to that were 20+ year old artifacts; however, we've since learned that authors still mistakenly use these incorrect versions.

      Unfortunately, supporting these types of versions requires a complex/risky change.

      Maven is a file-based API and the client just GETs/PUTs files. However, ProGet is not a file server so we need to actually parse the URLs to figure out which artifact/package the files refer to. In this case, we parse package-alpha-0.1 as package-alpha (version 0.1), not package (version alpha-0.1). Hence, why it's not working.

      If these are your internal packages, the easiest solution is to follow the standard:
      https://docs.oracle.com/middleware/1212/core/MAVEN/maven_version.htm

      Thanks,
      Alana

      posted in Support
    • RE: HTTP 500 When pushing docker image

      Hi @wechselberg-nisboerge_3629 ,

      This is definitely a strange error; are you using PostgreSQL by chance?

      I'm seeing 53babe930602: Retrying... a few times. Is this consistently happening with this layer? Is there anything special about it (big, small, etc)?

      Thanks,
      Alana

      posted in Support
    • RE: Support for gpg in rpm feed

      @Sigve-opedal_6476 great news! Thanks for the update

      posted in Support
    • RE: Violation of UNIQUE KEY constraint 'UQ__PackageNameIds'.

      Hi @pawel-ostrowski_5669,

      Actually the version you're using (2025.8) has a known regression relating to these PackageIds that we already fixed via PG-3097. I didn't realize it until you told me the version and I found that issue.

      Anyway, please try the latest version of ProGet 2025; it should work.

      Thanks,
      Alana

      posted in Support
    • RE: Violation of UNIQUE KEY constraint 'UQ__PackageNameIds'.

      Hi @parthu-reddy ,

      I'm not sure what version you're using, but the latest version has a checkbox on the reindex function to delete duplicate ids/names when running the reindex job.

      I would try that -- note you may have to run it twice. This issue is fairly complicated, and it's hard to fix without working against exports/backups of user databases.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet 2025.12 Published date is default value on packages from connector feed

      Hi @d-mischke_3966 ,

      I don't really know... based on the url it does look like V1 to me, but the response looks like the V2 API (it's an OData/RSS style). I don't know what V1 looks like; it's before all of our times :)

      Anyway, I would ask them to investigate the issue. V2 has been deprecated for over 5 years now, so we don't want to make changes, especially to work around a "bad" third-party feed.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet 2025.10: License Update API Issues

      Hi @jw ,

      Whoops, sorry, I keep forgetting - it shows up differently on our end. Let me know if you'd like me to update your email on the forums, so you can log in with your company account. It's fine either way, but we might forget again -- it shows up as free/community user on our dashboard 😅

      Anyway in that case, sure we can prioritize this for you!

      I guess in the end you guys need to sort out the question of whether you want to support partial updates.

      Since you're the first user who's requested this... we'll go with what you suggested. That makes sense to me. I just made this (PG-3137) since it was trivial:

      curl -X POST -H "Content-Type: application/json" -d "{\"id\": 1, \"code\": \"0BSD\", \"title\": \"Zero BSD\"}" http://localhost:8624/api/licenses/update
      
      curl -X POST -H "Content-Type: application/json" -d "{\"id\": 1, \"code\": \"0BSD-X\", \"title\": \"XXXZero BSD\"}" http://localhost:8624/api/licenses/update
      
      curl -X POST -H "Content-Type: application/json" -d "{\"id\": 1, \"code\": \"0BSD\", \"title\": \"Zero BSD\", \"spdx\": [\"0BSD\", \"0BSD1\"]}" http://localhost:8624/api/licenses/update
      
      curl -X POST -H "Content-Type: application/json" -d "{\"id\": 1, \"code\": \"0BSD\", \"title\": \"Zero BSD\", \"spdx\": []}" http://localhost:8624/api/licenses/update
      

      Hardest part, by far, was getting the curl commands figured out 😅

      Thanks,
      Alana

      posted in Support
    • RE: ProGet 2025.12 Published date is default value on packages from connector feed

      Hi @d-mischke_3966,

      When you say a "v1 NuGet feed", do you really mean the ancient version (i.e. used from 2010 to 2015)?

      I don't think ProGet has ever supported that if so.

      Ultimately it sounds like there is a problem with the API though -- 01.01.0001 01:00:00 is not a valid date.

      Thanks,
      Alana

      posted in Support
    • RE: Bug: duplicate docker manifests on connected feed when upstream tag updated

      Hi @mayorovp_3701

      This one is tricky and I'm afraid I cannot reproduce this error either.... but I think I can see how it's possible.

      First off, it's definitely possible for two processes to attempt to add the same manifest to the feed at the same time, and I might expect that during a parallel run like you describe.

      However, there is a data integrity constraint (trigger) designed to prevent this from happening. This trigger should yield an Image_Digest must be unique across the containing feed error.

      But, looking at the trigger code for PostgreSQL, it looks like we aren't using an advisory lock to completely prevent a race condition.

      Can you confirm that you're using PostgreSQL?

      Thanks,
      Alana

      posted in Support
    • RE: Signature Packet v3 is not considered secure

      Hi @frei_zs ,

      We are currently working on PG-3110 to add support for "v4 signatures" and intend to release that soon (along with better support for public repositories) in an upcoming maintenance release.

      Unfortunately it's not trivial, as the underlying cryptography library (Bouncy Castle) does not support it, so we have to reimplement signing -- and the good news is that it seems to work so far, and is much faster.

      Thanks,
      Alana

      posted in Support
    • RE: Need help to request all package vulnerabilities in ProGet 2024 version

      Hi @fabrice-mejean ,

      It's no longer possible to query this information from the database.

      As you've noticed, ProGet now uses a version range (i.e. AffectedVersions_Text ) to determine whether a package is vulnerable or not. So instead of 4.2.3 it's now 4.2.3-4.2.8 or [4.2.*) or something like that.

      Unfortunately it's not practical/feasible to parse this information unless you were to rewrite a substantial amount of ecosystem-specific parsing logic - this would be basically impossible to do in a SQL query.

      Instead, you'll need to use the upcoming pgutil packages metadata command to see what vulnerabilities a particular package has. You can also use Notifiers to address newly-discovered vulnerabilities.
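
      For illustration, a hypothetical invocation might look like the following -- the command is still upcoming, so the exact switches may differ; this just assumes it follows the same --feed/--package/--version options that other pgutil packages commands use, with made-up feed and package names:

      pgutil packages metadata --feed=approved-nuget --package=Newtonsoft.Json --version=13.0.1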

      Thanks,
      Alana

      posted in Support
    • RE: [ProGet] Connector Ordering / Precedence in ProGet

      Hi @koksime-yap_5909 ,

      Not necessarily - it really depends on the API query.

      If the query is like "give me a list of all versions of MyPackage", then ProGet will need to aggregate local packages and connector packages to produce that list.

      If the query is "give me the metadata for MyPackage-1.3.1", then the first source that returns a result is used.

      In practice, NuGet asks for "all versions" a lot. So you'll get a lot of queries.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet - Unsupported Header when Uploading to Pure Storage S3

      @james-woods_8996 we haven't released the extension yet, so it won't be in any builds of the product. We are waiting on someone to test/verify that it works

      posted in Support
    • RE: proget 500 Internal server error when pushing to a proget docker feed

      Hi @thomas_3037 ,

      The specific issue was already fixed; I'm going to close it because whatever you're experiencing is different, and the "history" in this long, winding thread will only make it harder to understand.

      Please "start from scratch" as if you never read this.

      Thanks,
      Alana

      posted in Support
    • RE: [ProGet] Connector Ordering / Precedence in ProGet

      Hi @koksime-yap_5909,

      They are ordered by the name - so if order is important to you, I guess you could always do the 10-nuget.org, 20-abc.net or whatever

      Thanks,
      Alana

      posted in Support
    • RE: Add support for Terraform Public Registry in ProGet (offline/air-gapped)

      Hi @Stholm ,

      I assume you saw the Terraform Modules documentation in ProGet?

      While updating the Support for Terraform Backends article to link to this discussion, I noticed we had some internal notes. So I'll transfer them here:

      This would require implementing both the Provider Registry Protocol (for first-party plugins) and the Provider Network Mirror Protocol (for connectors). Both seem relatively simple, though there appear to be some complexities involving signature files.

      In either case, we ought to not package these because they are quite large. For example, the hashicorp/aws provider for Windows is just a zip file with a single, 628MB .exe. They also have no metadata whatsoever that's returned from the API.

      One option is just to store these as manifest-less packages. For example, hashicorp/aws packages could be pkg:tfprovider/hashicorp@5.75.0?os=windows&arch=amd64. This would be two purls in one feed, which might not work, so it might require a new feed.

      Don't ask me what that all means, I'm just the copy/paster 😂

      But based on my read of that, it sounds like a big effort (i.e. a new feed type) to try to fit a round peg in a square hole. And honestly your homebuilt solution might work better.

      I think we'd need to see how much of a demand there is in the offline/air-gapped Terraform userbase for this. But feel free to add more thoughts as you have them.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet - Unsupported Header when Uploading to Pure Storage S3

      Hi @james-woods_8996 ,

      ProGet uses the AWS SDK for .NET, so I can't imagine environment variables would have any impact. I have no idea what those mean or do, but there's probably a way to configure those in the SDK.

      That said, another user is currently testing a change for Oracle Cloud Infrastructure, which seems to also be giving some kind of hash related error.

      Perhaps it'll work for you? AWS v3.1.4-RC.1 is published to our prerelease feed. You can download and manually install, or update to use the prerelease feed:

      https://docs.inedo.com/docs/proget/administration/extensions#manual-installation

      After installing the extension, the "Disable Payload Signing" option will show up on the Advanced tab - and that property will be forwarded to the PUT request. In theory that will work, at least according to the one post from above.

      One other thing to test would be uploading Assets (i.e. creating an Asset Directory) via the Web UI. That is the easiest way to do multi-part upload testing.

      If it doesn't work, then we can try to research how else to change the extension to get it working.

      Thanks,
      Alana

      posted in Support
    • RE: Deploy inedo agent with gMSA

      Hi @philippe-camelio_3885,

      Can you clarify what you've tried to date, and the issues you've faced?

      You can silently install the Inedo Agent, or even use a Manual Installation process if you'd prefer.

      Ultimately it's a standard Windows service, and you can change the account from LOCAL SYSTEM (which we recommend, by the way) to another account using sc.exe or other tools.
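
      For example, a minimal sketch with sc.exe, assuming a gMSA account and a placeholder service name (check the actual name with sc.exe query or services.msc first):

      rem hypothetical example -- replace INEDOAGENTSVC with the agent's actual service name
      sc.exe config INEDOAGENTSVC obj= "MYDOMAIN\my-gmsa-account$" password= ""
      sc.exe stop INEDOAGENTSVC
      sc.exe start INEDOAGENTSVC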

      Thanks,
      Alana

      posted in Support
    • RE: ProGet 2025.10: License Update API Issues

      Hi @jw ,

      Technically it's doable, though it's not trivial due to the number of places the change would need to be made and tested... ProGet API, pgutil, docs.

      The code/title change itself looks trivial (i.e. just pass in External_Id and Title_Text to the call to Licenses_UpdateLicenseData), though I'm not totally clear what to do about the other one. What does pgutil send in? Null? []? Etc.

      Since you're a free/community user, this isn't all that easy to prioritize... but if you could do the heavy lifting on the docs and pgutil (i.e. submit a PR), and give us a script or set of pgutil commands that we can just run against a local instance... I'm like 95% sure I can make the API change in 5 minutes.

      Thanks,
      Alana

      posted in Support
    • RE: Apply license key inside container

      Hi @jlarionov_2030 , easy change! We will not require a valid license for the Settings API endpoint going forward; PG-3133 will ship in the next maintenance release, scheduled for Friday.

      posted in Support
    • RE: Deletion of C:\Program Files\ProGet\Service\postgres\bin directory

      Hi @tobe-burce_3659 ,

      We do not support deleting or modifying any of the contents under the program directory; the files will simply return when you upgrade, and deleting them may cause other problems.

      Instead, please create an exception; we are aware of the vulnerabilities in libraries that PostgreSQL uses and can assure you that they are false positives and will have no impact on ProGet... even if you were to be using PostgreSQL.

      Using virus/malware tools to scan/monitor ProGet's operation causes lots of problems, as these tools interfere with file operations and cause big headaches.

      Thanks,
      Alana

      posted in Support
    • RE: Support for gpg in rpm feed

      Hi @Sigve-opedal_6476 ,

      No idea why it wouldn't work, but I would look at something like ProxyMan or Wireshark to capture HTTP traffic, and see what requests are different.

      You should see a pattern of requests that work, and a pattern that doesn't.

      Maybe the client is requesting some other file that you aren't uploading? I don't think there's an API or HTTP header... I think it's all basic GET requests. But that will tell you the delta.

      Thanks,
      Alana

      posted in Support
    • RE: pgutil doesn't support nuget lock files to generate sbom

      Hi @fabrice-mejean ,

      I definitely understand where you're coming from.... both commands basically work off the assets file, which is generated at build time.

      But your workflow is not common... the standard for SBOM generation is post-build. Doing pre-build checking requires that packages.lock.json is used, which not many projects do... it's hard for us to advocate this workflow when most users don't care about saving time in this stage.

      I know we could add a "switch" or something to pgutil, but we learned "the hard way" that adding lots of complex alternative/branching paths to pgscan made for code that was very difficult to maintain/understand, so we want to keep the utility as simple as possible.

      Thanks,
      Alana

      posted in Support
    • RE: Request for Creation of API for Package Auditing Before Dependency Restoration

      Hi @pmsensi,

      Correct -- it'll be whatever data is on the "Dependencies" tab in ProGet, which is basically whatever is in the manifest file (.nuspec, etc).

      Thanks,
      Alana

      posted in Support
    • RE: Request for Creation of API for Package Auditing Before Dependency Restoration

      Hi @fabrice-mejean @pmsensi ,

      We've got this spec'd out and on the roadmap now as PG-3126! It'll come through a maintenance release, along with pgutil security commands for configuring users, groups, and tasks.

      The target is 2025.13, which is planned for October 24. I don't know if we'll hit that target, but that's what we're aiming for.

      Please check out the specs on PG-3126; I think it captures what you're looking for, which is basically an expanded metadata object that includes compliance data, detected licenses, and vulnerabilities.

      Thanks,
      Alana

      posted in Support
    • RE: pgutil doesn't support nuget lock files to generate sbom

      Hi @fabrice-mejean,

      Using packages.lock.json seemed to make the most sense to us too, but ultimately we decided not to use it for a few reasons.

      First and foremost, none of the other .NET SBOM-generators seemed to use the packages.lock.json file. That's usually a sign that there's a "good reason" for us not to either.

      From our perspective, pgutil builds scan is intended to be used in a CI environment, where dotnet build is run anyway and the assets file is already present. We don't have a use-case for an alternative workflow, where a build is not actually run.

      In addition, packages.lock.json files are still pretty niche and not widely used. You have to "go out of your way" to use it, and <PackageReference ... /> is by far the most common approach. It might be worth monitoring Issue #658 at CycloneDX/cyclonedx-dotnet to see if anyone picks it up there.

      Technically it's not all that complex to do, but it adds complexity and confusion... especially since most users will not be familiar with the differences between the lock and asset file. So it's not a good fit for pgutil builds scan.

      HOWEVER, you could probably ask ChatGPT to write a trivial PowerShell script that "transforms" a lock file into a minimal SBOM document, and tweak it for what you want in ProGet. That same script could just upload the file to ProGet, or use pgutil as well.
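
      If it helps, here's a rough sketch of that idea using jq instead of PowerShell; it assumes the standard packages.lock.json layout and emits a minimal CycloneDX document (the exact fields you include are up to you, this is just an illustration):

      jq '{
        bomFormat: "CycloneDX",
        specVersion: "1.4",
        version: 1,
        components: [
          .dependencies[] | to_entries[]
          | select(.value.type != "Project")
          | { type: "library",
              name: .key,
              version: .value.resolved,
              purl: ("pkg:nuget/" + .key + "@" + .value.resolved) }
        ]
      }' packages.lock.json > bom.json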

      Thanks,
      Alana

      posted in Support
    • RE: pgutil brew package

      Hi @layfield_8963 ,

      Thanks, that makes sense -- since you've already got a personal repo going, I think it makes sense to stick with that for now. If other users are interested, we can explore it.

      We publish pgutil pretty regularly, so we'd need to automate updating that repository, and that's just one more script to write and one more thing to break later :)

      Thanks,
      Alana

      posted in Support
    • RE: Conan License detection issue

      Hi @it_9582 ,

      This is a known issue / UI quirk with Conan packages, and hopefully should only impact that one page in the UI.

      To be honest I don't quite get the issue, but it has something to do with the fact that a Conan package is actually a "set of packages that share a name and version". Each package in the set can define its own license file.

      The particular page was never really designed for "package sets" so the display is a little weird. It's a nontrivial effort to fix and would obviously impact all other package types, so it's not a priority at the moment.

      We would love to redo the UI at some point, so I think it'd make sense to do it then.

      Thanks,
      Alana

      posted in Support
    • RE: Support for gpg in rpm feed

      Hi @Sigve-opedal_6476 ,

      Thanks for clarifying; I should have mentioned that I know basically nothing about rpm except how a repository works 😅

      A gpgkey is just a file, right? I think you were on the right track with using an asset directory. I guess you would configure things like this, for an asset directory called gpg in the docker/gpg folder:

      gpgkey=https://myproget.mycompany.com/endpoints/gpg/content/docker/gpg
      

      There's really no "timestamp" on web-based files; the browser/protocol just does a GET and the bytes are returned. I can't imagine rpm is looking at a modified cache header.
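
      To get the key file into that asset directory in the first place, here's a hedged sketch using ProGet's asset API over curl (the key file name and API key are placeholders):

      curl -X PUT --data-binary @RPM-GPG-KEY-mycompany -H "X-ApiKey: your-api-key" https://myproget.mycompany.com/endpoints/gpg/content/docker/gpg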

      Thanks,
      Alana

      posted in Support
    • RE: Importing Conan Packages empty executions

      Hi @aristo_4359 ,

      I researched this a little further, and I'm afraid Conan Packages cannot be imported from Artifactory at this time; they do not behave like other repositories in Artifactory, which means it's a nontrivial effort to figure out how to get these imported using a different/alternative Artifactory API.

      We will clarify this in an upcoming ProGet release via PG-3122.

      Thanks,
      Alana

      posted in Support
    • RE: Support for gpg in rpm feed

      Hi @Sigve-opedal_6476,

      I'm not really sure what you mean by this request.

      An RPM repository is basically just a "dumb file server" with like repodata.xml and a bunch of index.tar.gz files. The rpm client downloads these files and does all the gpg stuff.

      ProGet feeds implement an RPM repository and generate these indexes on the fly... but to the client, it seems like it's just downloading static files.

      Thanks,
      Alana

      posted in Support
    • RE: pgutil brew package

      Hi @layfield_8963 ,

      About the only knowledge of "homebrew" I have is that it's some kind of thing on Mac, perhaps like apt or chocolatey. I think we'd normally be all about hosting a "cask" or "keg", but I don't think that's what you're asking 🍺😂

      It doesn't make sense for us to try supporting an ecosystem we know so little about. That's the same reason we never did a Chocolatey package on our own, but our friend @steviecoaster from Chocolatey created/maintains the proget package at chocolatey.org, and that has worked out just fine.

      Anyway if you just need "something simple" from us like accepting a (simple) pull request or editing a build/deploy script, we might be able to do that. But otherwise it won't make sense for us to invest in learning about brew and supporting it.

      Cheers,
      Alana

      posted in Support
    • RE: The ConnectionString property has not been initialized

      Hi @tyler_5201

      The underlying error is that there is no connection string, as you noticed.

      The connection string is stored in a file (/var/proget/database/.pgsqlconn) that should be accessible to the container. I haven't tested it, but if the file is missing or deleted, then I suppose you might run into these issues.

      It should be created on startup of a new container, however. So it's kind of weird. I think you'll want to "play" with it a bit, since there's clearly something going on with your permissions, I'm thinking.

      Note that the connection string can also be specified as an environment variable, but I don't think that applies here since you're trying to configure the embedded database:
      https://docs.inedo.com/docs/installation/linux/docker-guide#supported-environment-variables

      Thanks,
      Alana

      posted in Support
    • RE: npm connector returns 400

      @udi-moshe_0021 sounds like it was a temporary outage on npmjs.org or perhaps even your proxy server. I wouldn't worry about it if it's working now since it's not something you could really control anyway

      posted in Support
    • RE: Getting 500, "Could not find stored procedure 'Security_GetRoles', but /health show no errors

      Hi @carl-westman_8110 ,

      The error message means that the database wasn't updated as per normal during the start-up process. It's hard to guess why, as we have special handling for that.

      It's likely that restarting the service would have fixed it, but downgrading and then upgrading would also force an upgrade. Unfortunately it's hard to say at this point.

      Upgrading to 2025.10 should be fine.

      Thanks,
      Alana

      posted in Support
    • RE: Packages with Noncanonical Names errors on internalized packages

      Hi @jfullmer_7346,

      It's nothing you did.

      The underlying issue is that a bug in ProGet allowed WinSCP (ID=563) to be added to the PackageNameIds table; that should never have happened since NuGet is case-insensitive. We've since fixed that bug.

      However, once you have a duplicate name, weird things happen since querying for "give me the PackageID for nuget://winscp" returns two results instead of one. So now when you query "give me the VersionID for (ID=563)-v6.5.3", a new entry is created.

      This has been a very long-standing issue, but there aren't any major consequences to these "weird things" except casing in the UI and the health check now failing.

      But we're on it :)

      Thanks,
      Alana

      posted in Support
    • RE: Packages with Noncanonical Names errors on internalized packages

      Hi @jfullmer_7346 ,

      I haven't had a chance to look into more details, but thanks for providing the results of the query.

      FYI - the PackageNameIds and PackageVersionIds are designed as a kind of "permanent, read-only record" -- once added they are not deleted or modified, even if all packages are deleted (i.e. FeedPackageVersions). This is why the "duplicate name" is such a headache to deal with.

      That said, on a quick glance, we can see exactly where the error is coming from: there are duplicate versions (i.e. (ID=563)-v6.5.3 and (ID=562)-v6.5.3). So, when we try to deduplicate (ID=563) and (ID=562) (i.e. winscp and WinSCP), we get the error as expected.

      What's not expected is that those versions were not de-duplicated in the earlier pass. My guess is that it's related to winscp being in one feed and WinSCP being in the other -- we tried to be conservative, and keep the de-duplication to packages related to the feed.

      I'm thinking we just change that logic to "all packages of the feed type". Anyway, please stay tuned. We'll try to get it in the next maintenance release.

      Thanks,
      Alana

      posted in Support
    • RE: Packages with Noncanonical Names errors on internalized packages

      @jfullmer_7346 thanks for giving it a shot, we'll take a closer look!

      The "good news" is that the error message is a "sanity check" failure, so now have an idea of what's causing the error:

          -- Sanity Check (ensure there are no duplicate versions)
          IF EXISTS (
              SELECT * 
               FROM "PackageVersionIds" PV_D,
                    "PackageVersionIds" PV_C
              WHERE PV_D."PackageName_Id" = "@Duplicate_PackageName_Id"
                AND PV_C."PackageName_Id" = "@Canonical_PackageName_Id"
                AND PV_D."Package_Version" = PV_C."Package_Version"
                AND (   (PV_D."Qualifier_Text" IS NULL AND PV_C."Qualifier_Text" IS NULL)
                     OR (PV_D."Qualifier_Text" = PV_C."Qualifier_Text") )
             ) THEN RAISE EXCEPTION 'Cannot deduplicate given nameid'; RETURN; END IF;
      

      In this case, it's saying that there are "duplicate versions" remaining (i.e. WinSCP-1.0.0 and winscp-1.0.0). Those should have been de-duplicated earlier. I wonder if the PackageVersionIds_GetDuplicates() function is not returning the right results.

      I'm not sure what your experience w/ PostgreSQL is, but are you able to query the embedded database? If not, that's fine... it's not meant to be easy to query.

      Also, should the integrity check be taking 30 minutes?

      Maybe. The integrity check needs to verify file hashes, so that involves opening and streaming through all the files on disk. So when you have a lot of large packages, then it's gonna take a while.

      posted in Support
    • RE: Unable to download io.r2dbc:r2dbc-bom:pom:Borca-SR2 from ProGet feed.

      Hi @bohdan-cech_2403,

      ProGet can handle most invalid Maven version numbers, but Borca-SR2 is really invalid and isn't currently supported. Our "name vs version" parsing requires that versions start with numbers, artifacts start with letters. This has been a Maven rule for 20+ years now.

      It's nontrivial and quite risky to change our parsing logic, so it's not something we're keen on doing in a maintenance release. This scenario seems to be very rare and impacts ancient artifacts and a few cases where authors didn't use Maven to deploy the artifacts.

      Thanks,
      Alana

      posted in Support
    • RE: maven Checksum validation failed, no checksums available

      @uli_2533 thanks for the additional info, great find in the source code too!!

      On our end, I was looking at the PUT code, where a 301 kind of made sense. I think it must have been some kind of regression on the GET request? Not sure why it didn't get noticed before, but it's a trivial fix.

      PG-3108 will be in the next maintenance release (Sep 19)... or if you want to try it now, it's in inedo/proget:25.0.10-ci.9 Docker image.

      Thanks,
      Alana

      posted in Support