    Inedo Community Forums

    Posts made by atripp

    • RE: Layer Scanning is not working with images which is pushed with --compression-format zstd:chunked

      Hi @geraldizo_0690 ,

      Nice find with the busybox image... that makes it a lot easier to test/debug on our end!!

      We already have a ZST library in ProGet, so in theory it shouldn't be that difficult to use it for layers like this. We'll add that via PG-3218 in an upcoming maintenance release -- currently targeting February 20.
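
      In case it helps with testing on your end too, here's a rough repro sketch (assuming podman; the registry host and feed name are placeholders, so substitute your own):

      # pull a small image and re-push it to ProGet with zstd:chunked layer compression
      podman pull docker.io/library/busybox:latest
      podman tag docker.io/library/busybox:latest proget.example.com/test-containers/busybox:latest
      podman push --compression-format zstd:chunked proget.example.com/test-containers/busybox:latest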

      Thanks,
      Alana

      posted in Support
    • RE: Layer Scanning is not working with images which is pushed with --compression-format zstd:chunked

      Hi @geraldizo_0690 ,

      Are you seeing any errors/messages logged like "Blob xxxxxxx is not a .tar.gz file; nothing to scan."? If you go to Admin > Executions, you may see some historic logs about Container scanning.

      Thanks,
      Alana

      posted in Support
    • RE: Zabbix rpm feed not working correctly

      Hi @Sigve-opedal_6476 ,

      Could you give some tips/guidance on how to repro the error? Ideally, it's something we can reproduce using only ProGet :)

      It's probably some quirk in how they implement things, but I wanted to make sure we're looking at the right things before starting.

      Thanks,
      Alana

      posted in Support
    • RE: Using curl to either check or download a script file in Otter

      Hi @scusson_9923 ,

      That is an internal/web-only API URL, so it wouldn't behave quite right outside a web browser.

      I can't think of an easy way to accomplish what you're looking to do.... if you could share some of the bigger picture, maybe we can come up with a different approach / idea that would be easier to accomplish.

      Thanks,
      Alana

      posted in Support
    • RE: InitContainers never start with Azure Sql on ProGet 25.0.18

      Hi @certificatemanager_4002 ,

      I'm sorry but I'm not familiar enough with Kubernetes to help troubleshoot this issue.

      All that I recognize here is the upgradedb command, which is documented here:
      https://docs.inedo.com/docs/installation/linux/installation-upgrading-docker-containers#upgrading-the-database-only-optional

      If you run that command from the command line (on either Linux or Windows), things will be written to the console. I wish I could tell you why you aren't seeing the messages.
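
      As a rough sketch of what that looks like with the official image (this is an assumption-heavy example: substitute your actual image tag, and pass the same database connection settings your ProGet container uses, e.g. via an env file):

      # run the same image, but with "upgradedb" as the command; output is written straight to the console
      docker run --rm --env-file ./proget.env inedo/proget:25.0.18 upgradedb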

      Thanks,
      Alana

      posted in Support
    • RE: Proget apt snapshot support?

      Hi @phil-sutherland_3118 ,

      This is not on a roadmap. Honestly, we don't really understand what a "snapshot" repository is or how they are used.

      We surveyed some customers about it a while ago, and this summarizes what they said: repository snapshots are archaic; they made sense a long time ago, but Docker changed all that. It's so much simpler to use container images like FROM debian:buster-20230919. That's effectively our snapshot, and when we need to maintain old releases (which happens more often than I'd like), we just rebuild the image from that. The other big advantage is that build time is easily 10x faster, if not more.

      And then we saw that Debian also maintains its own snapshots (https://snapshot.debian.org/), so we don't quite get how they are used outside of a handful of use cases (like a build process for a specialized appliance OS without Docker).

      Anyway we're open to considering it.... but only two people (including you) have asked in the past several years, so there's no real interest... and we're not sure what they even do :)

      That said, it's possible there's a way to accomplish something that has the same outcomes. For example:

      • create a public aggregate feed (jammy-all) with multiple connectors to Debian, Ubuntu, NGINX, Elasticsearch, etc.
      • create a release feed (jammy-20231101) that snapshots jammy-all

      But we don't know enough to answer that :)

      Thanks,
      Alana

      posted in Support
    • RE: Http Logs enabled on only one server

      Hi @parthu-reddy ,

      I'm not sure if there's a relation here, but perhaps. The "running out of disk space" is not surprising if you're indexing mega-repositories like the public Debian repos. They are gigabytes in size. Here's some more info about those:
      https://blog.inedo.com/inedo/proget-2025-14-major-updates-to-debian-feeds

      You definitely want to switch to Indexing Jobs when you connect to public repos.

      This can be set at the operating-system level (it's the %ProgramData% special folder) or in ProGet under Admin > Advanced Settings > LocalStorage.

      Anyway, this is something best brought up as a separate topic if you have follow-ups (if you don't mind); I'd hate to pollute this thread with Debian/indexing questions :)

      Cheers,
      Alana

      posted in Support
    • RE: Error importing from Artifactory: The SSL connection could not be established, see inner exception.

      Hi @michael-day_7391 ,

      I can't really comment on what you're seeing in the Artifactory logs (i.e. [1] and [2]), but when an Access Token is specified, that token is sent on requests via a Bearer authorization header (unless Use Legacy API Header is selected). Otherwise, the Username/Password are sent via a Basic header. This happens on each and every request, regardless of whether it's a file download, api call, etc.
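
      If it helps to see what that looks like on the wire, here's roughly the equivalent in curl (the Artifactory URL is just a placeholder):

      # Access Token -> sent as a Bearer header (unless "Use Legacy API Header" is selected)
      curl -H "Authorization: Bearer $ARTIFACTORY_TOKEN" "https://artifactory.example.com/artifactory/api/repositories"
      # Username/Password -> sent as a Basic header
      curl -u "myuser:mypassword" "https://artifactory.example.com/artifactory/api/repositories"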

      Probably just easier to disable authentication during the import if this keeps coming up.

      OCI Registries (i.e. what you're using for your Helm charts, as opposed to a regular Helm registry) are not supported, so you'd need to export those files and use disk-based import or something like that.

      Cheers,
      Alana

      posted in Support
    • RE: Deleting and creating a signing key for a Debian Feed doesn't give a success feedback, also still signature v3 is used?

      Hi @frei_zs,

      ProGet 2025.12 does not support the PGP v3 format, and there's no way you can get it working. So, you'll need to upgrade to the latest version, which does support the format.

      Here's some more information on the changes:
      https://blog.inedo.com/inedo/proget-2025-14-major-updates-to-debian-feeds

      Cheers,
      Alana

      posted in Support
    • RE: Feed Group and Feed

      Hi @mikael ,

      We plan to add this support via PG-3213 in an upcoming maintenance release -- perhaps Feb 20 if all goes well!

      Cheers,
      Alana

      posted in Support
    • RE: Ability to show usage of (e.g. script) assets

      Hi @jonathan-simmonds_0798,

      Thanks for the suggestion! This has been a long-standing wish-list item, but it's deceptively complicated.

      The "current idea" is a feature called "raft analysis" that will create a list of all raft items that depend on other raft items. For example, a pipeline that references a script. Or, a script that calls a module, and so on. It could also detect warnings/errors and report them.

      Creating this list often involves opening thousands of files and "parsing" them, which is not a trivial operation... but we're only talking "a few minutes" in most cases. However, the main challenges arise with invalidating this list (many edits will cause that to happen), and then communicating the status of the rebuild to users.

      I'll add a note to our BuildMaster 2026 roadmap though and see if we can explore it again; the current focus is boring..... modernization (PostgreSQL).

      That said, you probably noticed it, but.... you should be able to see if a particular pipeline has an error (like a missing script) on the pipeline overview page. It's not as nice, though.

      Thanks,
      Alana

      posted in Support
    • RE: Deleting Debian feed and connectors didn't delete local index files

      Hi @parthu-reddy ,

      At this time, we don't have a disk cleanup procedure for local storage like this; we may add it in the future, but for the time being you can just delete them. The LocalStorage folder is ephemeral -- not quite "temp" storage, but the contents can be deleted. They will just be recreated next time it's needed.

      Thanks,
      Alana

      posted in Support
    • RE: Error importing from Artifactory: The SSL connection could not be established, see inner exception.

      Hi @michael-day_7391,

      ProGet is not designed to provide many details on network- or OS-level errors; that's where tools like Invoke-WebRequest come in. And it sounds like you've already discovered the root cause (failed certificate revocation check) that way.

      Anyway... when hosting ProGet on Windows, the Windows network stack will be used. So, if Windows is refusing to connect for whatever reason, then ProGet will also not connect. There's unfortunately no way around this, and we do not allow bypassing of SSL in ProGet.

      The good news is, once you get Invoke-WebRequest working, then you'll be able to connect. There's probably some magical registry setting out there that will help :)

      Cheers,
      Alana

      posted in Support
    • RE: Nuget connector stuck in failed state ("'0x00' is an invalid start of a property name")

      Hi @mayorovp_3701,

      > Actually zero byte in position 1 looks like attempt to read UTF16-LE-encoded json as UTF8

      Oh that's a great observation! Yeah that sounds like a reasonable explanation. But still... how could that even be possible?

      It's not like ProGet is going to randomly swap an encoding like that, and it's not like NuGet is going to store .json files incorrectly.

      As for experimentation, next time it happens:

      • remove connectors from the feed one at a time, to see which one is causing the problem
      • navigate to the JSON endpoints of the connector in question, to see if you can spot the bad JSON (see the sketch after this list)
      • try to identify a pattern of behavior that causes this
      • watch for HTTP access logs to see if you can find the exact URL that's being accessed at the time of the connector failure (assuming it's a self-connector)
      • be prepared to attach a MITM proxy to ProGet (Admin > Proxy)
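
      For the JSON-endpoint check, something like this will make a stray 0x00 or a UTF-16 BOM obvious (assuming curl and xxd are available; replace the URL with the service index of the connector you're testing):

      # dump the first bytes of the connector's service index; look for 00 bytes or an FF FE / FE FF byte-order mark
      curl -s https://api.nuget.org/v3/index.json | xxd | head -n 5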

      Thanks,
      Alana

      posted in Support
    • RE: Nuget connector stuck in failed state ("'0x00' is an invalid start of a property name")

      Hi @mayorovp_3701,

      That's a really strange error; it's basically saying that, somehow, a 0x0 character found its way into some JSON returned by the API. This character is invisible, and you'd need to use a kind of hex editor or developer tool to even see it.

      I guess, in theory it could be inserted by some intermediate device (firewall, gateway, etc), but who knows at this point. I can't imagine how that could happen on either NuGet or ProGet, but that's the first place to start looking.

      I suspect the server restart is unrelated; that certainly wouldn't cause a random 0x0 unless there's something really broken with the computer.

      From here, you'll want to keep isolating the issue, and try to figure out which connector is "bad":

      • If it's NuGet.org -- the issue is most certainly a network/gateway that's doing that.
      • If it's ProGet -- it's likely some strange bug, where 0x0 got inserted into the database for a connector or feed or something. We saw that during some migrations, but it's really hard to track down.

      I would just keep experimenting. If it's related to a reboot, just stop/start the service. That should be the same.

      Thanks,
      Alana

      posted in Support
    • RE: Error importing from Artifactory: The SSL connection could not be established, see inner exception.

      Hi @michael-day_7391,

      That's just a generic SSL error, which, as you may know, is happening at the operating-system level. That quick-connect screen won't provide details. There may be a redirect happening, but it's hard to say.

      It's odd that it works from the web browser but not from ProGet, though that's not uncommon. If you use Invoke-WebRequest, that should reproduce the error. If not, then stop the service and run ProGet manually (proget.exe run) so it's running under the same user/account.

      You should also be able to get a stack trace by adding a connector; that would be logged as a connector error.

      Thanks,
      Alana

      posted in Support
    • RE: Multiple deployment targets on same server

      Hi @koe ,

      This is definitely a problem that you can solve with BuildMaster, but before giving any kind of technical guidance, I'd like to understand the business processes.

      On first glance, this sounds like one of two scenarios:

      • Quasi-custom Software, where you create a customized build of a software application (perhaps bundled with their plugins, etc)
      • User-driven Deployments, where you maintain a single application but deploy a new version of that application based on user requirements (new feature they requested, bug fix, etc)

      Are either of those close?

      Whatever the case, can you describe the decision-making process or rationale that goes into "deploy a software release to either all production systems, all test systems or just a single one out of all these systems?"

      Are there different types of releases (e.g. a "patch" release of an old version)? Or is everyone "forward only, latest version"?

      BuildMaster is, of course, an automation platform - but more importantly, it's about modeling process and visualization. And when it comes to process, consistency is key - even when there are variations.

      We don't believe a decision like above is "arbitrary, and based on the whims of an application director", but there's probably some rationale that goes into it. So, with BuildMaster, our goal is to help get everyone on the same page about which process to follow for different releases.

      Anyway, how you model this will have a big impact down the line.

      Cheers,

      Alana

      posted in Support
    • RE: [Feature] Scope SCA permissions to Project or "Project Group"/Assign Project to Feed Group

      Hi @Nils-Nilsson ,

      Good news - this is actually on our ProGet 2026 roadmap.

      The general idea is to "reuse" Feed Groups -- I guess we'd call them "Feed & Project Groups" or something? Anyway, the projects would be grouped in the UI similarly, and you could scope project-based permissions to a group.

      We will try to get it in as a preview feature in the coming weeks, assuming it can be done with low risk. It seems like that would be the case.

      Cheers,
      Alana

      posted in Support
    • RE: Feed Group and Feed

      Hi @mikael ,

      Oh sorry, we decided not to refactor/rewrite the API after all -- and I guess we threw out all the "ideas" attached to that initiative as well. This was on our roadmap for several years, so we didn't realize there was a customer-facing request attached to it, which is how we forgot.

      Anyway, I've moved this back as a feature request, and we'll look to add this to the existing API. It probably won't be that bad! Please stay tuned; hopefully we'll evaluate it within the next couple of weeks.

      Cheers,
      Alana

      posted in Support
    • RE: Local index file update failure: The remote server returned an error: (403) Forbidden.

      hi @michael-day_7391 ,

      The correct connector URL would be:

      https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.31/rpm/
      

      I added that connector and could see/browse/download packages in the repository.
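
      If you'd like to sanity-check the URL from your side first, the repository metadata should be reachable directly (rpm repositories publish repodata/repomd.xml at the root; this is just a quick check, not required):

      curl -I "https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.31/rpm/repodata/repomd.xml"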

      Thanks,
      Alana

      posted in Support
    • RE: AD integration not working in ProGet 2025.18

      Hi @michael-day_7391 ,

      I guess not? I've never heard of StartTLS, and no one else seems to be asking for it -- so it's probably not worth investigating. I guess LDAPS is what's popular, so it's probably easier to just go that route.

      Thanks,
      Alana

      posted in Support
    • RE: An error duing cargo build

      Hi @caspacokku_2900 ,

      Yeah, it's pretty weird. The errors are all over the place - not in a specific database query or anything like that. This also doesn't look like the signs of server overload that we've seen.

      It's as if internal network connectivity is somehow breaking within a container? Or there's "something else" wrong with the internal PostgreSQL server? These are all "deep system" level errors in basically operating-system level code (drivers, etc).

      We've seen really weird errors like this with one other user, but have no idea how to reproduce them. Maybe it's an "error reporting an error".

      As for the feed... there's nothing special about cargo vs other feeds from an API/usage standpoint. If anything, npm hammers the server a lot harder with its 1000+ package restores. And this has nothing to do with connectors.

      Unfortunately we don't have a lot to go on:

      • could you try increasing the hardware?
      • could you try a different physical server?
      • could it somehow be the underlying operating system?
      • are there any patterns as to when this is happening (lots of traffic, etc.)?

      Any clues or consistency would help.

      Thanks,
      Alana

      posted in Support
    • RE: Note on the instructions for downloading packages from Debian Feed

      Hi @geraldizo_0690 ,

      Thanks for the report! Sometimes bug fixes are a single character like this ...
      [screenshot of the one-character fix]

      It'll be in the next maintenance release via PG-3205 :)

      Cheers,
      Alana

      posted in Support
    • RE: An error duing cargo build

      Hi @caspacokku_2900 ,

      I'm afraid these errors aren't related to Cargo, but are indicating some kind of system/network error. In some cases, there's an unexpected/broken network configuration between the ProGet application and the underlying database (PostgreSQL).

      In other cases, it appears to be related to connections to cargo's public repository.

      So bottom line, this is an environment-specific issue. Can you tell us a bit about your configuration? How about a Paste of Admin > System Information?

      Thanks,
      Alana

      posted in Support
    • RE: An error occurred in the web application: Property set method not found.

      Hi @michael-day_7391 ,

      These days, it's considered a "UX best practice" to provide minimal options in the installer, because it means fewer choices to make when you're unfamiliar with the tool. Instead, programs should allow these to be configured post-installation. Most modern tools have shifted to this practice, including our products.

      As for the folder, the ProgramData folder is indeed the standard/recommended practice on Windows. If you're still configuring Windows using multiple drives/partitions (less common these days w/SSD), you should definitely change the User/ProgramData directories during provisioning:

      https://learn.microsoft.com/en-us/troubleshoot/windows-server/user-profiles-and-logon/relocation-of-users-and-programdata-directories

      Many Windows programs (including ProGet) will use these to store application data, and it can balloon to gigabytes if you're not careful.

      Thanks,
      Alana

      posted in Support
    • RE: AD integration not working in ProGet 2025.18

      Hi @michael-day_7391 ,

      There is an option to "Use LDAPS", so I would make sure to select that.

      Thanks,
      Alana

      posted in Support
    • RE: An error occurred in the web application: Property set method not found.

      Hi @michael-day_7391 ,

      This folder is for ephemeral, temporary storage. The only time it will require a lot of space (hundreds of MB) is if you're doing things like proxying public Debian repositories.

      So maybe you're trying to change the wrong folder?

      Perhaps you are looking for the PackageRoot instead?

      Thanks,
      Alana

      posted in Support
    • RE: Proget: Move data to another folder

      Thanks @certificatemanager_4002 -- that will definitely work if you're using inedodb (i.e. setting up a cluster), though for a single-server instance, inedodb isn't recommended.

      @Sigve-opedal_6476 I'm not super-experienced at Linux myself.... but if the container is stopped, then you should be able to just move/copy the files. The container must be able to read the files -- I do know there's some kind of permissions/user error when things aren't set right.
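
      As a very rough sketch (the container name, paths, and ownership below are all assumptions -- match them to your actual setup and to whatever user/UID the container runs as):

      # stop the container, copy the data preserving permissions/timestamps, then point the bind-mount/volume at the new path
      docker stop proget
      cp -a /old/path/proget-data /new/path/proget-data
      # if you get permission errors on restart, make sure the new files are readable/writable by the container's user
      chown -R <container-uid>:<container-gid> /new/path/proget-data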

      I'd also share the database error you see when you restart.

      posted in Support
    • RE: Migration from SQLServer to PostGres

      Hi @certificatemanager_4002 ,

      We do not recommend using a multi-server / clustered installation for the database server; for an application with a profile like ProGet, it's significantly slower, less reliable, and ironically, has lower availability.

      Instead, in the unlikely event that your physical database server "goes bad", I would just do routine backups and be prepared to "spin up" a new server from backup ASAP.

      Thanks,
      Alana

      posted in Support
    • RE: An error occurred in the web application: Property set method not found.

      Hi @michael-day_7391 ,

      It looks like this was an oversight, and this value cannot be configured. However, we will fix that via PG-3203 in the upcoming maintenance release on Friday.

      Thanks,
      Alana

      posted in Support
    • RE: Add support for Terraform Public Registry in ProGet (offline/air-gapped)

      Thanks @davidroberts63

      I looked into this a little more, so just as an FYI...

      • Terraform Modules are relatively simple templates for code/configuration that you write and maintain (think of it like a Helm chart)
      • Terraform Providers are executable files invoked by Modules, and many of them simply embed and wrap a CLI tool; the most popular providers are AWS, Azure, GCP, but there are many niche ones by other vendors

      No one really writes their own Terraform Providers, and it's highly unlikely you would ever do that. They should be thought of as SDK/CLIs you downloaded from a marketplace... except they run as admin/root and have your most "sacred" credentials.

      So just to clarify... you can currently host Terraform Modules in ProGet and you can also most certainly host Terraform Providers in ProGet (just using an Asset Directory). The question comes down to user experience and convenience.

      The issue here is that Terraform Providers do not fit into ProGet's "package" or "connector" mindset; so it's not just a new feed type, but creating a whole new feature in ProGet that won't really work like other feeds.

      We were able to "force" Terraform Modules into Packages/Feeds... but as you can see, it requires a lot of hoops to jump through, since Terraform is a "terraform.io-first" tool. However, many users create their own Modules, so it adds a lot of value.

      The same isn't true of Providers. From a "proxying" standpoint, high-control organizations would likely not want to "proxy" the Terraform Registry for Providers -- it's way too much risk, considering anyone can just upload whatever they'd like.

      Instead, they'd likely have a review process for adding and upgrading providers. And once that's in place.... how much time are we saving by using a specialized feed type over an asset directory?

      I don't have the answer to that --- but that's what we're considering :)

      Thanks,
      Alana

      posted in Support
    • RE: AD integration not working in ProGet 2025.18

      Hi @michael-day_7391 ,

      Unfortunately AD/LDAP issues can be pretty challenging to troubleshoot and debug. I can give you a few general tips, but this is one of those things where there are no useful logs -- it's like trying to diagnose why you're getting a timeout doing an HTTP request. The real issue is somewhere down the line.

      Assuming you're able to connect to the LDAP Server (Domain Controller in this case, it sounds like), the most common issue is permissions. This can get really painful, because it can be incredibly granular - an account can be allowed to enumerate groups, but not bind to specific users (i.e. do a login). Other times, it's related to multi-domain / complex forests, and things like misconfigured trusts.

      For security reasons, the AD/LDAP server never really tells the client what's wrong -- that's why you won't see anything useful in ProGet. You have to look at logs on the server to find out what the exact issue is.

      If those aren't easily accessible, my advice is to keep "playing around" and perhaps try the LDAP/OpenLDAP directory, which is basically just "raw" LDAP queries. Or try V5 vs V4, etc.
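
      One way to do that outside of ProGet is to run the same kind of queries with ldapsearch; here's a hedged sketch (the host, bind account, base DN, and username are all placeholders):

      # can the service account bind, and can it see the user and their group memberships?
      ldapsearch -H ldaps://dc01.corp.example.com -D "svc-proget@corp.example.com" -W \
        -b "DC=corp,DC=example,DC=com" "(sAMAccountName=jdoe)" memberOf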

      Here is the source code, if you're curious to see what's going on behind the scenes:
      https://github.com/Inedo/inedox-inedocore/tree/master/InedoCore/InedoExtension/UserDirectories/ActiveDirectory

      Again, the server logs (on the LDAP/AD server, not the ProGet server) are going to be the best place to look for queries and issues.

      Hope that helps,

      Alana

      posted in Support
    • RE: The ConnectionString property has not been initialized

      Hi @tyler_5201 ,

      Sorry I really don't know what "squashing" or "101:0" permissions mean, so I don't really know what the issue is.... but I did read "this does work" and then saw you asked a question which I didn't quite understand 😅

      What I can say is that ProGet needs to have "full control" over the database, package, backup directories. That's not something we could reasonably change... and I don't know if you're even asking that.

      Otherwise, we also have a pretty basic Docker image configuration (Dockerfile), and obviously making changes comes with risks. Before considering those changes, we would need to really understand what kind of value/benefit comes out of this and what kind of changes are involved here.

      Thanks,
      Alana

      posted in Support
    • RE: Add support for Terraform Public Registry in ProGet (offline/air-gapped)

      Hi @mikael ,

      Thanks for the additional insight.

      The use case you describe (offline/air-gapped usage of Terraform) does seem rather niche, and is different than the traditional "proxying" use case we describe. Very few organizations will restrict internet access like that, and Proxying is more about controlling versions and limiting what developers are able to use.

      Anyway, it may not be a good fit for investment on our end. But we'll see if anyone else joins this thread :)

      That said, a Provider is basically just an executable file in a zip file. I wonder if you could simply use Asset Directories somehow.
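
      For example, something like this might be all it takes (a hedged sketch -- the asset directory name and paths are made up, and you'd still need to point Terraform at the downloaded file yourself):

      # upload the provider zip to an asset directory, then fetch it from the same URL on the offline side
      curl -H "X-ApiKey: $PROGET_API_KEY" -T terraform-provider-aws_5.31.0_linux_amd64.zip \
        "https://proget.example.com/endpoints/terraform-providers/content/aws/5.31.0/terraform-provider-aws_5.31.0_linux_amd64.zip"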

      Thanks,
      Alana

      posted in Support
    • RE: ProGet - Delete API for Builds

      Hi @jw ,

      We added "auto-publish Inedo.ProGet" to our internal tracking list -- and in theory it will be done in the next week or so. I think we meant to do that earlier but it just fell off the list.

      Thanks for pointing that out - please don't hesitate to bug us if you don't see it published next time :)

      Thanks,
      Alana

      posted in Support
    • RE: Maven Metadata Checksum Warnings

      Hi @wechselberg-nisboerge_3629,

      In ProGet, the maven metadata files (xml, hash) are indeed generated upon request. The output is deterministic, based on the artifacts in storage and (if relevant) in the remote repository (i.e. connectors). So, if you're seeing it changed, it's because an artifact was uploaded/etc.

      One thing to note -- you cannot upload a metadata file or hash file. Well, you can try (and maven tries) to PUT the file, but the stream is always ignored or "written to /dev/null" as they say.

      We've seen some maven workflows/plugins that attempt to modify/append to this metadata file and re-upload it with changes.

      Thanks,
      Alana

      posted in Support
    • RE: File download with wget only works with auth-no-challenge argument

      Hi @it_9582 ,

      Thanks for the additional information.

      The reason this is happening is because these endpoints do not return a WWW-Authenticate: Basic ... header value when responding with 401.

      This behavior is intentional, as our preferred authentication is using an API Key header value, not Basic credentials. Basic is an alternative option and can be used when you can't easily pass a header.

      We are not willing/able to change this behavior, as it would require changing code on a substantial number of endpoints and may break user integrations that have been relying on existing behavior.

      So I'm afraid this means you'll need to do one of the following (concrete examples after the list):

      • add the --auth-no-challenge option
      • use wget --header="X-ApiKey: <api-key>" "<Download path>" instead
      • use curl instead
      • use pgutil instead
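
      For reference, the first two options look something like this (the feed/asset path is a placeholder -- use your actual download URL):

      # send Basic credentials preemptively, without waiting for a 401 challenge
      wget --auth-no-challenge --user=myuser --password=mypassword "https://proget.example.com/endpoints/my-assets/content/tools/setup.zip"
      # or authenticate with an API key header instead
      wget --header="X-ApiKey: <api-key>" "https://proget.example.com/endpoints/my-assets/content/tools/setup.zip"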

      Cheers,
      Alana

      posted in Support
    • RE: Searching packages with symbol like @ and / will return empty

      Hi @aristo_4359 ,

      I assume you are referring to npm packages? And that you are using the /packages page and not the Feed page?

      This behavior seems a bit quirky, but expected. The /packages page does not have any special logic for handling package-specific types - and the "search" is more like a "filter". In npm, the @ symbol denotes a group (namespace), and the / symbol is the separator between a group and name - and this is why you get this behavior.

      Hope that explains the behavior a bit! It's not ideal, but it's a limitation that exists for performance reasons, etc.

      Cheers,
      Alana

      posted in Support
    • RE: File download with wget only works with auth-no-challenge argument

      Hi @it_9582 ,

      Unfortunately I'm not really sure what your script is doing or how to fix it... but I will describe the server (ProGet) behavior.

      Unless you allow Anonymous access on the endpoint, ProGet will respond with a 401 when you access a URL without any authentication information (API Key header, Basic credentials). That's what the message you are sharing appears to show.

      So if you're getting that message, then I guess the username/password isn't being sent? I really don't know what --auth-no-challenge means or does.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet 2025.14 (Build 12) - PostgreSQL Error when uploading

      Hi @it_9582,

      I'm afraid this requires a code change and an external database will have no impact.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet configuration as code (IaC)?

      Hi @mikael ,

      We have no plans for this and honestly, I wouldn't recommend setting up a tool like ProGet in this manner.

      Outside of some very specialized use cases (like setting up labs for testing, or nodes in a ProGet Enterprise Edge Computing Edition) there are no benefits. Only headaches.

      It might sound fine on paper, but every company that has set it up this way has regretted it. And you will too. The reasons they want "fully reproducible configuration" are usually:

      • so we can store configuration in versioned code
      • so we can easily replicate it in a testing environment
      • so we can easily migrate/move to a new server

      Those seem nice, but they totally fail in practice.

      First, you can't "rollback" most configuration. Say you fat-finger a configuration file and delete half your feeds. There go all your packages. And when you realize you've got gigabytes/terabytes of content to deal with, plus all the metadata in storage, this is a huge headache.

      The configuration you can make idempotent (say, permissions/users) is so much more of a pain to work with than a UI. It's also more error-prone: you lose all the benefits of visual cues, input verification, etc. Fat-finger the wrong setting, and you get some obscure error instead of a helpful red box next to the text box.

      The regret comes in realizing they've created a buggier environment that isn't properly tested, and is somehow less "portable" than an ordinary installation. A year later, when the new team comes in, they usually have to figure out how to "undo" it -- and you can probably guess why we need to get involved to untangle the mess.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet 2025.14 (Build 12) - PostgreSQL Error when uploading

      Hi @it_9582,

      It's certainly possible :)

      However, given the risk associated with the change, it could only happen in a Major Release. This would require editing a lot of code and trying to track down everywhere we might have trimmed/restricted to 200 characters.

      I can add this to our roadmap for consideration, but note that ProGet 2026 hasn't been targeted for a date yet, let alone had its features decided.

      I just want to be realistic about the timeline - let us know if you'd like to consider it. It hasn't come up in the many years this feature has existed, so we're not even totally sure if we'll do it (if it's too much code / too much risk / too close to the deadline / etc).

      Thanks,
      Alana

      posted in Support
    • RE: The hostname could not be parsed

      Hi @Julian-huebner_9077,

      This error is occurring while ProGet is trying to generate the "base url". There are a few inputs that go into this:

      • Admin > Advanced Settings > Base URL
      • X-Forwarded headers, set by a reverse proxy like nginx

      If any of those have an invalid host name (which is what the error indicates), then you'll get this error. In most cases, it's a typo in the X-Forwarded headers.
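
      If you want to narrow down which input is the problem, you can send the forwarded headers by hand and compare (a quick diagnostic sketch; the internal host/port and hostnames are placeholders):

      # hit ProGet directly with explicit forwarded headers to see whether the value your proxy sends is valid
      curl -sv -o /dev/null http://proget.internal:8624/ \
        -H "X-Forwarded-Host: proget.example.com" \
        -H "X-Forwarded-Proto: https"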

      Thanks,
      Alana

      posted in Support
    • RE: License not found in package

      Hi @dwynn_6489 ,

      I was able to reproduce this; the issue is that the package's license declaration specifies a license file of package/license.txt, but that file does not exist in the package.

      We will improve this error message via PG-3199 in the upcoming maintenance release, but in the meantime, the only workaround is to manually assign the license under SCA Licenses. The new version of ProGet will include a direct link to that page for convenience.

      The Purl you'd need to add is as follows:

      pkg:npm/%40progress/kendo-charts@2.9.0
      

      Hope that helps,
      Alana

      posted in Support
    • RE: ProGet 2025.14 (Build 12) - PostgreSQL Error when uploading

      Hi @it_9582 ,

      I'm afraid this is a long-standing (since we first introduced the feature) limitation on the name. It's not changeable/configurable and would require a nontrivial code change to lengthen.

      Thanks,
      Alana

      posted in Support
    • RE: 401 When trying to download assests from private repo

      Hi @spencer-seebald_1146 ,

      I was able to identify the issue.

      When you visit the URL in ProGet, ProGet requests this URL (slightly trimmed) with the appropriate authorization header:

      https://libraries.cgr.dev/javascript/..../lodash/-/lodash-4.17.20.tgz
      

      However, that URL will issue a 307 redirect to the following:

      /artifacts-downloads/javascript/namespaces/15f7d141c3b76b85/repositories/.../downloads/ABmYrfCH......KpxO1ducu3xmMRtw==
      

      ProGet then follows the redirect, but does not send the authorization header. And thus, a 401 is issued. This is actually the default/expected behavior in HttpClient (i.e. the library in .NET we use) and most clients in other languages (Java, Go, Ruby, etc.) as well.
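
      If you want to confirm that outside of ProGet, here's a quick sketch with curl (using the trimmed URL from above; the second command uses a placeholder for the redirect target):

      # step 1: note the 307 and its Location header (the Authorization header is accepted here)
      curl -sI -H "Authorization: Bearer $TOKEN" "https://libraries.cgr.dev/javascript/..../lodash/-/lodash-4.17.20.tgz"
      # step 2: request that Location *without* the Authorization header -- this mirrors what HttpClient does, and should return the same 401
      curl -sI "<Location value from step 1>"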

      Of course, it can be worked around by disabling auto-redirect and following the redirect yourself with the same header. But that's not so common, and as such, it's not common practice for servers to issue redirects that require authentication; we see other services handle the redirect using some kind of token in the querystring.

      On our end, this has not been an issue to date. This logic is buried pretty deep, and it's not an easy fix without changing code that everything relies on. I'm kind of surprised npm and pip override the default behavior in the fetch() and requests libraries.

      Anyway, it sounds like you can make a change to the private repository server code... so my suggestion here would be to just disable authentication on your artifacts-downloads endpoint. That URL is basically authenticated anyway.... it's so long (I stripped about 1000 characters) that it's effectively a password.

      Thanks,
      Alana

      posted in Support
    • RE: Apply license key inside container

      Hi @jlarionov_2030 ,

      PG-3133 (which allowed pgutil settings to run without a license key) was applied to ProGet 2025.12 so I don't think it could have worked in ProGet 2024.39.

      There were also no changes from ProGet 2025.12 to 2025.18 that would have caused this, and it works fine for me.

      Are you sure you're running the pgutil settings command first to apply a license key?

      Just based on the logs, I can't tell...

      Thanks,
      Alana

      posted in Support
    • RE: Incorrect Vulnerability Assesment for versions later than specified in description

      Hi @aristo_4359 ,

      This will happen from time to time and there's no great solution to fixing it.

      The underlying issue is simple actually; the source data is incorrectly coded, and systems like PGVD that rely on that will display incorrect results.

      Since sources routinely update data (and they may fix this... if you ask), PGVD will also update the ingested data. So it becomes quite complicated to try to "override" incorrect data, even though it's so obvious from reading the description and looking at it.

      Without getting into too many details, here is how they encoded this at the source:

      "database_specific": {
         "last_known_affected_version_range": "< 0.19.3"
      }
      

      Compare this to another vulnerability at the same source, and you will see this is the correct encoding:

      {
         "last_affected": "2.0.13"
      }
      

      Given the infrequency that this happens, and the fact that it's an old, low-risk vulnerability (we would rate this as a "2 out of 5" on our upcoming scale FYI), we don't think it's worth worrying about.

      Thanks,
      Alana

      posted in Support
    • RE: 401 When trying to download assests from private repo

      Hi @spencer-seebald_1146 ,

      Thanks for putting all the details together, this is really helpful! In theory, what you're doing should work... and I don't know why it's not. But it sounds like it'd be "trivial" to reproduce in a debug environment, so let's start there :)

      All we really need are credentials. It looks like your end-user opened a ticket on this issue as well (EDO-12512), so I will just add your email to that ticket and respond there with the same request.

      Once we have credentials, we'll try reproducing/fixing and hopefully get this working in no time :)

      Cheers,
      Alana

      posted in Support
    • RE: Support for NotAutomatic/ButAutomaticUpgrades headers in Debian feed Release files

      Hi @geraldizo_0690 ,

      Thanks! And we appreciate your ideas/suggestion and detailed guidance on how to implement it.

      It seems really simple and should be available in the upcoming maintenance release (next Friday) via PG-3196 -- we can also let you know when a prerelease is available if you wanted to try it sooner than that.

      Cheers,
      Alana

      posted in Support