    Inedo Community Forums

    Posts made by atripp

    • RE: Symbol Server id issue

      Hi @it_9582 ,

      What version of ProGet are you using? There was a recent regression (PG-3204) with regard to the symbol server that was fixed in ProGet 2025.19, so hopefully upgrading will fix the issue.

      Cheers,
      Alana

      posted in Support
    • RE: The SSL connection could not be established, see inner exception.

      Hi @jeff-williams_1864 ,

      I'm not quite sure why nuget.org would report using a self-signed certificate? That seems off, but it sounds like you're doing "something" with certificates that I don't quite understand :)

      On that note, the /usr/local/share/ca-certificates volume stores the certificates to be included in the container's certificate authority, which is used when connecting to a server with self-signed certificates: https://docs.inedo.com/docs/installation/linux/docker-guide#supported-volumes

      Hope that helps,

      Alana

      posted in Support
    • RE: ProGet license injection in AKS Pod

      hi @certificatemanager_4002 ,

      The 500 is occurring on /health because licenseStatus=Error and the software is basically unusable until you correct the license issue.

      You would see a similar "blocking" error in the ProGet UI as well - so just check that; once you correct the license error, the health check will return to normal.
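      As a sketch of what that means for a health-probe script (the payload shape here is assumed from the licenseStatus value above, not taken from documentation):

```python
import json

def license_ok(health_json):
    """Return True unless the /health payload reports a license error.

    The "licenseStatus" field name is an assumption for illustration.
    """
    payload = json.loads(health_json)
    return payload.get("licenseStatus") != "Error"

# A probe could then treat license_ok(...) == False as "fix the license
# in the UI before expecting /health to return 200 again".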

      Thanks,
      Alana

      posted in Support
    • RE: ProGet license injection in AKS Pod

      Hi @certificatemanager_4002 ,

      The license key is set via the UI, so you can browse/access the service as normal. You will be prompted to enter a key right away when there is no key or it has expired: https://docs.inedo.com/docs/myinedo/activating-a-license-key

      Thanks,
      Alana

      posted in Support
    • RE: Unverified/not approved chocolatey package categorized with Vulnerabilities:None

      Hi @svc-4x9p2a_6341 ,

      First and foremost, Chocolatey does not incorporate "Vulnerabilities" (i.e. centrally aggregated reports of vendor-reported weaknesses in software) into the package ecosystem. This is just not something that's a part of the Windows ecosystem as a whole, unlike the Linux ecosystem (e.g. Ubuntu OVALs).

      Chocolatey does, however, perform automated malware/virus scanning on packages. That's a totally different thing... please read our How Virus Scanning in Chocolatey Works article to learn more.

      From a technical standpoint, ProGet will use (abuse?) the vulnerability subsystem to treat "flagged" packages as vulnerable. This was a "quick and dirty" way for us to experiment with exposing this data through ProGet without having to build an entirely new subsystem just for Chocolatey packages.

      As for crystalreports2008runtime, it did not fail the virus/malware checking, so it's not going to be seen as "vulnerable" by ProGet. Instead, it hasn't been "validated" by Chocolatey's automated system. That's a different feature altogether (i.e. unrelated to virus checking) - and that ancient crystal reports package long predates the moderation feature in Chocolatey I believe.

      In any case, ProGet does not expose nor allow users to "filter" on this validation status, and it's highly unlikely such a capability would add much value to users - especially considering no one has asked for it, and the cost of developing an entirely new, Chocolatey-only feature is nontrivial.

      This is likely because everyone internalizes their packages; see Why You Should Privatize and Internalize your Chocolatey Packages to learn more.

      Hope that helps, maybe @steviecoaster can assist more.

      Cheers,
      Alana

      posted in Support
    • RE: Universal Package Versioning

      hi @tyler_5201,

      For a case like this, I'd recommend using a custom metadata field like _vendorVersion or something like that? That part is relatively easy.

      The hard part is "mapping" the vendor numbers to a SemVer. I would look at the data and decide how you want to "pack" them into three segments.

      2024.3.201 might work, assuming there are fewer than 100 revisions per service pack. Or maybe 2024.302.1. The number is really just for you, so whatever makes sense to you :)
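      For illustration, a toy mapping along those lines might look like this (the vendor-version format "2024 SP3 Rev201" is invented for the example; adjust the pattern to whatever your vendor actually uses):

```python
import re

def to_semver(vendor_version):
    """Pack a hypothetical 'YYYY SPx RevN' vendor version into x.y.z."""
    m = re.fullmatch(r"(\d{4}) SP(\d+) Rev(\d+)", vendor_version)
    if not m:
        raise ValueError("unrecognized vendor version: " + vendor_version)
    year, sp, rev = m.groups()
    # year -> major, service pack -> minor, revision -> patch
    return "{}.{}.{}".format(year, int(sp), int(rev))
```

The original vendor string would then live in the custom metadata field, with the packed number used only for ordering in the feed.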

      Cheers,
      Alana

      posted in Support
    • RE: Using curl to either check or download a script file in Otter

      Hi @scusson_9923 ,

      One idea ... how about a try/catch block?

      It's not great.... but the catch will indicate the file doesn't exist.
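      In Python terms, the pattern would look something like this (shown with a local file for simplicity; the same try/except shape applies to an HTTP download):

```python
def fetch_or_none(path):
    """Try to read the file; the except clause is the 'catch' that
    signals the file doesn't exist (or isn't readable)."""
    try:
        with open(path, "rb") as f:
            return f.read()
    except OSError:
        return None
```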

      Just a thought...

      Thanks,
      Alana

      posted in Support
    • RE: Zabbix rpm feed not working correctly

      Hi @Sigve-opedal_6476 , we're currently investigating and will let you know more later this week.

      posted in Support
    • RE: Vulnerability checking on Maven packages

      Hi @davi-morris_9177 ,

      Unfortunately, the source data for these particular vulnerabilities specifies invalid version numbers. A valid Maven version has up to five parts: 1-3 integer segments (separated by a .), an optional build number (prefixed with a -), and an optional qualifier (after another -). Following these rules, 2.9.10.8 is invalid.

      Valid versions are sorted semantically, whereas invalid versions are sorted alphabetically -- which is what's causing the big headache here, since "2.21.1" < "2.9.10.8" when you sort alphabetically.
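      A quick way to see the problem:

```python
versions = ["2.21.1", "2.9.10.8"]

# String (alphabetical) comparison: '2' < '9' at the third character,
# so "2.21.1" sorts *before* "2.9.10.8" -- i.e. 2.9.10.8 looks "newer".
alphabetical = sorted(versions)

# Numeric (semantic-style) comparison: 21 > 9, so 2.21.1 is actually newer.
numeric = sorted(versions, key=lambda v: [int(p) for p in v.split(".")])
```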

      At this time, we don't have any means to "override / bypass" source data, and rewriting/updating our Maven version parsing for just a small corner case (i.e. these old/irrelevant vulnerabilities in particular) doesn't seem worthwhile.

      As such, for the time being, your best solution is just to "Ignore" these vulnerabilities via an assessment. They are totally irrelevant now, not just because they refer to ancient versions, but there is simply no realistic real-world exploit path: https://cowtowncoder.medium.com/on-jackson-cves-dont-panic-here-is-what-you-need-to-know-54cd0d6e8062

      FYI - for ProGet 2026, we are working on a lot of improvements in vulnerability management that will reduce the noise of these non-exploitable vulnerabilities so teams can address actual risk and focus on delivering value instead of constant patching.

      Thanks,
      Alana

      posted in Support
    • RE: Layer Scanning is not working with images which is pushed with --compression-format zstd:chunked

      Hi @geraldizo_0690 ,

      Nice find with the busybox image... that makes it a lot easier to test/debug on our end!!

      We already have a ZST library in ProGet so, in theory, it shouldn't be that difficult to use it for layers like this. We'll add that via PG-3218 in an upcoming maintenance release -- currently targeting February 20.

      Thanks,
      Alana

      posted in Support
    • RE: Layer Scanning is not working with images which is pushed with --compression-format zstd:chunked

      Hi @geraldizo_0690 ,

      Are you seeing any errors/messages logged like, Blob xxxxxxx is not a .tar.gz file; nothing to scan.? If you go to Admin > Executions, you may see some historic logs about Container scanning.
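      For background, the usual way to tell such blobs apart is by their magic bytes; a rough sketch (this is general file-format knowledge, not ProGet's actual detection code):

```python
def layer_compression(blob_prefix):
    """Guess a layer blob's compression from its leading magic bytes."""
    if blob_prefix.startswith(b"\x1f\x8b"):          # gzip magic number
        return "gzip"
    if blob_prefix.startswith(b"\x28\xb5\x2f\xfd"):  # zstd frame magic number
        return "zstd"
    return "unknown"
```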

      Thanks,
      Alana

      posted in Support
    • RE: Zabbix rpm feed not working correctly

      Hi @Sigve-opedal_6476 ,

      Could you give some tips/guidance on how to reproduce the error? Ideally, it's something we can see using only ProGet :)

      It's probably some quirk in how they implement things, but I wanted to make sure we're looking at the right things before starting.

      Thanks,
      Alana

      posted in Support
    • RE: Using curl to either check or download a script file in Otter

      Hi @scusson_9923 ,

      That is an internal/web-only API URL, so it wouldn't behave quite right outside a web browser.

      I can't think of an easy way to accomplish what you're looking to do.... if you could share some of the bigger picture, maybe we can come up with a different approach/idea that would be easier.

      Thanks,
      Alana

      posted in Support
    • RE: InitContainers never start with Azure Sql on ProGet 25.0.18

      Hi @certificatemanager_4002 ,

      I'm sorry but I'm not familiar enough with Kubernetes to help troubleshoot this issue.

      All that I recognize here is the upgradedb command, which is documented here:
      https://docs.inedo.com/docs/installation/linux/installation-upgrading-docker-containers#upgrading-the-database-only-optional

      If you run that command from the command line (on either Linux or Windows), things will be written to the console. I wish I could tell you why you aren't seeing the messages.

      Thanks,
      Alana

      posted in Support
    • RE: Proget apt snapshot support?

      Hi @phil-sutherland_3118 ,

      This is not on a roadmap. Honestly, we don't really understand what a "snapshot" repository is or how they are used.

      We surveyed some customers about it a while ago, and this summarizes what they said: repository snapshots are archaic; they made sense a long time ago, but Docker changed all that. It's so much simpler to use container images like FROM debian:buster-20230919. That's effectively our snapshot, and when we need to maintain old releases (which happens more often than I'd like), we just rebuild the image from that. The other big advantage is that build time is easily 10x faster if not more.
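      That pinning approach might look like this hypothetical Dockerfile (the dated tag is the one quoted above; the installed package is invented for the example):

```dockerfile
# The dated base tag acts as the "snapshot": every rebuild starts from
# the exact same Debian package set baked into debian:buster-20230919.
FROM debian:buster-20230919

# Note: packages installed at build time still come from the live repos;
# the pin only covers what's in the base image itself.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
```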

      And then we saw that Debian also maintains their own snapshots (https://snapshot.debian.org/), so we don't quite get how they are used outside of a handful of use cases (like a build process for a specialized appliance OS without Docker).

      Anyway, we're open to considering it.... but only two people (including you) have asked in the past several years, so there's no real interest... and we're not sure what snapshots even do :)

      That said, it's possible there's a way to accomplish something that has the same outcomes. For example:

      • create a public aggregate feed (jammy-all) with multiple connectors to Debian, Ubuntu, NGINX, Elasticsearch, etc.
      • create a release feed (jammy-20231101) that snapshots jammy-all

      But we don't know enough to answer that :)

      Thanks,
      Alana

      posted in Support
    • RE: Http Logs enabled on only one server

      Hi @parthu-reddy ,

      I'm not sure if there's a relation here, but perhaps. The "running out of disk space" is not surprising if you're indexing mega-repositories like the public Debian repos. They are gigabytes in size. Here's some more info about those:
      https://blog.inedo.com/inedo/proget-2025-14-major-updates-to-debian-feeds

      You definitely want to switch to Indexing Jobs when you connect to public repos.

      This can be set at the operating-system level (it's the %ProgramData% special folder) or in ProGet under Admin > Advanced Settings > LocalStorage.

      Anyway, this is something best brought up as a separate topic if you have follow-ups (if you don't mind); I'd hate to pollute this thread with Debian/indexing questions :)

      Cheers,
      Alana

      posted in Support
    • RE: Error importing from Artifactory: The SSL connection could not be established, see inner exception.

      Hi @michael-day_7391 ,

      I can't really comment on what you're seeing in the Artifactory logs (i.e. [1] and [2]), but when an Access Token is specified, that token is sent on requests via a Bearer authorization header (unless Use Legacy API Header is selected). Otherwise, the Username/Password are sent via a Basic header. This happens on each and every request, regardless of whether it's a file download, API call, etc.
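      The two header shapes described above can be sketched like so (dummy values; this is standard HTTP auth construction, not Inedo's internal code):

```python
import base64

def auth_header(token=None, username=None, password=None):
    """Bearer header when a token is given, otherwise Basic auth."""
    if token:
        return {"Authorization": "Bearer " + token}
    # Basic auth is base64("username:password"), per RFC 7617
    creds = base64.b64encode((username + ":" + password).encode()).decode()
    return {"Authorization": "Basic " + creds}
```

Checking which of these shapes shows up in the Artifactory access log should tell you which credential path the import is actually using.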

      It's probably just easier to disable authentication during the import if this keeps coming up.

      OCI Registries (i.e. what you're using for your Helm charts, as opposed to a regular Helm registry) are not supported, so you'd need to export those files and use disk-based import or something like that.

      Cheers,
      Alana

      posted in Support
    • RE: Deleting and creating a signing key for a Debian Feed doesn't give a success feedback, also still signature v3 is used?

      Hi @frei_zs,

      ProGet 2025.12 does not support the PGP v3 format, and there's no way you can get it working. So, you'll need to upgrade to the latest version, which does support the format.

      Here's some more information on the changes:
      https://blog.inedo.com/inedo/proget-2025-14-major-updates-to-debian-feeds

      Cheers,
      Alana

      posted in Support
    • RE: Feed Group and Feed

      Hi @mikael ,

      We plan to add this support via PG-3213 in an upcoming maintenance release -- perhaps Feb 20 if all goes well!

      Cheers,
      Alana

      posted in Support
    • RE: Ability to show usage of (e.g. script) assets

      Hi @jonathan-simmonds_0798,

      Thanks for the suggestion! This has been a long-standing wish-list item, but it's deceptively complicated.

      The "current idea" is a feature called "raft analysis" that will create a list of all raft items that depend on other raft items. For example, a pipeline that references a script. Or, a script that calls a module, and so on. It could also detect warnings/errors and report them.

      Creating this list often involves opening thousands of files and "parsing" them, which is not a trivial operation... but we're only talking "a few minutes" in most cases. However, the main challenges arise with invalidating this list (many edits will cause that to happen), and then communicating the status of the rebuild to users.

      I'll add a note to our BuildMaster 2026 roadmap though and see if we can explore it again; the current focus is boring..... modernization (PostgreSQL).

      That said, you probably noticed it, but.... you should be able to see if a particular pipeline has an error (like a missing script) on the pipeline overview page. Not as nice.

      Thanks,
      Alana

      posted in Support
    • RE: Deleting Debian feed and connectors didn't delete local index files

      Hi @parthu-reddy ,

      At this time, we don't have a disk cleanup procedure for local storage like this; we may add it in the future, but for the time being you can just delete them. The LocalStorage folder is ephemeral -- not quite "temp" storage, but the contents can be deleted. They will just be recreated next time it's needed.

      Thanks,
      Alana

      posted in Support
    • RE: Error importing from Artifactory: The SSL connection could not be established, see inner exception.

      Hi @michael-day_7391,

      ProGet is not designed to provide many details on network- or OS-level errors; that's where tools like Invoke-WebRequest come in. And it sounds like you've already discovered the root cause (failed certificate revocation check) that way.

      Anyway... when hosting ProGet on Windows, the Windows network stack will be used. So, if Windows is refusing to connect for whatever reason, then ProGet will also not connect. There's unfortunately no way around this, and we do not allow bypassing of SSL in ProGet.

      The good news is, once you get Invoke-WebRequest working, then you'll be able to connect. There's probably some magical registry setting out there that will help :)

      Cheers,
      Alana

      posted in Support
    • RE: Nuget connector stuck in failed state ("'0x00' is an invalid start of a property name")

      Hi @mayorovp_3701

      "Actually zero byte in position 1 looks like attempt to read UTF16-LE-encoded json as UTF8"

      Oh that's a great observation! Yeah that sounds like a reasonable explanation. But still... how could that even be possible?

      It's not like ProGet is going to randomly swap an encoding like that, and it's not like NuGet is going to store .json files incorrectly.
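      For what it's worth, the encoding-mixup theory is easy to demonstrate (the JSON sample here is made up):

```python
import json

# JSON encoded as UTF-16-LE: every ASCII character is followed by a 0x00 byte
utf16_bytes = '{"name": "pkg"}'.encode("utf-16-le")
assert b"\x00" in utf16_bytes

# 0x00 happens to be a *valid* UTF-8 byte, so a naive UTF-8 decode "succeeds"...
text = utf16_bytes.decode("utf-8")

# ...but the JSON parser then chokes right where a property name should start,
# which matches the "'0x00' is an invalid start of a property name" error.
try:
    json.loads(text)
    error = None
except json.JSONDecodeError as e:
    error = str(e)
```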

      As for experimentation, next time it happens:

      • remove one of the connectors from the feed at a time, to isolate which one is causing it
      • navigate to the JSON endpoints of the connector in question, to see if you see the bad JSON
      • try to identify a pattern of behavior that causes this
      • watch for HTTP access logs to see if you can find the exact URL that's being accessed at the time of the connector failure (assuming it's a self-connector)
      • be prepared to attach a MITM proxy to ProGet (Admin > Proxy)

      Thanks,
      Alana

      posted in Support
    • RE: Nuget connector stuck in failed state ("'0x00' is an invalid start of a property name")

      Hi @mayorovp_3701,

      That's a really strange error; it's basically saying that, somehow, a 0x0 character found its way into some JSON returned by the API. This character is invisible, and you'd need to use a hex editor or developer tool to even see it.
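      One way to "see" it is a hex dump (the payload here is a made-up example; capture the real bytes with your HTTP client of choice):

```python
# A stray 0x00 byte is invisible in most editors, but obvious in a hex view
payload = b'{"items": []}\x00'
hex_view = payload.hex(" ")
# the trailing "00" stands out even though the byte itself can't be seen
```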

      I guess, in theory it could be inserted by some intermediate device (firewall, gateway, etc), but who knows at this point. I can't imagine how that could happen on either NuGet or ProGet, but that's the first place to start looking.

      I suspect the server restart is unrelated; that certainly wouldn't cause a random 0x0 unless there's something really broken with the computer.

      From here, you'll want to keep isolating the issue and try to figure out which connector is "bad":

      • If it's NuGet.org -- the issue is most certainly a network/gateway that's doing that.
      • If it's ProGet -- it's likely some strange bug, where 0x0 got inserted into the database for a connector or feed or something. We saw that during some migrations, but it's really hard to track down.

      I would just keep experimenting. If it's related to a reboot, just stop/start the service. That should be the same.

      Thanks,
      Alana

      posted in Support
    • RE: Error importing from Artifactory: The SSL connection could not be established, see inner exception.

      Hi @michael-day_7391,

      That's just a generic SSL error, which as you may know, is happening at the operating-system level. That quick-connect screen won't provide details. There may be a redirect happening, but it's hard to say.

      It's odd that it would work from the web browser, but that's not uncommon. Using Invoke-WebRequest should reproduce the error. If it doesn't, stop the service and run ProGet manually (proget.exe run) so it's running under the same user/account.

      You should also be able to get a stack trace by adding a connector; that would be logged as a connector error.

      Thanks,
      Alana

      posted in Support
    • RE: Multiple deployment targets on same server

      Hi @koe ,

      This is definitely a problem that you can solve with BuildMaster, but before giving any kind of technical guidance, I'd like to understand the business processes.

      On first glance, this sounds like one of two scenarios:

      • Quasi-custom Software, where you create a customized build of a software application (perhaps bundled with their plugins, etc)
      • User-driven Deployments, where you maintain a single application but deploy a new version of that application based on user requirements (new feature they requested, bug fix, etc)

      Are either of those close?

      Whatever the case, can you describe the decision-making process or rationale that goes into "deploying a software release to either all production systems, all test systems, or just a single one out of all these systems"?

      Are there different types of releases (e.g. a "patch" release of an old version)? Or is everyone "forward only, latest version"?

      BuildMaster is, of course, an automation platform - but more importantly, it's about modeling process and visualization. And when it comes to process, consistency is key - even when there are variations.

      We don't believe a decision like above is "arbitrary, and based on the whims of an application director", but there's probably some rationale that goes into it. So, with BuildMaster, our goal is to help get everyone on the same page about which process to follow for different releases.

      Anyway, how you model this will have a big impact down the line.

      Cheers,

      Alana

      posted in Support
    • RE: [Feature] Scope SCA permissions to Project or "Project Group"/Assign Project to Feed Group

      Hi @Nils-Nilsson ,

      Good news - this is actually on our ProGet 2026 roadmap.

      The general idea is to "reuse" Feed Groups -- I guess we'd call them "Feed & Project Groups" or something? Anyway, the projects would be grouped in the UI similarly, and you could scope project-based permissions to a group.

      We will try to get it as a preview feature in the coming weeks, assuming it can be done in low risk. It seems like this would be the case.

      Cheers,
      Alana

      posted in Support
    • RE: Feed Group and Feed

      Hi @mikael ,

      Oh sorry, we decided not to refactor/rewrite the API after all -- and I guess we threw out all the "ideas" attached to that initiative as well. This was on our roadmap for several years; we didn't realize there was a customer-facing request attached, which is how we forgot.

      Anyway, I've moved this back to a feature request, and we'll look to add this to the existing API. It probably won't be that bad! Please stay tuned; hopefully we'll evaluate it within the next couple weeks.

      Cheers,
      Alana

      posted in Support
    • RE: Local index file update failure: The remote server returned an error: (403) Forbidden.

      hi @michael-day_7391 ,

      The correct connector URL would be:

      https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.31/rpm/
      

      I added that connector and could see/browse/download packages in the repository.

      Thanks,
      Alana

      posted in Support
    • RE: AD integration not working in ProGet 2025.18

      Hi @michael-day_7391 ,

      I guess not? I've never heard of StartTLS, and no one else seems to ask -- so it's probably not worth investigating. I guess LDAPS is what's popular, so it's probably easier to just go that route.

      Thanks,
      Alana

      posted in Support
    • RE: An error duing cargo build

      Hi @caspacokku_2900 ,

      Yeah, it's pretty weird. The errors are all over the place - not in a specific database query or anything like that. This also doesn't look like the server-overload symptoms we've seen.

      It's as if internal network connectivity is somehow breaking within a container? Or there's "something else" wrong with the internal PostgreSQL server? These are all "deep system" level errors in basically operating-system level code (drivers, etc).

      We've seen this with another user with really weird errors, but no idea on how to reproduce it. Maybe it's an "error reporting an error".

      As for the feed... there's nothing special about cargo vs other feeds from an API/usage standpoint. If anything, npm hammers the server a lot harder with its 1000+ package restores. And this has nothing to do with connectors.

      Unfortunately we don't have a lot to go on:

      • how about increasing the hardware?
      • can you try a different physical server?
      • could it somehow be the underlying operating system?
      • any patterns as to when this is happening (lots of traffic, etc.)?

      Any clues or consistency would help.

      Thanks,
      Alana

      posted in Support
    • RE: Note on the instructions for downloading packages from Debian Feed

      Hi @geraldizo_0690 ,

      Thanks for the report! Sometimes bug fixes are a single character like this ...

      It'll be in the next maintenance release via PG-3205 :)

      Cheers,
      Alana

      posted in Support
    • RE: An error duing cargo build

      Hi @caspacokku_2900 ,

      I'm afraid these errors aren't related to Cargo; they indicate some kind of system/network error. In some cases, there's an unexpected/broken network configuration between the ProGet application and the underlying database (PostgreSQL).

      In other cases, it appears to be related to connections to cargo's public repository.

      So bottom line, this is an environment-specific issue. Can you tell us a bit about your configuration? How about a Paste of Admin > System Information?

      Thanks,
      Alana

      posted in Support
    • RE: An error occurred in the web application: Property set method not found.

      Hi @michael-day_7391 ,

      These days, it's considered a "UX best practice" to provide minimal options in the installer, since fewer choices need to be made when you're unfamiliar with the tool. Instead, programs should allow these to be configured post-installation. Most modern tools have shifted to this practice, including our products.

      As for the folder, the ProgramData folder is indeed the standard/recommended practice on Windows. If you're still configuring Windows with multiple drives/partitions (less common these days with SSDs), you should definitely change the User/ProgramData directories during provisioning:

      https://learn.microsoft.com/en-us/troubleshoot/windows-server/user-profiles-and-logon/relocation-of-users-and-programdata-directories

      Many Windows programs (including ProGet) will use these to store application data, and it can balloon to gigabytes if you're not careful.

      Thanks,
      Alana

      posted in Support
    • RE: AD integration not working in ProGet 2025.18

      Hi @michael-day_7391 ,

      There is an option to "Use LDAPS", so I would make sure to select that.

      Thanks,
      Alana

      posted in Support
    • RE: An error occurred in the web application: Property set method not found.

      Hi @michael-day_7391 ,

      This folder is for ephemeral, temporary storage. The only time it will require a lot of space (hundreds of MB) is if you're doing things like proxying public Debian repositories.

      So maybe you're changing the wrong folder?

      Perhaps you are looking for the PackageRoot instead?

      Thanks,
      Alana

      posted in Support
    • RE: Proget: Move data to another folder

      Thanks @certificatemanager_4002 -- that will definitely work if you're using inedodb (i.e. setting up a cluster), though for a single-server instance, inedodb isn't recommended.

      @Sigve-opedal_6476 I'm not super-experienced with Linux myself.... but if the container is stopped, then you should be able to just move/copy the files. The container must be able to read the files -- I do know there's some kind of permissions/user error when things aren't set right.

      Please share the database error you see when you restart.

      posted in Support
    • RE: Migration from SQLServer to PostGres

      Hi @certificatemanager_4002 ,

      We do not recommend using a multi-server / clustered installation for the database server; for an application with a profile like ProGet, it's significantly slower, less reliable, and ironically, has lower availability.

      Instead, in the unlikely event that your physical database server "goes bad", I would just do routine backups and be prepared to "spin up" a new server from backup ASAP.

      Thanks,
      Alana

      posted in Support
    • RE: An error occurred in the web application: Property set method not found.

      Hi @michael-day_7391 ,

      It looks like this was an oversight, and this value cannot be configured. However, we will fix that via PG-3203 in the upcoming maintenance release on Friday.

      Thanks,
      Alana

      posted in Support
    • RE: Add support for Terraform Public Registry in ProGet (offline/air-gapped)

      Thanks @davidroberts63

      I looked into this a little more, so just as an FYI...

      • Terraform Modules are relatively simple templates for code/configuration that you write and maintain (think of it like a Helm chart)
      • Terraform Providers are executable files invoked by Modules, and many of them simply embed and wrap a CLI tool; the most popular providers are AWS, Azure, GCP, but there are many niche ones by other vendors

      No one really writes their own Terraform Providers, and it's highly unlikely you would ever do that. They should be thought of as SDKs/CLIs downloaded from a marketplace... except they run as admin/root and have your most "sacred" credentials.

      So just to clarify... you can currently host Terraform Modules in ProGet and you can also most certainly host Terraform Providers in ProGet (just using an Asset Directory). The question comes down to user experience and convenience.

      The issue here is that Terraform Providers do not fit into ProGet's "package" or "connector" mindset; so it's not just a new feed type, but creating a whole new feature in ProGet that won't really work like other feeds.

      We were able to "force" Terraform Modules into Packages/Feeds... but as you can see, it requires jumping through a lot of hoops, since Terraform is a "terraform.io-first" tool. However, many users create their own Modules, so it adds a lot of value.

      The same isn't true of Providers. From a "proxying" standpoint, high-control organizations would likely not want to "proxy" the Terraform Registry for Providers -- it's far too much risk, considering anyone can just upload whatever they'd like.

      Instead, they'd likely have a review process for adding and upgrading providers. And once that's in place.... how much time are we saving by using a specialized feed type over an asset directory?

      I don't have the answer to that --- but that's what we're considering :)

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: AD integration not working in ProGet 2025.18

      Hi @michael-day_7391 ,

      Unfortunately AD/LDAP issues can be pretty challenging to troubleshoot and debug. I can give you a few general tips, but this is one of those things where there are no useful logs -- it's like trying to diagnose why you're getting a timeout doing an HTTP request. The real issue is somewhere down the line.

      Assuming you're able to connect to the LDAP Server (Domain Controller in this case, it sounds like), the most common issue is permissions. This can get really painful, because it can be incredibly granular - an account can be allowed to enumerate groups, but not bind to specific users (i.e. do a login). Other times, it's related to multi-domain / complex forests, and things like misconfigured trusts.

      For security reasons, the AD/LDAP server never really tells the client what's wrong -- that's why you won't see anything useful in ProGet. You have to look at logs on the server to find out what the exact issue is.

      If those aren't easily accessible, my advice is to keep "playing around" and perhaps try the LDAP/OpenLDAP directory, which is basically just "raw" LDAP queries. Or try V5 vs V4, etc.
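
If you have a Linux box handy, raw LDAP queries with ldapsearch can also help isolate whether the service account can bind and enumerate at all. The server, bind account, and base DN below are made up; substitute your own:

```shell
# Hypothetical values throughout; -W prompts for the bind password
# 1. Can the service account bind at all?
ldapsearch -H ldap://dc01.example.local -D "svc-proget@example.local" -W \
  -b "DC=example,DC=local" -s base "(objectClass=*)"

# 2. Can it enumerate groups?
ldapsearch -H ldap://dc01.example.local -D "svc-proget@example.local" -W \
  -b "DC=example,DC=local" "(objectClass=group)" cn
```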

      Here is the source code, if you're curious to see what's going on behind the scenes:
      https://github.com/Inedo/inedox-inedocore/tree/master/InedoCore/InedoExtension/UserDirectories/ActiveDirectory

      Again, the server logs (on the LDAP/AD server, not the ProGet server) are going to be the best place to look for queries and issues.

      Hope that helps,

      Alana

      posted in Support
      atripp
    • RE: The ConnectionString property has not been initialized

      Hi @tyler_5201 ,

      Sorry, I really don't know what "squashing" or "101:0" permissions mean, so I don't really know what the issue is... but I did read "this does work", and then saw you asked a question which I didn't quite understand 😅

      What I can say is that ProGet needs to have "full control" over the database, package, and backup directories. That's not something we could reasonably change... and I don't know if you're even asking for that.

      Otherwise, we also have a pretty basic Docker image configuration (Dockerfile), and obviously making changes comes with risks. Before considering those changes, we would need to really understand what kind of value/benefit comes out of this and what kind of changes are involved here.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Add support for Terraform Public Registry in ProGet (offline/air-gapped)

      Hi @mikael ,

      Thanks for the additional insight.

      The use case you describe (offline/air-gapped usage of Terraform) does seem rather niche, and is different from the traditional "proxying" use case we describe. Very few organizations restrict internet access like that, and proxying is more about controlling versions and limiting what developers are able to use.

      Anyway, it may not be a good fit for investment on our end. But we'll see if anyone else joins this thread :)

      That said, a Provider is basically just an executable file in a zip file. I wonder if you could simply use an Asset Directory somehow.
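
For instance, a provider zip could be pushed to an asset directory with a simple HTTP PUT. This is just a sketch -- the hostname, asset directory name, path layout, and API key below are all placeholders:

```shell
# Sketch: upload a provider binary to a ProGet asset directory
curl -X PUT -H "X-ApiKey: abc12345" \
  --data-binary @terraform-provider-example_1.0.0_linux_amd64.zip \
  "https://proget.example.com/endpoints/terraform-providers/content/example/1.0.0/terraform-provider-example_1.0.0_linux_amd64.zip"
```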

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: ProGet - Delete API for Builds

      Hi @jw ,

      We added "auto-publish Inedo.ProGet" to our internal tracking list -- and in theory it will be done in the next week or so. I think we meant to do that earlier but it just fell off the list.

      Thanks for pointing that out - please don't hesitate to bug us if you don't see it published next time :)

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Maven Metadata Checksum Warnings

      Hi @wechselberg-nisboerge_3629,

      In ProGet, the maven metadata files (xml, hash) are indeed generated upon request. The output is deterministic, based on the artifacts in storage and (if relevant) in the remote repository (i.e. connectors). So, if you're seeing it change, it's because an artifact was uploaded/etc.

      One thing to note -- you cannot upload a metadata file or hash file. Well, you can try (and maven tries) to PUT the file, but the stream is always ignored or "written to /dev/null" as they say.

      We've seen some maven workflows/plugins that attempt to modify/append to this metadata file and re-upload it with changes.
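
As a rough sketch (host, feed name, and coordinates below are placeholders), the PUT will appear to succeed, but the uploaded body is discarded; a subsequent GET returns server-generated metadata:

```shell
# Placeholders throughout; ProGet ignores the uploaded metadata body
curl -u user:pass -T maven-metadata.xml \
  "https://proget.example.com/maven2/my-feed/com/example/app/maven-metadata.xml"

# The metadata returned here is regenerated from the artifacts in storage
curl -u user:pass \
  "https://proget.example.com/maven2/my-feed/com/example/app/maven-metadata.xml"
```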

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: File download with wget only works with auth-no-challenge argument

      Hi @it_9582 ,

      Thanks for the additional information.

      The reason this is happening is because these endpoints do not return a WWW-Authenticate: Basic ... header value when responding with 401.

      This behavior is intentional, as our preferred authentication is an API Key header value, not Basic credentials. Basic is an alternative option and can be used when you can't easily pass a header.

      We are not willing/able to change this behavior, as it would require changing code on a substantial number of endpoints and may break user integrations that have been relying on existing behavior.

      So I'm afraid this means you'll need to do one of the following:

      • add the --auth-no-challenge option
      • use wget --header="X-ApiKey: <api-key>" "<Download path>" instead
      • use curl instead
      • use pgutil instead
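
For example, assuming a hypothetical proget.example.com host, download path, and placeholder API key, the first three alternatives look roughly like this:

```shell
# Sketch only: hostname, path, credentials, and API key are placeholders

# Option 1: make wget send Basic credentials preemptively, without waiting
# for a (never-sent) WWW-Authenticate challenge
wget --auth-no-challenge --user=myuser --password=mypass \
  "https://proget.example.com/endpoints/my-assets/content/file.zip"

# Option 2: pass the API key as a header instead of Basic credentials
wget --header="X-ApiKey: abc12345" \
  "https://proget.example.com/endpoints/my-assets/content/file.zip"

# Option 3: curl sends Basic credentials on the first request by default
curl -u myuser:mypass -O \
  "https://proget.example.com/endpoints/my-assets/content/file.zip"
```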

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Searching packages with symbol like @ and / will return empty

      Hi @aristo_4359 ,

      I assume you are referring to npm packages? And that you are using the /packages page and not the Feed page?

      This behavior seems a bit quirky, but it's expected. The /packages page does not have any special logic for handling package-type-specific naming - the "search" is really more of a "filter". In npm, the @ symbol denotes a group (namespace), and the / symbol is the separator between the group and name - and this is why you get this behavior.
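
To illustrate with a made-up package name, a scoped npm name splits on the first / into a group and a name, which is why a filter that treats @ and / literally won't match either part on its own:

```shell
# Hypothetical scoped npm package name
name="@myorg/utils"
scope="${name%%/*}"   # group/namespace part, i.e. "@myorg"
pkg="${name#*/}"      # name part, i.e. "utils"
echo "$scope $pkg"    # → @myorg utils
```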

      Hope that explains the behavior a bit! It's not ideal, but it's a limitation that exists for performance reasons, etc.

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: File download with wget only works with auth-no-challenge argument

      Hi @it_9582 ,

      Unfortunately I'm not really sure what your script is doing or how to fix it... but I will describe the server (ProGet) behavior.

      Unless you allow Anonymous access on the endpoint, ProGet will respond with a 401 when you access a URL without any authentication information (API Key header, Basic credentials). That appears to be what's happening in the message you shared.

      So if you're getting that message, then I guess the username/password isn't being sent? I really don't know what --auth-no-challenge means or does.

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: ProGet 2025.14 (Build 12) - PostgreSQL Error when uploading

      Hi @it_9582,

      I'm afraid this requires a code change and an external database will have no impact.

      Thanks,
      Alana

      posted in Support
      atripp