    Inedo Community Forums

    Posts made by stevedennis

    • RE: Connector for accessing Jfrog pypi feed

      Hi @pmsensi ,

      You'd need to run through a proxy like Fiddler or ProxyMan, which can capture the outbound / inbound traffic. Or if you provide us with a URL/API key we might be able to play around and attach a debugger. You could always create a free trial or something and set it up that way.

      First thing that comes to mind is that your endpoint is incorrect and you may need to "play around" with the URL. Typically it's like this:

      • endpoint is https://server.corp/path/my-pypi
      • index file is https://server.corp/path/my-pypi/simple
      • ProGet automatically appends /simple unless you specify otherwise in the Advanced settings

      The "Simple" endpoint is just an HTML page of links like <a href="...">; that's what ProGet's pypi connectors follow.
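      To illustrate, here's roughly what a client sees when it fetches a Simple index page — a sketch using only Python's stdlib; the URL paths and package names below are made up for the example:

```python
from html.parser import HTMLParser

# A PEP 503 "Simple" index page is just a list of <a href="..."> links;
# this is the format that pypi connectors traverse.
SAMPLE_INDEX = """
<html><body>
<a href="/path/my-pypi/simple/requests/">requests</a>
<a href="/path/my-pypi/simple/internal-lib/">internal-lib</a>
</body></html>
"""

class LinkParser(HTMLParser):
    """Collects (href, text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if getattr(self, "_href", None) and data.strip():
            self.links.append((self._href, data.strip()))
            self._href = None

parser = LinkParser()
parser.feed(SAMPLE_INDEX)
print(parser.links)
```

      If hitting the endpoint directly (e.g. with curl) doesn't return a page shaped like this, that's a good sign the URL needs adjusting.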

      Thanks,

      Steve

      posted in Support
      stevedennis
    • RE: PHP Composer feed connect to vcs type repository or how to upload

      Hi @ayatsenko_3635, @dubrsl_1715,

      Thanks for the feedback and continued discussion.

      Our Composer feed is relatively new, so we are open to exploring new ideas. One option might be to do an "import" of packages into ProGet, similar to how we handle connectors in Terraform feeds. But that's something that could also be done with a script, so we'd want to see what that looks like as a prototype.

      That said, we definitely can't implement "content pointers" to Git repositories. The Git-based "package" model is not only outdated but it has several "fatal" Software Supply Chain problems.

      The biggest problem, by far, is that package content is hosted by a third party (i.e. GitHub) and managed by another third party (i.e. the repository owner). At any time, a package author or the host can simply delete the repository and cause a major downstream impact - like the infamous left-pad incident in npm.

      This is why other package ecosystems have adopted a "read only" package repositories and disallow deletes (except for rare cases like abuse). Once you upload a package to npmjs, nuget, rubygems, etc. -- it's permanently there, and users can always rely on that being the case.

      That's simply not possible with Git-based "packages". The "content pointer" must be periodically updated, such as when the author decides to move the Git repo to GitLab, Gitea, etc. Now you no longer have a read-only package repository, but one that must be editable. Who edits? Are they tracked? What edits are allowed? Etc.

      There are several other issues, especially with private/organizational usage, like the fact that there's no reasonable way to QA/test packages (i.e. compared to using a pre-release/repackaging workflow) and that committing/publishing are coupled (i.e. you tag a commit to publish it). This makes governance impractical.

      And that's not to mention the fact that there's no real way to "cache" or "privatize" third-party remote Git repositories. Unless, of course, you mirror them into your own Git server... which is technically challenging, especially if the author changes hosts.

      We first investigated Composer feeds years ago, but they didn't have package files at the time -- only Git repository pointers. Like Rust/Cargo (which used to be Git-based, and still technically supports Git-based packages), we anticipate the same maturation coming to Packagist/Composer as well.

      So while we certainly understand that this is a workflow shift, it's a natural evolution/maturation of package management. Likewise, it took decades for the development world to shift from "file shares" and legacy source control to "Git", but that's what everyone has standardized on.

      That's the world ProGet operates in, so it might be a bit "too far ahead", or we might be misreading the tea leaves - but a "package mindset" is a requirement for using ProGet. Hopefully we can make that easier, possibly by exploring "package imports" or something like that.

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: Known licenses are shown as unknown

      Hi @frank-benson_4606 ,

      Thanks for clarifying, that makes sense.

      I'm afraid that ProGet does not "crawl" the parent artifacts for metadata; we had considered it, but it's rather challenging to do from an engineering standpoint, difficult to present crawler errors, and fairly uncommon.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: PHP Composer feed connect to vcs type repository or how to upload

      Hi @dubrsl_1715 ,

      Thanks for the clear explanation; I'm afraid your best bet here is to "take the plunge" and adopt package management best practices. A key tenet is a "self-contained archive file with a manifest" that is read-only and "cryptographically sealed" (i.e. hashed).

      A "pointer to a Git commit" seems convenient at first, but there's a good reason that many ecosystems (including Composer) have moved away from it -- ultimately it's just not scalable and doesn't handle team workflows very effectively.

      This will likely involve updating your composer.lock files and also the way you publish packages. In general, your process for creating a new version of a PHP package should look something like:

      1. Checkout code
      2. Modify composer.json file with a version number
      3. Package the file
      4. Publish to ProGet

      As you get more mature, you can get into pre-release versioning and repackaging.
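      As a rough sketch of steps 2-3 above (the project name, version, and paths here are hypothetical, and a CI script would normally drive this):

```python
import json
import os
import tempfile
import zipfile

def package_composer_project(project_dir, version, out_zip):
    """Stamp a version into composer.json (step 2), then zip the project (step 3)."""
    manifest_path = os.path.join(project_dir, "composer.json")
    with open(manifest_path) as f:
        manifest = json.load(f)
    manifest["version"] = version  # composer.json supports an optional "version" field
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)

    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as z:
        for root, _, files in os.walk(project_dir):
            for name in files:
                full = os.path.join(root, name)
                z.write(full, os.path.relpath(full, project_dir))

# Demo with a throwaway project:
project = tempfile.mkdtemp()
with open(os.path.join(project, "composer.json"), "w") as f:
    json.dump({"name": "acme/my-lib"}, f)
out_zip = os.path.join(tempfile.mkdtemp(), "my-lib-1.2.0.zip")
package_composer_project(project, "1.2.0", out_zip)
```

      Step 4 (publishing) would then be an upload of that zip to your ProGet Composer feed, e.g. with pgutil or an HTTP request.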

      Hope that helps,

      Steve

      posted in Support
      stevedennis
    • RE: Known licenses are shown as unknown

      Hi @frank-benson_4606,

      I looked into this a bit closer now.

      Looking at commons-io-2.14.0.pom, there is no <licenses> element specified. The pom should have one, and it'd be nice if the package authors added it; if you requested that via a pull request or issue on their GitHub, I'm sure they would. In any case, that's why it's not showing in ProGet.
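      For reference, a declared license in a Maven POM looks something like this (the URL shown is just the usual Apache one):

```xml
<licenses>
  <license>
    <name>Apache-2.0</name>
    <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>
  </license>
</licenses>
```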

      This is why you see the unknown license detected, and that means you have to click "Assign License Type to Package" for ProGet to associate the package/license. I assume that you did that on 2.14.0, and selected Apache-2.0.

      By default, that selection only applies to the specific version, and if you wanted it to apply to all versions of commons-io (including future ones not yet published) you'd need to click on the "Apply to all versions".

      If you navigate to SCA > Licenses, and click on Apache-2.0, you can see the assignment to the package under the "Purls" tab. It would show: pkg:maven/commons-io/commons-io@2.14.0 for the version you selected.

      You will need to either do this for all versions, or add an entry to the Package Name tab (i.e. pkg:maven/commons-io/commons-io) under the Apache-2.0 license definition.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet encryption key decryption failure

      Hi @sneh-patel_0294 ,

      What I mean is, in your browser, open multiple tabs -- one for /administration/cluster on each node in the cluster, bypassing the load balancer. All nodes should show "green" for that.

      The one that shows "red" still has the wrong encryption key. Fix the encryption key, restart the services, and reload the tab; it should then work fine.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet encryption key decryption failure

      @sneh-patel_0294 to restart the services, you can do so from the Inedo Hub or the Windows Services (look for INEDOPROGETSVC and INEDOPROGETWEBSVC). If you're still using IIS, make sure to restart the app pool as well

      posted in Support
      stevedennis
    • RE: ProGet encryption key decryption failure

      Hi @sneh-patel_0294 ,

      This message means that the encryption keys across machines are different, which results in exactly the behavior you describe (403s, logouts):
      https://docs.inedo.com/docs/installation/configuration-files

      I know you mentioned you already checked, so there's likely a typo, miscopy, looking at the wrong folder, etc. Note that the folder is %PROGRAMDATA%\Inedo\SharedConfig\ProGet.config, as opposed to C:\ProgramData\Inedo\... - on some machines, the program data folder is stored in a different location.

      I would also make sure to restart the service/web as well. To test, you can try loading that page on all nodes; you should not see "Encryption key decryption failure" when refreshing the nodes.
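      If pasting keys around to compare them feels risky, one way to check is to hash the key value on each node and compare the hashes. This sketch assumes ProGet.config is an XML file with an EncryptionKey element, per the linked docs; the sample values below are synthetic:

```python
import hashlib
import xml.etree.ElementTree as ET

def key_fingerprint(config_xml):
    """Return a short hash of the EncryptionKey so keys can be compared safely."""
    root = ET.fromstring(config_xml)
    key = root.findtext("EncryptionKey")
    if not key:
        raise ValueError("No EncryptionKey element found")
    return hashlib.sha256(key.strip().encode()).hexdigest()[:12]

# Stand-ins for each node's %PROGRAMDATA%\Inedo\SharedConfig\ProGet.config:
node_a = "<InedoAppConfig><EncryptionKey>aaaabbbbccccdddd</EncryptionKey></InedoAppConfig>"
node_b = "<InedoAppConfig><EncryptionKey>aaaabbbbccccdd00</EncryptionKey></InedoAppConfig>"
print(key_fingerprint(node_a) == key_fingerprint(node_b))  # False means one node has a different key
```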

      Hope that helps,
      Steve

      posted in Support
      stevedennis
    • RE: Known licenses are shown as unknown

      Hi @frank-benson_4606 ,

      This appears to be a known issue (PG-3153) that causes certain URL-based licenses to not be detected; it will be fixed in 2025.15, releasing this Friday.

      If you're using Docker, you can try upgrading to inedo/proget:25.0.15-ci.4, which should have that fix in it.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet 2025.10: License Update API Issues

      @jw good call, we'll get this fixed in this week's release via PG-3161

      posted in Support
      stevedennis
    • RE: 'Usage & Statistics' info missing

      Hi @k-lis_1147,

      Sorry for the slow reply; we did not get a chance to investigate in the last release, but it was on the list for this week. That said, it turned out to be an easy fix (we didn't anticipate it being a copy/paste fix) -- and we'll get it in via PG-3160 in this week's maintenance release.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: How to create a license attribution report?

      Hi @frank-benson_4606,

      Whoops, it looks like that was kept in the documentation by mistake; I just removed it now.

      We had planned that feature way back in ProGet 2023, but it never was implemented. That feature -- as well as some of the more advanced license compliance ideas we had -- have since left our roadmap due to a total lack of interest from end-users.

      The main reason for the lack of interest is that pgutil builds audit lists all packages and licenses, and most users found that to be sufficient. They just hand that list to the legal team, who creates the attribution document. So perhaps that will suffice in your use-case as well.

      Let us know if not, always open to hearing more.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Known licenses are shown as unknown

      Hi @frank-benson_4606,

      ProGet's license detection generally requires that a package is cached or local to ProGet. When you visit the package page, a request is made to download the metadata from the remote connector, which is why you can see the license in that case.

      That being said:

      • you can enable OSS Metadata Caching, which will perform these requests on remote packages -- but it's obviously a performance hit
      • there is a known bug (fixed in 2025.15, releasing Friday) that causes certain URL-based licenses to not be detected (PG-3153)

      Hope that helps with troubleshooting. A prerelease version of 2025.15 is available should you be interested.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Reporting and SCA

      Hi @rick-kramer_9238,

      That vulnerability is in our database as PGV-2228003, and it shows up when I view that package:

      (screenshot: the package page, with PGV-2228003 listed)

      If you can provide more details about what you mean by "the report is telling us no vulnerabilities are detected", I can investigate further.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Remote NuGet package cached after unlisting

      Hi @yaakov-smith_7984 ,

      This behavior is expected and by design. "Deprecated" and "Unlisted" are server-side metadata (i.e. stored in the remote repository, not the package itself), and once a package is brought into a different server (i.e. ProGet), it's "disconnected" from the original server.

      That being said, there is a feature in ProGet that can routinely "sync" this server-side metadata:

      • https://docs.inedo.com/docs/proget/sca/howto-deprecated-package-alerts
      • https://docs.inedo.com/docs/proget/sca/policies#oss-metadata-updating-caching

      This feature obviously comes with some performance costs, though you'd really have to enable it to see if that has any impact on operation.

      Another approach is to use a retention policy that deletes cached packages older than 90 days.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Upload to Debian Feed fails with "Package does not have a control file."

      Hi @frei_zs,

      Based on the fact that unpackaging/repackaging it works, there's definitely something "wrong" with the original package file.

      Debian uses a tarfile format, and there are several "buggy" tarfile writers that don't get the format quite right. If I remember correctly, some ancient versions of dpkg wrote these files incorrectly. Some tarfile readers account for these errors while others (perhaps including ours) do not.

      We may be able to attach the file to a debugger and give more details, but if this is a one-off or rare circumstance, then I would just repackage it and not worry about it.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet 2025.14: Vulnerability Database Updater causes duplicates in PackageNameIds

      Hi @jw ,

      Did you try this on a new instance, or did you discover this on your (older) instance?

      This was a known issue through several versions of ProGet 2025, and it mostly impacts SCA, as you noticed. However, the vuln updater has since been fixed, so it shouldn't keep happening.

      The "feed reindex" function can also merge/fix these duplicate names. They should be detected during a "feed integrity" check, and show as a "warning".

      Thanks,

      posted in Support
      stevedennis
    • RE: Question on Single-Instance Performance (Docker Deployment with Embedded PostgreSQL)

      Hi @koksime-yap_5909 ,

      I'm afraid we can't provide much clearer guidance than that, as there are so many factors involved that make predicting performance basically impossible. For example, the feed types you're using, your CI server configuration, how often developers are rebuilding, etc.

      The article you found is actually what we send users who experience symptoms of server overload, to help understand where it comes from and how to prevent it. As the article mentions, the biggest bottleneck is network traffic during peak traffic - there's only so much that a single network card can handle, and scaling CPU/RAM doesn't really help.

      This is where load-balancing comes in. The main downside is complexity/cost, which is why most customers start with a single instance. It can take quite a while for a tool like ProGet to be fully onboarded across teams, so performance problems likely won't happen at first.

      Hope that helps, let us know if you have any other questions!

      Thanks,

      Steve

      posted in Support
      stevedennis
    • RE: Clarification on Retention Rules and Recently Created Files Being Deleted

      Hi @koksime-yap_5909 ,

      Good catch; that is most definitely a bug. I just checked, and it's isolated to assets - packages and Docker images work as expected.

      This will be fixed in the upcoming maintenance release via PG-3150; it's shipping Friday, but we can provide a pre-release if you're interested in testing earlier.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Clarification on Retention Rules and Recently Created Files Being Deleted

      Hi @koksime-yap_5909,

      In the event that the artifact has not been downloaded (i.e. the last download date is "null"), the publish date is considered instead. So if you set "90 days", an artifact that hasn't been downloaded will be deleted, at the earliest, 90 days from publication.
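      In other words, the rule is roughly this (a sketch of my understanding of the behavior, not ProGet's actual implementation):

```python
from datetime import datetime, timedelta
from typing import Optional

def is_retention_eligible(publish_date: datetime,
                          last_download: Optional[datetime],
                          days: int,
                          now: datetime) -> bool:
    """Eligible for deletion once the last download (or, if never
    downloaded, the publish date) is older than the configured window."""
    reference = last_download or publish_date  # never downloaded -> fall back to publish date
    return now - reference > timedelta(days=days)

now = datetime(2025, 6, 1)
published = datetime(2025, 1, 1)
print(is_retention_eligible(published, None, 90, now))                       # never downloaded, >90 days old
print(is_retention_eligible(published, datetime(2025, 5, 20), 90, now))      # downloaded recently
```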

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Lost Administrator Rights — How to Restore Admin Access?

      Hi @koksime-yap_5909,

      The command will recreate the user, restore administrative privileges, etc. It's safe to run - and you'll ultimately be left with an Admin/Admin user that you can log in as.

      On ProGet 2025, the command is proget or proget.exe. We should update the docs for sure.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Rocky Linux rpm feed not working

      Hi @Sigve-opedal_6476 ,

      There are some known issues that we intend to fix with PG-3144 in the next maintenance release (scheduled for Friday). This will likely be resolved then.

      The inedo/proget:25.0.14-ci.10 container should have these changes in it, if you'd like to try it out sooner.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: inedoxpack error: No extensions were found...

      @yakobseval_2238 thanks for letting us know, I just updated it!

      posted in Support
      stevedennis
    • RE: 'Usage & Statistics' info missing

      Hi @k-lis_1147,

      Based on what you described, it should show up.

      Can you confirm what feed type you're using, and whether or not you're using PostgreSQL (the default for ProGet 2025)?

      I just discovered a bug (PG-3145) that would impact PostgreSQL (probably all feeds) and certain feed types on SQL Server (Maven), causing that information not to display on that page.

      Easy fix, but I just want to double-check.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Lost Administrator Rights — How to Restore Admin Access?

      Hi @koksime-yap_5909 ,

      If you ever get "locked out" of an Inedo product, whether due to misconfiguration or a lost password, you can restore the default Admin/Admin account and re-enable Built-in User Sign-on by running ProGet.exe resetadminpassword.

      Here's more information on this procedure:
      https://docs.inedo.com/docs/installation/security-ldap-active-directory/various-ldap-troubleshooting

      Thanks,

      Steve

      posted in Support
      stevedennis
    • RE: Mark private Nuget/Npm Packages as Vulnerable?

      Hi @tayl7973_1825 ,

      Thanks for the feedback; this is all a relatively new space, so we're in the process of building best practices / advice as well as tools to help teams solve these problems.

      > "Right now, based on your suggestion, it sounds like the workflow would require us to manually identify which applications depend on a vulnerable library, notify each owning team..."

      You are correct - the SCA Builds & Projects functionality is designed to "provide that link" between specific package versions and specific builds of applications. The builds are a moving target, as they may or may not be active/deployed.

      The "Project" in ProGet is not intended to be the "source of truth" about the project itself, but to be sort of sync'd with the truth (e.g. like an Application in BuildMaster). That's why there's a "stages timeline" for builds in ProGet.

      > "...hope it fits within their priorities, and then track remediation through individual tickets."

      Our advice here is to think of it more like, "advise them of the identified security risk and unavailability of the impacted library they are using". Ultimately it should be up to the team (their product owner) to evaluate the risk you identified and mitigate it. For example, TeamLunchDecider1000 can probably live with a security risk, but let the team decide.

      Once you've removed the library from ProGet, they can't use it anymore and it's "no longer your problem" to worry about or track through tickets.

      > "Ideally, we were hoping our package management system — since it already governs distribution and security controls — could act as that “one stop shop” to track and visualize which applications still rely on a vulnerable version alongside its assigned severity rating."

      ProGet already provides visibility into consumers through SCA, and you can already see how OSS Vulnerabilities impact builds.

      HOWEVER, our core advice here is: do not try to establish your own in-house "vulnerability database" for the libraries your organization authors. Even large orgs (2000+ developers) won't do that.

      Instead, it's a simple binary decision: PULL or KEEP the library. If you PULL, then notify consumers it's unavailable going forward and let them decide how to mitigate.

      That approach is superior to OSS Vulnerability workflows, but it's obviously not possible for OSS library authors to do.

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: Mark private Nuget/Npm Packages as Vulnerable?

      Hi @tayl7973_1825 ,

      Thanks for clarifying it. Based on that, I would say that "Vulnerabilities" are most definitely the wrong tool for the job. You can certainly "hammer in a screw" but there's a better way to do it - and we don't make "screw hammers" at Inedo 😉

      We're working on best practices / guidance on how to build security policies around these topics, but I'll try to give some tips now.

      What you're describing isn't a vulnerability per se, but a SAST Issue: a potential weakness in code detected by a static analysis security tool. Most of these are false positives that present no real security risk, but some are real.

      If you discover a SAST Issue in one of your libraries, then you should use the following process:

      1. Evaluate if it's a false positive or not
      2. Unpublish the library internally if there's a security risk
      3. Enumerate the consumers (i.e. applications in flight or deployed to production)
      4. Evaluate the security risk (low, high), based on the consumers/usage
      5. Notify the application teams to upgrade the library as appropriate

      Note how this process is fundamentally different than OSS packages / vulnerability workflows:

      • you can unpublish/block packages from your repository
      • you know which applications are consuming your packages
      • you know which teams maintain which applications
      • you can work with those teams to assess the risks

      Bottom line: if a package causes a real security risk, then unpublish it and fix the consuming applications as appropriate. Otherwise, don't.

      There's really no middle ground or room in this process for "Vulnerabilities" - and trying to curate an internal "vulnerability database" is just going to make things less secure in your organization.

      That's a theme in our upcoming content, but the general idea is when you treat all issues/vulnerabilities as security risks, then it's impossible to focus on the ones that are actual risks -- and it's as meaningless as saying "everything is a top priority".

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Mark private Nuget/Npm Packages as Vulnerable?

      Hi @tayl7973_1825 ,

      This is not possible, nor is it a workflow we'd recommend supporting. Vulnerabilities have a very specific meaning / use case -- third-party discoveries in open-source packages that may impact your code (but probably won't) -- and it's not a good idea to "abuse" them for other purposes.

      Deprecation is one solution, but a better one would be to use SCA and monitor how that package is being used, so you can understand the impact on library consumers:
      https://docs.inedo.com/docs/proget/sca/builds

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: inedoxpack error: No extensions were found...

      Hi @yakobseval_2238 ,

      Can you let us know the commands/arguments you're using?

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Apply license key inside container

      Hi @jlarionov_2030,

      I haven't tested or tried it, but I can't help but wonder if the API is responding with some kind of "license required" error and blocking the setting.

      I suppose we could investigate and try to resolve the error, but automated setup with a license key isn't such a common requirement... if this is not something you will do often, perhaps it's not worth the effort.

      Let us know your thoughts.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Apply license key inside container

      Hi @jlarionov_2030 ,

      As of ProGet 2023 (or maybe earlier?), license keys are no longer requested/entered at installation time, but in the software itself. This only matters on new instances.

      You can use pgutil settings to set a license key if you'd like.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Not able to upload .spd files to proget assets

      Hi @parthu-reddy,

      Thanks for discovering/confirming that; unfortunately we're not able to reproduce this issue, as the multi-part / chunked uploads already take multiple servers into account.

      • Chunked upload sessions are persisted in the shared database (ChunkedUploads table)
      • Bytes are appended to a file stored in shared store

      Would you be able to dig into the request patterns a little more? I suspect there's "something" configured on the load-balancer that's "doing something weird" with these ranged requests.

      The Multipart Upload API explains what's happening behind the scenes, and you may find that using pgutil assets upload is easier to troubleshoot.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet 2025.10: License Update API Issues

      Hi @jw ,

      It definitely looks like there's some "drift" between the documentation and the behavior. I already see some references to "allowedFeeds" in there, which hasn't even been a thing since policies were introduced. And you're right, you can't update code/title, which aligns with the pgutil behavior as well.

      We'd rather have "no documentation" than wrong documentation, any suggestions on what to delete from here? https://github.com/inedo/inedo-docs/blob/master/Content/proget/api/licenses/update.md

      Feel free to submit a PR if you've got a clear idea.

      In general, we want to make sure the pgutil docs are accurate (those are very easy to test), and we figure... someone can just look at the pgutil code to learn the HTTP endpoint or library.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: npm connector returns 400

      Hi @udi-moshe_0021 ,

      My guess is that your proxy server is blocking certain things or having issues with redirects; you'd really have to monitor the back-and-forth traffic to see what's going on.

      As you can see, there are a lot of redirects going on, to URLs/domains that you might not expect and that can be hard to predict.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Promote Package if Build is promoted to new stage

      Hi @it_9582,

      First and foremost, we don't recommend the "package promotion" feature as a means to indicate which "stage" (i.e. tested quality) a package is in relative to a CI/CD pipeline.

      Instead, repackaging should be used:
      https://docs.inedo.com/docs/proget/packages/repackaging

      Having multiple feeds is fine; we do that for Products and PreReleaseProducts on proget.inedo.com, but that's to make it "harder" for someone to accidentally use a prerelease version. Otherwise, you can just use one feed and have retention policies clean up the "-ci" builds.

      As for having the "build promotion" feature in ProGet be used as a workflow engine (i.e. to trigger actions upon promotion), I don't think we would consider that. At the most, we would do a webhook of sorts... though it doesn't make a ton of sense to be honest.

      The reason is that ProGet isn't intended to be the "source of truth" for build status - the idea is that something like a pipeline in BuildMaster would update the statuses in ProGet.

      The main benefit to having this status in ProGet is retention of builds/SBOMs.

      Hope that helps,
      Steve

      posted in Support
      stevedennis
    • RE: Universal Package has no license field in metadata

      Hi @it_9582 ,

      You can specify licensing information in the description, or in a custom metadata field like _licenseCode or something. It's not something that ProGet "reads" or "understands" - mostly because we didn't imagine users would package and upload third-party content with unwanted licenses.

      If that's a use case for your mid/long-term implementation, we could definitely explore working together and properly adding/supporting it as a feature. It's not something we'd want to just "quickly throw in" from a forum post :)

      Definitely chat with your account manager / point of contact once you're closer to or past the "purchase" side of things and we can make it happen. I've forwarded this message to him as well.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Does ProGet support Cloud Object Storage in Oracle Cloud Infrastructure's Object Storage

      Hi @mickey-durden_1899 ,

      Just to double check, can you try resetting to disk-based storage, reconfiguring cloud storage, and making sure that DisablePayloadSigning is checked? Then reopen the settings to verify that DisablePayloadSigning is still checked.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Otter - InvalidRunspaceStateException on WinRM servers

      Hi @alexvanini_5999 ,

      I would try using the Inedo Agent instead; if you are getting this error, it almost certainly means that there is some kind of security/hardening/account setting that is blocking WinRM, which is the underlying technology that PS Remoting and the PowerShell-based agent use.

      In this state, it's a real pain to get working - and the Inedo Agent is much more stable anyway. It will not be "randomly blocked" by a new GPO or a Patch Tuesday bug, as we've seen a lot with WMI.

      Otherwise, you'll need to scour the web for obscure settings that may have been applied to the server. You may see information logged on the target machine under Windows Event Log under Windows Logs → Application or System related to WinRM.

      It's possible that the domain account lacks needed rights, even though it's a local admin. Sometimes subtle rights (like SeRemoteInteractiveLogonRight, etc.) can block initialization.

      Good luck, let us know what you find!

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet 2025.10: License Update API Issues

      Hi @jw ,

      I'm not able to reproduce any issues on my end; I'm not entirely sure how you're testing, but let me share with you the code on the server side in ProGet:

          private static async Task UpdateLicenseAsync(AhHttpContext context, LoggedResponseStream output, WebApiContext apiContext)
          {
              EnsureMethod(context, "POST");
              EnsureCanManageLicenses(apiContext);
      
              var input = await JsonSerializer.DeserializeAsync(context.Request.InputStream, LicenseApiJsonContext.Default.LicenseInfo, context.CancellationToken)
                  ?? throw new HttpException(400, "Expected license object.");
      
              var license = await DB.Licenses_GetLicenseAsync(External_Id: input.Code)
                  ?? throw new HttpException(404, "License not found.");
      
              List<int>? nameIds = null;
              if (input.PackageNames?.Count > 0)
              {
                  nameIds = [];
                  foreach (var n in input.PackageNames)
                  {
                      if (!PackageNameId.TryParse(n, out var nameId))
                          throw new HttpException(400, $"Invalid package name: {n}");
      
                      nameIds.Add((await nameId.EnsureDatabaseIdAsync()).Id!.Value);
                  }
              }
      
              List<int>? versionIds = null;
              if (input.Purls?.Count > 0)
              {
                  versionIds = [];
                  foreach (var v in input.Purls)
                  {
                      if (!PUrl.TryParse(v, out var purl))
                          throw new HttpException(400, $"Invalid purl: {v}");
      
                      versionIds.Add((await ((PackageVersionId)purl).EnsureDatabaseIdAsync()).Id!.Value);
                  }
              }
      
              await DB.Licenses_UpdateLicenseDataAsync(
                  License_Id: license.License_Id,
                  PackageVersionIds_Csv: versionIds?.Count > 0 ? string.Join(',', versionIds) : null,
                  PackageNameIds_Csv: nameIds?.Count > 0 ? string.Join(',', nameIds) : null,
                  SpdxIds_Csv: input.Spdx?.Count > 0 ? string.Join(',', input.Spdx) : null,
                  Urls_Csv: input.Urls?.Count > 0 ? string.Join(',', input.Urls) : null
              );
          }
      

      I'm not sure if that's helpful, but if not... can you put together a specific reproduction case?

      Also note that the license data in the UI is cached, but the cache is invalidated when you visit the /licenses page (among others).
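For reference, given the handler above, the request body is just a JSON "license" object. The exact endpoint path and property casing come from the serializer context, so treat this sketch as an assumption rather than official documentation:

```python
import json

# Hypothetical payload for the license-update endpoint above; field names
# mirror what the handler deserializes: the license code is looked up first,
# then package names / purls / spdx / urls are validated and applied.
payload = {
    "code": "MIT",                                 # existing license code (404 if not found)
    "packageNames": ["pkg:npm/left-pad"],          # assumed package-name format (400 if unparseable)
    "purls": ["pkg:npm/left-pad@1.3.0"],           # specific package versions
    "spdx": ["MIT"],
    "urls": ["https://opensource.org/licenses/MIT"],
}

# POST this string as the request body with your API key header.
body = json.dumps(payload)
```

If you're seeing a 400, it's almost certainly one of the name/purl parse checks above; a 404 means the code itself wasn't found.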

      Thanks,
      Steve

      posted in Support
      stevedennis
      stevedennis
    • RE: How to create a Custom OSS provider

      Hi @fabrice-mejean ,

      I'm not sure what you mean by "Custom OSS provider", can you clarify?

      Thanks,
      Steve

      posted in Support
      stevedennis
      stevedennis
    • RE: Search feed(s) for version string

      @aristo_4359 oh I see! The "search" function does not work by version in that case

      posted in Support
      stevedennis
      stevedennis
    • RE: Search across all feeds for a specific file hash?

      Hi @rob-leadbeater_2457,

      I'm afraid a "search by file hash" isn't supported, but you could relatively easily write a script that iterates through the feeds using pgutil. Or you could just search in the database as well (FeedPackageVersions_Extended).
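The hash-matching part of such a script is trivial; the only real work is enumerating and downloading the packages per feed with pgutil (whose exact commands/flags you'd want to double-check against the pgutil docs). A minimal sketch of the matching side, assuming you've already fetched the file contents:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of a package file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def find_matches(files: dict[str, bytes], target_hash: str) -> list[str]:
    """Return the names of files whose content hashes to target_hash."""
    target = target_hash.lower()
    return [name for name, data in files.items() if sha256_of(data) == target]
```

Swap in whichever hash algorithm you're actually searching for; the loop shape stays the same.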

      Thanks,
      Steve

      posted in Support
      stevedennis
      stevedennis
    • RE: Search feed(s) for version string

      Hi @aristo_4359,

      You can see "all versions" of a package in the UI by clicking the package name in the breadcrumb or by selecting "All Versions" in the multi-button dropdown.

      Thanks,
      Steve

      posted in Support
      stevedennis
      stevedennis
    • RE: Get package license with ProGetClient

      Hi @pmsensi ,

      Can you take a look at this thread?

      https://forums.inedo.com/topic/5493/request-for-creation-of-api-for-package-auditing-before-dependency-restoration/7

      I believe that new API proposal ( pgutil packages metadata) would contain that information -- please share your thoughts in that thread so we can keep it in one place.

      Thanks,
      Steve

      posted in Support
      stevedennis
      stevedennis
    • RE: RPM feed can't be browsed

      Hi @wechselberg-nisboerge_3629 ,

      Given how you uploaded the file, the only scenario that I could see this happening is if the file on disk is somehow corrupted. For example, if you were to locate one of the .rpm files on disk and change a few bytes with a hex editor, I would expect this exact error to occur.

      A feed reindex would never fix this, and obviously files cannot "heal themselves". However, this is exactly how failing hardware behaves, so I would look into that.
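As a quick way to test that theory without a hex editor, you could check each .rpm on disk for the standard 4-byte lead magic (ed ab ee db) that every valid rpm file starts with. A rough sketch:

```python
from pathlib import Path

RPM_LEAD_MAGIC = bytes([0xED, 0xAB, 0xEE, 0xDB])  # standard rpm lead magic

def looks_like_rpm(data: bytes) -> bool:
    """True if the bytes begin with the rpm lead magic."""
    return data[:4] == RPM_LEAD_MAGIC

def scan_for_corrupt_rpms(root: str) -> list[str]:
    """Return paths under root whose .rpm files fail the magic check."""
    return [str(p) for p in Path(root).rglob("*.rpm")
            if not looks_like_rpm(p.read_bytes())]
```

Note this only catches damage in the first few bytes; a full digest comparison (or `rpm -K` against the files) would catch corruption deeper in the payload.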

      Thanks,
      Steve

      posted in Support
      stevedennis
      stevedennis
    • RE: RPM feed can't be browsed

      Hi @wechselberg-nisboerge_3629 ,

      This error is the result of a "bad rpm package" (basically one that is compressed using a method we don't support) getting accepted into ProGet. It should have been rejected on load -- if ProGet cannot open a package file due to unsupported compression, then I'm not sure how it would have indexed the package.

      You should be able to find which package it is by going to the "packages" tab at the top of the UI; when you click on the rpm package, it will give a similar error.

      How did you add this package? What package is it? This is an error we can definitely fix if we can figure out how you got the unsupported package in there.

      Thanks,
      Steve

      posted in Support
      stevedennis
      stevedennis
    • RE: Various excpetions when browsing the web interface

      Hi @wechselberg-nisboerge_3629 ,

      These specific errors would have no impact on performance or feed loading, nor would they cause ProGet to "break down" in any manner. And rebooting would most definitely not help, since they stem from bad/corrupt data.

      One possibility is that you have bad hardware - that causes peculiar, sporadic errors just like these that cannot be reproduced.

      I can only imagine how frustrating this is, but your experience is atypical and without reproduction cases we really don't know how to help. I would focus on trying to reproduce -- if it's indeed "bad data" that you are uploading, it would happen every single time.

      Thanks,
      Steve

      posted in Support
      stevedennis
      stevedennis
    • RE: /usr/local/proget/service/ProGet.Service missing from container image

      @albert-pender_6390 great news!

      And thanks for the heads up, I just updated the docs

      posted in Support
      stevedennis
      stevedennis
    • RE: /usr/local/proget/service/ProGet.Service missing from container image

      Hi @albert-pender_6390 ,

      This was renamed to proget, so it would just be:

      /usr/local/proget/service/proget upgradedb
      

      Thanks,
      Steve

      posted in Support
      stevedennis
      stevedennis
    • RE: Various excpetions when browsing the web interface

      Hi @wechselberg-nisboerge_3629 ,

      These messages are all unrelated and seem to stem mostly from a combination of bad/corrupt input data. Honestly I've never seen these errors before, but that's what they all sound like to me.

      Are these impacting any actual usage, or are you simply "seeing" them in the Diagnostic Center?

      If they are impacting usage, please put together a reproduction case so that you can consistently recreate the problem and we can study it.

      Otherwise, if they are just "showing up", then you can disregard them. The Diagnostic Center is not intended for "proactive health" checking, just for troubleshooting usage errors. Messages logged there may not be problems at all, especially if users are doing things like uploading bad data.

      Thanks,
      Steve

      posted in Support
      stevedennis
      stevedennis