    Inedo Community Forums

    Posts made by atripp

    • RE: Noncompliant packages can still be downloaded

      Hi @daniel-mccoy_4395,

      Based on what you've described, it sounds like ProGet is indeed blocking downloads; this is visible in the ProGet Web UI with a "Download Blocked" indicator. If you try accessing the download URL, you will in fact get a 400 error.

      However, NuGet/Visual Studio aggressively cache packages, which means they aren't even attempting to download them. If you clear all the NuGet caches (system, user, http, project, etc.), then it should attempt to download them again.

      That said, as of ProGet 2026, we no longer recommend blocking downloads. This caching behavior is one reason, but there are others.

      Here's a work-in-progress article that discusses our new guidance:
      https://guides.inedo.com/vulnerability-management/containment/

      Cheers,
      Alana

      posted in Support
    • RE: See all versions of a package regardless of feed and see feed status on that view for each version

      Hi @carl-westman_8110 ,

      Not really... Feeds and Views are somewhat different concepts, and we don't really encourage using presence in a particular feed as a means to identify whether something has been released. Instead, we'd encourage using Pre-Release Packages & Repackaging, which make it obvious from simply looking at the version (i.e. 1.1.1-rc.7 indicates not yet released).

      Thanks,
      Alana

      posted in Support
    • RE: ProGet Migration

      Hi @certificatemanager_4002 ,

      ProGet is licensed per instance (i.e. installation), so you will need a separate license if you wish to maintain production and non-production instances of ProGet. See the official Licenses for Non-production / Testing Environments for more details.

      For something like a one-off cloud migration, using a Trial license (which you can get from My.Inedo.com) is fine.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet Migration

      Hi @certificatemanager_4002 ,

      Just to clarify the SQL Server support situation. You wrote:

      We are planning to upgrade to ProGet 25.x, as we understand that Microsoft SQL Server support will be not supported by the end of the year.

      We are currently planning to discontinue SQL Server support in ProGet 2027. It will continue to work in ProGet 2025 and ProGet 2026 regardless of when you use the software.

      To answer your questions...

      1. You can continue using SQL Server in ProGet 2025
      2. Please see Configuring High Availability & Load Balancing, which details the implementation
      3. ProGet for Linux is supported in a Docker environment; many users will deploy using Kubernetes, but we do not provide charts or templates... only a Docker Installation Guide that you will need to "translate" into pods, etc
      4. ProGet can handle that traffic, though many factors will determine how much server capacity is required; I would start with a two-node cluster and evaluate/consider adding more if needed

      Thanks,
      Alana

      posted in Support
    • RE: NPM Incorrect Handling of min-release-age

      Hi @Ashley ,

      Good news -- this will be fixed via PG-3265 in the upcoming maintenance release (next Friday).

      In case you're curious, the bug was that we were comparing packagePublished.AddDays(recentlyPublishedDays.Value) > DateTime.UtcNow.Date, which includes the time portion on the left side but not the right side (which is truncated to 12:00 AM).

      Just changing to packagePublished.Date.AddDays(recentlyPublishedDays.Value) > DateTime.UtcNow.Date does the trick, and it works for both Aged and Recently Published.
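
      To make the partial-day behavior concrete, here's a minimal standalone sketch of the before/after comparison (an illustration based on the description above, not ProGet's actual code):

          using System;

          int recentlyPublishedDays = 7;
          // Suppose the package was published exactly 7 days ago, at the current time of day.
          DateTime packagePublished = DateTime.UtcNow.AddDays(-recentlyPublishedDays);

          // Buggy: the left side keeps its time-of-day while the right side is truncated
          // to midnight, so the package still counts as "recently published" for most of the day.
          bool buggy = packagePublished.AddDays(recentlyPublishedDays) > DateTime.UtcNow.Date;

          // Fixed: truncate both sides to whole dates before comparing.
          bool fixedCheck = packagePublished.Date.AddDays(recentlyPublishedDays) > DateTime.UtcNow.Date;

          Console.WriteLine($"buggy={buggy}, fixed={fixedCheck}"); // buggy=True, fixed=False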

      cheers,
      Alana

      posted in Support
    • RE: ProGet: Feed Signing Key

      Hi @stno_9153 ,

      Thanks for clarifying; that's not possible with ProGet. A Debian feed is not designed to be a "read-only mirror", but instead a repository where you can add/filter/update packages. That's why ProGet must generate/sign the (In)Release files.

      I'm afraid we have no plans to support a read-only mirror use case in the foreseeable future.

      Cheers,
      Alana

      posted in Support
    • RE: ProGet: Feed Signing Key

      Hi @stno_9153 ,

      (In)Release files are signed using a private/public key scheme, so unless you were somehow able to get a copy of Ubuntu's private signing keys and upload them to ProGet... it is not possible to sign those files using the original Ubuntu key.

      Cheers,
      Alana

      posted in Support
    • RE: NPM Incorrect Handling of min-release-age

      Thanks @Ashley, that's exactly what I was thinking.

      I haven't tried reproducing this yet, but I've got all the steps now! And at that point, I'll have a debugger and all the code in front of me, so it should be an easy fix. It's probably related to UTC/local time; I don't think we've ever tested it "by the hour" like that :)

      Anyway, stay tuned; we'll get it fixed pretty soon.

      posted in Support
    • RE: ProGet: Debian feed minor performance problem

      Hi @stno_9153 ,

      Oh yeah, that'll make a HUGE difference for public repositories. Otherwise it'll probably not work at all :)

      Anyway, glad it's working now!

      Thanks,
      Alana

      posted in Support
    • RE: NPM Incorrect Handling of min-release-age

      Hi @Ashley ,

      To override the publish date, first Pull the package to ProGet so that it's no longer a cached package. Once you do that, you will see a "Set Package Status" option (you may need to refresh the page). On that modal dialog, select "Override Metadata..." and enter the date.

      [screenshot: Set Package Status dialog with the "Override Metadata..." option]

      That's what we do to test these rules; note you can delete the package and re-download it to cache it again.

      Let me know if you spot anything off; it seemed to work for me, but I might be looking at the wrong things.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet: Debian feed minor performance problem

      Hi @stno_9153,

      If the error happened during an apt update of hundreds of packages, then it was probably a case of server overload. Make sure to set a lower concurrent rate, which you can do under Admin > HTTPS Settings > edit. 100 is the recommendation, and will be the default in ProGet 2026.

      Thanks,
      Alana

      posted in Support
    • RE: Deploying a Docker Image via Kubernetes with a yaml file

      Hi @brandon_owensby_2976,

      Argo CD is free / open source and no license is required. You'd be better off learning that than trying to do Kubernetes another way. FYI, there's also Kargo, which is a "wrapper" that sits on top of Argo CD and adds a promotion workflow outside of typical GitOps (pull requests, I guess?).

      To be honest, I really don't know if the Kubernetes extension even works; it was originally intended for Otter, to create a "Desired state" and offer an alternative to Git-based approaches. But there's just no demand, and GitOps is just the Kubernetes standard. We haven't tested it in years.

      We do not plan on migrating it to the next SDK version. It's just a light wrapper around kubectl, which has probably changed over the years. If you really want to mess with Kubernetes outside of Argo CD, I would just run kubectl apply/replace directly.

      Good luck!!

      posted in Support
    • RE: NPM Incorrect Handling of min-release-age

      Hi @ashleycanham ,

      The min-release-age setting in npm and ProGet's "Recently published" are unrelated. One controls how the client (npm) behaves; the other controls how the server (ProGet) behaves.

      I'm not an expert on min-release-age on the client (npm) side, but I believe it changes the way the dependency resolution algorithm works. In turn, that means npm will request different packages from the server. That's why changing that value will yield different server results.

      On the server (ProGet) side, ProGet effectively blocks package downloads by checking the publish date (which you can see on the history page, and even set/change on the Set Package Status page) against the current server date/time. This is indicated by "Download Blocked" in your screenshot, but more precisely it has to do with "package compliance".

      There's a lot involved with that, but if you Reanalyze the Package, you can get detailed logs of what's making the package Noncompliant. Specifically, in those logs, you should see something like this:

      Policy "{policy.Name}" considers recently published ({recentlyPublishedDays} days) {rule}
      Publish date of {package.Published.Value.Date:d} is considered recently published.
      

      That date will be UTC-based (the UI typically displays local time, e.g. BST), but you'll get an idea of how it works.

      Anyway, that's where I would start. Considering timezones, rounding, or partial days, you may find it simply easiest to set min-release-age=8 so that npm isn't requesting a package that's 6.99999 days old, or something weird like that.

      One last thing worth mentioning: we are no longer recommending blocking noncompliant packages in most cases. Instead, pgutil builds scan can be used to "break builds" and give much clearer output, so that developers don't have to chase down npm error logs.

      Thanks,
      Alana

      posted in Support
    • RE: ProGet: Debian feed minor performance problem

      Hi @stno_9153 ,

      If you're getting an error downloading a .deb file, it wouldn't be related to the feed/connector indexes (i.e. those In/Release files).

      When you request a .deb file, ProGet will first check if the file is stored (cached) locally. If so, it will send that file. Otherwise, it will "forward" the request to the connector and stream the file to you while saving it to disk, so that it's cached for next time.

      A timeout is typically related to network or hardware errors. The first thing I would work on is reproducing and isolating where the error is occurring. You can delete cached packages from the feed using the UI, and also download files that way.

      I would just use curl to test downloads.

      Let us know what you find!

      Thanks,
      Alana

      posted in Support
    • RE: Alpine/APK-based container images show no vulnerabilities despite CVEs existing in PGVD

      Hi @kien-buit_2449 ,

      Thanks for sharing the details. I was able to confirm this is some kind of bug (data problem?) in ProGet. It appears to be in the datafile that's downloaded/imported into ProGet, though I'm not sure.

      Stay tuned, and we'll let you know once a fix is ready.

      Thanks,
      Alana

      posted in Support
    • RE: SBOM Dependency Tree is lost when importing and exporting

      Hi @christian-georg_5533 ,

      Thanks for sharing the additional context, that makes sense.

      You are correct -- ProGet is a package repository with SCA as a value-added feature. Most of our users create SBOMs because they are required to by regulations, but don't find much use for them outside of that :)

      Of course we're always interested in expanding features, but we aren't really striving for the "central SBOM Repository" use case right now. We'll see if there's more demand down the line; feel free to share more information (other products, tools) since it's always good to think about future versions of the product!

      Cheers,
      Alana

      posted in Support
    • RE: SBOM Dependency Tree is lost when importing and exporting

      Hi @christian-georg_5533 ,

      Thanks for sharing this; this behavior is expected.

      ProGet is not a "SBOM Document Repository" (e.g. like Dependency Track), but instead models Projects & Builds for Software Composition Analysis (SCA). A Build is comprised of Packages (which should be stored in Feeds in ProGet), and "importing an SBOM document" is one way to create a build/package dataset.

      Note that a build in ProGet will often be comprised of multiple SBOM documents, especially for web applications where, for example, npm + .NET are both used.

      ProGet can "export" a build as an SBOM, and some of the information from the imported document will be used. However, our SCA model does not model a dependency tree, so it's not possible for this kind of information to be output or preserved.

      That could be something we consider as a feature request, but we'd need to start with the "SCA model" (i.e. Builds + Packages) first, and try to understand why modeling a dependency tree relationship is beneficial.

      The main idea I can think of is to "reduce noise from vulnerabilities", but that's a core feature of ProGet 2026 (upcoming!) so I'd check that out first, and see if it makes sense still.

      Thanks,
      Alana

      posted in Support
    • RE: Increased Incorrect Classification of Security Vulnerabilities

      Hi @geraldizo_0690 ,

      I think the best way for us to proceed with this investigation is to get a copy of your database backup. And as a bonus, we'll validate your database to make sure the upgrade to ProGet 2026 and the new vulnerability management features work nicely :)

      I created a secure upload link, which you can access in this ticket that I've created for you: https://my.inedo.com/tickets/view?ticketNumber=EDO-12790

      Just let us know once you've uploaded the BAK file, and we'll take a look and figure it out from there.

      Thanks,
      Alana

      posted in Support
    • RE: Unhandled exception in execution #xxx: 42702: column reference "DatabasePath_Text" is ambiguous

      Hi @cole-brand_2889 ,

      In general, that error message means some kind of code problem -- like using "DatabasePath_Text" in a multi-table join without specifying which table/alias it belongs to.

      However, given the queries (see the code below), I don't see the problem. I haven't seen the error on PostgreSQL... so that must mean it's Aurora PostgreSQL specific?

      According to ChatGPT, "Aurora PostgreSQL is stricter about recordset functions and wants a relation alias before the column definition list," but who knows if that's true. About the only way to test this theory is to modify the function code in your database. I've pasted it below, and you should be able to just run that to "edit" the code.

      It'll get updated during any normal upgrade/downgrade, so no real worry.

      The first change suggested was to add a BPT here:

          WITH BlobPackages_Table AS (
              SELECT * FROM jsonb_to_recordset("@BlobPackages_Table") AS BPT("DatabasePath_Text" VARCHAR(200), "PackageVersion_Id" INT)
          ),
      

      I don't see how that could work, but who knows. Another suggested change was this:

          WITH BlobPackages_Table AS (
              SELECT * FROM jsonb_to_recordset(COALESCE("@BlobPackages_Table", '[]'::jsonb)) AS BPT("DatabasePath_Text" VARCHAR(200), "PackageVersion_Id" INT)
          ),
      

      Although, I also don't believe that would work, since the @BlobPackages_Table would not be null. But again, who knows.

      Anyway... that's where I would start. It might be something else altogether, but I can't see it and I guess my ChatGPT prompt didn't spot it either.

      CREATE OR REPLACE PROCEDURE "DockerBlobs_RecordScanData"
      (
          "@DockerBlob_Id" INT,
          "@BlobInfo_Configuration" XML,
          "@BlobPackages_Table" JSONB
      )
      LANGUAGE plpgsql
      AS $$
      BEGIN
      
          IF "@BlobInfo_Configuration" IS NULL THEN
              DELETE FROM "DockerBlobInfos" WHERE "DockerBlob_Id" = "@DockerBlob_Id";
          ELSE
              INSERT INTO "DockerBlobInfos" ("DockerBlob_Id", "BlobInfo_Configuration")
                   VALUES ("@DockerBlob_Id", "@BlobInfo_Configuration")
              ON CONFLICT DO
               UPDATE SET "BlobInfo_Configuration" = "@BlobInfo_Configuration"
                    WHERE "DockerBlob_Id" = "@DockerBlob_Id";
          END IF;
      
          UPDATE "DockerBlobs"
             SET "LastScan_Date" = CURRENT_TIMESTAMP
           WHERE "DockerBlob_Id" = "@DockerBlob_Id";
      
          WITH BlobPackages_Table AS (
              SELECT * FROM jsonb_to_recordset("@BlobPackages_Table") AS ("DatabasePath_Text" VARCHAR(200), "PackageVersion_Id" INT)
          ),
          packagesToRemove AS (
              SELECT *
                FROM "DockerBlobPackages" DBP
                LEFT JOIN BlobPackages_Table BPT 
                       ON BPT."DatabasePath_Text" = DBP."DatabasePath_Text" 
                      AND BPT."PackageVersion_Id" = DBP."PackageVersion_Id"
               WHERE DBP."DockerBlob_Id" = "@DockerBlob_Id" 
                 AND BPT."PackageVersion_Id" IS NULL
          ),
          deletes AS  (
              DELETE FROM "DockerBlobPackages" DBP
                    USING packagesToRemove PTR
                    WHERE DBP."DockerBlob_Id" = "@DockerBlob_Id"
                      AND DBP."DatabasePath_Text" = PTR."DatabasePath_Text" 
                      AND DBP."PackageVersion_Id" = PTR."PackageVersion_Id"
          ),
          newBlobPackages AS (
              SELECT BPT.*
                FROM BlobPackages_Table BPT
                     LEFT JOIN "DockerBlobPackages" DBP 
                            ON DBP."DockerBlob_Id" = "@DockerBlob_Id" 
                           AND BPT."DatabasePath_Text" = DBP."DatabasePath_Text" 
                           AND BPT."PackageVersion_Id" = DBP."PackageVersion_Id"
               WHERE DBP."PackageVersion_Id" IS NULL
          )
          INSERT INTO "DockerBlobPackages"
               SELECT "@DockerBlob_Id",
                      "DatabasePath_Text",
                      "PackageVersion_Id"
                 FROM newBlobPackages BPT;
      
      END $$;
      

      (Also, I realize the JSON input is not a great way to handle this, but it's how we needed to port a few things from SQL Server to maintain parity in behavior.)

      Let us know if you find anything! Thanks, Alana

      posted in Support
    • RE: Deploying a Docker Image via Kubernetes with a yaml file

      Hi @brandon_owensby_2976,

      I'm afraid we don't have a lot of great documentation / information on how to deploy to Kubernetes using BuildMaster. The existing extension is quite old and reflects a pre-Helm approach where Kubernetes resources were deployed directly via raw manifests.

      These days, Helm charts are the standard way to package Kubernetes applications. A chart contains templated manifests along with default configuration (values.yaml) that can be overridden per environment.

      Once you have a chart, you typically deploy it with a command like:

      helm upgrade --install myapp corp/app -f values.yaml
      

      Additional overrides (like your override.yaml) can be layered in as needed. In theory, that's where a BuildMaster configuration file would come in, and BuildMaster would also run the upgrade commands.

      However... Helm isn't really run directly outside of development environments. Instead, most teams use a GitOps-based tool (e.g. Argo CD or Flux), which in turn uses Helm to continuously "sync" whatever's in Git with what's running in the cluster.

      The idea is that the "deployment state" is maintained in Git and doesn't need to be triggered from an external release system. In other words, a production "deployment" is done by issuing a commit.

      Because of this, pipeline-driven deployment tools like BuildMaster just aren't popular for Kubernetes workflows. We've seen competitive tools try, but they get a lot of pushback from the end users (i.e. Kubernetes engineers) and as a result don't see much adoption.

      In my opinion, this is like 7 layers of unnecessary complexity (and error-prone at that), and a basic Docker deployment covers like 99% of use cases... but that's not where the market is.

      Hope that helps clarify things a bit, let us know what you find.

      Cheers,

      Alana

      posted in Support
    • RE: Not able to delete published docker images

      Hi @parthu-reddy ,

      This does not appear to be a "fat manifest". Two other things come to mind.

      First, are these referenced in any helm charts? If so, they won't get removed unless you delete the helm chart first.

      Second, how about trying a separate policy, "Delete untagged manifests"? That should clear out these images.

      Thanks,
      Alana

      posted in Support
    • RE: Amazon.S3.AmazonS3Exception: Please reduce your request rate.

      Hi @cole-brand_2889 ,

      Wow, I didn't realize that S3 rate-limited like that!! That's good to know.

      Unless this is something that could be configured as an advanced SDK switch in the S3FileSystem, I don't think there's much that could/should be done in the ProGet or extension code.

      After searching the error (and seeing this very post on the first page of Google 😂), this just seems to be endemic to S3; there's no published rate limit, and even in AWS's official blog articles the only solution seems to be "follow the error message and reduce your request rate".

      There's probably something you can do on the load-balancer side of things... reducing concurrent requests, etc.

      Let us know what you find!

      Thanks,
      Alana

      posted in Support
    • RE: ProGet Connector Filters Performance

      Hi @davidroberts63 ,

      While connector filters were never really designed to replace the "approved packages" workflow, we've seen many users do exactly that over the years, yielding hundreds of entries.

      It's not exactly a use case we recommend, as one of the big benefits of the approved-packages flow is to prevent "instinctively upgrading dependencies", which can yield regressions. But if you're already effectively doing that through automation, then I suppose you already know the risks :)

      From a performance standpoint, it shouldn't make a notable impact. Those have been optimized for quite some time now.

      Thanks,
      Alana

      posted in Support
    • RE: Incorrect published date handling breaks min-release-age for npm feeds

      Hi @aleksander-szczepanek_3253 ,

      If you navigate to Admin > Advanced Settings and check "Use Connector Publish Date", then this will behave as you expect. Note that you will need to delete already-cached packages.

      This will be the default behavior in ProGet 2026+.

      Cheers,
      Alana

      posted in Support
    • RE: Transfer License: Active On Two Servers Temporarily

      Hi @denis-krienbuehl_4885 ,

      Thanks for checking; for a short-term overlap like this, no problem!

      Cheers,
      Alana

      posted in Support
    • RE: Supported Database for ProGet HA Installations

      Hi @EnterpriseVirtualization_2441 ,

      We do not recommend using SQL Server Availability Groups.

      For a product like ProGet, a single database node is all that's required -- and it's strongly recommended.

      There is no practical benefit to a clustered database here - on the contrary, it makes the product slower, less stable, and more costly/complex to maintain. As such, InedoDB does not support clustering.

      Cheers,
      Alana

      posted in Support
    • RE: Support for kubernetes-based deployment of ProGet and InedoDB?

      Hi @jeff-williams_1864 ,

      ProGet for Linux (Docker) is fully supported. You deploy it how you'd like, and many customers use container orchestration platforms like Kubernetes with no problem.

      However, we only provide step-by-step instructions for Docker. This is intentional, as these platforms are quite complex and require a lot of skills to configure, maintain, and troubleshoot.

      While we try to help support "platform issues" on Windows (i.e. everything from permissions to Domain configuration), that's a lot more straightforward for us to support -- and Microsoft can pick up the slack (e.g. a failed Windows update, etc).

      So long story short, if you are comfortable with Kubernetes/Openshift, feel free to use it. But otherwise, we don't want ProGet to be our users' "first Kubernetes" experience :)

      Thanks,
      Alana

      posted in Support
    • RE: https://docs.inedo.com/docs/proget/api/pgutil#sources ~/.config/pgutil/ pgutil.config correction

      Thanks for pointing that out @rcpa0 ! I've just updated the docs now.

      posted in Support
    • RE: Supported Database for ProGet HA Installations

      Hi @jeff-williams_1864 ,

      You mentioned that you're "using the embedded database at the moment", which I take to mean that you're not using a separate SQL Server container image.

      In that case, the only options for a clustered installation are InedoDB (recommended) or an external PostgreSQL (not recommended).

      If you were using SQL Server, then SQL Server would be supported for a clustered instance as well. However, we are moving away from SQL Server, so we definitely wouldn't recommend it on a new installation.

      Thanks,
      Alana

      posted in Support
    • RE: Proget is unable to download Maven packages that use a nonstandard versioning scheme

      Hi @devops-user @joshua-mitchell_8090 ,

      Thank you so much for testing! We'll merge this in via PG-3251 in tomorrow's maintenance release.

      As for the other error, it's technically unrelated -- that package has such a long "compliance analysis report" that it's getting truncated in the database cache. PostgreSQL complains about that; SQL Server silently does it. Anyway, we'll fix that via PG-3250, perhaps in tomorrow's release as well.

      Cheers,
      Alana

      posted in Support
    • RE: Docker _catalog and tags calls do not respect tokens

      Hi @Stephen-Schaff ,

      The Docker API does not use API keys but a ticket-based system (i.e. docker login). Here is how to use it:
      https://docs.inedo.com/docs/proget/docker/semantic-versioning#example-powershell-script-to-authenticate-to-docker

      We added some kind of support via PG-3206 in ProGet 2025.20, though it was only intended to address self-connectors to Docker registries. I don't know how well it will work here.

      Thanks,
      Alana

      posted in Support
    • RE: Composer feed: metapackage is not saved as a local package

      Hi @vdubrovskyi_1854 ,

      Unfortunately this is just how composer works; it never requests the metapackage from the server (i.e. ProGet) nor does it upload the composer.lock file to ProGet.

      There is no way for ProGet to "guess" what metapackages you may want, and ProGet obviously does not automatically download/install every metapackage from the upstream repository. That's not behavior anyone would want, and we will not add it to ProGet.

      You have two options:

      1. Modify the behavior of composer to request these packages from ProGet, or
      2. Write a script to parse your composer.lock and then download and/or promote those files within ProGet

      Hope that helps,

      Alana

      posted in Support
    • RE: Composer feed: metapackage is not saved as a local package

      Hi @vdubrovskyi_1854 ,

      I'm not an expert on how Composer handles packages, but so far as I can tell the behavior you’re seeing is expected and is how metapackage types are handled.

      A metapackage does not contain any files and is not installed into the vendor/ directory. It exists only to define dependencies on other packages. There are no contents in the package, and thus there is nothing for Composer to fetch.

      Because of this:

      • It will appear in composer.lock as part of dependency resolution
      • It will not create a directory under vendor/
      • The content itself is not fetched (downloaded) by Composer
      • Only the Composer API is queried

      ProGet can only cache a package when a download/fetch occurs. Since metapackages are not fetched, there is nothing to cache.

      When you download manually, you are deviating from Composer's normal install behavior for metapackages -- that's why it appears.

      In summary, this behavior is expected and not an error in ProGet. Unfortunately there's no way for ProGet to cache these packages, since Composer never downloads them.

      Hope that helps,

      Alana

      posted in Support
    • RE: [Buildmaster] Add queryable custom properties on a deployment level

      Hi @Anthony,

      I'm afraid not; the deploymentinfo endpoint is part of the BuildMaster API, and it's effectively reading data from the database that you'd otherwise see in the UI. It's fairly "disconnected" from runtime execution.

      The only persistent (i.e. outside of runtime) variables are going to be configuration (i.e. Build-scoped) variables. I suppose one thing you could do is define multiple build variables (e.g. $MyTarget1=value, $MyTarget2=value2).

      You could also store a map variable on the build, like %MyMap = %(MyTarget1: value, MyTarget2: value2) -- although that might involve a bit of awkward OtterScript to get working.

      It's not a use case we designed for.

      Cheers,
      Alana

      posted in Support
    • RE: Proget is unable to download Maven packages that use a nonstandard versioning scheme

      Hi @devops-user ,

      No problem, I just pushed Release 2025.25-rc.1.

      It's simply the 2025.24 release with the Maven patch added in. You can install it like this:
      https://docs.inedo.com/docs/installation/windows/inedo-hub/howto-install-prerelease-product-versions

      Thanks,
      Alana

      posted in Support
    • RE: proget.inedo.com DDOSed?

      Thanks for the heads up @felfert !

      Looks like we're back now; apparently there was some issue reporting the outage on our end :)

      Alana

      posted in Support
    • RE: [Buildmaster] Add queryable custom properties on a deployment level

      Hi @Anthony ,

      This sounds like a great use case for variables; you can programmatically set them in OtterScript using an operation (check your /reference/operations page to learn what the name is; I think it was renamed in modern versions of BuildMaster), and then query them with the variables endpoint: builds/«application-name»/«release-number»/«build-number»

      Here's a link to the documentation:
      https://docs.inedo.com/docs/buildmaster/reference/api/variables

      Cheers,
      Alana

      posted in Support
    • RE: Proget is unable to download Maven packages that use a nonstandard versioning scheme

      Hi @devops-user ,

      Can you try out the container image above? We'd like to get confirmation that it's working so we can then merge it to ProGet 2025 (we were otherwise planning on ProGet 2026).

      Thanks
      Alana

      posted in Support
    • RE: Proget is unable to download Maven packages that use a nonstandard versioning scheme

      Thanks @joshua-mitchell_8090 , we'll consider merging it in then!

      As for "how the dependencies are identified within the project build vulnerabilities", I suppose so - the IncrementalVersion2 will allow for proper vulnerability associated with packages that use "incorrect" versions (like 1.2.3.4). Jackson Databind is the one we kept coming across.

      Note you can request another trial key from my.inedo.com to try it out :)

      posted in Support
    • RE: ProGet SBOM Scan Not Creating Vulnerability Issues for NPM Packages

      Hi @_moep_ ,

      So there are quite a few "moving pieces" here.

      Vulnerability -> Assessment -> Compliance -> Build Issue

      Vulnerabilities & Assessments

      First and foremost, when you navigate to qs@0.6.6 in the ProGet UI, you should see several vulnerabilities listed, such as PGV-2287703. So, the "identification" is there as a result of the offline version of that database being included with ProGet.

      But ProGet is all about reducing noise while helping elevate real risks -- and most vulnerabilities are theoretical, have no real-world exploits, would require a dedicated attacker, and would result in no real damage.

      A "Denial of Service from Prototype Pollution" is great example of such a vulnerability. The risks and problems introduced by reactively upgrading every dependency far exceed any benefits -- moreover, it "de-sensitizes" everyone to real security risks. The idea of "when everything is severe nothing is" is the same as "when everything is a priority, nothing is".

      That's where Assessment comes in. In ProGet 2025 and earlier, a vulnerability is generally assessed as Ignored, Warn, or Blocked. PGV-2287703 will be assessed as Warn by default.

      **NOTE: this will be changing in ProGet 2026.**

      Policies & Compliance

      Next, there's the question of Compliance; the vulnerability assessment (among other things, like license, deprecation status, etc.) determines whether a package is Compliant, Noncompliant, or Warn.

      Compliance rules are configured in policies. In ProGet 2025, by default, the "Warn" Assessment will not make a package Noncompliant. Just Warn.

      Builds & Issues

      A Build is considered Noncompliant if any of its packages are Noncompliant. A Noncompliant build should be blocked from deploying to production.

      This is where Issues come in: an issue may be created for a Noncompliant package when a build is analyzed (try it out by clicking [analyze] in the UI). The purpose of these Issues is to effectively "override" the compliance status on a single package.

      They are not informational; if you want a list of packages, vulnerabilities, licenses, just use pgutil builds audit to get that listing.

      Long story short, I'd decide on a process you'd want to use before even considering web hooks for all this.

      Also note that most of this requires a paid license, so you may not even be getting this functionality if you're on a free version.

      Hope that helps,
      Alana

      posted in Support
    • RE: Support for Air-gapped environments

      Hi @steviecoaster ,

      Offline / air-gapped installation is common and a documented use-case:
      https://docs.inedo.com/docs/installation/windows/inedo-hub/offline

      As the article mentions, you can download the "offline installer", which is essentially a self-extracting zip file that runs a Custom Installer created using the Inedo Hub.

      That .exe file is not suitable for automation, so if you want to automate upgrades/installation, you'll need to use an alternative. That article outlines a few concepts, but ultimately it really depends how "air-gapped" we're talking here.

      If we're talking about a SCIF with "security-guard inspected installation media", then I don't think automation is really going to get you much ;)

      Thanks,
      Alana

      posted in Support
    • RE: Allow networkservice to use the DB in Proget

      Hi @reseau_6272 ,

      Just to confirm, you've switched the ProGet service from a domain account to use Network Service, and when starting the service you're getting some kind of permission error from SQL Server?

      The easiest solution is to simply switch to using a username/password instead of Windows Integrated Authentication and edit the connection string appropriately. Keep in mind that, eventually, you will need to move away from SQL Server and migrate to PostgreSQL, which will not have these issues.

      Otherwise, you will need to explicitly grant a login to the machine account. Network Service is represented in SQL Server as the machine account (e.g., DOMAIN\MACHINENAME$), and the login needs to be explicitly created (CREATE LOGIN [MYDOMAIN\WEB01$] FROM WINDOWS;) before you can assign permissions.

      Thanks,
      Alana

      posted in Support
    • RE: [ProGet] Unexpected redirect when accessing Maven package with non-standard version starting with a character

      Hi @koksime-yap_5909,

      Good news: it's available now for testing! We're considering merging it to ProGet 2025, or maybe keeping it for ProGet 2026?

      Anyway, I posted a lot more detail now:
      https://forums.inedo.com/topic/5696/proget-is-unable-to-download-maven-packages-that-use-a-nonstandard-versioning-scheme/2

      Thanks,
      Alana

      FYI -- I locked this thread; in case anyone has comments/questions on that change, I guess that post will be the "official" thread at this point :)

      posted in Support
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      Thanks for checking that! Well, I'm not sure then :)

      From here, how about sending us the package? Then I can upload it and see about debugging in ProGet to find out where it's coming from.

      If you can open a ticket and reference QA-3010 somewhere, it'll link the issues right up on our dashboard. Then you can attach the file to that ticket.

      We'll respond on there, and eventually update this thread once we figure out the issue.

      Thanks,
      Alana

      posted in Support
    • RE: Proget is unable to download Maven packages that use a nonstandard versioning scheme

      Hi @joshua-mitchell_8090 ,

      Thanks for the inquiry! The changes are available in the inedo/proget:25.0.24-ci.4 container image, and we'd love to get a second set of eyes. Are you using Docker?

      They're relatively simple, but we generally avoid changing stuff like this in maintenance releases... so it's currently slated for ProGet 2026.

      That said, it should be okay for a maintenance release. Please let us know; we'll decide whether to release it based on your and other users' feedback.

      Here's what we changed.

      First, we added a "sixth" component called IncrementalVersion2 that will support versions like 1.2.3.4-mybuild-678 (where 4 is the second incrementing version), so that vulnerability identification can work better. Our implementation is based on the Maven version specs, which, in retrospect, seem to be followed only by ProGet. Pretty low risk here.
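
      As a rough sketch of what that extra component means (hypothetical code, not ProGet's actual parser):

          using System;

          // Split "1.2.3.4-mybuild-678" into its numeric components and qualifier.
          var version = "1.2.3.4-mybuild-678";
          int dash = version.IndexOf('-');
          string numericPart = dash < 0 ? version : version[..dash];   // "1.2.3.4"
          string qualifier = dash < 0 ? "" : version[(dash + 1)..];    // "mybuild-678"

          // Previously only three numbers (major.minor.incremental) were modeled;
          // the fourth ("4") is now captured as IncrementalVersion2, so versions like
          // 1.2.3.4 can be matched against vulnerability ranges correctly.
          string[] numbers = numericPart.Split('.');
          Console.WriteLine($"components: [{string.Join(", ", numbers)}], qualifier: {qualifier}");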

      Second, we changed our "path parsing" logic, which identifies the groupId, artifactId, version, and artifactType from a URL path like /junit/junit/4.8.2/junit-4.8.2.jar or /mygroup/more-group/group-42/my-artifact/1.0-SNAPSHOT/maven-metadata.xml.

      It's a little hard to explain, so I'll just share the new and old logic:

      //OLD: if (urlPartsQ.TryPeek(out string? maybeVersion) && char.IsNumber(maybeVersion, 0))
      if (urlPartsQ.TryPeek(out string? maybeVersion) && (
          char.IsNumber(maybeVersion, 0)
          || maybeVersion.EndsWith("-SNAPSHOT", StringComparison.OrdinalIgnoreCase)
          || (this.FileName is not null && !this.FileName.Equals("maven-metadata.xml", StringComparison.OrdinalIgnoreCase))
          ))
      {
          this.Version = maybeVersion;
          urlPartsQ.Pop();
      }
      

      Long story short, this seems to work fine for v8.5.0 and shouldn't break unless someone is uploading improperly named artifact files (e.g. my-group/my-artifact/version-1000/maven-metadata.xml or my-photo/cool-snapshot/hello-kitty.jpg).
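
      To make the new condition concrete, here's a hypothetical, self-contained version of that check with a few sample inputs (IsVersionSegment is my name for it, not ProGet's, and the file name is passed as a parameter rather than read from this.FileName):

          using System;

          static bool IsVersionSegment(string segment, string? fileName) =>
              char.IsNumber(segment, 0)
              || segment.EndsWith("-SNAPSHOT", StringComparison.OrdinalIgnoreCase)
              || (fileName is not null && !fileName.Equals("maven-metadata.xml", StringComparison.OrdinalIgnoreCase));

          // Starts with a digit: treated as a version under both the old and new logic.
          Console.WriteLine(IsVersionSegment("4.8.2", "junit-4.8.2.jar"));          // True
          // Doesn't start with a digit, but the file isn't maven-metadata.xml: the new logic accepts it.
          Console.WriteLine(IsVersionSegment("v8.5.0", "my-artifact-v8.5.0.jar"));  // True (was False)
          // The breakage case mentioned above: "cool-snapshot" ends with "-snapshot",
          // so it would be (wrongly) treated as a version segment.
          Console.WriteLine(IsVersionSegment("cool-snapshot", "hello-kitty.jpg"));  // True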

      Thanks,
      Alana

      posted in Support
    • RE: [ProGet] Unexpected redirect when accessing Maven package with non-standard version starting with a character

      Hi @koksime-yap_5909 ,

      Just a quick update! Given that this is a more widespread problem, we've fixed the code and plan to release it in ProGet 2026 (or possibly sooner, if we can make it low-risk enough for a maintenance release).

      Thanks,
      Alana

      posted in Support
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      Sorry, it looks like we're dealing with a lot more code than I expected. I really don't know what to look at, and neither your code nor our code makes sense to me (it's been many, many years since anyone edited it).

      I'm not sure if it's helpful, but I'll share the code of our class. If you spot anything simple to change, we can explore it. Otherwise, I think the only way to move forward would be for you to share some example NuGet packages with us so that we can attach a debugger.

      Here's the MicrosoftPdbFile class, which I combined into one giant listing here:

      using System;
      using System.Collections;
      using System.Collections.Generic;
      using System.Collections.Immutable;
      using System.IO;
      using System.Linq;
      using System.Text;
      
      namespace Inedo.ProGet.Symbols;
      
      /// <summary>
      /// Provides access to the data contained in a Microsoft PDB file.
      /// </summary>
      public sealed class MicrosoftPdbFile : IDisposable, IPdbFile
      {
          private RootIndex root;
          private Dictionary<string, int> nameIndex;
          private bool leaveStreamOpen;
          private bool disposed;
      
          /// <summary>
          /// Initializes a new instance of the <see cref="MicrosoftPdbFile"/> class.
          /// </summary>
          /// <param name="stream">Stream which is backed by a PDB file.</param>
          /// <param name="leaveStreamOpen">Value indicating whether to leave the stream open after this instance is disposed.</param>
          public MicrosoftPdbFile(Stream stream, bool leaveStreamOpen)
          {
              if (stream == null)
                  throw new ArgumentNullException(nameof(stream));
      
              this.leaveStreamOpen = leaveStreamOpen;
              this.Initialize(stream);
          }
      
          /// <summary>
          /// Gets the PDB signature.
          /// </summary>
          public uint Signature { get; private set; }
          /// <summary>
          /// Gets the PDB age.
          /// </summary>
          public uint Age { get; private set; }
          /// <summary>
          /// Gets the PDB guid.
          /// </summary>
          public Guid Guid { get; private set; }
      
          ImmutableArray<byte> IPdbFile.Id => this.Guid.ToByteArray().ToImmutableArray();
          bool IPdbFile.IsPortable => false;
      
          /// <summary>
          /// Returns a stream backed by the data in a named PDB stream.
          /// </summary>
          /// <param name="streamName">Name of the PDB stream to open.</param>
          /// <returns>Stream backed by the specified named stream.</returns>
          public Stream OpenStream(string streamName)
          {
              if (streamName == null)
                  throw new ArgumentNullException(nameof(streamName));
      
              int? streamIndex = this.TryGetStream(streamName);
              if (streamIndex == null)
                  throw new InvalidOperationException($"Stream {streamName} was not found.");
      
              return this.root.OpenRead((int)streamIndex);
          }
          /// <summary>
          /// Returns an enumeration of all of the stream names in the PDB file.
          /// </summary>
          /// <returns>Enumeration of all stream names.</returns>
          public IEnumerable<string> EnumerateStreams() => this.nameIndex.Keys;
          /// <summary>
          /// Returns an enumeration of all of the source file names in the PDB file.
          /// </summary>
          /// <returns>Enumeration of all of the source file names.</returns>
          public IEnumerable<string> GetSourceFileNames()
          {
              var srcFileNames = this.EnumerateStreams()
                  .Where(s => s.StartsWith("/src/files/", StringComparison.OrdinalIgnoreCase))
                  .Select(s => s.Substring("/src/files/".Length))
                  .ToHashSet(StringComparer.OrdinalIgnoreCase);
      
              try
              {
                  using (var namesStream = this.OpenStream("/names"))
                  using (var namesReader = new BinaryReader(namesStream))
                  {
                      namesStream.Position = 8;
                      int length = namesReader.ReadInt32();
                      long endPos = length + 12;
      
                      while (namesStream.Position < endPos && namesStream.Position < namesStream.Length)
                      {
                          try
                          {
                              var name = ReadNullTerminatedString(namesReader);
                              if (name.Length > 0 && Path.IsPathRooted(name))
                                  srcFileNames.Add(name);
                          }
                          catch
                          {
                              // Can't read name
                          }
                      }
                  }
              }
              catch
              {
                  // Can't enumerate names stream
              }
      
              return srcFileNames;
          }
      
          /// <summary>
          /// Closes the PDB file.
          /// </summary>
          public void Close()
          {
              if (!this.disposed)
              {
                  this.root.Close(this.leaveStreamOpen);
                  this.disposed = true;
              }
          }
          void IDisposable.Dispose() => this.Close();
      
          private void Initialize(Stream stream)
          {
              var fileSignature = new byte[0x20];
              stream.Read(fileSignature, 0, fileSignature.Length);
      
              this.root = new RootIndex(stream);
      
              using (var sigStream = this.root.OpenRead(1))
              using (var reader = new BinaryReader(sigStream))
              {
                  uint version = reader.ReadUInt32();
                  this.Signature = reader.ReadUInt32();
                  this.Age = reader.ReadUInt32();
                  this.Guid = new Guid(reader.ReadBytes(16));
      
                  this.nameIndex = ReadNameIndex(reader);
              }
          }
          private int? TryGetStream(string name) => this.nameIndex.TryGetValue(name, out int index) ? (int?)index : null;
      
          private static Dictionary<string, int> ReadNameIndex(BinaryReader reader)
          {
              int stringOffset = reader.ReadInt32();
      
              var startOffset = reader.BaseStream.Position;
              reader.BaseStream.Seek(stringOffset, SeekOrigin.Current);
      
              int count = reader.ReadInt32();
              int hashTableSize = reader.ReadInt32();
      
              var present = new BitArray(reader.ReadBytes(reader.ReadInt32() * 4));
              var deleted = new BitArray(reader.ReadBytes(reader.ReadInt32() * 4));
              if (deleted.Cast<bool>().Any(b => b))
                  throw new InvalidDataException("PDB format not supported: deleted bits are not 0.");
      
              var nameIndex = new Dictionary<string, int>(hashTableSize + 100, StringComparer.OrdinalIgnoreCase);
      
              for (int i = 0; i < hashTableSize; i++)
              {
                  if (i < present.Length && present[i])
                  {
                      int ns = reader.ReadInt32();
                      int ni = reader.ReadInt32();
      
                      var pos = reader.BaseStream.Position;
                      reader.BaseStream.Position = startOffset + ns;
                      var name = ReadNullTerminatedString(reader);
                      reader.BaseStream.Position = pos;
      
                      nameIndex.Add(name, ni);
                  }
              }
      
              return nameIndex;
          }
          private static string ReadNullTerminatedString(BinaryReader reader)
          {
              var data = new List<byte>();
              var b = reader.ReadByte();
              while (b != 0)
              {
                  data.Add(b);
                  b = reader.ReadByte();
              }
      
              return Encoding.UTF8.GetString(data.ToArray());
          }
      
          private sealed class PagedFile : IDisposable
          {
              private LinkedList<CachedPage> pages = new LinkedList<CachedPage>();
              private Stream baseStream;
              private readonly object lockObject = new object();
              private BitArray freePages;
              private uint pageSize;
              private uint pageCount;
              private bool disposed;
      
              public PagedFile(Stream baseStream, uint pageSize, uint pageCount)
              {
                  this.baseStream = baseStream;
                  this.pageSize = pageSize;
                  this.pageCount = pageCount;
                  this.CacheSize = 1000;
              }
      
              public int CacheSize { get; }
              public uint PageSize => this.pageSize;
              public uint PageCount => this.pageCount;
      
              public void InitializeFreePageList(byte[] data)
              {
                  this.freePages = new BitArray(data);
              }
              public byte[] GetFreePageList()
              {
                  var data = new byte[this.freePages.Count / 8];
                  for (int i = 0; i < data.Length; i++)
                  {
                      for (int j = 0; j < 8; j++)
                      {
                          if (this.freePages[(i * 8) + j])
                              data[i] |= (byte)(1 << j);
                      }
                  }
      
                  return data;
              }
              public byte[] GetPage(uint pageIndex)
              {
                  if (this.disposed)
                      throw new ObjectDisposedException(nameof(PagedFile));
                  if (pageIndex >= this.pageCount)
                      throw new ArgumentOutOfRangeException();
      
                  lock (this.lockObject)
                  {
                      var page = this.pages.FirstOrDefault(p => p.PageIndex == pageIndex);
                      if (page != null)
                      {
                          this.pages.Remove(page);
                      }
                      else
                      {
                          var buffer = new byte[this.pageSize];
                          this.baseStream.Position = this.pageSize * pageIndex;
                          this.baseStream.Read(buffer, 0, buffer.Length);
                          page = new CachedPage
                          {
                              PageIndex = pageIndex,
                              PageData = buffer
                          };
                      }
      
                      while (this.pages.Count >= this.CacheSize)
                      {
                          this.pages.RemoveLast();
                      }
      
                      this.pages.AddFirst(page);
      
                      return page.PageData;
                  }
              }
              public void Dispose()
              {
                  this.baseStream.Dispose();
                  this.pages = null;
                  this.disposed = true;
              }
      
              private sealed class CachedPage : IEquatable<CachedPage>
              {
                  public uint PageIndex;
                  public byte[] PageData;
      
                  public bool Equals(CachedPage other) => this.PageIndex == other.PageIndex && this.PageData == other.PageData;
                  public override bool Equals(object obj) => obj is CachedPage p ? this.Equals(p) : false;
                  public override int GetHashCode() => this.PageIndex.GetHashCode();
              }
          }
          private sealed class PdbStream : Stream
          {
              private RootIndex root;
              private StreamInfo streamInfo;
              private uint position;
      
              public PdbStream(RootIndex root, StreamInfo streamInfo)
              {
                  this.root = root;
                  this.streamInfo = streamInfo;
              }
      
              public override bool CanRead => true;
              public override bool CanSeek => true;
              public override bool CanWrite => false;
              public override long Length => this.streamInfo.Length;
              public override long Position
              {
                  get => this.position;
                  set => this.position = (uint)value;
              }
      
              public override void Flush()
              {
              }
              public override int Read(byte[] buffer, int offset, int count)
              {
                  if (buffer == null)
                      throw new ArgumentNullException(nameof(buffer));
      
                  int bytesRemaining = Math.Min(count, (int)(this.Length - this.position));
                  int bytesRead = 0;
      
                  while (bytesRemaining > 0)
                  {
                      uint currentPage = this.position / this.root.Pages.PageSize;
                      uint currentPageOffset = this.position % this.root.Pages.PageSize;
      
                      var page = this.root.Pages.GetPage(this.streamInfo.Pages[currentPage]);
      
                      int bytesToCopy = Math.Min(bytesRemaining, (int)(this.root.Pages.PageSize - currentPageOffset));
      
                      Array.Copy(page, currentPageOffset, buffer, offset + bytesRead, bytesToCopy);
                      bytesRemaining -= bytesToCopy;
                      this.position += (uint)bytesToCopy;
                      bytesRead += bytesToCopy;
                  }
      
                  return bytesRead;
              }
              public override int ReadByte()
              {
                  if (this.position >= this.Length)
                      return -1;
      
                  uint currentPage = this.position / this.root.Pages.PageSize;
                  uint currentPageOffset = this.position % this.root.Pages.PageSize;
      
                  var page = this.root.Pages.GetPage(this.streamInfo.Pages[currentPage]);
                  this.position++;
      
                  return page[currentPageOffset];
              }
              public override long Seek(long offset, SeekOrigin origin)
              {
                  switch (origin)
                  {
                      case SeekOrigin.Begin:
                          this.position = (uint)offset;
                          break;
      
                      case SeekOrigin.Current:
                          this.position = (uint)((long)this.position + offset);
                          break;
      
                      case SeekOrigin.End:
                          this.position = (uint)(this.Length + offset);
                          break;
                  }
      
                  return this.position;
              }
              public override void SetLength(long value) => throw new NotSupportedException();
              public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
              public override void WriteByte(byte value) => throw new NotSupportedException();
          }
          private sealed class RootIndex
          {
              private BinaryReader reader;
              private List<StreamInfo> streams = new List<StreamInfo>();
              private StreamInfo rootStreamInfo;
              private StreamInfo rootPageListStreamInfo;
              private uint freePageMapIndex;
      
              public RootIndex(Stream stream)
              {
                  this.reader = new BinaryReader(stream);
                  this.Initialize();
              }
      
              public PagedFile Pages { get; private set; }
      
              public Stream OpenRead(int streamIndex)
              {
                  var streamInfo = this.streams[streamIndex];
                  return new PdbStream(this, streamInfo);
              }
              public void Close(bool leaveStreamOpen)
              {
                  if (!leaveStreamOpen)
                      this.reader.Dispose();
              }
      
              private void Initialize()
              {
                  this.reader.BaseStream.Position = 0x20;
                  var pageSize = this.reader.ReadUInt32();
                  var pageFlags = this.reader.ReadUInt32(); // page index of the free page map (see below)
                  var pageCount = this.reader.ReadUInt32();
                  var rootSize = this.reader.ReadUInt32();
                  this.reader.ReadUInt32(); // skip reserved
      
                  this.Pages = new PagedFile(this.reader.BaseStream, pageSize, pageCount);
                  this.freePageMapIndex = pageFlags;
      
                  // Calculate the number of pages needed to store the root data
                  int rootPageCount = (int)(rootSize / pageSize);
                  if ((rootSize % pageSize) != 0)
                      rootPageCount++;
      
                  // Calculate the number of pages needed to store the list of pages
                  int rootIndexPages = (rootPageCount * 4) / (int)pageSize;
                  if (((rootPageCount * 4) % (int)pageSize) != 0)
                      rootIndexPages++;
      
                  // Read the page indices of the pages that contain the root pages
                  var rootIndices = new List<uint>(rootIndexPages);
                  for (int i = 0; i < rootIndexPages; i++)
                      rootIndices.Add(this.reader.ReadUInt32());
      
                  // Read the free page map
                  this.reader.BaseStream.Position = pageFlags * pageSize;
                  this.Pages.InitializeFreePageList(this.reader.ReadBytes((int)pageSize));
      
                  this.rootPageListStreamInfo = new StreamInfo(rootIndices.ToArray(), (uint)rootPageCount * 4);
      
                  // Finally actually read the root indices themselves
                  var rootPages = new List<uint>(rootPageCount);
                  using (var rootPageListStream = new PdbStream(this, this.rootPageListStreamInfo))
                  using (var pageReader = new BinaryReader(rootPageListStream))
                  {
                      for (int i = 0; i < rootPageCount; i++)
                          rootPages.Add(pageReader.ReadUInt32());
                  }
      
                  this.rootStreamInfo = new StreamInfo(rootPages.ToArray(), rootSize);
                  using (var rootStream = new PdbStream(this, this.rootStreamInfo))
                  {
                      var rootReader = new BinaryReader(rootStream);
      
                      uint streamCount = rootReader.ReadUInt32();
      
                      var streamLengths = new uint[streamCount];
                      for (int i = 0; i < streamLengths.Length; i++)
                          streamLengths[i] = rootReader.ReadUInt32();
      
                      var streamPages = new uint[streamCount][];
                      for (int i = 0; i < streamPages.Length; i++)
                      {
                          if (streamLengths[i] > 0 && streamLengths[i] < int.MaxValue)
                          {
                              uint streamLengthInPages = streamLengths[i] / pageSize;
                              if ((streamLengths[i] % pageSize) != 0)
                                  streamLengthInPages++;
      
                              streamPages[i] = new uint[streamLengthInPages];
                              for (int j = 0; j < streamPages[i].Length; j++)
                                  streamPages[i][j] = rootReader.ReadUInt32();
                          }
                      }
      
                      for (int i = 0; i < streamLengths.Length; i++)
                      {
                          this.streams.Add(
                              new StreamInfo(streamPages[i], streamLengths[i])
                          );
                      }
                  }
              }
          }
          private sealed class StreamInfo
          {
              private uint[] pages;
              private uint length;
      
              public StreamInfo(uint[] pages, uint length, bool dirty = false)
              {
                  this.pages = pages;
                  this.length = length;
                  this.IsDirty = dirty;
              }
      
              public uint[] Pages
              {
                  get => this.pages;
                  set
                  {
                      if (this.pages != value)
                      {
                          this.pages = value;
                          this.IsDirty = true;
                      }
                  }
              }
              public uint Length
              {
                  get => this.length;
                  set
                  {
                      if (this.length != value)
                      {
                          this.length = value;
                          this.IsDirty = true;
                      }
                  }
              }
              public bool IsDirty { get; private set; }
          }
      }
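
      To make the root-directory indirection easier to follow, here's a tiny standalone sketch of the page math that Initialize performs. The header values here are made up for illustration; this isn't part of the actual class:

      using System;

      class PageMathDemo
      {
          static void Main()
          {
              // Hypothetical header values, for illustration only.
              uint pageSize = 0x1000;  // 4096-byte pages
              uint rootSize = 10_000;  // root directory occupies 10,000 bytes

              // Pages needed to hold the root data itself (ceiling division,
              // equivalent to the divide-then-increment in Initialize).
              uint rootPageCount = (rootSize + pageSize - 1) / pageSize;            // 3

              // Each page index is a 4-byte uint, so the list of root pages
              // itself needs this many pages to store.
              uint rootIndexPages = (rootPageCount * 4 + pageSize - 1) / pageSize;  // 1

              Console.WriteLine($"root pages: {rootPageCount}, index pages: {rootIndexPages}");
          }
      }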
      
      posted in Support
      atripp
      atripp
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      It's certainly possible; there are a few hundred lines of code that make up the MicrosoftPdbFile class, so I'm not sure which parts would be most useful to share. Of course, I'm happy to share it all if you'd like.

      Since you mentioned your colleague was able to read the file, perhaps you can share what they did, and I can see how it compares to our code?

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      If there's an error reading the file with GetMetadataReader, then we fall back to loading it with the MicrosoftPdbFile class that we wrote. So I'm guessing that fallback is where the wrong information is coming from?
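
      In rough terms, the dispatch looks like this (just a simplified sketch; LoadPdb and the MicrosoftPdbFile constructor shown here are illustrative, not the exact ProGet code):

      // Simplified sketch; names and signatures are illustrative only.
      IPdbFile LoadPdb(Stream stream)
      {
          // Try the portable (ECMA-335) reader first; Load returns null on error.
          var portable = PortablePdbFile.Load(stream);
          if (portable != null)
              return portable;

          // Fall back to our hand-written native PDB parser.
          stream.Position = 0;
          return new MicrosoftPdbFile(stream);
      }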

      Anyway, let me share the full code for the PortablePdbFile class with you. I summarized it before, but this way you can see the full context of what we're doing and why.

      using System;
      using System.Collections.Generic;
      using System.Collections.Immutable;
      using System.IO;
      using System.Linq;
      using System.Reflection.Metadata;
      
      namespace Inedo.ProGet.Symbols;
      
      public sealed class PortablePdbFile : IPdbFile
      {
          private readonly MetadataReader metadataReader;
      
          private PortablePdbFile(MetadataReader metadataReader) => this.metadataReader = metadataReader;
      
          // Visual Studio always treats this value as a GUID, despite the Portable PDB spec
          public ImmutableArray<byte> Id => this.metadataReader.DebugMetadataHeader.Id.RemoveRange(16, 4);
      
          // not really an age, but the last 4 bytes of the ID; Visual Studio ignores it
          uint IPdbFile.Age => BitConverter.ToUInt32(this.metadataReader.DebugMetadataHeader.Id.ToArray(), 16);
          bool IPdbFile.IsPortable => true;
      
          public IEnumerable<string> GetSourceFileNames()
          {
              foreach (var docHandle in this.metadataReader.Documents)
              {
                  if (!docHandle.IsNil)
                  {
                      var doc = this.metadataReader.GetDocument(docHandle);
                      yield return this.metadataReader.GetString(doc.Name);
                  }
              }
          }
      
          public static PortablePdbFile Load(Stream source)
          {
              if (source == null)
                  throw new ArgumentNullException(nameof(source));
      
              try
              {
                  var provider = MetadataReaderProvider.FromPortablePdbStream(source, MetadataStreamOptions.LeaveOpen);
                  var reader = provider.GetMetadataReader();
                  if (reader.MetadataKind != MetadataKind.Ecma335)
                      return null;
      
                  return new PortablePdbFile(reader);
              }
              catch
              {
                  return null;
              }
          }
      
          void IDisposable.Dispose()
          {
          }
      }
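
      For reference, here's roughly how a caller might use it (illustrative only; the file path and output formatting are not from our code):

      using System;
      using System.IO;
      using System.Linq;
      using Inedo.ProGet.Symbols;

      // Illustrative usage, not the actual ProGet call site.
      using var fileStream = File.OpenRead(@"C:\temp\MyLibrary.pdb");
      var pdb = PortablePdbFile.Load(fileStream);
      if (pdb != null)
      {
          // The 16-byte Id is what debuggers treat as the PDB's GUID.
          var guid = new Guid(pdb.Id.ToArray());
          Console.WriteLine($"PDB ID: {guid:N}");
      }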
      
      posted in Support
      atripp
      atripp
    • RE: Proget 25.x and Azure PostGres

      Hi @certificatemanager_4002 ,

      From a security perspective, it's fine to leave it as root, since the core process runs as the non-root postgres user inside the container. A network service is never exposed while the containerized process still has root privileges.

      Here is more information on this if you're curious:
      https://stackoverflow.com/questions/73672857/how-to-run-postgres-in-docker-as-non-root-user

      As you can see in that link, it's technically possible to configure it to run as non-root, but it requires more effort and doesn't really provide any benefit.

      As for load testing and restarting, it really depends on the hardware and similar factors. Keep in mind that InedoDB is simply the postgresql container image with some minor configuration tweaks, so any question you'd ask about InedoDB is really a question about postgresql as well.

      As for using an external PostgreSQL server, the only information we have at this time is in the link I sent you earlier. You'll really need to be a PostgreSQL expert if you want to run your own server.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp