They are ordered by name - so if order is important to you, I guess you could always do the 10-nuget.org, 20-abc.net or whatever
Thanks,
Alana
Hi @Stholm ,
I assume you saw the Terraform Modules documentation in ProGet?
While updating the Support for Terraform Backends article to link to this discussion, I noticed we had some internal notes, so I'll transfer them here:
This would require implementing both the Provider Registry Protocol (for first-party plugins) and the Provider Network Mirror Protocol (for connectors). Both seem relatively simple, though there appear to be some complexities involving signature files.
In either case, we ought not to package these, because they are quite large. For example, the hashicorp/aws provider for Windows is just a zip file with a single, 628MB .exe. They also have no metadata whatsoever returned from the API.
One option is just to store these as manifest-less packages. For example, hashicorp/aws packages could be pkg:tfprovider/hashicorp@5.75.0?os=windows&arch=amd64. This would be two purls in one feed, which might not work, so it might require a new feed.
Don't ask me what that all means, I'm just the copy/paster 
But based on my read of that, it sounds like a big effort (i.e. a new feed type) to try to fit a round peg in a square hole. And honestly your homebuilt solution might work better.
I think we'd need to see how much of a demand there is in the offline/air-gapped Terraform userbase for this. But feel free to add more thoughts as you have them.
Thanks,
Alana
Hi @james-woods_8996 ,
ProGet uses the AWS SDK for .NET, so I can't imagine environment variables would have any impact. I have no idea what those mean or do, but there's probably a way to configure those in the SDK.
That said, another user is currently testing a change for Oracle Cloud Infrastructure, which seems to also be giving some kind of hash related error.
Perhaps it'll work for you? AWS v3.1.4-RC.1 is published to our prerelease feed. You can download and manually install, or update to use the prerelease feed:
https://docs.inedo.com/docs/proget/administration/extensions#manual-installation
After installing the extension, the "Disable Payload Signing" option will show up on the Advanced tab, and that property will be forwarded to the PUT request. In theory that will work, at least according to the one post from above.
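In case it helps picture it, this is roughly the shape of the change on the SDK side (just a sketch; the bucket/key/file values are placeholders, but DisablePayloadSigning is a real property on the SDK's PutObjectRequest):
using Amazon.S3;
using Amazon.S3.Model;

var s3 = new AmazonS3Client();
var putRequest = new PutObjectRequest
{
    BucketName = "my-proget-storage",             // placeholder
    Key = "packages/MyPackage.1.0.0.nupkg",       // placeholder
    FilePath = @"C:\temp\MyPackage.1.0.0.nupkg",  // placeholder
    DisablePayloadSigning = true                  // the new advanced-tab option
};
await s3.PutObjectAsync(putRequest);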
One other thing to test would be uploading Assets (i.e. creating an Asset Directory) via the Web UI. That is the easiest way to do multi-part upload testing.
If it doesn't work, then we can try to research how else to change the extension to get it working.
Thanks,
Alana
Can you clarify what you've tried to date, and the issues you've faced?
You can silently install the Inedo Agent, or even use a Manual Installation process if you'd prefer.
Ultimately it's a standard Windows service, and you can change the account from LOCAL SYSTEM (which is what we recommend staying with, by the way) to another account using sc.exe or other tools.
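For example, from an elevated prompt (the service name here is a guess on my part -- double-check it in services.msc; note that sc.exe requires the space after each = sign):
sc.exe config INEDOAGENTSVC obj= "MYDOMAIN\svc-agent" password= "(the password)"
sc.exe stop INEDOAGENTSVC
sc.exe start INEDOAGENTSVC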
Thanks,
Alana
Hi @jw ,
Technically it's doable, though it's not trivial due to the number of places the change would need to be made and tested... ProGet API, pgutil, docs.
The code/title change itself looks trivial (i.e. just pass in External_Id and Title_Text to the call to Licenses_UpdateLicenseData), though I'm not totally clear what to do about the other one. What does pgutil send in? Null? []? Etc.
As a free/community user this isn't all that easy to prioritize... but if you could do the heavy-lifting on the Docs and pgutil (i.e. submit a PR), and give us a script or set of pgutil commands that we can just run against a local instance... I'm like 95% sure I can make the API change in 5 minutes.
Thanks,
Alana
Hi @jlarionov_2030 , easy change! We will no longer require a valid license for the Settings API endpoint going forward; PG-3133 will ship in the next maintenance release, scheduled for Friday.
Hi @tobe-burce_3659 ,
We do not support deleting or modifying any of the contents under the program directory; they will simply return when you upgrade and it may cause other problems.
Instead, please create an exception; we are aware of the vulnerabilities in libraries that PostgreSQL uses and can assure you that they are false positives and will have no impact on ProGet... even if you were using PostgreSQL.
Using virus/malware tools to scan/monitor ProGet's operation causes lots of problems, as these tools interfere with file operations and cause big headaches.
Thanks,
Alana
Hi @Sigve-opedal_6476 ,
No idea why it wouldn't work, but I would look at something like Proxyman or Wireshark to capture the HTTP traffic and see which requests are different.
You should see a pattern of requests that work, and a pattern that doesn't.
Maybe the client is requesting some other file that you aren't uploading? I don't think there's an API or HTTP header... I think it's all basic GET requests. But that will tell you the delta.
Thanks,
Alana
Hi @fabrice-mejean ,
I definitely understand where you're coming from... both commands basically work off the assets file, which is generated at build time.
But your workflow is not common... the standard for SBOM generation is post-build. Doing it pre-build requires that packages.lock.json is used, which not many projects do... it's hard for us to advocate for this workflow when most users don't care about saving time at this stage.
I know we could add a "switch" or something to pgutil, but we learned "the hard way" that adding lots of complex alternative/branching paths to pgscan made the code very difficult to maintain and understand, so we want to keep the utility as simple as possible.
Thanks,
Alana
Hi @pmsensi,
Correct -- it'll be whatever data is on the "Dependencies" tab in ProGet, which is basically whatever is in the manifest file (.nuspec, etc).
Thanks,
Alana
Hi @fabrice-mejean @pmsensi ,
We've got this spec'd out and on the roadmap now as PG-3126! It'll come through a maintenance release, along with pgutil security commands for configuring users, groups, and tasks.
The target is 2025.13, which is planned for October 24. I don't know if we'll hit that target, but that's what we're aiming for.
Please check out the specs on PG-3126; I think it captures what you're looking for, which is basically an expanded metadata object that includes compliance data, detected licenses, and vulnerabilities.
Thanks,
Alana
Hi @fabrice-mejean,
Using packages.lock.json seemed to make the most sense to us too, but ultimately we decided not to use it for a few reasons.
First and foremost, none of the other .NET SBOM-generators seemed to use the packages.lock.json file. That's usually a sign that there's a "good reason" for us not to either.
From our perspective, pgutil builds scan is intended to be used in a CI environment, where dotnet build is run anyway and the assets file is already present. We don't have a use-case for an alternative workflow, where a build is not actually run.
In addition, packages.lock.json files are still pretty niche and not widely used. You have to "go out of your way" to use them, and <PackageReference ... /> is by far the most common approach. It might be worth monitoring Issue #658 at CycloneDX/cyclonedx-dotnet to see if anyone picks it up there.
Technically it's not all that complex to do, but it adds complexity and confusion... especially since most users will not be familiar with the differences between the lock and asset file. So it's not a good fit for pgutil builds scan.
HOWEVER, you could probably ask ChatGPT to write a trivial PowerShell script that "transforms" a lock file into a minimal SBOM document, and tweak it for what you want in ProGet. That same script could also upload the file to ProGet, or use pgutil to do so.
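Something in this direction, for example (an untested sketch: the lock-file shape is standard, but treat the SBOM fields and file names as assumptions to adjust):
# Read the lock file and emit a minimal CycloneDX-style SBOM (bom.json).
$lock = Get-Content packages.lock.json -Raw | ConvertFrom-Json
$components = foreach ($tfm in $lock.dependencies.PSObject.Properties) {
    foreach ($dep in $tfm.Value.PSObject.Properties) {
        if ($dep.Value.type -ne 'Project') {  # skip project references
            [pscustomobject]@{
                type    = 'library'
                name    = $dep.Name
                version = $dep.Value.resolved
                purl    = "pkg:nuget/$($dep.Name)@$($dep.Value.resolved)"
            }
        }
    }
}
[pscustomobject]@{
    bomFormat   = 'CycloneDX'
    specVersion = '1.4'
    version     = 1
    components  = @($components | Sort-Object name, version -Unique)
} | ConvertTo-Json -Depth 5 | Set-Content bom.json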
Thanks,
Alana
Hi @layfield_8963 ,
Thanks that makes sense -- since you've already got a personal repo going, I think it makes sense to stick with that for now. If other users are interested, we can explore it.
We publish pgutil pretty regularly, so we'd need to automate updating that repository, and that's just one more script to write and one more thing to break later :)
Thanks,
Alana
Hi @it_9582 ,
This is a known issue / UI quirk with Conan packages, and hopefully should only impact that one page in the UI.
To be honest I don't quite get the issue, but it has something to do with the fact that a Conan package is actually a "set of packages that share a name and version". Each package in the set can define its own license file.
The particular page was never really designed for "package sets" so the display is a little weird. It's a nontrivial effort to fix and would obviously impact all other package types, so it's not a priority at the moment.
We would love to redo the UI at some point, so I think it'd make sense to do it then.
Thanks,
Alana
Hi @Sigve-opedal_6476 ,
Thanks for clarifying; I should have mentioned that I know basically nothing about rpm except how a repository works 
A gpgkey is just a file, right? I think you were on the right track with using an asset directory. I guess you would configure things like this, for an asset directory called gpg with the key stored at docker/gpg inside it:
gpgkey=https://myproget.mycompany.com/endpoints/gpg/content/docker/gpg
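To get the key file there in the first place, you can just PUT it to that same endpoint (the file name is a placeholder; X-ApiKey is the standard ProGet API key header):
curl -X PUT --header "X-ApiKey: (your key)" --data-binary @mykey.gpg https://myproget.mycompany.com/endpoints/gpg/content/docker/gpg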
There's really no "timestamp" on web-based files; the browser/protocol just does a GET and the bytes are returned. I can't imagine rpm is looking at a Last-Modified or cache header.
Thanks,
Alana
Hi @aristo_4359 ,
I researched this a little further, and I'm afraid Conan packages cannot be imported from Artifactory at this time; they do not behave like other repositories in Artifactory, which means it's a nontrivial effort to figure out how to import them using a different/alternative Artifactory API.
We will clarify this in an upcoming ProGet release via PG-3122.
Thanks,
Alana
I'm not really sure what you mean by this request.
An RPM repository is basically just a "dumb file server" with a repodata.xml and a bunch of .tar.gz index files. The rpm client downloads these files and does all the gpg stuff.
ProGet feeds implement an RPM repository and generate these indexes on the fly... but to the client, it seems like it's just downloading static files.
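For example, you can fetch the index yourself with plain HTTP, same as the client does (the exact feed URL pattern may vary on your instance):
curl https://myproget.mycompany.com/rpm/my-rpm-feed/repodata/repomd.xml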
Thanks,
Alana
Hi @layfield_8963 ,
About the only knowledge of "homebrew" I have is that it's some kind of thing on Mac, perhaps like apt or chocolatey. I think we'd normally be all about hosting a "cask" or "keg", but I don't think that's what you're asking 

It doesn't make sense for us to try supporting an ecosystem we know so little about. That's the same reason we never did a Chocolatey package on our own, but our friend @steviecoaster from Chocolatey created/maintains the proget package at chocolatey.org, and that has worked out just fine.
Anyway if you just need "something simple" from us like accepting a (simple) pull request or editing a build/deploy script, we might be able to do that. But otherwise it won't make sense for us to invest in learning about brew and supporting it.
Cheers,
Alana
Hi @tyler_5201
The underlying error is that there is no connection string, as you noticed.
The connection string is stored in a file (/var/proget/database/.pgsqlconn) that should be accessible to the container. I haven't tested it, but I suppose if that file is missing or deleted, you might run into these issues.
It should be created on startup of a new container, however, so it's kind of weird. I think you'll want to "play" with it a bit, since I'm thinking there's clearly something going on with your permissions.
Note that the connection string can also be specified as an environment variable, but I don't think that applies here, since you're trying to configure the embedded database:
https://docs.inedo.com/docs/installation/linux/docker-guide#supported-environment-variables
Thanks,
Alana
@udi-moshe_0021 sounds like it was a temporary outage on npmjs.org, or perhaps even your proxy server. I wouldn't worry about it if it's working now, since it's not something you could really control anyway.
Hi @carl-westman_8110 ,
The error message means that the database wasn't updated as per normal during the start-up process. It's hard to guess why, as we have special handling for that.
It's likely that restarting the service would have fixed it, but downgrading and then upgrading would have forced the update as well. Unfortunately it's hard to say at this point.
Upgrading to 2025.10 should be fine.
Thanks,
Alana
Hi @jfullmer_7346,
It's nothing you did.
The underlying issue is that a bug in ProGet allowed WinSCP (ID=563) to be added to the PackageNameIds table; that should never have happened, since NuGet is case-insensitive. We've since fixed that bug.
However, once you have a duplicate name, weird things happen since querying for "give me the PackageID for nuget://winscp" returns two results instead of one. So now when you query "give me the VersionID for (ID=563)-v6.5.3", a new entry is created.
This has been a very long-standing issue, but there aren't any major consequences to these "weird things" except casing in the UI, and now the health check fails.
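If you're curious, the duplicates are easy to spot with a query along these lines (the "Package_Name" column name is my guess; the table name is real):
SELECT LOWER("Package_Name") AS "name", COUNT(*) AS "entries"
FROM "PackageNameIds"
GROUP BY LOWER("Package_Name")
HAVING COUNT(*) > 1;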
But we're on it :)
Thanks,
Alana
Hi @jfullmer_7346 ,
I haven't had a chance to look into more details, but thanks for providing the results of the query.
FYI - the PackageNameIds and PackageVersionIds tables are designed as a kind of "permanent, read-only record" -- once added, entries are not deleted or modified, even if all packages are deleted (i.e. from FeedPackageVersions). This is why the "duplicate name" is such a headache to deal with.
That said, on a quick glance, we can see exactly where the error is coming from: there are duplicate versions (i.e. (ID=563)-v6.5.3 and (ID=562)-v6.5.3). So, when we try to deduplicate (ID=563) and (ID=562) (i.e. winscp and WinSCP), we get the error as expected.
What's not expected is that those versions were not de-duplicated in the earlier pass. My guess is that it's related to winscp being in one feed and WinSCP being in the other -- we tried to be conservative, and keep the de-duplication to packages related to the feed.
I'm thinking we just change that logic to "all packages of the feed type". Anyway, please stay tuned. We'll try to get it in the next maintenance release.
Thanks,
Alana
@jfullmer_7346 thanks for giving it a shot, we'll take a closer look!
The "good news" is that the error message is a "sanity check" failure, so now have an idea of what's causing the error:
-- Sanity Check (ensure there are no duplicate versions)
IF EXISTS (
SELECT *
FROM "PackageVersionIds" PV_D,
"PackageVersionIds" PV_C
WHERE PV_D."PackageName_Id" = "@Duplicate_PackageName_Id"
AND PV_C."PackageName_Id" = "@Canonical_PackageName_Id"
AND PV_D."Package_Version" = PV_C."Package_Version"
AND ( (PV_D."Qualifier_Text" IS NULL AND PV_C."Qualifier_Text" IS NULL)
OR (PV_D."Qualifier_Text" = PV_C."Qualifier_Text") )
) THEN RAISE EXCEPTION 'Cannot deduplicate given nameid'; RETURN; END IF;
In this case, it's saying that there are "duplicate versions" remaining (i.e. WinSCP-1.0.0 and winscp-1.0.0). Those should have been de-duplicated earlier. I wonder if the PackageVersionIds_GetDuplicates() function is not returning the right results.
I'm not sure what your experience w/ PostgreSQL is, but are you able to query the embedded database? If not, that's fine... it's not meant to be easy to query.
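If you are, something like this should show whether those duplicate versions really are still there (562/563 come from your earlier results; the column names are from the sanity check above):
SELECT "PackageName_Id", "Package_Version", "Qualifier_Text"
FROM "PackageVersionIds"
WHERE "PackageName_Id" IN (562, 563)
ORDER BY "Package_Version", "PackageName_Id";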
Also, should the integrity check be taking 30 minutes?
Maybe. The integrity check needs to verify file hashes, so that involves opening and streaming through all the files on disk. So when you have a lot of large packages, then it's gonna take a while.
ProGet can handle most invalid Maven version numbers, but Borca-SR2 is really invalid and isn't currently supported. Our "name vs version" parsing requires that versions start with numbers and artifacts start with letters. This has been a Maven rule for 20+ years now.
It's nontrivial and quite risky to change our parsing logic, so it's not something we're keen on doing in a maintenance release. This scenario seems to be very rare, impacting ancient artifacts and a few cases where authors didn't use Maven to deploy the artifacts.
Thanks,
Alana
@uli_2533 thanks for the additional info, great find in the source code too!!
On our end, I was looking at the PUT code, where a 301 kind of made sense. I think it must have been some kind of regression on the GET request? Not sure why it didn't get noticed before, but it's a trivial fix.
PG-3108 will be in the next maintenance release (Sep 19)... or if you want to try it now, it's in inedo/proget:25.0.10-ci.9 Docker image.
Thanks,
Alana
Hi @bohdan-cech_2403 ,
I'm not sure if that's the issue...
Returning a 201 has been the behavior for as long as we've had the feed (even the old version of the feed). The official Maven client does not seem to complain or cause any error in our testing, and no other user reported it as a problem.
Any idea why it's happening "all of a sudden" for you? Is there a new version of Maven or something?
FYI the PUT uploads for hash files are ignored and a 201 is always returned.
Thanks,
Alana
@jorgen-nilsson_1299 said in Machine ID changes after restart:
What is the Machine ID based on and how can I trouble shoot this? Any way to set a static Machine ID?
The Machine ID is based on the CPU Vendor ID, Machine Name (host name on Docker), and OS Version.
Hopefully @felfert gave some advice on how to make sure those don't change.
@pariv_0352 the code is not fixed in 25.0.9
However, inedo/proget:25.0.10-ci.5 will have the new code that should prevent this error
@jfullmer_7346 thanks! As an FYI... we've added Duplicate Name (e.g. winscp and WinSCP) and Duplicate Version (e.g. winscp-4.0.0 and WinSCP-4.0.0) checking to the feed integrity check.
Hi @pmsensi ,
Thanks for the heads-up; looks like there was a replication issue with one of our edge nodes. It should be there now
Thanks,
Alana
The "unexpected argument" running without quotes is expected, but it works fine when I run with quotes. I'm afraid I can't reproduce this.
I would check under Admin > Diagnostic Center to see if anythings logged. Alternatively, you may need to query the API by doing something like this:
curl http://server:8624/endpoints/Test/metadata/metadata-test/v1/1%20-%20Normal.txt
Thanks,
Alana
Hi @bohdan-cech_2403 ,
Thanks for sharing that; I can confirm we received it, reproduced it, and fixed it (PG-3105). If you'd like to try it, you can get the fix in 25.0.10-ci.2 - otherwise it'll be in the next maintenance release (next Friday).
@wechselberg-nisboerge_3629 FYI I entered the url/username/password into ProGet, but I got the message "Failed to find any registries." So, I tried with curl and I got this:
$> curl "https://artifactory.REDACTED.com/artifactory/api/repositories?type=local" --user support
Enter host password for user 'support':
{
"errors" : [ {
"status" : 401,
"message" : "Artifactory configured to accept only encrypted passwords but received a clear text password, getting the encrypted password can be done via the WebUI."
} ]
}
I have no idea what explains the different behavior. Anyway, I logged into the portal on that URL and generated some kind of key, and it let me connect.
Thanks,
Alana
Hi @felfert,
So far as I can tell, the IP isn't currently logged in these messages... I can see how that would be helpful.
I can certainly do that (which would then show the X-Forwarded-For value when available), but I wanted to make sure I'm looking in the right place, because I don't see IP info now.
Thanks,
Alana
Hi @felfert ,
As an update, we are planning on using this pattern instead of row/table locking (PG-3104). It gives us a lot more control and makes it a lot easier to avoid deadlocks.
I still can't reproduce the issue, but I see no reason this won't work.
CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
(
"@Feed_Id" INT,
"@Blob_Digest" VARCHAR(128),
"@Blob_Size" BIGINT,
"@MediaType_Name" VARCHAR(255) = NULL,
"@Cached_Indicator" BOOLEAN = NULL,
"@Download_Count" INT = NULL,
"@DockerBlob_Id" INT = NULL
)
RETURNS INT
LANGUAGE plpgsql
AS $$
BEGIN
-- avoid race condition when two procs call at exact same time
PERFORM PG_ADVISORY_XACT_LOCK(HASHTEXT(CONCAT_WS('DockerBlobs_CreateOrUpdateBlob', "@Feed_Id", LOWER("@Blob_Digest"))));
SELECT "DockerBlob_Id"
INTO "@DockerBlob_Id"
FROM "DockerBlobs"
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest";
WITH updated AS
(
UPDATE "DockerBlobs"
SET "Blob_Size" = "@Blob_Size",
"MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
"Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest"
RETURNING *
)
INSERT INTO "DockerBlobs"
(
"Feed_Id",
"Blob_Digest",
"Download_Count",
"Blob_Size",
"MediaType_Name",
"Cached_Indicator"
)
SELECT
"@Feed_Id",
"@Blob_Digest",
COALESCE("@Download_Count", 0),
"@Blob_Size",
"@MediaType_Name",
COALESCE("@Cached_Indicator", 'N')
WHERE NOT EXISTS (SELECT * FROM updated)
RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";
RETURN "@DockerBlob_Id";
END $$;
@felfert amazing!! That script will come in handy when we need to help users patch their instance; we can also try to add something that allows you to patch via the UI as well!!
@felfert thanks for confirming!!
FYI the fix has not been applied to the code yet, but you can patch the stored procedure (painfully) as a workaround for now. We will try to find a better solution. The only thing I can imagine happening is that the PUT is happening immediately after the PATCH finishes, but before the client receives a 200 response. I have no idea though.
We'll figure something out, now that we know where it is thanks to your help!!
Hi @parthu-reddy,
ProGet 2025 supports existing Maven classic feeds; you should be able to migrate just as you were in ProGet 2024.
Thanks,
Alana
Hi @parthu-reddy ,
Since these are network-level errors, you would need to use a tool like Wireshark or another packet analyzer to troubleshoot these kind of connectivity failures.
Thanks,
Alana
Did that, verified that the function actually has changed, and did another test. Unfortunately this did not help; the error was exactly the same as in my above Wireshark dump.
Or does one have to "compile" the function somehow after replacing? (I never dealt with SQL functions before and in general have very limited SQL knowledge.)
Ah, that's a shame! We're kind of new to "patching" functions like this in PostgreSQL, but I think that should have worked to change the code. And the code change itself should have worked too.
If you don't mind, try one other patch, where we select out the Blob_Id again at the end:
CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
(
"@Feed_Id" INT,
"@Blob_Digest" VARCHAR(128),
"@Blob_Size" BIGINT,
"@MediaType_Name" VARCHAR(255) = NULL,
"@Cached_Indicator" BOOLEAN = NULL,
"@Download_Count" INT = NULL,
"@DockerBlob_Id" INT = NULL
)
RETURNS INT
LANGUAGE plpgsql
AS $$
BEGIN
SELECT "DockerBlob_Id"
INTO "@DockerBlob_Id"
FROM "DockerBlobs"
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest"
FOR UPDATE;
WITH updated AS
(
UPDATE "DockerBlobs"
SET "Blob_Size" = "@Blob_Size",
"MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
"Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest"
RETURNING *
)
INSERT INTO "DockerBlobs"
(
"Feed_Id",
"Blob_Digest",
"Download_Count",
"Blob_Size",
"MediaType_Name",
"Cached_Indicator"
)
SELECT
"@Feed_Id",
"@Blob_Digest",
COALESCE("@Download_Count", 0),
"@Blob_Size",
"@MediaType_Name",
COALESCE("@Cached_Indicator", 'N')
WHERE NOT EXISTS (SELECT * FROM updated)
RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";
SELECT "DockerBlob_Id"
INTO "@DockerBlob_Id"
FROM "DockerBlobs"
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest"
RETURN "@DockerBlob_Id";
END $$;
If this doesn't do the trick, I think we need to look a lot closer.
Hi @inedo_1308 ,
I forgot how it worked in the preview migration, but the connection string is stored in the database directory (/var/proget/database/.pgsqlconn).
Thanks,
Alana
@inedo_1308 sounds good!
The code would almost certainly be the same, since it hasn't been updated since we did the PostgreSQL version of the script.
So, I think it's a race condition, though I don't know how it would happen. However, if it's a race condition, then it should be solved with an UPDLOCK (or whatever the PostgreSQL equivalent is).
If you're able to patch the procedure, could you add FOR UPDATE as follows? We are still relatively new to PostgreSQL, so I don't know if this is the right way to do it in this case.
I think a second SELECT could also work, but I dunno.
CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
(
"@Feed_Id" INT,
"@Blob_Digest" VARCHAR(128),
"@Blob_Size" BIGINT,
"@MediaType_Name" VARCHAR(255) = NULL,
"@Cached_Indicator" BOOLEAN = NULL,
"@Download_Count" INT = NULL,
"@DockerBlob_Id" INT = NULL
)
RETURNS INT
LANGUAGE plpgsql
AS $$
BEGIN
SELECT "DockerBlob_Id"
INTO "@DockerBlob_Id"
FROM "DockerBlobs"
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest"
FOR UPDATE;
WITH updated AS
(
UPDATE "DockerBlobs"
SET "Blob_Size" = "@Blob_Size",
"MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
"Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest"
RETURNING *
)
INSERT INTO "DockerBlobs"
(
"Feed_Id",
"Blob_Digest",
"Download_Count",
"Blob_Size",
"MediaType_Name",
"Cached_Indicator"
)
SELECT
"@Feed_Id",
"@Blob_Digest",
COALESCE("@Download_Count", 0),
"@Blob_Size",
"@MediaType_Name",
COALESCE("@Cached_Indicator", 'N')
WHERE NOT EXISTS (SELECT * FROM updated)
RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";
RETURN "@DockerBlob_Id";
END $$;
Hmmm, the only possibility I can see is that DockerBlobs_CreateOrUpdateBlob is returning NULL, which is failing the conversion to the int dockerBlobId. That's the only nullable conversion on that line.
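That matches the error text too; it's exactly what you get from reading a null Nullable<int> in .NET:
int? dockerBlobId = null;       // what the procedure effectively returned
int value = dockerBlobId.Value; // throws InvalidOperationException: "Nullable object must have a value."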
There's gotta be some kind of bug with this PostgreSQL procedure. Maybe a race condition??
CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
(
"@Feed_Id" INT,
"@Blob_Digest" VARCHAR(128),
"@Blob_Size" BIGINT,
"@MediaType_Name" VARCHAR(255) = NULL,
"@Cached_Indicator" BOOLEAN = NULL,
"@Download_Count" INT = NULL,
"@DockerBlob_Id" INT = NULL
)
RETURNS INT
LANGUAGE plpgsql
AS $$
BEGIN
SELECT "DockerBlob_Id"
INTO "@DockerBlob_Id"
FROM "DockerBlobs"
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest";
WITH updated AS
(
UPDATE "DockerBlobs"
SET "Blob_Size" = "@Blob_Size",
"MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
"Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest"
RETURNING *
)
INSERT INTO "DockerBlobs"
(
"Feed_Id",
"Blob_Digest",
"Download_Count",
"Blob_Size",
"MediaType_Name",
"Cached_Indicator"
)
SELECT
"@Feed_Id",
"@Blob_Digest",
COALESCE("@Download_Count", 0),
"@Blob_Size",
"@MediaType_Name",
COALESCE("@Cached_Indicator", 'N')
WHERE NOT EXISTS (SELECT * FROM updated)
RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";
RETURN "@DockerBlob_Id";
END $$;
Anyway, we'll study that another day... at least we think we know specifically where the issue is.
In case you're curious, here is #381

.... and now to figure out what could possibly be null in that specific area 
@inedo_1308 finally!!! nice find :)
@inedo_1308 said in proget 500 Internal server error when pushing to a proget docker feed:
That change looks wrong to me, because (error.StatusCode == 500) is more specific/restrictive than (error.StatusCode >= 500 || context.Response.HeadersWritten)
In other words: It logs less than before.
Good spot / good find -- though we never actually raise anything except 500 anyway, so I thought it would be fine
public static DockerException Unknown(string message) => new DockerException(500, "UNKNOWN", message);
Anyway, writing the detail to that array will hopefully catch it.
@inedo_1308 thanks for continuing to help us figure this out
Do you mind trying inedo/proget:25.0.9-ci.7?
I'm thinking it's some kind of middleware bug (our code? .NET code? who knows), and I can't see why the logging code I added didn't log that in the Diagnostic Center.
Whatever the case, we can see the error JSON is being written: {"errors":[{"code":"UNKNOWN","message":"Nullable object must have a value.","detail":[]}]} ... so I just added the stack trace to the detail element.
FYI, the code:
catch (Exception ex)
{
    WriteError(context, DockerException.Unknown(ex.Message), feed, w => w.WriteValue(ex.StackTrace)); // I added the final argument
}
....
private static void WriteError(AhHttpContext context, DockerException error, DockerFeed? feed, Action<JsonTextWriter>? writeDetail = null)
{
    // code from before that should have worked
    if (error.StatusCode == 500)
        WUtil.LogFeedException(error.StatusCode, feed, context, error);
    if (!context.Response.HeadersWritten)
    {
        context.Response.Clear();
        context.Response.StatusCode = error.StatusCode;
        context.Response.ContentType = "application/json";
        using var jsonWriter = new JsonTextWriter(context.Response.Output);
        jsonWriter.WriteStartObject();
        jsonWriter.WritePropertyName("errors");
        jsonWriter.WriteStartArray();
        jsonWriter.WriteStartObject();
        jsonWriter.WritePropertyName("code");
        jsonWriter.WriteValue(error.ErrorCode);
        jsonWriter.WritePropertyName("message");
        jsonWriter.WriteValue(error.Message);
        jsonWriter.WritePropertyName("detail");
        jsonWriter.WriteStartArray();
        writeDetail?.Invoke(jsonWriter);
        jsonWriter.WriteEndArray();
        jsonWriter.WriteEndObject();
        jsonWriter.WriteEndArray();
        jsonWriter.WriteEndObject();
    }
}
Hi @james-woods_8996 ,
I looked into this a little more, and it turns out that our cloud storage providers do in fact support chunking -- but the outgoing replication code is not taking advantage of that. We would like to fix that in ProGet 2026.
However, in the meantime, we can fix the incoming replication code (i.e. what's throwing the error) pretty easily via PG-3102 - hopefully we'll get that in the upcoming maintenance release.
Thanks,
Alana