Just a humble reminder - PG-3094 did not fix the problem in 25.0.9 for me.
Thanks
-Fritz
@wechselberg-nisboerge_3629 said in ERROR while migrating maven repository from Jfrog Artifactory.:
I am especially confused about the fact that it prints Windows paths, even though it is running as Linux Docker container.
That one fooled me previously as well, but if you look carefully, those are source paths from their build-system
e.g. C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\...
@atripp I did write this from memory and I don't recall where exactly I saw the IP logged. But I usually look only in two places:
Unfortunately the system journal has just been rotated (on the 1st of September), and I keep old logs only for the last month. So I looked into the August logs, but the error (something related to an old Debian feed) wasn't in there either. It must have happened before that.
Anyway, it would be very useful if you logged either the remote IP or (if present) the value of the X-Forwarded-For header.
Something like this (C-style pseudocode):
printf("some error from %s", (xForwardedValue && strlen(xForwardedValue)) ? xForwardedValue : remoteIp);
Cheers
-Fritz
@atripp I tested this one and it works.
However:
In the meantime, I have updated to the released 25.0.9 (aka latest), and now I cannot reproduce the error either - even if I explicitly revert the function to the old one that produced the error before.
So I'm afraid I might not be a reliable tester for this specific error anymore.
Cheers
-Fritz
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
we can also try to add something that allows you to patch via the UI as well!!
Huh? Well, I don't think this would be such a good idea. I assume you guys already have some code in ProGet for upgrading your DB schema when a new version requires it. That's where the function should be replaced. This script is only meant as an intermediate solution until you have updated the function.
Cheers
-Fritz
@gdivis I just updated to 25.0.9 (aka 2025.9 Build 16), but unfortunately it still behaves the same.
I dug a little deeper and found that the files below repodata/ are no longer updated at all. I tried the following things:
INFO : 2025-08-30 17:43:00Z - Performing reindex for Rpm feed 13: cloudtools-rpm
INFO : 2025-08-30 17:43:00Z - Deactivating feed.
DEBUG: 2025-08-30 17:43:06Z - Checking for orphaned FeedPackages...
DEBUG: 2025-08-30 17:43:06Z - No orphaned FeedPackages found.
DEBUG: 2025-08-30 17:43:06Z - Recalculating latest versions of packages...
DEBUG: 2025-08-30 17:43:06Z - Health check complete; recording results...
INFO : 2025-08-30 17:43:06Z - Re-activating feed.
Here is a listing of the feed's storage (mapped here to /var/data/proget/packages/.rpm/F13):
ls -ltrR
.:
total 8
drwxr-xr-x. 2 root root 4096 Aug 30 19:41 packages
drwxr-xr-x. 2 root root 4096 Aug 30 19:44 repodata
./packages:
total 309796
-rw-r--r--. 1 root root 12490071 Jun 29 2024 cloudtools-2.5.233-1.noarch.rpm
-rw-r--r--. 1 root root 12490704 Jun 29 2024 cloudtools-2.5.234-1.noarch.rpm
-rw-r--r--. 1 root root 12495077 Jun 29 2024 cloudtools-2.5.236-1.noarch.rpm
-rw-r--r--. 1 root root 12494704 Jun 29 2024 cloudtools-2.5.235-1.noarch.rpm
-rw-r--r--. 1 root root 12495445 Jun 29 2024 cloudtools-2.5.237-1.noarch.rpm
-rw-r--r--. 1 root root 12495891 Jun 29 2024 cloudtools-2.5.240-1.noarch.rpm
-rw-r--r--. 1 root root 12691058 Jun 29 2024 cloudtools-2.5.241-1.noarch.rpm
-rw-r--r--. 1 root root 12691053 Jun 29 2024 cloudtools-2.5.242-1.noarch.rpm
-rw-r--r--. 1 root root 12691035 Jun 29 2024 cloudtools-2.5.243-1.noarch.rpm
-rw-r--r--. 1 root root 12684216 Jul 5 2024 cloudtools-2.5.244-1.noarch.rpm
-rw-r--r--. 1 root root 12684455 Jan 19 2025 cloudtools-2.5.246-1.noarch.rpm
-rw-r--r--. 1 root root 12678409 Apr 12 10:42 cloudtools-2.5.249-1.noarch.rpm
-rw-r--r--. 1 root root 12678450 May 20 14:30 cloudtools-2.5.250-1.noarch.rpm
-rw-r--r--. 1 root root 12678452 May 20 14:48 cloudtools-2.5.251-1.noarch.rpm
-rw-r--r--. 1 root root 12678743 May 27 05:56 cloudtools-2.5.252-1.noarch.rpm
-rw-r--r--. 1 root root 12678824 May 27 10:01 cloudtools-2.5.253-1.noarch.rpm
-rw-r--r--. 1 root root 12678806 May 29 08:45 cloudtools-2.5.254-1.noarch.rpm
-rw-r--r--. 1 root root 12871348 Jun 1 22:59 cloudtools-2.6.255-1.noarch.rpm
-rw-r--r--. 1 root root 12896455 Jun 2 08:36 cloudtools-2.6.256-1.noarch.rpm
-rw-r--r--. 1 root root 12896459 Jun 2 11:39 cloudtools-2.6.257-1.noarch.rpm
-rw-r--r--. 1 root root 12896469 Jun 2 11:41 cloudtools-2.6.258-1.noarch.rpm
-rw-r--r--. 1 root root 12785737 Jun 2 15:22 cloudtools-2.6.262-1.noarch.rpm
-rw-r--r--. 1 root root 12785736 Jun 6 09:55 cloudtools-2.6.263-1.noarch.rpm
-rw-r--r--. 1 root root 12786981 Jun 11 17:24 cloudtools-2.6.265-1.noarch.rpm
-rw-r--r--. 1 root root 12787141 Aug 30 19:41 cloudtools-2.6.267-1.noarch.rpm
./repodata:
total 12
-rw-r--r--. 1 root root 1468 Aug 20 09:02 2b01ecf5699ac41767bfacd995e22b274cfdb81920751989325f34678e402232-filelists.xml.gz
-rw-r--r--. 1 root root 2846 Aug 20 09:02 dd5a0e1508733bde65393ae882018efb96c8af3433913ce07be20e08fad96fae-primary.xml.gz
-rw-r--r--. 1 root root 1354 Aug 20 09:02 280d74eede038f8fede238c851204ad378e0373e473374541891b0937dbc99e9-other.xml.gz
As you can see in this listing, the package cloudtools-2.6.267-1.noarch.rpm
has a timestamp from today (because I just uploaded it), but the *.xml.gz files have not been touched since Aug 20.
And it is these files that contain the actual index and all the info necessary for downloading/installing a package.
If you want these files, I can provide them, but unfortunately this forum has no facility for attaching files. (At least I can't find one.)
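The staleness is easy to demonstrate: the new version simply never shows up in the primary index. Here is a self-contained sketch of that check, using synthetic data in place of the real primary.xml.gz (on the actual feed you would zcat the file under repodata/ instead):

```shell
# Simulate a stale primary index that predates the newly uploaded package,
# then look for the new version the same way one would on the real feed.
tmp=$(mktemp -d)
printf '<package><name>cloudtools</name><version ver="2.6.265" rel="1"/></package>\n' \
    | gzip > "$tmp/primary.xml.gz"
if zcat "$tmp/primary.xml.gz" | grep -q '2\.6\.267'; then
    result="indexed"
else
    result="NOT indexed - repodata is stale"
fi
echo "$result"
rm -rf "$tmp"
```

On my feed the real check (zcat repodata/*-primary.xml.gz | grep 2.6.267) comes up empty as well.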
Later:
I just noticed 2 errors in the Diagnostic Center from the time I deleted the cloudtools-2.6.267-1.noarch.rpm:
An error occurred in the web application: cloudtools 2.6.267-1 not found.
URL: http://proget.graudatastorage.intern/feeds/cloudtools-rpm/cloudtools/2.6.267-1
Referrer: http://proget.graudatastorage.intern/feeds/cloudtools-rpm
User: felfert
User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:141.0) Gecko/20100101 Firefox/141.0
Stack trace: at Inedo.ProGet.WebApplication.Pages.Packages.PackagePageBase.CreateChildControlsAsync() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E589193\Src\src\ProGet\WebApplication\Pages\Packages\PackagePageBase.cs:line 85
at Inedo.ProGet.WebApplication.Pages.ProGetSimplePage.InitializeAsync() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E589193\Src\src\ProGet\WebApplication\Pages\ProGetSimplePage.cs:line 69
at Inedo.Web.PageFree.SimplePageBase.ExecutePageLifeCycleAsync()
at Inedo.Web.PageFree.SimplePageBase.ProcessRequestAsync(AhHttpContext context)
at Inedo.Web.AhWebMiddleware.InvokeAsync(HttpContext context)
::Web Error on 08/30/2025 19:39:13::
An error occurred in the web application: cloudtools 2.6.267-1 not found.
URL: http://proget.graudatastorage.intern/feeds/cloudtools-rpm/cloudtools/2.6.267-1
Referrer: http://proget.graudatastorage.intern/feeds/cloudtools-rpm
User: felfert
User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:141.0) Gecko/20100101 Firefox/141.0
Stack trace: at Inedo.ProGet.WebApplication.Pages.Packages.PackagePageBase.CreateChildControlsAsync() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E589193\Src\src\ProGet\WebApplication\Pages\Packages\PackagePageBase.cs:line 85
at Inedo.ProGet.WebApplication.Pages.ProGetSimplePage.InitializeAsync() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E589193\Src\src\ProGet\WebApplication\Pages\ProGetSimplePage.cs:line 69
at Inedo.Web.PageFree.SimplePageBase.ExecutePageLifeCycleAsync()
at Inedo.Web.PageFree.SimplePageBase.ProcessRequestAsync(AhHttpContext context)
at Inedo.Web.AhWebMiddleware.InvokeAsync(HttpContext context)
::Web Error on 08/30/2025 19:39:00::
Thanks
-Fritz
As announced above, I have created a little shell script to apply the fix for the bug discussed in this thread.
Prerequisites:
Usage:
fix-docker-push.sh
docker exec -it InsertYourContainersNameHere /bin/bash /var/proget/backups/fix-docker-push.sh
Here is the script:
#! /bin/bash
sqlconn=/var/proget/database/.pgsqlconn
if [ -f "${sqlconn}" ] ; then
. ${sqlconn}
PGPASSWORD="${Password}" psql -h ${Host} -p ${Port} -U ${Username} ${Database} <<-"EOF" |
\sf "DockerBlobs_CreateOrUpdateBlob"
EOF
grep -q "FOR UPDATE;"
if [ $? = 0 ] ; then
echo "It looks like the fix was already applied. Aborting"
exit 0
fi
cat<<-EOF
This script applies a fix for pushing images to a proget docker feed.
WARNING: This modifies your proget postgres database. BACKUP YOUR DATABASE FIRST.
Enter YES if you want to continue or hit Ctrl-C to abort.
EOF
read resp; if [ "${resp}" != "YES" ] ; then
echo "Aborting"
exit 0
fi
PGPASSWORD="${Password}" psql -h ${Host} -p ${Port} -U ${Username} ${Database} <<-"EOF"
CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
(
"@Feed_Id" INT,
"@Blob_Digest" VARCHAR(128),
"@Blob_Size" BIGINT,
"@MediaType_Name" VARCHAR(255) = NULL,
"@Cached_Indicator" BOOLEAN = NULL,
"@Download_Count" INT = NULL,
"@DockerBlob_Id" INT = NULL
)
RETURNS INT
LANGUAGE plpgsql
AS $$
BEGIN
SELECT "DockerBlob_Id"
INTO "@DockerBlob_Id"
FROM "DockerBlobs"
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest"
FOR UPDATE;
WITH updated AS
(
UPDATE "DockerBlobs"
SET "Blob_Size" = "@Blob_Size",
"MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
"Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest"
RETURNING *
)
INSERT INTO "DockerBlobs"
(
"Feed_Id",
"Blob_Digest",
"Download_Count",
"Blob_Size",
"MediaType_Name",
"Cached_Indicator"
)
SELECT
"@Feed_Id",
"@Blob_Digest",
COALESCE("@Download_Count", 0),
"@Blob_Size",
"@MediaType_Name",
COALESCE("@Cached_Indicator", 'N')
WHERE NOT EXISTS (SELECT * FROM updated)
RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";
SELECT "DockerBlob_Id"
INTO "@DockerBlob_Id"
FROM "DockerBlobs"
WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
AND "Blob_Digest" = "@Blob_Digest";
RETURN "@DockerBlob_Id";
END $$;
EOF
else
echo "Postgres connection parameters (${sqlconn}) not found. Aborting"
fi
Disclaimer
I am NOT an inedo engineer and not affiliated with them.
You are responsible for backing up your database before running this script.
You run this script at your own risk.
I take no responsibility whatsoever for any damages that might occur.
Have fun
-Fritz
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
The only thing I can imagine happening is that the PUT is happening immediately after the PATCH finishes, but before the client receives a 200 response.
Just to confirm your assumption: that's exactly what is happening, as one can see in my first wireshark stream analysis above.
you can patch the stored procedure (painfully) as a workaround for now.
I'm writing a small shell (bash) script that can be invoked inside the proget container to apply the fix easily. I will post it here later. Stay tuned.
Cheers
-Fritz
@pariv_0352 said in proget 500 Internal server error when pushing to a proget docker feed:
Then I tried 25.0.9-ci.14, same result.
I don't think the newer builds contain any fix for this yet. I was able to fix it myself locally in the postgres DB with guidance from @atripp; however, she most likely hasn't seen my success report yet, otherwise she very likely would have acknowledged it here.
The newer builds > 25.0.9-ci.7 are probably other developers working on different things.
And yes, the - 500 1091 part in your error message looks suspiciously like mine (before I fixed it locally).
So: just wait until @atripp confirms here that she has actually applied the fix.
Cheers
-Fritz
Hi guys,
If running ProGet behind a reverse proxy (or a load balancer), currently the IP address of that reverse proxy is shown when some client-related error (or exception) happens.
I had some cases where a client (apt) in the local net was misconfigured and sent requests to obsolete feeds. In order to figure out which client was the culprit, I had to correlate log timestamps from ProGet and the reverse proxy.
Therefore:
It would be very useful to use the value of the X-Forwarded-For
HTTP header (if it exists) when generating warnings or exceptions in the logs, so that the actual IP address of the client causing an error is shown instead of the proxy address. This should also be configurable to happen only if the remote address (or perhaps a subnet, in the case of load balancers) belongs to a proxy known to the administrator, in order to prevent spoofing by malicious users.
At least Apache and Caddy set this header by default when running in reverse-proxy mode. Nginx can do it with a single, well-documented line in the config: proxy_set_header X-Forwarded-For $remote_addr;
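To make the trusted-proxy idea concrete, here is a minimal shell sketch of the selection logic I have in mind (all addresses are hypothetical placeholders): honour X-Forwarded-For only when the direct peer is a known proxy, otherwise fall back to the remote address.

```shell
# Hypothetical addresses for illustration only.
trusted_proxy="10.0.0.2"         # the reverse proxy known to the admin
remote_ip="10.0.0.2"             # direct peer of the TCP connection
x_forwarded_for="192.168.44.10"  # header value set by the proxy

# Use the forwarded address only if the peer is the trusted proxy;
# otherwise a malicious client could spoof the header itself.
if [ "$remote_ip" = "$trusted_proxy" ] && [ -n "$x_forwarded_for" ]; then
    client_ip="$x_forwarded_for"
else
    client_ip="$remote_ip"
fi
echo "logging error for client ${client_ip}"
```

The same two-step check (is the peer trusted? is the header present?) would apply regardless of where it is implemented.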
What do you think?
Cheers
-Fritz
@atripp Ok, this is definitely working now. I pushed approx. 20 different images from 80 MiB up to 2 GiB. None of them produced an error anymore.
Excellent Support!!!
Thanks a lot
-Fritz
Going to push some more images to be on the safe side ...
@atripp
If you don't mind trying one other patch, where we select out the Blob_Id again in the end.
Yippieh - that one did the trick. The error is gone.
There is a semicolon missing after the additional select, before the return.
BTW:
Did you guys generate a different password for the postgres DB than for MSSQL
during migration?
I'm asking because I vaguely remember that I could access the postgres DB using the same password (taken from the MSSQL connection-string environment variable).
Now this doesn't work anymore, and I had to disable password auth in pg_hba.conf in order to use pg_dump and psql inside the proget container.
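For the record, "disable password auth" boils down to switching the auth method of the relevant pg_hba.conf line to trust. Sketched here on a scratch copy rather than the live file (the original line is a guess at a typical default, not the file ProGet actually ships):

```shell
# Demonstrate the edit on a throwaway copy of a pg_hba.conf-style line.
tmp=$(mktemp)
echo "host all all 127.0.0.1/32 scram-sha-256" > "$tmp"
# Replace password auth with trust - this is what lets psql/pg_dump
# connect without a password afterwards (remember to reload postgres).
sed -i 's/scram-sha-256/trust/' "$tmp"
auth=$(awk '{print $5}' "$tmp")
echo "auth method now: $auth"
rm -f "$tmp"
```

Obviously this should only be a temporary measure inside the container.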
Thanks
-Fritz
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
The code would almost certainly be the same, since it hasn't been updated since we did the PostgreSQL version of the script.
You are right: "production" and sandbox showed the same function.
If you're able to patch the procedure, could you add FOR UPDATE as follows? We are still relatively new to PostgreSQL, so I don't know if this is the right way to do it in this case.
Did that, verified that the function had actually changed, and did another test. Unfortunately this did not help; the error was exactly the same as in my wireshark dump above.
Or does one have to "compile" the function somehow after replacing it? (I have never dealt with SQL functions before and have very limited SQL knowledge in general.)
Cheers,
-Fritz
@atripp Tomorrow I will compare this function's code on my "production" ProGet with that on the sandbox. Just to be sure.
Got it from the wireshark dump:
HTTP/1.1 500 Internal Server Error
Date: Fri, 29 Aug 2025 02:08:39 GMT
Server: Kestrel
Content-Length: 1091
Content-Type: application/json
Cache-Control: private
Content-Range: 0-32281444
Vary: Accept-Encoding,Authorization
X-ProGet-Version: 25.0.9.7
X-ProGet-Edition: free
Docker-Distribution-API-Version: registry/2.0
Connection: close
{"errors":[{"code":"UNKNOWN","message":"Nullable object must have a value.","detail":[" at System.Nullable`1.get_Value()\n at Inedo.ProGet.Feeds.Docker.DockerFeed.VerifyAndInstallBlobUpload(String uploadId, DockerDigest digest, String mediaType) in C:\\Users\\builds\\AppData\\Local\\Temp\\InedoAgent\\BuildMaster\\192.168.44.60\\Temp\\_E588882\\Src\\src\\ProGet\\Feeds\\Docker\\DockerFeed.cs:line 381\n at Inedo.ProGet.WebApplication.SimpleHandlers.Docker.DockerHandler.ProcessBlobUploadAsync(AhHttpContext context, WebApiContext apiContext, DockerFeed feed, String repositoryName, String uploadId) in C:\\Users\\builds\\AppData\\Local\\Temp\\InedoAgent\\BuildMaster\\192.168.44.60\\Temp\\_E588882\\Src\\src\\ProGet\\WebApplication\\SimpleHandlers\\Docker\\DockerHandler.cs:line 200\n at Inedo.ProGet.WebApplication.SimpleHandlers.Docker.DockerHandler.ProcessRequestAsync(AhHttpContext context) in C:\\Users\\builds\\AppData\\Local\\Temp\\InedoAgent\\BuildMaster\\192.168.44.60\\Temp\\_E588882\\Src\\src\\ProGet\\WebApplication\\SimpleHandlers\\Docker\\DockerHandler.cs:line 92"]}]}
Hold on. The second number in the last line:
... - 500 1091 application/json 162.3468ms
The 500 obviously is the status code. Is the second number the body size of the error response? If so, that is more than before (in the previous tests, the second number was 90), and a wireshark dump should reveal what it contains... just a minute...
Nope, like before: nothing in the diag center or in the container's output, only this:
Aug 29 03:53:54 gsg1repo.graudatastorage.intern systemd-proget[538207]: Request finished HTTP/1.1 GET http://proget.graudatastorage.intern/0x44/proget/Inedo.ProGet.WebApplication.Controls.Layout.NotificationBar/GetNotifications - 200 30 - 7.0388ms
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: Request finished HTTP/1.1 PATCH http://proget.graudatastorage.intern/v2/testing/xts-addon-webui/blobs/uploads/ca81bc12-62b5-47f9-9bb3-7ef9b8c1530a - 202 0 - 19396.8436ms
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: Request starting HTTP/1.1 PUT http://proget.graudatastorage.intern/v2/testing/xts-addon-webui/blobs/uploads/ca81bc12-62b5-47f9-9bb3-7ef9b8c1530a?digest=sha256%3Acf93e1bb05f4874a5923244a5600fbc0091ef87879b6c780e7baced4b409daa0 - application/octet-stream 0
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: A 500 error occurred in testing: Nullable object must have a value.
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: Request finished HTTP/1.1 PUT http://proget.graudatastorage.intern/v2/testing/xts-addon-webui/blobs/uploads/ca81bc12-62b5-47f9-9bb3-7ef9b8c1530a?digest=sha256%3Acf93e1bb05f4874a5923244a5600fbc0091ef87879b6c780e7baced4b409daa0 - 500 1091 application/json 162.3468ms
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
Do you mind trying inedo/proget:25.0.9-ci.7?
Coming up in a few minutes ...
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
Basically, I just changed the code from this...
if (error.StatusCode >= 500 || context.Response.HeadersWritten)
WUtil.LogFeedException(error.StatusCode, feed, context, error);
...to this...
if (error.StatusCode == 500)
WUtil.LogFeedException(error.StatusCode, feed, context, error);
That change looks wrong to me, because (error.StatusCode == 500)
is more restrictive than (error.StatusCode >= 500 || context.Response.HeadersWritten).
In other words: it logs less than before.
@atripp BTW it's getting weirder by the minute,
I have now created a second instance as a sandbox for testing, and there I tried a bisection of the ProGet versions (starting from 25.0.2 until I reached 25.0.9-ci.6) and was NOT able to reproduce the error with that sandbox instance. I did two test variants: the first sequence of tests used the default DB (MSSQL). After that, I recreated the sandbox VM from scratch and repeated the sequence of tests after migrating ProGet to postgres with 25.0.2 installed.
I did this because the "production" instance was using postgres and was migrated very early after 25.0 was released.
After learning that I cannot reproduce the error that way, I had another idea: performing a DB export on the production ProGet and importing it on the sandbox. Unfortunately, the import does not work: in the import form I chose upload, then selected the DB export, but after a short time it simply said "Not found" in the form. No errors were shown in the diag center either.
Next steps (tomorrow - it's 3 in the morning here) will be:
I will report back when I have more results.
Cheers
-Fritz
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
Sorry but just to confirm, you looked in the Admin > Diagnostic Center?
Yes, exactly. Also: before performing the test, I deleted all messages (there were some unrelated errors from another feed). After the 500 happened, I reloaded the page in the browser and it still said "There are no errors to display."
Cheers
-Fritz
@thomas_3037 @wechselberg-nisboerge_3629
Another idea (just to find common parts of our proget installations or to rule out differences):
Cheers
-Fritz
Just trying to help. This is a wireshark analysis of the TCP stream where it happens:
POST /v2/testing/xts-addon-webui/blobs/uploads/ HTTP/1.1
Host: proget.graudatastorage.intern
User-Agent: containers/5.35.0 (github.com/containers/image)
Content-Length: 0
Authorization: ----REDACTED----
Docker-Distribution-Api-Version: registry/2.0
Accept-Encoding: gzip
HTTP/1.1 202 Accepted
Date: Thu, 28 Aug 2025 07:11:37 GMT
Server: Kestrel
Content-Length: 0
Cache-Control: private
Location: /v2/testing/xts-addon-webui/blobs/uploads/2006612a-8f6d-4d13-8781-6ceaa1f808fb
Vary: Accept-Encoding,Authorization
X-ProGet-Version: 25.0.8.17
X-ProGet-Edition: free
Docker-Distribution-API-Version: registry/2.0
Range: 0-0
Docker-Upload-UUID: 2006612a-8f6d-4d13-8781-6ceaa1f808fb
PATCH /v2/testing/xts-addon-webui/blobs/uploads/2006612a-8f6d-4d13-8781-6ceaa1f808fb HTTP/1.1
Host: proget.graudatastorage.intern
User-Agent: containers/5.35.0 (github.com/containers/image)
Transfer-Encoding: chunked
Authorization: ----REDACTED----
Content-Type: application/octet-stream
Docker-Distribution-Api-Version: registry/2.0
Accept-Encoding: gzip
---- Binary BLOB data in chunked encoding redacted ----
PUT /v2/testing/xts-addon-webui/blobs/uploads/2006612a-8f6d-4d13-8781-6ceaa1f808fb?digest=sha256%3Acf93e1bb05f4874a5923244a5600fbc0091ef87879b6c780e7baced4b409daa0 HTTP/1.1
Host: proget.graudatastorage.intern
User-Agent: containers/5.35.0 (github.com/containers/image)
Content-Length: 0
Authorization: ----REDACTED----
Content-Type: application/octet-stream
Docker-Distribution-Api-Version: registry/2.0
Accept-Encoding: gzip
HTTP/1.1 500 Internal Server Error
Date: Thu, 28 Aug 2025 07:11:50 GMT
Server: Kestrel
Content-Length: 90
Content-Type: application/json
Cache-Control: private
Content-Range: 0-32281444
Vary: Accept-Encoding,Authorization
X-ProGet-Version: 25.0.8.17
X-ProGet-Edition: free
Docker-Distribution-API-Version: registry/2.0
Connection: close
{"errors":[{"code":"UNKNOWN","message":"Nullable object must have a value.","detail":[]}]}
The 500 happens only after the very last PUT request. Depending on the image being uploaded,
there might be uploads of other image layers before that, which always use a PATCH request first for the actual blob, followed by a PUT request with Content-Length: 0.
Since those previous PUT requests are the same (except for the URI, of course) and they do NOT get a 500 response, I suspect this must be some code that only runs at the very end of the image upload.
NOTE: This was done before I updated to 25.0.9-ci.6
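As an aside, the ?digest= parameter on that closing PUT is nothing mysterious: it is just the sha256 of the uploaded blob, prefixed with "sha256:" and URL-encoded. A stand-alone sketch with dummy blob data (the URL path is a placeholder):

```shell
# The ?digest= query parameter of the final PUT is derived from the blob:
# "sha256:" + hex digest, with ":" percent-encoded as %3A in the URL.
blob=$(mktemp)
printf 'example layer data' > "$blob"
hex=$(sha256sum "$blob" | cut -d' ' -f1)
digest="sha256%3A${hex}"
echo "PUT /v2/<feed>/<image>/blobs/uploads/<uuid>?digest=${digest}"
rm -f "$blob"
```

So the server can verify the PUT simply by comparing this digest against what it hashed while receiving the PATCH data.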
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
25.0.9-ci.6
Just tested 25.0.9-ci.6. Unfortunately, there is no stack trace shown in the GUI, nor in stdout/stderr of the docker container that runs proget itself :-(
Update:
It's getting weirder: now the podman push fails too, even with that compression-format option. So it seems unrelated.
Cheers
-Fritz
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
I have no idea what that parameter does
Maybe this sheds some light on the problem (from the containers.conf(5) manpage):
"Specifies the compression format to use when pushing an image. Supported values are: gzip, zstd and zstd:chunked. This field is ignored when pushing images to the docker-daemon and docker-archive formats. It is also ignored when the manifest format is set to v2s2. zstd:chunked is incompatible with encrypting images, and will be treated as zstd with a warning in that case."
The regular docker has an option --output=type=image,compression=zstd
for setting image compression to zstd when running docker build, and I tried that here without success. This leaves the chunked part. My guess is that it enables chunked HTTP transfer encoding during the actual upload of the image blobs. You can verify that by looking at the HTTP headers sent by the client. If my guess is correct, there should be a Transfer-Encoding: chunked
header in the client's upload request. Just a guess for now. I will do a wireshark session of a push with podman and report back.
Cheers
-Fritz
@thomas_3037 said in proget 500 Internal server error when pushing to a proget docker feed:
Hey Fritz,
a colleague found out that the parameter "--compression-format zstd:chunked" in the podman push command helped.
You can also set this option globally.
Hi Marc,
Many thanks for that workaround. Works perfectly with podman.
Unfortunately, I did not find an equivalent option for the regular docker.
Hi guys,
I recently updated from 25.0.8-ci.3 to the latest (25.0.8) ProGet docker image.
With the latest ProGet, I just found another problem when pushing docker images.
I tried both podman on Fedora and a real docker client on Ubuntu, and both report
HTTP status: 500 Internal Server Error at the end of the push.
When I look at the ProGet GUI in the diagnostic center, nothing is logged,
but when I look on the host at the stdout/stderr of the container running ProGet (using journalctl of the service), I can see the following:
Aug 27 10:56:45 gsg1repo.graudatastorage.intern systemd-proget[226868]: Request starting HTTP/1.1 PUT http://proget.graudatastorage.intern/v2/XTS-docker/xts-addon-webui/blobs/uploads/4aa77b08-f058-4f1f-a6a4-6ff2e5a85f0f?digest=sha256%3A2372176015642ce79b416bed3a8b58832f222f02108a268a740c
6d321d57a1a8 - application/octet-stream 0
Aug 27 10:56:45 gsg1repo.graudatastorage.intern systemd-proget[226868]: A 500 error occurred in XTS-docker: Nullable object must have a value.
Aug 27 10:56:45 gsg1repo.graudatastorage.intern systemd-proget[226868]: info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Aug 27 10:56:45 gsg1repo.graudatastorage.intern systemd-proget[226868]: Request finished HTTP/1.1 PUT http://proget.graudatastorage.intern/v2/XTS-docker/xts-addon-webui/blobs/uploads/4aa77b08-f058-4f1f-a6a4-6ff2e5a85f0f?digest=sha256%3A2372176015642ce79b416bed3a8b58832f222f02108a268a740c6d321d57a1a8 - 500 90 application/json 40.3088ms
This happens both with an existing docker feed that has worked before and with a newly created docker feed.
Out of curiosity:
Is there a way to increase ProGet's debugging output (specifically: the name of that Nullable object mentioned in the error log)?
Cheers
-Fritz
@gdivis Thanks for the update
-Fritz
Hi,
Just noticed after update to proget (docker) 25.0.8:
In the gui, the version of a rpm package is displayed correctly:
But dnf (or yum) cannot find the package anymore. sudo dnf update should update the locally installed cloudtools, but instead the following is displayed:
Notice that the release number in the listing is cut off (the dash remains), and when the actual download is attempted, both the dash and the number are missing.
Suspect: PG-3074 (just a gut feeling )
Cheers,
-Fritz
@rhessinger said in ProGet bug - Duplicate custom Feed Usage Instructions for Debian feeds:
I plan to work on these early this week. If you want, I can provide you with a CI release as soon as these are ready if you want to test them early.
I would appreciate that very much :-)
Thanks
-Fritz
@rhessinger said in ProGet bug - Duplicate custom Feed Usage Instructions for Debian feeds:
In the latest updates for the updated Debian feed, most customers were still using the legacy way for adding ProGet as a source.
Actually, this feature (signed-by) is quite old. It was just badly documented for a long time (see this question), and therefore nobody used it.
So, for Ubuntu, it should work since 16.04. On older systems, however, the directory where the keys are to be stored did not exist, and the recommended path of that directory has changed over time.
The implementation in apt works with any directory (as long as it is readable by apt).
Therefore, a slightly modified command for downloading/generating the key file (the first line of the existing instructions) should help, like this:
Old:
curl -fsSL http://my.proget.dom/debian/my-repo/keys/my-repo.asc | sudo gpg --dearmor -o /etc/apt/keyrings/my-repo.gpg
New:
sudo mkdir -p /etc/apt/keyrings; curl -fsSL http://my.proget.dom/debian/my-repo/keys/my-repo.asc | sudo gpg --dearmor -o /etc/apt/keyrings/my-repo.gpg
This should work even on older systems.
Sorry I forgot to mention that in the first place.
Cheers
-Fritz
@rhessinger said in ProGet bug - Duplicate custom Feed Usage Instructions for Debian feeds:
Hi @inedo_1308,
Thanks for sending this over to us. I created a ticket, PG-3069, to fix the exception with editing. I'll also get the feed usage instruction and our documentation updated with the signed by parameter.
Thanks,
Rich
Hi Rich,
I just looked at the ticket you created and I think, I was not clear enough in my description.
(I can see the ticket title only, so forgive me if I'm overly pedantic :-)
Also: the actual reason for always requiring custom instructions is that the built-in instructions are missing an important detail:
Existing example (2nd line):
echo "deb http://...
should read:
echo "deb [signed-by=/etc/apt/keyrings/key.gpg] http://...
where key.gpg is the name of the key generated by the first line.
I remember that already being reported here by another user in the past.
Maybe you can change the built-in instructions so that the custom edit is not necessary anymore.
Cheers
-Fritz
I first noticed this in 2025.2, and it still happens after updating to 2025.6:
How to reproduce:
Result:
The duplication happens twice, resulting in a total of 3 Feed Usage Instructions.
After that, clicking on either of the 2 "duplicated" instances in order to edit it triggers the following error dialog:
(500) Server Error
Sequence contains more than one matching element
For more information, visit the Error Log Page.
The error log shows the following:
An error occurred in the web application: Sequence contains more than one matching element
URL: http://proget.graudatastorage.intern/feed/edit-usage-instructions?feedId=28&feedUsageInstructionId=14&duplicate=False
Referrer: http://proget.graudatastorage.intern/feed/manage?feedId=28
User: felfert
User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:141.0) Gecko/20100101 Firefox/141.0
Stack trace: at System.Linq.ThrowHelper.ThrowMoreThanOneMatchException()
at System.Linq.Enumerable.TryGetSingle[TSource](IEnumerable`1 source, Func`2 predicate, Boolean& found)
at System.Linq.Enumerable.SingleOrDefault[TSource](IEnumerable`1 source, Func`2 predicate)
at Inedo.ProGet.WebApplication.Pages.Feeds.CreateOrUpdateFeedUsageInstructionsPage.CreateChildControls() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E578130\Src\src\ProGet\WebApplication\Pages\Feeds\CreateOrUpdateFeedUsageInstructionsPage.cs:line 30
at Inedo.ProGet.WebApplication.Pages.ProGetSimplePage.InitializeAsync() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E578130\Src\src\ProGet\WebApplication\Pages\ProGetSimplePage.cs:line 70
at Inedo.Web.PageFree.SimplePageBase.ExecutePageLifeCycleAsync()
at Inedo.Web.PageFree.SimplePageBase.ProcessRequestAsync(AhHttpContext context)
at Inedo.Web.AhWebMiddleware.InvokeAsync(HttpContext context)
::Web Error on 07/30/2025 10:30:28::
So: the custom instructions cannot be edited anymore. Deleting one of them deletes both.
Note:
DB is postgres (was migrated after v2025 was released), running in a docker container
Regards
-Fritz
Hi,
I know this is experimental. Hopefully this helps you guys improve this cool feature.
I was testing ProGet in a podman/quadlet environment running on a virtualized Rocky 9 (a RHEL 9 clone). First startup with MSSQL and migration to postgres went well, and everything worked so far. However, when the container is stopped and destroyed and then recreated, the postgres process does not start up.
I made the following observations:
Cannot connect to database; will retry in 1 second... Full error: Failed to connect to 127.0.0.1:5728
podman exec -it systemd-proget /bin/bash
chown -R postgres:postgres /var/proget/packages/database
rm -f /var/proget/packages/database/postmaster.pid
su -g postgres postgres -c ". /var/proget/packages/database/postmaster.opts"
My conclusions:
Cheers
-Fritz
Forgot to mention the image i used:
proget.inedo.com/productimages/inedo/proget-postgres:24.0.37-ci.2
Many thanks, Alana!
I can confirm that the proget:24.0.37-ci.2 container image works like a charm.
Thanks for the quick fix!
Just tried migrating an old legacy Debian feed to a normal Debian feed. No errors were reported during the migration, and in ProGet's web UI everything looks fine. However, the feed is unusable with apt on any distro.
When running apt-get update (or apt update), it complains that the Packages file is not parsable, like this:
Reading package lists... Error!
E: Encountered a section with no Package: header
E: Problem with MergeList /var/lib/apt/lists/proget.graudatastorage.intern_debian_XTC-deb_dists_jammy_main_binary-amd64_Packages
E: The package lists or status file could not be parsed or opened.
If I look at the file mentioned above, it turns out apt is correct; there are multiple errors:
I also noticed some inconvenience with the migration itself:
When migrating, ProGet uses the name of the legacy feed as the distro name for the new feed. So if I want one of the usual distro names (e.g. bookworm), I have to rename the old feed accordingly before starting the migration. This might impact availability. It would be easier if the distro name could be entered manually when starting the migration.
Proget version: 2024.36 (Build 5) running in a docker container
Cheers
-Fritz