proget 500 Internal server error when pushing to a proget docker feed
-
Hi guys,
I recently updated from 25.0.8-ci.3 to the latest (25.0.8) proget docker image.
With the latest proget, I just found another problem when pushing docker images.
I tried both podman on fedora and a real docker client on ubuntu and both report
HTTP status: 500 Internal Server Error at the end of the push.
When I look at the proget gui in the diagnostic center, there is nothing logged,
but when I look on the host in the stdout/stderr of the container running proget (using journalctl of the service), I can see the following:

Aug 27 10:56:45 gsg1repo.graudatastorage.intern systemd-proget[226868]: Request starting HTTP/1.1 PUT http://proget.graudatastorage.intern/v2/XTS-docker/xts-addon-webui/blobs/uploads/4aa77b08-f058-4f1f-a6a4-6ff2e5a85f0f?digest=sha256%3A2372176015642ce79b416bed3a8b58832f222f02108a268a740c6d321d57a1a8 - application/octet-stream 0
Aug 27 10:56:45 gsg1repo.graudatastorage.intern systemd-proget[226868]: A 500 error occurred in XTS-docker: Nullable object must have a value.
Aug 27 10:56:45 gsg1repo.graudatastorage.intern systemd-proget[226868]: info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Aug 27 10:56:45 gsg1repo.graudatastorage.intern systemd-proget[226868]: Request finished HTTP/1.1 PUT http://proget.graudatastorage.intern/v2/XTS-docker/xts-addon-webui/blobs/uploads/4aa77b08-f058-4f1f-a6a4-6ff2e5a85f0f?digest=sha256%3A2372176015642ce79b416bed3a8b58832f222f02108a268a740c6d321d57a1a8 - 500 90 application/json 40.3088ms
This happens regardless of whether I use an existing docker feed that has worked before or a newly created one.
Out of curiosity:
Is there a way to increase proget's debugging output (specifically: the name of that Nullable object mentioned in the error log)?
Cheers
-Fritz
-
Hi @inedo_1308,
Another user has reported this via ticket EDO-12089 (tagging for my reference), and we're currently analyzing their PCAP file.
We cannot reproduce it at all and have no idea where the error could be coming from. The current theory was that it's a middleware issue caused by the way podman makes requests (which differs from the Docker client), but since you mentioned you can repro it with the Docker client, that's probably not it.
Anyway, if you're able to identify a version where this regressed that would be really helpful.
The reason nothing is logged is that it's explicitly suppressed in the code, though I don't know why. We may have to change that so we can see this message logged.
Please stay tuned.
Thanks,
Alana
-
Hey Fritz,
a colleague found out that the parameter "--compression-format zstd:chunked" in the podman push command helped.
You can also set this option globally.
# cat ~/.config/containers/containers.conf
[engine]
compression_format = "zstd:chunked"
It's a workaround, but it helped.
Thanks,
Marc
-
@thomas_3037 said in proget 500 Internal server error when pushing to a proget docker feed:
Hey Fritz,
a colleague found out that the parameter "--compression-format zstd:chunked" in the podman push command helped.
You can also set this option globally.
Hi Marc,
Many thanks for that workaround. Works perfectly with podman.
Unfortunately, I did not find an equivalent option for the regular docker.
-
Thanks for the find @thomas_3037 !
What a strange scenario... I have no idea what that parameter does (aside from what's written in the docs). Do you know if that has to do with how the layers are compressed?
I think this definitely points to some kind of middleware bug, but we're still not sure how to improve the error reporting. We'll update a bit later!
Thanks,
Alana
-
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
I have no idea what that parameter does
Maybe this sheds some light on the problem (from the containers.conf(5) manpage):
"Specifies the compression format to use when pushing an image. Supported values are: gzip, zstd and zstd:chunked. This field is ignored when pushing images to the docker-daemon and docker-archive formats. It is also ignored when the manifest format is set to v2s2. zstd:chunked is incompatible with encrypting images, and will be treated as zstd with a warning in that case."
The regular docker has an option
--output=type=image,compression=zstd
for setting image compression to zstd when running docker build, and I tried that here without success. This leaves the chunked part. My guess is: it enables chunked HTTP transfer encoding during the actual upload of the image blobs. You can verify that by looking at the HTTP headers sent by the client. If my guess is correct, there should be a Transfer-Encoding: chunked header in the PUT request of the client. Just a guess for now. Will do a wireshark session of some push with podman and report back.
Cheers
-Fritz
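For reference, HTTP/1.1 chunked transfer coding frames the body as hex-length-prefixed chunks with a zero-length terminating chunk. A minimal illustration in Python (the tiny chunk size is purely for demonstration, not what podman actually uses):

```python
def chunk_encode(payload: bytes, chunk_size: int = 8) -> bytes:
    # Frame a body as HTTP/1.1 chunked transfer coding: each chunk is
    # "<hex length>\r\n<data>\r\n", terminated by "0\r\n\r\n".
    framed = b""
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        framed += f"{len(chunk):x}\r\n".encode("ascii") + chunk + b"\r\n"
    return framed + b"0\r\n\r\n"

print(chunk_encode(b"layer-blob-data"))
```

A client that sends a body this way sets the Transfer-Encoding: chunked header and omits Content-Length, since the total size is not known up front.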
-
Update:
It's getting weirder: now the podman push fails too, even with that compression-format option. So it seems unrelated.
Cheers
-Fritz
-
Chiming in here:
I have the same issue, and it is behaving very erratically:
- I have some images that are pushed fine without any parameter.
- I have some images that are pushed fine with zstd.
- I have some images that are pushed fine with zstd:chunked.
This is getting very weird ...
-
I just changed some of the logging code via PG-3101, which will probably cause the exception to be logged.
It just finished building the image
inedo/proget:25.0.9-ci.6
-- would someone mind giving that a try? Once we get the stack trace, it'll probably be one of those types of errors.
@thomas_3037 @inedo_1308 @wechselberg-nisboerge_3629
Thanks!!!!
-
Well ...
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 HEAD http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/sha256:341aba9cd1b201183186977186c3e3c8e8a9f79967b31f6fc2bff2a8da510c5d - - -
A 500 error occurred in sst-devops-oci-prod: Nullable object must have a value.
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 PUT http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/uploads/88cd2aa3-f537-4816-bcd8-23b0ed5eefa4?digest=sha256%3A31e705e6bb0246691b44e7e35991da5066444c0e861d9ef40703695b037552b8 - 500 90 application/json 32.8098ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 HEAD http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/sha256:fdab6c583bb88b36e350d7c2011d1a6490bc8a501cc503d9b4a0198ca1f1cfb0 - 404 158 application/json 13.9966ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 HEAD http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/sha256:3626f18144516cf17d3c15f60a55c075d34130f1cc3a00d43fe68edf5568d994 - - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 PATCH http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/uploads/1cf960f6-1b21-4a89-9eb7-1064b2f9a9ad - 202 0 - 11.7030ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 HEAD http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/sha256:123542a56fc98af9ea8a10cb74548ace0c5e05db21e2f82e4ef989870dc72adf - 200 246 application/octet-stream 17.2649ms
A 500 error occurred in sst-devops-oci-prod: Nullable object must have a value.
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 PUT http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/uploads/ba1a754f-3c96-4b3a-8d04-8dd9c90e3db3?digest=sha256%3Aa2c2378190e0b91ba3e274e1d2c236873b1960d15602027fb9110f787feee901 - 500 90 application/json 25.0325ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 HEAD http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/sha256:3626f18144516cf17d3c15f60a55c075d34130f1cc3a00d43fe68edf5568d994 - 200 2338 application/octet-stream 6.4659ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 HEAD http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/sha256:341aba9cd1b201183186977186c3e3c8e8a9f79967b31f6fc2bff2a8da510c5d - 404 158 application/json 16.9192ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 PUT http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/uploads/1cf960f6-1b21-4a89-9eb7-1064b2f9a9ad?digest=sha256%3A7af4036786251654395076ed8392df109d652ae2e1c6de74aac94db89a9f9235 - application/octet-stream 0
A 500 error occurred in sst-devops-oci-prod: Nullable object must have a value.
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 PUT http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/uploads/1cf960f6-1b21-4a89-9eb7-1064b2f9a9ad?digest=sha256%3A7af4036786251654395076ed8392df109d652ae2e1c6de74aac94db89a9f9235 - 500 90 application/json 16.1379ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 HEAD http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/sha256:972115bef0bf0586114d827ecd50b5591446d6e260d2f734e2861280eb620422 - - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 HEAD http://vl-dev-repo.ki.lan/v2/sst-devops-oci-prod/confluence-cmdb/blobs/sha256:972115bef0bf0586114d827ecd50b5591446d6e260d2f734e2861280eb620422 - 200 14142880 application/octet-stream 3.5634ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://vl-dev-repo.ki.lan/0x44/proget/Inedo.ProGet.WebApplication.Controls.Layout.NotificationBar/GetNotifications - - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET http://vl-dev-repo.ki.lan/0x44/proget/Inedo.ProGet.WebApplication.Controls.Layout.NotificationBar/GetNotifications - 200 30 - 6.2359ms
doesn't look like the logging is extended ... @atripp
Something I found: I switched from podman to docker as a test and that is able to push images that podman can't
-
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
25.0.9-ci.6
Just tested this 25.0.9-ci.6. Unfortunately, there is neither a stacktrace shown in the GUI nor in the stdout/stderr of the docker container that runs proget itself :-(
-
Just trying to help. This is a wireshark analysis of the tcp stream where it happens:
POST /v2/testing/xts-addon-webui/blobs/uploads/ HTTP/1.1
Host: proget.graudatastorage.intern
User-Agent: containers/5.35.0 (github.com/containers/image)
Content-Length: 0
Authorization: ----REDACTED----
Docker-Distribution-Api-Version: registry/2.0
Accept-Encoding: gzip

HTTP/1.1 202 Accepted
Date: Thu, 28 Aug 2025 07:11:37 GMT
Server: Kestrel
Content-Length: 0
Cache-Control: private
Location: /v2/testing/xts-addon-webui/blobs/uploads/2006612a-8f6d-4d13-8781-6ceaa1f808fb
Vary: Accept-Encoding,Authorization
X-ProGet-Version: 25.0.8.17
X-ProGet-Edition: free
Docker-Distribution-API-Version: registry/2.0
Range: 0-0
Docker-Upload-UUID: 2006612a-8f6d-4d13-8781-6ceaa1f808fb

PATCH /v2/testing/xts-addon-webui/blobs/uploads/2006612a-8f6d-4d13-8781-6ceaa1f808fb HTTP/1.1
Host: proget.graudatastorage.intern
User-Agent: containers/5.35.0 (github.com/containers/image)
Transfer-Encoding: chunked
Authorization: ----REDACTED----
Content-Type: application/octet-stream
Docker-Distribution-Api-Version: registry/2.0
Accept-Encoding: gzip

---- Binary BLOB data in chunked encoding redacted ----

PUT /v2/testing/xts-addon-webui/blobs/uploads/2006612a-8f6d-4d13-8781-6ceaa1f808fb?digest=sha256%3Acf93e1bb05f4874a5923244a5600fbc0091ef87879b6c780e7baced4b409daa0 HTTP/1.1
Host: proget.graudatastorage.intern
User-Agent: containers/5.35.0 (github.com/containers/image)
Content-Length: 0
Authorization: ----REDACTED----
Content-Type: application/octet-stream
Docker-Distribution-Api-Version: registry/2.0
Accept-Encoding: gzip

HTTP/1.1 500 Internal Server Error
Date: Thu, 28 Aug 2025 07:11:50 GMT
Server: Kestrel
Content-Length: 90
Content-Type: application/json
Cache-Control: private
Content-Range: 0-32281444
Vary: Accept-Encoding,Authorization
X-ProGet-Version: 25.0.8.17
X-ProGet-Edition: free
Docker-Distribution-API-Version: registry/2.0
Connection: close

{"errors":[{"code":"UNKNOWN","message":"Nullable object must have a value.","detail":[]}]}
The 500 happens only after the very last PUT request. Depending on the image being uploaded, there might be uploads of other image layers before that, which always use a PATCH request first for the actual blob and after that a PUT request with Content-Length: 0.
Since those previous PUT requests are the same (except the URI, of course) and they do NOT get a 500 response, I suspect this must be some code which only runs at the very end of the image upload.
NOTE: This was done before I updated to 25.0.9-ci.6
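As an aside, the digest in those final PUT URLs is percent-encoded by the client, which is why the logs show sha256%3A... rather than sha256:.... A small sketch of how such a completion URL is formed (the function and parameter names here are illustrative, not ProGet or podman internals):

```python
from urllib.parse import quote

def blob_upload_completion_url(base: str, repo: str, upload_id: str, digest: str) -> str:
    # The final PUT of a registry v2 blob upload carries the digest as a
    # percent-encoded query parameter; the ':' becomes '%3A'.
    return f"{base}/v2/{repo}/blobs/uploads/{upload_id}?digest={quote(digest, safe='')}"

print(blob_upload_completion_url(
    "http://proget.graudatastorage.intern", "testing/xts-addon-webui",
    "2006612a-8f6d-4d13-8781-6ceaa1f808fb",
    "sha256:cf93e1bb05f4874a5923244a5600fbc0091ef87879b6c780e7baced4b409daa0"))
```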
-
@thomas_3037 @wechselberg-nisboerge_3629
Another idea (just to find common parts of our proget installations or rule out differences):
- In my proget, I use the embedded postgres as db
- The proget container itself is running on Rocky 9 in a quadlet.
Cheers
-Fritz
-
My setup:
- Host is Ubuntu 24.04 LTS
- Running proget in a quadlet
- Using embedded Database
- With a caddy as a reverse proxy in front of it
- Caddy also running in a quadlet
- proget and caddy in a common podman network
- Caddyfile as simple as possible:
<hostname> {
    tls <certpath> <keypath>
    reverse_proxy http://proget:80
}
-
Hi @inedo_1308
Just tested this 25.0.9-ci.6. Unfortunately, there is neither a stacktrace shown in the GUI nor in the stdout/stderr of the docker container that runs proget itself :-(
Sorry but just to confirm, you looked in the Admin > Diagnostic Center?
Basically, I just changed the code from this...
if (error.StatusCode >= 500 || context.Response.HeadersWritten) WUtil.LogFeedException(error.StatusCode, feed, context, error);
...to this...
if (error.StatusCode == 500) WUtil.LogFeedException(error.StatusCode, feed, context, error);
I really don't see how that wouldn't work to log it. I guess the next thing I could try is to write the stack trace into the detail element.
Thanks,
Alana
-
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
Sorry but just to confirm, you looked in the Admin > Diagnostic Center?
Yes, exactly. Also: before performing the test, I deleted all messages (there were some unrelated errors from another feed). After the 500 happened, I did a reload in the browser and it still said "There are no errors to display."
Cheers
-Fritz
-
@atripp BTW it's getting weirder by the minute.
I have now created a second instance as a sandbox for testing, and there I tried a bisection of the proget versions (starting from 25.0.2 until I reached 25.0.9-ci.6) and was NOT able to reproduce the error with that sandbox instance. I did two test variants: the first sequence of tests used the default DB (mssql). After that, I recreated the sandbox VM from scratch and repeated the sequence of tests after migrating proget to postgres with 25.0.2 installed.
I did this because the "production" instance was using postgres and was migrated very early after 25.0 was released.
After learning that I cannot reproduce the error that way, I had another idea: performing a DB export on the production proget and importing that on the sandbox. Unfortunately, the import does not work: in the form for importing, I chose upload, then selected the DB export, but after a short time it simply said "Not found" in the form. No errors were shown in the diag center either.
Next steps (tomorrow - it's 3 in the morning here) will be:
- Freshly recreate the sandbox with 25.0.9-ci.6 and migrate to postgres.
- Stop the proget container
- Copy the content of all mapped docker volumes (/var/proget/{backup,database,extensions,packages}) from "production" to sandbox
- Start proget container
- Retry to reproduce the error on the sandbox
Will report back when I have more results.
Cheers
-Fritz
-
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
Basically, I just changed the code from this...
if (error.StatusCode >= 500 || context.Response.HeadersWritten) WUtil.LogFeedException(error.StatusCode, feed, context, error);
...to this...
if (error.StatusCode == 500) WUtil.LogFeedException(error.StatusCode, feed, context, error);
That change looks wrong to me, because
(error.StatusCode == 500)
is more specific/restrictive than
(error.StatusCode >= 500 || context.Response.HeadersWritten)
In other words: it logs less than before.
-
@inedo_1308 thanks for continuing to help us figure this out
Do you mind trying
inedo/proget:25.0.9-ci.7
?
I'm thinking it's some kind of middleware bug (our code? .NET code? who knows), and I can't see why the logging code I added didn't log that in the diagnostic center.
Whatever the case, we can see the error JSON is being written:
{"errors":[{"code":"UNKNOWN","message":"Nullable object must have a value.","detail":[]}]}
... so I just added the stack trace to the detail element.
FYI, the code:
catch (Exception ex)
{
    WriteError(context, DockerException.Unknown(ex.Message), feed, w => w.WriteValue(ex.StackTrace)); // I added the final argument
}

....

private static void WriteError(AhHttpContext context, DockerException error, DockerFeed? feed, Action<JsonTextWriter>? writeDetail = null)
{
    // code from before that should have worked
    if (error.StatusCode == 500)
        WUtil.LogFeedException(error.StatusCode, feed, context, error);

    if (!context.Response.HeadersWritten)
    {
        context.Response.Clear();
        context.Response.StatusCode = error.StatusCode;
        context.Response.ContentType = "application/json";
        using var jsonWriter = new JsonTextWriter(context.Response.Output);
        jsonWriter.WriteStartObject();
        jsonWriter.WritePropertyName("errors");
        jsonWriter.WriteStartArray();
        jsonWriter.WriteStartObject();
        jsonWriter.WritePropertyName("code");
        jsonWriter.WriteValue(error.ErrorCode);
        jsonWriter.WritePropertyName("message");
        jsonWriter.WriteValue(error.Message);
        jsonWriter.WritePropertyName("detail");
        jsonWriter.WriteStartArray();
        writeDetail?.Invoke(jsonWriter);
        jsonWriter.WriteEndArray();
        jsonWriter.WriteEndObject();
        jsonWriter.WriteEndArray();
        jsonWriter.WriteEndObject();
    }
}
-
@inedo_1308 said in proget 500 Internal server error when pushing to a proget docker feed:
That change looks wrong to me, because (error.StatusCode == 500) is more specific/restrictive than (error.StatusCode >= 500 || context.Response.HeadersWritten)
In other words: It logs less than before.
Good spot / good find -- though we never actually raise anything except 500 anyway, so I thought it would be fine:
public static DockerException Unknown(string message) => new DockerException(500, "UNKNOWN", message);
Anyway, writing the detail to that array will hopefully catch it.
-
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
Do you mind trying inedo/proget:25.0.9-ci.7?
Coming up in a few minutes ...
-
Nope, like before: nothing in the diag center, and in the container's output only this:
Aug 29 03:53:54 gsg1repo.graudatastorage.intern systemd-proget[538207]: Request finished HTTP/1.1 GET http://proget.graudatastorage.intern/0x44/proget/Inedo.ProGet.WebApplication.Controls.Layout.NotificationBar/GetNotifications - 200 30 - 7.0388ms
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: Request finished HTTP/1.1 PATCH http://proget.graudatastorage.intern/v2/testing/xts-addon-webui/blobs/uploads/ca81bc12-62b5-47f9-9bb3-7ef9b8c1530a - 202 0 - 19396.8436ms
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: Request starting HTTP/1.1 PUT http://proget.graudatastorage.intern/v2/testing/xts-addon-webui/blobs/uploads/ca81bc12-62b5-47f9-9bb3-7ef9b8c1530a?digest=sha256%3Acf93e1bb05f4874a5923244a5600fbc0091ef87879b6c780e7baced4b409daa0 - application/octet-stream 0
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: A 500 error occurred in testing: Nullable object must have a value.
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Aug 29 03:53:58 gsg1repo.graudatastorage.intern systemd-proget[538207]: Request finished HTTP/1.1 PUT http://proget.graudatastorage.intern/v2/testing/xts-addon-webui/blobs/uploads/ca81bc12-62b5-47f9-9bb3-7ef9b8c1530a?digest=sha256%3Acf93e1bb05f4874a5923244a5600fbc0091ef87879b6c780e7baced4b409daa0 - 500 1091 application/json 162.3468ms
-
Hold on. The second number in the last line:
... - 500 1091 application/json 162.3468ms
The 500 obviously is the status code. Is the second number the body size of the error response? If yes, that is more than before (in the previous tests, the second number was 90), and I can reveal that if I do a wireshark dump .... just a minute ...
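For what it's worth, in these ASP.NET Core access-log lines the number after the status code is the response body size in bytes, and the short error body from the earlier capture accounts for exactly the 90 seen in the previous tests:

```python
import json

# The short error body from the wireshark capture earlier in this thread.
body = ('{"errors":[{"code":"UNKNOWN","message":'
        '"Nullable object must have a value.","detail":[]}]}')

print(len(body.encode("utf-8")))               # byte count, as reported in the access log
print(json.loads(body)["errors"][0]["code"])   # parses as the registry v2 error format
```

So a jump from 90 to 1091 means the error response body itself grew, i.e. something extra landed in the JSON.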
-
Got it from the wireshark dump:
HTTP/1.1 500 Internal Server Error
Date: Fri, 29 Aug 2025 02:08:39 GMT
Server: Kestrel
Content-Length: 1091
Content-Type: application/json
Cache-Control: private
Content-Range: 0-32281444
Vary: Accept-Encoding,Authorization
X-ProGet-Version: 25.0.9.7
X-ProGet-Edition: free
Docker-Distribution-API-Version: registry/2.0
Connection: close

{"errors":[{"code":"UNKNOWN","message":"Nullable object must have a value.","detail":[" at System.Nullable`1.get_Value()\n at Inedo.ProGet.Feeds.Docker.DockerFeed.VerifyAndInstallBlobUpload(String uploadId, DockerDigest digest, String mediaType) in C:\\Users\\builds\\AppData\\Local\\Temp\\InedoAgent\\BuildMaster\\192.168.44.60\\Temp\\_E588882\\Src\\src\\ProGet\\Feeds\\Docker\\DockerFeed.cs:line 381\n at Inedo.ProGet.WebApplication.SimpleHandlers.Docker.DockerHandler.ProcessBlobUploadAsync(AhHttpContext context, WebApiContext apiContext, DockerFeed feed, String repositoryName, String uploadId) in C:\\Users\\builds\\AppData\\Local\\Temp\\InedoAgent\\BuildMaster\\192.168.44.60\\Temp\\_E588882\\Src\\src\\ProGet\\WebApplication\\SimpleHandlers\\Docker\\DockerHandler.cs:line 200\n at Inedo.ProGet.WebApplication.SimpleHandlers.Docker.DockerHandler.ProcessRequestAsync(AhHttpContext context) in C:\\Users\\builds\\AppData\\Local\\Temp\\InedoAgent\\BuildMaster\\192.168.44.60\\Temp\\_E588882\\Src\\src\\ProGet\\WebApplication\\SimpleHandlers\\Docker\\DockerHandler.cs:line 92"]}]}
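For anyone scripting against this, the stack trace arrives as newline-separated frames inside the JSON "detail" array. A short sketch of pulling them out (the body below is an abbreviated stand-in for the real capture):

```python
import json

# Abbreviated form of the error body above; the detail array holds the trace.
body = json.loads(
    '{"errors":[{"code":"UNKNOWN","message":"Nullable object must have a value.",'
    '"detail":[" at System.Nullable`1.get_Value()\\n'
    ' at Inedo.ProGet.Feeds.Docker.DockerFeed.VerifyAndInstallBlobUpload(...)"]}]}'
)

for frame in body["errors"][0]["detail"][0].split("\n"):
    print(frame.strip())
```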
-
@inedo_1308 finally!!! nice find :)
-
In case you're curious, here is #381
.... and now to figure out what could possibly be null in that specific area
-
Hmmm, the only possibility I can see is that
DockerBlobs_CreateOrUpdateBlob
is returning NULL, which is failing the conversion to int dockerBlobId. That's the only nullable conversion on that line.
There's gotta be some kind of bug with this PostgreSQL procedure. Maybe a race condition??
CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
(
  "@Feed_Id" INT,
  "@Blob_Digest" VARCHAR(128),
  "@Blob_Size" BIGINT,
  "@MediaType_Name" VARCHAR(255) = NULL,
  "@Cached_Indicator" BOOLEAN = NULL,
  "@Download_Count" INT = NULL,
  "@DockerBlob_Id" INT = NULL
)
RETURNS INT
LANGUAGE plpgsql
AS $$
BEGIN
  SELECT "DockerBlob_Id"
    INTO "@DockerBlob_Id"
    FROM "DockerBlobs"
   WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
     AND "Blob_Digest" = "@Blob_Digest";

  WITH updated AS (
    UPDATE "DockerBlobs"
       SET "Blob_Size" = "@Blob_Size",
           "MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
           "Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
     WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
       AND "Blob_Digest" = "@Blob_Digest"
    RETURNING *
  )
  INSERT INTO "DockerBlobs"
    ("Feed_Id", "Blob_Digest", "Download_Count", "Blob_Size", "MediaType_Name", "Cached_Indicator")
  SELECT "@Feed_Id", "@Blob_Digest", COALESCE("@Download_Count", 0), "@Blob_Size", "@MediaType_Name", COALESCE("@Cached_Indicator", 'N')
   WHERE NOT EXISTS (SELECT * FROM updated)
  RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";

  RETURN "@DockerBlob_Id";
END $$;
Anyway, we'll study that another day... at least we think we know specifically where the issue is.
-
@atripp Tomorrow I will compare this function's code on my "production" proget with the one on the sandbox. Just to be sure.
-
@inedo_1308 sounds good!
The code would almost certainly be the same, since it hasn't been updated since we did the PostgreSQL version of the script.
So, I think it's a race condition, though I don't know how it would happen. However, if it is one, it should be solvable with an
UPDLOCK
(or the PostgreSQL equivalent):
- SELECT finds no matching blob in the database (thus DockerBlob_Id is null)
- ... small delay, during which a concurrent upload inserts the blob ...
- UPDATE now finds the matching blob (thus a row ends up in the updated CTE)
- INSERT does not run, because there is a row in updated
- A NULL DockerBlob_Id is returned
If you're able to patch the procedure, could you add
FOR UPDATE
as follows? We are still relatively new to PostgreSQL, so I don't know if this is the right way to do it in this case. I think a second SELECT could also work, but I dunno.
CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
(
  "@Feed_Id" INT,
  "@Blob_Digest" VARCHAR(128),
  "@Blob_Size" BIGINT,
  "@MediaType_Name" VARCHAR(255) = NULL,
  "@Cached_Indicator" BOOLEAN = NULL,
  "@Download_Count" INT = NULL,
  "@DockerBlob_Id" INT = NULL
)
RETURNS INT
LANGUAGE plpgsql
AS $$
BEGIN
  SELECT "DockerBlob_Id"
    INTO "@DockerBlob_Id"
    FROM "DockerBlobs"
   WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
     AND "Blob_Digest" = "@Blob_Digest"
     FOR UPDATE;

  WITH updated AS (
    UPDATE "DockerBlobs"
       SET "Blob_Size" = "@Blob_Size",
           "MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
           "Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
     WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
       AND "Blob_Digest" = "@Blob_Digest"
    RETURNING *
  )
  INSERT INTO "DockerBlobs"
    ("Feed_Id", "Blob_Digest", "Download_Count", "Blob_Size", "MediaType_Name", "Cached_Indicator")
  SELECT "@Feed_Id", "@Blob_Digest", COALESCE("@Download_Count", 0), "@Blob_Size", "@MediaType_Name", COALESCE("@Cached_Indicator", 'N')
   WHERE NOT EXISTS (SELECT * FROM updated)
  RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";

  RETURN "@DockerBlob_Id";
END $$;
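The suspected interleaving can be sketched as a toy in-memory model (purely illustrative; the dict stands in for the DockerBlobs table, and the function mimics only the SELECT / UPDATE-CTE / conditional-INSERT control flow, not real ProGet code):

```python
def create_or_update_blob(table, feed_id, digest, concurrent_insert=None):
    """Toy model of the procedure. 'table' maps (feed_id, digest) -> blob_id."""
    blob_id = table.get((feed_id, digest))      # initial SELECT ... INTO
    if concurrent_insert is not None:           # a concurrent upload commits here
        table[(feed_id, digest)] = concurrent_insert
    row_updated = (feed_id, digest) in table    # does the UPDATE CTE match a row?
    if not row_updated:                         # INSERT ... WHERE NOT EXISTS (updated)
        blob_id = len(table) + 1
        table[(feed_id, digest)] = blob_id      # RETURNING ... INTO sets the id
    return blob_id                              # stays None in the race case

t = {}
print(create_or_update_blob(t, 1, "sha256:aaa"))                        # fresh insert
print(create_or_update_blob(t, 1, "sha256:bbb", concurrent_insert=42))  # race: None
```

In the race case neither branch assigns the id: the UPDATE swallows the concurrent row, the INSERT is skipped, and the stale NULL from the initial SELECT is returned.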
-
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
The code would almost certainly be the same, since it hasn't been updated since we did the PostgreSQL version of the script.
You are right: "production" and sandbox showed the same function.
If you're able to patch the procedure, could you add FOR UPDATE as follows? We are still relatively to PostgreSQL so I don't know if this the right way to do it in this case.
Did that, verified that the function actually changed, and did another test. Unfortunately this did not help; the error was exactly the same as in my wireshark dump above.
Or does one have to "compile" the function somehow after replacing? (I never dealt with SQL functions before and in general have very limited SQL knowledge.)
Cheers,
-Fritz
-
BTW:
Did you guys generate a different password for the postgres DB than for MSSQL during migration?
I'm asking because I vaguely remember that I could access the postgres DB using the same password (taken from the mssql connection-string environment var).
Now this doesn't work anymore, and I had to disable password-auth in pg_hba.conf in order to use pg_dump and psql inside the proget container.
Thanks
-Fritz
-
Hi @inedo_1308 ,
I forgot how it worked in the preview migration, but the connection string is stored in the database directory (/var/proget/database/.pgsqlconn).
Thanks,
Alana
-
Did that, verified that the function actually changed, and did another test. Unfortunately this did not help; the error was exactly the same as in my wireshark dump above.
Or does one have to "compile" the function somehow after replacing? (I never dealt with SQL functions before and in general have very limited SQL knowledge.)
Ah, that's a shame! We're kind of new to "patching" functions like this in postgresql, but I think that should have worked to change the code. And the code change itself should have worked, too.
If you don't mind trying one other patch, where we select out the Blob_Id again at the end:
CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob"
(
  "@Feed_Id" INT,
  "@Blob_Digest" VARCHAR(128),
  "@Blob_Size" BIGINT,
  "@MediaType_Name" VARCHAR(255) = NULL,
  "@Cached_Indicator" BOOLEAN = NULL,
  "@Download_Count" INT = NULL,
  "@DockerBlob_Id" INT = NULL
)
RETURNS INT
LANGUAGE plpgsql
AS $$
BEGIN
  SELECT "DockerBlob_Id"
    INTO "@DockerBlob_Id"
    FROM "DockerBlobs"
   WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
     AND "Blob_Digest" = "@Blob_Digest"
     FOR UPDATE;

  WITH updated AS (
    UPDATE "DockerBlobs"
       SET "Blob_Size" = "@Blob_Size",
           "MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
           "Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
     WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
       AND "Blob_Digest" = "@Blob_Digest"
    RETURNING *
  )
  INSERT INTO "DockerBlobs"
    ("Feed_Id", "Blob_Digest", "Download_Count", "Blob_Size", "MediaType_Name", "Cached_Indicator")
  SELECT "@Feed_Id", "@Blob_Digest", COALESCE("@Download_Count", 0), "@Blob_Size", "@MediaType_Name", COALESCE("@Cached_Indicator", 'N')
   WHERE NOT EXISTS (SELECT * FROM updated)
  RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";

  SELECT "DockerBlob_Id"
    INTO "@DockerBlob_Id"
    FROM "DockerBlobs"
   WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
     AND "Blob_Digest" = "@Blob_Digest"

  RETURN "@DockerBlob_Id";
END $$;
If this doesn't do the trick, I think we need to look a lot closer.
-
I dumped the schema from the database. The function looks nearly identical; only the varchar sizes are different.
CREATE FUNCTION public."DockerBlobs_CreateOrUpdateBlob"
(
  "@Feed_Id" integer,
  "@Blob_Digest" character varying,
  "@Blob_Size" bigint,
  "@MediaType_Name" character varying DEFAULT NULL::character varying,
  "@Cached_Indicator" boolean DEFAULT NULL::boolean,
  "@Download_Count" integer DEFAULT NULL::integer,
  "@DockerBlob_Id" integer DEFAULT NULL::integer
)
RETURNS integer
LANGUAGE plpgsql
AS $$
BEGIN
  SELECT "DockerBlob_Id"
    INTO "@DockerBlob_Id"
    FROM "DockerBlobs"
   WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
     AND "Blob_Digest" = "@Blob_Digest";

  WITH updated AS (
    UPDATE "DockerBlobs"
       SET "Blob_Size" = "@Blob_Size",
           "MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
           "Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
     WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
       AND "Blob_Digest" = "@Blob_Digest"
    RETURNING *
  )
  INSERT INTO "DockerBlobs"
    ("Feed_Id", "Blob_Digest", "Download_Count", "Blob_Size", "MediaType_Name", "Cached_Indicator")
  SELECT "@Feed_Id", "@Blob_Digest", COALESCE("@Download_Count", 0), "@Blob_Size", "@MediaType_Name", COALESCE("@Cached_Indicator", 'N')
   WHERE NOT EXISTS (SELECT * FROM updated)
  RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";

  RETURN "@DockerBlob_Id";
END $$;
and the table schema
proget=# \d "DockerBlobs"
                                   Table "public.DockerBlobs"
       Column       |           Type           | Collation | Nullable |           Default
--------------------+--------------------------+-----------+----------+------------------------------
 Feed_Id            | integer                  |           |          |
 Blob_Digest        | character varying(128)   |           | not null |
 Download_Count     | integer                  |           | not null |
 LastRequested_Date | timestamp with time zone |           |          |
 Blob_Size          | bigint                   |           | not null |
 Cached_Indicator   | boolean                  |           | not null |
 MediaType_Name     | character varying(255)   |           |          |
 DockerBlob_Id      | integer                  |           | not null | generated always as identity
 LastScan_Date      | timestamp with time zone |           |          |
Indexes:
    "PK__DockerBlobs" PRIMARY KEY, btree ("DockerBlob_Id")
    "IX__DockerBlobs__Blob_Digest" btree ("Blob_Digest")
    "UQ__DockerBlobs" UNIQUE CONSTRAINT, btree ("Feed_Id", "Blob_Digest")
Foreign-key constraints:
    "FK__DockerBlobs__Feeds" FOREIGN KEY ("Feed_Id") REFERENCES "Feeds"("Feed_Id") ON DELETE CASCADE
Referenced by:
    TABLE ""DockerImageLayers"" CONSTRAINT "FK__DockerBlobIndex__DockerBlobs" FOREIGN KEY ("DockerBlob_Id") REFERENCES "DockerBlobs"("DockerBlob_Id") ON DELETE CASCADE
    TABLE ""DockerBlobInfos"" CONSTRAINT "FK__DockerBlobInfos__DockerBlobs" FOREIGN KEY ("DockerBlob_Id") REFERENCES "DockerBlobs"("DockerBlob_Id") ON DELETE CASCADE
    TABLE ""DockerBlobPackages"" CONSTRAINT "FK__DockerBlobPackages__DockerBlobs" FOREIGN KEY ("DockerBlob_Id") REFERENCES "DockerBlobs"("DockerBlob_Id") ON DELETE CASCADE
    TABLE ""DockerImages"" CONSTRAINT "FK__DockerImages__DockerBlobs" FOREIGN KEY ("ContainerConfigBlob_Id") REFERENCES "DockerBlobs"("DockerBlob_Id") ON DELETE CASCADE
-
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
If you don't mind trying one other patch, where we select out the Blob_Id again in the end.
Yippieh, that one did the trick! The error is gone.
There is a semicolon missing after the additional select, before the return.
-
Going to push some more images to be on the safe side ...
-
@atripp Ok, this is definitely working now. I pushed approx. 20 different images, from 80 MiB up to 2 GiB. None of them produced an error anymore.
Excellent Support!!!
Thanks a lot
-Fritz
-
I was about to start a new thread, but this one might be related to my issue. I can't push a Docker image to ProGet (version 25.0.8). Docker keeps retrying, but without success. The Diagnostic Center shows no errors. In the logs, I see:
A 500 error occurred in docker_feed: Nullable object must have a value.
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 PUT http://mydomain.com/v2/docker_feed/my_service/blobs/uploads/1f3bb6af-6666-497e-bb8e-6b621d584b9c?digest=sha256%3Addfc8032fb97dfcf37359445fcb93b9742fcdf6f5bfdd288081557bf461edac6 - 500 1091 application/json 10.8169ms
Then I tried 25.0.9-ci.14, same result.
-
@pariv_0352 said in proget 500 Internal server error when pushing to a proget docker feed:
Then I tried 25.0.9-ci.14, same result.
I don't think the newer builds contain any fix for this yet. I was able to fix it myself locally in the postgres DB with guidance from @atripp; however, she most likely has not seen my success report yet, otherwise she very likely would have acknowledged it here.
The newer builds (> 25.0.9-ci.7) are probably just other developers working on different things.

And yes, the
- 500 1091
part of your error message looks suspiciously like mine (before I fixed it locally).

So: just wait until @atripp confirms here that she has actually applied that fix.
Cheers
-Fritz
-
@felfert thanks for confirming!!
FYI, the fix has not been applied to the code yet, but you can patch the stored procedure (painfully) as a workaround for now. We will try to find a better solution. The only thing I can imagine happening is that the PUT is happening immediately after the PATCH finishes, but before the client receives a 200 response. I have no idea though.
We'll figure something out, now that we know where it is, thanks to your help!!
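For what it's worth, the failure mode can be modeled outside the database. Below is a minimal, hypothetical Python sketch (my illustration, not ProGet code): the dict stands in for the DockerBlobs table, and the function mirrors the stored procedure's shape, where the UPDATE's RETURNING only feeds the CTE and never refreshes the id variable.

```python
# Hypothetical model of the suspected race. The stored procedure does:
# SELECT id -> UPDATE (RETURNING only into the CTE) -> INSERT ... WHERE NOT
# EXISTS (RETURNING INTO the id) -> RETURN id.  If another request commits
# the row between the SELECT and the UPDATE, the UPDATE "wins", the INSERT
# is skipped, and the id captured by the first SELECT is still NULL.
blobs = {}  # (feed_id, digest) -> blob_id

def create_or_update(feed_id, digest, concurrent_writer=None):
    key = (feed_id, digest)
    blob_id = blobs.get(key)            # initial SELECT INTO "@DockerBlob_Id"

    if concurrent_writer is not None:   # the racing request commits right here,
        concurrent_writer()             # after the SELECT but before the UPDATE

    if key in blobs:
        pass                            # UPDATE path: RETURNING stays inside
                                        # the CTE, blob_id is never refreshed
    else:
        blobs[key] = len(blobs) + 1     # INSERT ... RETURNING INTO "@DockerBlob_Id"
        blob_id = blobs[key]

    return blob_id

# Normal push: the blob is inserted and a real id comes back.
assert create_or_update(1, "sha256:aaa") == 1

# Race: a concurrent caller inserts the same blob between SELECT and UPDATE;
# None (SQL NULL) is returned, which the .NET side then reports as
# "Nullable object must have a value".
blobs.clear()
racer = lambda: blobs.setdefault((1, "sha256:bbb"), 42)
assert create_or_update(1, "sha256:bbb", concurrent_writer=racer) is None
```

This also illustrates why both proposed fixes work: re-selecting the id at the end fills it in regardless of which branch ran, and `FOR UPDATE` (or an advisory lock) prevents the interleaving in the first place.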
-
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
The only thing I can imagine happening is that the PUT is happening immediately after the PATCH finishes, but before the client receives a 200 response.
Just to confirm your assumption: that's exactly what is happening, as one can see in my Wireshark stream analysis of this above.
you can patch the stored procedure (painfully) as a workaround for now.
I'm writing a small shell (bash) script that can be invoked inside the proget container to easily apply the fix. I will post it here later. Stay tuned.
Cheers
-Fritz
-
As announced above, I have created a little shell script for applying the fix for the bug discussed in this thread.
Prerequisites:
- ProGet >= 25.0.8 running in a docker container and already migrated to the postgres DB
- backup-directory mapped to /var/proget/backups in the proget container
Usage:
- Copy the script from below and save it in the backup-directory using a name that makes sense for you (e.g. fix-docker-push.sh)
- Back up your database
- On the docker host running the proget container, execute the following command line (inserting your container's name and the correct script filename):
docker exec -it InsertYourContainersNameHere /bin/bash /var/proget/backups/fix-docker-push.sh
- The script will abort if it cannot find the connection string of the postgres DB
- The script will abort if it detects that the fix has already been applied
- Otherwise, it will show a final warning and abort unless you enter YES (all capitals)
- If you enter YES, it will apply the fix and psql will print CREATE FUNCTION
Here is the script:
#! /bin/bash
sqlconn=/var/proget/database/.pgsqlconn
if [ -f "${sqlconn}" ] ; then
    . ${sqlconn}
    PGPASSWORD="${Password}" psql -h ${Host} -p ${Port} -U ${Username} ${Database} <<-"EOF" |
\sf "DockerBlobs_CreateOrUpdateBlob"
EOF
    grep -q "FOR UPDATE;"
    if [ $? = 0 ] ; then
        echo "It looks like the fix was already applied. Aborting"
        exit 0
    fi
    cat <<-EOF
This script applies a fix for pushing images to a proget docker feed.

WARNING: This modifies your proget postgres database.
BACKUP YOUR DATABASE FIRST.

Enter YES if you want to continue or hit Ctrl-C to abort.
EOF
    read resp
    if [ "${resp}" != "YES" ] ; then
        echo "Aborting"
        exit 0
    fi
    PGPASSWORD="${Password}" psql -h ${Host} -p ${Port} -U ${Username} ${Database} <<-"EOF"
CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob" (
    "@Feed_Id" INT,
    "@Blob_Digest" VARCHAR(128),
    "@Blob_Size" BIGINT,
    "@MediaType_Name" VARCHAR(255) = NULL,
    "@Cached_Indicator" BOOLEAN = NULL,
    "@Download_Count" INT = NULL,
    "@DockerBlob_Id" INT = NULL
) RETURNS INT LANGUAGE plpgsql AS $$
BEGIN
    SELECT "DockerBlob_Id" INTO "@DockerBlob_Id"
      FROM "DockerBlobs"
     WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
       AND "Blob_Digest" = "@Blob_Digest"
       FOR UPDATE;

    WITH updated AS (
        UPDATE "DockerBlobs"
           SET "Blob_Size" = "@Blob_Size",
               "MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
               "Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
         WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
           AND "Blob_Digest" = "@Blob_Digest"
        RETURNING *
    )
    INSERT INTO "DockerBlobs" (
        "Feed_Id", "Blob_Digest", "Download_Count", "Blob_Size", "MediaType_Name", "Cached_Indicator"
    )
    SELECT "@Feed_Id", "@Blob_Digest", COALESCE("@Download_Count", 0), "@Blob_Size",
           "@MediaType_Name", COALESCE("@Cached_Indicator", 'N')
     WHERE NOT EXISTS (SELECT * FROM updated)
    RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";

    SELECT "DockerBlob_Id" INTO "@DockerBlob_Id"
      FROM "DockerBlobs"
     WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
       AND "Blob_Digest" = "@Blob_Digest";

    RETURN "@DockerBlob_Id";
END $$;
EOF
else
    echo "Postgres connection parameters (${sqlconn}) not found. Aborting"
fi
Disclaimer
I am NOT an inedo engineer and not affiliated with them.
You are responsible for backing up your database before running this script.
You run this script at your own risk.
I take no responsibility whatsoever for any damages that might occur.

Have fun
-Fritz
-
@felfert amazing!! That script will come in handy when we need to help users patch their instance; we can also try to add something that allows you to patch via the UI as well!!
-
@atripp said in proget 500 Internal server error when pushing to a proget docker feed:
we can also try to add something that allows you to patch via the UI as well!!
Huh? Well, I don't think this would be such a good idea. I assume you guys already have some code in proget for upgrading your DB schema when a new version requires it. That's where the function should be replaced. This script is only meant as an intermediate solution until you have updated the function.
Cheers
-Fritz
-
Hi @felfert ,
As an update, we are planning on this pattern instead of row/table locking (PG-3104). It gives us a lot more control and makes it a lot easier to avoid deadlocks.
I still can't reproduce the issue, but I see no reason this won't work.
CREATE OR REPLACE FUNCTION "DockerBlobs_CreateOrUpdateBlob" (
    "@Feed_Id" INT,
    "@Blob_Digest" VARCHAR(128),
    "@Blob_Size" BIGINT,
    "@MediaType_Name" VARCHAR(255) = NULL,
    "@Cached_Indicator" BOOLEAN = NULL,
    "@Download_Count" INT = NULL,
    "@DockerBlob_Id" INT = NULL
) RETURNS INT LANGUAGE plpgsql AS $$
BEGIN
    -- avoid race condition when two procs call at exact same time
    PERFORM PG_ADVISORY_XACT_LOCK(HASHTEXT(CONCAT_WS('DockerBlobs_CreateOrUpdateBlob', "@Feed_Id", LOWER("@Blob_Digest"))));

    SELECT "DockerBlob_Id" INTO "@DockerBlob_Id"
      FROM "DockerBlobs"
     WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
       AND "Blob_Digest" = "@Blob_Digest";

    WITH updated AS (
        UPDATE "DockerBlobs"
           SET "Blob_Size" = "@Blob_Size",
               "MediaType_Name" = COALESCE("@MediaType_Name", "MediaType_Name"),
               "Cached_Indicator" = COALESCE("@Cached_Indicator", "Cached_Indicator")
         WHERE ("Feed_Id" = "@Feed_Id" OR ("Feed_Id" IS NULL AND "@Feed_Id" IS NULL))
           AND "Blob_Digest" = "@Blob_Digest"
        RETURNING *
    )
    INSERT INTO "DockerBlobs" (
        "Feed_Id", "Blob_Digest", "Download_Count", "Blob_Size", "MediaType_Name", "Cached_Indicator"
    )
    SELECT "@Feed_Id", "@Blob_Digest", COALESCE("@Download_Count", 0), "@Blob_Size",
           "@MediaType_Name", COALESCE("@Cached_Indicator", 'N')
     WHERE NOT EXISTS (SELECT * FROM updated)
    RETURNING "DockerBlob_Id" INTO "@DockerBlob_Id";

    RETURN "@DockerBlob_Id";
END $$;
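To make the pattern concrete outside SQL: the advisory lock amounts to deriving a lock key from the (feed, digest) pair and holding that lock for the whole create-or-update. A rough application-level analogy in Python (my sketch, not Inedo code; `PG_ADVISORY_XACT_LOCK` additionally auto-releases at transaction end):

```python
import threading

# One lock per (feed, digest) key serializes concurrent create-or-update
# calls, so the SELECT/UPDATE/INSERT steps of two callers cannot interleave.
_registry_lock = threading.Lock()
_key_locks = {}
_blobs = {}

def _lock_for(key):
    # setdefault under a global lock: every caller gets the *same* Lock object
    with _registry_lock:
        return _key_locks.setdefault(key, threading.Lock())

def create_or_update(feed_id, digest):
    key = (feed_id, digest.lower())     # ~ HASHTEXT over feed id + lowercased digest
    with _lock_for(key):                # ~ advisory lock held to "transaction" end
        if key not in _blobs:
            _blobs[key] = len(_blobs) + 1
        return _blobs[key]

# Eight concurrent pushes of the same blob all resolve to the same id.
results = []
threads = [threading.Thread(target=lambda: results.append(create_or_update(7, "sha256:abc")))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
assert set(results) == {1}
```

The design choice here matches the PG-3104 comment above: rather than locking rows or tables, contention is scoped to a single logical key, which keeps unrelated pushes fully parallel and sidesteps lock-ordering deadlocks.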
-
@atripp I tested this one and it works.
However:
In the meantime, I have updated to the released 25.0.9 (aka latest) and now I cannot reproduce the error either, even if I explicitly revert the function to the old one that produced the error before.
So I'm afraid that I might not be a reliable tester for this specific error anymore.
Cheers
-Fritz
-
I updated to 25.0.9 as well, and still have the issue.