James Woods
@james.woods_8996
Latest posts made by james.woods_8996
-
RE: ProGet - Limit on Replicated File Size
Thanks for looking into this so promptly. We are indeed using cloud storage (AWS S3 and Pure Storage S3), so that sounds like the root cause. Quite a few of our assets are several GB in size, so it may be that S3 is simply not suitable for us. We'll look into alternative storage mechanisms. Thanks again.
-
ProGet - Limit on Replicated File Size
I'm testing replication of an asset directory. When the replication encounters a file larger than 2 GB, it throws the following exception:
DEBUG: 2025-08-27 21:42:41Z - Requesting feed state from https://proget-au.wtg.zone/api/feed-replication/WTG-Assets/state...
INFO : 2025-08-27 21:42:42Z - Fetching 6d4e7d87-3a0d-4fb5-90bb-c73f9624d97c_ContentPackage.zip from target...
ERROR: 2025-08-27 21:42:49Z - Execution #143 failed at 08/27/2025 21:42:49 after 00:00:07.2831542.
ERROR: 2025-08-27 21:42:49Z - Unhandled exception: System.Net.Http.HttpRequestException: Cannot write more bytes to the buffer than the configured maximum buffer size: 2147483647.
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at Inedo.ProGet.Service.Replication.FeedReplicator.HandlePossiblyChunkedDownloadAsync(Func`1 createRequest, Func`5 installAsync, CancellationToken cancellationToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp_E584854\Src\src\ProGet\Service\Replication\FeedReplicator.cs:line 267
at Inedo.ProGet.Service.Replication.AssetFeedReplicator.PullRemoteAssetAsync(RemoteAsset asset, CancellationToken cancellationToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp_E584854\Src\src\ProGet\Service\Replication\AssetFeedReplicator.cs:line 459
at Inedo.ProGet.Service.Replication.AssetFeedReplicator.ReplicateAsync(CancellationToken cancellationToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp_E584854\Src\src\ProGet\Service\Replication\AssetFeedReplicator.cs:line 205
at Inedo.ProGet.Service.Replication.FeedReplicator.ReplicateAsync(Int32 executionId, Int32 scopeSequence, CancellationToken cancellationToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp_E584854\Src\src\ProGet\Service\Replication\FeedReplicator.cs:line 199
at Inedo.ProGet.Service.Executions.ActiveFeedSyncExecution.ExecuteAsync() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp_E584854\Src\src\ProGet\Service\Executions\ActiveFeedSyncExecution.cs:line 31
Is there a way of increasing the limit on the size of files that can be replicated?
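For context, 2147483647 is int.MaxValue, which is the default value of HttpClient.MaxResponseContentBufferSize in .NET, so it looks like the whole file is being buffered in memory before it is written out. I don't know ProGet's internals, but for reference, here is a minimal sketch of how a generic .NET client streams a large response instead of buffering it (the URL and file names are placeholders):

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class StreamingDownloadSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // ResponseHeadersRead returns as soon as the headers arrive, so the
        // body is streamed rather than buffered. The default behaviour
        // buffers the whole body, capped at MaxResponseContentBufferSize,
        // whose default is int.MaxValue (2147483647) -- the 2 GB wall above.
        using var response = await client.GetAsync(
            "https://example.com/large-asset.zip", // placeholder URL
            HttpCompletionOption.ResponseHeadersRead);
        response.EnsureSuccessStatusCode();

        await using var body = await response.Content.ReadAsStreamAsync();
        await using var file = File.Create("large-asset.zip");
        await body.CopyToAsync(file); // copies in chunks; no 2 GB ceiling
    }
}
```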
-
RE: ProGet - Error Pulling from Docker Repository with Multiple Servers
Thanks for that. I'd put an alphanumeric key rather than a hex key in two of my sites. Once I changed them to hex, everything started working.
-
RE: ProGet - Unsupported Header when Uploading to Pure Storage S3
Thanks for that. We enabled the multipart header but then encountered another error. We've raised a ticket with the vendor.
-
ProGet - Error Pulling from Docker Repository with Multiple Servers
I'm testing an installation of ProGet on OpenShift Kubernetes using Linux pods. The database is SQL Server. We have no shared disk storage provisioned, as we want to use S3 exclusively for artifact storage. Testing a Docker feed against an AWS S3 bucket with only one pod in the cluster worked fine. Here are the commands and the logs for that:
Command Line
PS C:\Users\james.woods> podman pull proget-au.wtg.zone/cargowise-cloud-docker/hello:latest
Trying to pull proget-au.wtg.zone/cargowise-cloud-docker/hello:latest...
Getting image source signatures
Copying blob sha256:ce980a8f5545faa3125a489aad32c00d6cf13d80a302308c3963b524085657af
Copying config sha256:5dd467fce50b56951185da365b5feee75409968cbab5767b9b59e325fb2ecbc0
Writing manifest to image destination
5dd467fce50b56951185da365b5feee75409968cbab5767b9b59e325fb2ecbc0
Logs
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://proget-au.wtg.zone/v2/ - - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET http://proget-au.wtg.zone/v2/ - 401 145 application/json 0.2699ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://proget-au.wtg.zone/v2/_auth?account=james.woods&scope=repository%3Acargowise-cloud-docker%2Fhello%3Apull&service=proget-au.wtg.zone - - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET http://proget-au.wtg.zone/v2/_auth?account=james.woods&scope=repository%3Acargowise-cloud-docker%2Fhello%3Apull&service=proget-au.wtg.zone - 200 369 application/json 15.4765ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://proget-au.wtg.zone/v2/cargowise-cloud-docker/hello/manifests/latest - - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET http://proget-au.wtg.zone/v2/cargowise-cloud-docker/hello/manifests/latest - 200 427 application/vnd.docker.distribution.manifest.v2+json 12.2290ms
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://proget-au.wtg.zone/v2/cargowise-cloud-docker/hello/blobs/sha256:5dd467fce50b56951185da365b5feee75409968cbab5767b9b59e325fb2ecbc0 - - -
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET http://proget-au.wtg.zone/v2/cargowise-cloud-docker/hello/blobs/sha256:5dd467fce50b56951185da365b5feee75409968cbab5767b9b59e325fb2ecbc0 - 200 3319 application/octet-stream 99.8158ms
Running Feed Replication...
Feed Replication completed.
However, as soon as we increase the number of pods beyond one, the pull no longer works. Here are the commands and logs for that case:
Command Line
PS C:\Users\james.woods> podman pull proget-au.wtg.zone/cargowise-cloud-docker/hello:latest
Trying to pull proget-au.wtg.zone/cargowise-cloud-docker/hello:latest...
Error: initializing source docker://proget-au.wtg.zone/cargowise-cloud-docker/hello:latest: reading manifest latest in proget-au.wtg.zone/cargowise-cloud-docker/hello: received unexpected HTTP status: 500 Internal Server Error
Logs
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://proget-au.wtg.zone/v2/cargowise-cloud-docker/hello/manifests/latest - - -
A 500 error occurred in (unknown feed): The key {ba4cc1b5-2f99-49ac-bf9d-9f7d7f696431} was not found in the key ring. For more information go to https://aka.ms/aspnet/dataprotectionwarning
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/1.1 GET http://proget-au.wtg.zone/v2/cargowise-cloud-docker/hello/manifests/latest - 500 204 application/json 0.7512ms
Is there some shared state that each server needs to be able to access?
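For what it's worth, that key-ring error looks like ASP.NET Core's Data Protection system: each pod generates its own key ring unless keys are persisted somewhere shared, so a value protected by one pod can't be unprotected by another. In a generic ASP.NET Core app (not necessarily how ProGet does it internally), sharing looks roughly like this; the path and application name are placeholders:

```csharp
using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Every replica must load the same key ring and use the same application
// name; otherwise a value protected by one pod can't be unprotected by
// another, which surfaces as "The key {...} was not found in the key ring".
builder.Services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo("/keys")) // shared volume; path is illustrative
    .SetApplicationName("my-app"); // placeholder; must match on every pod

var app = builder.Build();
app.Run();
```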
-
ProGet - Unsupported Header when Uploading to Pure Storage S3
We're running ProGet in OpenShift with Linux containers and are testing Pure Storage S3 for artifact storage. We have successfully created an assets folder on the S3 system and uploaded documents to it. However, when we attempted to create a Docker repository on the same S3 bucket, we got an AWS S3 exception about an unsupported header. The log from the pod is shown below:
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 PUT http://proget-au.wtg.zone/v2/cargowise-cloud-docker/hello/blobs/uploads/5d7c5817-3cc7-47d3-ace5-bc3b5b827164?digest=sha256%3Ace980a8f5545faa3125a489aad32c00d6cf13d80a302308c3963b524085657af - application/octet-stream 0
Amazon S3 Exception; Request Id: ; A header you provided implies functionality that is not implemented.
A 500 error occurred in cargowise-cloud-docker: A header you provided implies functionality that is not implemented
Is the code that writes to S3 common across all feed types, or does each feed type use S3 in its own way? Has anybody else had experience with Pure Storage S3, and what were the outcomes?
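For background while we wait on the vendor: the "functionality that is not implemented" response is S3's NotImplemented error, which often means the store rejected a header the AWS SDK sends by default, for example the aws-chunked streaming upload encoding. Here is a minimal sketch with the AWS SDK for .NET showing the knobs commonly adjusted for S3-compatible back ends; endpoint, bucket, keys, and file names are placeholders, and I don't know whether ProGet exposes equivalent settings:

```csharp
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class PureStorageUploadSketch
{
    static async Task Main()
    {
        var config = new AmazonS3Config
        {
            ServiceURL = "https://s3.pure.example.com", // placeholder endpoint
            ForcePathStyle = true // many S3-compatible stores want path-style addressing
        };
        using var s3 = new AmazonS3Client("ACCESS_KEY", "SECRET_KEY", config);

        await s3.PutObjectAsync(new PutObjectRequest
        {
            BucketName = "test-bucket", // placeholder
            Key = "docker/blob.bin",    // placeholder
            FilePath = "blob.bin",
            // The SDK defaults to aws-chunked streaming uploads, which add
            // headers some S3-compatible back ends reject; turning chunked
            // encoding off is a common workaround.
            UseChunkEncoding = false
        });
    }
}
```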
-
RE: ProGet 2025 Rootless Containers
@stevedennis Thanks for confirming, Steve. I can say that modifications to the config files have no effect whatsoever; only changing that environment variable made a difference.
-
ProGet 2025 Rootless Containers
I'm deploying ProGet in Kubernetes and the containers are restricted to run as non-root users. The ProGet container tries to bind to port 80 by default and this is denied for non-root users. As expected, the logs showed this error:
Shared configuration file not found at /etc/inedo/ProGet.config. No encryption key is configured. Credentials will be stored in plain text.
info: Inedo.Web.BackgroundTaskQueueService[0]
      Background Task Queue is starting.
warn: Microsoft.AspNetCore.Hosting.Diagnostics[15]
      Overriding HTTP_PORTS '8080' and HTTPS_PORTS ''. Binding to values defined by URLS instead 'http://*:80'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
      Overriding address(es) 'http://*:80'. Binding to endpoints defined via IConfiguration and/or UseKestrel() instead.
fail: Microsoft.Extensions.Hosting.Internal.Host[11]
      Hosting failed to start
      System.Net.Sockets.SocketException (13): Permission denied
         at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
I followed the guide at https://docs.inedo.com/docs/installation/linux/installation-troubleshooting-docker-installations#root-less-containers to address this, but the problem persisted. I tried setting the URLS environment variable, to no avail. Then I set the ASPNETCORE_URLS environment variable to http://*:8080 and this worked. I removed all modifications to the config files and it continued to function, confirming that the environment variable alone was the fix.
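For anyone else who hits this, the working piece of our Deployment spec looks roughly like the following (container name and image tag are illustrative):

```yaml
# Kubernetes Deployment fragment; only the ASPNETCORE_URLS override was
# needed to move the listener off the privileged port 80.
containers:
  - name: proget
    image: inedo/proget:latest
    env:
      - name: ASPNETCORE_URLS
        value: "http://*:8080"
    ports:
      - containerPort: 8080
```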
Is this behaviour expected in the latest ProGet Docker deployments? Has Inedo made ProGet align with BuildMaster and Otter in this regard?