ProGet - Limit on Replicated File Size
-
I'm testing replication of an asset directory. When replication encounters a file larger than 2 GB, it throws the following exception:
DEBUG: 2025-08-27 21:42:41Z - Requesting feed state from https://proget-au.wtg.zone/api/feed-replication/WTG-Assets/state...
INFO : 2025-08-27 21:42:42Z - Fetching 6d4e7d87-3a0d-4fb5-90bb-c73f9624d97c_ContentPackage.zip from target...
ERROR: 2025-08-27 21:42:49Z - Execution #143 failed at 08/27/2025 21:42:49 after 00:00:07.2831542.
ERROR: 2025-08-27 21:42:49Z - Unhandled exception: System.Net.Http.HttpRequestException: Cannot write more bytes to the buffer than the configured maximum buffer size: 2147483647.
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at Inedo.ProGet.Service.Replication.FeedReplicator.HandlePossiblyChunkedDownloadAsync(Func`1 createRequest, Func`5 installAsync, CancellationToken cancellationToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp_E584854\Src\src\ProGet\Service\Replication\FeedReplicator.cs:line 267
at Inedo.ProGet.Service.Replication.AssetFeedReplicator.PullRemoteAssetAsync(RemoteAsset asset, CancellationToken cancellationToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp_E584854\Src\src\ProGet\Service\Replication\AssetFeedReplicator.cs:line 459
at Inedo.ProGet.Service.Replication.AssetFeedReplicator.ReplicateAsync(CancellationToken cancellationToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp_E584854\Src\src\ProGet\Service\Replication\AssetFeedReplicator.cs:line 205
at Inedo.ProGet.Service.Replication.FeedReplicator.ReplicateAsync(Int32 executionId, Int32 scopeSequence, CancellationToken cancellationToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp_E584854\Src\src\ProGet\Service\Replication\FeedReplicator.cs:line 199
at Inedo.ProGet.Service.Executions.ActiveFeedSyncExecution.ExecuteAsync() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp_E584854\Src\src\ProGet\Service\Executions\ActiveFeedSyncExecution.cs:line 31

Is there a way of increasing the limit on the size of files that can be replicated?
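For reference, 2147483647 bytes is int.MaxValue, which matches HttpClient's default MaxResponseContentBufferSize: a fully buffered download simply cannot exceed it. Here's a minimal sketch of the streaming pattern that avoids that buffer entirely (illustrative names and URL only, not ProGet's actual code):

```csharp
// Minimal sketch (not ProGet's code) of downloading a large body without
// buffering it in memory. targetUrl and the file name are assumptions.
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class StreamingDownload
{
    static async Task Main()
    {
        const string targetUrl = "https://example.com/large-asset.zip"; // hypothetical
        using var client = new HttpClient();

        // ResponseHeadersRead returns as soon as headers arrive, so the
        // (possibly multi-GB) body is streamed rather than buffered, and
        // the 2147483647-byte buffer cap never comes into play.
        using var response = await client.GetAsync(
            targetUrl, HttpCompletionOption.ResponseHeadersRead);
        response.EnsureSuccessStatusCode();

        await using var body = await response.Content.ReadAsStreamAsync();
        await using var file = File.Create("large-asset.zip");
        await body.CopyToAsync(file); // copies in small internal chunks
    }
}
```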
-
That's unexpected; large files are supposed to be chunked into 50 MB segments.

However, looking over the code, chunking requires seekable storage (i.e. random access) on the incoming server (i.e. proget-au.wtg.zone). Are you using cloud storage by chance? If so, cloud storage providers do not currently support random access, so this is somewhat expected. I'm not sure if we could add that support, but we could likely work around it in the incoming code (i.e. what produces the logs).
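To illustrate the constraint, here's a hedged sketch of why each 50 MB segment needs a seekable target stream; the names are assumed for illustration and do not come from the ProGet codebase:

```csharp
// Hypothetical illustration of the random-access requirement for chunked
// writes; ChunkedWriter and its members are invented for this sketch.
using System;
using System.IO;

static class ChunkedWriter
{
    const int SegmentSize = 50 * 1024 * 1024; // replication uses 50 MB segments

    public static long OffsetOf(int segmentIndex) => (long)segmentIndex * SegmentSize;

    public static void WriteSegment(Stream target, long offset, byte[] segment)
    {
        // Cloud blob streams are typically sequential/write-once, so
        // CanSeek is false and segments cannot be positioned by offset.
        if (!target.CanSeek)
            throw new NotSupportedException(
                "Chunked writes require a seekable (random-access) target.");

        target.Seek(offset, SeekOrigin.Begin); // jump to this segment's slot
        target.Write(segment, 0, segment.Length);
    }
}
```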
Open to ideas; I'm not sure how important this is for you to get working, or if it's just something you happened to notice. Let us know.
If you are NOT using cloud storage, can you temporarily enable verbose replication under Admin > Advanced Settings, and share the results up until that error?
Thanks,
Alana
-
Thanks for looking into this so promptly. We are indeed using cloud storage (AWS S3 and Pure Storage S3), so that sounds like the root cause. Quite a few of our assets are several GB in size, so S3 may simply not be suitable for us. We'll look into alternative storage mechanisms. Thanks again.
-
Hi @james-woods_8996 ,
I looked into this a little more, and it turns out that our cloud storage providers do in fact support chunking -- but the outgoing replication code is not taking advantage of that. We would like to fix that in ProGet 2026.
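For context, here's a hedged sketch of the kind of provider-side chunking the outgoing code could take advantage of: S3's multipart upload, shown via the AWS SDK's TransferUtility. The bucket and key names are hypothetical, and this illustrates the S3 capability, not ProGet's replication code:

```csharp
// Sketch of S3 multipart upload via the AWS SDK for .NET; bucket, key,
// and file path are assumptions made up for this example.
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Transfer;

class MultipartUploadSketch
{
    static async Task Main()
    {
        using var s3 = new AmazonS3Client(); // credentials/region from environment
        var transfer = new TransferUtility(s3);

        // TransferUtility switches to multipart upload automatically for
        // large files, sending the object in parts instead of one buffer.
        await transfer.UploadAsync(new TransferUtilityUploadRequest
        {
            BucketName = "example-proget-storage",  // hypothetical bucket
            Key = "assets/ContentPackage.zip",      // hypothetical key
            FilePath = "ContentPackage.zip",
            PartSize = 50 * 1024 * 1024             // e.g. 50 MB parts
        });
    }
}
```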
However, in the meantime, we can fix the incoming replication code (i.e. what's throwing the error) pretty easily via PG-3102 - hopefully we'll get that in the upcoming maintenance release.
Thanks,
Alana