
    Inedo Community Forums


    cole.brand_2889

    @cole.brand_2889

    Reputation: 0 · Posts: 2 · Profile views: 1 · Followers: 0 · Following: 0

    Best posts made by cole.brand_2889

    This user hasn't posted anything yet.

    Latest posts made by cole.brand_2889

    • Amazon.S3.AmazonS3Exception: Please reduce your request rate.
      An error occurred processing a GET request to <snip>: Please reduce your request rate.
      
      Amazon.S3.AmazonS3Exception: Please reduce your request rate.
       ---> Amazon.Runtime.Internal.HttpErrorResponseException: Exception of type 'Amazon.Runtime.Internal.HttpErrorResponseException' was thrown.
         at Amazon.Runtime.HttpWebRequestMessage.ProcessHttpResponseMessage(HttpResponseMessage responseMessage)
         at Amazon.Runtime.HttpWebRequestMessage.GetResponseAsync(CancellationToken cancellationToken)
         at Amazon.Runtime.Internal.HttpHandler`1.InvokeAsync[T](IExecutionContext executionContext)
         at Amazon.Runtime.Internal.RedirectHandler.InvokeAsync[T](IExecutionContext executionContext)
         at Amazon.Runtime.Internal.Unmarshaller.InvokeAsync[T](IExecutionContext executionContext)
         at Amazon.S3.Internal.AmazonS3ResponseHandler.InvokeAsync[T](IExecutionContext executionContext)
         at Amazon.Runtime.Internal.ErrorHandler.InvokeAsync[T](IExecutionContext executionContext)
      

      Are there any options to enforce retry with backoff in ProGet itself? I'm working to reduce the caller frequency with some backoffs on our side, but it's a complicated web, isn't it?

      Alternatively, has anyone found a good proxy solution to cache packages and keep the S3 storage fast and global (for DR purposes)? We run ProGet directly in k8s against S3, with no dedicated servers in the mix. Our storage is currently around 10 TB of data spread across dozens of feeds, and unfortunately some of them are hit frequently, given the mix of developers and tooling in house.
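On the caller side, the usual remedy for S3 throttling (the 503 "SlowDown" / "Please reduce your request rate" response) is exponential backoff with jitter. A minimal sketch of that pattern, assuming a plain callable to retry (the helper name and parameters are illustrative, not a ProGet feature):

```python
import random
import time

def with_backoff(call, max_attempts=5, base=0.5, cap=30.0, sleep=time.sleep):
    """Retry `call` with capped exponential backoff plus full jitter.

    Hypothetical client-side helper -- not a ProGet or AWS SDK API.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # retry budget exhausted; surface the original error
            # Full jitter: wait a random time up to the capped exponential delay.
            sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

For requests that originate from your own .NET code rather than from ProGet, the AWS SDK also exposes built-in retry settings on the client config (e.g. MaxErrorRetry and RetryMode), which may be the simpler knob to turn.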

      posted in Support
      cole.brand_2889
    • Unhandled exception in execution #xxx: 42702: column reference "DatabasePath_Text" is ambiguous
      Npgsql.PostgresException (0x80004005): 42702: column reference "DatabasePath_Text" is ambiguous
         at Npgsql.Internal.NpgsqlConnector.ReadMessageLong(Boolean async, DataRowLoadingMode dataRowLoadingMode, Boolean readingNotifications, Boolean isReadingPrependedMessage)
         at 
      <snip>
      Inedo.ProGet.Feeds.Docker.BlobScanner.DockerBlobScanner.ScanBlobAsync(DockerBlobs blobData, Stream blobStream, ILogSink log) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E653823\Src\src\ProGet\Feeds\Docker\BlobScanner\DockerBlobScanner.cs:line 112
      <snip>
        Exception data:
          Severity: ERROR
          SqlState: 42702
          MessageText: column reference "DatabasePath_Text" is ambiguous
          InternalPosition: 750
          InternalQuery: WITH BlobPackages_Table AS (
              SELECT * FROM jsonb_to_recordset("@BlobPackages_Table") AS ("DatabasePath_Text" VARCHAR(200), "PackageVersion_Id" INT)
          ),
          packagesToRemove AS (
              SELECT *
                FROM "DockerBlobPackages" DBP
                LEFT JOIN BlobPackages_Table BPT 
                       ON BPT."DatabasePath_Text" = DBP."DatabasePath_Text" 
                      AND BPT."PackageVersion_Id" = DBP."PackageVersion_Id"
               WHERE DBP."DockerBlob_Id" = "@DockerBlob_Id" 
                 AND BPT."PackageVersion_Id" IS NULL
          ),
          deletes AS  (
              DELETE FROM "DockerBlobPackages" DBP
                    USING packagesToRemove PTR
                    WHERE DBP."DockerBlob_Id" = "@DockerBlob_Id"
                      AND DBP."DatabasePath_Text" = PTR."DatabasePath_Text" 
                      AND DBP."PackageVersion_Id" = PTR."PackageVersion_Id"
          ),
          newBlobPackages AS (
              SELECT BPT.*
                FROM BlobPackages_Table BPT
                     LEFT JOIN "DockerBlobPackages" DBP 
                            ON DBP."DockerBlob_Id" = "@DockerBlob_Id" 
                           AND BPT."DatabasePath_Text" = DBP."DatabasePath_Text" 
                           AND BPT."PackageVersion_Id" = DBP."PackageVersion_Id"
               WHERE DBP."PackageVersion_Id" IS NULL
          )
          INSERT INTO "DockerBlobPackages"
               SELECT "@DockerBlob_Id",
                      "DatabasePath_Text",
                      "PackageVersion_Id"
                 FROM newBlobPackages BPT
          Where: PL/pgSQL function "DockerBlobs_RecordScanData"(integer,xml,jsonb) line 18 at SQL statement
          File: parse_relation.c
          Line: 831
          Routine: scanRTEForColumn
      

      Is it possible we have something misconfigured on our Aurora Postgres instance? Or is this a bug in the application?

      Version 2025.25 (Build 11)
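For reference, SQLSTATE 42702 means an unqualified column name matched more than one relation in the query's scope, so it is raised by the SQL inside the DockerBlobs_RecordScanData routine itself rather than by anything Aurora-specific. A self-contained sketch of the same error class (SQLite is used here only so the demo runs anywhere; the table aliases merely echo the ones in the trace):

```python
import sqlite3

# Two relations that both expose DatabasePath_Text, as in the logged query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DBP (DatabasePath_Text TEXT)")
conn.execute("CREATE TABLE BPT (DatabasePath_Text TEXT)")

try:
    # Unqualified name matches both tables -> "ambiguous column" error.
    conn.execute(
        "SELECT DatabasePath_Text FROM DBP "
        "JOIN BPT ON DBP.DatabasePath_Text = BPT.DatabasePath_Text"
    )
except sqlite3.OperationalError as exc:
    print(exc)

# Qualifying the column with a table alias resolves the ambiguity.
conn.execute(
    "SELECT DBP.DatabasePath_Text FROM DBP "
    "JOIN BPT ON DBP.DatabasePath_Text = BPT.DatabasePath_Text"
)
```

Since the failing statement comes from the application's own stored routine, this looks like something to report to the vendor rather than a configuration issue on the database side.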

      posted in Support
      cole.brand_2889