    Inedo Community Forums

    Posts made by atripp

    • RE: Support for Air-gapped environments

      Hi @steviecoaster ,

      Offline / air-gapped installation is common and a documented use-case:
      https://docs.inedo.com/docs/installation/windows/inedo-hub/offline

      As the article mentions, you can download the "offline installer", which is essentially a
      self-extracting zip file that runs a Custom Installer created using the Inedo Hub.

      That .exe file is not suitable for automation, so if you want to automate installation/upgrades you'll need an alternative approach. That article outlines a few concepts, but ultimately it depends on how "air-gapped" we're talking here.

      If we're talking a SCIF with "security-guard inspected installation media", then I don't think automation is really going to get you much ;)

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Allow networkservice to use the DB in Proget

      Hi @reseau_6272 ,

      Just to confirm, you've switched the ProGet service from a domain account to use Network Service, and when starting the service you're getting some kind of permission error from SQL Server?

      The easiest solution is to simply switch to using a username/password instead of Windows Integrated Authentication and edit the connection string appropriately. Keep in mind that, eventually, you will need to move away from SQL Server and migrate to PostgreSQL, which will not have these issues.

      Otherwise, you will need to explicitly create a login for the machine account. Network Service is represented in SQL Server as the machine account (e.g., DOMAIN\MACHINENAME$), and that identity must be created with CREATE LOGIN [MYDOMAIN\WEB01$] FROM WINDOWS; before you can assign permissions to it.
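      As a rough sketch, the full sequence might look like this (MYDOMAIN\WEB01$ and the [ProGet] database name are placeholders for your environment):

```sql
-- Placeholders: MYDOMAIN\WEB01$ is the web server's machine account,
-- [ProGet] is the ProGet database.
CREATE LOGIN [MYDOMAIN\WEB01$] FROM WINDOWS;
USE [ProGet];
CREATE USER [MYDOMAIN\WEB01$] FOR LOGIN [MYDOMAIN\WEB01$];
-- ProGet requires db_owner on its database
ALTER ROLE db_owner ADD MEMBER [MYDOMAIN\WEB01$];
```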

      Thanks,
      Alana

      posted in Support
    • RE: [ProGet] Unexpected redirect when accessing Maven package with non-standard version starting with a character

      Hi @koksime-yap_5909,

      Good news, it's available now for testing! We're considering merging it into ProGet 2025, or possibly holding it for ProGet 2026.

      Anyway, I posted a lot more detail now:
      https://forums.inedo.com/topic/5696/proget-is-unable-to-download-maven-packages-that-use-a-nonstandard-versioning-scheme/2

      Thanks,
      Alana

      FYI -- I locked this topic; in case anyone has comments/questions on that change, that post will be the "official" thread at this point :)

      posted in Support
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      Thanks for checking that! Well, I'm not sure then :)

      From here, how about sending us the package? Then I can upload it and see about debugging in ProGet to find out where it's coming from.

      If you can open a ticket and reference QA-3010 somewhere, it'll link the issues right up on our dashboard. Then you can attach the file to that ticket.

      We'll respond on there, and eventually update this thread once we figure out the issue.

      Thanks,
      Alana

      posted in Support
    • RE: Proget is unable to download Maven packages that use a nonstandard versioning scheme

      Hi @joshua-mitchell_8090 ,

      Thanks for the inquiry! The changes are available in the inedo/proget:25.0.24-ci.4 container image, and we'd love to get a second set of eyes. Are you using Docker?

      The changes are relatively simple, but we generally avoid changing things like this in maintenance releases, so they're currently slated for ProGet 2026. That said, they may be okay for a maintenance release; please let us know, and we'll decide whether to release sooner based on feedback from you and other users.

      Here's what we changed.

      First, we added a "sixth" component called IncrementalVersion2 that will support versions like 1.2.3.4-mybuild-678 (where 4 is the second incrementing version), so that vulnerability identification can work better. Our implementation is based on the Maven version specs, which, in retrospect, seem to be followed only by ProGet. Pretty low risk here.

      Second, we changed our "path parsing" logic, which identifies the groupId, artifactId, version, and artifactType from a path like /junit/junit/4.8.2/junit-4.8.2.jar or /mygroup/more-group/group-42/my-artifact/1.0-SNAPSHOT/maven-metadata.xml.

      It's a little hard to explain, so I'll just share the new and old logic:

      //OLD: if (urlPartsQ.TryPeek(out string? maybeVersion) && char.IsNumber(maybeVersion, 0))
      if (urlPartsQ.TryPeek(out string? maybeVersion) && (
          char.IsNumber(maybeVersion, 0)
          || maybeVersion.EndsWith("-SNAPSHOT", StringComparison.OrdinalIgnoreCase)
          || (this.FileName is not null && !this.FileName.Equals("maven-metadata.xml", StringComparison.OrdinalIgnoreCase))
          ))
      {
          this.Version = maybeVersion;
          urlPartsQ.Pop();
      }
      

      Long story short, this seems to work fine for v8.5.0 and shouldn't break unless someone is uploading improperly named artifact files (e.g., my-group/my-artifact/version-1000/maven-metadata.xml or my-photo/cool-snapshot/hello-kitty.jpg).
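      To make that concrete, here's a standalone sketch of the new heuristic (the class and method names are mine, not ProGet's): a path segment is treated as the version when it starts with a digit, ends with -SNAPSHOT, or the requested file is anything other than maven-metadata.xml.

```csharp
using System;

static class MavenPathHeuristic
{
    // Sketch of the version-detection rule described above; a segment counts
    // as a version when it starts with a digit, ends with "-SNAPSHOT", or the
    // requested file is something other than maven-metadata.xml.
    public static bool LooksLikeVersion(string segment, string? fileName) =>
        char.IsNumber(segment, 0)
        || segment.EndsWith("-SNAPSHOT", StringComparison.OrdinalIgnoreCase)
        || (fileName is not null && !fileName.Equals("maven-metadata.xml", StringComparison.OrdinalIgnoreCase));
}
```

      So a request for /junit/junit/4.8.2/junit-4.8.2.jar treats "4.8.2" as the version, /my-artifact/1.0-SNAPSHOT/maven-metadata.xml treats "1.0-SNAPSHOT" as the version, and a bare /my-group/my-artifact/maven-metadata.xml is left with no version segment.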

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: [ProGet] Unexpected redirect when accessing Maven package with non-standard version starting with a character

      Hi @koksime-yap_5909 ,

      Just as a quick update! Given that this is a more widespread problem, we've fixed the code and plan to release it in ProGet 2026 (or possibly sooner, if we can make it low-risk enough for a maintenance release).

      Thanks,
      Alana

      posted in Support
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      Sorry, it looks like we're dealing with a lot more code than I expected we would. I really don't know what to look at, and neither your code nor our code makes sense to me (it's been many, many years since anyone edited it).

      I'm not sure if it's helpful, but I'll share the code of our class. If you spot anything simple to change, we can explore it. Otherwise, I think the only way to move forward would be for you to share some example NuGet packages that we can attach a debugger to.

      Here's the MicrosoftPdbFile class, which I've combined into one big block here:

      using System;
      using System.Collections;
      using System.Collections.Generic;
      using System.Collections.Immutable;
      using System.IO;
      using System.Linq;
      using System.Text;
      
      namespace Inedo.ProGet.Symbols;
      
      /// <summary>
      /// Provides access to the data contained in a Microsoft PDB file.
      /// </summary>
      public sealed class MicrosoftPdbFile : IDisposable, IPdbFile
      {
          private RootIndex root;
          private Dictionary<string, int> nameIndex;
          private bool leaveStreamOpen;
          private bool disposed;
      
          /// <summary>
          /// Initializes a new instance of the <see cref="MicrosoftPdbFile"/> class.
          /// </summary>
          /// <param name="stream">Stream which is backed by a PDB file.</param>
          /// <param name="leaveStreamOpen">Value indicating whether to leave the stream open after this instance is disposed.</param>
          public MicrosoftPdbFile(Stream stream, bool leaveStreamOpen)
          {
              if (stream == null)
                  throw new ArgumentNullException(nameof(stream));
      
              this.leaveStreamOpen = leaveStreamOpen;
              this.Initialize(stream);
          }
      
          /// <summary>
          /// Gets the PDB signature.
          /// </summary>
          public uint Signature { get; private set; }
          /// <summary>
          /// Gets the PDB age.
          /// </summary>
          public uint Age { get; private set; }
          /// <summary>
          /// Gets the PDB guid.
          /// </summary>
          public Guid Guid { get; private set; }
      
          ImmutableArray<byte> IPdbFile.Id => this.Guid.ToByteArray().ToImmutableArray();
          bool IPdbFile.IsPortable => false;
      
          /// <summary>
          /// Returns a stream backed by the data in a named PDB stream.
          /// </summary>
          /// <param name="streamName">Name of the PDB stream to open.</param>
          /// <returns>Stream backed by the specified named stream.</returns>
          public Stream OpenStream(string streamName)
          {
              if (streamName == null)
                  throw new ArgumentNullException(nameof(streamName));
      
              int? streamIndex = this.TryGetStream(streamName);
              if (streamIndex == null)
                  throw new InvalidOperationException($"Stream {streamName} was not found.");
      
              return this.root.OpenRead((int)streamIndex);
          }
          /// <summary>
          /// Returns an enumeration of all of the stream names in the PDB file.
          /// </summary>
          /// <returns>Enumeration of all stream names.</returns>
          public IEnumerable<string> EnumerateStreams() => this.nameIndex.Keys;
          /// <summary>
          /// Returns an enumeration of all of the source file names in the PDB file.
          /// </summary>
          /// <returns>Enumeration of all of the source file names.</returns>
          public IEnumerable<string> GetSourceFileNames()
          {
              var srcFileNames = this.EnumerateStreams()
                  .Where(s => s.StartsWith("/src/files/", StringComparison.OrdinalIgnoreCase))
                  .Select(s => s.Substring("/src/files/".Length))
                  .ToHashSet(StringComparer.OrdinalIgnoreCase);
      
              try
              {
                  using (var namesStream = this.OpenStream("/names"))
                  using (var namesReader = new BinaryReader(namesStream))
                  {
                      namesStream.Position = 8;
                      int length = namesReader.ReadInt32();
                      long endPos = length + 12;
      
                      while (namesStream.Position < endPos && namesStream.Position < namesStream.Length)
                      {
                          try
                          {
                              var name = ReadNullTerminatedString(namesReader);
                              if (name.Length > 0 && Path.IsPathRooted(name))
                                  srcFileNames.Add(name);
                          }
                          catch
                          {
                              // Can't read name
                          }
                      }
                  }
              }
              catch
              {
                  // Can't enumerate names stream
              }
      
              return srcFileNames;
          }
      
          /// <summary>
          /// Closes the PDB file.
          /// </summary>
          public void Close()
          {
              if (!this.disposed)
              {
                  this.root.Close(this.leaveStreamOpen);
                  this.disposed = true;
              }
          }
          void IDisposable.Dispose() => this.Close();
      
          private void Initialize(Stream stream)
          {
              var fileSignature = new byte[0x20];
              stream.Read(fileSignature, 0, fileSignature.Length);
      
              this.root = new RootIndex(stream);
      
              using (var sigStream = this.root.OpenRead(1))
              using (var reader = new BinaryReader(sigStream))
              {
                  uint version = reader.ReadUInt32();
                  this.Signature = reader.ReadUInt32();
                  this.Age = reader.ReadUInt32();
                  this.Guid = new Guid(reader.ReadBytes(16));
      
                  this.nameIndex = ReadNameIndex(reader);
              }
          }
          private int? TryGetStream(string name) => this.nameIndex.TryGetValue(name, out int index) ? (int?)index : null;
      
          private static Dictionary<string, int> ReadNameIndex(BinaryReader reader)
          {
              int stringOffset = reader.ReadInt32();
      
              var startOffset = reader.BaseStream.Position;
              reader.BaseStream.Seek(stringOffset, SeekOrigin.Current);
      
              int count = reader.ReadInt32();
              int hashTableSize = reader.ReadInt32();
      
              var present = new BitArray(reader.ReadBytes(reader.ReadInt32() * 4));
              var deleted = new BitArray(reader.ReadBytes(reader.ReadInt32() * 4));
              if (deleted.Cast<bool>().Any(b => b))
                  throw new InvalidDataException("PDB format not supported: deleted bits are not 0.");
      
              var nameIndex = new Dictionary<string, int>(hashTableSize + 100, StringComparer.OrdinalIgnoreCase);
      
              for (int i = 0; i < hashTableSize; i++)
              {
                  if (i < present.Length && present[i])
                  {
                      int ns = reader.ReadInt32();
                      int ni = reader.ReadInt32();
      
                      var pos = reader.BaseStream.Position;
                      reader.BaseStream.Position = startOffset + ns;
                      var name = ReadNullTerminatedString(reader);
                      reader.BaseStream.Position = pos;
      
                      nameIndex.Add(name, ni);
                  }
              }
      
              return nameIndex;
          }
          private static string ReadNullTerminatedString(BinaryReader reader)
          {
              var data = new List<byte>();
              var b = reader.ReadByte();
              while (b != 0)
              {
                  data.Add(b);
                  b = reader.ReadByte();
              }
      
              return Encoding.UTF8.GetString(data.ToArray());
          }
      
          private sealed class PagedFile : IDisposable
          {
              private LinkedList<CachedPage> pages = new LinkedList<CachedPage>();
              private Stream baseStream;
              private readonly object lockObject = new object();
              private BitArray freePages;
              private uint pageSize;
              private uint pageCount;
              private bool disposed;
      
              public PagedFile(Stream baseStream, uint pageSize, uint pageCount)
              {
                  this.baseStream = baseStream;
                  this.pageSize = pageSize;
                  this.pageCount = pageCount;
                  this.CacheSize = 1000;
              }
      
              public int CacheSize { get; }
              public uint PageSize => this.pageSize;
              public uint PageCount => this.pageCount;
      
              public void InitializeFreePageList(byte[] data)
              {
                  this.freePages = new BitArray(data);
              }
              public byte[] GetFreePageList()
              {
                  var data = new byte[this.freePages.Count / 8];
                  for (int i = 0; i < data.Length; i++)
                  {
                      for (int j = 0; j < 8; j++)
                      {
                          if (this.freePages[(i * 8) + j])
                              data[i] |= (byte)(1 << j);
                      }
                  }
      
                  return data;
              }
              public byte[] GetPage(uint pageIndex)
              {
                  if (this.disposed)
                      throw new ObjectDisposedException(nameof(PagedFile));
                  if (pageIndex >= this.pageCount)
                      throw new ArgumentOutOfRangeException();
      
                  lock (this.lockObject)
                  {
                      var page = this.pages.FirstOrDefault(p => p.PageIndex == pageIndex);
                      if (page != null)
                      {
                          this.pages.Remove(page);
                      }
                      else
                      {
                          var buffer = new byte[this.pageSize];
                          this.baseStream.Position = this.pageSize * pageIndex;
                          this.baseStream.Read(buffer, 0, buffer.Length);
                          page = new CachedPage
                          {
                              PageIndex = pageIndex,
                              PageData = buffer
                          };
                      }
      
                      while (this.pages.Count >= this.CacheSize)
                      {
                          this.pages.RemoveLast();
                      }
      
                      this.pages.AddFirst(page);
      
                      return page.PageData;
                  }
              }
              public void Dispose()
              {
                  this.baseStream.Dispose();
                  this.pages = null;
                  this.disposed = true;
              }
      
              private sealed class CachedPage : IEquatable<CachedPage>
              {
                  public uint PageIndex;
                  public byte[] PageData;
      
                  public bool Equals(CachedPage other) => this.PageIndex == other.PageIndex && this.PageData == other.PageData;
                  public override bool Equals(object obj) => obj is CachedPage p ? this.Equals(p) : false;
                  public override int GetHashCode() => this.PageIndex.GetHashCode();
              }
          }
          private sealed class PdbStream : Stream
          {
              private RootIndex root;
              private StreamInfo streamInfo;
              private uint position;
      
              public PdbStream(RootIndex root, StreamInfo streamInfo)
              {
                  this.root = root;
                  this.streamInfo = streamInfo;
              }
      
              public override bool CanRead => true;
              public override bool CanSeek => true;
              public override bool CanWrite => false;
              public override long Length => this.streamInfo.Length;
              public override long Position
              {
                  get => this.position;
                  set => this.position = (uint)value;
              }
      
              public override void Flush()
              {
              }
              public override int Read(byte[] buffer, int offset, int count)
              {
                  if (buffer == null)
                      throw new ArgumentNullException(nameof(buffer));
      
                  int bytesRemaining = Math.Min(count, (int)(this.Length - this.position));
                  int bytesRead = 0;
      
                  while (bytesRemaining > 0)
                  {
                      uint currentPage = this.position / this.root.Pages.PageSize;
                      uint currentPageOffset = this.position % this.root.Pages.PageSize;
      
                      var page = this.root.Pages.GetPage(this.streamInfo.Pages[currentPage]);
      
                      int bytesToCopy = Math.Min(bytesRemaining, (int)(this.root.Pages.PageSize - currentPageOffset));
      
                      Array.Copy(page, currentPageOffset, buffer, offset + bytesRead, bytesToCopy);
                      bytesRemaining -= bytesToCopy;
                      this.position += (uint)bytesToCopy;
                      bytesRead += bytesToCopy;
                  }
      
                  return bytesRead;
              }
              public override int ReadByte()
              {
                  if (this.position >= this.Length)
                      return -1;
      
                  uint currentPage = this.position / this.root.Pages.PageSize;
                  uint currentPageOffset = this.position % this.root.Pages.PageSize;
      
                  var page = this.root.Pages.GetPage(this.streamInfo.Pages[currentPage]);
                  this.position++;
      
                  return page[currentPageOffset];
              }
              public override long Seek(long offset, SeekOrigin origin)
              {
                  switch (origin)
                  {
                      case SeekOrigin.Begin:
                          this.position = (uint)offset;
                          break;
      
                      case SeekOrigin.Current:
                          this.position = (uint)((long)this.position + offset);
                          break;
      
                      case SeekOrigin.End:
                          this.position = (uint)(this.Length + offset);
                          break;
                  }
      
                  return this.position;
              }
              public override void SetLength(long value) => throw new NotSupportedException();
              public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
              public override void WriteByte(byte value) => throw new NotSupportedException();
          }
          private sealed class RootIndex
          {
              private BinaryReader reader;
              private List<StreamInfo> streams = new List<StreamInfo>();
              private StreamInfo rootStreamInfo;
              private StreamInfo rootPageListStreamInfo;
              private uint freePageMapIndex;
      
              public RootIndex(Stream stream)
              {
                  this.reader = new BinaryReader(stream);
                  this.Initialize();
              }
      
              public PagedFile Pages { get; private set; }
      
              public Stream OpenRead(int streamIndex)
              {
                  var streamInfo = this.streams[streamIndex];
                  return new PdbStream(this, streamInfo);
              }
              public void Close(bool leaveStreamOpen)
              {
                  if (!leaveStreamOpen)
                      this.reader.Dispose();
              }
      
              private void Initialize()
              {
                  this.reader.BaseStream.Position = 0x20;
                  var pageSize = this.reader.ReadUInt32();
                  var pageFlags = this.reader.ReadUInt32();
                  var pageCount = this.reader.ReadUInt32();
                  var rootSize = this.reader.ReadUInt32();
                  this.reader.ReadUInt32(); // skip reserved
      
                  this.Pages = new PagedFile(this.reader.BaseStream, pageSize, pageCount);
                  this.freePageMapIndex = pageFlags;
      
                  // Calculate the number of pages needed to store the root data
                  int rootPageCount = (int)(rootSize / pageSize);
                  if ((rootSize % pageSize) != 0)
                      rootPageCount++;
      
                  // Calculate the number of pages needed to store the list of pages
                  int rootIndexPages = (rootPageCount * 4) / (int)pageSize;
                  if (((rootPageCount * 4) % (int)pageSize) != 0)
                      rootIndexPages++;
      
                  // Read the page indices of the pages that contain the root pages
                  var rootIndices = new List<uint>(rootIndexPages);
                  for (int i = 0; i < rootIndexPages; i++)
                      rootIndices.Add(this.reader.ReadUInt32());
      
                  // Read the free page map
                  this.reader.BaseStream.Position = pageFlags * pageSize;
                  this.Pages.InitializeFreePageList(this.reader.ReadBytes((int)pageSize));
      
                  this.rootPageListStreamInfo = new StreamInfo(rootIndices.ToArray(), (uint)rootPageCount * 4);
      
                  // Finally actually read the root indices themselves
                  var rootPages = new List<uint>(rootPageCount);
                  using (var rootPageListStream = new PdbStream(this, this.rootPageListStreamInfo))
                  using (var pageReader = new BinaryReader(rootPageListStream))
                  {
                      for (int i = 0; i < rootPageCount; i++)
                          rootPages.Add(pageReader.ReadUInt32());
                  }
      
                  this.rootStreamInfo = new StreamInfo(rootPages.ToArray(), rootSize);
                  using (var rootStream = new PdbStream(this, this.rootStreamInfo))
                  {
                      var rootReader = new BinaryReader(rootStream);
      
                      uint streamCount = rootReader.ReadUInt32();
      
                      var streamLengths = new uint[streamCount];
                      for (int i = 0; i < streamLengths.Length; i++)
                          streamLengths[i] = rootReader.ReadUInt32();
      
                      var streamPages = new uint[streamCount][];
                      for (int i = 0; i < streamPages.Length; i++)
                      {
                          if (streamLengths[i] > 0 && streamLengths[i] < int.MaxValue)
                          {
                              uint streamLengthInPages = streamLengths[i] / pageSize;
                              if ((streamLengths[i] % pageSize) != 0)
                                  streamLengthInPages++;
      
                              streamPages[i] = new uint[streamLengthInPages];
                              for (int j = 0; j < streamPages[i].Length; j++)
                                  streamPages[i][j] = rootReader.ReadUInt32();
                          }
                      }
      
                      for (int i = 0; i < streamLengths.Length; i++)
                      {
                          this.streams.Add(
                              new StreamInfo(streamPages[i], streamLengths[i])
                          );
                      }
                  }
              }
          }
          private sealed class StreamInfo
          {
              private uint[] pages;
              private uint length;
      
              public StreamInfo(uint[] pages, uint length, bool dirty = false)
              {
                  this.pages = pages;
                  this.length = length;
                  this.IsDirty = dirty;
              }
      
              public uint[] Pages
              {
                  get => this.pages;
                  set
                  {
                      if (this.pages != value)
                      {
                          this.pages = value;
                          this.IsDirty = true;
                      }
                  }
              }
              public uint Length
              {
                  get => this.length;
                  set
                  {
                      if (this.length != value)
                      {
                          this.length = value;
                          this.IsDirty = true;
                      }
                  }
              }
              public bool IsDirty { get; private set; }
          }
      }
      
      posted in Support
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      It's certainly possible; there are a few hundred lines of code that make up the MicrosoftPdbFile class, so I don't know which parts to share with you. Of course, I'm happy to share it all if you'd like.

      Since you mentioned your colleague was able to read the file, perhaps you can share what you did, and I can see how it compares to our code?

      Thanks,
      Alana

      posted in Support
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      If there's an error reading the file using GetMetadataReader, we fall back to loading it with the MicrosoftPdbFile class that we wrote. So I'm guessing that's what's producing the wrong information?

      Anyway, let me share the full code for the PortablePdbFile class. I summarized it before, but this way you can see the full context of what we're doing and why.

      using System;
      using System.Collections.Generic;
      using System.Collections.Immutable;
      using System.IO;
      using System.Reflection.Metadata;
      
      namespace Inedo.ProGet.Symbols;
      
      public sealed class PortablePdbFile : IPdbFile
      {
          private readonly MetadataReader metadataReader;
      
          private PortablePdbFile(MetadataReader metadataReader) => this.metadataReader = metadataReader;
      
          // visual studio always treats this value like a guid, despite the portable pdb spec
          public ImmutableArray<byte> Id => this.metadataReader.DebugMetadataHeader.Id.RemoveRange(16, 4);
      
          // not really age, but actually last 4 bytes of id - ignored by visual studio
          uint IPdbFile.Age => BitConverter.ToUInt32(this.metadataReader.DebugMetadataHeader.Id.ToArray(), 16);
          bool IPdbFile.IsPortable => true;
      
          public IEnumerable<string> GetSourceFileNames()
          {
              foreach (var docHandle in this.metadataReader.Documents)
              {
                  if (!docHandle.IsNil)
                  {
                      var doc = this.metadataReader.GetDocument(docHandle);
                      yield return this.metadataReader.GetString(doc.Name);
                  }
              }
          }
      
          public static PortablePdbFile Load(Stream source)
          {
              if (source == null)
                  throw new ArgumentNullException(nameof(source));
      
              try
              {
                  var provider = MetadataReaderProvider.FromPortablePdbStream(source, MetadataStreamOptions.LeaveOpen);
                  var reader = provider.GetMetadataReader();
                  if (reader.MetadataKind != MetadataKind.Ecma335)
                      return null;
      
                  return new PortablePdbFile(reader);
              }
              catch
              {
                  return null;
              }
          }
      
          void IDisposable.Dispose()
          {
          }
      }
      
      posted in Support
    • RE: Proget 25.x and Azure PostGres

      Hi @certificatemanager_4002 ,

      From a cybersecurity perspective, it's fine to leave it as root, since the core process runs as the non-root user postgres inside the container. You're never exposing a network service while the containerized process has root privileges.

      Here is more information on this if you're curious:
      https://stackoverflow.com/questions/73672857/how-to-run-postgres-in-docker-as-non-root-user

      As you can see in that link, it's technically possible to configure as non-root, but it requires more effort and doesn't really get you any benefit.

      As for load-testing and restarting, it really depends on the hardware and similar factors. Keep in mind that InedoDB is simply the postgresql container image with some minor configuration tweaks/changes. So any question you ask about InedoDB you can really ask about postgresql as well.

      As for using an external PostgreSQL server, the only information we have at this time is in the link I sent you before. You really need to be an expert on PostgreSQL if you wish to run your own server.

      Thanks,
      Alana

      posted in Support
    • RE: An error occurred in the web application: v3 index.json not found.

      It looks like someone (at IP 10.2.12.133) has configured the wrong URL in Visual Studio or something. They are trying to access the NuGet API via the wrong URL (note the /feeds vs /nuget in the base URL).

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Update stats on MSSQL

      hi @sigurd-hansen_7559 ,

      ProGet runs EXEC sp_updatestats, which "runs UPDATE STATISTICS against all user-defined and internal tables in the current database."

      According to the documentation, "you must be the owner of the database (dbo)," so a dbo-user should have permission. Maybe they did something strange and somehow blocked it.

      Please note that the ProGet database user must be a db_owner, or ProGet will not function. This is a requirement starting in ProGet 2025.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Proget 25.x and Azure PostGres

      Hi @certificatemanager_4002 ,

      InedoDB is relatively new, so we're still working through some of the setup and installation documentation. We recently published a new release with installer improvements that address some default configuration issues.

      Otherwise, here is the documentation we have on using an External PostgreSQL server:
      https://docs.inedo.com/docs/installation/postgresql#external-postgres

      Note that we don't recommend it unless you have PostgreSQL server expertise - especially when it comes to Azure PostgreSQL, since they make certain customizations to the engine that will be difficult to troubleshoot/diagnose if you're not familiar with them. In particular when it comes to resource constraints and limiting/throttling usage.

      So, unless you have that in-house expertise, we suggest InedoDB.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      I'm not really sure of the differences between the files, to be honest, but that could be possible.

      I know some of the PDB tooling is really ancient (like 30+ years old) and relies on old .exe files that no other program can read -- though I'm not certain that's still the case.

      This is the code we use, by the way:

      var provider = MetadataReaderProvider.FromPortablePdbStream(source, MetadataStreamOptions.LeaveOpen);
      var reader = provider.GetMetadataReader();
      if (reader.MetadataKind != MetadataKind.Ecma335)
          return null;

      var pdbId = reader.DebugMetadataHeader.Id; // <-- this is how we get the PDB ID
      

      If that property is returning the wrong ID, then I don't know if we can solve the problem. It doesn't seem feasible to write our own PDB parsing, and I don't think there's another way to do it...

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Download the cached packages to local machine

      Hi @Johnpeter-Commons_1617 ,

      Is this network totally air-gapped, or is ProGet sitting in kind of a DMZ?

      In general, you shouldn't be accessing the internal file store... but in air-gapped scenarios, sometimes you need to be creative. Here is where you can find the location of the packages:

      https://docs.inedo.com/docs/proget/feeds/feed-overview/proget-feed-storage

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      If I'm understanding correctly, the issue is that windbg is attempting to download the symbol with an ID of 0511335..., but the only symbol you're seeing in ProGet is 58544d5...?

      Since this is for C++, this is the Windows/PortablePdb format. In that case, ProGet is using the built-in class called MetadataReader to parse this information. I mean it's possible there's a bug in there, but I think it's more likely that it's the wrong file getting uploaded or something to that effect.

      As far as the URLs go... I'm not sure which one is correct, but windbg seems to try a whole bunch of URLs before it lands on the right one in ProGet. But if you're seeing that 58544d5... symbol in ProGet, then it should be downloadable.
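      For reference, debuggers like windbg generally build those lookup URLs using the SymSrv/SSQP path convention. This is just a sketch of that general convention, not ProGet's exact implementation, and the GUID/file name below are made up:

      ```python
      # Conventional symbol-server lookup path: /<pdb name>/<key>/<pdb name>,
      # where the key is the PDB signature GUID (no dashes) plus the age in hex.
      def symbol_path(pdb_name: str, guid_hex: str, age: int) -> str:
          return f"{pdb_name}/{guid_hex.upper()}{age:X}/{pdb_name}"

      # Hypothetical values, for illustration only:
      print(symbol_path("MyApp.pdb", "58544d5d0f674e1c8c9aabcdef012345", 1))
      # MyApp.pdb/58544D5D0F674E1C8C9AABCDEF0123451/MyApp.pdb
      ```

      If the ID embedded in the .pdb doesn't match what the debugger computed from the binary, the key (the middle path segment) won't match and the lookup 404s -- which would explain the mismatch you're seeing.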

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Is it possible to run PostgreSQL using ProGet.exe without writing a file to disk?

      Hi @dev_7037 ,

      We've actually just changed this and, in the upcoming maintenance release, you'll be able to specify "-" for the file name. When you do that, the query can be entered via stdin.

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Symbol Server id issue

      Hi @it_9582 ,

      What version of ProGet are you using? There was a recent regression (PG-3204) with regards to the symbol server that was fixed in ProGet 2025.19. So hopefully upgrading will fix the issue.

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: The SSL connection could not be established, see inner exception.

      Hi @jeff-williams_1864 ,

      I'm not quite sure why nuget.org would report using a self-signed certificate? That seems off, but it sounds like you're doing "something" with regards to certificates that I don't quite understand :)

      On that note, the /usr/local/share/ca-certificates volume stores the certificates to be included in the container's certificate authority, which is used when connecting to a server with self-signed certificates: https://docs.inedo.com/docs/installation/linux/docker-guide#supported-volumes

      Hope that helps,

      Alana

      posted in Support
      atripp
      atripp
    • RE: ProGet license injection in AKS Pod

      hi @certificatemanager_4002 ,

      The 500 is occurring on /health because licenseStatus=Error and the software is basically unusable until you correct the license issue.

      You would see a similar "blocking" error in the ProGet UI as well - so just check that, and once you correct the license error, the health check will return to normal.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: ProGet license injection in AKS Pod

      Hi @certificatemanager_4002 ,

      The license key is set via the UI, so you can browse/access the service as normal. You will be prompted to enter a key right away when there is no key or it has expired: https://docs.inedo.com/docs/myinedo/activating-a-license-key

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Unverified/not approved chocolatey package categorized with Vulnerabilities:None

      Hi @svc-4x9p2a_6341 ,

      First and foremost, Chocolatey does not incorporate "Vulnerabilities" (i.e. centrally aggregated reports of vendor-reported weaknesses in software) into the package ecosystem. This is just not something that's a part of the Windows ecosystem as a whole, unlike the Linux ecosystem (e.g. Ubuntu OVALs).

      Chocolatey does, however, perform automated malware/virus scanning on packages. That's a totally different thing... please read our How Virus Scanning in Chocolatey Works article to learn more.

      From a technical standpoint, ProGet will use (abuse?) the vulnerability subsystem to treat "flagged" packages as vulnerable. This was a "quick and dirty" way for us to experiment with exposing this data through ProGet without having to build an entirely new subsystem just for Chocolatey packages.

      As for crystalreports2008runtime, it did not fail the virus/malware checking, so it's not going to be seen as "vulnerable" by ProGet. Instead, it hasn't been "validated" by Chocolatey's automated system. That's a different feature altogether (i.e. unrelated to virus checking) - and that ancient crystal reports package long predates the moderation feature in Chocolatey I believe.

      In any case, ProGet does not expose nor allow users to "filter" on this validation status, and it's highly unlikely such a capability would add much value to users - especially considering no one has asked for it, and the cost of developing an entirely new, Chocolatey-only feature is nontrivial.

      That's likely because everyone internalizes their packages; see Why You Should Privatize and Internalize your Chocolatey Packages to learn more.

      Hope that helps, maybe @steviecoaster can assist more.

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Universal Package Versioning

      hi @tyler_5201,

      For a case like this, I'd recommend using a custom metadata field like _vendorVersion or something along those lines. That part is relatively easy.

      The hard part is to "map" the vendor numbers to a SemVer. I would look at the data and decide how you want to "pack" them into three segments.

      2024.3.201 might work, assuming there are less than 100 revisions per service pack. Or maybe 2024.302.1. The number is really just for you, so whatever makes sense to you :)
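      As a sketch of that "packing" idea -- the vendor format here ("2024 SP3 R2") is purely hypothetical, so adapt the regex to whatever your actual version strings look like:

      ```python
      import re

      # Hypothetical vendor format "2024 SP3 R2": pack year, service pack,
      # and revision into three SemVer segments.
      def to_semver(vendor: str) -> str:
          m = re.fullmatch(r"(\d{4}) SP(\d+) R(\d+)", vendor)
          if m is None:
              raise ValueError(f"unrecognized vendor version: {vendor!r}")
          year, sp, rev = (int(g) for g in m.groups())
          return f"{year}.{sp}.{rev}"

      print(to_semver("2024 SP3 R2"))  # 2024.3.2
      ```

      The original vendor string would then go into the custom metadata field, so nothing is lost in the mapping.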

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Using curl to either check or download a script file in Otter

      Hi @scusson_9923 ,

      One idea ... how about a try/catch block?

      It's not great.... but the catch will indicate the file doesn't exist.

      Just a thought...

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Zabbix rpm feed not working correctly

      Hi @Sigve-opedal_6476 , we're currently investigating and will let you know more later this week

      posted in Support
      atripp
      atripp
    • RE: Vulnerability checking on Maven packages

      Hi @davi-morris_9177 ,

      Unfortunately, the source data for these particular vulnerabilities specifies invalid version numbers. A valid Maven version is a 5-part number consisting of 1-3 integer segments (separated by a .), an optional build number (prefixed with a -), and then an optional qualifier (another -). Following these rules, 2.9.10.8 is invalid.

      Valid versions are semantically sorted, whereas invalid versions are alphabetically sorted -- which is what's causing the big headache here, since "2.21.1" < "2.9.10.8" when you sort alphabetically.
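      A quick illustration of the difference (plain Python, just to show string-vs-numeric comparison; nothing ProGet-specific here):

      ```python
      a, b = "2.21.1", "2.9.10.8"

      # Alphabetical (string) comparison: '2' < '9' at the third character,
      # so "2.21.1" sorts *before* "2.9.10.8".
      print(a < b)  # True

      # Numeric (semantic) comparison on integer segments: 21 > 9,
      # so 2.21.1 is actually the *newer* version.
      numeric = lambda v: tuple(int(s) for s in v.split("."))
      print(numeric(a) > numeric(b))  # True
      ```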

      At this time, we don't have any means to "override / bypass" source data, and rewriting/updating our Maven version parsing for just a small corner case (i.e. these old/irrelevant vulnerabilities in particular) doesn't seem worthwhile.

      As such, for the time being, your best solution is just to "Ignore" these vulnerabilities via an assessment. They are totally irrelevant now, not just because they refer to ancient versions, but also because there is simply no realistic real-world exploit path: https://cowtowncoder.medium.com/on-jackson-cves-dont-panic-here-is-what-you-need-to-know-54cd0d6e8062

      FYI - for ProGet 2026, we are working on a lot of improvements in vulnerability management that will reduce the noise of these non-exploitable vulnerabilities so teams can address actual risk and focus on delivering value instead of constant patching.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Layer Scanning is not working with images which is pushed with --compression-format zstd:chunked

      Hi @geraldizo_0690 ,

      Nice find with the busybox image... that makes it a lot easier to test/debug on our end!!

      We already have a ZST library in ProGet so, in theory, it shouldn't be that difficult to use it for layers like this. We'll add that via PG-3218 in an upcoming maintenance release -- currently targeting February 20.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Layer Scanning is not working with images which is pushed with --compression-format zstd:chunked

      Hi @geraldizo_0690 ,

      Are you seeing any errors/messages logged like, Blob xxxxxxx is not a .tar.gz file; nothing to scan.? If you go to Admin > Executions, you may see some historic logs about Container scanning.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Zabbix rpm feed not working correctly

      Hi @Sigve-opedal_6476 ,

      Could you give some tips/guidance on how to repro the error? Ideally, it's something we can see only in ProGet :)

      It's probably some quirk in how they implement things, but I wanted to make sure we're looking at the right things before starting.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Using curl to either check or download a script file in Otter

      Hi @scusson_9923 ,

      That is an internal/web-only API url, so it wouldn't behave quite right outside a web browser.

      I can't think of an easy way to accomplish what you're looking to do.... if you could share some of the bigger picture, maybe we can come up with a different approach / idea that would be easier to accomplish.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: InitContainers never start with Azure Sql on ProGet 25.0.18

      Hi @certificatemanager_4002 ,

      I'm sorry but I'm not familiar enough with Kubernetes to help troubleshoot this issue.

      All that I recognize here is the upgradedb command, which is documented here:
      https://docs.inedo.com/docs/installation/linux/installation-upgrading-docker-containers#upgrading-the-database-only-optional

      If you run that command from the command line (on either Linux or Windows), messages will be written to the console. I wish I could tell you why you aren't seeing them.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Proget apt snapshot support?

      Hi @phil-sutherland_3118 ,

      This is not on a roadmap. Honestly, we don't really understand what a "snapshot" repository is or how they are used.

      We surveyed some customers about it a while ago, and this summarizes what they said: repository snapshots are archaic; they made sense a long time ago, but Docker changed all that. It's so much simpler to use container images like FROM debian:buster-20230919. That's effectively our snapshot, and when we need to maintain old releases (which happens more often than I'd like), we just rebuild the image from that. The other big advantage is that build time is easily 10x faster if not more.

      And then we saw that Debian also maintains its own snapshots (https://snapshot.debian.org/), so we don't quite get how they are used outside of a handful of use cases (like a build process for a specialized appliance OS without Docker).

      Anyway, we're open to considering it... but only two people (including you) have asked in the past several years, so there's no real interest... and we're not sure what they even do :)

      That said, it's possible there's a way to accomplish something that has the same outcomes. For example:

      • create a public aggregate feed (jammy-all) with multiple connectors to Debian, Ubuntu, NGINX, Elasticsearch, etc.
      • create a release feed (jammy-20231101) that snapshots jammy-all

      But we don't know enough to answer that :)

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Http Logs enabled on only one server

      Hi @parthu-reddy ,

      I'm not sure if there's a relation here, but perhaps. The "running out of disk space" is not surprising if you're indexing mega-repositories like the public Debian repos; they are gigabytes in size. Here's some more info about those:
      https://blog.inedo.com/inedo/proget-2025-14-major-updates-to-debian-feeds

      You definitely want to switch to Indexing Jobs when you connect to public repos.

      This can be set at operating system level (it's the %ProgramData% special folder) or in ProGet under Admin > Advanced Settings > LocalStorage.

      Anyway, this is something best brought up as a separate topic if you have follow-ups (if you don't mind); I'd hate to pollute this thread with Debian/indexing questions :)

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Error importing from Artifactory: The SSL connection could not be established, see inner exception.

      Hi @michael-day_7391 ,

      I can't really comment on what you're seeing in the Artifactory logs (i.e. [1] and [2]), but when an Access Token is specified, that token is sent on requests via a Bearer authorization header (unless Use Legacy API Header is selected). Otherwise, the Username/Password are sent via a Basic header. This happens on each and every request, regardless of whether it's a file download, api call, etc.
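      To illustrate the two header forms (this is the generic HTTP convention, not ProGet's code; the token and credentials below are made up):

      ```python
      import base64

      # Bearer authorization header, used when an Access Token is specified:
      token = "my-access-token"
      bearer_header = f"Bearer {token}"

      # Basic authorization header, used with Username/Password:
      credentials = base64.b64encode(b"user:pass").decode("ascii")
      basic_header = f"Basic {credentials}"

      print(bearer_header)  # Bearer my-access-token
      print(basic_header)   # Basic dXNlcjpwYXNz
      ```

      So whichever form Artifactory's logs complain about, it should be one of these two, sent on every request.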

      Probably just easier to disable authentication during the import if this keeps coming up.

      OCI Registries (i.e. what you're using for your Helm charts, as opposed to a regular Helm registry) are not supported, so you'd need to export those files and use disk-based import or something like that.

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Deleting and creating a signing key for a Debian Feed doesn't give a success feedback, also still signature v3 is used?

      Hi @frei_zs,

      ProGet 2025.12 does not support the PGP v3 format, and there's no way you can get it working. So, you'll need to upgrade to the latest version, which does support the format.

      Here's some more information on the changes:
      https://blog.inedo.com/inedo/proget-2025-14-major-updates-to-debian-feeds

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Feed Group and Feed

      Hi @mikael ,

      We plan to add this support via PG-3213 in an upcoming maintenance release -- perhaps Feb 20 if all goes well!

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Ability to show usage of (e.g. script) assets

      Hi @jonathan-simmonds_0798,

      Thanks for the suggestion! This has been a long-standing wish-list item, but it's deceptively complicated.

      The "current idea" is a feature called "raft analysis" that will create a list of all raft items that depend on other raft items. For example, a pipeline that references a script. Or, a script that calls a module, and so on. It could also detect warnings/errors and report them.

      Creating this list often involves opening thousands of files and "parsing" them, which is not a trivial operation... but we're only talking "a few minutes" in most cases. However, the main challenges arise with invalidating this list (many edits will cause that to happen), and then communicating the status of the rebuild to users.

      I'll add a note to our BuildMaster 2026 roadmap though and see if we can explore it again; the current focus is boring..... modernization (PostgreSQL).

      That said, you probably already noticed this, but... you should be able to see if a particular pipeline has an error (like a missing script) on the pipeline overview page. Not as nice, but it's something.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Deleting Debian feed and connectors didn't delete local index files

      Hi @parthu-reddy ,

      At this time, we don't have a disk cleanup procedure for local storage like this; we may add it in the future, but for the time being you can just delete them. The LocalStorage folder is ephemeral -- not quite "temp" storage, but the contents can be deleted. They will just be recreated next time it's needed.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Error importing from Artifactory: The SSL connection could not be established, see inner exception.

      Hi @michael-day_7391,

      ProGet is not designed to provide many details on network- or OS-level errors; that's where tools like Invoke-WebRequest come in. And it sounds like you've already discovered the root cause (failed certificate revocation check) that way.

      Anyway... when hosting ProGet on Windows, the Windows network stack will be used. So, if Windows is refusing to connect for whatever reason, then ProGet will also not connect. There's unfortunately no way around this, and we do not allow bypassing of SSL in ProGet.

      The good news is, once you get Invoke-WebRequest working, then you'll be able to connect. There's probably some magical registry setting out there that will help :)

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Nuget connector stuck in failed state ("'0x00' is an invalid start of a property name")

      Hi @mayorovp_3701

      Actually zero byte in position 1 looks like attempt to read UTF16-LE-encoded json as UTF8

      Oh that's a great observation! Yeah that sounds like a reasonable explanation. But still... how could that even be possible?

      It's not like ProGet is going to randomly swap an encoding like that, and it's not like NuGet is going to store .json files incorrectly.
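      For what it's worth, the symptom does match that theory: UTF-16-LE text has a 0x00 as every second byte, so a reader expecting UTF-8 sees a zero byte at position 1. A quick sketch:

      ```python
      # JSON encoded as UTF-16-LE: every ASCII character is followed by a 0x00 byte.
      data = '{"name":1}'.encode("utf-16-le")

      print(data[:4])      # b'{\x00"\x00'
      print(data[1] == 0)  # True -- a UTF-8 JSON parser chokes on this zero byte
      ```

      That would line up exactly with the "'0x00' is an invalid start of a property name" error.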

      As for experimentation, next time it happens:

      • remove connectors from the feed one at a time, to isolate which one is causing it
      • navigate to the JSON endpoints of the connector in question, to see if you see the bad JSON
      • try to identify a pattern of behavior that causes this
      • watch for HTTP access logs to see if you can find the exact URL that's being accessed at the time of the connector failure (assuming it's a self-connector)
      • be prepared to attach a MITM proxy to ProGet (Admin > Proxy)

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Nuget connector stuck in failed state ("'0x00' is an invalid start of a property name")

      Hi @mayorovp_3701,

      That's a really strange error; it's basically saying that, somehow, a 0x0 character found its way into some JSON returned by the API. This character is invisible, and you'd need to use a kind of hex editor or developer tool to even see it.

      I guess, in theory it could be inserted by some intermediate device (firewall, gateway, etc), but who knows at this point. I can't imagine how that could happen on either NuGet or ProGet, but that's the first place to start looking.

      I suspect the server restart is unrelated; that certainly wouldn't cause a random 0x0 unless there's something really broken with the computer.

      From here, you'll want to keep isolating the issue, and try to figure out which connector is "bad"

      • If it's NuGet.org -- the issue is most certainly a network/gateway that's doing that.
      • If it's ProGet -- it's likely some strange bug, where 0x0 got inserted into the database for a connector or feed or something. We saw that during some migrations, but it's really hard to track down.

      I would just keep experimenting. If it's related to a reboot, just stop/start the service. That should be the same.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Error importing from Artifactory: The SSL connection could not be established, see inner exception.

      Hi @michael-day_7391,

      That's just a generic SSL error, which as you may know, is happening at the operating-system level. That quick-connect screen won't provide details. There may be a redirect happening, but it's hard to say.

      It's odd that it would work from the web browser, but that's actually not uncommon. If you use Invoke-WebRequest, that should reproduce the error. If not, then stop the service and run ProGet manually (proget.exe run) so it's running as the same user/account.

      You should also be able to get a stack trace by adding a connector; that would be logged as a connector error.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Multiple deployment targets on same server

      Hi @koe ,

      This is definitely a problem that you can solve with BuildMaster, but before giving any kind of technical guidance, I'd like to understand the business processes.

      On first glance, this sounds like one of two scenarios:

      • Quasi-custom Software, where you create a customized build of a software application (perhaps bundled with their plugins, etc)
      • User-driven Deployments, where you maintain a single application but deploy a new version of that application based on user requirements (new feature they requested, bug fix, etc)

      Are either of those close?

      Whatever the case, can you describe the decision-making process or rationale that goes into "deploy a software release to either all production systems, all test systems or just a single one out of all these systems?"

      Are there different types of releases (e.g. a "patch" release of an old version)? Or is everyone "forward only, latest version"?

      BuildMaster is, of course, an automation platform - but more importantly, it's about modeling process and visualization. And when it comes to process, consistency is key - even when there are variations.

      We don't believe a decision like above is "arbitrary, and based on the whims of an application director", but there's probably some rationale that goes into it. So, with BuildMaster, our goal is to help get everyone on the same page about which process to follow for different releases.

      Anyway, how you model this will have a big impact down the line.

      Cheers,

      Alana

      posted in Support
      atripp
      atripp
    • RE: [Feature] Scope SCA permissions to Project or "Project Group"/Assign Project to Feed Group

      Hi @Nils-Nilsson ,

      Good news - this is actually on our ProGet 2026 roadmap.

      The general idea is to "reuse" Feed Groups -- I guess we'd call them "Feed & Project Groups" or something? Anyway, the projects would be grouped in the UI similarly, and you could scope project-based permissions to a group.

      We will try to get it as a preview feature in the coming weeks, assuming it can be done in low risk. It seems like this would be the case.

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Feed Group and Feed

      Hi @mikael ,

      Oh sorry, we decided not to refactor/rewrite the API after all -- and I guess we threw out all the "ideas" attached to that initiative as well. This was on our roadmap for several years, and we didn't realize there was a customer-facing request attached, which is how we forgot.

      Anyway, I've moved this back as feature request, and we'll look to add this to the existing API. It probably won't be that bad! Please stay tuned, hopefully we'll evaluate within the next couple weeks.

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Local index file update failure: The remote server returned an error: (403) Forbidden.

      hi @michael-day_7391 ,

      The correct connector URL would be:

      https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.31/rpm/
      

      I added that connector and could see/browse/download packages in the repository.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: AD integration not working in ProGet 2025.18

      Hi @michael-day_7391 ,

      I guess not? I've never heard of StartTLS, and no one else seems to ask about it -- so it's probably not worth investigating. LDAPS is what's popular, so it's probably easier to just go that route.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: An error duing cargo build

      Hi @caspacokku_2900 ,

      Yeah it's pretty weird. The errors are all over the place - not like in a specific database query or anything like that. This also isn't a sign of server-overload that we've seen.

      It's as if internal network connectivity is somehow breaking within a container? Or there's "something else" wrong with the internal PostgreSQL server? These are all "deep system" level errors in basically operating-system level code (drivers, etc).

      We've seen this with another user with really weird errors, but no idea on how to reproduce it. Maybe it's an "error reporting an error".

      As for the feed... there's nothing special about cargo vs other feeds from an API/usage standpoint. If anything, npm hammers the server a lot harder with its 1000+ package restores. And this has nothing to do with connectors.

      Unfortunately we don't have a lot to go on:

      • how about increasing the hardware?
      • can you try a different physical server?
      • could it somehow be the underlying operating system?
      • any patterns as to when this is happening (lots of traffic, etc.)?

      Any clues or consistency would help.

      Thanks,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Note on the instructions for downloading packages from Debian Feed

      Hi @geraldizo_0690 ,

      Thanks for the report! Sometimes bug fixes are a single character like this ...
      (screenshot: 1c7d2a43-f6ca-41b0-9853-65cdea7cb5b7-image.png)

      It'll be in the next maintenance release via PG-3205 :)

      Cheers,
      Alana

      posted in Support
      atripp
      atripp