    Inedo Community Forums

    Posts made by stevedennis

    • RE: Date in Debian feed release file malformed

      Hi @daniel-scati , thanks for the analysis!

      We'll get this fixed via PG-2635 in an upcoming maintenance release (hopefully 2024.2), which is targeted for next Friday.

      posted in Support
      stevedennis
    • RE: Proget as registry proxy

      Hi @aharalambopoulos_3520 ,

      ProGet works as a "Private Docker Registry" which seems to be different than a "Docker Hub Mirror".

      Last time we researched Docker Hub Mirrors, they seemed to be primarily intended to provide images to certain geographic regions (like China) where Docker Hub content would otherwise be restricted. They could also be used to set up a "local mirror" of Docker Hub, but in all cases, they seemed to basically just redirect traffic from the default docker.io URL - so they weren't intended to be used as "Private Docker Registries".

      In any case, Mirrors don't seem to be a good fit for ProGet; instead, if you wish to use nginx, we would advise "privatizing" and "locking" images using semantic tags, so that you can be assured that corp.local/images/nginx is a tested/safe image with tags you control.

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: What is the general design philosophy regarding permissions and visibility in ProGet?

      @jw excellent, thanks for the finds! I also added this to our "final touches" for ProGet 2024 to address all of these - all pretty easy fixes I think 👍

      posted in Support
      stevedennis
    • RE: ProGet feature request: Dedicated permission for marking packages as deprecated

      Hi @jw ,

      I'm afraid this is a bit too granular for us now, but it's something we can consider re-evaluating down the line, especially as we will likely want to add specialized permissions for projects, policies, etc. We expect that will happen later in the year, after ProGet 2024's new features get more adoption. We'll see if anyone else requests package-level permissions, etc.

      As for the Advanced, I put a note in ProGet 2024's final touches to address that. Honestly, I thought those were only on Debug builds, but clearly not... thanks for reporting!

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet SAML group mapping

      Hi @proget-markus-koban_7308 ,

      It's highly unlikely we would consider implementing anything Keycloak-specific, but if it's something that SAML supports - and something done by the major providers like Azure, Ping ID, Okta, etc. - we would definitely consider it. We just don't know much about it.

      We haven't done any further research since that post and we likely won't do any further research on our own, since only one user asked in a few years (and they ended up not needing it anyway).

      If this is something you'd be interested in exploring, it'd be best to collaborate and help us bridge the gap between SAML and ProGet.

      Here's some relevant questions/discussion from that topic:

      I'm not so familiar with SAML behind the scenes... do you know how "SAML group claims" work? For example...

      • Is it something that comes back in the XML response, or does it require a separate request?
      • What do the "group claims" look like? Like a list of human-readable group names?

      And then most importantly... what should ProGet do with such claims upon receipt? Treat the user as if they're in the group (kind of like LDAP groups), and allow permissions to be assigned against that group (like LDAP, but without searching)?

      The hardest part is going to be figuring out how to set this up in a SAML provider, document it, etc.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: [BM] /!\ Proget Integration broken - given key was not present

      Hi @philippe-camelio_3885 ,

      No idea I'm afraid; there's clearly some issue with unexpected data coming from your ProGet server that's not being validated.

      Can you share the results of /health (first API call) then /api/management/feeds/list (using the API token you specified)?

      With that we can hopefully spot something.
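      If it helps, here's a minimal sketch of how those two endpoints could be fetched from a script; the server URL and API key below are placeholders to substitute with your own:

```python
import json
import urllib.request

# Placeholder values -- substitute your actual ProGet URL and API token
PROGET_URL = "https://proget.example.com"
API_KEY = "your-api-key"

def get_json(path: str) -> dict:
    """Call a ProGet endpoint with the X-ApiKey header and parse the JSON reply."""
    req = urllib.request.Request(PROGET_URL + path, headers={"X-ApiKey": API_KEY})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # First call /health, then the feeds list (which needs the API token)
    print(json.dumps(get_json("/health"), indent=2))
    print(json.dumps(get_json("/api/management/feeds/list"), indent=2))
```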

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Build execution failure for .Net projects

      Hi @bbalavikram ,

      The framework option may not do what you think; first and foremost, 4.5 is not a valid option for framework. You need to use a "framework moniker" that's defined here:
      https://learn.microsoft.com/en-us/dotnet/standard/frameworks

      But keep in mind that the framework moniker must also be in your project file. The framework argument for dotnet simply selects which of the frameworks in your project file to build. It's really only useful for multi-targeted builds, which you probably don't have.

      It's possible that dotnet simply will not work with your project. This is unfortunately the case with many old projects. You can continue to try to "play" with your csproj files to try to get it to work (note: you can run the same dotnet commands on your workstation).

      If you can't get it to work, then you'll need to use MSBuild::Build-Project or DevEnv::Build. We do not have script templates for these, but you can convert your script template to OtterScript and then try modifying the script that way.

      Here is some information on build scripts:
      https://docs.inedo.com/docs/buildmaster-platforms-dotnet#creating-build-scripts

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet internal webserver HTTP->HTTPS redirection

      Hi @jw ,

      We'll add this as a "if time" on our ProGet 2024 roadmap... and hopefully it's as simple as just setting that flag. We'll update as we get closer to the final release.

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: Build execution failure for .Net projects

      Hi @bbalavikram ,

      This error message is coming from the dotnet command-line tool, and I think it has something to do with an old/legacy project file format. If you were to run the same command on your workstation, you would get the same error.

      From here, I would compare/contrast the .csproj files in the broken projects, and see if you can figure out what's the difference.

      Note that if you search "root element is visualstudioproject expected project", you'll see a lot of people have a similar error, but their solution is also to do similar things - i.e. edit the file and fix the format.

      Once you fix the project file, if you check the code back into Git, it should work the next time.

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: Debian Feed - Package can only be downloaded by apt once

      Hi @dan-brown_0128 , @gdivis ,

      We received a support ticket that seems to be a similar topic...

      I am using a ProGet Debian feed to mirror http://deb.debian.org/debian. I am using it on Debian containers running on my desktop. I have noticed that packages that I had installed already once are no longer available for a second installation. I have attached a screenshot that uses the command "apt update && apt install -y git" as example. According to the error message, the package "git" is unknown.
      However when looking it up in the ProGet feed, it is listed as a cached package as it was installed in an earlier execution of the container. I have then set a retention rule on the feed to delete absolutely all packages and run the job manually to clean all packages cached on the feed. When re-executing the exact same command, the package git and its dependencies are found and installed.

      It appears that cached packages are no longer returned to the Linux instance when executing "apt update". I can force a complete cleanup of the ProGet feed prior to an installation as a workaround but this is a bit tedious.

      So I wonder if this has something to do with how we're generating indexes, or something to that effect?

      I also asked the other user for input - but we're currently stuck trying to reproduce this, so any other insight would be helpful.

      Steve

      posted in Support
      stevedennis
    • RE: Debian Feed (New) connector errors

      Hi @dan-brown_0128 ,

      The error that you discovered is about all that would be logged on ProGet; you'll need to use a tool like ProcMon to monitor file system activity for the processes.

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: Debian Feed (New) connector errors

      Hi @dan-brown_0128 ,

      I'm afraid that's not really possible; the SQLite databases are stored in the package store, which needs to be shared/common across servers.

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: [BM] Getting 500 Error when accessing Global Pipelines

      Hi @andy222 ,

      This was fixed in 2023.11 via BM-3933; as a work-around, you can see the global pipelines by clicking "Shared (Global)" under any application > settings > pipelines.

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: Debian Feed (New) connector errors

      Hi @dan-brown_0128,

      Hmm, that's strange, and I can't imagine anything in ProGet that would yield the behavior that you're describing. We've been following this pattern for a few years now and haven't had any issues like this. Behind the scenes, here is what's happening:

      1. If the index.sqlite3 file exists, then ProGet attempts to open it
      2. If the file cannot be opened (corrupt, old schema version, etc.), then it is deleted
      3. ProGet then instructs the SQLite library to open a database with the file
      4. If the file does not exist, SQLite will create it

      There's really no difference between writing to a disk path or a UNC path - that's all handled at the operating system level. A few things to note...

      • A health check for a Debian2 feed simply downloads and updates the index file - i.e. it's the same thing that happens when you browse the feed in the UI.
      • The Connector Health Check is performed by the ProGet Service, which is a separate program from the ProGet Web application

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: Debian Feed (New) connector errors

      Hi @dan-brown_0128,

      ProGet uses SQLite databases as a cache for a variety of operations. I would first try deleting and then recreating the connector, to see if it makes any difference. That will create a new disk path.

      If ProGet does indeed have full control over that directory, you'd need to use some sort of file system monitoring tool like ProcMon to see what activity is happening when attempting to create/open the database file.

      It wouldn't surprise me if there is some kind of "malware" or "rootkit" installed on your server that is interfering with ProGet's operation. We see these cause all sorts of strange problems, especially when they start "quarantining" package files that users upload because they contain "dangerous executable code" like .dll files 🙄

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: Debian Feed (New) connector errors

      Hi @dan-brown_0128 ,

      Thanks for the stack trace and additional information; that definitely helps pinpoint the issue.

      Debian feeds require that the entire remote repository index is downloaded (which can sometimes take more than 10 seconds on slow connections), and this repository information is cached in a disk-based SQLite database. That error message is coming from SQLite, and it implies some kind of file access issue.

      The database is stored in «Storage.DebianPackagesLibrary»\C«connector-id»\index.sqlite3, and you can find the values under Admin > Advanced Settings.

      Do you see that index.sqlite3 file on disk? Any reason that ProGet wouldn't be able to write to that directory?

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: License not found in package

      Hi @v-makkenze_6348,

      Thanks for the report; I was able to reproduce it with that particular package. We plan to resolve this via PG-2587 in the next maintenance release, scheduled for next Friday.

      posted in Support
      stevedennis
    • RE: Debian Feeds and PGVC

      Hi @dan-brown_0128 ,

      Yes, but only once you've enabled ProGet 2024 Vulnerability Preview features (available in ProGet 2023.29+).

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Unable to "Request License key"

      Hi @bbalavikram ,

      Sorry about that! How strange... it looks like there was a weird issue with your MyInedo account, and it was missing some internal data. This led to an error in generating the key with your email address.

      I corrected your account, so please try again (either from BuildMaster or MyInedo).

      And let us know if you have any questions about BuildMaster too -- happy to help!

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: Enable Vulnerability Feature Preview... timeout

      Hi @v-makkenze_6348,

      We've rewritten this to not use MERGE, which should help; we plan to ship PG-2583 in the upcoming maintenance release on Mar 1.

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet product version api

      Hi @davidroberts63 ,

      That sounds like a neat use case :)

      The ProGet Health API will report the version of a ProGet instance, and you can use the Universal Package API - List Packages Endpoint to query our products feed for the ProGet package.
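      To sketch the first half of that, here's a rough example of reading the version from the Health API; the server URL and the JSON field name releaseNumber below are assumptions to verify against your instance's actual /health response:

```python
import json
import urllib.request

# Assumed value -- substitute your own ProGet instance URL
PROGET_URL = "https://proget.example.com"

def fetch_json(url: str) -> dict:
    """Download a URL and parse the body as JSON."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def installed_version() -> str:
    # The Health API reports instance metadata as a JSON document;
    # "releaseNumber" is an assumed field name -- check your own response
    return fetch_json(PROGET_URL + "/health").get("releaseNumber", "unknown")

if __name__ == "__main__":
    print("Installed ProGet version:", installed_version())
```

      From there, you'd compare that value against the latest ProGet package version returned by the List Packages endpoint on our products feed.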

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: "The SSL connection could not be established" and "Authentication failed because the remote party sent a TLS alert: 'DecryptError'" errors for unknown reasons.

      Hi @c-schuette_7781 ,

      This error is occurring on the remote server (i.e. nuget.devexpress.com). It can happen when a server is overloaded... so you're basically doing a DoS on DevExpress's server. You'll need to try again or contact DevExpress for help.

      I believe that DevExpress wrote their own, custom NuGet server. We've had several issues with it in the past. While talking to them, you should also suggest they switch to ProGet ISV Edition like some other component vendors 😉

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: Lots of "The operation has timed out." errors when ProGet tries to read other feeds from same server?

      Hi @c-schuette_7781,

      I have a NuGet "default" feed that is connected to the standard nuget.org feed and also includes connectors to three "local" NuGet feeds. I

      So basically, you're doing a Denial of Service attack against your ProGet server ;)

      When the NuGet client makes that FindPackagesById() request, ProGet needs to now make four separate web requests (nuget.org plus the three other feeds). Considering that NuGet client makes 100's of simultaneous requests for different packages, you're going to run into errors like this. Especially with multiple builds (multiple sets of 100's of requests / second).

      If you want to handle this level of traffic, you need to use load balancing. See How to Prevent Server Overload in ProGet to learn more.

      Otherwise, you need to reduce traffic. Switch to the NuGet v3 API, use connector metadata caching, reduce the number of connectors, set a Web.ConcurrentRequestLimit in the admin > advanced, etc.

      posted in Support
      stevedennis
    • RE: "Error: ProGet license violations detected." on the UI of the PRoGet admin page.

      Hi @pallavi-tarigonda_9617,

      Adding an entry to the hosts file should not cause a "blocked" connection, so it sounds like there's definitely something strange going on with your machine's configuration. I'm not sure how to troubleshoot this further; self-connectors work fine in our testing of Free Edition, and other users don't have this issue.

      If it helps, here's the code that ProGet uses to determine if a connection is local:

      public bool IsLocal
      {
          get
          {
              var connection = this.NativeRequest.HttpContext.Connection;
              if (connection.RemoteIpAddress != null)
              {
                  // If both addresses are known, "local" means they match;
                  // otherwise fall back to a loopback (127.0.0.1/::1) check
                  if (connection.LocalIpAddress != null)
                      return connection.RemoteIpAddress.Equals(connection.LocalIpAddress);
                  else
                      return IPAddress.IsLoopback(connection.RemoteIpAddress);
              }
      
              // No remote or local address at all counts as local
              if (connection.RemoteIpAddress == null && connection.LocalIpAddress == null)
                  return true;
      
              return false;
          }
      }
      

      I would explore the hosts file issue; the fact that a loopback (127.0.0.1) entry wouldn't work suggests there was some kind of data entry error/typo in your hosts file, but it's hard to say.

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: [PROGET] Double PVGC config

      Hi @philippe-camelio_3885 ,

      This behavior is expected, though a little confusing. The blue info box explains it a little bit, but if you add a second PGVC vulnerability source, then you'll see two entries for PGVC in your list. Those are separate sources that point to the same database. It's not recommended, and only acts as a work-around to allow for different assessments for different feeds.

      What are you trying to accomplish? If it's just basic vulnerability scanning, then I recommend doing the following:

      1. Remove everything
      2. Disable PGVC
      3. Enable PGVC

      Hope that helps,

      Steve

      posted in Support
      stevedennis
    • RE: Proget - Can't use the native API even with an API Key with Native API access

      Hi @m-webster_0049 ,

      The first thing I would try, to troubleshoot this, is switching to a very basic API key like hello. That just eliminates any typos, spacing, etc.

      Next, I would try specifying the API Key via X-ApiKey header (see docs) - just to see if you get a different error. It's possible there is a regression somewhere.

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: [PROGET] Double PVGC config

      Hi @philippe-camelio_3885 ,

      Can you share a screenshot of your Admin > Vulnerability Sources screen? It looks like you have three vulnerability sources configured.

      Note that we no longer recommend using OSS Index, and instead just having (one) PGVC enabled.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: "Error: ProGet license violations detected." on the UI of the PRoGet admin page.

      Hello,

      That makes sense; there's a few threads with this similar issue, so you may want to search and follow some of the troubleshooting steps.

      But basically, ProGet checks for local requests using HttpRequest.IsLocal, which basically just looks for 127.0.0.1. If it's not local, then a license violation is recorded.

      Try using 127.0.0.1 for your connectors; if that's not possible, and your server doesn't resolve proget.xxxx.com as 127.0.0.1, you may need to add an /etc/hosts entry mapping proget.xxxx.com to 127.0.0.1 so that requests come across as local.

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet SCA Cannot get NuGet vulnerability scanning to work

      Hi @jw ,

      There is one other setting, under SCA > Vulnerabilities > Download Blocking. Try setting that, then maybe you'll also need to run Package Analysis again.

      Let us know -- we can try to add a few more hints/clues in the UI to make this less confusing, at least as a temporary measure before tying this together better in the back-end.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet SCA Cannot get NuGet vulnerability scanning to work

      Hi @jw ,

      One thing to check -- is "vulnerability blocking" enabled on the nuget-proxy feed? That's currently how SCA Projects know whether a vulnerability issue is desired.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Download a NuGet package using cmd

      Hi @hashim-abu-gellban_3562,

      I see a few issues here...

      First, the URL you're using is not correct; the easiest way to find the URL is by clicking the "download package" link in the ProGet UI. It will look like this: /nuget/your-feed-name/package/your-package-name/your-version-number

      Second, you're downloading a file - so you want to use a powershell command like this:

      $url = "https://myprogetserver/nuget/mynuggets/package/mypackage/1.0.0"
      $destination = "c:\mypackages\mypackage-1.0.0.zip"
      Invoke-WebRequest -Uri $url -OutFile $destination
      

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet - Deleting a SCA release leads to error message

      @jw thanks for the bug report!

      We'll get this fixed in an upcoming maintenance release via PG-2491 :)

      posted in Support
      stevedennis
    • RE: Azure blob storage gives 500 internal server error

      Hi @carl-westman_8110 ,

      This is likely due to some authentication or other configuration issue with Azure Blob. You will see the specific error on the ProGet server, logged under Admin > Diagnostic Center.

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: End of Central Directory record could not be found.

      Hi @avoisin_5738 ,

      The error message means that an invalid zip file was received in the request; so the file can't even be opened. I don't know how Klondike works.

      If you're totally sure that you're uploading the nupkg file, I would try opening it as a zip file (e.g. rename it to .zip, use 7zip, etc.). I would expect a similar error.

      If it's a valid zip file, I would upload it via the UI; if that works, it means your script has some kind of issue - corrupting the stream, not sending the complete file, etc.

      Steve

      posted in Support
      stevedennis
    • RE: End of Central Directory record could not be found.

      Hi @avoisin_5738 ,

      The error "End of Central Directory record could not be found" basically means that the file is not a valid ZIP file. The most common case for this is pushing the wrong file (.nuspec instead of .nupkg, or a dll or .psm file). There are some other rare cases where the stream can be corrupted on the way to ProGet, but that's not common.

      Hope that helps,
      Steve

      posted in Support
      stevedennis
    • RE: Some API questions

      Hi @k_2363,

      We cannot get the endpoint ApiKeys_CreateOrUpdateApiKey to work. It seems that the JSON requires ApiKeyTasks_Table (IEnumerable`1). Unfortunately we cannot find what we have to provide here. If I look at the Stored Procedure, it seems that this cannot be filled with an API request.

      Hmm, it looks like you may not be able to use table-valued parameters via JSON. I don't know how easy that will be to add support for; one option is just to do direct INSERT statements into the ApiKeys and ApiKeyTasks tables. I'm aware of at least one other user that does that, since it was significantly easier to do a single statement that joined on another table in another database on the same SQL Server.

      It's not ideal, but this is a pretty rare use case.

      Would that work?

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Some API questions

      Hi @k_2363,

      However your explanation for [3] doesn't seem to be right in our case. We're using the 'Pull to ProGet' button to download packages from an Azure DevOps Artifact to our ProGet feed, however when the package hasn't been downloaded yet it shows in the Feed with a Antenna icon.

      Actually, this won't work with the Common Package API; that API works only with local packages, and does not query connectors. So instead, you'll need to download the package file from the NuGet endpoint (which does query connectors).

      You can find the download URL by looking in the front-end and generating a file URL from that. But it's basically like this for NuGet/Chocolatey packages: /nuget/<feed_name>/package/<package_name>/<package_version>
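      To illustrate that pattern, here's a hypothetical sketch that builds the URL and downloads the package file; the server name, feed, and package names are made up:

```python
import urllib.request

# Hypothetical server name -- the /nuget/<feed>/package/<name>/<version>
# URL pattern is the one described in the post above
PROGET_URL = "https://proget.example.com"

def nuget_download_url(feed: str, name: str, version: str) -> str:
    """Build the NuGet package download URL for a ProGet feed."""
    return f"{PROGET_URL}/nuget/{feed}/package/{name}/{version}"

def download_package(feed: str, name: str, version: str, dest: str) -> None:
    # Downloading via the NuGet endpoint will also resolve through connectors
    urllib.request.urlretrieve(nuget_download_url(feed, name, version), dest)

if __name__ == "__main__":
    download_package("internal-nuget", "MyPackage", "1.0.0", "MyPackage.1.0.0.nupkg")
```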

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: ProGet and MSSQL license

      Hi @w-repinski_1472 ,

      Based on your initial usage, I think SQL Server Express will suffice. The ProGet database is basically a package metadata index, and essentially stores things like package name, version number, and a variety of other data including the manifest file (.nuspec, package.json, etc.). It's maybe a few KB per package, and you'd need hundreds of thousands of packages to even reach 1GB of metadata storage.

      In your information you state that network connections are the bottleneck. I don't understand this completely in times when we have 100G cards, maybe I don't understand the scale on which ProGet is used in other companies.

      The issue is with the number of connections: a single server struggles with hundreds of expensive queries/requests per second. Running "nuget restore" or "npm restore" will hammer the repository with thousands of simultaneous requests, and many of those need to go to nuget.org or npmjs.org to be resolved. When you have multiple users and multiple build servers running these kinds of restores, you run into load issues.

      At about 50 users, a load-balanced / high-availability cluster starts to make sense. After 250 users, sticking to just a single server doesn't make a lot of sense (cost of downtime is expensive). Once you need a server cluster, then upgrading SQL Server would probably make sense.

      There's a big cost difference between a single server and a server cluster - in part the ProGet licensing fees, but also managing a server cluster is more complicated. Some organizations prefer to start with high-availability right away rather than worry about upgrading later.

      See How to Prevent Server Overload in ProGet to learn more.

      Hope that helps clarify!

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: Mixing ProGet Instances

      Hi @cimen-eray1_6870 ,

      Great questions; there's no problem having a second instance with ProGet Free Edition.

      The relevant restriction is that you can't use a Connector in ProGet Free Edition to connect to another instance of ProGet (either another Free Edition or your paid edition).

      Hopefully you can use your Maven feed as a proof of concept for implementing it in the main instance. Good luck!

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: Helm Chart installation

      Hi @ccordova_8628 ,

      ProGet requires SQL Server; PostgreSQL/MySQL are not supported.

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: Error from PyPI feed with "pip search"

      Hi @brett-polivka ,

      It looks like you've got something configured incorrectly; the endpoint should be something like:
      http://<redacted>/pypi/maglabs/simple

      Cheers.
      Steve

      posted in Support
      stevedennis
    • RE: SQL Database Error for ProGet

      Hi @priyanka-m_4184 ,

      It sounds like you have package statistics enabled; as you can see, this table gets really big over several years.

      If you aren't using this data and don't care about it, then just run TRUNCATE TABLE PackageDownloads and disable the feature.

      Another big table is often EventOccurrences, but usually that's much smaller.

      Here is a query that will purge data from those tables before 2023:

      DECLARE @DELETED INT = 1
      WHILE (@DELETED > 0)
      BEGIN
          BEGIN TRANSACTION
          
          -- PURGE OLD DATA
          DELETE TOP (10000) [PackageDownloads]
          WHERE [Download_Date] < '2023-01-01'
          SET @DELETED = @@ROWCOUNT
         
          -- PURGE OLD EVENTS
          DELETE TOP (10000) [EventOccurrences]
          WHERE [Occurrence_Date] < '2023-01-01'
          SET @DELETED = @DELETED + @@ROWCOUNT
          
          COMMIT
          CHECKPOINT 
      END
      

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: SQL Timeout When Starting Agent Service

      Hi @justin-zollar_1098 ,

      First, it looks like debug-level logging is enabled, so I would definitely disable that under Admin > Advanced Settings > Diagnostics.MinimumLogLevel. It should be 20.

      The most common reason for a SQL Timeout (i.e. if you google the problem) is a SQL query that is taking too long. That shouldn't happen in Otter, but it sometimes does, especially when there is a lot of data and some non-optimized queries.

      A SQL Timeout when starting the Otter Service is unusual, and it may not be related to SQL queries.

      The first thing I would check... are these queries actually taking that long to run in the database? You can use a tool like SQL Server Profiler or resource monitor, which will show you what's going on. You can then try running those queries directly against the database, and see if they're also taking an eternity.

      It's very possible that SQL Server isn't the issue at all. It could be network related - and we've even seen some bad Windows updates trigger some strange side-effects to the Windows wait handles.

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: DROP path UNC

      Hi @jerome-virot_4088 ,

      Linux does not support UNC paths, so you'll need to mount the appropriate machine and drive to a local directory under the Linux OS. Once this has been done, you can then map the volume in your Docker container, and configure the Drop Path in ProGet.

      Best,
      Steve

      posted in Support
      stevedennis
    • RE: SQL Timeout When Starting Agent Service

      Hi Justin,

      The Inedo Agent Service is generally something that you'd run on a remote server; if it's crashing on start-up, then the error message would be in the Windows Event Log. The most likely reason is insufficient permissions or invalid configuration.

      The error message that you're sharing is from the Otter Web application, and it's happening while trying to view the "Admin > Diagnostic Center". That's an unrelated problem... but it's also a bit unusual, as there shouldn't be more than 1000 entries in that table.

      The first thing I would investigate is the data in the underlying table. You can just run SELECT * FROM [LogMessages] ORDER BY [LogMessage_Id] DESC and peek at what's there.

      That won't help with the agent, but it will help troubleshoot other issues. There definitely shouldn't be a timeout there.

      Cheers,
      Steve

      posted in Support
      stevedennis
    • RE: Proget Service/Site Crashes with 500 Error

      Hi @jfullmer_7346 ,

      The ProGet Service (and Web App, if using IIS) will crash when the database is no longer accessible. Based on the error messages, that's exactly the case. The "good news" is that this isn't ProGet-related, so that at least gives you one less place to look.

      It looks like you're using SQL Server on the same machine ("shared memory provider"), but I'm not totally sure. If that's the case, then my guess is that SQL Server is crashing; you'd have to check SQL Server's event/error logs for that. It's very rare for SQL Server to crash, and I'd be worried that it's a sign of hardware failure.

      Beyond that, I don't have any specific tips/tricks on researching SQL Server connectivity problems, but if you search ...

      • "shared memory provider error 40" (it was in the error message I saw)
      • "SQL Server v??? crashing" (assuming this is the case, whatever your version is)
      • specific error messages from SQL server's event/error logs about the time it crashed

      ... you'll find lots of advice all over the place, since this could impact pretty much any software that uses SQL Server.

      Good luck and let us know what you find!

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: Delete asset by age

      @nathan-wilcox_0355 great to know!

      And if you happen to have a shareable script, by all means post the results here - we'd love to share it in the docs, to help other users see some real-world use cases.

      posted in Support
      stevedennis
    • RE: Problem with Vulnerabilities in docker with Clair

      Hi @w-repinski_1472,

      Unfortunately integrating with Clair v4 cannot be done with only a new plug-in / extension. It requires substantial changes to the vulnerability feature/module in ProGet, so it's something we would have to consider in a major version like ProGet 2024.

      Thanks,
      Steve

      posted in Support
      stevedennis
    • RE: NuGet Feed API Endpoint URL Returns 404

      Hi @cole-bagshaw_3056 ,

      The web interface and API are powered by the same program (the ProGet web server application), so if the UI is accessible, so is the API - as you noticed.

      In this case, the error message is coming from your reverse proxy (NGINX); I would check your configuration there, as something is misconfigured.

      The first and most obvious thing I would check is the hostname/port of your URLs. It's possible that you're accessing different hosts/domains; this is controlled by the X-Forwarded headers.

      Hope that points you in the right direction!

      Cheers,
      Steve

      posted in Support
      stevedennis
    Page 5 / 8