Failed to fetch package tags for <local package name> from registry.npmjs.org.
-
We have an npm feed with a connector to registry.npmjs.org, and in this feed there are a handful of local packages that have been pushed to the feed.
After we upgraded to version 2024.25 (official Docker image), we now see a lot of "Failed to fetch package tags for &lt;local package name&gt; from registry.npmjs.org." messages, and we have also had a number of errors, mostly from these runners:
Web: "An error occurred processing a GET request to http://<proget server>/npm/<feed name>/is-date-object: The operation has timed out."
ConnectorCacheCheckRunner;
ExecutionDispatcherRunner;
DropPathMonitorRunner;
FeedReplicationRunner:
"Unhandled exception: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding."

From time to time we notice that builds fail because packages can't be fetched from ProGet. Sometimes we see quite high CPU load (90%) and many more open database connections (up to 200) instead of just a handful. The issue doesn't seem to be fully related to high load, as it also happens under low load.
We don't see much load on the database, at most 10%.
Host CPU: 4 vCPU Intel Xeon Platinum 8259CL 2.50GHz
Host RAM: 16 GB

So, to the question: are the requests for local packages that are sent to registry.npmjs.org the cause of the other issues (too many connections through the connector)?
At most we have 20 build agents that may fetch npm/nuget packages and push npm/nuget packages and Docker images to ProGet (usually not all agents are in use, more like 2-4).
Do you think we may have an under-dimensioned machine and need more vCPUs? Or would you just recommend splitting the npm feed into one with the connector and another with only the local npm packages?
Thanks in advance for your reply; at the moment we're unsure what to do...
-
Hi @janne-aho_4082,
This is most certainly related to heavy usage, even though it might not seem so at first; the connectors are basically a load multiplier. Every request to ProGet is forwarded to each connector via the network, and when you have self-connectors, that's a double burden.
Keep in mind that an npm restore will make thousands of simultaneous requests, often multiple for each package — to check the latest version, vulnerabilities, etc. So you end up with more network traffic than a single server can handle; more RAM/CPU will not help.
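As an aside, if you control the build agents, npm's `maxsockets` config setting caps how many simultaneous connections each client opens to the registry. A minimal sketch of a client-side mitigation (the value 5 is just an example to tune):

```
# .npmrc on a build agent — cap concurrent registry connections
# (npm's default is higher, so this reduces peak traffic per agent)
maxsockets=5
```

This only reduces per-client concurrency; with many agents restoring at once, you'd still want server-side throttling.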
This is most commonly seen as SQL Server connection issues, since SQL Server also uses network traffic. The best solution is to use network load balancing and multiple nodes.
Otherwise, you have to reduce traffic. Splitting feeds may not help, because the client will then just hit all those feeds at the same time. The "connector metadata caching" can significantly reduce network traffic, but it comes at the cost of outdated packages. You may "see" a package on npmjs (or another feed), but the query is cached so it won't be available for minutes.
Since you're on Linux, I would just use nginx to throttle/rate-limit ProGet. The problem is peak traffic, so start with a limit of around 200 requests and go up from there.
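In case it helps, here's a minimal sketch of an nginx rate limit in front of ProGet. The zone name, rate, burst, hostname, and backend address are all placeholders you'd tune for your setup:

```nginx
# /etc/nginx/conf.d/proget.conf — sketch only; adjust names/addresses
# Track clients by IP; allow ~200 requests/second per client.
limit_req_zone $binary_remote_addr zone=proget_limit:10m rate=200r/s;

server {
    listen 80;
    server_name proget.example.com;   # placeholder hostname

    location / {
        # Allow short bursts above the rate before rejecting requests.
        limit_req zone=proget_limit burst=50 nodelay;

        proxy_pass http://127.0.0.1:8624;   # placeholder ProGet container address/port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Requests above the limit get a 503 by default, which npm clients will generally retry, so restores slow down rather than fail outright.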
Cheers,
Alana
-
Thanks for the reply. We'll discuss on our end whether there's room in the budget to scale out, or whether we'll just throttle the traffic.