Hi @janne-aho_4082,
This is most certainly related to heavy usage, even though it might not seem that way at first; the connectors are basically a load multiplier. Every request to ProGet is forwarded to each connector over the network, and when you have self-connectors, that's a double burden.
See How to Prevent Server Overload in ProGet to learn more.
Keep in mind that an npm restore will make thousands of simultaneous requests, often several for each package (to check the latest version, vulnerabilities, etc.). So you end up with more network traffic than a single server can handle; more RAM/CPU will not help.
This most commonly shows up as SQL Server connection errors, since SQL Server traffic goes over the same network. The best solution is to use network load balancing with multiple ProGet nodes; a rough sketch of that is below.
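Just to illustrate the load-balancing side, here's a minimal nginx sketch; the hostnames, port mapping, and server name are placeholders for your own environment, and keep in mind a multi-node ProGet setup also needs a shared database and package store.

```nginx
# Sketch only; node hostnames and ports are placeholders, not real defaults.
upstream proget_nodes {
    server proget-node1.internal:8624;
    server proget-node2.internal:8624;
}

server {
    listen 80;
    server_name proget.example.com;

    location / {
        # Spread client requests across the ProGet nodes
        proxy_pass http://proget_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```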
Otherwise, you have to reduce traffic. Splitting feeds may not help, because the client will then just hit all of those feeds at the same time. Connector metadata caching can significantly reduce network traffic, but it comes at the cost of outdated packages: you may "see" a package on npmjs (or another feed), but because the query is cached it won't show up in ProGet for several minutes.
Since you're on Linux, I would just use nginx to throttle/rate limit ProGet. The problem is peak traffic, so start with a max of around 200 requests and go up from there.
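Something like this is what I mean; it's only a sketch (the zone name, server name, and backend port are placeholders, and the `limit_req_zone` line belongs in the `http` block), so tune the numbers to your own peak traffic.

```nginx
# Per-client-IP limit of 200 requests/second; adjust rate and burst as needed.
limit_req_zone $binary_remote_addr zone=proget_limit:10m rate=200r/s;

server {
    listen 80;
    server_name proget.example.com;

    location / {
        # Queue short bursts instead of rejecting them with 503s right away
        limit_req zone=proget_limit burst=400;
        proxy_pass http://localhost:8624;
        proxy_set_header Host $host;
    }
}
```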
Cheers,
Alana