Then we have something to look forward to. It could be that we'll have switched to API Keys by then, but that decision isn't mine to make.
Thanks for the help and for all the RCs we got to test.
@atripp thanks for your reply.
I guess most of the LDAP requests are for authentication. Would it be possible to cache the authentication request and the LDAP response for a short time? I think that could also improve things. But sure, the test environment is limited compared to the production environment; that tends to be a question of money.
About the keys: we had a number of keys of the old type that version 6 no longer lets you create, so we decided to remove all API keys that hadn't been used in a long while. Most API keys appeared not to have been used at all, so we ended up deleting all of them. It turned out that a couple of the keys were in frequent use (in production); I'm not sure whether it's something we have configured on our side or whether there is an issue in the logging of successful API key use.
There has been a desire for a centralized way of administering access, and there are also quite a few places to change, so switching from account credentials to API keys wouldn't happen overnight. But we may have to look into that path in case we notice issues in production.
@rhessinger thanks for your explanation of the user cache.
Until yesterday our test environment had been the faster one, even though it has half the RAM of the production environment, so we decided to set up the permissions for the LDAP users in the same way. The first test after that took around 60s to run. Directly after that we decided to clear the cache, and after that the test environment was back to 200s per run.
On the test environment the recycle interval for the app pool is 1740m and the idle timeout is 10m; I assume those are the defaults. I don't see anything about resets of the pool or crashes in the event log, so that doesn't explain why a pause of 30 minutes gives better times than two runs right after each other. Are there any recommended values for these settings?
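For reference, these are the IIS app pool settings I mean. They can be inspected (and, if needed, changed) with appcmd; the pool name "ProGet" below is just a placeholder for ours:

    rem show the pool's current recycling and idle-timeout configuration
    %windir%\system32\inetsrv\appcmd list apppool "ProGet" /text:*

    rem example: disable the idle timeout to rule out pool unloads between runs
    %windir%\system32\inetsrv\appcmd set apppool "ProGet" /processModel.idleTimeout:00:00:00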
Yes, the AD is in a different network, which adds extra latency, but there is no difference between test and production in this regard; both are set up the same way when accessing their ADs.
Regarding the time the same "test" took in production before the upgrade: on the last build before the upgrade, I can see it took 59s to install the same packages.
Something we noticed while testing in our test environment was that after running a "Clear Cache" in "Task/Permission", the time to fetch packages went up.
If we hadn't run tests for a while, the first run showed better times; running again just 20-30 seconds later, we were back to long response times.
The two tests were run at approximately 11:00 CET:
Running test 25 mins after previous test:
added 1406 packages from 1153 contributors in 60.835s
Running next test 20-30 sec later:
added 1406 packages from 1153 contributors in 210.565s
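For context, each of these runs is essentially a clean install against the feed, roughly like this (the registry URL is a placeholder for our feed):

    rm -rf node_modules
    npm install --registry=https://proget.example.com/npm/npm/

so every run resolves the same 1406 packages against the server.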
We updated InedoCore to 1.13.1-rc.11 on our production instance; so far, running the same test takes between 41s and 65s, without the slowdown we saw in the test environment.
added 1408 packages from 1153 contributors in 65.477s
added 1408 packages from 1153 contributors in 42.449s
added 1408 packages from 1153 contributors in 40.736s
added 1408 packages from 1153 contributors in 52.648s
No, there is no CEIP data from the production instance.
If there is anything you want us to test in the test environment, let us know.
We installed InedoCore 1.13.1-rc.11 in our test environment:
npm install: added 1406 packages from 1153 contributors in 215.136s
npm install --no-audit: added 1406 packages from 1153 contributors in 110.997s
(we had not tested --no-audit before; it skips the audit-report request npm sends to the registry after install, which seems to account for roughly half the time in this run)
Comparing with yesterday, it seems 1.13.1-rc.11 is slower.
We did upgrade the production environment; sadly, the npm install is still as slow as before, 600s. The setup is similar to the test environment but with twice as much memory.
What would be your recommended vCPU and memory for an instance that gets a good amount of package requests and also has maybe around 10 new packages pushed per hour (everything from NuGet packages to Docker images)?
@atripp We'll look into the metadata caching and see if it helps us.
The machine name has been posted in EDO-8231
Update: caching has been enabled the whole time for the npm and NuGet repositories.
If we pull the packages directly from npmjs.org it takes around 40s to install, and that was roughly the time it took on 5.3.11 before the upgrade too (when the packages are cached).
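For comparison, that ~40s baseline is a plain install straight against the public registry:

    npm install --registry=https://registry.npmjs.org/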
After the upgrade (first to 6.0.6, then 6.0.8), the same packages took just under 600s in prod (test should have been around 300s).
I'll make sure the machine name gets posted in the private ticket.
It seems that the 6.0.9-rc.3 + 1.13.1-rc.10 combination fixed the 404 issue when using always-auth=false. Sorry, no CEIP on that one, as I hadn't restarted the ProGet service and IIS when I ran that test.
@atripp if you have always-auth=true for npm, it authenticates on every request it makes to the registry; with always-auth=false it only authenticates from time to time (I'm not sure how often, or how it decides it's time to authenticate again).
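For reference, this is the client-side setting in question; it lives in .npmrc and can be set via the CLI (the registry URL below is a placeholder for our feed):

    npm config set registry https://proget.example.com/npm/npm/
    npm config set always-auth true

With always-auth=true, npm includes credentials even on plain GET requests, so every metadata and tarball fetch triggers an authentication check on the server; with false it only sends them when required (exactly when it decides to re-authenticate, I'm not sure, as noted above).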
Thanks for the heads-up on the change; we will test with rc.3 for now.
@stevedennis we turned on CEIP on the test instance and ran an npm install against the npm feed, which is just a cache of npmjs.org.
The license number is: CQAB8SJN-XXXX-XXXXXX-XXXXXX-46VCSHTF (ProGet Trial)
Start time: 10:13 CET
Result: added 1406 packages from 1153 contributors in 150.089s