TIL that PowerShell can use internal CLR generic reference type names like that! But really, please don't do that...
[System.Nullable``1[[System.Int32]]]
[Nullable[int]]
... much easier to read
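For anyone curious, both forms resolve to the same .NET type, so a quick check in a PowerShell session shows the equivalence (a minimal sketch, assuming the long-form syntax above parses in your session):
# Both expressions resolve to the same .NET type, so this comparison returns True
[System.Nullable``1[[System.Int32]]] -eq [Nullable[int]]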
You can ignore this error; for whatever reason, the NuGet client unexpectedly terminated the connection, and as a result ProGet stopped writing bytes. Not really anything to worry about.
The Diagnostic Center isn't intended for proactive monitoring; it's more for diagnostic purposes. So unless users are reporting a problem, you don't need to check it.
Hi @jimbobmcgee ,
Thanks for all the details; we plan to review/investigate this via OT-518 in an upcoming maintenance release, likely within the next few two-week release cycles.
-- Dean
Hi @andreas-unverdorben_1551 ,
npmjs.org primarily serves static content and runs on massive server farms in Microsoft's datacenters.
Your ProGet server is much less powerful and does not serve static content. Not only is every request dynamic (authentication, authorization, vulnerability checking, license checking, etc.), but most requests (such as "what is the latest version of package X?") need to be forwarded to npmjs.org and aggregated with local data.
So, a much less powerful server doing a lot more processing is going to be a little slower ;)
Running ProGet in a server cluster will certainly help.
Cheers,
Dean
Hi @kichikawa_2913,
We see multiple connectors pretty often, and it rarely presents a problem.
The main downside comes in the overhead of aggregation; for some queries, like "list all package versions", each connector needs to be queried and the results aggregated. So it could cause performance issues for very high-traffic feeds - at least that's what we see on the support side of things.
However, if you plan on using a package-approval workflow, then it won't be a problem, as your approved-npm feed wouldn't have any connectors.
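To picture the aggregation overhead, here's a rough sketch (not ProGet's actual implementation; the package name and connector URLs are just illustrative) of what answering "list all package versions" means when a feed has multiple connectors:
# Illustrative only: each connector endpoint is queried, then the results are merged
$connectors = @('https://registry.npmjs.org', 'https://other-registry.example.com')
$allVersions = foreach ($url in $connectors) {
    $meta = Invoke-RestMethod "$url/left-pad"    # package metadata from one source
    $meta.versions.PSObject.Properties.Name      # version strings from that source
}
$allVersions | Sort-Object -Unique               # aggregated, de-duplicated result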
Hope that gives some insight,
Dean
@joel-shuman_8427 thanks for the heads up!
I just updated it
https://github.com/Inedo/inedo-docs/blob/master/CONTRIBUTING.md
Hi mwatt_5816,
BuildMaster does support "release-less" builds, though you may need to enable it under the application's Settings > Configure Build & Release Features, and set Release Usage to Optional. That will allow you to create a build that's not associated with a release.
It's also possible to do "ad-hoc" builds (i.e. builds with no pipeline), but we don't make it easy to do in the UI because it's almost always a mistake (once you already have pipelines configured). So in your case, I think you should create a secondary pipeline for this purpose.
-- Dean
Hi @kc_2466 ,
We'll get this fixed via PG-3050 as well; the button should be shown to non-admins.
-- Dean
Hi @kc_2466 ,
Those should not be displayed to non-feed admins; we'll clear that up with PG-3050 in the next maintenance release. In the meantime, I suppose you could just click the "X" for them ;)
-- Dean
Hi @mmaharjan_0067 ,
We'll investigate and see about adding this via PG-3035; it's probably returning some unexpected status. Will update if we run into trouble!
-- Dean
@mmaharjan_0067 we'll try to do this via PG-3034 as well
Hi @v-makkenze_6348 ,
We'll get this fixed via PG-3041 in the upcoming maintenance release; you can try out the prerelease container if you'd like (proget:25.0.4-ci.1), which is building now.
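If you want to try it, it's pulled like any other ProGet tag; something like this (adjust the registry/repository path to wherever you normally pull ProGet images from):
# Registry path may differ in your environment; the tag is the prerelease build mentioned above
docker pull proget.inedo.com/productimages/inedo/proget:25.0.4-ci.1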
-- Dean
Short answer: yes, and you'd probably see a bit better than a 15 TB -> 5 TB reduction with those artifacts. We usually see a 90-95% storage space reduction. Pair it with ProGet's retention rules and I wouldn't be surprised to see that drop to 500GB.
Long answer, file deduplication is something you want handled by the operating system (e.g. Windows Data Deduplication, RHEL VDO, etc), not the application. It's way too complex -- you have to routinely index a fileset, centralize chunks in a compressed store, and then rebuild those files with reparse points.
Maybe this wasn't the case a couple decades ago. But these days, rolling your own file deduplication would be like implementing your own hacky encryption or compression. Pointless and a bad idea.
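As a concrete example of letting the OS do the work, on Windows Server it's just a feature you enable and point at a volume (a minimal sketch; 'D:' is an example volume and this requires the Data Deduplication role service):
# Requires Windows Server's Data Deduplication feature; D: is just an example volume
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume 'D:' -UsageType Default
Start-DedupJob -Volume 'D:' -Type Optimization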
That being said, you may be using a tool by our friends at JFrog. They advertise a feature called "data deduplication", which IMHO is something between deceptive and a clever marketing flex.
Because they store files by their hash instead of file names, the files are automatically "deduplicated"... so long as it's the exact same contents. Which, in most cases, it will not be.
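To see why that only helps with byte-for-byte identical files: a content hash is the storage key, and any difference at all produces a completely different hash (quick illustration; file names and contents are made up):
# Identical content -> identical hash (stored once); any difference -> a different hash
'same content'  | Set-Content a.txt
'same content'  | Set-Content b.txt
'same content!' | Set-Content c.txt
Get-FileHash -Path a.txt, b.txt, c.txt -Algorithm SHA256 | Select-Object Hash, Path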
Here’s an article that digs into how things are stored in Artifactory, and also should give you an idea of their “file-based” approach: https://blog.inedo.com/proget-migration/how-files-and-packages-work-in-proget-for-artifactory-users/
As for the package count, 5M is obviously a lot of packages. It won't be as fast as 5 packages, but it probably won't be noticeably slower; there are lots of database indexes, etc.
Hope that helps.
-- Dean
@michal-roszak_0767 just a heads up we're a bit slammed with ProGet 2025 release but will respond soon!
@lukas-christel_6718 just a heads up we're a bit slammed with ProGet 2025 release but will respond soon!
@layfield_8963 no plans, as you're the first to ask :)
I don't know much about ARM/macOS builds... do you think it's just as easy as adding a new publish target (something like the sketch below)?
See our build script here:
https://buildmaster.inedo.com/applications/132/scripts/all?global=False
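If it really is just a matter of runtime identifiers, then presumably it'd look something like this in the publish step (purely hypothetical; the project name and flags are illustrative, and we haven't tested ARM/macOS targets ourselves):
# Hypothetical: publish the same project for additional runtimes
dotnet publish MyProject.csproj -c Release -r osx-arm64 --self-contained
dotnet publish MyProject.csproj -c Release -r linux-arm64 --self-contained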
Hi @parthu-reddy ,
Nothing to worry about - there are a few ways this can happen, and unless it's happening a lot and/or causing problems with your end-users / pipelines / etc., you can ignore the message.
-- Dean