Hi @kc_2466 ,
We'll get this fixed via PG-3050 as well; the button should be shown to non-admins
-- Dean
Hi @kc_2466 ,
Those should not be displayed to non-feed admins; we'll clear that up with PG-3050 in the next maintenance release. In the meantime, I suppose you could just click the "X" for them ;)
-- Dean
Hi @mmaharjan_0067 ,
We'll investigate and see about adding this via PG-3035; it's probably returning some unexpected status. Will update if we run into trouble!
-- Dean
@mmaharjan_0067 we'll try to do this via PG-3034 as well
Hi @v-makkenze_6348 ,
We'll get this fixed via PG-3041 in the upcoming maintenance release; you can try out the prerelease container if you'd like (proget:25.0.4-ci.1), which is building now.
-- Dean
Short answer: yes, and you'd probably see a bit better than a 15 TB -> 5 TB reduction with those artifacts. We usually see 90-95% storage space reduction. Pair it with ProGet's retention rules and I wouldn't be surprised to see that drop to 500GB.
Long answer: file deduplication is something you want handled by the operating system (e.g. Windows Data Deduplication, RHEL VDO, etc.), not the application. It's way too complex -- you have to routinely index a fileset, centralize chunks in a compressed store, and then rebuild those files with reparse points.
Maybe this wasn't the case a couple decades ago. But these days, rolling your own file deduplication would be like implementing your own hacky encryption or compression. Pointless and a bad idea.
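For example, on Windows Server, letting the OS handle it is just a few commands; here's a minimal PowerShell sketch (the volume letter is an assumption, adjust for your setup):
# Install the Data Deduplication feature (Windows Server only)
Install-WindowsFeature -Name FS-Data-Deduplication
# Enable deduplication on the volume holding your package store ("E:" is an assumed example)
Enable-DedupVolume -Volume "E:" -UsageType Default
# After the optimization job runs, check the actual savings
Get-DedupStatus -Volume "E:"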
That being said, you may be using a tool by our friends at JFrog. They advertise a feature called "data deduplication", which IMHO is something between deceptive and a clever marketing flex.
Because they store files by their hash instead of file names, the files are automatically "deduplicated"... so long as they have the exact same contents. Which, in most cases, they will not.
Here’s an article that digs into how things are stored in Artifactory, and also should give you an idea of their “file-based” approach: https://blog.inedo.com/proget-migration/how-files-and-packages-work-in-proget-for-artifactory-users/
As for the package count, 5M is obviously a lot of packages. It's not going to be as fast as 5 packages, but probably not that much noticeably slower; there are lots of database indexes, etc.
Hope that helps.
-- Dean
@michal-roszak_0767 just a heads up we're a bit slammed with the ProGet 2025 release but will respond soon!
@lukas-christel_6718 just a heads up we're a bit slammed with the ProGet 2025 release but will respond soon!
@layfield_8963 no plans, as you're the first to ask :)
I don't know much about ARM/macOS builds... do you think it's just as easy as adding a new publish target?
See our build script here:
https://buildmaster.inedo.com/applications/132/scripts/all?global=False
Hi @parthu-reddy ,
Nothing to worry about - there are a few ways this can happen, and unless it's happening a lot and/or causing problems with your end-users / pipelines / etc., you can ignore the message.
-- Dean
Hi @alex_6102 ,
It sounds like you're trying to do a kind of "manual" or "custom" installation on Linux? That's the impression I got when you mentioned, "using the file system provided..."
We don't support this kind of installation; you should really just run the Docker image like this:
docker run -d --name=proget --restart=unless-stopped \
-v proget-packages:/var/proget/packages -p 80:80 --net=inedo \
-e PROGET_SQL_CONNECTION_STRING='Data Source=inedo-sql; Initial Catalog=ProGet; User ID=sa; Password=«YourStrong!Passw0rd»' \
proget.inedo.com/productimages/inedo/proget:latest
As for the error... it seems that ProGet is failing to read the configuration file, which isn't used on a Linux installation. Instead, environment variables are used, since that's the Docker way.
-- Dean
Hi @darren-sloper_5044 ,
That's great to see you're giving it a shot! We'll fix this via PG-2992 in the next maintenance release, but in the meantime... it looks like the bug is in the download statistics procedure, so if you disable that feature on the feed it should work.
Let us know what else you find,
-- Dean
@michal-roszak_0767 said in Pull Maven artifacts - invalid version:
Next victim:
https://repo1.maven.org/maven2/org/springframework/data/spring-data-releasetrain/
jeez, what a mess!
Well, there goes any hope of using v[0-9]+ -- they just straight up use random strings as version numbers.
Open to ideas, but based on the URLs alone... I don't see a good way to identify one of these as an artifact and the other as a version of an artifact.
Hi @michal-roszak_0767 ,
ProGet does not support wildcard searching for artifacts.
Licenses are declared in the manifest (i.e. the .pom file):
https://maven.apache.org/pom.html#Licenses
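For reference, the license block in a .pom looks something like this (values are illustrative):
<licenses>
  <license>
    <name>Apache-2.0</name>
    <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>
  </license>
</licenses>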
You cannot really override this. If an artifact does not have a license, you will be given a chance to pick a license for it. If you ever need to change that, you'd have to go to SCA > Licenses > License Types and remove the package-specific assignment from there.
-- Dean
Typically, Deployment Targeting is done in the Pipeline:
https://docs.inedo.com/docs/buildmaster/deployment-continuous-delivery/buildmaster-pipelines#deployment-targets
This way, you don't need to put for server or anything in your script.
So, my guess is that your Pipeline is actually targeting the ProductionServer (but not running anything on it, except initializing the agent), but your script is targeting the BuildMasterServer.
-- Dean
@michal-roszak_0767 ProGet is not a file server. Metadata files like maven-metadata.xml are generated on demand, based on the artifacts stored in the feed.
After looking into this further, I'm afraid we simply can't support this artifact/package at this time. I don't really see a good path for supporting this without adding significant complexity and risk of breaking proper artifacts / versions.
The problem is that this version breaks the basic rules that Maven repositories need to follow. These rules resolve the ambiguity of determining what /com/google/javascript/closure-compiler/v20250407 means. For example, is it:
- version v20250407 of the com.google.javascript.closure-compiler artifact?
- the com.google.javascript.closure-compiler.v20250407 artifact?
artifact?I'm not even sure how this was uploaded to Maven central. I have no idea why the developers ignored the warnings that Maven spat out for legal version numbers. This has been a specification for like 20 years. Heck, here's a discussion from like 2008 on how the "must start with a digit" rules needed clarification: https://cwiki.apache.org/confluence/display/MAVENOLD/Versioning
If you encounter other artifacts like this, maybe we can consider some kind of very limited exception, but until we figure something else out this artifact version is simply not supported in ProGet.
I can't imagine there are many other artifacts like this, but let us know if there are.
-- Dean
Oh, that's a whole lot of connectors, and this is most definitely going to cause some performance issues.
Remember that ProGet needs to forward every request you make to all 10 of those servers, and some of those repositories will not respond very quickly. JCenter, for example, was deprecated/retired a few years ago, and I can't imagine it's performant at all, especially for things like metadata requests.
Maven is not a very patient client and will time out while waiting for ProGet.
There is really no way around this. You'll need to use fewer connectors.
-- Dean
Hi @michal-roszak_0767 ,
That error is unrelated to invalid versions being allowed/disallowed in ProGet. Maven is just saying that it can't find a snapshot (i.e. prerelease) version of a dependency.
'io.github.java-diff-utils:java-diff-utils' is a public library, published to Maven Central:
https://repo1.maven.org/maven2/io/github/java-diff-utils/java-diff-utils/
Snapshot versions are not published to Maven Central; I don't know where they're published.
In any case, you should not be using snapshots of public libraries unless you have a very specific need to; they're only intended for development of related public libraries and live in a special repository. Check with the devs behind that build about their intent... it might be a mistake?
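If you want to make sure builds never resolve snapshots from a repository, you can disable them in your pom.xml; a minimal sketch (the repository id and URL are illustrative):
<repositories>
  <repository>
    <id>proget</id>
    <url>https://proget.example.com/maven2/my-feed/</url>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
</repositories>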
-- Dean
@michal-roszak_0767 those are also invalid versions and you should never upload them to a feed (repository) directly; see https://docs.inedo.com/docs/proget/feeds/maven#snapshot-versions
The code has already been fixed and the maintenance release is scheduled for next week. We could get you a prerelease, but I don't think your developers are manually uploading artifacts using the Web UI?
If you want to upload artifacts with bad versions now, you can just use the maven client (e.g. maven-deploy) or just do a basic PUT of the file to the desired group/artifact-id.
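For example, a rough sketch of that PUT in PowerShell -- the feed name, group/artifact path, and API key are all assumptions here, so adjust for your instance:
# Assumes a Maven feed named "my-maven"; ProGet accepts an API key via the X-ApiKey header
$file = "my-artifact-1.0.0.jar"
$url  = "https://proget.example.com/maven2/my-maven/com/example/my-artifact/1.0.0/$file"
Invoke-WebRequest -Uri $url -Method Put -InFile $file -Headers @{ "X-ApiKey" = "«your-api-key»" }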
Hi @michal-roszak_0767 ,
It looks like the manual upload page does not consider that setting; we'll fix it via PG-2977 in the next maintenance release.
-- Dean
Hi @michal-roszak_0767 , @steviecoaster ,
Maven versioning is a total mess. v*** is indeed an invalid version per the spec, which requires that (among other things) versions sort lexicographically for determining the latest version.
It looks like they knew enough to use a string... but not enough to use a valid version number. Oh well. /rant
If you go to Manage Feed Settings, you can enable invalid versions in the feed. The message should make this more clear, so I will clarify that via PG-2977 in the next maintenance release.
-- Dean
Hi @steviecoaster,
The current package import tool uses the NuGet API. It's not really easy to use, and I'm afraid our API access code isn't really "portable" -- it's tightly integrated into Connectors, which are tightly integrated into Feeds, etc.
Here's a guide on how to query all published packages from a NuGet feed:
https://learn.microsoft.com/en-us/nuget/guides/api/query-for-all-published-packages
That said, next week we will be releasing a brand-new package importer that will connect to Sonatype Nexus, Artifactory, Azure DevOps, ProGet, GitHub, and GitLab. These use the provider-specific APIs and work much better than what we have now.
Functionally, it's the same, but now your credentials are stored in ProGet. You can also run it multiple times, and it will only import new packages. This is useful for cases where you are transitioning usage.
-- Dean
Hi @michal-roszak_0767 ,
Thanks for letting us know!
Unfortunately the Feed Wizard has some quirky behavior when configuring certain combinations of options, as you've noticed. We are actually in the process of rewriting the feed wizard to be a bit simpler (especially behind the scenes); hopefully it'll land in a new maintenance release in the next couple of weeks.
In the meantime, if you encounter these errors... I would just create the connectors on the Manage Feed page. Which it sounds like you've done :)
-- Dean
Hi @daniel-pardo_5658 ,
The UI-based package editor is intended for small packages, up to 50 MB or so. It looks like there is a platform-enforced limit of 2 GB. For now, you will need to download, edit, and re-upload.
That said, I switched the stream we're using to something that can accommodate larger packages, but I did not test it, so I really don't know if it will actually work on these packages. It'll be in the upcoming maintenance release via PG-2964.
-- Dean
Hi @scusson_9923 ,
Sorry, I misunderstood; I thought you were doing PSExec.
In this case you are just executing the pwsh process, so you need to figure out how to have that process return an exit code.
I don't know if that's the same as powershell.exe, but an AI told me "To return an error code from a PowerShell script, the exit statement followed by the desired error code should be used."
So I guess exit -1 or something like that?
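In other words, a minimal sketch (the script name and failing step are made up for illustration):
# myscript.ps1 -- hypothetical example
try {
    Do-TheWork    # hypothetical command; replace with your actual logic
    exit 0        # success
}
catch {
    Write-Error $_
    exit 1        # any non-zero code signals failure to the calling process
}
The calling process can then check the exit code (e.g. $LASTEXITCODE if the caller is also PowerShell).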
-- Dean
Hi @scusson_9923 ,
Exec (or Execute-Process) runs an operating system process, so that's where the return code comes in.
If you're doing something like a PSCall, then you can create output parameters/variables in the script, and then test those values.
-- Dean
Hi @mwatt_5816 ,
BuildMaster does support "release-less" builds, though you may need to enable it under the application's Settings > Configure Build & Release Features > Set Release Usage to Optional. That will allow you to create a build that's not associated with a release.
It's also possible to do "ad-hoc" builds (i.e. builds with no pipeline), but we don't make it easy to do in the UI because it's almost always a mistake (once you already have pipelines configured). So in your case, I think you should create a secondary pipeline for this purpose.
-- Dean
Hi @procha_8465 ,
I'm afraid we'll need a bit more information here to help you. There are a lot of changes between ProGet 5.2 and ProGet 2024 and between older/newer versions of the npm client.
If you can put together a reproduction case, ideally on a new instance of ProGet 2024, that'll help us determine what you're trying to do and how to help.
-- Dean
Hi @f-medini_8369,
Here is our documentation on how to use Prometheus:
https://docs.inedo.com/docs/installation/logging/installation-prometheus
-- Dean
If you haven't seen it already, I'd check out How Files and Packages Work in ProGet for Artifactory Users.
Long story short: you should consider a more modern approach than the Maven-based file/folder layout that Artifactory uses. Many have found a lot of success with Universal Feeds & Packages.
We don't maintain a TeamCity plugin, but it's really easy to create, publish, and deploy packages using the pgutil command line (i.e. pgutil upack create); see HOWTO: Create Universal Packages
Hope that helps!
-- Dean
Hi @uwer_4638 ,
The underlying issue is that you're making a "NuGet v2 API" request to your ProGet feed, which ProGet is then forwarding to connectors, and DevExpress does not support NuGet API V2.
So, you'll need to track down whatever is making that request (perhaps you're using an old endpoint URL), or simply disable the V2 API on your feed. This will cause an error on the client and should show you pretty quickly what's making that outdated call.
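For reference, the two endpoint formats look roughly like this (feed name assumed):
# legacy NuGet v2 (OData) endpoint -- what the outdated client is probably using
https://proget.example.com/nuget/MyFeed
# NuGet v3 endpoint -- what clients should be using
https://proget.example.com/nuget/MyFeed/v3/index.json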
-- Dean
@layfield_8963 great news, that was a really strange error. The UI upload uses some kind of chunking and file appending, so it sounds like that was it.
Hi @scusson_9923,
In this case, you'll likely want to select 5 as the type.
For reference, here are the valid types:
//
// Summary:
// Specifies the type of a raft item.
//
// Remarks:
// All types except BinaryFile and TextFile are "regulated" and only allow well-known
// files; for example,
public enum RaftItemType
{
//
// Summary:
// A role configuration plan.
RoleConfigurationScript = 1,
//
// Summary:
// [Unused] A script with .otter syntax is preferred
OrchestrationPlan = 2,
//
// Summary:
// [Unused] A script with .otter syntax is preferred
Module = 3,
//
// Summary:
// A script.
Script = 4,
//
// Summary:
// An unclassified binary file.
//
// Remarks:
// BinaryFiles cannot be edited in a text editor, compared, etc; they are always
// treated as raw content
BinaryFile = 5,
//
// Summary:
// A deployment plan.
DeploymentScript = 6,
//
// Summary:
// An unclassified text file.
//
// Remarks:
// TextFiles can be edited in the UI, may have lines replaced on deploy, and can be
// used as templates
TextFile = 7,
//
// Summary:
// A pipeline.
Pipeline = 8,
//
// Summary:
// [Unused] Feature is deprecated
ReleaseTemplate = 9,
//
// Summary:
// A job template.
JobTemplate = 10,
//
// Summary:
// Files used with build tools like Dockerfile.
BuildFile = 11
}
I'm not sure if TextFile (7) will work; in Otter it was intended to be used as a text template, which means lines in it are subject to replacement. You may need to play around and see what works.
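If it helps, a very rough sketch of what that call might look like in PowerShell -- the method name, parameter names, and URL here are assumptions based on the native API naming convention, so verify against your instance's API reference:
# Hypothetical Native API call; names below are assumptions, verify against your instance
$body = @{
    Raft_Id           = 1
    RaftItemType_Code = 5    # BinaryFile, per the enum above
    RaftItem_Name     = "my-file.bin"
    Content_Bytes     = [Convert]::ToBase64String([IO.File]::ReadAllBytes("my-file.bin"))  # assumed parameter name
}
Invoke-RestMethod -Method Post -Uri "https://otter.example.com/api/json/Rafts_CreateOrUpdateRaftItem" -Headers @{ "X-ApiKey" = "«your-api-key»" } -Body $body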
-- Dean
Hi @scusson_9923 ,
What is the file you are uploading? What happens when you upload through the UI?
Can you share the PowerShell snippet you're using?
What are you specifying for RaftItemType_Code?
-- Dean
@kc_2466 the "invalid feed type" will come up if you have a connector or feed that was created in a newer version of ProGet that wasn't available in an older version of ProGet... and you downgraded to the older version.
It looks like it's a connector, based on the URL. The easiest way to fix it is to just upgrade, delete it, then downgrade.
@kc_2466 thanks for the heads up, we'll target reviewing/fixing this for the following maintenance release (i.e. 2024.27 / Feb 21) via PG-2893
Hi @jimbobmcgee ,
Thanks for all the details; we plan to review/investigate this via OT-518 in an upcoming maintenance release, likely in the next few two-week cycles.
-- Dean
Hi @caterina ,
Looking over the code, I can see that; we will also fix that in the next maintenance release. The notifier should not be dispatched when there are 0 issues.
-- Dean
@cooperje_6513 that error means that the Windows service account user does not have access to the SQL Server database; you'll want to grant NT AUTHORITY\NETWORK SERVICE access.
You can do this with SQL Server Management Studio, or a script like this should work:
CREATE LOGIN [NT AUTHORITY\NETWORK SERVICE] FROM WINDOWS WITH DEFAULT_DATABASE=[ProGet]
CREATE USER [NT AUTHORITY\NETWORK SERVICE] FOR LOGIN [NT AUTHORITY\NETWORK SERVICE]
ALTER USER [NT AUTHORITY\NETWORK SERVICE] WITH DEFAULT_SCHEMA=[dbo]
ALTER ROLE [ProGetUser_Role] ADD MEMBER [NT AUTHORITY\NETWORK SERVICE]
Hi @caterina ,
I was able to find the issue; the correct value should be this:
$ToJson(%(
issues: @BuildIssues,
buildNumber: $BuildNumber,
releaseNumber: $BuildReleaseNumber,
projectName: $BuildProjectName
))
I've updated the documentation and also ProGet (via PG-2890), which will be in the next maintenance release. But if you use the above template, it should work.
-- Dean
@jimbobmcgee fantastic, we'll review/merge soon! thanks much :)
Hi @scusson_9923 ,
This seems to be an issue related to release vs. debug builds (works fine locally, but not when deployed to the server); we'll investigate and fix via OT-517 in an upcoming maintenance release (2024.4). Not sure on the exact schedule, but we're targeting the next couple weeks.
-- Dean
@jimbobmcgee thanks; we'll definitely investigate this later, but it will likely not be for a few months until we can do some "heads down" time with this stuff
Honestly I don't remember how any of this works, so I could be wrong and you need to do something else. It's clearly not something we document.
Our primary use case is more like this, uploading basic scripts:
https://docs.inedo.com/docs/otter/scripting-in-otter/otter-scripting-powershell
@jimbobmcgee thanks for reposting this here as well!
PSEval is definitely not meant for scripts like that, due to how the parsing works... but as you noticed, the $PSEval($ps) should work. We probably won't change this.
PSExec (i.e. Execute-Powershell) can capture variables, but not output streams. So something like this:
set $hello = world;
PSExec >>
    $hello = 'dears';
>>;
Log-Information Hello $hello;
Similar to my comments on the PSEval thread, this is another one of those rabbitholes that can break stuff, since the existing behavior seems to work for some users. So we're super-cautious about it.
It's likely we won't change these behaviors as they are "good enough" for the intended use case of Otter.
@jimbobmcgee that's a nice idea; we'd definitely be open to a pull request on those FYI
Based on other list/map functions, I think it'd be relatively straightforward and an easy pattern to follow:
https://github.com/Inedo/inedox-inedocore/blob/master/InedoCore/InedoExtension/VariableFunctions/Lists/ListRemoveVariableFunction.cs
Just not something we can focus on now though
@jimbobmcgee thanks for reposting this here as well
Working with PowerShell output variables is a very long-standing challenge, in particular because PowerShell has very inconsistent returns across Windows, Windows Core, and Linux Core. We made some fixes not too long ago, but it's still not perfect, and it was a ton of effort that ended up breaking some user scripts.
And as you probably saw poking around the execution engine code, a variable prefix ($, @, %) is more of a convenience/convention, and the prefix isn't really available in any useful context. I'm almost certain you can do stuff like $MyVar = @(1,2,3), for example. This is very likely not something we will want to change.
Keep in mind that OtterScript was never designed as a general-purpose scripting language, but as a lightweight orchestration script to run other scripts. So these limitations happen.
I will make a note of this on our long-term roadmap, but it's likely we won't take action on it due to sensitivity of all this and not wanting to break existing scripts.
@jimbobmcgee thanks for reposting this here
This is a long-standing behavior of Otter/OtterScript, and it's most likely not a trivial fix; it would involve updating the parser/execution engine (after remembering how it all works). So it's not something we'll do in a maintenance release for a community/free user, as I'm sure you'll understand.
However, now that it's here, I will link it to our internal roadmap planning for consideration in Otter 2025.
Each instance of ProGet needs its own file storage (S3 bucket, disk, etc). You definitely do not want to use the same storage across instances - that'd cause a major issue.
Some users have been tempted to use a combination of Database Replication + Disk Replication with third-party "external" replication tools, and learned the hard way that it's an absolute disaster once deployed. So don't try that :)
Basically the "external" (non ProGet) replication is way too slow to handle the type of traffic ProGet receives, and the files/database replication cycles are never in sync.
-- Dean
Hi @cooperje_6513 ,
It sounds like you had done some manual/IIS configuration, or perhaps an error occurred at some point. In any case, I would manually remove all components (service, IIS, etc.), and you can delete any registered installation in the c:\ProgramData\upack folder. Just keep your installed package files (typically c:\ProgramData\ProGet).
Then, just install fresh, pointing to the same database. Use the Integrated Web Server, not IIS. That's what we recommend now.
NOTE that the WebApp folder is no longer used.