It seems the package deployment endpoint is POST-only, but using the Native API you could use the Packages_GetPackageDeployments method. Down the line, we can certainly consider adding a GET method to the deployment API... if you can help us understand what you/someone would use that info for ;)
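In the meantime, here's a rough sketch of calling that Native API method over HTTP - this assumes the usual /api/json/«method-name» convention with an X-ApiKey header, and the parameter names are illustrative, so check the Native API reference built into your instance for the real signature:

```python
# A rough, unofficial sketch -- assumes the Native API's /api/json/«method»
# convention and X-ApiKey header auth; the parameter names below are
# illustrative, not the documented signature.
import requests

PROGET_URL = "https://proget.example.com"   # hypothetical instance
API_KEY = "secret"                          # a key with Native API access

resp = requests.post(
    f"{PROGET_URL}/api/json/Packages_GetPackageDeployments",
    headers={"X-ApiKey": API_KEY},
    json={"Feed_Name": "my-feed", "Package_Name": "my-package"},  # illustrative parameters
)
resp.raise_for_status()
print(resp.json())
```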

Posts made by stevedennis
-
RE: Proget: deployment usage api not failing but no usage logged
-
RE: Proget: deployment usage api not failing but no usage logged
Glad that helped @mcascone, I'll try to update the docs when I get a chance!
In older versions, that information used to be displayed more prominently, but unfortunately relatively few users seem to utilize that data, and several have moved to Package Usage. I think someone even built a usage scanner that queried the deployment history table, but we couldn't get much info beyond that...
We don't maintain that Jenkins plugin, but it is open-source, and if you're comfortable enough with Java, you might be able to add the required header fields?
But you may find upack.exe to be better, since it can "install" packages, which maintains a history on the server itself.
-
RE: Proget: deployment usage api not failing but no usage logged
Hi @mcascone ,
These are two different features...
Deployment records show which servers a package has been deployed to from that instance of ProGet at some point in the past. They are usually added when a package is downloaded and the GET request includes a special header or user agent string.
Package usage is more complex, in that it shows which servers/hosts a package (or container) is currently installed on, regardless of whether it was deployed from ProGet or not. It requires a PackageContainerScanner component, which is intended to bridge the gap between servers and packages. We currently have two scanners: one that interacts with Otter's API (which can return Docker, Debian, RPM, Universal, and Chocolatey packages across your servers) and one that uses the Kubernetes API (which covers just Docker images and Helm charts).
Hope that helps,
Steve -
RE: Proget: delete all versions of a package via API
@mcascone yep, that's a good way of putting it!
-
RE: No option for NuGet package path under Advanced Settings
Hi @kichikawa_2913 ,
I think it's this way for "historic reasons" - mostly, all the other feed types came later, and it seems no one ever changed these paths or noticed.
It's easy enough to make it configurable, but can you share your use case? Why do you want to use something other than a single root path with all of your packages?
Anyway, I added a feature for this, and we should be able to get it into the next maintenance release (PG-2006).
Cheers,
Steve
-
RE: Proget: delete all versions of a package via API
@mcascone sounds like some great progress!
I got a little confused at the combination of "keep only last 5 versions" plus the 30-day window. From what I've read in the docs, all conditions must pass for the item to be deleted. How do I set up "keep only the last 5 versions, but when nothing has been requested for 30 days, delete them all"?
We could definitely improve the docs in this area, and you're right that all conditions must pass for an item to be deleted. When you add the "keep only the last 5 versions" rule, there will always be, at a minimum, 5 versions of a package.
You could add a second rule, but it operates independently - more like an "OR" than an "UNLESS", I guess (see the sketch below). Perhaps you could adjust the time windows a bit?
- Rule 1. "Delete unused versions not requested in last 10 days." AND "Keep only last 5 versions"
- Rule 2. "Delete unused versions not requested in last 60 days."
I would look at your release cycles for guidance. For example, we release our products every two weeks, though maybe we'll skip a week every now and then. So, no -ci package will be needed past 1 month. And as you said, you can just rebuild if needed.
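To make the "all conditions within a rule must pass, but rules run independently" behavior concrete, here's a toy model in Python - this is just an illustration of the semantics described above, not ProGet's actual retention code:

```python
# A toy model of the retention semantics described above -- illustration only,
# not ProGet's actual implementation.
from datetime import datetime, timedelta

def rule_matches(version, all_versions, keep_last=None, unused_days=None):
    """All conditions inside a single rule must pass for a version to be deletable (an AND)."""
    if keep_last is not None:
        newest = sorted(all_versions, key=lambda v: v["published"], reverse=True)[:keep_last]
        if version in newest:
            return False  # protected by "keep only last N versions"
    if unused_days is not None:
        cutoff = datetime.now() - timedelta(days=unused_days)
        if version["last_requested"] > cutoff:
            return False  # requested too recently
    return True

def should_delete(version, all_versions, rules):
    """Rules operate independently: a version is deleted if ANY rule matches (an OR)."""
    return any(rule_matches(version, all_versions, **rule) for rule in rules)

rules = [
    {"keep_last": 5, "unused_days": 10},  # Rule 1
    {"unused_days": 60},                  # Rule 2
]
```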
-
RE: Proget: delete all versions of a package via API
Hi @mcascone ,
We don't have a single API method that deletes all versions of a package, but a foreach loop over the versions will do the trick!
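A rough sketch against a universal (upack) feed, assuming the Universal Feed API's versions and delete endpoints - the paths below are from memory, so double-check them against the API documentation for your ProGet version:

```python
# Sketch only: list every version of a package and delete each one.
# Assumes a universal (upack) feed; endpoint paths are from memory of the
# Universal Feed API docs, so verify them before running.
import requests

PROGET_URL = "https://proget.example.com"   # hypothetical instance
API_KEY = "secret"
FEED, GROUP, NAME = "dev-feed", "my/group", "my-package"
headers = {"X-ApiKey": API_KEY}

versions = requests.get(
    f"{PROGET_URL}/upack/{FEED}/versions",
    params={"group": GROUP, "name": NAME},
    headers=headers,
).json()

for v in versions:
    requests.delete(
        f"{PROGET_URL}/upack/{FEED}/delete/{GROUP}/{NAME}/{v['version']}",
        headers=headers,
    ).raise_for_status()
```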
I should add that I am doing this as a first stab at automatically deleting packages from a development feed when the corresponding branch in GitHub is deleted
I don't know the specifics/details of your use-case, but based on what I read, I'd recommend these guidelines:
- assuming: one GitHub repository, one project, one package you want to release
- use the same package name/group for all packages you create for this project, regardless of branch or development status
- create your "dev" packages using a prerelease version number, something like -ci.## (assuming you use CI to build packages)
- embed the commit ID and branch in your upack metadata file, for traceability
- if you want to see which branch the package was created from using the version number alone, add a +branch metadata label to the version number (don't do this for master)
- use repackaging and promotion to take your -ci packages to -rc to stable (and the desired feed)
- let retention policies automatically clean up the -ci packages
-
RE: Connector to ghcr.io no longer works
Hi @brett-polivka,
I haven't tried ghcr, but in my experience GitHub is really unstable on the API/integration side outside of their core source/Git hosting functions (e.g. Packages is notoriously buggy), so if your PAT is okay, then a GitHub-side issue is the most likely scenario.
The Connector Health Check for Docker uses the catalog API (/v2/_catalog), and the response should look something like this: https://proget.inedo.com/v2/_catalog. This endpoint is particularly buggy in other registries (especially ones that require authentication/authorization), so my guess is that GitHub introduced a regression in their implementation.
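If you want to see the raw response yourself, something like this can help - note that some registries accept basic auth on the catalog endpoint directly, while others (GHCR included, as far as I know) expect a token exchange first, so treat it as a starting point rather than a definitive test:

```python
# Quick, unofficial check of the registry catalog endpoint with your PAT.
# Some registries want a token exchange instead of basic auth, so a 401/403
# here isn't conclusive on its own -- but the raw status/body is useful.
import requests

resp = requests.get(
    "https://ghcr.io/v2/_catalog",
    auth=("your-github-username", "your-personal-access-token"),
)
print(resp.status_code)      # 403 points at auth/permissions rather than ProGet
print(resp.text[:500])       # the error body usually says why
```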
Another possibility: a 403 is access-related, so it could be something on your proxy side - please also check your Proxy settings, or something on your network side.
-
RE: Test Instance License for ProGet?
We have a lot of customers who maintain a separate test instance of ProGet; while upgrade testing is important of course, a dedicated testing instance also lets you evaluate new ProGet feature usage patterns (such as requiring promotion workflows, etc.), try out new tools (perhaps new version of visual studio, etc.), and conduct training on ProGet usage -- all without risking/disturbing your production instance.
To keep things simple from a licensing perspective, we just treat testing instances as separate instances (and thus require a separate license key). Many customers use a ProGet Free License for this, but of course not all the features are available. It's rare to see a second license be cost-prohibitive, especially given the labor/server costs involved with maintaining a testing instance -- even ProGet Enterprise customers will have full instances just for testing and even DR purposes.
You're right --- Active Directory is usually a pain point; sometimes our code changes (we try to never touch this), but also people want to change their AD configuration (move to LDAPS, etc.). With the wrong settings, you can lock yourself out of your instance. If it's an uncommon / one-off testing case, then a temporary trial license is fine for this.
-
RE: Buildmaster Version 7.0.9 (Build 2) keeps suffering database timeouts
What version did you upgrade from? That could help trace code changes.
Does this happen only on one page (i.e. /releases)? Then it's probably related to a bad query or unexpected data, but you only have 161 releases according to your query - not so many. An easy way to test that is by adding querystring params, for example /releases?ApplicationId=2&Status=Active. If it's easy to see queries on your RDS server, then we can see which query might be bad.
Does this happen only intermittently/randomly? If it's only you, then the problem probably isn't database/server load. And even with a ton of people, that's really rare. On very old instances with lots of retention jobs and years/gigs of data, doing an index cleanup is necessary, but I don't think that's the case here. The simplest/fastest thing I can think to do is reboot the BuildMaster server, and hope it goes away (maybe it's a weird underlying network stack thing).
Does this happen all the time (like nothing at all works on the website)? Then it's probably network-related.
Thanks,
Steve -
RE: Support for Homebrew in Proget
@yogurtearl_0881 this is the first I've heard of Homebrew... at first glance, it looks like a kind of open-source/hobbyist/alternative package manager for MacOS?
-
RE: Proget dows not activate on free license
Any updates on this issue? Were you able to resolve/fix this problem?
We've had another customer who reported a very similar problem (activation of a key on our new server, using sha256 vs sha1, causes an error in an old ProGet version). But we still can't reproduce it, and now I wonder if it's related to the operating system version, or a missing operating system patch.
Thanks,
Steve -
RE: pgscan not sending --consumer-package-source
Hi @jeff-peirson_4344 ,
That's definitely what it's intended for, so I think this must be a bug...
I haven't had a chance to reproduce or look any further, but I wanted to at least share the code ASAP...
https://github.com/Inedo/pgscan/blob/master/pgscan/Program.cs#L90
... so please feel free to take a look or fix it yourself, but we'll also look into it in the coming days! Just an FYI.
Thanks,
Steve -
RE: BuildMaster Configuration File Deployment
Hi @paul-reeves_6112 ,
This is by design; Configuration File Templates were intended to simplify maintenance of Configuration File Instances by combining common things into the template. Not saying it's the "right" design, but that's the use case.
However, we can definitely consider changing the behavior, to allow you to specify the default Template or a different Template when deploying.
Could I trouble you to share the configuration files (sensitive data redacted of course), so we can see the use-case better? We really want to document the configuration files better in the coming months, and having examples like this will help us tremendously.
We also want to make sure it's the best way to solve the problem. There is also the option of using those ASP/PHP-like OtterScript Snippets in Configuration Files. Maybe it would be better to put that in your template? I don't know...
Lots of options, and we want to make sure we document how/when to choose which ones.
Thanks,
Steve -
RE: BuildMaster Path Browser
Hi @paul-reeves_6112 ,
The [...] moving is a bit strange and it definitely shouldn't do that... but the remote browsing not working is a concern. I can see why it's not working, and it has to do with how new agents are constructed behind the scenes in v7.
This feature was originally removed from v7 due to UI/JavaScript challenges, but we ultimately brought it back... but clearly this part was overlooked in testing.
Anyways we'll get it fixed pretty quickly in BM-3716 - thanks for reporting it!
Thanks,
Steve -
RE: Allow login cookies on ProGet to persist across browser restarts
Hi @hwittenborn ,
I can definitely see how this could get annoying; this has been the design of our products for quite a while, mostly for simplicity/security reasons, and there hasn't been much demand for changing it. We're definitely open, so if other users are interested we'll certainly consider it.
Most administrators prefer "short sessions" (i.e. logged out at browser close or with no activity) for their own management simplicity; if we were to add "long sessions" (the "Remember Me" checkbox using persistent cookies), then administrators would need to worry about which users are "logged in", for how long, and terminate those sessions. And then we'd have to add all the features to support that capability - so nontrivial.
Best,
Steve -
RE: Support for Rust Cargo packages
Hi @brett-polivka,
I've added it to our Other Feed Types page, and linked this as the official discussion thread.
There's a lot of things to consider in developing a new feed type, but ultimately it all comes down to two things: (1) how much more value does this feature bring to our users, and (2) how many new licenses of ProGet would this feature sell.
The second question is where internal market research comes in, but we would love your opinion on the first question.
Here's a nice and simple way to help understand value: how much more do you suppose your company/organization would pay for this feature if it were available as a hypothetical add-on? $100/year? $1,000/year? $10,000/year? Etc. And why? What time is it saving, what risk is it mitigating, etc.?
The second part of the value equation is how much effort it will take, technically speaking. It's more than 15 minutes obviously, but is it 10 hours? 100 hours? Etc.
On the plus side, the package format seems to be documented pretty well. However, the registry API has a huge red flag:
The index key should be a URL to a git repository with the registry's index.
Does this mean their API is Git-based, and we'd need to first add private Git repository hosting to ProGet? And did they test it with private/authenticated Git repositories, or just their public (probably GitHub) repository?
-
RE: ProGet hosting in k8s or VM?
@saml_4392 said in ProGet hosting in k8s or VM?:
Btw, how do you measure the performance? Do you test it with a reverse proxy or use dotnet core directly?
In this case, it's mostly about what happens when the servers get overwhelmed with traffic - and it's largely anecdotal. So it could just as well be related to other factors, like other programs running on the servers running the Linux clusters, or SQL Client for Linux not performing as well, or who knows.
With Windows, people tend to set up dedicated or virtual servers with strict hardware provisioning.
-
RE: Feature Suggestion - Repackaging for Helm Charts
@Stephen-Schaff let us know! If that works, then we can just inject that in the package, and read it in on the history page, similar to this NuGet package:
https://proget.inedo.com/feeds/NuGetLibraries/Inedo.ExecutionEngine/100.1.1/history
-
RE: Security Suggestion: API Keys should be offered once
Hi @Stephen-Schaff ,
Thanks for the suggestion! So we had considered the "auto-generated one-time key" in our initial design, but decided against it for several reasons.
- This enables the less-secure "API Key Spreadsheet" antipattern: basically, people want to store the keys they generate, and since the software doesn't allow it, they go this route. It's the same problem as "change your password every 30 days" policies that create easier-to-guess passwords.
- This tends to create a lot of stale keys, due to a fear of deleting them. Administrators can "back up" API keys if they can see them, and add them back if cleaning up causes a problem.
- Allowing keys to be entered lets users more easily migrate from one instance to another - just do a DNS change, and all the old automations / old keys will work fine.
API Keys are similar to passwords, but different; passwords are entered by a human to log in, and in theory should only be "in that human's head" -- whereas API Keys are always entered somewhere (usually in a script).
In general, in ProGet, we recommend keeping API Keys as limited as possible. This simplifies things for everyone. There are no practical security problems in allowing all users to publish packages to a feed... you should be using package promotion to test/verify packages anyway.
-
RE: Feature Suggestion: Advanced Setting to force a user for API Keys
Hi @Stephen-Schaff ,
We already have "Personal API Keys" coming, so I think this will address those concerns.
The User Impersonation is really only used by the "Feed API" Endpoints anyways, and the only "problematic" endpoints might be "Feed Management API" (they could delete feeds) or "Native API" (they could do anything).
Otherwise, I think this would best be handled by training and documentation. Perhaps just a warning to put on the Create API Key page?
We've learned the hard way that advanced settings like this are really hard to support -- everyone forgets they exist (including support team).
steve
-
RE: ProGet hosting in k8s or VM?
Just some random thoughts...
We've had several customers try Kubernetes, but report that, in general, they found Windows to be easier to manage and maintain. "Anyone" can manage a product on Windows, but "only 1 guy" can manage things on Kubernetes, so he became the bottleneck for everything. In addition, the "Kubernetes guy" was constantly tweaking their clusters, so the "rules kept changing" for how ProGet was supposed to work.
We don't officially support high-availability mode on Kubernetes yet, for the exact above reason. We aren't "Kubernetes experts" and don't know the best configuration.
Performance-wise, Windows tends to perform better on the same hardware, especially during high network traffic. The Windows network stack must handle it differently, as it can pass requests directly, instead of doing "whatever Docker/Containers and NGINX does"? Hard to say though, that's only anecdotal.
I did see a thread on how to run ProGet on a k8s cluster - we don't have an official Helm chart, but this is a great way to get that sort of thing started.
cc/ @viceice @saml_4392
-
RE: Feature Suggestion - Repackaging for Helm Charts
Hey @Stephen-Schaff ,
We'd love to expand repackaging to more feed types! It's still a pretty new concept, and not too many folks understand the value of the process/feature.
The main thing that ProGet adds is the audit trail -- basically metadata added to the package itself, so the "history" lives with it. Plus this way, you don't have to reproduce that download/repack/reupload process over and over again.
Any suggestions on where to add this to the chart? We just inject a basic file into the NuGet package; maybe that would work in Helm as well?
Cheers
Steve -
RE: Proget Feature request - API key admin per user
thanks @scroak_6473 , I think I'm understanding better now. So how about an idea like this?
We make a new page ("Manage My Feed API Keys" or something) that lists the API keys in ProGet that...
- have only the Feed API permission
- Impersonate User = <current logged in user>
The <current logged in user> could delete any of these keys, or create a new key using a new page ("Create New Feed API Key" or something). There'd be only a single editable field on that modal page ("Description"), and behind the scenes, it would create keys like this:
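Something along these lines - the property names here are just illustrative, not ProGet's exact fields:

```python
# Illustrative only -- not ProGet's actual schema, just the shape of the
# key that page would create behind the scenes.
new_key = {
    "Key": "<randomly generated>",
    "Description": "<whatever the user typed>",
    "APIs": ["Feed API"],                          # the only permission granted
    "Impersonate_User": "<current logged-in user>",
}
```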
Think that would do the trick?
-
RE: Proget Feature request - API key admin per user
Hi @scroak_6473
Thanks for the feature request - we've had a few "informal, unofficial" requests for this over the years, but the discussions didn't develop, and there was never a clear reason why it would help - but this seems much clearer. And it might be an opportunity for improving the API keys in the upcoming ProGet v6.
Can you expand upon a few things?
The user then needs an API key to use with CI/CD tools on the users workstation.
What tools, specifically? What permissions does this API key have?
Administrator needs to login to admin portal, add the ad user with the correct "security", then generate an API key and "impersonate" the user that has just been setup.
How often does this happen?
If you impersonate an AD username, I would expect it to "just work" like logging in? Meaning, you don't need to set-up special privileges - just give the group? Is this not the case (maybe this is a bug)?
Can you give a specific case?
-
RE: How to set content type of asset with API?
@joshuagilman_1054 this is currently planned for 5.3.27 as PG-1934 (April 17) - we'll let you know if plans change!
-
RE: Azure Blob error when upload PyPi package
@brett-polivka unfortunately, we recently had to upgrade the AzureBlob libraries we were using due to some deprecated APIs, and it would appear there are some behavioral changes between versions.
This particular issue requires a ProGet change, which is already complete and will be coming in this week's maintenance release (PG-1921)
-
RE: How to set content type of asset with API?
@joshuagilman_1054 that module sounds really cool!
Anyways, this is definitely enough to work with from here, so we'll schedule some time to investigate/reproduce/fix. We may be able to get it into the maintenance release after next (scheduled April 17), as the next one is a bit close (Apr 2).
Stay tuned!
-
RE: How to set content type of asset with API?
Hi @joshuagilman_1054, just doing some information-gathering here, but this definitely sounds like a bug to me...
I uploaded a markdown file and set the Content-Type to text/plain and the ProGet server sets the content type to application/octet-stream. This seems to be the default value because I can upload the file via the web GUI and get the same result.
Can you give reproduction instructions (including the script/code)? We can use that to verify/evaluate/fix.
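If it helps, this is roughly the shape of the repro we'd be looking for - it assumes the Asset Directory API's PUT /endpoints/«asset-directory»/content/«path» convention, so adjust it to however you're actually uploading:

```python
# Rough repro sketch -- assumes the Asset Directory API's
# PUT /endpoints/«asset-directory»/content/«path» convention; adjust the
# URL/auth to match how you're actually uploading.
import requests

PROGET_URL = "https://proget.example.com"   # hypothetical instance
API_KEY = "secret"
URL = f"{PROGET_URL}/endpoints/my-assets/content/notes/readme.md"
headers = {"X-ApiKey": API_KEY}

with open("readme.md", "rb") as f:
    requests.put(URL, headers={**headers, "Content-Type": "text/plain"}, data=f).raise_for_status()

# Then check what Content-Type comes back on download:
head = requests.head(URL, headers=headers)
print(head.headers.get("Content-Type"))   # expecting text/plain; reportedly application/octet-stream
```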
-
RE: [ProGet] Manual database upgrade (docker, kubernetes)
@saml_4392 in theory it's fixed, but we didn't test it
Could you open a new thread about creating a Helm chart for ProGet itself? That'd be a great place to start that discussion and get community direction and feedback from users - and it would provide a nice opportunity to partner with organizations like Bitnami, who could help.
-
RE: [OTTER 3.0] Adding dependencies in role break server access
Confirmed! Thanks for the report; there is a regression with displaying the status of dependent roles.
I did a very quick code patch so the page wouldn't crash (don't display dependent roles on that page), but the real fix will come via OT-412 - we'll target it for the upcoming maintenance release, but this one might be a bit tricky b/c that particular code is a bit messy. We'll get it fixed.
-
RE: BuildMaster : Legacy URL Trigger editing
Ah, I see what happened. The logic to display that particular tab should be BuildMasterConfig.Legacy.ScmTriggers || BuildMasterConfig.Legacy.UrlTriggers. I fixed it!
FYI - we still ship new maintenance releases of BuildMaster 6.1, but no "new features". The main goal is to make sure we have everything needed to help with the migration to 6.2/7.0, so if there's anything we can add, please don't hesitate to ask.
-
RE: BuildMaster : Legacy URL Trigger editing
There's no problem in using the legacy features in BuildMaster 6.1.28, though they are a bit hard to find. Our goal was that no new users would see them, but existing users could still access them.
You can directly navigate to the page where these are displayed with /schedules under an application, so for example http://buildmaster/applications/4/schedules. You should also be able to find this page under Applications > MyApp4 > Settings > Legacy Build Triggers.
The Legacy Build Triggers link will only display when Legacy.ScmTriggers is checked (Admin > All Settings). This should be set if you have them (the legacy feature detector should have checked this, but it might not have).
These are also stored directly in the database; happy to give more insight if you want to look across all apps. I don't believe there was a way to do this globally (but you can for 6.2 / non-legacy).
-
RE: Feature Request - ProGet - Update vulnerability list if a package is not available in any feed
@harald-somnes-hanssen_2204 that's... a lot of vulnerabilities
I did just want to confirm this bit...
Manually by version, where the version is either removed entirely or unlisted .. very ineffective.
Are you referring to deleting/removing vulnerabilities, or the packages themselves? Are you using "retention rules" to clean-up the old chocolatey packages?
Basically, the feature idea I'm thinking of is essentially a checkbox on the Retention Rules that deletes the vulnerabilities when a package is deleted, if no other packages are using them. That seems like the easiest and most explicit way to manage this going forward.
-
RE: Support for Dart/Flutter pub.dev package repo
Thanks @bvandehey_9055 and @harald-somnes-hanssen_2204 !
It's likely not something we can consider in the next quarter (Q2) or so, but we can reinvestigate our roadmap as we approach Q3.
-
RE: npm missing sha512 integrity
FYI @alexjeffreys_3320 this is currently targeting 5.3.26, which is planned for April 2 (PG-1914) - we'll update if it gets tricky or problematic
-
RE: Connector to Azure DevOps NPM package feed not working
@nicolas-morissette_6285 ah, that's too bad, but not surprising. Microsoft doesn't like non-Microsoft products (or even products from other departments) using their products
Unfortunately we can't "see" the response or back/forth communication and have no idea what's going on.
You could attach ProGet to a proxy server (like Fiddler) by going to Admin > Proxy, and see what the back-and-forth communication is.
Or, easiest thing, just send us an access key to test with. We can just attach a debugger and see the responses, and maybe even fix the code or work around whatever nonsense they're doing.
If you email it to support at inedo dot com with the [QA-529] in the subject, we can then attach it to the internal issue, and an engineer can investigate.
-
RE: npm missing sha512 integrity
Thanks @alexjeffreys_3320,
So basically, npm install can fail when you transition projects from npmjs.org to ProGet. I can see that being inconvenient, since you'd have to redo the lock file.
In theory it should be an easy fix, right? I just wanted to understand why.
We can just use this thread as the feature request - I've already marked it as one internally, and it'll get evaluated from here. That'll take a few days, but we'll try to respond within a week about the status! Please stay tuned.
-
RE: Cant login in proget behind nginx proxy
@emejibka_8689 got it, thanks!
So, ultimately, you were able to get it working by adding that?
I'd love to get an NGINX guide together that walks through how to do this. What did your NGINX configuration end up looking like?
-
RE: Buildmaster Version 6.2.27 (Build 6): Error 500 saving Config files
@antony-booth_1029 we'll try to figure this out!
Did this just start happening? There were some minor changes/fixes to the config file pages in the last release I believe, and it may have impacted this.
Could you check Admin > Diagnostic Center to find the error message and stack trace from this?
-
RE: npm missing sha512 integrity
Hi @alexjeffreys_3320 ,
Why don't we put together a Feature Request for this?
To start with, how would adding .integrity metadata help? Is this for another tool integration? I understand that sha512 > sha1, but if you've already got a secure connection to ProGet, then you can already trust the packages you downloaded.
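For context, my understanding is that npm's .integrity value is just a Subresource Integrity string - the base64-encoded SHA-512 digest of the package tarball - so it's derivable from the package contents; a quick sketch:

```python
# Compute an npm-style Subresource Integrity string for a package tarball.
import base64, hashlib

def npm_integrity(tarball_path: str) -> str:
    with open(tarball_path, "rb") as f:
        digest = hashlib.sha512(f.read()).digest()
    return "sha512-" + base64.b64encode(digest).decode("ascii")

# e.g. npm_integrity("left-pad-1.3.0.tgz") -> "sha512-..."
```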
Thanks,
Steve -
RE: Feature Request - ProGet - Update vulnerability list if a package is not available in any feed
@harald-somnes-hanssen_2204 thanks for the Feature Request, definitely makes sense to me!
Just a couple things to consider / think about, on our end, from an implementation point.
- How do you remove internalized Chocolatey packages? Is this using the Package Retention Rules feature? Maybe it would make sense to add a deletion as part of this process.
- How many excess/outdated vulnerabilities do you have now? Handful? Dozens? Hundreds?
-
RE: Connector to Azure DevOps NPM package feed not working
I'm not super-familiar with ADO's npm feeds, but I don't think you're the only one who's had this issue, based on this old post.
Since that post was made, we added the "Authentication" drop down, and you should select Bearer for that. Then, keep Username empty.
If you do that, the Password field will be used as the Bearer token.
Can you try that, and let us know if it works?
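Separately, if you want to sanity-check the PAT itself outside of ProGet, something like this might help - the registry URL shape is from memory of Azure DevOps npm feeds, so adjust the org/feed names to match yours:

```python
# Quick, unofficial sanity check of the bearer token against the ADO npm
# registry -- the URL shape is from memory, so adjust org/project/feed names.
import requests

REGISTRY = "https://pkgs.dev.azure.com/my-org/_packaging/my-feed/npm/registry"
TOKEN = "azure-devops-pat"   # the same value you'd put in the Password field

resp = requests.get(
    f"{REGISTRY}/left-pad",                       # any package name will do
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.status_code)   # 200 means the bearer token itself is fine
```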
-
RE: [BuildMaster] Configuration files history is empty
@philippe-camelio_3885 this seems to be a regression, but an easy fix. It'll be fixed in the next BuildMaster (6.2.28), but you can grab the patch from here: https://inedo.myjetbrains.com/youtrack/issue/BM-3672
Just download that SQL file, run it in your environment, and they'll start showing again.
-
RE: Support for Dart/Flutter pub.dev package repo
Thanks @bvandehey_9055 !
Just a couple questions to help us understand...
- Do you intend to publish your own, first-party packages?
- Are there any other third-party package sources, other than pub.dev?
- Aside from your own packages, what are the main benefits of a private repository for third-party packages?
The recommended private server (https://pub.dev/packages/pub_server) seems to be discontinued and no longer maintained. Are there any other private package servers on the market?
This will help us figure out how to evaluate this!