Hey @mjc_4927
Since you're already using Docker, you can install a new version of the container...
proget.inedo.com/productimages/inedo/buildmaster:7.0.9-ci.2
Cheers.
Alana
Thanks for sending those over @RobIII
First, I want to confirm: are you using the "V3" API endpoint in ProGet for your VS2019 configuration? If not, please do that, since VS treats those APIs differently.
Otherwise, there's nothing off about the index files... the most likely scenario is that there is a problematic "translation" between VS2019's API call to ProGet, and ProGet's API call to GitLab. There are more than just those index files that are used.
It's most likely a bug on GitLab's end, in that they're doing something against the spec that just "happens to work" in VS2019. We struggled with a very similar bug with GitHub, but their timeline to investigate/fix it was crazy, so we just did a workaround (see PG-1932) and have it mostly working.
Ultimately, the customer decided not to use GitHub in this manner, because they found the workflow (outside of the API) painful and GitHub buggy. So there's that. They just publish packages to ProGet directly instead. You may find this workflow better as well.
Anyways... if you're already using the v3 API endpoint in ProGet, we will need to get this reproduced so we can attach a debugger and find the problem.
I understand it's not feasible to provide access... maybe it won't be so bad to set up a reproduction case with a public repository? We don't know how to use GitLab's packages features, so if you can help set that up, we can attach a debugger and get it tested.
Thanks
Hi @kenneth-garza_2882 ,
Based on your use cases, an extension wouldn't be a very good fit... and Webhooks are the way to go.
For ProGet (unlike BuildMaster/Otter), extensions are really specialized, and it's pretty rare for folks to make their own, but we're keen to help anyway.
For example, depending on what your vulnerability scanning software is, we'd love to help and even adopt the integration, so other users can use it.
The three main things we see are:
And the best way to do those is mostly to copy the examples/code we already have.
Let us know more details, and we can do our best to help!
Thanks,
Alana
Hi @paul-reeves_6112 , sorry for the slow reply; we talked about this in our engineering meeting, but then I forgot to reply to it.
We've seen this come up a few times, but it's not limited to offline installations. We are thinking the problem is on the Windows Service Control Manager side of things (it's not sending the right signal), but we can't seem to pin it down. Our general plan is to add a timeout or something, and hope that it addresses the problem.
Not sure what's going on with the threads though; it might be related to the in-process database we use (SQLite) doing some background things? I don't know...
Can you let us know if it keeps happening?
Hi @tomg_5321
Glad you could get some results using the non-filtered search. If there is more demand, we may implement it... but you're the first person to mention it.
However, based on your requirements, I don't think you should use search.
my actual underlying requirement is to test if a certain version (which may not be the most recent) of a certain package (by exact id) exists on the server, in an efficient way, for use over relatively low-performance network links (ie, not over local LAN)
In this case, I recommend you just do a HEAD request on this URL: /nuget/<feed-name>/v3/catalog/<package-name>/<version>.json
For example, in PowerShell: Invoke-WebRequest -Method Head https://proget.inedo.com/nuget/NuGetLibraries/v3/catalog/inedo.buildmaster.sdk/2.0.0.json
A 200 means the package exists; a 404 means it doesn't.
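If it helps, here's a minimal PowerShell sketch of that check; the function name and placeholder values are just for illustration:

# A minimal sketch, assuming the catalog URL pattern above; the base URL, feed,
# package, and version values are placeholders -- adjust them for your server.
function Test-ProGetPackageVersion {
    param(
        [string]$BaseUrl,    # e.g. https://proget.example.com
        [string]$Feed,
        [string]$PackageId,
        [string]$Version
    )
    # the catalog URL appears to use the lowercased package ID (as in the example above)
    $url = "$BaseUrl/nuget/$Feed/v3/catalog/$($PackageId.ToLower())/$Version.json"
    try {
        Invoke-WebRequest -Method Head -Uri $url -UseBasicParsing | Out-Null
        return $true     # 200: the exact version exists
    }
    catch {
        return $false    # 404 (or any other error): it doesn't
    }
}

Test-ProGetPackageVersion -BaseUrl 'https://proget.inedo.com' -Feed 'NuGetLibraries' -PackageId 'Inedo.BuildMaster.SDK' -Version '2.0.0'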
Cheers,
Alana
Hi @daniel-cooper_8892 ,
First, please note that you'll need to upgrade to v5.2, and then you can move to v5.3: please see Upgrading from ProGet v4.
Of course, you'll need access to your database. Please check out: Connect to SQL Server when system administrators are locked out
Once you do that, you can connect from SQL Management Studio, and then add yourself (and ideally the local Administrators group) as a sysadmin of SQL Server.
Cheers,
Alana
Hi @tomg_5321 ,
Looking at the code real quick, it seems that when performing searches against a local package store, ProGet does not parse the query in the same way nuget.org does (i.e. using filters).
Instead, if the search string (ID:mycompany in your case) exists within the Id, Title, or Tags, it's returned. I don't think that string exists, but mycompany must.
Can you try doing nuget search mycompany ... instead?
Thanks,
Alana
Hi @RobIII ,
Thanks for sending that over; we've reviewed it, but nothing is jumping out as a problem. I'm thinking it's something else...
Can we do this?
First, let's move this to a new topic. I'm going to lock this thread; can you reply as a new topic? The reason for this is so that we can track it much more easily.
Is it possible to share access to your GitLab repository? That's the easiest thing of course, but you say "on-prem" so maybe not so easy.
As an alternative, can you create a simple reproduction case using a public GitLab repository? This way, we can plug ProGet into it, attach a debugger, and figure out what's going on.
The most likely scenario is that GitLab is doing something wrong/weird against the spec, but it just happens to work in Visual Studio. That was the situation with GitHub.
Thanks,
Alana
@rbenfield_1885 Hello, just as an update: we'll make it so that automated scanners don't complain about this version in upcoming versions of ProGet (no ETA).
Hello,
Can you provide some specific reproduction instructions?
For example, I created a request like yours (but using Inedo and our ProGet server), and it produces results:
Thanks,
Alana
@maxim_mazurok this has been implemented for a couple years now -- however it is still considered an "Experimental feature" because the npm audit API is undocumented and npm, Inc. does not support third-party implementations.
Instead, ProGet will attempt to forward requests to the audit endpoint to npmjs.org or a connector. This may stop working if npmjs.org changes the API or blocks ProGet's requests. They've only changed APIs once, and we've promptly fixed it. In any case, you can configure the proxy URL in ProGet 5.3 by navigating to the Manage Feed page on your npm feed.
@Stephen-Schaff you got it! We'll try to get PG-1996 added in the next maintenance release. Should just be a pretty straightforward loop :)
Hi @Stephen-Schaff,
I'm not all that familiar with Helm charts, but on some feeds, we have a "Delete all versions of this package" on the Delete Package page. I can see that checkbox is not on the Delete Helm Package page.
I guess that would have the same effect? A checkbox that said "Delete all versions of this chart (i.e. delete entire repository)"?
What do you think? It seems relatively easy to do, and then we can fix the redirect thing as well.
Cheers,
Alana
Hi @paul-reeves_6112 ,
Of course, it should totally work in the use case you described; I just wanted to explain how it got missed during testing :)
In general, we recommend not including configuration files that contain environment-specific configuration in artifacts to begin with, or at least using include/exclude filters on the Deploy Artifact operation, and then deploying using the configuration files feature. But that's just general advice of course.
However, your use case sounds a little different, in that you want the configuration file deployed, but only the first time? I don't have a great solution to that, but the main downside I see is that engineers who manage/edit the configuration file (add values, etc), may be surprised when it's "not deploying" like they expect it to.
Config files can be tricky to balance as you probably already know :)
Cheers,
Alana
Thanks for investigating that; unfortunately I don't know the details on that.
We did a big update to MyInedo recently, and tested it with an old install of BuildMaster v3.5 and ProGet v2 - both requested a new key and activated it fine.
Maybe try requesting a new license key at MyInedo; that might help?
Hello; what version of ProGet are you using?
@internalit_7155 although v3.7 is really old, it should still work. The box on MyInedo is currently really teeny, so make sure that you expand the box and copy/paste the entire activation code. It's pretty long, and it should end with = or ==. (It's a base64-encoded string.)
Hi @jndornbach_8182 ,
Thanks so much for sending that over, super helpful!
Here's what I did:
- created a maven-feed
- .jar PUT request (changed URL)... jar uploaded
- .pom PUT request... pom uploaded
- maven-metadata.xml GET request... results were identical to yours
- maven-metadata.xml PUT request... no error

There are no ProGet software changes; from the headers, I can see you've since upgraded to ProGet 5.3.32, and I don't see anything that would remotely yield this difference - https://my.inedo.com/downloads/issues?Product=ProGet&FromVersion=5.3.33
The only thing I can think of is .NET5/Linux (you're running on Docker) vs .NET452/Windows (I was using IIS). We keep finding a few oddities like this, so I logged this as PG-1992 and escalated it. We should get it fixed for the next maintenance release (July 30), but a few folks are on vacation now, so it'll take a couple days to get through review.
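For reference, this is roughly the kind of metadata PUT I tested, sketched in PowerShell; the server URL, feed name, and group/artifact path are placeholders, and the /maven2/ endpoint pattern is an assumption -- use whatever URL your client is already sending:

# Placeholder values throughout; swap in your own server, feed, group, and artifact
$cred = Get-Credential
Invoke-WebRequest -Method Put -Credential $cred -ContentType 'text/xml' -InFile '.\maven-metadata.xml' -Uri 'https://proget.example.com/maven2/maven-feed/com/example/my-artifact/maven-metadata.xml'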
Hi @paul-reeves_6112 ,
Good catches! These will be hidden in BM-3720. Originally this was intended only to hide it from the navigation, but it should also be hidden here.
FYI; you can download this and the other fixes in 7.0.6-rc.3
Alana
Thanks @jeff-peirson_4344 ! That's great. I'm going to assign this to a senior engineer to review (looks good to me), and then we'll merge/publish a new version next week.
And of course if you have any other suggestions on how to improve, please let us know. This feature came directly from a user collaboration - https://blog.inedo.com/feature-not-a-bug
Hi @paul-reeves_6112 ,
This was a strange bug that came from a v7 compiler change and missed test case; if you do ReadOnly=false (the typical usage), it works fine.
Fortunately it was a very simple fix, and if you download InedoCore 1.12.2-CI.1, it'll be included in that. We will ship 1.12.2 with the next maintenance release as well.
You can set your extensions feed URL (Admin > Advanced Settings) to https://proget.inedo.com/feeds/PrereleaseExtensions, which will download prerelease versions from the extensions page.
Thanks,
Alana
Hi @paul-reeves_6112 ,
@apxltd requested that we implement this after all.
So it will be handled via BM-3719 in an upcoming maintenance release, and the icons will be proxied through the BuildMaster server.
Alana
Hi @adrian-leeming_9656 ,
The Inedo Hub uses the system's local package registry (C:\ProgramData\upack\installedPackages.json) to determine which packages/programs are installed. The easiest thing to do is just delete that file.
Here are the full troubleshooting instructions for fixing a broken install:
https://docs.inedo.com/docs/desktophub-troubleshooting
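For example, something like this (a sketch only; back up the file first, and the path is the default one mentioned above):

# Back up and remove the local package registry file so the Inedo Hub rebuilds it
Copy-Item 'C:\ProgramData\upack\installedPackages.json' "$env:TEMP\installedPackages.json.bak"
Remove-Item 'C:\ProgramData\upack\installedPackages.json'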
Cheers,
Alana
Hello,
The NuGet client only uses the -ApiKey you provide on certain requests to the -Source, specifically the PUT of the package file. Before doing that, it will query the package source using a GET, and in this case, the API key isn't sent.
The easiest solution is to allow anonymous view access to your feed. That simplifies configuration for all users of your feed.
Alternatively, you will need to add credentials to the PackageSourceCredentials section in the NuGet.config file. Note that this file exists in several places; please see Config file locations and uses.
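For illustration, here's a minimal sketch of that section; the source name, feed URL, and credentials are placeholders (and ClearTextPassword is shown only to keep the example simple):

<configuration>
  <packageSources>
    <!-- placeholder source name and feed URL -->
    <add key="MyProGetFeed" value="https://proget.example.com/nuget/internal-feed/v3/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <!-- this element name must match the source name above -->
    <MyProGetFeed>
      <add key="Username" value="api" />
      <add key="ClearTextPassword" value="your-api-key-here" />
    </MyProGetFeed>
  </packageSourceCredentials>
</configuration>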
Cheers,
Alana
Hi @paul-reeves_6112 , looks like this was a v7 regression due to some UI/library changes.
Hopefully we can create a new/better configuration file editor at some point soon :)
This is fixed in 7.0.6-ci.2; there haven't been many code changes since 7.0.5, so it's quite safe (even though it's pre-release). Here is how to install it:
https://docs.inedo.com/docs/desktophub-overview#prerelease-product-versions
Cheers,
Alana
@kichikawa_2913 glad it's working now, you seem to have found the issue.
FYI, you can also configure that value from the Advanced Settings page under Admin.
Hi @benjamin-soddy_9591 ,
ProGet does not have support for package deprecations at this time, and we haven't had anyone ask about them until now :)
It could be very tricky to implement, because once a package version is pulled or cached to ProGet, there is no "synchronization" of "server-only" metadata. The reason is, package versions are supposed to be immutable, and metadata self-contained.
The "unlisted" flag is similar. If a NuGet pulled/cached package becomes unlisted, then ProGet won't "know" about that, and therefore can't inform the client.
There doesn't seem to be any value in deprecating a first-party package (i.e. a library you create), as even the largest organizations should have communication channels for library authors and framework teams. Plus, they can monitor usage, and reach out directly to teams.
However, I can see why you'd want to know which third-party packages are deprecated... but as far as we've seen, very few package authors deprecate their packages. There seem to be better tools (vulnerability scans) to see if there are problem libraries.
So, if you could share some more specifics about what you're doing, why deprecation is helpful (with specific packages), etc., that would really help us understand the value of such a feature.
Hi @jharbison_7839 ,
We definitely need to update our release template documentation, but in the meantime...
The "Allow all variables to be added" option allows you to freely add/edit/remove variable from the Build/Release pages, regardless of restrictions set-up by the Release Template. When it's not checked, only templated variables may be configured.
This is not checked by default on new release templates, and prior to v7, there was no ability to edit variables using the template.
So, checked = just like it was in v6.2 and earlier, and not checked = template-based editing.
Alana
Hi @paul-reeves_6112 ,
I ended up fixing this in the extension after all - can you try 1.10.5?
We will still want to make the SDK change, but it was more involved than I thought. So the team wants to wait to do that later... especially since we could quickly fix it in the extension.
Cheers,
Alana
Hi @sbolisetty_3792 ,
Cron format is a bit tricky, but it "can do anything"... so it seemed like a "decent" compromise to use that for advanced/custom intervals.
But to be honest, even after reading the docs, I can never figure it out, and always use a generator like this:
https://freeformatter.com/cron-expression-generator-quartz.html
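To illustrate, here are a couple of sketches in the Quartz-style format that generator produces (double-check them against what the scheduler actually accepts):

0 0 2 * * ?      # every day at 2:00 AM
0 0/30 * * * ?   # every 30 minutes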
What cron expression do you use, and what time of day did you check the "next run time"?
Cheers,
Alana
Hi @paul-reeves_6112 ,
Thanks for all the detailed information; it was a bit of a rush due to the US holiday yesterday, and I didn't look so carefully. That patch to Scripting (1.10.4) seems to fix text mode, but the editor is clearly still broken unless you prepend the application name (which you figured out).
The application name should not be required like this, and I've logged a product change to fix this (BM-3707 and SDK-74). This will get fixed in the next maintenance release of BuildMaster, which is scheduled for Friday.
We may ship it sooner, because this regression is quite annoying.
Cheers,
Alana
@paul-reeves_6112 I'm thinking I didn't fix the regression after all
Can you go to Admin > Diagnostic Center and share the stack trace that resulted from the message? It should be logged in the case of an editor crash like that. I suspect it's related, but I only used a very simple/quick script test.
Hi @paul-reeves_6112 ,
I was able to track down this error; it's a regression introduced into v7, and only impacts the OtterScript editor.
If you upgrade the Scripting extension (Admin > Extensions) to 1.10.4, then it should work :)
Cheers,
Alana
Great, thanks @hwittenborn -- where are pull requests for our docs when we need them ;)
I'm not familiar enough with Linux or Debian to understand or make the second change you suggested (I'll leave it to @rhessinger), but I replaced the wget command with yours, which avoids creating that file.
Cheers!
Thanks for letting us know @hwittenborn! It wasn't very clear that those were two separate commands....
I updated the docs to hopefully be clearer, and broke these out into three steps.
@Stephen-Schaff said in API to apply an Alternate Tag to Docker Container Image:
Here is the PowerShell function that does both the promotion and the tagging (incase anyone ever needs something like this). Kind of a "do it yourself" repackaging. (Might be nice to have the Repackaging API support Docker container images someday).
Thanks for sharing this, I've added it to our Semantic Versioning for Containers docs page, I hope that's okay :)
And yes, I agree, it would be nice to make this an easier API call.
Thanks for the update @Stephen-Schaff -- And just to add to this, you should see "Note that you can use the following alternative tags to refer to this image:..." on the browse image page.
@jndornbach_8182 said in ProGet shows "(500) Server Error: Value cannot be null. (Parameter 'version')" when opening "Dependencies" tab of Maven artifact:
Is there a way that ProGet can cope with "parent-managed" dependency versions in future or, at least, does not throw this error?
For sure! It does look like this is supported in most places, but not on this page in the UI.
Easy fix... I logged this as PG-1968, and it will ship in the next maintenance release of ProGet, which is scheduled for next Friday. Or I can show you how to download a pre-release if you'd prefer sooner :)
Hi @jndornbach_8182,
Thanks for posting all the information, it's really helpful!
With that stack trace information, we can see where the error is occurring... and as the error says, the PUT body doesn't contain XML as the code expects.
if (context.Request.HttpMethod == "PUT")
{
    if (info.IsMetadataRequest)
    {
        if (info.HashAlgorithm == null)
        {
            var xdoc = XDocument.Load(context.Request.InputStream);
            var metadataElement = xdoc.Element("metadata");
            if (metadataElement != null)
            {
                var versioningElement = metadataElement.Element("versioning");
                if (versioningElement != null)
                {
                    var releaseVersion = (string)versioningElement.Element("release");
                    if (!string.IsNullOrWhiteSpace(releaseVersion))
                        await new DB.Context(false).MavenArtifacts_SetReleaseVersionAsync(feed.FeedId, info.GroupId, info.ArtifactId, releaseVersion.Trim());
                }
            }
        }
        else
        {
            // Don't need to actually save the hash since it's computed on demand
        }
        context.Response.StatusCode = 201;
    }
}
If I'm being totally honest, I don't understand why the code is doing what it's doing
But that doesn't mean I can't help fix it! Could I trouble you to attach a tool like Fiddler to capture the requests, and then send us the session file? We can then inspect that PUT request, and see what's going on.
You can send it to support at inedo dot com with a subject of [QA-597] -- but please let me know when you do, so I can dig in the box and find it.
Thanks,
Alana
Hi @Ashley_8010
In nearly all of our external retail websites, there are tonnes of connections to third party websites which inevitably drive up the loading times for our customers. Some loading times can be upwards of 10+ seconds (depending on huge sneaker drops or major releases)
It sounds like what you might need is a sort of web bundler/packager (perhaps like webpack)?
And then a Content Delivery Network on top of that, that can cache and serve static content faster than your server? We use Cloudflare ourselves.
ProGet does allow you to access individual files within a package (see the files tab on InedoLib for example), and download those files with a URL like this.
However, this feature was not designed to be a front-end web server, and I wouldn't recommend using it as such.
Cheers,
Alana
Hi @joel_6345 ,
Can I trouble you to read the latest version of the documentation?
https://docs.inedo.com/docs/proget-feeds-nuget-symbol-and-source-server
Actually, I just rewrote it, so I'm really hoping that this clarifies your question... and that it would have made things easier in the first place :)
thank you
Hi @Stephen-Schaff ,
I'm not sure about the API, but ProGet implements the Chart Repository API, and from a look at that, you should be able to just access index.yaml at the API Endpoint URL.
Here is some information about the format of that file: https://helm.sh/docs/topics/chart_repository/#the-index-file
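As a quick sketch, you could pull it down like this; the server URL and feed name are placeholders, and the path assumes index.yaml sits at your feed's API Endpoint URL:

# Fetch the chart repository index and skim it for chart names/versions
$index = Invoke-WebRequest -UseBasicParsing 'https://proget.example.com/helm/my-charts/index.yaml'
$index.Content -split "`n" | Select-String 'name:|version:'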
Can you give that a try and let us know what you find out? :)
Cheers,
Alana
Hi @coskun_0070 ,
Did you try setting an API key with setApiKey? Perhaps there's another way to suppress these messages?
While it's relatively easy to add privileges and features, we've learned the hard way that it creates a lot more work in the long-run from a support standpoint and user confusion. It's best to keep things simple.
I think this is something addressable via NuGet client configuration.
Cheers,
Alana
Hi @coskun_0070 ,
If I understand correctly, the issue is that you're having a hard time getting dotnet nuget push to work without granting anonymous access to view feeds?
In this case, I believe you need to add the URL as an authenticated package source. This will also let you download packages with dotnet nuget restore.
I believe this issue is resolved by dotnet nuget add source.
https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-nuget-add-source
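For example, something along these lines (a sketch; the feed URL, source name, and key are placeholders, and --store-password-in-clear-text is only shown to keep the example simple):

dotnet nuget add source https://proget.example.com/nuget/internal-feed/v3/index.json --name internal-feed --username api --password your-api-key-here --store-password-in-clear-text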
Cheers,
Alana
Hello, just confirming we received it! We can now begin the investigation of the problem from here.
@joshy-mathew_7277 what type of package is this?
HTTP is not a great protocol for pushing really large files like that, and things like IIS and middleware break down with huge requests. ProGet doesn't have any limits per se, but the 503 error means that something between ProGet and your browser is killing the request. It's usually IIS.
Most of the client tools (NuGet, etc.) do not support "chunked" file uploads (Docker does), which is why we recommend using a drop folder for these large files.
Is there an accepted Code of Practice for managing prerelease stuff?
Yes, the general rules to follow are these:
Do people publish prerelease stuff to a different feed and only post the released stuff to the more public feed? Or is it just expected that once the release is out that the prerelease builds are simply removed from the feed?
Yes to both
On our ProGet Instance, we use both patterns.
My advice for deciding which pattern to follow would be looking at the consumers of your feeds/packages (i.e. who uses your packages vs who publishes your packages).
Using a single feed that has release and prerelease packages requires more training for developers. If one of your developers accidentally uses a prerelease package and commits that, then it's going to cause problems. Even if you catch it before production, it will waste time and resources.
@kichikawa_2913 we've reviewed this a bit more as a team, and believe that there are a few things to consider here.
First, it's clear you have a large, "older" Active Directory. There is a tremendous amount of customization one can do to Active Directory, and do enough of it over the years, and you end up with an "older" directory that has layer upon layer of compatibility shims. You should see the crazy hacks they had to implement to get MSA accounts working...
It's important to note here that Microsoft Active Directory and .NET (Core) do not play nicely together. It took Microsoft over 10 years to get .NET Framework to work with Active Directory, and it's still really quirky. We've worked around as many of the bugs as we can.
Microsoft is still trying to get .NET Core on Linux to work properly with Active Directory, but it's got a very long way to go, as you're seeing. There are so many strange behaviors we've already had to work around (like methods sometimes returning strings, sometimes returning byte arrays) -- and these behaviors will just come with new versions of their library.
For all we know, the crazy "2 or so minutes" to do a login query could be a parsing error in their library? Or something timing out in their network code, but not logging an error? We saw all that in .NET Framework. In any case, we can only guess because their library provides no diagnostic information for us to use.
At this point, you should open a support ticket with Microsoft. This is the only way we can see to identify why you have a "2 minute or so" delay to run a basic login query.
The code we have is really, really simple. It follows all of Microsoft's guidelines, and it'd be super simple for you to reproduce the exact problem to show them. They have some advanced monitoring tools that can detect exactly what crazy stuff is happening between the query and Active Directory.
We can't do this, because we don't have access to your directory. It's unique to your setup and configuration, somehow.
Alternatively, just use Windows instead. It will be significantly cheaper in the long run (I suspect we've already burned through a lifetime's worth of licensing fees diagnosing this problem). Microsoft is still years away from even having the support infrastructure to help their customers with Linux problems, so any time there's a slight problem on Microsoft's end (SQL Server, .NET Core), it will be "DIY" -- which really means spending a lot of your time fixing quirks in their software.