@Srinidhi-Patwari_0272 here is the api for healthcheck:
https://docs.inedo.com/docs/proget-reference-api-health
the url would be /health
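For a quick sanity check, you could hit it from PowerShell like this (the hostname is just a placeholder); it returns a JSON status document:
Invoke-RestMethod -Uri "https://proget.example.com/health"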
Hi @Srinidhi-Patwari_0272 ... it sounds like everything's working on the ProGet side, but I'm not familiar enough with Terraform/etc. to advise on troubleshooting that automation.
The domain name is abcd.efg.com and the port is 8624; it's very likely someone will need to open that port on the firewall as well.
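If your network team agrees, a sketch of that firewall rule would be something like this (run as administrator on the ProGet server; the display name is arbitrary):
New-NetFirewallRule -DisplayName "ProGet (TCP 8624)" -Direction Inbound -Protocol TCP -LocalPort 8624 -Action Allow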
Cheers,
Dean
Hi @v-makkenze_6348 ,
This is the first time I've seen a "link": true type entry in a lock file.
The docs for package-lock.json aren't very clear for what this signifies, aside from:
A flag to indicate that this is a symbolic link. If this is present, no other fields are specified, since the link target will also be included in the lockfile.
Not sure how a "link" entry like that gets added to a lock file, but I think we should just skip it? Basically, add code in that loop that does this:
// skip symlink entries ("link": true); the link target is also included in the lockfile
if (npmDependencyPackage.Value.TryGetProperty("link", out var link) && link.GetBoolean())
    continue;
What do you think?
Cheers,
Dean
@Srinidhi-Patwari_0272 said in How to automate ProGet installation via ansible so that ProGet and SQL EXPRESS gets added to D drive instead of default C drive:
How to make ProGet URL accessible to all?
If you're able to access ProGet while being logged into the server, then it means the web server is up and running without issues.
From here, you'll need to adjust Windows firewall settings and/or configure DNS so that others can access it. There are also certificate settings you'll possibly need. I'd check with your network team on this --- it's definitely not a change you can make inside of ProGet itself.
Hi @Evan_Mulawski_8840 ,
That particular bug may be resolved by running the patch script attached to this upcoming change: https://inedo.myjetbrains.com/youtrack/issue/PG-2484
As for retrying the migration: if there is a failed/incomplete migration, there will be a button on the ProGet root page (/) that allows you to retry it. This will open the /retry-migration URL, which I would recommend using instead of a database insert.
Best,
Dean
For reasons I don't fully understand, authenticating to the Docker API requires using a Bearer authentication token (Authorization: Bearer {token}).
Here is an example for how to get that token:
function GetDockerToken() {
    param (
        [string] $packageName = $(throw "-packageName is required. This is the namespace and image name. For example: library/my-container-image"),
        [string] $feed = $(throw "-feed is required"),
        [string] $actionToAuthorize = $(throw "-actionToAuthorize is required. This is the docker action to be authorized (pull, push, delete, etc)"),
        [string] $apiKey = $(throw "-apiKey is required"),
        [string] $progetBaseUrl = $(throw "-progetBaseUrl is required"),
        [string] $service
    )

    if ($service -eq "") {
        # This expects that $progetBaseUrl starts with "https://". If you are using "http://", change 8 to 7 below.
        $service = $progetBaseUrl.SubString(8, $progetBaseUrl.Length - 8)
    }

    # authenticate to the token endpoint with Basic auth ("api" as the username, the API key as the password)
    $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f "api", "$apiKey")))
    $response = Invoke-WebRequest -Uri "$progetBaseUrl/v2/_auth?service=$service&scope=repository`:$feed/$packageName`:$actionToAuthorize" -Headers @{ Authorization = ("Basic {0}" -f $base64AuthInfo) }
    if ($response.StatusDescription -eq "OK") {
        $token = ($response.Content | ConvertFrom-Json).token
        $token
    }
}
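And here's a hedged usage sketch (the feed name, image name, and URL are all placeholders); once you have the token, you pass it as a Bearer header:
$token = GetDockerToken -packageName "library/my-container-image" -feed "myDockerFeed" -actionToAuthorize "pull" -apiKey "my-api-key" -progetBaseUrl "https://proget.example.com"
Invoke-RestMethod -Uri "https://proget.example.com/v2/myDockerFeed/library/my-container-image/tags/list" -Headers @{ Authorization = "Bearer $token" }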
I know we plan to document all this better some day; for now, you can find more information on the Semantic Versioning for Containers page.
Dean
Hi @Stephen-Schaff,
Thanks for the detailed write-up; the behavior you describe makes sense, and made for very easy copy/pasting into PG-2478 :)
We anticipate getting this in the next maintenance release, scheduled for Sept 15
Best,
Dean
DockerImages_GetTags is part of the Native API, which is basically just a wrapper around the ProGet SQL database. That's fine to use, but you can also use the Docker API to list tags as follows:
curl -X GET https://proget-url/v2/feedName/repositoryName/tags/list
The Docker API will show you tags that come through a connector, whereas the Native API will only show local tags.
The Docker API does not provide a way to directly retrieve the operating system information of Docker images. However, you can infer the operating system information by examining the contents of the Docker image itself or by using external tools and scripts.
Here's an example in Python of how to do that:
import docker

def get_os_of_image(image_name, tag):
    # inspect the local image; "Os" comes from the image's config metadata
    client = docker.from_env()
    image = client.images.get(f"{image_name}:{tag}")
    os_info = image.attrs['Os']
    return os_info

# Example usage
os_info = get_os_of_image('my-image', 'latest')
print(f"Operating System: {os_info}")
Alternatively, you can also inspect the image layers directly. Docker image layers contain files from the image's filesystem, and you can look for specific files or patterns that are indicative of the operating system.
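For example, with the Docker CLI (a sketch; the image name is a placeholder), you can read the OS straight from the image metadata, or check a well-known file like /etc/os-release on Linux-based images:
docker image inspect --format '{{.Os}}' my-image:latest
docker run --rm my-image:latest cat /etc/os-release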
Best,
Dean
Hi @pariv_0352,
What version of ProGet are you using?
Perhaps this is related to this:
https://forums.inedo.com/topic/3596/proget-2022-6-the-hostname-could-not-be-parsed
Dean
@jw that's the correct column! Just the incorrect value being added :(
hi @msmith_2315,
Oftentimes this is related to a database backup that's occurring; if you change the database recovery model to SIMPLE, backups generally become much quicker.
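If you want to try that, something like this should work (a sketch, assuming the default ProGet database name and the SqlServer PowerShell module):
Invoke-Sqlcmd -ServerInstance "localhost" -Query "ALTER DATABASE [ProGet] SET RECOVERY SIMPLE"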
It could also be related to a scheduled job; you can see which scheduled jobs ProGet is running by going to Admin > Scheduled Jobs. There are logs, and you can see execution times, etc.
Cheers,
Dean
Hi @pmsensi ,
Unfortunately, with multiple connectors, "quirky" search results are to be expected, especially with broad searches.
The technical reason for this is quite complex, but has to do with how the NuGet search API is designed (unbounded, paged results with pages of indeterminate size), the fact we need to support both v2 and v3 APIs, and the need to aggregate multiple (often conflicting) search resultsets into a single resultset in a performant manner.
What are you trying to accomplish with the search API? It's mostly intended for Visual Studio.
If you already know the package name (microsoft.playwright.nunit), you should use the registrations API (e.g. microsoft.playwright.nunit/index.json).
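As a sketch in PowerShell (the feed URL is a placeholder, and this assumes the standard NuGet v3 service index layout):
# discover the registrations base URL from the feed's service index
$index = Invoke-RestMethod "https://proget.example.com/nuget/MyFeed/v3/index.json"
$regBase = ($index.resources | Where-Object { $_.'@type' -like 'RegistrationsBaseUrl*' } | Select-Object -First 1).'@id'
# the @id conventionally ends with a trailing slash
$reg = Invoke-RestMethod ($regBase + "microsoft.playwright.nunit/index.json")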
As for the missing fields, I can't answer why those aren't there - it could be that they're relatively new to the search API, or omitted for performance reasons.
Cheers,
Dean
@jimbobmcgee said in Apply-Template adding unexpected CR newline chars:
Without wanting to hold you to a particular date, what is the typical release timeframe?
We're targeting BuildMaster 2023 for Q3/Q4 and Otter for Q4. We don't have any dates beyond that at this point.
And it sounds like you understand quite well why we're not so keen to jump on making relatively easy changes - definitely a lot to consider. We have a lot of "product debt" like this and have learned to be a lot more cautious.
That said, I think as long as it's documented it's probably okay. That's the hard part of course (writing docs) - but there's a Note attribute that can be added to explain things as well.
cheers!
Hi @norbert ,
The easiest way to accomplish this is to configure two IIS Sites (different hostnames and/or ports), that point to the same directory - one that has Windows Integrated Authentication configured, and the other that doesn't.
This is not an uncommon configuration - a lot of clients (like npm, Docker) simply don't support Windows Integration Authentication. Unfortunately, Windows/IIS (which handles the authentication) does not support enabling/disabling it at a URL level, hence why you have to do the whole site.
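As a rough sketch (site name, port, and path are placeholders, and this assumes the WebAdministration module; your IIS team may have their own conventions), the second site might look like:
Import-Module WebAdministration
New-Website -Name 'ProGet-WinAuth' -Port 8625 -PhysicalPath 'C:\ProGet\WebApp'
# enable Windows Integrated Authentication on just this site
Set-WebConfigurationProperty -Filter 'system.webServer/security/authentication/windowsAuthentication' -Name enabled -Value $true -PSPath 'IIS:\' -Location 'ProGet-WinAuth'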
Cheers,
Dean
Thanks for sharing all of these; we'll try to reproduce/investigate and fix some of these soon.
[1] and [3] seem pretty easy to test/reproduce.
For [2], that's definitely strange. I assume that's a generic Git repository? I thought we had fixed that, but I wanted to confirm before investigating further.
For [4], that's a good question. The behavior kind of makes sense from a technical standpoint, but it doesn't make a lot of sense from a user perspective. I don't know if we want to change it outside of a major version; someone might be accidentally relying on it.
For [5], we haven't seen that before. Any details you can share would be helpful, such as the OtterScript plan you're using and so on. Is this the execution temp folder, etc.? We have several things to work around random Git corruption, but I want to see where this is before exploring more.
Hi @ForgotMyUsername , thanks for the bug report!
I logged this as OT-491, and it should be a trivial fix we can get in the next maintenance release :)
Hi @magnus-wiksell_6950 ,
I think you identified the underlying issue...
It seems like the internal counter includes individual package versions when counting and from that result shows the latest of each package.
... it's not clear to me, but are you saying that this is happening only in ProGet 2023? And that this wasn't a problem in ProGet 2022?
We just released a few days ago, and since there were some major changes to the underlying indexing system (i.e. database), there will be a few UI regressions here and there.
FYI: I haven't investigated this yet, but the "Feed View" is surprisingly complicated under the hood, since it does all sorts of aggregation with connectors, and combining local and remote packages into one view. Obviously this shouldn't be happening, but you can get a much simpler view under the "Packages" tab on the top navigation.
The logic is fairly simple, but in general the changes are:
- when a symbol package is pushed to /symbols, the symbols will be indexed and the file will be saved as .snupkg
- when downloading from /symbols, the .snupkg will be returned instead of the .nupkg file
- when a package is deleted, the .snupkg file will also be deleted
There's of course some details to work out, but with this approach, we're not treating .snupkg files as "NuGet Packages", so they won't show up in feeds.
@lm FYI, our plan is to create a second URL for pushing symbol packages, and then essentially save the files pushed to that URL as .snupkg, next to the .nupkg files on disk. This URL will be wired to SymbolPackagePublish and documented.
So hopefully this will make a better user experience, and then also help w/ the hash bits as well.
@lm said in Uploading snupkg using NuGet client:
If I'm not completely wrong, all that is needed is a configuration option at the first feed "Use Feed x as symbol feed" and as soon as that is configured, the index.json of the first feed can return the correct SymbolPackagePublish resource and everything should just work.
You're correct -- that would probably work, but we thought a "single feed approach" would be much better from a user-experience standpoint. This way, there's just one feed to configure, and you push package files and/or symbol files to that feed using the default NuGet configuration. The files are stored on disk, right next to each other.
This is something we're planning for ProGet 2023, but can probably do as a preview feature in December or January.
@marc-ledent_9164 that page should redirect to the manual license activation page if it can't automatically update after a timeout (usually a few seconds).
If it doesn't redirect, you can manually browse to /administration/licensing/manual-activation and handle it that way.
Hi @g-koessler_0127 ,
This is a networking error, and it basically means that the remote server (http://nexus-proget.???) abruptly disconnected. There is no additional information provided by the remote server, so the networking stack (Windows/.NET) provides only this information.
This is almost always related to server-level security settings (HTTPS required, TLS settings on the ProGet server, certificate trust, etc.), but it also might be related to network-level settings like url restrictions, proxy servers, etc.
Ultimately I'm afraid it's not easy to troubleshoot - this is typically something that the network operations team needs to help with, as they have the tools, are familiar with restrictions placed on servers, and can resolve them.
I'm hoping this at least points you in the right direction?
You can ignore this error; for whatever reason, the NuGet client unexpectedly terminated the connection, and as a result ProGet stopped writing bytes. Not really anything to worry about.
The diagnostic center isn't for proactive monitoring; it's more for diagnostic purposes. So unless users are reporting a problem, you don't need to check it.
@rmusick_7875 generally speaking, feed information is combined from chocolatey.org (and other connectors you configure), metadata stored in the database (the NuGet* tables), and cached/local package files on disk.
@p-boeren_9744 sure thing!
Never noticed it before, but I just fixed it as PG-2100 - and it'll go in 6.0.9 (Feb 25).
@cronventis would you be able to get us the following?
- the list of all pods across all namespaces (the output of /api/v1/pods)
If so, we can provide a secure link to let you upload them to us. Note the namespace list might be multi-paged, which means you may need to use the continue argument; https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#list-all-namespaces-pod-v1-core
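For example, with kubectl (a sketch; the page size is arbitrary):
kubectl get --raw "/api/v1/pods?limit=500"
# if the response includes a metadata.continue token, pass it back for the next page:
kubectl get --raw "/api/v1/pods?limit=500&continue=<token>"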
This would be the easiest for us to sift through, and you wouldn't have to run queries, etc.
If not, we'll think of something else :)
Hi @r-riedinger_1931 ,
That error indicates that the database code (Stored Proc) didn't get updated, or is somehow out of sync with the installed version. Is it possible that you're pointing to an older database?
You may want to run the installer again, just to make sure.
Dean
@harald-somnes-hanssen_2204 unfortunately not; a feed index provides details about only the latest version of a package. You need a separate query to find details about all versions of a package, and then you would need to do that query for each package in the index.
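For example, against an npm feed that per-package query would look something like this (a sketch; the feed URL and package name are placeholders), returning a document that lists every version:
Invoke-RestMethod "https://proget.example.com/npm/MyNpmFeed/some-package"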
Instead, it's better to just have a feed of approved packages that developers could use. This will let you also filter for other problems like quality.
We recently published some advice about this, though it's for NuGet feeds: https://blog.inedo.com/nuget/package-approval-workflow
It would work the same way for npm, though.
Hey @sylvain-martel_3976 !
Thanks for starting the discussion; I added it to our Other Feed Types list to hopefully get some more attention.
I've been following WinGet for a little bit now, and if I'm being totally honest, I don't get it.
My understanding of WinGet is that it's basically an index that simply points to installer files. In other words, there are no "WinGet packages" - the client simply goes to the index (a big ole yaml file from what I can gather), and downloads a MSIX installer. Is that what you've seen? Am I off here?
A lot of Chocolatey packages basically do this via a PowerShell script, and that's a big pain point for folks who try to use it for anything other than some quick/basic workstation setup. I've seen some bundle their whole installer with Chocolatey, though.
Can you tell me how / why you want to use WinGet? What are you currently doing without WinGet, and how will WinGet make it better?
Thanks,
Dean
TIL that PowerShell can use internal CLR generic reference type names like that! But really, please don't do that...
[System.Nullable``1[[System.Int32]]]
vs.
[Nullable[int]]
... the latter is much easier to read
Thanks @nicolas-morissette_6285 -- we've received it, and I've attached to our internal dashboard. I can also see which domain/company you're with, based on the Email -- and it looks like your company purchased a license, thanks!
I'll escalate the priority internally, and from here we'll just try to reproduce/research this (might take a day or two)... but we'll update as soon as we learn more!
@JonathanEngstrom Otter v3 is still really early in the release cycle (closer to what folks used to call "beta" than "stable"), and we're getting the bugs worked out. We also fixed a bunch of bugs (unintended behavior) while testing v3, so it could be an unexpected change.
Please let us know specifically what you find so we can look to get things working. I'm not really sure what you mean by "Ensure Server $servername".
The only thing you should need to do is change PSEnsure -> PSEnsureScripts to get those scripts working, but the single-script approach will be a lot better once you get the hang of it. We plan to put a lot of work into documenting this new integration, and coming up with tons of examples to help folks verify the configuration and configure their servers.
The main real advantage to the new PSVerify/PSEnsure mechanics is that you can write a single PowerShell script to both verify and configure servers, and more easily create/modify/test/share those scripts outside of Otter.