Hello;
Can you share the errors you're seeing in the Admin > Diagnostic Center? Those will be logged w/ the 500 errors.
Thanks!
@nicholas-boltralik_3634 very sorry about the slow response; we were internally discussing and tracking this, but I guess I didn't update you here.
We will target a software change in PG-1814, and hopefully get this working in the next or following maintenance release.
Thanks for updating and letting us know the problem was related to a connection string!
@jyip_5228 FYI, as part of the ProGet 5.3.10 release, we shipped the ProGetCore container image as well.
You can follow the normal steps in the Linux and Docker Installation Guide to install/upgrade, but just use progetcore for the container instead of proget.
Aside from support for the Lucene-based Maven feed indexing (in progress), it seems to be feature complete. And of course, if there are problems, you can switch back to proget:5.3.10 or downgrade as needed (no database schema changes).
For example: docker pull proget.inedo.com/productimages/inedo/progetcore:5.3.10
@nuno-guerreiro-rosa_9280 just to let you know, as part of the ProGet 5.3.10 release, we shipped the ProGetCore container image as well.
You can follow the normal steps in the Linux and Docker Installation Guide to install/upgrade, but just use progetcore for the container instead of proget.
Aside from support for the Lucene-based Maven feed indexing (in progress), it seems to be feature complete. And of course, if there are problems, you can switch back to proget:5.3.10 or downgrade as needed (no database schema changes).
For example: docker pull proget.inedo.com/productimages/inedo/progetcore:5.3.10
hi @scroak_6473, just to let you know, as part of the ProGet 5.3.10 release, we shipped the ProGetCore container image.
You can follow the normal steps in the Linux and Docker Installation Guide to install/upgrade, but just use progetcore for the container instead of proget.
Aside from support for the Lucene-based Maven feed indexing (in progress), it seems to be feature complete. And of course, if there are problems, you can switch back to proget:5.3.10 or downgrade as needed (no database schema changes).
For example: docker pull proget.inedo.com/productimages/inedo/progetcore:5.3.10
Getting LDAP/LDAPS to work on Linux was a whole different problem to solve; the three major libraries (DotNetCore, Mono, Novell) all had separate and strange bugs. We'll be blogging about this, but for now, it might be a step in the right direction for addressing the problems you're seeing, at the very least.
Hello, lots of questions to address, but since you prefer to do a regular online upgrade, let's just suggest that :)
The [config] button in Inedo Hub lets you configure a package source; this should be https://proget.inedo.com/upack/Products. If you whitelist proget.inedo.com, then it should be fine.
@ludovic_2596 this sounds familiar, from an early version of 5.3, but I can't find a report for it. It seems to work fine in the latest version for me... can you try the upgrade?
The NuGet API does require absolute URLs to be returned, hence this behavior. And since the IP address changes (we check for local requests using HttpRequest.IsLocal, which basically just looks for 127.0.0.1), the license violation triggers.
Instead of using the Web.BaseUrl property, you could configure your proxy to use the X-Forwarded-* headers. I attached the code for how the URL is constructed in ProGet, and how these headers work.
var baseUrl = ProGetConfig.Web.BaseUrl;
if (string.IsNullOrEmpty(baseUrl))
{
    var request = HttpContextThatWorksOnLinux.Current?.Request;
    if (request != null)
    {
        var requestUrl = request.Url;

        // prefer the X-Forwarded-* headers set by a reverse proxy
        var forwardedHost = request.Headers["X-Forwarded-Host"];
        int? forwardedPort = AH.ParseInt(request.Headers["X-Forwarded-Port"]);
        var forwardedProtocol = request.Headers["X-Forwarded-Proto"];

        var host = AH.CoalesceString(forwardedHost, requestUrl.Host);
        var protocol = AH.CoalesceString(forwardedProtocol, requestUrl.Scheme);
        int port = forwardedPort ?? requestUrl.Port;

        var uri = new UriBuilder(protocol, host, port);
        if (uri.Uri.IsDefaultPort)
            uri.Port = -1;

        baseUrl = uri.ToString();
        if (!baseUrl.EndsWith("/"))
            baseUrl += "/";
    }
    else
    {
        baseUrl = "/";
    }
}
else
{
    baseUrl = baseUrl.TrimEnd('/') + "/";
}
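If you go the proxy route instead of Web.BaseUrl, the proxy just needs to set the headers this code reads. Here's a minimal sketch for nginx (shown only as an assumed example; the upstream name proget is hypothetical):

```nginx
location / {
    proxy_pass http://proget:80;

    # headers the URL-construction code above looks for
    proxy_set_header X-Forwarded-Host  $host;
    proxy_set_header X-Forwarded-Port  $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```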
hi @philippe-camelio_3885, this will be fixed in OT-380, in the upcoming Otter release (2.2.23), which is planned to ship this Friday.
@scroak_6473 got it, thanks! Okay, I've put in a request to engineering to add it to the upcoming ProGetCore container, and we'll update this thread with more details as we have them.
hi @Adam1, I looked into this further, and this is by design, but it should really be clarified a little better in the UI (I updated the docs).
A server can exist in multiple environments, but it's not recommended.
Basically, when a server is in multiple environments, there can be no single environment in context. This means that the variable function $EnvironmentName will return empty, and variables cannot be resolved against those environments.
This is unlike a role (which is set when executing a configuration plan, or explicitly set with for role X). So, in this case, I recommend using multiple roles.
hi @Adam1; thank you for the detailed reproduction instructions.
You are correct, this is indeed a bug, and it will be fixed in the next maintenance release (scheduled for Friday) as OT-381. If you'd like a pre-release, we can easily share one with you as well!
Hmmm, I'm not sure if it's possible... I don't know if PowerShell even allows multiple versions of the same module/resource to be installed on the same server?
@nicholas-boltralik_3634 I investigated this further, and the behavior seems to have existed since 5.2 and earlier.
I'm not totally sure how Maven works, but I'm almost certain you're not supposed to create an artifact with a version that has -SNAPSHOT in it. Instead, -SNAPSHOT seems to be intended for use inside of a <dependency> only.
if your project depends on a software component that is under active development, you can depend on a snapshot release, and Maven will periodically attempt to download the latest snapshot from a repository when you run a build. Similarly, if the next release of your system is going to have a version “1.8,” your project would have a “1.8-SNAPSHOT” version until it was formally released.
For example, the following dependency would always download the latest 1.8 development JAR of spring:
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring</artifactId>
  <version>1.8-SNAPSHOT</version>
</dependency>
My understanding of how the server logic works: if you request a version with -SNAPSHOT in it, then the latest version is returned. In other words, -SNAPSHOT is not a real version; it's just something the server uses to send back the latest version. And since there is no latest version, you get an error.
It seems to be a problem to allow a version called -SNAPSHOT to be uploaded, because then it's ambiguous. When you ask for -SNAPSHOT, do you want the version named that, or the latest version, etc.?
To be honest this is all confusing to me... I'm wondering, why do you create a -SNAPSHOT version?
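To make that concrete, here's a toy sketch of that resolution behavior as I understand it. Everything here (the function name, the argument convention, the error handling) is invented for illustration; it is not actual server code.

```shell
#!/bin/sh
# Toy sketch (my understanding only): a request for X-SNAPSHOT is not a real
# version; it resolves to the newest actual snapshot build. If no builds
# exist, the request fails -- which would explain the error described above.
# Assumes builds are passed oldest-to-newest; all names here are made up.
resolve_version() {
  requested="$1"; shift
  case "$requested" in
    *-SNAPSHOT)
      newest=""
      for build in "$@"; do newest="$build"; done
      if [ -n "$newest" ]; then
        echo "$newest"
      else
        echo "error: no snapshot builds to resolve" >&2
        return 1
      fi
      ;;
    *)
      # a concrete version is returned as-is
      echo "$requested"
      ;;
  esac
}
```

So requesting 1.8-SNAPSHOT when timestamped builds exist would return the newest build, while requesting it for an artifact whose only "version" is the -SNAPSHOT name itself has nothing to resolve to.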
Hello; good news, we can now fix this as part of SDK-64, which will ship in ProGetCore, a new container that will contain the .NET 5 version of ProGet, designed to replace the Mono stack.
We're planning to ship a public technology preview by the end of next week (as part of 5.3.11), and the ProGetCore container will eventually replace the ProGet container (which will become ProGetMono). This will happen well before November, but we don't know exactly when.
PG-1804 will fix the caching problem you identified, however.
Hello; what product is this for... Otter?
Hi Nik,
I can confirm that I got the same error message when I created a snapshot POM file, and tried to download it:
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>2-SNAPSHOT</version>
</project>
However, this works:
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>1</version>
</project>
However, I found the code where this was happening, and it seems to have always worked this way? I don't really get how SNAPSHOT works, but there seems to be some special handling for SNAPSHOT that I need to research some more. I'll follow up once I learn more.
@jyip_5228 said in Nuget package not found in proget, but searchable in UI:
This issue appears to still exist occasionally, is there a way to increase logging to try and troubleshoot this?
Errors on the ProGet side are logged under Admin > Diagnostics. However, the error message is coming from the nuget.exe side, so we have no idea why it can't find the package. Clearly, it's available in ProGet... and if you can see the package in the UI, then it should show up in the API as well. No one else is reporting this either, which leads me to think it's outside of the ProGet software.
We don't have a mechanism inside of ProGet to capture all incoming/outgoing traffic, but could you run the CLI tool through a proxy, and capture the results?
Regarding the caching, we were able to reproduce that -- it's PG-1804, which causes remote packages to sometimes be cached, even if you have caching disabled.
hi Marcus,
Since you're hooking ProGet to npmjs.org, the search results of npmjs.org are returned.
This is the usual behavior of npmjs.org, and you can see how it works by going to the website: https://www.npmjs.com/search?q=%40angula
I can't speak for how or why they designed things the way they did, but I can say that, in npm, the @ symbol denotes a package scope, and it's only sort-of-kind-of part of the package's name. It's messy, but suffice it to say, if a search string contains only a scope name, then npmjs.org seems to return all packages within that scope.
Otherwise, the search algorithm reverts to its default behavior, which I guess is searching names, descriptions, those sorts of things. Note that searching angular doesn't return packages in the @angular scope, either: https://www.npmjs.com/search?q=angular
Ah got it, thanks!
Seems like an easy line to add (and it probably doesn't increase container size much), but since it's only used during initial setup, is it easy to run apt-get install iputils-ping first, or does that require a special kind of network access? Or is this common in other product-based containers?
@scroak_6473 oh, I guess we already rewrote this for our .NET5 plans.
In the coming weeks, we'll be shipping a new container, ProGetCore, that is built on .NET Core (soon: .NET 5) and won't use Mono at all. So then we'll have a chance of easily adding this!
Definitely, it will be in the next release.
We don't consider it a serious security vulnerability; unprivileged users can create new feeds, but they can't view or use them. Obviously it has potential for "vandalism" (we already see some test feeds created on our public instance, for example), so we will take care of it right away.
@scroak_6473 oh I see, the usual ping! Thanks.
How/when is this helpful? Is it something that is called from outside the container? My mental image is, "I need to diagnose my network configuration, so I SSH into my Docker container, but ping isn't there?"
Thanks @scroak_6473; that method would probably work if we were using the protocol-level libraries (i.e. System.DirectoryServices.Protocols) to connect, but we're working one level above that (i.e. DirectorySearcher).
These libraries use a protocol called ADSI, which is basically a wrapper around LDAP, but with more security (either via SSL or something else, I forget) and some Microsoft extensions. But ADSI can also connect to any LDAP server.
Hello;
Thanks for the bug report! I've logged this as PG-1801, and it will get shipped in the next maintenance release.
Cheers,
Alana
This was addressed in PG-1795, which was released just recently in 5.3.9!
hi Simon,
Unfortunately the code solution I mentioned isn't feasible. Basically, to use that method, it would require us to rewrite our library to work at the "protocol level" (TCP/IP) instead of the "directory level" (users/groups/objects); so it's not so trivial...
Could you install the certificate into the Docker container to see if it works?
In this case, please ensure that the ProGet service is running and has permission to read and delete from that folder (Admin > Service). You can manually trigger the drop path monitor from that page, and see what it's doing behind the scenes.
Great!! I've logged this as PG-1798, and it's planned for 5.3.10 (Aug 28), but may get delayed depending on other priorities.
hi Simon, thanks for the suggestion.
I'm not so familiar with the ping command, and I didn't find much when searching "docker ping command". Do you have any information on it; is it a kind of standard?
Or is this the usual ping that you use to, like, ping google.com or something?
Cheers, Alana
Hello;
ProGet does support SNAPSHOT versioning, and when we test it, there's no problem. So I'm guessing it might be a problem with your POM file, repository configuration, or naming conventions?
They are a bit complex to get right, and the conventions have to be perfect; see https://maven.apache.org/guides/getting-started/index.html#What_is_a_SNAPSHOT_version
Hi @scroak_6473
Shouldn't there be an option to "ignore or skip certificate verification"?
If it's possible at the library level, it's something that could be added, I think; we recently added the LDAPS support. I'm not an expert on security/certificate verification, or whether this is even a good option to have, however.
The VerifyServerCertificateCallback may allow it, and it seems like it could be an easy checkbox to add to ADUserDirectory.cs, maybe.
It's definitely possible to fork InedoCore, modify it, then manually install it. Or we can very easily ship branched builds to our CI-extensions feed, if this is something you'd be interested in collaborating on/testing. This is how we got LDAPS working in the first place, as it's quite complex to reproduce an environment for that.
Cheers,
Alana
Oh no! Well, I can do this -- I can see the email of your new account (@JonathanEngstrom), and I think I can just delete your new account and set that email on your old account (@Jonathan-Engstrom)?
Let me know if it's okay, and I'll try that.
Can you try it again?
It was a setting on the Forums admin side, it was disabled for change by regular users.
I see; that makes sense, the re-indexing would delete it.
Hmm, okay, I just thought of another work-around instead. Please download the v12 package, edit the .nuspec file, rename it to be cased like the others, and then upload it? This ought to solve it as well.
It's definitely a bug we can look to fix, but thus far it's impacted just a single user on a single package in several years. It's a high-risk change, and we have a lot of things on the backlog that add more impact/value for users.
@scroak_6473 thanks Simon!
I added PG-1790 to our system; one blocker I have... how should we document this?
We'd love to even give an example of more advanced configuration, like yours with the Docker Compose file. Here are the current docs we have.
If you can suggest docs changes or do a Pull Request, that would help us get this done ASAP :)
Thanks, got it. An annoying work-around for sure, but it somewhat makes sense, because the package name must be decided based on the first package in the list? I don't know... but the right answer is to store everything as lowercase and have a fallback for mixed case (which is what we did in the cloud storage).
If no cache is set for the connector, none of this should technically matter, correct?
The problem occurs when the package is being loaded from disk, so it would impact Cached and Local packages, but not Remote packages.
I think dotnet publish, but remember these are really meant to work with nuget.org. This might be an issue to take up with the NuGet team; it's very possible their tool doesn't support this use case and is buggy.
It's just an HTTP PUT, so you could do it in a line of PowerShell as well using the Invoke-RestMethod cmdlet: Invoke-RestMethod -Uri $uri -Method Put -InFile $uploadPath
Here is some more information from NuGet's docs: https://docs.microsoft.com/en-us/nuget/api/package-publish-resource#push-a-package
@christian_panten as I mentioned, we recommend making a single symbol package to avoid this kind of headache (the behavior changes between versions of the nuget CLI), and I don't use nuget.exe to push packages in this manner, so I can't say for certain.
If you were to just upload or PUT the package to ProGet, then it would work fine. So we are just trying to figure out nuget.exe's quirks, and why it refuses to push to a custom symbol source...
The screenshot you showed seems to be fine. I also read the nuget.exe push documentation.
I don't think you're supposed to specify the snupkg file. I believe it will search for a .snupkg and then push it? It's hard to guess.
I also remember this was broken for a while in nuget.exe (it didn't use the SymbolApi at all), but it works in dotnet push.
Thanks; so just so I understand, if you made a properly-cased version of the v12 package file, it worked, for both versions?
It's not ideal, I know. But this situation must be extremely rare (first report in many years), and fixing it is costly/risky, so we need to weigh cost/benefit (especially when there are a lot more valuable things we could improve in the software) against a workaround.
@scroak_6473 great suggestions, thanks!! The mockups will really help me to present a case :)
I see that the priority is on finding the unassessed vulnerabilities, which makes sense. I don't know about a "mouseover" (we don't have a UI construct like this in our products to make it easily doable), but I can envision a modal window (popup) or a regular page that allows for quickly assessing those vulnerabilities.
This isn't trivial, but it's not terribly complicated either. I'm going to try to get this submitted internally next week (I'll share what I write up), and from there we might be able to get this into the following or a near-term maintenance release.
@wsah_6160 @pluskal_4199 assuming this is related to 5.3, then there should be a fix coming soon!
hi Simon, thanks for the suggestion!
I'll be honest, I'm really not that familiar with Docker Swarm or Secrets, but I wonder, from the "code inside ProGet" perspective, does this seem as simple as:
// fake code, just an example
if (EnvironmentVariables["proget_connection_string"] != null)
    return EnvironmentVariables["proget_connection_string"];
else if (EnvironmentVariables["proget_connection_string_file"] != null)
    return File.ReadAllText(EnvironmentVariables["proget_connection_string_file"]);
else
    return ReadFromNormalConfigFile();
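For what it's worth, that pattern matches the *_FILE convention many official Docker images use with Docker secrets (a secret is mounted as a file, and the _FILE variable points to it). A runnable shell sketch of the same fallback -- the variable names and default are assumed for illustration, not ProGet's actual settings:

```shell
#!/bin/sh
# Sketch of the "_FILE" fallback convention used with Docker secrets.
# Variable names are assumed for illustration, not ProGet's actual ones.
get_connection_string() {
  if [ -n "${PROGET_CONNECTION_STRING:-}" ]; then
    # a plain environment variable wins
    echo "$PROGET_CONNECTION_STRING"
  elif [ -n "${PROGET_CONNECTION_STRING_FILE:-}" ]; then
    # Docker secrets are typically mounted as files under /run/secrets
    cat "$PROGET_CONNECTION_STRING_FILE"
  else
    # fall back to the normal configuration file
    echo "default-from-config-file"
  fi
}
```

With Swarm, the _FILE variable would point at something like /run/secrets/proget_connection_string, so the connection string never appears in the container's environment.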
First off, if you want to have separate library and symbols packages (which we don't recommend), then you'll need to make a separate symbols feed. ProGet supports a "combined package" and will strip out symbols/source unless explicitly requested, so you don't have to bother.
But if you want to separate them, please note that nuget is a "little" funny. If the file extension ends in .snupkg, then it ignores the source argument and attempts to push the file to Microsoft's server (symbols.nuget.org).
This is because symbol packages and regular packages must be pushed to different feeds. If you want to specify a custom symbols feed, then please use --symbol-source when using a .snupkg file.
@christian_panten I have a favor... can you suggest (or do a pull request) how we can update our documentation to make this clearer? Thanks!
ProGet has a really cool feature called Semantic Versioning for Docker Container Images, and it works pretty much how you described, but it also enforces that containers have a valid semantic version tag. For example, latest will always refer to the highest stable version of a container, and 4 will be the latest stable of 4.x.y.
As for "generic archives", check out Universal Packages - they're really powerful, and are like "nuget/maven" but for your own applications and components, and extensible.
Hi all, just an update! We will be shipping a potential fix in PG-1783, which adds a new checkbox in advanced settings (unchecked by default):
Close Database Connections Early
EXPERIMENTAL (for debugging/testing only) - As of ProGet 5.3, database connections are left open during the lifecycle of a NuGet API request as a means to reduce overhead; however, this may be causing ProGet to run out of available connections to SQL Server. Set this value to true to open/close database connections as needed on NuGet feeds.
We'll update when this is shipped -- but if we can get some folks to verify that this works better (we can't repro at all), then we will likely make it the default. Hopefully this will do it. It seems better than raising connection pool limits.
If so, then the savings in connection open/close overhead don't seem to make it particularly worthwhile. This "keep open" technique made a ton of difference elsewhere in our software, but since a NuGet API request may yield a ton of other network requests (via connectors) and block, the pool may be getting drained too quickly...
Just a theory, as I mentioned. Anyway, hope this helps -- stay tuned!!
Thanks!
I see; it seems the problem is the casing change from v12 to v17 of the package.
I assume you aren't using both at the moment? As a work-around, can you try deleting v12 of the package, then renaming the folder to WPF instead of Wpf?
Will that work?
Thanks, you have a pretty good point here. Finding where the vulnerabilities live is kind of difficult, but let's make it easier.
First, bc9ab73e5b14 is a layer that's in one or more container images, which have zero or more tags in a repository (in a registry/feed).
What's actually useful information is the registry (feed), then repository+tag (containername:version).
If all this added up to a single registry + repository + tag combination, we could display that instead. But there are going to be a lot of container images using that layer...
Maybe clicking that hash opens up a page that is like, "tags that use this layer" or something, and it displays Registry (Feed) and Repository+Tag in a simple list view?
The only way to get to this page would be by clicking on an image hash like that, so perhaps it could be a modal popup window instead?
Just brainstorming... what do you think?
At first, I wouldn't worry about connector timeout errors to nuget.org unless it's frequent. They happen due to service or network outages (your end or theirs), nothing to worry about too much.
But otherwise it's hard to say; are you totally sure that the error is coming from that connector? Maybe you have multiple connectors to nuget.org?
Otherwise, nuget.org's JSON-LD API (v3) sometimes points to the nuget.org OData-based URLs (v2), though you'd have to follow the URLs returned by the v3 API to see that.