Hi @mcascone, I'm afraid it's not possible.
nuget or dotnet nuget tools only understand the NuGet Package Format, and the NuGet API, so there's no way for the tools to use different formats or different APIs...
@nuno-guerreiro-rosa_9280 very glad we could fix it! It's definitely a .NET5 bug, and in a totally unexpected place
@jn_7742 unfortunately, Microsoft has also abandoned Source Server in favor of SourceLink, and they will not add Source Server support to Portable PDB files
Technically, ProGet's Source Server works on Linux, but since ProGet can't inject its Source Server urls into the .pdb files, it's impractical to use. I'll let you guess which 20+ year old program is required to inject Source Server metadata into Microsoft PDB files 
Bottom line, you'll need to upgrade your processes to use SourceLink
Unfortunately this is just an invalid package... someone really needs to tell the developers at Microsoft that the developers at Microsoft really want packages to be in SemVer2 format, per the Microsoft documentation 
https://docs.microsoft.com/en-us/nuget/concepts/package-versioning
Microsoft's documentation no longer seems to describe the "legacy versioning" behavior that Microsoft once documented, but we captured it in our documentation before Microsoft removed it.
Bottom line, the "." is an invalid character in a non-SemVer2 (i.e. four-part version number) pre-release tag; for example, something like 1.0.0.5-preview.1 would be rejected. That's why it's getting a 400 error. It used to do that on NuGet.org, too; I guess it doesn't now. We probably won't change this, since it kind of works already, and since it's only one package.
But if you email the Microsoft team responsible for that package, they'll probably just change their versioning going forward. It'd be nice if Microsoft documented how their non-SemVer2 packages are supposed to work.
Hello;
The best place to look to troubleshoot this would be the Symbol and Source Server documentation.
Symbol server does work on Linux, just not with symbols in the "Microsoft PDB format" (which might be what you're referring to). It's fine for Portable PDB (which is the "new" format).
Unfortunately Microsoft seems to have totally abandoned the legacy (Microsoft PDB) format, and has not, and will not, open-source the code or specifications required to read those files.
The only supported way is using PDBSTR.EXE, which is essentially a 20+ year old program that modern tools (including ProGet) embed. It can't run on Linux, only Windows.
We decided not to do hacks (like using WINE) to run it, because there's less and less legacy usage each year.
Best,
Alana
Whoops, my bad!
@brett-polivka thanks so much for the help, this was a baffling .NET5 issue, but so glad we could identify and fix it. This will definitely go in the maintenance release as well, of course.
@viceice thanks! We will add a command via PG-1897 to handle this, it seems quite straightforward...
Stay tuned :)
@brett-polivka thanks again for all the help and insights!
I talked this over with the engineering team, and we just decided to remove the ParallelEnumerable query.
Would you be able to try a pre-release container?
inedo/proget:5.3.23-ci.4
@viceice sorry for the slow reply, but I presented this at our engineering meeting and our product manager thinks it's a great idea. But, he asked me to write the documentation on it so they know exactly what changes need to be made, and how it will be used...
So I wonder if you can help?
https://docs.inedo.com/docs/proget/installation/installation-guide/linux-docker#upgrading
Basically I think we should add another heading like this.
The ProGet container will automatically upgrade the database when starting up; this upgrade might take a few minutes, which may appear to cause delays to automated probes (??) like Kubernetes.
You can run the following command to instruct the ProGet container to upgrade the database, and then exit.
docker run ???? docker exec ???
I'm just stuck at how to document the new command to run.
From a code/C# standpoint, we can simply add an option to ProGet.Service.exe to upgrade the database and exit. And then, we won't upgrade the database if the version matches.
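Just to illustrate (the option name and exact invocation below are hypothetical until PG-1897 is actually designed), I'm picturing the docs showing something like:
docker run --rm <same volumes/environment as your normal ProGet container> inedo/proget:<version> upgradedb
i.e. run the container once with an "upgrade the database and exit" argument, wait for it to finish, then start the normal long-running container.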
@brett-polivka fantastic, thanks for the additional research. This will make it so much easier to debug from here.
Do most of the stack traces look like this? I.e. coming from Inedo.Web.PageFree.PageFreeHandler.GetHttpHandler?
If so, the issue is happening in a totally unexpected place (basically routing a URL). This is just using a relatively simple query with Linq's ParallelEnumerable library methods. Under-the-hood (as the stack trace shows), that seems to leverage operating-system level mutex support... which is likely why it's problematic only on your specific hardware/environment.
We saw a ton of Mono-related bugs early on with this (some would even SEGFAULT on certain hosts), largely because we tend to find the "corner-cases" with optimizations, like working around Activator.CreateInstance slowness by using System.Reflection.Emit directly.
Anyways, I'm going to have to involve our engineering team for ideas, because at this point we have to guess how to work around this.
@entro_4370 said in Support for R and CRAN:
Hi! Any news on R-support in ProGet? There seems to be quite a demand! As a Data Engineer in a large government that bases its package management on ProGet, I feel that there is a clear gap here...
I'm afraid it's still not on our roadmap, based on our research. Here's how we evaluate this sort of thing.
First, what's the market demand outside of existing users? We didn't find much opportunity to attract data engineers (or their bosses, who would buy software) who aren't already using ProGet (or a competing product) to use a private repository instead of CRAN directly. It's just not a big discussion in the R/CRAN community, unlike other package types.
Then, we surveyed/asked users, and we phrase it like this: how much more would you pay for ProGet if it had this feature? This is an honest assessment of "how much more value would this feature bring to you" (which is what we want to decide).
Unfortunately everyone we asked said, "it's a nice to have, but we would actually pay X% more if ProGet had feature X instead."
Anyways, it's still something we want to do, but we want to build the features that bring the most value to the most people first...
Of course, if you have insights on this, please let us know! Cheers.
@scroak_6473 this is very helpful actually, thank you!
There might be a relation to the LdapReferalException here, so we're going to do some more research and try to suggest what to try next. It might involve some new code (and potentially an upgrade).
Please stay tuned...
Thanks for the update @brett-polivka
The ProGet code (application layer) is effectively identical between those images; the main differences are the underlying container image and .NET5 vs Mono. We had some initial bugs with .NET5, but this is the first we've heard of Mono vs .NET Core problems of this nature, out of a lot of users.
Unfortunately, we really don't have any means to diagnose this further at this point. We're not even sure what code we could put in place to do network / operating-system level debugging. And worse, we can't reproduce this in our containers, even on an Azure host.
We're open to ideas here, but I think your best bet would be to go to as basic an installation as possible: just a simple container with a simple database, and then add the pieces back until you figure out which thing is causing problems.
If you've upgraded, then the problem is most certainly in your Azure/cluster configuration. I don't know, maybe it's some default configuration setting or something in your cluster that's causing strange network/routing problems... a total guess, because I can't really advise on Kubernetes cluster diagnostics. But that's where to look.
As you said, 100 simultaneous queries are really easy to handle -- the micro-servers (like 512MB RAM, including SQL Server) we use in our testing labs can do that no problem.
I'm afraid there's nothing in ProGet that can really help diagnose this... it would just tell you what you already know -- there are a lot of incoming/waiting connections and the database connections are timing out. All of that's happening at the operating system level, not in the application level of our code.
@brett-polivka the standard version of ProGet can indeed handle a single project, and in fact many projects from many users at the same time -- it's the most performant private server on the market, by far, in fact.
But a single server/computer/container can only handle so many network connections, and what you're experiencing is the network stack being overloaded
It could be related to the virtualization or container settings, etc. It could be authentication across networks, which shouldn't happen in Azure but it might. But it's not really related to the database, or database performance at all. This is the first place you'll see stack overload, since database connections have short timeouts, compared to connectors like nuget.org.
There is an optimization in 5.3.22 (PG-1891) that might help in scenarios like yours (Linux connected to slow-ish database), so I would try to upgrade.
Otherwise investigate why the network is being overloaded.
@Stephen-Schaff is SA-RestrictedAuto also a group? It's possible in AD to have a User and Group have the same name.
There is (or was) an issue in our products where adding privileges will favor the Group when both a User and a Group have the same name. I'm not sure if it's been resolved, but @rhessinger might be able to confirm it.
The issue is related to the UI more than anything, and it can be worked around with a "database hack", by changing the PrincipalType_Code from G to U in the Privileges table for the appropriate entry.
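For example (the table and column names are as described above, but the WHERE clause is just a placeholder -- check the table first and target only the affected entry):
SELECT * FROM [Privileges] WHERE [PrincipalType_Code] = 'G'
UPDATE [Privileges] SET [PrincipalType_Code] = 'U' WHERE <the specific privilege entry>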
Unfortunately, this behavior is to be expected with "dotnet restore"; it's massively parallel, and will issue as many parallel requests as the build agent allows. Typically, this is more than the server can handle. The end result is that the "network stack" gets overloaded, which is why the server is unreachable.
The reason is that each request to a ProGet feed can potentially open another request to each configured connector (maybe NuGet.org, maybe others), and if you are doing a lot of requests at once, you'll get a lot of network activity queuing up. SQL Server is also running on the network, so those connections just get added to the queue, and eventually you run out of connections.
One way to solve this is by reducing network traffic (removing connectors to nuget.org, restricting the build agent if possible, etc.), but the best bet is to move to load-balancing with ProGet Enterprise. See How to Prevent Server Overload in ProGet to learn more.
Another option is to make sure you're not using --no-cache when you use the dotnet restore command. NuGet typically will create a local cache of downloaded packages, which helps alleviate some of the load on the ProGet server. Passing --no-cache will bypass that local cache and cause it to always pull from the server.
Another thing that might help is using the --disable-parallel option in dotnet restore. That will prevent restoring multiple projects in parallel, which will also reduce the load on ProGet.
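For example, a restore invocation using that option might look like this (adjust to your own build scripts; --disable-parallel is a standard dotnet restore flag):
dotnet restore --disable-parallel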
Fortunately and unfortunately, NuGet does a lot of parallel operations that can saturate a NuGet server. When you are restoring a lot of simultaneous builds and solutions with a large number of projects, it can really affect the performance of a server.
This is ultimately where load balancing will come in.
@Stephen-Schaff what was the restriction that you added? The "Publish Packages" and "View & Download Packages" tasks are made up of a collection of attributes, and those are what's tested. So you'd want to Restrict both of those.
Restrictions override grants, but a more-specific grant will override a less-specific deny. But in any case, a Grant at the system level with a Deny at the feed level should accomplish what you're doing.
If you can share a screenshot that might help us see it as well
@scroak_6473 we could definitely try a screen share, but in a case like this (where we have no idea what's wrong), it's mostly digging in the code and trying to think of things to try to get a clue for more information. Currently, I'm at a loss... because the error you have shouldn't be happening, but it clearly is.
So now I have a new idea. I would like to eliminate Docker from the equation, as it handles the "api" username slightly differently than everywhere else. Plus, you can do this all in your browser.
Can you try to visit a restricted (i.e. not anonymous view) NuGet endpoint using the "api" user name, and a password?
For example, it should look like /nuget/NuGetLibraries/v3/index.json, and then your browser should prompt for a Username/Password.
Depending on the result of this, we will explore different code paths, and then might need to add some more debugging code.
Best,
Alana
Hi Simon,
That is strange; it's basically your browser "hiding" the underlying error. Sometimes that happens if the response body is too short... which could happen if the server got into some really bizarre state.
I did find the logs you sent, but they're very random and they don't make much sense; they're random ASP.NET errors, and we can't see the full situation. In general, 500 errors should be logged under ProGet > Admin; this will provide a stack trace for whatever errors are happening.
If you can't get to the admin page, then something is really wrong with the server. I would try restarting your container.
Alana
@scroak_6473 is it the exact same message? Basically, the "api" user not found in the directory?
@Stephen-Schaff thanks for the bug report. I verified that this may happen depending on the user's permissions and which feeds they can/can't use --- but it seems like an easy enough fix that we can do via PG-1894 (targeted to the next release). The packages can't actually be viewed upon clicking, but showing packages they can't see is a sub-optimal experience.
@philippe-camelio_3885 oh I see; you mean, capture the output of a process or script execution into a variable or something.
Definitely something to consider as an enhancement, I think. That wouldn't be too bad (though the variable could get huge, and runtime variables aren't really designed for large amounts of text like that).
Thanks for the update; and the upcoming fixes will certainly make it so that purging is much more efficient on the manual execution side, just in case there's an "Explosion" of executions like this.
So right now, my concern is that it's "logging every sync" (once per hour), due to a sort of bug or something. Can you check what's getting logged in Infrastructure Sync? You should be able to see this under Admin > Executions, and see if you can spot a pattern?
No rush. The infra sync executions should clearly show a change history of what was updated on the infra side.
@philippe-camelio_3885 said in OTTER - Capture Logs from block execution and assign to variables ?:
The ANSIBLE::BM-Playbook module returns the log:
By this, I assume you mean, writes to the Otter execution log, either via Log-Information or an execute process? You might have to do this via PSExec, write the logs to text, and parse it out that way using a regular expression...
At this time, there's no way to read entries from the log during a live execution.
Best,
Alana
Thanks @scroak_6473; I found the email, and can see a lot of information from what you sent.
I can clearly see the identical api challenge/response, and the different behaviors from ProGet.
Unfortunately, I'm not able to reproduce the scenario on this end, using our own instance and a domain impersonated account. But I think that's because this issue may have already been fixed with PG-1859; would you be able to upgrade to 5.3.22 to confirm?
@Joshua_1353 I also got this message from GitHub;
Basic authentication using a password to Git is deprecated and will soon no longer work. Visit https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/ for more information around suggested workarounds and removal dates.
I wonder if it's already disabled on your account
@Joshua_1353 Thanks, I was able to access it without a problem, and added a test script.
But I made a mistake at first: I kept the default branch name of "master" in the Raft dialog. On your repository, the default branch is named "main".
What is the configuration inside of Otter? If it says "master" then there will be a sort of error.
I'm not familiar with what apt-cacher is, but I'm guessing it is a kind of proxy/caching for APT packages?
I'll note that connectors are not supported for Debian feeds at this time; only packages you publish yourself to that feed.
@mathieu-belanger_6065 said in Connection reset while downloading npm packages:
I am curious, would there be an impact on performance when "piping" connectors together? For example, internal feed A has a connector to internal feed B, which has a connector to internal feed C, which has a connector to npmjs.org?
Connectors are accessed over HTTP. So assuming you have a "chain" like A --> B --> C --> npm.js (i.e. 3 different feeds and 3 different connectors), each request may yield 3 additional requests.
So when your browser asks feed A for package typescript@3.7.4, the following will happen:
- If feed A doesn't already have the package locally (or cached), its connector (feed B, in this case) is queried over HTTP for typescript@3.7.4
Each connector follows the same logic. When ProGet (via a request to feed A) asks feed B for that package, the same logic is followed:
- If feed B doesn't already have the package locally (or cached), its connector (feed C, in this case) is queried over HTTP for typescript@3.7.4
Continuing the pipe, when ProGet (via a request to feed B via a request to feed A) asks feed C for that package, the same logic is followed:
- If feed C doesn't already have the package locally (or cached), its connector (nuget.org, in this case) is queried over HTTP for typescript@3.7.4
This is why caching is important, but also why chaining may not be a good solution for high-trafficked npm developer libraries like typescript. The npm client basically does a DoS by requesting hundreds of packages at once. The same is true with nuget.exe as well.
@arozanski_1087 I don't think repackaging in the UI is the best approach for this; in theory it should work, but we never designed or tested it for this scenario. It's more intended for changing pre-release versions.
In theory having both versions (1.0, 1.0.0) in the feed should work; I would just delete all versions, then upload from disk, then edit the file on disk (1.0 to 1.0.0), then upload the other.
We heard of one other user doing that.
If you can find "the trick" please do share, because certainly you won't be the last person using this ancient, broken package ;)
Hi Simon, you can send to support at inedo dot com. Please include [QA-473] in the subject, so we can find it easily :)
@arozanski_1087 ah, yes. Owin. It's most definitely a problem package 
https://www.nuget.org/packages/Owin
There's not much we can do about this, when you have a connector to NuGet.org on the same feed. The reason is this.
NuGet stopped supporting this back in 2016, but unfortunately... the developers are using an 8+ year old package.
If you don't use a NuGet.org connector on the feed, you can simply follow those manual repackaging steps I mentioned, and create your own Owin 1.0.0. The clients will still be able to get it, but may issue a 1.0 request first.
The two api endpoints I can think of are:
- /v2/_catalog returns all repository names ("container" names)
- /v2/<repository-name>/tags/list returns all tags within the specified repository
Some more details are here: https://docs.docker.com/registry/spec/api/
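For example, something like this with curl (the hostname is a placeholder, and you may need to pass credentials depending on the feed's security settings):
curl https://proget.example.com/v2/_catalog
curl https://proget.example.com/v2/<repository-name>/tags/list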
That URL is a bit strange, and I wouldn't expect it to work.
What is /endpoints/Chocolatey/content/chocolatey.license.xml? If you visit in your browser, what happens?
Ah, glad we can see the message now.
Okay - so that error message is coming from within libgit2, and it basically just means there's an authentication error. The most common reason is that the username/password credentials for GitHub are invalid, but the git server (github in this case) won't give the reason -- it could also mean your account is locked, you're using a "username" instead of a "token", you don't have access to the branch, etc.
If it works on your local computer, but not BuildMaster, then it means there's "something" in either your local repository or global git configuration that's allowing it. Usually, this is stored git credentials, or even a plug-in that's allowing it.
That being said, it could also be related to the credentials changes in Otter v3; the problem is we can't reproduce it here. Could we trouble you to do this:
- invite whatatripp, so then we can test your repository
With this, we can be looking/working on the same repository, and at least figure out where the problem lies.
@arozanski_1087 thanks for clarifying...
This will be a bit tricky to debug; there are some supported scenarios for the quirky version in the UI, but many over the API don't work (because the API requires semver).
There could be other factors at play, like connectors or caching, or who knows... so could you set up a basic reproduction case that you could share/send to us?
create two basic packages (no contents, just the nuspec file is ok) for 1.0 and 1.0.0 that basically mimic your real packages; your real packages are okay too, but use stand-ins in case you can't share them for proprietary reasons
create a new NuGet feed
try to reproduce on the new NuGet feed
We've tried the above test, but don't experience the problem you describe... the packages can be deleted.
Hi @arozanski_1087 ,
Can you navigate to the package(s) from the UI? If so, you should be able to delete it from the UI.... what errors are you seeing when you try to delete?
If you have a package with a quirky version, the best bet is to download it, edit the nuspec file in the package, delete the quirky package on the server, then republish it.
Best,
Alana
@philippe-camelio_3885 thanks. please keep us in the loop!
As expected, that's a LOT of infrastructure sync executions. I wonder why. Are there frequent variable changes on your servers/roles?
There's probably something off, where it's logging when it shouldn't. We can investigate that another time, but in the meantime, the upcoming optimizations in pruning the manual executions should make this go a lot faster next time.
Thanks @philippe-camelio_3885
So, the good news is, we've identified the problem. There was just a huge number of manual executions happening, for some reason, and the manual execution purging routine could never catch up. Changing those throttles wouldn't make a difference I'm afraid, as none will trigger a manual execution...
First, can you please share the results of this query, so we can see what made all those?
SELECT [ExecutionType_Name], COUNT(*) FROM [ManualExecutions] GROUP BY [ExecutionType_Name]
That will tell us what Manual Executions are around, mostly so we can understand what it is. I suspect, infrastructure sync.
That being said... the first thing I'm now seeing is that the report looks old. It's because the number of rows is 164,125, which is the exact same number as from before. So, I'm thinking, actually, you didn't commit the transaction in the query I posted before? It included a ROLLBACK statement as a safety measure... that's my fault, I should have said to only run DELETE if you were satisfied.
Since the query seems okay (it reduced rows from 164K down to 1k), please run this:
DELETE [Executions]
FROM [Executions] E,
(SELECT [Execution_Id],
ROW_NUMBER() OVER(PARTITION BY [ExecutionMode_Code] ORDER BY [Execution_Id] DESC) [Row]
FROM [Executions]
WHERE [ExecutionMode_Code] IN ('R', 'M', 'T')) EE
WHERE E.[Execution_Id] = EE.[Execution_Id]
AND EE.[Row] > 1000
From here, it should actually be fine...
@mathieu-belanger_6065 thanks for all of the diagnostics and additional information. I think you're right, it's environment / network specific, and not related to ProGet.
I would check the ProGet Diagnostic Center, under Admin as well.
Otherwise, ProGet doesn't operate at the TCP level, but uses ASP.NET's network stack. There's really nothing special about how npm packages are handled, compared with other packages, and we haven't heard of any other issues regarding this.
For reference, here's code on how a package file is transmitted. Note that, if you're using connectors and the package isn't cached on ProGet, then each connector must be queried. This can yield quite a lot of network traffic.
if (metadata.IsLocal)
{
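    // the package is already stored (or cached) locally, so stream it directly from the feed without using connectors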
    using (var stream = await feed.OpenPackageAsync(packageName, metadata.Version, OpenPackageOptions.DoNotUseConnectors))
    {
        await context.Response.TransmitStreamAsync(stream, "package.tgz", MediaTypeNames.Application.Octet);
    }
}
else
{
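    // the package isn't local; check each connector whose filter includes this package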
    var nameText = packageName.ToString();
    var validConnectors = feed
        .Connectors
        .Where(c => c.IsPackageIncluded(nameText));
    foreach (var connector in validConnectors)
    {
        var remoteMetadata = await connector.GetRemotePackageMetadataAsync(packageName.Scope, packageName.Name, metadata.Version.ToString());
        if (remoteMetadata != null)
        {
            var tarballUrl = GetTarballUrl(remoteMetadata);
            if (!string.IsNullOrEmpty(tarballUrl))
            {
                var request = await connector.CreateWebRequestInternalAsync(tarballUrl);
                request.AutomaticDecompression = DecompressionMethods.None;
                using (var response = (HttpWebResponse)await request.GetResponseAsync())
                using (var responseStream = response.GetResponseStream())
                {
                    context.Response.BufferOutput = false;
                    context.Response.ContentType = MediaTypeNames.Application.Octet;
                    context.Response.AppendHeader("Content-Length", response.ContentLength.ToString());
                    if (feed.CacheConnectors)
                    {
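                        // buffer the remote package into a temporary stream so it can be cached locally as well as streamed to the client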
                        using (var tempStream = TemporaryStream.Create(response.ContentLength))
                        {
                            await responseStream.CopyToAsync(tempStream);
                            tempStream.Position = 0;
                            try
                            {
                                await feed.CachePackageAsync(tempStream);
                            }
                            catch
                            {
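                                // best-effort caching: if the cache write fails, still stream the package to the client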
                            }
                            tempStream.Position = 0;
                            await tempStream.CopyToAsync(context.Response.OutputStream);
                        }
                    }
                    else
                    {
                        await responseStream.CopyToAsync(context.Response.OutputStream);
                    }
                    return true;
                }
            }
        }
    }
}
The Execution_Configuration column of the ManualExecutions table will give a clue; it's XML, but if you expand the column, you'll see the name of the manual execution.
It's only supposed to log if something changed, however...
If there's a bug, one way to check would be to disable infrastructure sync, for the time being.
If I'm understanding correctly, did your Manual Execution records go from 1,000 to 164,000 in just a few days? If so, that would explain a lot...
These are the types of so-called Manual Executions:
They are supposed to only occur on a manual basis, like when you trigger something from the UI so you can get logs. Or, in the case of sync infrastructure, whenever infrastructure changes.
Any idea what all the manual executions could be?
Hi @Adam1 ,
The Restart-Server operation is performed on the server itself, using the Inedo Agent or PowerShell Agent.
Behind the scenes, the agent will just use the advapi32.dll::InitiateShutdown Win32 API method, and that error string indicates that Windows is returning ERROR_ACCESS_DENIED when attempting to initiate the Shutdown. This is the same method that shutdown.exe uses behind the scenes as well.
So basically, just make sure that the agent process is running as an admin/system account.
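For reference, here's a minimal sketch (not the agent's actual code, just an illustration of that API) of what the call looks like via P/Invoke; a return value of 5 (ERROR_ACCESS_DENIED) is what produces the error you're seeing:
using System;
using System.Runtime.InteropServices;

static class RestartSketch
{
    // advapi32.dll::InitiateShutdown, the same API that shutdown.exe uses
    [DllImport("advapi32.dll", CharSet = CharSet.Unicode)]
    static extern uint InitiateShutdown(string machineName, string message, uint gracePeriod, uint shutdownFlags, uint reason);

    const uint SHUTDOWN_FORCE_OTHERS = 0x1, SHUTDOWN_RESTART = 0x4;
    const uint ERROR_ACCESS_DENIED = 5;

    static void Main()
    {
        // null machine name = local computer; 30-second grace period before restarting
        uint result = InitiateShutdown(null, "Restarting via Otter", 30, SHUTDOWN_RESTART | SHUTDOWN_FORCE_OTHERS, 0);
        if (result == ERROR_ACCESS_DENIED)
            Console.WriteLine("Access denied; the calling process needs to run as an admin/system account.");
    }
}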
Best,
Alana
How often is this happening? It shows 102 executions were purged, and based on the I/O, a lot of logs were deleted... this can actually be quite resource-intensive, as there is a lot of log data.
But this usually happens during off-hours, etc., so it shouldn't be disruptive.
@Joshua_1353 did this work in Otter v2?
The "too many redirects/auth requests" is usually a kind of red herring, and refers to some sort of configuration problem (corrupt local repository, cached credentials, etc.). We'd need to see the whole stack trace --- but could you post it to a new Topic, so we can track it differently?
I don't think it's related to v3. The reason it didn't show in v3 is that we just forgot to tag it properly after some code refactoring changes in Otter.
Thanks @Joshua_1353! Looks like this was a minor configuration change, where that particular repository type wouldn't load in Otter v3. I added a missing attribute and rebuilt, so now it's displayed in the list.
Easy fix; just download the latest Git extension (1.10.1).
Hi @Stephen-Schaff_8186,
Thanks for the clarifications! In fact, I wanted to learn some of the behavior, and here's what I discovered.
I'm sharing the details, because I think we should take the opportunity to clarify not only the docs, but the UI, since it seems like this can be improved. It's a new concept in ProGet 5.3, and it was primarily intended to guide set-up of new feeds, so we haven't looked at it closely since first adding the feature.
There are two sets of feed type options, and which ones are displayed depends on whether the feed type is denoted as having a public gallery (HasPublicGallery).
These all map to an enum: Mixed = 0, PrivateOnly = 1, PublicOnly = 2, Promoted = 3.
The following feed types are denoted (internally) as having an official, public gallery: Chocolatey, Cran, Maven, Npm, NuGet, PowerShell, Pypi, RubyGems.
Almost all of the behavioral changes occur in the "out of box tutorial", to guide users through the setup. Aside from that, here's the UI impact I found:
On the list packages page (e.g. /feed/MyFeed):
On the Package Versions page (e.g. /feed/MyFeed/MyPackage/versions):
No UI changes.
Well, that's everything. Any opinions / suggestions?
I'm not sure why the Add Package button is disabled. Of course, you can still use the API, or even navigate directly to the page. Perhaps a warning on the Add Package page would be better?
Cheers,
Alana
This upgrade path isn't supported, and ProGet 5.0.1 does not work on SQL Server.
Your best route for upgrade is ProGet 5.0 > ProGet 5.3. Then, migrate to ProGet for Linux.
Hello;
That search syntax is really only supported by the NuGet v3 API, I think; so, ProGet simply forwards the query on to that API, and returns the results.
But regardless, connector filters need to be applied after the remote feed returns results, because connector filter logic can be more complex than what is supported by the various feed APIs (you can allow Microsoft.* and Inedo.*, for example).
More advanced connector filter options are definitely something we've considered, and we'd love to do things like "version: 3.*" for example. But, it's a lot more complicated under the hood, and probably isn't even feasible given the nature of feeds.
Alana