Hi @jfullmer_7346 ,
I was able to reproduce this, and we will fix it in an upcoming maintenance release (PG-2920). This came from a regression in 2024.28, introduced alongside some other NuGet feed changes.
Cheers,
Alana
Hi @pratik-kumar_8939 ,
ProGet does not require two different feeds for this, and we would generally recommend putting them in the same feed. If you wish to have the artifacts in two different feeds, you can just create different feeds and name them however you'd like.
Thanks,
Alana
If the connector is not showing up in the UI, then it's not in ProGet's database.
When you use ProGet to delete connectors (either with API or UI), then it will be logged as a "Connector Deleted" event under Admin > Event Log. There is no other way that the ProGet software can remove a connector record in the database.
If you don't see anything in the event log, then it's not being deleted through ProGet. It's possible that someone or some script has direct access to the database and is removing it that way. Or, perhaps, the database is being restored somehow.
Cheers,
Alana
Hi @itadmin_9894 ,
It sounds like there's a version mismatch between your database and the web server - basically, the web server has different code than the database. I would just try a downgrade/upgrade from Inedo Hub, and then the code will be there.
Cheers,
Alana
Do you mean that the connector is no longer associated with the feed? Is the connector deleted altogether? Or is it that you don't see packages when you load the feed?
We don't document this, but @steviecoaster recently figured it out and published a pretty cool library: https://github.com/steviecoaster/InedoOps
It may do what you already need, so I'd check that out!
But if not, you should be able to find the answers in
https://github.com/steviecoaster/InedoOps/blob/main/source/public/Security/Users/Set-ProGetUserPassword.ps1
As a note, Users_CreateOrUpdateUser is just a stored procedure, so you could also peek at the code to see what it's doing behind the scenes. Groups is just <groups><group>A</group></groups>
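For example, since the Groups value is just a small XML fragment, you could build it from a list of group names like this (the group names here are placeholders):

```shell
# Build the <groups> XML fragment from a list of group names.
# "Developers" and "Admins" are placeholder names.
groups="Developers Admins"
xml="<groups>"
for g in $groups; do
  xml="$xml<group>$g</group>"
done
xml="$xml</groups>"
echo "$xml"   # → <groups><group>Developers</group><group>Admins</group></groups>
```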
Hi @janne-aho_4082,
This is most certainly related to heavy usage, even though it might not seem that way at first; the connectors are basically a load multiplier. Every request to ProGet is forwarded to each connector via the network, and when you have self-connectors, that's a double burden.
See How to Prevent Server Overload in ProGet to learn more.
Keep in mind that an npm restore will do thousands of simultaneous requests, often multiple for each package. This is to check the latest version, vulnerabilities, etc. So you end up with more network traffic than a single server can handle; more RAM/CPU will not help.
This is most commonly seen as SQL Server connection issues, since SQL Server also uses network traffic. The best solution is to use network load balancing and multiple nodes.
Otherwise, you have to reduce traffic. Splitting feeds may not help, because the client will then just hit all those feeds at the same time. The "connector metadata caching" can significantly reduce network traffic, but it comes at the cost of outdated packages. You may "see" a package on npmjs (or another feed), but the query is cached so it won't be available for minutes.
Since you're on Linux, I would just use nginx to throttle/rate-limit ProGet. The problem is peak traffic, so start with something like 200 requests max and go up from there.
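As a very rough sketch (the backend address, rate, and burst values below are all assumptions that need tuning for your environment), an nginx rate limit might look like:

```nginx
# Assumed setup: nginx reverse-proxying ProGet at proget-backend:8624.
# Limit each client IP to ~200 requests/second, with a burst buffer
# before rejecting excess requests.
limit_req_zone $binary_remote_addr zone=proget_limit:10m rate=200r/s;

server {
    listen 80;

    location / {
        limit_req zone=proget_limit burst=400 nodelay;
        proxy_pass http://proget-backend:8624;
    }
}
```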
Cheers,
Alana
Hi @udi-moshe_0021 ,
That's what we'd have to do... but it's not a trivial code change on our end. And then we'd have to test, document, and support it when it doesn't work as expected.
So it's probably best to just use a script to do the import.
Thanks,
Alana
hi @udi-moshe_0021,
I believe that rpm feeds will support package upload, so you should already be able to use that in the latest version of ProGet 2024.
However, I don't think we can add this feature for Debian. The reason is, when you upload a package to a Debian repository, you must specify a "component" it belongs to. This is not available/determinable from just the file name... it's server-side metadata, basically. So you'd have to use a script that pushes the packages.
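A rough sketch of what such a push script might look like. Note that the upload URL below is an assumption (check the Debian feed documentation for your ProGet version); the key point is just that the component ("main" here) must be supplied by the caller:

```shell
# All values are placeholders: feed name, component, package file,
# server host, and API key.
FEED=internal-debian
COMPONENT=main
DEB=mypackage_1.0.0_amd64.deb

# Dry run: echo the command; remove 'echo' to actually push.
echo "curl -u api:MY-API-KEY --upload-file $DEB http://proget.local/debian-packages/upload/$FEED/$COMPONENT/$DEB"
```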
Cheers,
Alana
Hi @layfield_8963 ,
I would check under Admin > Diagnostic Center for errors, as well as your browser console.
I would also use pgutil assets upload to work directly with the API and see if you can get any clues on what the underlying error is:
https://docs.inedo.com/docs/proget/reference-api/proget-api-assets/file-endpoints/proget-api-assets-files-upload
Most commonly, it's an antivirus/indexing tool that is locking/blocking the file, but the error message you see will help identify it further.
Cheers,
Alana
Hi @arose_5538 ,
Looks like this was indeed a regression in 2024.23 as a result of upgrading the JSON library we were using... I guess it's a lot more strict.
Of course, upack is also wrong, but for whatever reason it worked before. Anyway, it's an easy fix, and it will be fixed in the next maintenance release (scheduled Feb 7) of ProGet via PG-2884.
In the meantime, you can just downgrade to 2024.22.
And I checked; pgutil 2.1.0 will be released soon :)
Cheers,
Alana
@steviecoaster great news, glad you got it all working
@kc_2466 there are some 500 errors, so please check Admin > Diagnostic Center to see what those are about. Those should each be logged.
Hi @scusson_9923 ,
The Invalid cast from 'System.String' to 'Inedo.Data.VarBinaryInput' error isn't related to the content of the file; it's just a problem with wiring up the API to the database. Basically a bug.
I don't know why it works on my machine, but not in your instance. It's one of those "deep in the code" things that we'd have to investigate.
Maybe try upgrading to the latest version of Otter? I suspect there was a library upgrade/fix that might make this work.
Thanks,
Alana
Hi @scusson_9923 ,
This code should work for a file on disk; it's the same as before, but uses ReadAllBytes...
Invoke-WebRequest -Method Post -Uri "http://otter.localhost/api/json/Rafts_CreateOrUpdateRaftItem" -Body @{
    API_Key = "abc123"
    Raft_Id = 1
    RaftItemType_Code = 4
    RaftItem_Name = "mypath/myscript.ps1"
    ModifiedOn_Date = Get-Date
    ModifiedBy_User_Name = "API"
    Content_Bytes = [System.Convert]::ToBase64String([IO.File]::ReadAllBytes("c:\myfile.txt"))
}
@steviecoaster great, hopefully we'll get something figured out :)
Hi @scusson_9923 ,
Is this the latest version of Otter? It "worked on my machine" without an issue, so I wonder if there was a change somehow.
Are you able to upload the .yaml file the way you want in the UI? What type shows up when you do that? I would expect you'd select it like this?
The RaftItemType_Code for Text is 7.
Alana
@kc_2466 great news, thanks for letting us know.
If you don't see any violations recorded, then the banner should go away soon. It's cached.
hi @steviecoaster ,
Thanks for explaining; this is something we will consider in our roadmap planning, but it's currently a "free user" request, which is difficult to prioritize with so many other requests from paid users.
HOWEVER, a tech/marketing partnership would allow us to prioritize this differently. That's above my pay grade, but I can definitely make a strong case internally... is that something you would want to pursue? I suspect it'd end up with our CEOs chatting and figuring something out, heh.
In the meantime, the Native API will definitely work to automate everything you're trying to do. You can also just run some basic SQL commands to insert stuff into the database, which would be easier.
Alana
Hi @kc_2466
License violations are recorded from requests that are not local. You can clear recorded violations by clicking "change" on the license key and then save. No need to actually change it.
This is the logic used to determine if a request is local:
public bool IsLocal
{
    get
    {
        var connection = this.NativeRequest.HttpContext.Connection;
        if (connection.RemoteIpAddress != null)
        {
            if (connection.LocalIpAddress != null)
                return connection.RemoteIpAddress.Equals(connection.LocalIpAddress);
            else
                return IPAddress.IsLoopback(connection.RemoteIpAddress);
        }

        if (connection.RemoteIpAddress == null && connection.LocalIpAddress == null)
            return true;

        return false;
    }
}
So if you are continuing to see violations, you need to make sure that the local/inbound IPs are the same, or that the inbound request is loopback (127.0.0.1).
This may require some configuration in your nginx container.
Cheers,
Alana
Thanks @e-rotteveel_1850 !
This is super-helpful indeed, especially the package that has that metadata.
Do you have an example package that is emitting app_own_environment in the index repo? It looked like it was handled slightly differently in the conda-build code you found, and might be a nested property... or something. It also seems to be a bool.
It isn't trivial to add fields but also not terribly complicated. I added an item PG-2876 that we hope to get in the Feb 7 maintenance release, but might get pushed back.
Let us know if you're aware of an app_own_environment package as well; that really helps to test this. We likely will not use conda, but simply verify the JSON/index data.
@jimbobmcgee that's too bad it didn't work :(
That's probably why we didn't update the page in Otter 2024 to work with those types from BuildMaster... and as you saw from the code, it's probably not trivial. I know that some of the other platform modernization efforts (especially with HTTP/S support) took a lot longer.
I guess the only option for now is to just add a list of server names. This is still on our roadmap, but rewriting the page is more than we can do in the scope of a fix like this.
Hi @steviecoaster ,
The "Security API" has been on our minds for years, but I'm not sure any of our paid users have expressed interest in it; I suspect it's because they're using AD/LDAP, so there's no need to automate users this way?
AD/LDAP is one of the reasons that users purchase ProGet, so there's obviously a concern about creating an easily-scriptable alternative... as I'm sure you can understand!
The Native API should definitely work, since it's just a wrapper on stored procs... and the UI uses those same stored procs to do its thing. However, it definitely requires some studying (likely looking at the sproc code) and may require a few calls to learn the internal IDs.
I recently wrote a very simple "script" to call a Native API in our Otter product, and I guess it's not even using JSON:
Invoke-WebRequest -Method Post -Uri "http://otter.localhost/api/json/Rafts_CreateOrUpdateRaftItem" -Body @{
    API_Key = "abc123"
    Raft_Id = 1
    RaftItemType_Code = 4
    RaftItem_Name = "mypath/myscript.ps1"
    ModifiedOn_Date = Get-Date
    ModifiedBy_User_Name = "API"
    Content_Bytes = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("'hello world'"))
}
Seemed to work fine, so maybe use that pattern?
Otherwise, I'm aware of one customer that sets up lots of instances in an edge network, but they just wrote a SQL Script to provision a lot of other settings. Even if we had an API, that was easier for them. So perhaps that's an option as well...
Hi @kc_2466 ,
Self-connectors (i.e. a connector to another feed in the same instance) use the HTTP API, so the server (container) needs to talk to itself. The "connection refused" means that there's some kind of network configuration problem here.
You may need to try using a different host name/port/etc to allow for "self-communication" like this.
Cheers,
Alana
Hi @itadmin_9894 ,
It's not possible to edit vulnerability records, as they are updated/sourced from outside of your ProGet software.
You're also using an older version of ProGet that sources data from OSS Index. That database is generally unreliable and outdated, so if you're concerned about vulnerabilities you should definitely upgrade:
https://docs.inedo.com/docs/proget/installation/proget-old-versions-migration/proget-compliance-ossindex
It looks like the similar/equivalent is here:
https://security.inedo.com/vulnerability/details/PGV-245118T
Cheers,
Alana
Glad to hear it!
Our only exposure to the Conda ecosystem is writing a feed for it, so I hope you don't mind a few "dumb" questions :)
Are there any official/public packages (from, say, the Anaconda repository) that have this field? I presume those would also be in their index.
Any idea where these might be officially documented? I found a reference in the meta.yaml specs, but you said "json" and that's clearly "yaml". I figure maybe it turns into a json at some point.
Are there any similar fields? I see we parse a bunch of fields, like constrains to dev_url.
Cheers,
Alana
@lisama7982_5385 this would require using the Docker API, which is kind of a pain, but if you search for things like "how to tag an image with a digest using the Docker API" or something, you should find it eventually.
You can also use the Docker CLI by doing something like docker pull using the digest, then docker tag, then docker push.
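As a rough sketch of that CLI route (every name below - registry host, feed, image, and digest - is a placeholder; substitute your own):

```shell
# All names below are placeholders; substitute your registry host, feed,
# image name, and the real sha256 digest from ProGet.
REGISTRY=proget.example.com/dockerfeed
IMAGE=myimage
DIGEST=sha256:0123456789abcdef

SRC="$REGISTRY/$IMAGE@$DIGEST"     # the untagged image, referenced by digest
DST="$REGISTRY/$IMAGE:recovered"   # the tag to assign so it's browsable again

# Dry run: echo the commands; remove 'echo' to actually run them.
echo "docker pull $SRC"
echo "docker tag $SRC $DST"
echo "docker push $DST"
```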
If this is a one-time clean-up, you may also wish to query the database tables; the Docker* tables will allow you to eventually find what images you need to tag. Just don't delete from the database, since that can cause a headache and it won't delete files from disk.
Hope that helps :)
Hi @jimbobmcgee,
Good question on versioning... to be honest, I kind of forgot how assembly binding would work in this particular context.
The Otter.Core assembly version changes in each release. I don't think we have special handling for it, but you could probably add in an assembly resolver for now, until we officially add support. The Job Template / Variables item is something we have on our 2025 roadmap for Otter, but other products are the priority.
Security... BuildMaster handles it a little better, will do runtime checks, and allows for more flexibility. The server lock we added in Otter was a stopgap in 2022 (I think?) to resolve a customer's issue. The issue you are encountering is relatively easy to work around by reimplementing server targeting using variables. It's also on our roadmap, but you know... priorities :)
Hi @lisama7982_5385,
This is something that Retention Rules are typically used for, but as a home/hobby user I understand how that doesn't make sense.
You cannot navigate to these images in the ProGet UI, which means you can't delete them from the UI. So I would just tag them; then you can browse and delete them.
Cheers,
Alana
Hi @monika-sadlok_5031 ,
I hate copy/pasting what I wrote earlier, but I feel that I've already answered these questions.
The only way this error is possible is if there is no network connection between the application container and the SQL Server container. There are no other scenarios in which this error will occur.
If it's always happening, then it's likely due to your container's network configuration (often it's a typo, like inedosql and inedo-sql or something in your configuration)
Please ignore "BuildMaster"... that's just a stack trace showing us where the error occurred, specifically the code file name/line number. Those paths refer to the build server (i.e. BuildMaster) that ProGet was compiled on.
Ultimately, there is something wrong with your Docker configuration. Docker is not easy to troubleshoot/maintain, so I would consider moving to Windows if you are new to or uncomfortable with troubleshooting it.
Cheers,
Alana
@jimbobmcgee thanks again for the detailed analysis & debugging, definitely not easy doing so "blind" like that :)
Anyway, we will investigate/patch via OT-516 - I haven't looked, but what you're describing sounds like a reasonable conclusion, i.e. the incoming agent isn't getting matched up.
@jimbobmcgee thank you again for the very detailed analysis; we will get this fixed via OT-515, it's mostly just an easy encoding fix.
Hi @jimbobmcgee,
This restriction is for security purposes, specifically to enable the use case of Otter letting end-users create/edit/run scripts, but not decide where they are run. So the for server is locked unless the targeting is set to None.
The solution is to indeed add a variable that allows you to select a server... but as you noticed, you'd have to type in a list. We simply ran out of time to bring those over from BuildMaster unfortunately, and this is not really a popular Otter requirement.
But sure, it's possible; it's called a VariableTemplateType. Here is the code from BuildMaster that you could probably copy/paste into a custom extension.
One issue is that you don't have access to DB in the SDK. Kind of a pain, but you could either reference Otter.Core.dll in your NuGet package, use reflection, call the DB directly, etc.
[Category("Infrastructure")]
[DisplayName("Servers")]
[Description("Servers configured in BuildMaster, optionally filtered by one or more environments")]
public sealed class ServerListVariableSource : BuildMasterDynamicListVariableType
{
    [Persistent]
    [DisplayName("Environment filter")]
    [PlaceholderText("Any environment")]
    [Inedo.Web.SuggestableValue(typeof(EnvironmentNameSuggestionProvider))]
    public string EnvironmentsFilter { get; set; }

    private class EnvironmentNameSuggestionProvider : ISuggestionProvider
    {
        public async Task<IEnumerable<string>> GetSuggestionsAsync(IComponentConfiguration config) =>
            (await DB.Environments_GetEnvironmentsAsync().ConfigureAwait(false))
                .Select(e => e.Environment_Name);
    }

    public async override Task<IEnumerable<string>> EnumerateListValuesAsync(VariableTemplateContext context)
    {
        var values = (await DB.Environments_GetEnvironmentsAndServersAsync(false).ConfigureAwait(false))
            .EnvironmentServers_Extended
            .Where(es => es.Server_Active_Indicator)
            .Where(es => string.IsNullOrEmpty(this.EnvironmentsFilter) || string.Equals(this.EnvironmentsFilter, es.Environment_Name, StringComparison.OrdinalIgnoreCase))
            .Select(es => es.Server_Name)
            .Distinct();
        return values;
    }

    public override RichDescription GetDescription()
    {
        if (this.EnvironmentsFilter?.Length > 0)
            return new RichDescription("Servers in ", new ListHilite(this.EnvironmentsFilter), " environments.");
        else
            return new RichDescription("Servers in all environments.");
    }
}
It is on our list to "Make Job Template Variable Editor Closer to Pipeline Variable Editor"; it's just not trivial and not a huge priority on our product roadmap (https://inedo.com/products/roadmap).
Cheers,
Alana
Hi @serveradmin_0226 ,
This is a network-related error and basically means that your ProGet container cannot talk to the SQL Server container. There are no other causes of this error.
If this error is sporadic, then it's likely related to the SQL Server container restarting or some other problem with the networking stack on the server. You'd need to correlate the timing of these errors with other things happening on the host server.
If it's always happening, then it's likely due to your container's network configuration (often it's a typo, like inedosql vs. inedo-sql, or something in your configuration), or the SQL Server container could just not be running.
As for the logs, that's just a stack trace showing us where the error occurred, and specifically the code file name/line number. Those paths refer to the build server (i.e. BuildMaster) that ProGet was compiled on.
Hi @MY_9476 ,
Thanks for the heads up! We will fix this via OT-514 in the next maintenance release.
As an FYI, this is the code that should have been run at the end of the database upgrade, to ensure that all procs and table-valued params have appropriate permissions:
DECLARE @SQL NVARCHAR(MAX) SET @SQL = ''
SELECT @SQL = @SQL + 'GRANT EXECUTE ON TYPE::' + QUOTENAME(name) + ' TO [OtterUser_Role] ' FROM sys.table_types
SELECT @SQL = @SQL + 'GRANT EXECUTE ON ' + QUOTENAME(name) + ' TO [OtterUser_Role] ' FROM sys.procedures
EXEC sp_executesql @SQL
The script you ran works too :)
hi @hammel_7023 ,
Thanks for letting us know, you are correct... this is indeed a regression from PG-2859.
What's happening is the upload stream is getting prematurely closed during the POM validation logic, which is what's causing this error to occur.
I've just patched it now via PG-2868, and it'll get in the next maintenance release.
Cheers,
Alana
Hi @scusson_9923 ,
Here's a one-liner that should hopefully get you started.
Invoke-WebRequest -Method Post -Uri "http://otter.localhost/api/json/Rafts_CreateOrUpdateRaftItem" -Body @{
    API_Key = "abc123"
    Raft_Id = 1
    RaftItemType_Code = 4
    RaftItem_Name = "mypath/myscript.ps1"
    ModifiedOn_Date = Get-Date
    ModifiedBy_User_Name = "API"
    Content_Bytes = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("'hello world'"))
}
The RaftItemType_Code=4 is not documented, but it's a fixed value and means a script. I recommend creating the item manually first, then looking in the RaftItems table for the Raft_Id and RaftItemType_Code values.
Cheers,
Alana
@jimbobmcgee excellent thanks!! Release has been published then :)
Hi @nachtmahr ,
It sounds like your Windows Integrated Authentication is broken. This is an operating-system level feature, and the only thing you can do in ProGet is turn it on, or turn it off.
WIA is pretty buggy these days, and sometimes it just breaks. Here is more information:
Hopefully just doing a reboot of the server will fix the problem. If not, then you'll have to troubleshoot it, which kind of sucks:
Note that you can disable WIA by using the Locked Out protocol:
https://docs.inedo.com/docs/installation/security-ldap-active-directory/various-ldap-troubleshooting#locked-out-restoring-default-admin-account
Hope that helps,
Alana
I think you might have an old version of the Inedo Hub; you can just download a new version, and the dropdown will be there. I guess the old version should work; I just worry about some bugfix/change that might cause newer versions to not work.
The feed is https://proget.inedo.com/upack/PrereleaseProducts
That error means that the ProGet application container cannot make a network connection to the SQL Server container.
I would try restarting the containers, starting with SQL Server. Make sure it's running. If it's running, you can try connecting to it with another tool like SSMS if you'd like to verify.
Otherwise, on the ProGet side, you can only configure the connection string; upgrading or changing the license key has zero impact on this. If it worked in the past, then it means that the SQL Server container is not running or your network/container configuration changed.
It can be hard to discover exactly what changed, so I would recommend "starting from scratch" on a new server/environment, then comparing/contrasting what changed. Sometimes it's as simple as a typo or an errant - (dash character) in the wrong place/script.
Cheers,
Alana
Hi @forbzie22_0253 ,
Similar to the UI, packages are still returned in the API - they just have a flag set to indicate they are unlisted/deprecated. It's up to the client to determine what to do about that.
I don't believe the Find-Package cmdlet works with these properties; I think only Visual Studio will hide/warn about them.
Thanks,
Alana
@sneh-patel_0294 and as an FYI, if you haven't already, you can request a ProGet Trial key from My.Inedo.com, and then set it to ProGet Enterprise, which supports the Clustered installation
Hi @sneh-patel_0294 ,
A "chained connector" would be something like "(Feed A) --> (Feed B) --> (Feed C)". We've seen some set-ups like "(Feed A) -> ((Feed B) + (Feed C --> Feed F) + (Feed D --> Feed G))", and every now and then a "loop" (where Feed A eventually connects back to Feed A). Those are really bad for performance, especially with NuGet v2, which requires a query for every single connector.
As for a clustered installation, here's our set-up guide for that:
https://docs.inedo.com/docs/installation/high-availability-load-balancing/high-availability-load-balancing
But to answer your questions... a standard share drive and a common SQL Server is fine. The main thing is to spread the incoming network traffic across multiple web nodes.
Cheers,
Alana
Hi @sneh-patel_0294 ,
The underlying issue is that your ProGet server is getting overloaded, and you need to find a way to reduce peak traffic or switch to a load-balanced solution. Removing the NuGet v2 API, chained connectors, etc. is a good step in reducing traffic.
See How to Prevent Server Overload in ProGet to learn more.
Keep in mind that the clients (build servers, dev workstations) are sending thousands of simultaneous requests to ProGet at once. ProGet is not a static file server (unlike nuget.org), and each request must be authenticated and often proxied/forwarded to connectors. There is only one network card on the server, and this is what happens when it gets overloaded.
As for why it's causing errors now, this is a result of changes to the underlying platform (.NET Framework to .NET Core). The older platform did a better job of throttling traffic under extreme load and, for whatever reason, didn't timeout as much.
You can configure a throttle in ProGet by going to Admin > HTTP/S Settings > Web Server > "edit", and then set a value of 100 or so. You mentioned a value of "500", but I would just set it to 100.
Cheers,
Alana
Hi @enrico-proget_8830 ,
Using nginx is probably a better solution anyway, if you don't mind setting that up...
but the setting is now under Admin > HTTP/S Settings > Web Server > "Edit".
Thanks,
Alana
Hi @udi-moshe_0021 ,
I don't know... as I mentioned, when we follow our instructions to set up a Debian feed in ProGet with a connector to http://ftp.debian.org/debian/ (Buster), it seemed to work fine. Other users seem to have no issues with the steps there, which is why it's likely your network.
Beyond that, I really don't know enough about your configuration or apt troubleshooting to help further. I can't try to reproduce your environment, but if you provide the exact error messages from apt, I can search for them.
However, for faster help, please just search the error messages you are receiving from apt and follow the advice of articles that come up on Google, or ask ChatGPT.
Since I know very little about apt, all I can really do here is read the error messages, search for them, and link you to an article to try.
Thanks,
Alana
Hi @udi-moshe_0021 ,
Sure, anything would help; I'm basically looking for a very specific error message that I can search. Once you share the specific console outputs, I will try to search what the error means and summarize the results and how you might be able to troubleshoot it further.
I don't think there are any issues with your ProGet configuration, as it clearly works on Ubuntu desktop for you. It's likely a configuration change in apt that you need to make, so you may wish to search the exact apt error messages as well.
Thanks,
Alana
Hi @udi-moshe_0021 ,
Can you provide the specific commands and error messages you are receiving? I.e., just copy/paste the entire console session with the commands you're typing and the output.
Cheers,
Alana
Thanks for clarifying that @rpangrazio_2287 , we'll explore that route as well.
We opted against DinD because of resource management (build servers can be rather resource-intensive) and general instability (not everything seems to work the same).
FYI - in case you haven't seen it already, BuildMaster does support Image-based Services (Containerized Builds)
Cheers,
Alana