Here is documentation on how to enable HTTPS in our products on Windows and Linux:
- https://docs.inedo.com/docs/installation/installing-on-iis/installation-windows-https-support
- https://docs.inedo.com/docs/installation/linux/https-support
-- Dean
Hi @steviecoaster,
The Native API isn't hacky, just harder to use. Here is the documentation on the Native API:
https://docs.inedo.com/docs/proget/reference-api/proget-api-http#native-api-endpoints
We don't have any articles/guidance on how to call the Native API beyond what's there.
The Users_* procs have not changed in years and are very safe to use. There are a few forum posts here and there with "hints" on working with the Users_* procs, like this:
https://forums.inedo.com/topic/4198/reset-proget-admin-password-via-api/2
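For illustration, here's a rough PowerShell sketch of calling one of those procs through the Native API; the /api/json/«proc-name» endpoint pattern is covered in the docs above, but the specific proc arguments and returned columns here are assumptions, so double-check them against your instance:
# Rough sketch: invoke a Native API endpoint (Users_GetUsers) with an API key.
# Proc parameters and result columns are assumptions; verify against your instance.
$apiKey = "your-api-key"
$users = Invoke-RestMethod -Method Post `
    -Uri "https://proget.example.com/api/json/Users_GetUsers" `
    -Headers @{ "X-ApiKey" = $apiKey } `
    -ContentType "application/json" `
    -Body (@{} | ConvertTo-Json)
$users | Format-Table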
-- Dean
Hi @steviecoaster ,
I agree it'd be nice, but it's also not trivial, as you noticed. I believe it's possible to configure security with the Native API, though obviously not as easy. So I would explore that; it's probably close to what you want.
A first-class Built-in Users/Groups API has been requested over the years (and is now something that makes sense with pgutil) - however, if we made it, the API would be paid-editions only. We haven't had any interest from paid users in such a feature, as they generally use LDAP or don't mind non-API configuration.
-- Dean
@steviecoaster great, thanks for sharing!
I added this to our documentation (https://github.com/Inedo/inedo-docs/pull/253), but apparently I don't have the ability to merge PRs in that repo, so it'll go live sometime later I'm sure.
@steviecoaster glad you were able to figure it out :)
Easy typo to make - we should consider adding a validator on that "Update SSL Certificate" page as well, to save others a headache like this!
Hi @v-makkenze_6348 ,
If you download (i.e. cache) the package, then you shouldn't see the compliance issue anymore. The reason is that ProGet does not have information about the package unless it's cached/local, or unless you're viewing it on the package overview page.
When ProGet runs a build analysis (first screenshot), it only uses local/cached package data. This is for performance reasons, as users will have hundreds of builds with thousands of packages in each build, and that much traffic to each connector is problematic.
However, we are working on building a "remote metadata cache" that will fetch this data in a more performant manner.
-- Dean
Hi @steviecoaster ,
Sorry about giving the bad advice there -- I did not realize that the offline installer does not include the hub.exe program. It looks like the Offline Installer Creation Process must strip that from the offline installer we provide.
As you discovered, the advice from before wouldn't really work. Without doing a deep dive in the code, I don't know how to make it work. This isn't a use case we designed for, and I'd hate to send you down a wild goose chase.
How about just using our standard silent installation approach, which I shared before:
# create working directories
mkdir C:\InedoHub
cd C:\InedoHub
# download and extract file to working directory
Invoke-WebRequest "https://proget.inedo.com/upack/Products/download/InedoReleases/DesktopHub?contentOnly=zip&latest" -OutFile C:\InedoHub\InedoHub.zip
Expand-Archive -Path InedoHub.zip -DestinationPath C:\InedoHub
# perform silent installation
hub.exe install ProGet:5.2.3 --ConnectionString="Data Source=localhost; Integrated Security=True;"
This will basically install the desired version and it's likely "good enough" for the time being.
-- Dean
@MY_9476 I'm afraid not; what you're seeing is a diff of the OtterScript, which is what's stored in the database. The issue is that the "serialization to code" order must have changed between major versions.
We try to not have that happen, but fortunately this only happens the first time you edit such a script. Next time, only your changes will be preserved.
Hi @jw,
There's no problem deleting the deprecated licenses and adding their SPDX identifiers to the new licenses. When you delete a license, it will remove the association with packages.
However, the compliance analysis scheduled job will reassociate them; it runs nightly, or you can run it manually. Also, if you visit the package page or download the package, it should associate with the new license.
We'll definitely reconsider the design/approach if there's more demand, but we need to make sure there's real value to the user -- there's a relatively high cost to changing things, and then there's a chance of regressions/bugs, which is really frustrating to users.
-- Dean
Hi @jw ,
[1] is somewhat expected behavior, as some older versions of ProGet allowed non-normalized URLs to be added; newer versions should not allow this. We do not plan to "clean up" the data at this time, but if you're "brave" you could likely do it with some API/DB calls.
[2] This is probably some quirk related to older data, but you can probably just cut the items to the clipboard, save the license, then edit it again, and it should work around the quirk.
FYI, the URLs that we have in our database for LGPL-3.0-only are as follows:
gnu.org/licenses/lgpl+gpl-3.0.txt
gnu.org/licenses/lgpl-3.0-standalone.html
opensource.org/licenses/LGPL-3.0
I'm guessing you have the www prefix from an older version.
-- Dean
Hi @MY_9476 ,
I'm afraid this isn't technically feasible.
When you edit in Visual Mode, the statement is basically "serialized" to OtterScript, and the property order is determined by however the operation's code (i.e. C#) is laid out. Fortunately this is really consistent, and is really only a problem when you go back and forth.
But in this case, there must have been some change to the C# / order of properties (refactoring, perhaps?), so it's inconsistent across versions of Otter. If you edit it again, the order will be the same.
-- Dean
Hi @steviecoaster,
We're currently working on the pgutil feeds commands, and expect create and update to be finished in a week or so. In the meantime, I just published pgutil-1.1.2, so please check that out - it will have a basic version of pgutil feeds create.
As for the PowerShell command, you'll need to pass the JSON structure that's documented in that article. You would want to use something like ConvertTo-Json to convert a hashtable to JSON.
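For example, something like this (a rough sketch - the endpoint path and property names here are illustrative assumptions, so check the article for the exact JSON structure your version expects):
# Rough sketch: build the feed-creation JSON from a hashtable and POST it.
# Endpoint path and property names are assumptions; verify against the article.
$body = @{
    name     = "internal-nuget"
    feedType = "nuget"
    active   = $true
} | ConvertTo-Json
Invoke-RestMethod -Method Post `
    -Uri "https://proget.example.com/api/management/feeds/create" `
    -Headers @{ "X-ApiKey" = $apiKey } `
    -ContentType "application/json" `
    -Body $body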
-- Dean
Hi @steviecoaster,
If you want to use the offline installer in a script (which I think is best for a Chocolatey script), then just unzip it after downloading. I know it's an .exe file, but you can just unzip it like any other zip file (e.g. with Expand-Archive or something in PowerShell); see the sketch below.
Once you've done that, you can just run hub.exe per the silent installation instructions.
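Something like this, as a rough sketch (the installer filename is hypothetical; also note that Expand-Archive only accepts a .zip extension, so the file needs to be copied/renamed first):
# Rough sketch: unzip the offline installer and run a silent install.
# The installer filename is hypothetical; Expand-Archive requires a .zip extension.
Copy-Item .\ProGetOfflineInstaller.exe .\ProGetOfflineInstaller.zip
Expand-Archive -Path .\ProGetOfflineInstaller.zip -DestinationPath C:\InedoHub
C:\InedoHub\hub.exe install ProGet:5.2.3 --ConnectionString="Data Source=localhost; Integrated Security=True;"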
Alternatively, you can write a script like this, which will instruct hub.exe to download a version:
# create working directories
mkdir C:\InedoHub
cd C:\InedoHub
# download and extract file to working directory
Invoke-WebRequest "https://proget.inedo.com/upack/Products/download/InedoReleases/DesktopHub?contentOnly=zip&latest" -OutFile C:\InedoHub\InedoHub.zip
Expand-Archive -Path InedoHub.zip -DestinationPath C:\InedoHub
# perform silent installation
hub.exe install ProGet:5.2.3 --ConnectionString="Data Source=localhost; Integrated Security=True;"
The only downside to this approach is that it's not really version controlled (i.e. it's always using the latest Inedo Hub), and the package can't be reliably internalized, since hub.exe will download files from the internet.
-- Dean
Hi @cstekelenburg_4169 ,
If you created a connector with the proper URL, username, and password, then you've set it up correctly.
Please note, you will not be able to list or search remote packages in the ProGet UI. This is a limitation, since ADO's NuGet repository is rudimentary and does not support listing/searching.
Instead, you will need to use the NuGet API (Visual Studio, nuget.exe) to pull packages; see the example below. Once a package has been pulled through ProGet, you will "see it" in the UI since it's been cached.
I'm not trying to be pedantic with terminology, but please note it's not a "sync" -- it's more of a "proxy" or "pass-through". When you make a request via the NuGet API, that API request is forwarded to your connector (i.e. Azure DevOps).
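For example, a rough sketch of pulling a package through the feed so it gets cached (the feed name, package ID, and server URL are hypothetical):
# Rough sketch: restoring a package through ProGet caches it in the feed.
# Feed name, package ID, and server URL are hypothetical.
nuget.exe install Some.Package -Source "https://proget.example.com/nuget/ado-proxy/v3/index.json"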
-- Dean
Azure Artifacts does not support the NuGet Search API, so this means you won't see packages on the feed/search page in ProGet. This is expected behavior with rudimentary NuGet repositories that don't support listing/searching.
In the past, I'm guessing that you either:
-- Dean
Hi @cstekelenburg_4169 ,
ProGet does not "sync" remote repositories, but instead displays and caches results through a "connector" you configure. If there are any connector errors, they will be logged when you do a query.
Azure artifacts do not support searching or listing packages, so you won't "see" a package unless you type in the exact package name.
-- Dean
Hi @johnsen_7555 ,
Thanks for sharing the additional details; I was able to reproduce this, and it's a regression/bug.
We fixed this via PG-2723 (FIX: Noncompliant Pypi packages can be downloaded when blocking is enabled on trial licenses), which is going to be shipped in the next maintenance release.
However, in the meantime, you can try a patch/prerelease version that contains the fix; since you mentioned Docker, the tag to pull would be 24.0.9-ci.1.
Hopefully that will solve the issue!
-- Dean
Hi @johnsen_7555 ,
Thanks for sharing all of the configuration details; can you navigate to one of the GPL packages that you're trying to install, and see what it says on the package page within ProGet?
On that page, you can also re-analyze the package (using the dropdown button) and get a log of which policies/rules were applied.
-- Dean
Thank you so much for the kind words, and it sounds like you're on the right track for setup/configuration!
ProGet is designed for the scenario you described (i.e. multiple instances that point to the same database/files), and we call it a server cluster:
https://docs.inedo.com/docs/installation-high-availability-load-balancing
The above instructions are for Windows, but the same principle applies to Azure Web Apps, Kubernetes, etc. We just don't have documentation for, or the ability to support, the underlying platform. What I mean by that is: if the "scale out" feature in Azure doesn't do what it's supposed to, you would need to contact Azure support.
All that said, do note that a ProGet clustered installation requires a ProGet Enterprise license:
https://docs.inedo.com/docs/proget-administration-license
Hope that helps,
-- Dean
Hi @scott-wright_8356 ,
That procedure is ancient and hasn't caused problems before. I suspect the issue might be the size of the data in that table - you can see what that looks like on the connector cache management page.
You could also try to optimize the query like so:
ALTER PROCEDURE [dbo].[Connectors_GetCachedResponse]
(
    @Connector_Id INT,
    @Request_Hash BINARY(32)
)
AS BEGIN

    -- snapshot the matching cache row without taking shared locks
    SELECT * INTO #ConnectorResponses
      FROM [ConnectorResponses] WITH (READUNCOMMITTED)
     WHERE [Connector_Id] = @Connector_Id
       AND [Request_Hash] = @Request_Hash

    -- update usage stats on the matched row only (joining on both columns,
    -- so other cached responses for the connector aren't touched)
    UPDATE CR
       SET [LastUsed_Date] = GETUTCDATE(),
           [Request_Count] = CR.[Request_Count] + 1
      FROM [ConnectorResponses] CR
      JOIN #ConnectorResponses _CR
        ON CR.[Connector_Id] = _CR.[Connector_Id]
       AND CR.[Request_Hash] = _CR.[Request_Hash]

    SELECT * FROM #ConnectorResponses

END
Let us know if that works; if so, we can update the code accordingly.
-- Dean
Troubleshooting the database can be a pain, but there are a lot of tools that can help. Here is a nice guide from Microsoft, which includes a script that shows you how to use the sys.dm_exec_query_stats and sys.dm_exec_sql_text views:
https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/performance/troubleshoot-slow-running-queries
Just make sure to remove WHERE t.text like '<Your Query>%' from the sample if you want to see all queries. Some red flags on that query will be a disproportionate total_elapsed_time, etc.
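For instance, something like this (a rough sketch; it assumes the SqlServer PowerShell module is installed and you can connect to the database server):
# Rough sketch: list the top 20 statements by total elapsed time.
# Assumes the SqlServer module and access to the instance.
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
SELECT TOP (20)
       t.[text] AS query_text,
       s.execution_count,
       s.total_elapsed_time
  FROM sys.dm_exec_query_stats s
 CROSS APPLY sys.dm_exec_sql_text(s.sql_handle) t
 ORDER BY s.total_elapsed_time DESC;
"@ | Format-Table -AutoSize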
That said, there are two different jobs:
A "Server Checker" is run frequently and should be very fast. There is very little database activity (just Servers_GetServers
to get all servers, then Servers_UpdateServerStatus
per server).
A "Routine Configuration" is obviously more complicated, and may involve a lot more database activity.
-- Dean
Hi @scott-wright_8356 ,
Thanks for sharing a lot of the details; it's not really clear what objects the locks are on - can you look at the XML view of the deadlock report to find out?
Since you're "already there", you're welcome to try doing what we would do --- just add WITH (READUNCOMMITTED)
to the problematic queries (i.e. the ones where object locks are occurring on).
For example, like this: SELECT * INTO #PgvdPackageNames FROM [PgvdPackageNames_Extended] WITH (READUNCOMMITTED) WHERE [PackageName_Id] = @PackageName_Id
As an FYI, this should not be an issue in ProGet 2024, since package analysis is cached for a short while. Also, our internal development practices (going forward) are to READUNCOMMITTED
in "hot" queries this anyway.
-- Dean
Hi @andreas-unverdorben_1551 ,
The timeout message you shared is for a NuGet feed - so the issue isn't the 727 packages that you're installing (which often results in 1400-2100 requests being fired off simultaneously), but the NuGet build(s) that are also going on at the same time and hammering the server.
The server cluster will definitely help with this peak load.
FYI --- that error message you shared is from the NuGet v2 API, so something is still calling it.
Cheers,
Dean
Hi @andreas-unverdorben_1551 ,
What's happening here is that your server is being overloaded due to lots and lots of traffic -- effectively, you are doing a denial-of-service attack on your own server. As I mentioned in "npm install slow on proxy feed", ProGet is not a static file server, and there is a lot of processing required for each request.
The best way to handle this is to run a load-balanced ProGet server cluster.
Alternatively, you will need to reduce or throttle traffic.
Best,
Steve
Hi @andreas-unverdorben_1551 ,
npmjs.org primarily serves static content and runs on massive server farms in Microsoft's datacenters.
Your ProGet server is much less powerful and does not serve static content. Not only is every request dynamic (authentication, authorization, vulnerability checking, license checking, etc.), but most requests (such as "what is the latest version of package X") need to be forwarded to npmjs.org and aggregated with local data.
So, a much less powerful server doing a lot more processing is going to be a little slower ;)
Running ProGet in a server cluster will certainly help.
Cheers,
Dean
In this case, can you open a ticket and send us that tar.gz file (attach it to the ticket)?
From there, we will load it in a debug version of ProGet and identify where the problem is occurring. It's not something that will be easy to spot by inspecting the file :)
Thanks,
Dean
This means that the file you uploaded contained unexpected data; maybe something in the package.json file was wrong, or it was in the wrong format, etc.
How did you create this npm package? Or did you download it from somewhere? We could probably look at the file and say what's wrong with it.
Best,
Dean
Hi @henh-lieu_7061 ,
This error indicates that you don't have access to the C:\ProgramData\Romp directory, or there is otherwise a file error when writing to that folder.
You can try deleting that folder, then reinstalling again.
Best,
Dean
Hi @Ricardo_C_RST ,
For EC2 (a virtual server), you can just follow the ordinary Inedo Hub installation process. As for HTTPS and a domain, those are settings probably easiest to handle in AWS. I'm not familiar with AWS, but something like CloudFront might help?
Dean
Hi @Justinvolved,
I don't know if I'm totally tracking the structure, or what you mean by "referencing the latest builds" in the deployment scripts.
Are you looking to have three applications? One app per repo, and then a "controller" application?
Or, do you want to have one application that builds from two repos? And it will always build/deploy content from those at the same time?
Dean
Hi @MaxCascone ,
Thanks for pointing these out; we will get the RPM icon fixed (probably a minor CSS thing), and then hide the option on the Manage Asset Directory page (via PG-2540).
This view is intended mostly for ProGet ISV Edition use cases, and we likely won't expand on it in ProGet 2024, since you're probably the first person who has commented on it.
Cheers,
Dean
Hi @kigiwow570_6179,
I don't have enough information/specifics to answer the questions; it really depends on the API queries being made, etc. Some API calls result in errors, others do not. In general, doing a NuGet restore will not result in errors being logged.
Best,
Dean
Hi @kigiwow570_6179 ,
These errors are being "forwarded" from NuGet.org, and if you're seeing them it means that "something" on your network is making requests to a ProGet feed that has a connector to nuget.org.
This is typically an old, outdated tool or script. You'll need to track this down and update/disable it.
It could also be a self-connector - that is, a connector pointing to another feed in the same ProGet instance using the v2 API - so double-check your self-connectors.
Best,
Dean
You mention the server is crashing? That's really strange; we've never quite seen anything like this before. We routinely test on much less powerful hardware in nearly identical scenarios to what you're doing now.
SQL Server / .NET will use as much RAM as it can, so the usage isn't surprising or really concerning.
Otherwise, we don't really have any info on the errors --- you'll need to track down error messages from the ProGet side of things; perhaps the Docker container error logs, the SQL Server error logs, etc. Maybe try putting the npm client (i.e. BuildMaster) on a different server, just to help isolate any potential issues.
We've seen a lot of weird things happen -- every now and then there's a problem with the network controller/hardware/driver on the server, and it has a hard time processing the 1000's of simultaneous "loopback" connections that npm is making from the BuildMaster container to the ProGet container, plus the "outbound" connections that npm would be making to npmjs.org.
But until you find out what those errors are on the ProGet side, it's impossible to guess.
Thanks @philippe-camelio_3885 , roger that!
I thought you were referring to the API :)
We'll get this fixed as BM-3912 in a future maintenance release, likely Dec 14, since there's a holiday week coming up here in the States.
Cheers,
Dean
Thanks for clarifying; so it sounds like you're talking about Variable Prompts (Templates), which are part of a pipeline?
These are not "variables" per se; instead, they are used to prompt users at certain points during a pipeline (creating build, deploying build, etc.) to input values. Those values will then be created as variables.
We don't plan to extend templates outside of pipelines. If you reaaaaly wanted to, you could create a BuildMaster "application that creates applications" using the API and then leverages the pipeline prompts to create the variables in the format you want.
It's probably just easier to clone an application, however.
Cheers,
Dean
I'm sorry, it's not really clear what you're trying to accomplish... you can define list-based variables at all levels. Just specify the value like @(a,b,c), for example.
Cheers,
Dean
Sorry - it's not really clear what the issue is; can you share more details? Ideally a way I can reproduce this in a new instance of BuildMaster, but a screenshot or the API calls you're making would also help.
Thanks,
Dean
These errors are being "forwarded" from NuGet.org, and if you're seeing them it means that "something" on your network is making requests to a ProGet feed that has a connector to nuget.org.
This is typically an old, outdated tool or script. You'll need to track this down and update/disable it.
It could also be a self-connector that is connecting to a feed that also has a connector to ProGet using the V2 APi - so double check your self-connectors.
Best,
Dean
@Srinidhi-Patwari_0272 here is the API for the health check:
https://docs.inedo.com/docs/proget-reference-api-health
The URL would be /health.
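For example (a rough sketch; the server URL is hypothetical):
# Rough sketch: query the ProGet health endpoint and inspect the result
$health = Invoke-RestMethod -Uri "https://proget.example.com/health"
$health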
Hi @Srinidhi-Patwari_0272 ... it sounds like everything's working on the ProGet side, but I'm not familiar enough with troubleshooting Terraform/etc. to advise how to troubleshoot automating that.
The domain name is abcd.efg.com and the port is 8624; it's very likely someone will need to open that port on the firewall as well.
Cheers,
Dean
Hi @v-makkenze_6348 ,
This is the first time I've seen a "link": true type entry in a lock file.
The docs for package-lock.json aren't very clear on what this signifies, aside from:
A flag to indicate that this is a symbolic link. If this is present, no other fields are specified, since the link target will also be included in the lockfile.
Not sure how a "link" entry like that gets added to a lock file, but I think we should just skip it then? Basically add code in that loop that does this?
// skip symlink entries; per the docs, the link target is included
// elsewhere in the lock file
if (npmDependencyPackage.Value.TryGetProperty("link", out var link) && link.GetBoolean())
    continue;
What do you think?
Cheers,
Dean
@Srinidhi-Patwari_0272 said in How to automate ProGet installation via ansible so that ProGet and SQL EXPRESS gets added to D drive instead of default C drive:
How to make ProGet URL accessible to all?
If you're able to access ProGet while logged into the server, then it means the web server is up and running without issues.
From here, you'll need to adjust Windows firewall settings and/or set up DNS so that others can access it. You may possibly need certificate settings as well. I'd check with your network team on this --- it's definitely not a change you can make inside of ProGet itself.
Hi @Evan_Mulawski_8840 ,
That particular bug may be resolved by running the patch script attached to this upcoming change: https://inedo.myjetbrains.com/youtrack/issue/PG-2484
As for retrying the migration: there will be a button on the ProGet root page (/) if there is a failed/incomplete migration that allows you to retry it. This will open the /retry-migration URL, which I would recommend using instead of a database insert.
Best,
Dean
For reasons I don't fully understand, authenticating to the Docker API requires using a Bearer authentication token (Authorization: Bearer {token}).
Here is an example for how to get that token:
function GetDockerToken {
    param (
        [string] $packageName = $(throw "-packageName is required. This is the namespace and image name. For example: library/my-container-image"),
        [string] $feed = $(throw "-feed is required"),
        [string] $actionToAuthorize = $(throw "-action is required. This is the docker action to be authorized (pull, push, delete, etc)"),
        [string] $apiKey = $(throw "-apiKey is required"),
        [string] $progetBaseUrl = $(throw "-progetBaseUrl is required."),
        [string] $service
    )

    if ($service -eq "") {
        # This expects that $progetBaseUrl starts with "https://"; if you are using "http://", change 8 to 7 below.
        $service = $progetBaseUrl.Substring(8, $progetBaseUrl.Length - 8)
    }

    # authenticate with Basic auth ("api" + API key) to request the Bearer token
    $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f "api", $apiKey)))
    $response = Invoke-WebRequest -Uri "$progetBaseUrl/v2/_auth?service=$service&scope=repository`:$feed/$packageName`:$actionToAuthorize" -Headers @{ Authorization = ("Basic {0}" -f $base64AuthInfo) }

    if ($response.StatusDescription -eq "OK") {
        $token = ($response.Content | ConvertFrom-Json).token
        $token
    }
}
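For example, you might call it like this (a rough sketch; the server URL, feed, and image names are hypothetical):
# Rough sketch: get a pull token, then use it to list tags.
$token = GetDockerToken -packageName "library/my-image" -feed "docker" `
    -actionToAuthorize "pull" -apiKey $apiKey -progetBaseUrl "https://proget.example.com"
Invoke-WebRequest -Uri "https://proget.example.com/v2/docker/library/my-image/tags/list" `
    -Headers @{ Authorization = "Bearer $token" }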
I know we plan to document all this better someday; for now, you can find the Semantic Versioning for Containers page.
Dean
Hi @Stephen-Schaff,
Thanks for the detailed write-up; the behavior you describe makes sense, and made for very easy copy/pasting into PG-2478 :)
We anticipate getting this in the next maintenance release, scheduled for Sept 15.
Best,
Dean
DockerImages_GetTags is part of the Native API, which is basically just a wrapper around the ProGet SQL database. That's fine to use, but you can also use the Docker API to list tags as follows:
curl -X GET https://proget-url/v2/feedName/repositoryName/tags/list
The Docker API will show you tags that come through a connector, whereas the Native API will only show local tags.
The Docker API does not provide a way to directly retrieve the operating system information of Docker images. However, you can infer the operating system information by examining the contents of the Docker image itself or by using external tools and scripts.
Here's an example in Python of how to do that:
import docker

def get_os_of_image(image_name, tag):
    client = docker.from_env()
    image = client.images.get(f"{image_name}:{tag}")
    os_info = image.attrs['Os']
    return os_info

# Example usage
os_info = get_os_of_image('my-image', 'latest')
print(f"Operating System: {os_info}")
Alternatively, you can also inspect the image layers directly. Docker image layers contain files from the image's filesystem, and you can look for specific files or patterns that are indicative of the operating system.
Best,
Dean
Hi @pariv_0352,
What version of ProGet are you using?
Perhaps this is related to this:
https://forums.inedo.com/topic/3596/proget-2022-6-the-hostname-could-not-be-parsed
Dean
@jw that's the correct column! Just the incorrect value being added :(