Hello,
Sorry for the delay on this. Would you be able to send me the JSON from:
Just change the domain name of your TFS server and change packagename_1 to be the name of your package.
Thanks,
Rich
Are you able to see the error log from the first time you upgraded and it failed? That log would most likely show us the script with the error.
The other option is you can download InedoSQL and run the command:
inedosql.exe errors --all --connection-string="Data Source=(local);Initial Catalog=ProGet;Integrated Security=False;User ID=ProGet;Password=xxx"
That will tell us what scripts failed and their error.
Thanks,
Rich
It looks like this didn't get posted to the Support category and it was missed. Yes, ProGet can be moved to Azure. We actually have a guide on how to move ProGet to Azure. Please let us know if you have any other questions!
Thanks,
Rich
Is your Azure DevOps NuGet feed using the v2 or v3 API endpoint? If v3, can you send me the index.json for your feed? You can send this to support@inedo.com with the subject of "[QA-786] Azure DevOps Null Reference" to keep it private.
I first want to make sure that your on-premise version is not missing one of the endpoints that ProGet uses.
Thanks,
Rich
Hi @janne-aho_4082,
Thank you for providing the extra details. When you click the Clear Cache option in the tasks, that clears the permissions cache, which means calls to AD will need to be made again for the impersonated user and their AD groups. There is also a timeout in the Advanced Settings that controls how long users are cached. The default is 60 minutes, but you may want to make sure that value was not changed. The cache is also cleared anytime permissions/restrictions are added or removed.
As for your testing environment, is your test environment in the cloud? If so, is it in the same region as your Active Directory server? This can sometimes cause the performance of AD to degrade. Also, you mentioned that the test environment seems to lose the cache about 20-30 seconds later. What are your application pool's settings for recycling and idle time-out? If those are too quick, they will also cause the permissions cache to reset.
How long did it take previously in production to run these tests?
Thanks,
Rich
Hi @janne-aho_4082,
Thanks for providing us with the stack trace. It looks like we may have found the issue. Would you be able to install the InedoCore 1.31.1-RC.10 pre-release extension and see if that fixes your issue?
Thanks,
Rich
Hello @nmorissette_3673,
Can you please tell us what version of ProGet you are running?
Also, when you navigate to the package in the UI and get the 404 error, do you see any error messages show up in the diagnostics center?
Thanks,
Rich
Hi @cronventis,
I just wanted to let you know that I'm still researching this. Is it possible to tell me what versions of Kubernetes and Rancher are installed?
Thanks,
Rich
Hi @cronventis,
I took a look through your tables and there is definitely something a bit odd going on with your Kubernetes output. Could you tell me what version of Kubernetes you are running?
Would it be possible to see the output from your Kubernetes API? Again, this is something you can send us via support@inedo.com with the subject of [QA-729] Kubernetes API. To get the image list, you can simply run this PowerShell against your Kubernetes API:
$uri = [System.Uri]'http://localhost:8080/api/v1/pods?limit=999'
$response = Invoke-RestMethod -Method GET -Uri $uri.ToString()
$response | ConvertTo-JSON | Out-File "C:\temp\response.json"
Just change http://localhost:8080 to your Kubernetes API host and port.
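Once you have response.json, a quick way to sanity-check which images came back is a grep for the image fields. This is just a sketch against a tiny hypothetical response (the items/spec/containers/image layout is the standard pod shape; real ConvertTo-JSON output may put spaces after the colon, which the pattern allows for):

```shell
# Create a tiny sample response (stand-in for the real C:\temp\response.json)
cat > response.json <<'EOF'
{"items":[{"spec":{"containers":[{"image":"proget.example.com/feed/app:1.0"}]}}]}
EOF

# List the image references recorded in the response
grep -o '"image": *"[^"]*"' response.json
# -> "image":"proget.example.com/feed/app:1.0"
```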
Thanks,
Rich
Hi @cronventis,
Thanks for sending this over to us! I can confirm we have received it and we are currently looking into this. I'll let you know when we have more information.
Thanks,
Rich
Hi @mcascone,
Sorry about that. I'll get that backported to ProGet 5.3 also. This will be tracked in ticket PG-2074.
Thanks,
Rich
Hi @Michael-s_5143,
Glad to hear that post helped you out. We have updated our documentation to include information about the ASPNETCORE_URLS environment variable.
Thanks,
Rich
Hi @pwe,
Thanks for updating us that you were able to figure this out. I have updated our documentation to include information about changing the internal port number.
Thanks,
Rich
Hi @cronventis,
Would you be able to send over your Docker tables from your ProGet database to us? They won't contain any of the actual layers of the images, just the digests and names so we can see where the disconnect is happening.
The easiest way to do this would be to:
1. Create a schema named pgexport
2. Run:
SELECT * INTO [pgexport].[DockerBlobs] FROM [DockerBlobs]
SELECT * INTO [pgexport].[DockerImageLayers] FROM [DockerImageLayers]
SELECT * INTO [pgexport].[DockerImages] FROM [DockerImages]
SELECT * INTO [pgexport].[DockerRepositoryTags] FROM [DockerRepositoryTags]
SELECT * INTO [pgexport].[ContainerUsage] FROM [ContainerUsage]
3. Back up the pgexport schema and send it to us. If the backup is pretty small, you can just email it to support@inedo.com with the subject of [QA-729] Database Export, or I can create a OneDrive share link you can upload to.
Would you be able to do that?
Thanks,
Rich
Hi @Stephen-Schaff,
Did you have to wait 30 minutes after adding the publish permission to a user or is it a user that already has the publish permission and it randomly stops working?
Also, what version of ProGet are you currently running?
Thanks,
Rich
Ah, thanks for the extra details here. I was finally able to recreate this issue. I'm tracking the fix in BM-3770 which is expected to release in the next version of BuildMaster, which is due out this week.
Thanks,
Rich
Hi @cronventis,
I set up an example in our test environment here and everything is working as expected. I did verify that the Image_Id that is in the database should match the Image Digest in the Image Data table as well.
Is this image a local image or a cached image? Currently, usage will only show for a local image.
If it is a local image, I think the next steps will be to run a few different SQL queries to identify why the usage is not matching to the image.
Thanks,
Rich
Hi @cronventis,
Thank you for checking that for me. I dug into this further and it looks like we only actually use the Image_Id to find the image within a feed. We expect Kubernetes to return the Config Digest and that is what is stored in the Image_Id column. I'm going to work on creating this scenario and verify what is being pulled from Kubernetes. With the holidays coming up, I will not be able to look into this until next week. You should hear back from us by Tuesday or Wednesday of next week.
Thanks,
Rich
So far, I'm unable to recreate this issue. Do you have a reverse-proxy (like nginx, apache, traefik, etc...) in front of it? If so, can you try to bypass that and see if the issue still happens?
Thanks,
Rich
Hi @cronventis,
Thanks for checking that for us. That does mean the Container Scanner is working. Did you happen to configure the container usage on your Docker Registry in ProGet? You can check this by navigating to Docker Registry -> Manage Feed -> Detection & Blocking and then you should see your container scanner under Container Usage Scanning.
If you have configured your feed to use that, the next step would be to look at what the Kubernetes scanner identified for your images and how ProGet has these images stored. If you look at your ContainerUsage table, you will need to look at two columns: Container_Name and Image_Id. The Container_Name is your repository's name and the Image_Id is the digest for your image. Can you confirm that you see the container names you are expecting in that table?
Thanks,
Rich
Thanks! I have received it. I'm going to look it over and test a few more cases and get back to you shortly.
Thanks,
Rich
Would you be able to send me over the contents of your Configuration table in your BuildMaster database? I'm having trouble recreating this and I'm guessing it has to do with one or more of those settings. You can send it to support@inedo.com and include [QA-723] in the subject. I will keep an eye out for it that way and review it.
Thanks,
Rich
Just to confirm, on your test site, is it a brand new BuildMaster database also?
Thanks,
Rich
Hi @cronventis,
The fix will only prevent new duplicates from being created. Mainly this is because I cannot ensure that the first vulnerability is always the properly assessed vulnerability. For now, the best option will be to run a SQL query directly against the ProGet database in SQL Server after you upgrade to 6.0.5.
I have created a SQL query that will delete all the duplicates excluding the first vulnerability that was added to ProGet. If that criteria works for you, this query should be good enough.
BEGIN TRANSACTION
DELETE FROM [Vulnerabilities]
WHERE [Vulnerability_Id] in (
SELECT v.[Vulnerability_Id]
FROM [Vulnerabilities] v
INNER JOIN (
SELECT [External_Id]
,[FeedType_Name]
,[VulnerabilitySource_Id]
,COUNT([External_Id]) as [NumberOfDuplicates]
,MIN([Vulnerability_Id]) as [FirstVulnerability]
,MAX([Vulnerability_Id]) as [LastVulnerability]
FROM [Vulnerabilities_Extended]
GROUP BY External_Id, FeedType_Name, VulnerabilitySource_Id
HAVING count(External_Id) > 1
) duplicates on v.External_Id = duplicates.External_Id
WHERE v.Vulnerability_Id != duplicates.[FirstVulnerability]
)
ROLLBACK
Currently, I have the script set to roll back at the end (meaning it won't actually delete the duplicates). If this works for you, you can simply change ROLLBACK to COMMIT and rerun the query, and it will remove the duplicates.
Please let me know if you have any questions!
Thanks,
Rich
Hi @cronventis,
Thanks for sending this over. I have found the issue, PG-2064, and have fixed it. It will be released tomorrow in ProGet 6.0.5.
Thanks,
Rich
Hi @cronventis,
Thanks for confirming this for me. I already have the fix in for ProGet 6.0.5, which is expected to release on Friday.
Thanks,
Rich
In my experience you should only need to add \ in front of all the $ and that should work. Can you give this version a try?
# ACL LOG4J https://www.haproxy.com/blog/december-2021-log4shell-mitigation/
option http-buffer-request
acl log4shell url,url_dec -m reg \${[^}]*\${
acl log4shell url,url_dec -m reg \${jndi:(?:ldaps?|iiop|dns|rmi)://
acl log4shell url,url_dec -i -m reg \${[\w${}\-:]*j[\w${}\-:]*n[\w${}\-:]*d[\w${}\-:]*i[\w${}\-:]*:.*}
acl log4shell req.hdrs -m reg \${[^}]*\${
acl log4shell req.hdrs -m reg \${jndi:(?:ldaps?|iiop|dns|rmi)://
acl log4shell req.hdrs -i -m reg \${[\w${}\-:]*j[\w${}\-:]*n[\w${}\-:]*d[\w${}\-:]*i[\w${}\-:]*:.*}
acl log4shell_form req.body,url_dec -m reg \${[^}]*\${
acl log4shell_form req.body,url_dec -m reg \${jndi:(?:ldaps?|iiop|dns|rmi)://
acl log4shell_form req.body,url_dec -i -m reg \${[\w${}\-:]*j[\w${}\-:]*n[\w${}\-:]*d[\w${}\-:]*i[\w${}\-:]*:.*}
http-request deny if log4shell
http-request deny if { req.fhdr(content-type) -m str application/x-www-form-urlencoded } log4shell_form
Thanks,
Rich
Hi @cronventis,
I was able to recreate this issue and we should be able to get this corrected in the next release of ProGet. Can you please confirm which version of ProGet you are using?
Thanks,
Rich
Hi @cronventis,
Do you have multiple vulnerability sources configured in your ProGet instance? Are you able to provide screenshots of the actual vulnerabilities in ProGet?
Thanks,
Rich
Hi @Stephen-Schaff,
How many tags does the build/ops-dotnet-6.0 image have? Is 1.0.1 the only tag it has, or is there another? Does the base-prerelease registry also have strict versioning enabled?
The error you are seeing with vulnerability scanning will not affect the setting of the tag on Docker images. Vulnerability scanning is a scheduled job that happens in the background, not automatically upon upload of a new image, so that should not affect this image. I am curious about that error, though. Do you have your Sonatype vulnerability source attempting to scan your Docker registry? If so, Sonatype does not support Docker registries; we only support using Clair for scanning vulnerabilities in Docker images.
Thanks,
Rich
So 5.3.27 definitely has the bug. The bug affected Docker Registries not using shared blob storage. Beginning in ProGet 5.3, all new Docker Registries will use common blob by default, but any existing registries that were upgraded do not. This caused the vulnerability scanner to not find the layers during the scan when looking at images in a registry not using common blob storage.
I'm currently testing with the quay.io/coreos/clair:v2.1.7 image. We have not implemented Clair's newest API yet because it does not seem to be stable; they are still making too many changes too frequently.
Hope this helps!
Thanks,
Rich
ProGet 5.3.39 and later include the fixes for these issues. We also improved the performance of the list repositories page in ProGet 6.0.3+. What version of ProGet are you currently running? Also, is this happening for every layer?
Thanks,
Rich
Hi @scusson_9923,
After some further testing, I figured out this to be the valid request:
curl -d "" -X POST --user <username>:<password> http://proget.localhost/endpoints/public-files/dir/cli-api-test/
You could also use api:<API key> for the --user parameter. The last time I tested curl, I didn't need to include the -d "", nor does their documentation. Using only -X apparently omits the Content-Length header, whereas -d "" will add a Content-Length of 0.
Hope this helps!
Thanks,
Rich
Hi @scusson_9923,
It looks like the issue in the command is that you are using -d but not providing a value for it. So what is happening is that -X is being treated as the value of -d, and POST is being treated as the host. You should either use just -d or remove the -d and use -X POST instead. I'm sorry I missed this before, but curl is not the best with error messages.
As for our documentation, it is meant as general documentation; mainly, that you need to make a POST/GET/etc. request to that URL. You can use any tool you would like (curl, Invoke-WebRequest, Postman, etc.) or call it from your code directly.
If this is something you are planning to use a lot, I would suggest that you add your own custom Feed Usage Instructions or Package Install Instructions for your Asset Directory.
Thanks,
Rich
Hi @scusson_9923,
I just wanted to make sure you were using the correct URL. It should be https://<ProGet_server>/endpoints/<asset_dir>/dir/<new_dir>/. I wanted to make sure you had the dir URL part between your asset directory and your folder path.
What OS are you running the curl command from? I want to test this myself, but I know Linux and Windows syntax can differ a bit.
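To illustrate the quoting difference (a sketch with placeholder values, reusing the hypothetical endpoint from this thread), the same request would look like this on each OS:

```
# Linux (bash): single quotes keep the shell from interpreting < > and other characters
curl -d "" -X POST --user 'api:<API key>' 'http://proget.localhost/endpoints/public-files/dir/cli-api-test/'

# Windows (cmd.exe): use double quotes around the same values
curl -d "" -X POST --user "api:<API key>" "http://proget.localhost/endpoints/public-files/dir/cli-api-test/"
```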
Thanks,
Rich
@mcascone said in buildmaster linux docker install: sa login failures:
docker exec -it inedo-sql /opt/mssql-tools/bin/sqlcmd
-S localhost -U SA -P 'redacted'
-Q 'CREATE DATABASE [ProGet] COLLATE SQL_Latin1_General_CP1_CI_AS'
Hi @mcascone,
This line has the issue. You created a database named [ProGet] instead of a database named [BuildMaster], as stated in the connection string of the third command.
Thanks,
Rich
That all looks correct. When you used localhost for the connector, did you still have Web.BaseUrl set? If so, can you please try clearing that when you are using localhost? Typically I suggest using Web.BaseUrl when your self-connectors connect to that same URL; when using localhost you should leave that value blank. Also, is the internal port that ProGet is running on within the container port 80 as well, or is it a port that Docker is mapping to an external port 80? That can also cause license violations in ProGet Free.
I'm also going to try to look through past tickets with users who use Traefik. I did a brief scan before my last comment and the X-Forward* headers seem to be the most common issue in the configuration.
Thanks,
Rich
What headers do you currently have forwarding in Traefik? We have documentation around setting up nginx; I know it is a different tool than Traefik, but it does include the standard headers that you need to forward.
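For reference, the standard header set usually looks something like this in nginx syntax (a sketch only, not the exact contents of our guide; Traefik normally sets the X-Forwarded-* headers for you when it proxies a request):

```
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```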
Thanks,
Rich
Hi @lukel_3991,
What version of Otter are you running and are you running it on Windows or Docker?
Thanks,
Rich
Hi @forcenet_4316,
Thanks for bringing this to our attention. I'm not sure how this slipped through, but I made sure it is fixed in the next release, ProGet 6.0.3. The final fix is tracked via PG-2043.
Thanks,
Rich
Hi Paul,
Good catch! I have created a ticket to track this issue, BM-3753. As I said in your other post, it is too late to get this in for tomorrow's release, but this is tentatively scheduled for BuildMaster 7.0.14 which is due to release on November 18th, 2021.
Thanks,
Rich
Hi Paul,
Thanks for bringing this to our attention. I have created a ticket to track this issue, BM-3752. Unfortunately, it is too late to get this in for tomorrow's release, but this is tentatively scheduled for BuildMaster 7.0.14 which is due to release on November 18th, 2021.
Thanks,
Rich
Hi @shiv03_9800,
Thanks for sending that over. We have located the issue and we are working to fix it. I will let you know as soon as we have a fix.
Thanks,
Rich
Hi @shiv03_9800,
Would you be able to send over the full execution log from running the "PowerShellDemo"?
Thanks,
Rich
When setting the time for a scheduled Job, is it always 2 hours later? Can you tell me what time zone you have set in your User preferences?
Also, could you find the Job in the database and tell me what the CRON configuration is? You can do that by running the following SQL:
SELECT [Job_Id]
,[Job_Name]
,[LastRun_Date]
,[JobState_Code]
,[Job_Configuration]
,[Recurring_Indicator]
,[TriggeredBy_Template_Name]
FROM [Otter].[dbo].[Jobs]
WHERE [Recurring_Indicator] = 'Y'
Then open the XML in the Job_Configuration column and look for the CronSchedule attribute on the Properties element.
Thanks,
Rich
Hi @Stephen-Schaff,
The latest tag is a bit odd. Each client handles it differently and I have found most clients will not repull a newer version if latest has already been pulled once. According to Docker's API (and Kubernetes), a client is supposed to pull the manifest from the server and compare it with the currently downloaded image and update if there is any difference, but I have found that does not always work. You can configure Kubernetes to always pull an image, but I cannot confirm how well it works.
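As a sketch of the always-pull configuration mentioned above (the pod name and image reference here are hypothetical), the relevant Kubernetes setting is imagePullPolicy on the container spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      # Always re-check the registry for a newer "latest" on container start
      image: proget.example.com/mydocker/library/app:latest
      imagePullPolicy: Always
```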
As for getting a list of tags, we basically have two ways.
What it sounds like you are looking for is just a list of alternative tags. The only way to do this is to use our native API, calling the DockerImages_GetTags method and grouping by DockerImage_Id.
The other way to get a list of tags would be to use the docker API itself. This way will get all tags associated with a repository, not just the alternative tags. To do this:
1. Make a request to http://<your proget server>/v2/_auth with an Authorization header with the value Basic <BASE 64 encoded username:password>, and grab the token property from the JSON response.
2. Make a GET request to http://<your proget server>/v2/<feed name>/<image>/tags/list (ex: http://proget.localhost/v2/defaultdocker6/dotnet/core/aspnet/tags/list) with an Authorization header with the value Bearer <token from step 1>.
3. The tags property of the response contains the tags for that repository.
That will get you a list of tags using the Docker API.
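As a small sketch of the header value in step 1 (user:pass is a placeholder for your real ProGet credentials), the Base64 portion can be generated like this:

```shell
# Base64-encode username:password for the Basic Authorization header
printf '%s' 'user:pass' | base64
# -> dXNlcjpwYXNz  (so the header is: Authorization: Basic dXNlcjpwYXNz)
```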
Hope this helps!
Thanks,
Rich
Hi @shiv03_9800,
We have just released a new version of Otter that includes a fix for the segmentation fault.
You will need to make sure that .NET Framework 4.5.1 or greater is installed on the Windows server. But if you were able to run the Inedo Agent installer, then chances are you already have that installed.
As for the Inedo Agent error, I talked with some colleagues, and this is something we have seen on slower or congested networks when checking that the Inedo Agent has the components needed to communicate with Otter. Normally, if you wait for the next server check to occur, the issue ends up resolving itself. There was another issue, however, that can affect PowerShell scripts when using the Docker version of Otter, OT-432. This was fixed in the latest release as well.
Thanks,
Rich
Hi @shiv03_9800,
Can you please clarify what is not working with Inedo Agent 49? Is the connection always dropped or just when running a PowerShell script through it? Is it a specific command that is failing? We have not heard of any limitations using the Inedo Agent from our Docker image as of yet and we have multiple customers that are currently using it without issue.
As for the configuration change, that is currently a bug. I have scheduled this to be fixed in the next release of Otter. It is being tracked as ticket OT-431.
Thanks,
Rich