Ah, thanks for the extra details here. I was finally able to recreate this issue. I'm tracking the fix in BM-3770, which is expected to ship in the next version of BuildMaster, due out this week.
Thanks,
Rich
Hi @cronventis,
I set up an example in our test environment here and everything is working as expected. I also verified that the Image_Id stored in the database matches the Image Digest shown in the Image Data table.
Is this image a local image or a cached image? Currently, usage will only show for a local image.
If it is a local image, I think the next step will be to run a few different SQL queries to identify why the usage is not matching the image.
Thanks,
Rich
Hi @cronventis,
Thank you for checking that for me. I dug into this further and it looks like we only actually use the Image_Id to find the image within a feed. We expect Kubernetes to return the Config Digest and that is what is stored in the Image_Id column. I'm going to work on creating this scenario and verify what is being pulled from Kubernetes. With the holidays coming up, I will not be able to look into this until next week. You should hear back from us by Tuesday or Wednesday of next week.
Thanks,
Rich
So far, I'm unable to recreate this issue. Do you have a reverse proxy (like nginx, Apache, Traefik, etc.) in front of it? If so, can you try to bypass it and see if the issue still happens?
Thanks,
Rich
Hi @cronventis,
Thanks for checking that for us. That does mean the Container Scanner is working. Did you happen to configure container usage on your Docker registry in ProGet? You can check this by navigating to Docker Registry -> Manage Feed -> Detection & Blocking; there you should see your container scanner under Container Usage Scanning.
If you have configured your feed to use that, the next step would be to look at what the Kubernetes scanner identified for your images and how ProGet has these images stored. If you look at your ContainerUsage table, you will need to look at two columns: Container_Name and Image_Id. The Container_Name is your repository's name, and the Image_Id is the Digest for your image. Can you confirm that you see the container names you are expecting in that table?
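For example, a quick check could look like this (a minimal sketch; I'm assuming the default dbo schema here):
SELECT [Container_Name], [Image_Id]
FROM [dbo].[ContainerUsage]
ORDER BY [Container_Name]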
Thanks,
Rich
Thanks! I have received it. I'm going to look it over and test a few more cases and get back to you shortly.
Thanks,
Rich
Would you be able to send me over the contents of your Configuration table on your BuildMaster database? I'm having trouble recreating this, and I'm guessing it has to do with one or more of those settings. You can send it to support@inedo.com and include [QA-723] in the subject; I will keep an eye out for it that way and review it.
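If it is easier to pull with a query, something like this should grab everything (a sketch; assuming the default dbo schema):
SELECT * FROM [BuildMaster].[dbo].[Configuration]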
Thanks,
Rich
Just to confirm, on your test site, is it a brand new BuildMaster database also?
Thanks,
Rich
Hi @cronventis,
The fix will only prevent new duplicates from being created, mainly because I cannot ensure that the first vulnerability is always the properly assessed one. For now, the best option will be to run a SQL query directly against the ProGet database in SQL Server after you upgrade to 6.0.5.
I have created a SQL query that will delete all the duplicates except the first vulnerability that was added to ProGet. If that criterion works for you, this query should be good enough.
BEGIN TRANSACTION

DELETE FROM [Vulnerabilities]
WHERE [Vulnerability_Id] IN (
    SELECT v.[Vulnerability_Id]
    FROM [Vulnerabilities] v
    INNER JOIN (
        SELECT [External_Id]
              ,[FeedType_Name]
              ,[VulnerabilitySource_Id]
              ,COUNT([External_Id]) AS [NumberOfDuplicates]
              ,MIN([Vulnerability_Id]) AS [FirstVulnerability]
              ,MAX([Vulnerability_Id]) AS [LastVulnerability]
        FROM [Vulnerabilities_Extended]
        GROUP BY [External_Id], [FeedType_Name], [VulnerabilitySource_Id]
        HAVING COUNT([External_Id]) > 1
    ) duplicates ON v.[External_Id] = duplicates.[External_Id]
    WHERE v.[Vulnerability_Id] != duplicates.[FirstVulnerability]
)

ROLLBACK
Currently, I have the script set to ROLLBACK at the end (meaning it won't actually delete the duplicates). If this works for you, you can simply change ROLLBACK to COMMIT and rerun the query, and it will remove the duplicates.
Please let me know if you have any questions!
Thanks,
Rich
Hi @cronventis,
Thanks for sending this over. I have found the issue, PG-2064, and have fixed it. It will be released tomorrow in ProGet 6.0.5.
Thanks,
Rich
Hi @cronventis,
Thanks for confirming this for me. I already have the fix in for ProGet 6.0.5, which is expected to release on Friday.
Thanks,
Rich
In my experience, you should only need to add a backtick (`) in front of all the $ characters, and that should work. Can you give this version a try?
# ACL LOG4J https://www.haproxy.com/blog/december-2021-log4shell-mitigation/
option http-buffer-request
acl log4shell url,url_dec -m reg \`${[^}]*\`${
acl log4shell url,url_dec -m reg \`${jndi:(?:ldaps?|iiop|dns|rmi)://
acl log4shell url,url_dec -i -m reg \`${[\w`${}\-:]*j[\w`${}\-:]*n[\w`${}\-:]*d[\w`${}\-:]*i[\w`${}\-:]*:.*}
acl log4shell req.hdrs -m reg \`${[^}]*\`${
acl log4shell req.hdrs -m reg \`${jndi:(?:ldaps?|iiop|dns|rmi)://
acl log4shell req.hdrs -i -m reg \`${[\w`${}\-:]*j[\w`${}\-:]*n[\w`${}\-:]*d[\w`${}\-:]*i[\w`${}\-:]*:.*}
acl log4shell_form req.body,url_dec -m reg \`${[^}]*\`${
acl log4shell_form req.body,url_dec -m reg \`${jndi:(?:ldaps?|iiop|dns|rmi)://
acl log4shell_form req.body,url_dec -i -m reg \`${[\w`${}\-:]*j[\w`${}\-:]*n[\w`${}\-:]*d[\w`${}\-:]*i[\w`${}\-:]*:.*}
http-request deny if log4shell
http-request deny if { req.fhdr(content-type) -m str application/x-www-form-urlencoded } log4shell_form
Thanks,
Rich
Hi @cronventis,
I was able to recreate this issue and we should be able to get this corrected in the next release of ProGet. Can you please confirm which version of ProGet you are using?
Thanks,
Rich
Hi @cronventis,
Do you have multiple vulnerability sources configured in your ProGet instance? Are you able to provide screenshots of the actual vulnerabilities in ProGet?
Thanks,
Rich
Hi @Stephen-Schaff,
How many tags does the build/ops-dotnet-6.0 image have? Is 1.0.1 the only tag it has, or is there another? Does the base-prerelease registry also have strict versioning enabled?
The error you are seeing with vulnerability scanning will not affect the setting of the tag on Docker images. Vulnerability scanning is a scheduled job that happens in the background, not automatically upon upload of a new image, so it should not affect this image. I am curious about that error, though. Do you have your Sonatype vulnerability source attempting to scan your Docker registry? If so, Sonatype does not support Docker registries; we only support using Clair for scanning vulnerabilities in Docker images.
Thanks,
Rich
So 5.3.27 definitely has the bug. The bug affected Docker Registries not using shared blob storage. Beginning in ProGet 5.3, all new Docker Registries use common blob storage by default, but any existing registries that were upgraded do not. This caused the vulnerability scanner to not find the layers during the scan when looking at images in a registry not using common blob storage.
I'm currently testing with the quay.io/coreos/clair:v2.1.7 image. We have not implemented Clair's newest API yet because it does not seem stable; they are still making too many changes too frequently.
Hope this helps!
Thanks,
Rich
ProGet 5.3.39 and later include the fixes for these issues. We also improved the performance of the list repositories page in ProGet 6.0.3+. What version of ProGet are you currently running? Also, is this happening for every layer?
Thanks,
Rich
Hi @scusson_9923,
So after some further testing, I figured out this to be the valid request:
curl -d "" -X POST --user <username>:<password> http://proget.localhost/endpoints/public-files/dir/cli-api-test/
You could also use api:<API key> for the --user parameter. The last time I tested curl, I didn't need to include the -d "", nor does curl's documentation. Using only -X apparently omits the Content-Length header, whereas -d "" adds a Content-Length of 0.
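For example, the API-key form of the same request would be:
curl -d "" -X POST --user api:<API key> http://proget.localhost/endpoints/public-files/dir/cli-api-test/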
Hope this helps!
Thanks,
Rich
Hi @scusson_9923,
It looks like the issue in the command is that you are using -d but not providing a value for it. So what is happening is that -X is being treated as the value of -d, and POST is being treated as the host. You should either use -d "" by itself or drop the -d and use -X POST instead. I'm sorry I missed this before, but curl is not the best with error messages.
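To illustrate with a placeholder URL:
# -X becomes the value of -d, and POST is parsed as the host:
curl -d -X POST http://proget.localhost/endpoints/public-files/dir/test/
# either of these sends a proper POST:
curl -d "" http://proget.localhost/endpoints/public-files/dir/test/
curl -X POST http://proget.localhost/endpoints/public-files/dir/test/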
As for our documentation, it is meant as general documentation: mainly, you need to make a POST/GET/etc. request to that URL. You can use any tool you would like (curl, Invoke-WebRequest, Postman, etc.) or call it from your code directly.
If this is something you are planning to use a lot, I would suggest that you add your own custom Feed Usage Instructions or Package Install Instructions for your Asset Directory.
Thanks,
Rich
Hi @scusson_9923,
I just wanted to make sure you were using the correct URL. It should be https://<ProGet_server>/endpoints/<asset_dir>/dir/<new_dir>/ and I wanted to make sure you had the dir URL part between your asset directory and your folder path.
What OS are you running the curl command from? I want to test this myself, but I know Linux and Windows syntax can differ a bit.
Thanks,
Rich
@mcascone said in buildmaster linux docker install: sa login failures:
docker exec -it inedo-sql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'redacted' -Q 'CREATE DATABASE [ProGet] COLLATE SQL_Latin1_General_CP1_CI_AS'
Hi @mcascone,
This line has the issue. You created a database named [ProGet] instead of a database named [BuildMaster], which is what you used in the connection string in the third command.
Thanks,
Rich
That all looks correct. When you used localhost for the connector, did you still have Web.BaseUrl set? If so, can you please try clearing that when you are using localhost? Typically I suggest using Web.BaseUrl when your self-connectors connect to that same URL; when using localhost, you should leave that value blank. Also, is the internal port that ProGet is running on within the container also port 80, or is it a port that Docker is mapping to an external port 80? That can also cause license violations in ProGet Free.
I'm also going to try to look through past tickets with users who use Traefik. I did a brief scan before my last comment and the X-Forward* headers seem to be the most common issue in the configuration.
Thanks,
Rich
What headers do you currently have forwarding in Traefik? We have documentation around setting up nginx; I know it is a different tool than Traefik, but it does include the standard headers that you need to forward.
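For reference, the standard forwarded headers in an nginx config look like this (a sketch; Traefik expresses the equivalent through its own middleware configuration):
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;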
Thanks,
Rich
Hi @lukel_3991,
What version of Otter are you running and are you running it on Windows or Docker?
Thanks,
Rich
Hi @forcenet_4316,
Thanks for bringing this to our attention. I'm not sure how this slipped through, but I made sure it is fixed in the next release, ProGet 6.0.3. The final fix is tracked via PG-2043.
Thanks,
Rich
Hi Paul,
Good catch! I have created a ticket to track this issue, BM-3753. As I said in your other post, it is too late to get this in for tomorrow's release, but this is tentatively scheduled for BuildMaster 7.0.14 which is due to release on November 18th, 2021.
Thanks,
Rich
Hi Paul,
Thanks for bringing this to our attention. I have created a ticket to track this issue, BM-3752. Unfortunately, it is too late to get this in for tomorrow's release, but this is tentatively scheduled for BuildMaster 7.0.14 which is due to release on November 18th, 2021.
Thanks,
Rich
Hi @shiv03_9800,
Thanks for sending that over. We have located the issue and we are working to fix it. I will let you know as soon as we have a fix.
Thanks,
Rich
Hi @shiv03_9800,
Would you be able to send over the full execution log from running the "PowerShellDemo"?
Thanks,
Rich
When setting the time for a scheduled job, is it always 2 hours later? Can you tell me what time zone you have set in your User preferences?
Also, could you find the job in the database and tell me what the CRON configuration is? You can do that by running the following SQL:
SELECT [Job_Id]
,[Job_Name]
,[LastRun_Date]
,[JobState_Code]
,[Job_Configuration]
,[Recurring_Indicator]
,[TriggeredBy_Template_Name]
FROM [Otter].[dbo].[Jobs]
WHERE [Recurring_Indicator] = 'Y'
Then open the XML in the Job_Configuration column and look for the CronSchedule attribute on the Properties element.
Thanks,
Rich
Hi @Stephen-Schaff,
The latest tag is a bit odd. Each client handles it differently, and I have found most clients will not re-pull a newer version if latest has already been pulled once. According to Docker's API (and Kubernetes), a client is supposed to pull the manifest from the server, compare it with the currently downloaded image, and update if there is any difference, but I have found that does not always work. You can configure Kubernetes to always pull an image, but I cannot confirm how well it works.
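If you want to try forcing that, the Kubernetes setting is imagePullPolicy: Always on the container spec. A minimal sketch (the pod and image names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: proget.localhost/defaultdocker6/myapp:latest
    imagePullPolicy: Always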
As for getting a list of tags, we basically have two ways.
What it sounds like you are looking for is just a list of alternative tags. The only way to do this is to use our native API, calling the DockerImages_GetTags method and grouping by DockerImage_Id.
The other way to get a list of tags would be to use the Docker API itself. This way will get all tags associated with a repository, not just the alternative tags. To do this:
1. Make a request to http://<your proget server>/v2/_auth with an Authorization header with the value Basic <BASE 64 encoded username:password>, and grab the token property from the response.
2. Make a request to http://<your proget server>/v2/<feed name>/<image>/tags/list (ex: http://proget.localhost/v2/defaultdocker6/dotnet/core/aspnet/tags/list) with an Authorization header with the value Bearer <token from step 1>.
3. The tags will be in the tags property of the response.
That will get you a list of tags using the Docker API.
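As a rough curl sketch of those steps (server name, feed, and credentials are placeholders):
# step 1: request a token using Basic auth
curl -H "Authorization: Basic <BASE 64 encoded username:password>" http://proget.localhost/v2/_auth
# step 2: list the tags using the token from the response's token property
curl -H "Authorization: Bearer <token from step 1>" http://proget.localhost/v2/defaultdocker6/dotnet/core/aspnet/tags/list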
Hope this helps!
Thanks,
Rich
Hi @shiv03_9800,
We have just released a new version of Otter that includes a fix for the segmentation fault.
You will need to make sure that .NET Framework 4.5.1 or greater is installed on the Windows server, but if you were able to run the Inedo Agent installer, then chances are you already have it installed.
As for the Inedo Agent error, I talked with some colleagues, and this is something we have seen on slower or congested networks when checking whether the Inedo Agent has the components needed to communicate with Otter. Normally, if you wait for the next server check to occur, the issue resolves itself. There was another issue, however, that can affect PowerShell scripts when using the Docker version of Otter, OT-432. This was fixed in the latest release as well.
Thanks,
Rich
Hi @shiv03_9800,
Can you please clarify what is not working with Inedo Agent 49? Is the connection always dropped or just when running a PowerShell script through it? Is it a specific command that is failing? We have not heard of any limitations using the Inedo Agent from our Docker image as of yet and we have multiple customers that are currently using it without issue.
As for the configuration change, that is currently a bug. I have scheduled this to be fixed in the next release of Otter. It is being tracked as ticket OT-431.
Thanks,
Rich
Hi @shiv03_9800,
After further research on this issue, it looks like PowerShell remoting via WSMan on Linux is not currently a feature Microsoft supports. Even in the latest PowerShell Core SDK, they still have not added support for it. There is a new feature in PowerShell Core 7 that adds support for PSRemoting over SSH, but it includes a limited subset of commands. It would actually be better to just connect directly to the server using SSH in these cases, especially since you would have to configure Windows' built-in SSH server anyway.
The best option for connecting to Windows servers from Otter on Docker is to use the Inedo Agent. If you cannot use an Inedo Agent, then using SSH directly would be the fallback.
We will be making some changes, OT-430, to better address this issue in the UI and to bring awareness to the lack of support for PowerShell agent-less servers when using Docker. We will also be updating the documentation to reflect this.
Please let me know if you have any questions.
Thanks,
Rich
Hi @shiv03_9800,
After a bit more research, this looks like a problem with the deprecated omi library that Microsoft was using for PowerShell WSMan on Mac and Linux. We are still working through the problem, but I wanted to give you a quick update. I will provide an update when I have more information.
Thanks,
Rich
Hi @shiv03_9800,
I just wanted to let you know that we have received the logs and we are currently looking into the issue. I will let you know when we have more information.
Thanks,
Rich
This looks to be an issue when common blob storage is not enabled for a Docker registry. I have created a ticket, PG-2009, to track this fix. It is expected to be released in ProGet 5.3.39 which is scheduled to release on October 8th, 2021. I will post back here if anything changes.
There was also an issue, PG-2008, that was fixed in 5.3.38 that would sometimes return a 500 error to Clair when trying to download a layer. PG-2008 seems to only affect ProGet running on Linux.
Thanks,
Rich
Hi @scusson_9923,
Could you please generate a temporary API key and try wget with https://<proget_server>/api/docker-blobs/download/sha256%3Af033c4f65cdbf0bfa21d5543e56c0c41645eca4d893494bb4f0661b0f19ccc79?API_Key=<API_KEY> from the container? Can you also try that from your browser (it should try to download the file)?
It is throwing me off that you are getting a 404 for all the layer download requests. It sounds like either ProGet cannot find the layer, which should show an error in the log, or Clair is calling the wrong server to download the layer.
I apologize for all the back and forth with this. This is the first time we have experienced this with Clair and I'm still trying to determine which component has the issue.
We are currently running Clair against our proget.inedo.com instance and it doesn't seem to have this issue. I'm also not able to recreate this locally, which makes this a bit more difficult.
Thanks,
Rich
Hi @scusson_9923,
What happens if you try to wget https://<proget_server>/api/docker-blobs/download/sha256%3Af033c4f65cdbf0bfa21d5543e56c0c41645eca4d893494bb4f0661b0f19ccc79 from the Clair container? Does that also return a 404 error? Just to confirm, all the requests in the Vulnerability log are still warnings, correct?
Thanks,
Rich
Hi @scusson_9923,
Progress! It looks like we are past the SSL issue now. Can you check the diagnostics center in ProGet and see if there are any errors in there now?
Thanks,
Rich
Hi @scusson_9923,
I was just researching this a bit, and it looks like they may have added a toggle to disable SSL checks in Clair when downloading Docker layers. Can you try adding -insecure-tls to your docker run statement for Clair?
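The flag goes after the image name so it is passed to Clair itself; something like this (a sketch based on Clair v2's documented run command; your ports and config mount may differ):
docker run -d -p 6060-6061:6060-6061 -v /path/to/clair_config:/config quay.io/coreos/clair:v2.1.7 -config=/config/config.yaml -insecure-tls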
Thanks,
Rich
Hi @scusson_9923,
Please give me a little bit of time to work through this. If I have learned anything about Docker, it is that certificates are handled differently on every image. I need to do some digging to find out what is needed to make this work. I don't think HTTPS is a lost cause, we just need to figure out how Clair needs to handle these certs.
Thanks,
Rich
Hi @scusson_9923,
That is definitely the issue. It looks like the best way is to add your self-signed cert to the CA trust and add a Docker mount for it (-v /path/to/quay/cert/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt). You may be able to do it with the Clair config also, but I could not find anything easily for that.
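Putting that together, the run command might look like this (a sketch; the config mount and image tag are from earlier in this thread and may differ in your setup):
docker run -d -p 6060-6061:6060-6061 \
  -v /path/to/clair_config:/config \
  -v /path/to/quay/cert/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt \
  quay.io/coreos/clair:v2.1.7 -config=/config/config.yaml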
Thanks,
Rich
Hi @scusson_9923,
When you ping the server from the Clair image, are you pinging the server or the value that is in Web.BaseUrl? You should be using the value within Web.BaseUrl, since that is the connection it is using. Also, are you using a port other than 443 for your ProGet cluster for the Clair connection?
Also, in your Clair configuration, do you have anything set for API Authorization Header? If so, could you try removing that and see if that fixes your issue?
Lastly, does your Docker registry allow anonymous users to pull your images? If not, could you temporarily allow anonymous access to that registry and give it a try? This will let us see if it is an issue with our automatic key creation logic for Docker images.
I think that would be the starting point to troubleshoot this. If that doesn't resolve the issue, then the next step would be to make some custom PowerShell calls to do a direct test with Clair.
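If it comes to that, a direct test is also possible with curl instead of PowerShell; this is a hedged sketch against Clair v2's /v1/layers endpoint (host, port, digest, and key are placeholders):
# submit a layer for scanning (Name is the layer digest; Path is a URL Clair can download it from)
curl -X POST http://<clair_server>:6060/v1/layers \
  -H "Content-Type: application/json" \
  -d '{"Layer":{"Name":"<layer digest>","Path":"https://<proget_server>/api/docker-blobs/download/<digest>?API_Key=<API_KEY>","Format":"Docker"}}'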
I'm sorry for all the back and forth with this, but there definitely seems to be something blocking the connection, so now we just need to see which portion of the system is blocking it.
Thanks,
Rich
Hi @scusson_9923,
Looks like I misspoke earlier: the Clair integration will never return warnings unless an error happened while pulling an image. Would you be able to send a copy of the Vulnerability Scan logs to support@inedo.com with a subject of [QA-664] Clair Logs, so I can review them?
Thanks,
Rich
Hi @scusson_9923,
Two other things I forgot to ask.
Is your ProGet server accessible from your Clair server? Here is how the integration works: ProGet sends Clair a list of images to scan, Clair downloads the image layers from ProGet and scans them, and then ProGet calls back to Clair to get the results.
Do you have your Web.BaseUrl set in the Advanced Settings? Because this scan runs from the ProGet Service, the Web.BaseUrl needs to be set so we know what URL to send to Clair to download the image layers.
Thanks,
Rich
Hi @scusson_9923,
Thanks for answering those. Let me run some checks in our lab and get back to you. The integration will always yield some warnings because not everything comes back in the expected layer format (configuration layers are a good example of this), but Clair randomly makes changes to their providers and their API, and I want to make sure the test cases still work as expected.
Thanks,
Rich
Hi @scusson_9923,
I'm sorry to bombard you with questions, but I think this will be a good way to start.
If you can answer those for me, it should give us a good starting point to resolve this issue.
Thanks,
Rich
Hi @brett-polivka,
Is only the health check failing? Are you still able to pull or search for images?
Also, do you see any errors in the Diagnostics center?
Thanks,
Rich