No problem! Glad it is working. ProGet 2024.6 will include a fix for this issue, PG-2695. So hopefully this won't happen again!
Thanks,
Rich
Hi @Darren-Gipson_6156,
I think I have finally recreated the issue. Can you please try something for me? Please configure integrated authentication following these steps:
Once you do that, does windows authentication work?
It looks like there was a change in .NET 8.0 that automatically sets the user principal on the HTTP context to the Windows authentication name. By following the steps above, I was able to configure Windows Authentication in IIS to work around this issue.
Thanks,
Rich
Thank you for getting back to me. I got your email and this is all very helpful! I think I see where the issue is occurring. As you noted, it looks to have to do with how we are converting the integrated authentication user to a user principal in the HTTP context. I'm going to need to dig into this a little bit, but I should have an update by mid-day tomorrow (I'm in the EST timezone). I'll let you know what I find.
While I'm diving into this, can you try restarting IIS after you enable integrated authentication and try again? I just want to rule out security caching.
Thanks,
Rich
Thanks for all that information. I'm sorry this is taking so long to figure out, but LDAP/AD and Integrated Windows Authentication are always difficult to track down.
Just a few notes on the query process.
The \5c you are seeing is the LDAP library we use (System.DirectoryServices on Windows and Novell.Directory.Ldap.NETStandard on Linux) encoding the username in the LDAP query. It shouldn't be searching for domain\user in that query, so most likely there is a bug in that part of the process. Can you tell me which ProGet installation you are using (InedoHub or Docker) and which web server you are using (IIS or Integrated)?
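To illustrate the encoding mentioned above: LDAP filter strings escape special characters as \XX hex pairs per RFC 4515, and a backslash becomes \5c. This is a minimal illustrative sketch, not the actual code of either library:

```python
def ldap_escape(value: str) -> str:
    # RFC 4515: backslash, *, (, ), and NUL must be escaped as \XX
    # hex pairs; a single pass over this mapping handles each once
    special = {'\\': r'\5c', '*': r'\2a', '(': r'\28', ')': r'\29', '\0': r'\00'}
    return ''.join(special.get(ch, ch) for ch in value)

# A domain-qualified username picks up the \5c seen in the query:
print(ldap_escape('DOMAIN\\jsmith'))  # DOMAIN\5cjsmith
```

So a filter containing DOMAIN\5cjsmith is searching for the literal string DOMAIN\jsmith, which is why the domain prefix should be stripped before the query is built.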
One last thing: can you test a few things for me? This will help me pinpoint the issue further. Can you try logging in as username@domain.com and as DOMAIN\username, then navigate to http://yourprogetserver/debug/integrated-auth and send the results? If you need to redact the usernames, can you please leave the format visible? If you do not feel comfortable posting the results of the debug page, you can email them to support@inedo.com with the subject [QA-1565] Results.
Thanks,
Rich
Thanks for the additional information. We were able to recreate it and have a fix pending, PG-2639, that will be released this Friday in ProGet 2024.2.
Thank you for all the extra detail. I was able to recreate this issue and fix it as part of PG-2628. This fix will be released this Friday in ProGet 2023.34 and 2024.1.
Thanks,
Rich
Hi @v-makkenze_6348,
As @atripp stated in your other post, this is due to bad data. That exact package was added with a four-part NuGet quirks version (most likely explicitly specified), 17.2.65.0, which is being normalized to a three-part version per NuGet's API specs. We are still working out how best to handle these cases.
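For context on the normalization itself: NuGet's version normalization omits a fourth (revision) segment when it is zero, which is why a quirks version like 17.2.65.0 is served as 17.2.65. A minimal sketch of that rule (illustrative, not ProGet's implementation; prerelease labels are ignored for brevity):

```python
def normalize_nuget_version(version: str) -> str:
    # NuGet normalization: strip leading zeros from each numeric segment
    # and omit a fourth (revision) segment when it equals zero
    parts = [str(int(p)) for p in version.split('.')]
    if len(parts) == 4 and parts[3] == '0':
        parts = parts[:3]
    return '.'.join(parts)

print(normalize_nuget_version('17.2.65.0'))  # 17.2.65
```

A non-zero revision (e.g. 1.2.3.4) is kept as four parts, so only the trailing-zero case collapses to three.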
Thanks,
Rich
Hi @arlymac_7956,
What version of ProGet did you have installed prior to upgrading to ProGet 2024 before you downgraded? I don't see any differences in the table schemas between ProGet 2023 and 2024, but I want to make sure I'm comparing to the correct schema.
Thanks,
Rich
Hi @jw,
Yes, we believe this is the same data issue. Our initial thought was that this only affected analysis, but it seems to be affecting the NuGet feed itself as well.
Thanks,
Rich
I have a feeling that there was a problem connecting to the third-party Maven index. Which third-party Maven index were you trying to connect to? Also, can you try creating a blank Maven feed, adding the connector to it, and seeing if you can pull artifacts from it? This would help point us toward where the issue may exist.
Thanks,
Rich
This is not expected behavior. Can you please tell me which version of ProGet you are using?
Thanks,
Rich
Hi @daniel-scati,
Thanks for catching that! I have updated the documentation.
Thanks,
Rich
Hi @sebastian,
That option can be ignored. We have decided to remove that option from the feature because it only changed a UI color and had no real effect on the operation. It looks like we missed it in that UI; we will remove it in an upcoming release of ProGet.
Thanks,
Rich
Hi @sebastian,
Thanks for asking this. We will definitely explain this better in our docs prior to the launch of ProGet 2024. Basically, the concept of build stages is a way to track your project through its build lifecycle. Since the scan needs to be performed against the source code, a build is typically added at your CI server's build stage. The version is then promoted between stages until it is released. During this process, there are typically multiple CI builds that are created and then rejected before going to release. ProGet's build stages give you the ability to automatically handle archiving old versions and to determine at which stage an automated build analysis should create issues.
With all that said, you can customize these build stages by navigating to Reporting & SCA -> Projects and then hover over the multi-button in the upper right corner and select "Build Stages". From there, you can modify the settings for how builds are handled in each stage (scan for issues, number of active builds to keep, etc...) and create new build stages to match your CI/CD process.
ProGet includes 4 stages out of the box and they are configured to do the following by default:
I hope this helps! Please let us know if you have any other questions.
Thanks,
Rich
Hi @jw,
Thanks for letting us know about these. To answer your questions:
Thanks,
Rich
Hello @jw,
We are currently in the process of testing the change to include the updated CycloneDX Specs. It is expected to be released in ProGet 2023.31.
Thanks,
Rich
Hi @andy222,
To expand on this further: if you are looking for it to just skip the stage based on that variable and proceed to the next stage, then I would also suggest @philippe-camelio_3885's method of checking in OtterScript using the $PipelineStageName variable. If you are looking to block going further in the pipeline and stop at a specific stage, I would suggest using Pipeline Stage Requirements and setting a Require Variable automated check. That can block deployment to a stage unless a variable is set to a specific value.
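Conceptually, a Require Variable automated check is just a gate that compares pipeline variables against expected values before promotion. A simplified sketch (the function and variable names here are illustrative, not BuildMaster's API):

```python
def can_promote(variables: dict, required: dict) -> bool:
    # block promotion to the stage unless every required variable
    # is set to its expected value
    return all(variables.get(name) == expected
               for name, expected in required.items())

print(can_promote({'ReleaseType': 'hotfix'}, {'ReleaseType': 'hotfix'}))  # True
print(can_promote({'ReleaseType': 'beta'}, {'ReleaseType': 'hotfix'}))    # False
```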
Thanks,
Rich
Hello @daniel-scati,
Sorry for the delay in our response. We have recreated the issue and have a fix ready, PG-2582, which will be released in ProGet 2023.30.
Thanks,
Rich
Hi @forbzie22_0253,
There is no way to split out each feed to have different app pools. The only way to accomplish that is to have multiple instances of ProGet where each instance has a different feed. That would require a separate license for each instance.
Thanks,
Rich
Hi @forbzie22_0253,
Since these will be free editions of ProGet, each instance will need to have its own database. The only way to share a database would be to purchase an Enterprise edition license and configure ProGet to use High Availability.
Thanks,
Rich
Hi @PhilipWhite,
The Repository Name field is actually the name of a Docker Repository Connection, not the Repository itself. To add a Docker Repository Connection:
Then take that resource name and use it in the "Repository name" field of the Docker::Build-Image operation. Also, if you only have one Docker Repository Connection, you can leave it blank and it will use the variable $DockerRepository by default, which is automatically set to your Docker Repository Connection.
Hope this helps!
Thanks,
Rich
Hi @Justinvolved,
The easiest way to set up a test environment for this would be to set up an instance of Otter (the free edition is fine). Once you have checked out https://github.com/Inedo/inedox-windows and made your changes, you can package the extension using the Inedo Extension Packager, which is available as a .NET tool. You can then navigate to the extensions page and upload the extension file to Otter. You may need to modify the AssemblyVersion in AssemblyInfo.cs to a version newer than the installed version to get it picked up as the latest. Alternatively, you can copy that extension file to the Extensions.ExtensionsPath and restart Otter to have it pick it up as well.
The command I typically run to package the extension is:
inedoxpack pack InedoExtension Windows.upack -o --build=Debug
I run that command from the solution file's directory.
Hope this helps! If you have any questions, please let me know.
Thanks,
Rich
Hi @Justinvolved,
Would you be able to send us the output of Get-Module -ListAvailable on PowerShell 5.1? I would like to take a look and see if there is anything causing a parsing error in Otter. If it is not safe to post here, you can email it to support@inedo.com with the subject prefixed [QA-1405], and then comment back here when you have sent it.
Thanks,
Rich
Hi @sebastian & @caterina,
I'm sorry, I realized that after I sent the last response. I have already fixed it as part of ticket PG-2563 in ProGet 2023.28. That version is due out this Friday, but I can provide you with a pre-release version early if you want to fix this issue immediately.
Thanks,
Rich
Hi @caterina,
I think I see what the issue is here. When it comes to the package purl for npm packages, the scope needs to be URI encoded. When it goes to parse the purl for a scoped package, it reads the @ in the scope as the character indicating a version and then fails to parse it as an invalid URI. I'll get a fix into pgscan to handle this shortly and reply back when I have an updated version.
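For reference, the purl spec requires npm scope segments to be percent-encoded, so @scope becomes %40scope and the remaining @ unambiguously marks the version. A minimal sketch of the corrected encoding (illustrative, not pgscan's actual code):

```python
from urllib.parse import quote

def npm_purl(name: str, version: str) -> str:
    # percent-encode each name segment so the scope's '@' (-> %40)
    # is not mistaken for the version separator
    encoded = '/'.join(quote(segment, safe='') for segment in name.split('/'))
    return f'pkg:npm/{encoded}@{version}'

print(npm_purl('@angular/core', '17.0.0'))  # pkg:npm/%40angular/core@17.0.0
print(npm_purl('lodash', '4.17.21'))        # pkg:npm/lodash@4.17.21
```

With the scope encoded this way, a parser splitting on the last @ recovers the version correctly for both scoped and unscoped packages.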
Thanks,
Rich
Hi @MY_9476,
We just released 2023.3 on 12/1/2023. Can you please update to 2023.3 and verify that it fixed your issue?
Thanks,
Rich
Thanks for finding this and providing a workaround. I have added a ticket, BM-3915, to fix this issue. It should be released within the next couple of versions of BuildMaster.
Thanks,
Rich
Hi @paul_6112,
Thanks for letting us know that this is still an issue. I created a ticket, BM-3914, to track this fix.
Thanks,
Rich
Hi @paul_6112,
Thanks for letting us know that this is still an issue. I created a ticket, BM-3913, to track this fix.
Thanks,
Rich
Hi @v-makkenze_6348,
This fix has been released in pgscan 1.5.7. Please let us know if you have any questions!
Thanks,
Rich
Based on that script, as long as your Dockerfile is at the root of the $WorkingDirectory (From defaults to $WorkingDirectory) and the myapp.env is specified within the Dockerfile, that script should work. Can you please tell me what you are seeing while running Build-Image?
Thanks,
Rich
Hi @paul_6112,
What version of the Scripting extension do you have installed? This bug should be fixed in v2.4.0 of the Scripting extension. If it is not currently version 2.4.0, can you try updating that extension and see if that fixes the issue?
Thanks,
Rich
Hi @paul_6112,
Thanks for sending this over to us. I have resolved the issue in OT-505 and it will be released this Friday in Otter 2023.2.
Thanks,
Rich
Hi @paul_6112,
Thanks for sending this over. I found the issue and have resolved this as part of OT-504. It will be released this Friday in Otter 2023.2.
Thanks,
Rich
Hi @paul_6112,
Thanks for verifying this for us. We were able to find an issue in our code. This has been fixed in BM-3909 and will be released this Friday in BuildMaster 2023.5.
Thanks,
Rich
Hi @MY_9476,
Thanks for bringing this to our attention. I added a ticket, OT-502, to fix the issue. This should be released next week in Otter 2023.2.
Thanks,
Rich
Hi @devopsdude3113,
What scopes do you have configured for your personal access token? When I tested this, I created a personal access token and added only the read:packages scope.
Also, do you see any error in your ProGet diagnostic center?
Thanks,
Rich
Hi @devopsdude3113,
When you search for the package by exact name in ProGet (ex: @owner/npm-package), are you able to see it?
Thanks,
Rich
Hi @devopsdude3113,
When you are searching for your package, are you searching using @owner/package-name? GitHub only supports scoped packages, so the exact name requires the scope too. Also, if you have already pulled the package directly from GitHub, you will need to clear your local npm cache before npm will attempt to pull from ProGet. Also, please verify that only your ProGet repository is configured for your @owner scope in your .npmrc file.
Thanks,
Rich
Hi @devopsdude3113,
The package count is what we check for connector health, so the GitHub connector will always show 0 connector packages. The search API is what allows partial-name searches against the remote repository. Once a package has been pulled locally to ProGet or has been cached in ProGet, it will show on your list packages page and will allow partial name searching. When a package has not been cached or pulled to ProGet, it exists only remotely in the GitHub repository, and you must type the exact name to see it in ProGet.
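In pseudo-Python, the split between partial and exact-name search described above looks roughly like this (an illustrative sketch, not ProGet's code):

```python
def search_packages(query, local_packages, remote_exact_lookup):
    # partial matching only works against packages already local/cached;
    # remote-only packages are found by typing the exact name, which is
    # checked directly against the remote registry
    results = [name for name in local_packages if query in name]
    if query not in results and remote_exact_lookup(query):
        results.append(query)
    return results

local = ['@owner/cached-pkg']
print(search_packages('cached', local, lambda name: False))
# ['@owner/cached-pkg']
print(search_packages('@owner/remote-only', local, lambda name: True))
# ['@owner/remote-only']
```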
Thanks,
Rich
Hi @devopsdude3113,
GitHub npm connectors work a bit differently than other connectors. GitHub does not implement the full npm API specification, so certain things like the package count and the search API do not work. To get around this in ProGet 2023, you will need to make sure that you have updated to at least ProGet 2023.20 and use the following settings:
https://npm.pkg.github.com/<OWNER>
That should allow you to search for the package by full name and allow your npm applications to pull the packages properly. Please note that partial name searches will not return any values from your GitHub connector since the search API has not been implemented.
Please let me know if these steps fix your issue or if you have any other questions. I have also added a section to our docs to include setting up a GitHub connector as well. You can see this in the Troubleshooting section of our npm docs.
Thanks,
Rich
I think I have fixed the issue. Can you try upgrading your image to Otter 23.0.1-ci.2? It looks like we had a version mismatch in our base image.
Thanks,
Rich
Hi @jimbobmcgee,
I just wanted to let you know that we just released Otter 2023 and it includes the name filter on the List action type on the Infrastructure API.
Thanks,
Rich
Hi @Jon,
Looks like this was the result of a recent change. I have fixed this in OT-499 and it will be released in Otter 2022.15. If you need this immediately, I can create a prerelease version of Otter you can install. Please let me know if you are interested.
Thanks,
Rich
Hi @caterina,
Here is the final solution:
- auto type and scanning for NuGet and npm dependencies
- npm type and a package-lock.json file is specified
- npm type and a package-lock.json file is not specified (--include-dev) (--package-lock-only)

This has been implemented in pgscan 1.5.6, which I will be pushing shortly, and these options will be added to BuildMaster 2023.2.
Thanks,
Rich
Hi @caterina,
That is correct, those two files will be merged. The page you are looking at is just a history of each SBOM that has been uploaded to the project. When you export the SBOM for that project, ProGet generates an SBOM based on all the packages included in that project release and combines them into one file. Also, if you remove a package dependency on the packages tab (like an npm dev dependency), it will not be included in the generated SBOM.
Thanks,
Rich
Hi @caterina,
I see the problem now: the package-lock.json of the dev dependency contains non-dev dependencies, which would cause the extra entries. I may have a solution for this, but I will need to run a couple of tests.
I still think the two scans in this case would be best. When you run pgscan those two times (one for npm and one for NuGet), configure the scan to push the results of each scan to the same SCA project in ProGet. This will append the new dependencies to the project. This way, when you export the SBOM from ProGet, only one SBOM will be generated and exported including all the related dependencies (npm and NuGet).
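Conceptually, pushing both scans to one project and then exporting behaves like a de-duplicating merge of the component lists. A simplified sketch (illustrative only, not ProGet's implementation):

```python
def merge_sbom_components(scans):
    # append components from each scan to the project, de-duplicating
    # by (name, version) so the exported SBOM lists each package once
    seen, merged = set(), []
    for components in scans:
        for comp in components:
            key = (comp['name'], comp['version'])
            if key not in seen:
                seen.add(key)
                merged.append(comp)
    return merged

npm_scan = [{'name': 'left-pad', 'version': '1.3.0'}]
nuget_scan = [{'name': 'Newtonsoft.Json', 'version': '13.0.1'},
              {'name': 'left-pad', 'version': '1.3.0'}]  # overlap is dropped
print(len(merge_sbom_components([npm_scan, nuget_scan])))  # 2
```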
Thanks,
Rich
Hi @caterina,
I was able to chat with the team and here was our consensus:
- auto type and scanning for NuGet and npm dependencies
- npm type and a package-lock.json file is specified
- npm type and a package-lock.json file is not specified (--include-dev)

The thought is that this lines up with the other SBOM scanners' defaults as well as handling any hidden dependencies in the node_modules folder. This also handles the case of scanning only package-lock.json, since you can explicitly specify it.
How does this sound to you?
Thanks,
Rich