Hi @paul_6112,
Thanks for sending this over. I found the issue and have resolved this as part of OT-504. It will be released this Friday in Otter 2023.2.
Thanks,
Rich
Hi @paul_6112,
Thanks for verifying this for us. We were able to find an issue in our code. This has been fixed in BM-3909 and will be released this Friday in BuildMaster 2023.5.
Thanks,
Rich
Hi @MY_9476,
Thanks for bringing this to our attention. I added a ticket, OT-502, to fix the issue. This should be released next week in Otter 2023.2.
Thanks,
Rich
Hi @devopsdude3113,
What scopes do you have configured for your personal access token? When I tested this, I created a personal access token and added only the read:packages scope.
Also, do you see any error in your ProGet diagnostic center?
Thanks,
Rich
Hi @devopsdude3113,
When you search for the package by exact name in ProGet (ex: @owner/npm-package), are you able to see it?
Thanks,
Rich
Hi @devopsdude3113,
When you are searching for your package, are you searching using @owner/package-name? GitHub only supports scoped packages, so the exact name requires the scope too. Also, if you have already pulled the package directly from GitHub, you will need to clear your local npm cache before it will attempt to pull from ProGet. Finally, please verify that only your ProGet repository is configured for your @owner scope in your .npmrc file.
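For reference, a minimal .npmrc along these lines keeps the @owner scope pointed only at ProGet (the ProGet URL and feed name here are just placeholders for your own):

@owner:registry=https://proget.example.com/npm/npm-feed/
registry=https://registry.npmjs.org/

And the local cache can be cleared with npm cache clean --force before retrying the install.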
Thanks,
Rich
Hi @devopsdude3113,
The package count is what we use to check connector health, so the GitHub connector will always show 0 connector packages, and the search API is what allows partial-name searches of the remote repository. Once a package has been pulled into ProGet or cached there, it will appear on your list-packages page and will support partial-name searching. Packages that have not been cached or pulled exist only in the remote GitHub repository and require you to type the exact name to see them in ProGet.
Thanks,
Rich
Hi @devopsdude3113,
GitHub npm connectors work a bit differently than other connectors. GitHub does not implement the full npm API specification, so certain things like the package count and the search API do not work. To get around this in ProGet 2023, make sure you have updated to at least ProGet 2023.20 and use the following connector URL:
https://npm.pkg.github.com/<OWNER>
That should allow you to search for the package by full name and allow your npm applications to pull the packages properly. Please note that partial-name searches will not return any values from your GitHub connector, since the search API has not been implemented.
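Once you're on 2023.20 with that URL in place, a quick client-side check (using the example package name from earlier; substitute your own scope, package, and ProGet URL) would be:

npm view @owner/npm-package --registry=https://proget.example.com/npm/npm-feed/

If that returns the package metadata, installs through ProGet should work as well.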
Please let me know if these steps fix your issue or if you have any other questions. I have also added a section on setting up a GitHub connector to our docs; you can find it in the Troubleshooting section of our npm docs.
Thanks,
Rich
I think I have fixed the issue. Can you try upgrading your image to Otter 23.0.1-ci.2? It looks like we had a version mismatch in our base image.
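In case it helps, upgrading is just a matter of pulling the new tag and recreating the container; the registry path here assumes you pull our image from Docker Hub, so adjust it if you pull from elsewhere:

docker pull inedo/otter:23.0.1-ci.2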
Thanks,
Rich
Hi @jimbobmcgee,
I just wanted to let you know that we released Otter 2023, and it includes the name filter on the List action type in the Infrastructure API.
Thanks,
Rich
Hi @Jon,
Looks like this was the result of a recent change. I have fixed this in OT-499, and the fix will be released in Otter 2022.15. If this is an immediate requirement, I can create a prerelease version of Otter that you can install. Please let me know if you are interested.
Thanks,
Rich
Hi @caterina,
Here is the final solution:

- auto type and scanning for NuGet and npm dependencies: the node_modules folder is scanned and dev dependencies are omitted
- npm type and a package-lock.json file is specified: only that lock file is scanned
- npm type and a package-lock.json file is not specified: the node_modules folder is scanned and dev dependencies are omitted (dev dependencies can be included with --include-dev, and a lock-file-only scan can be forced with --package-lock-only)

This has been implemented in pgscan 1.5.6, which I will be pushing shortly, and these options will be added to BuildMaster 2023.2.
Thanks,
Rich
Hi @caterina,
That is correct, those two files will be merged. The page you are looking at is just a history of each SBOM that has been uploaded to the project. When you export the SBOM for that project, ProGet generates an SBOM based on all the packages included in that project release and combines them into one file. Also, if you remove a package dependency on the Packages tab (like an npm dev dependency), it will not be included in the generated SBOM.
Thanks,
Rich
Hi @caterina,
I see the problem now: the package-lock.json of the dev dependency contains non-dev dependencies, which would explain the extra dependencies. I may have a solution for this, but I will need to run a couple of tests.
I still think two scans would be best in this case. When you run pgscan those two times (once for npm and once for NuGet), configure each scan to push its results to the same SCA project in ProGet; this will append the new dependencies to the project. That way, when you export the SBOM from ProGet, only one SBOM is generated, including all the related dependencies (npm and NuGet). A sketch of this follows below.
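As a rough sketch of what that could look like from a build script (argument names here are from memory and may differ by pgscan version, so please check pgscan's help output; the URL, key, and project values are placeholders):

pgscan identify --type=nuget --input=MyApp.csproj --project-name=MyApp --version=1.2.3 --api-url=https://proget.example.com --api-key=secret
pgscan identify --type=npm --input=package-lock.json --project-name=MyApp --version=1.2.3 --api-url=https://proget.example.com --api-key=secret

Because both runs use the same project name and version, the second run appends the npm dependencies to the same release rather than replacing the NuGet ones.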
Thanks,
Rich
Hi @caterina,
I was able to chat with the team and here was our consensus:
- auto type and scanning for NuGet and npm dependencies: the node_modules folder is scanned and dev dependencies are omitted
- npm type and a package-lock.json file is specified: only that lock file is scanned
- npm type and a package-lock.json file is not specified: the node_modules folder is scanned and dev dependencies are omitted (they can be included with --include-dev)

The thought is that this lines up with the other SBOM scanners' defaults and handles any hidden dependencies in the node_modules folder. It also handles the case of scanning only package-lock.json, since you can explicitly specify it.
How does this sound to you?
Thanks,
Rich
Hi @caterina,
Thank you for that explanation; it makes it much clearer how and what is being included. I did some more research on this topic as well, and it looks like whether dev dependencies should be included in the SBOM varies from environment to environment; there does not seem to be a definitive best practice. Furthermore, the CycloneDX implementation of the dependency scan has options for what to scan:

- package-lock-only: whether to only use the lock file, ignoring node_modules
- omit: dependency types to omit from the installation tree

So, to summarize, their defaults are to scan the node_modules folder but omit the dev packages when building a production package, and I'm inclined to make that the default for pgscan as well. The pgscan library is meant to be a lightweight alternative; when more complex scans are needed, we suggest using a tool like CycloneDX to generate an SBOM and uploading that file to ProGet.
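For example, with the CycloneDX npm tool, a scan using those defaults (and a lock-file-only variant) would look something like this; the output file name is arbitrary:

npx @cyclonedx/cyclonedx-npm --omit dev --output-file bom.json
npx @cyclonedx/cyclonedx-npm --package-lock-only --omit dev --output-file bom.json

The resulting bom.json could then be uploaded to ProGet as the project's SBOM.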
What are your thoughts on those defaults for pgscan? I will also discuss this internally with the team and post back what our thoughts are.
Thanks,
Rich
Hi Caterina,
No problem! This is a good catch! Please let me know what you do to resolve this. I'm thinking the node_modules scan may be more helpful in situations like this. If that package is being released (even if by accident), it makes sense that it is reflected in the SCA project. Let me know your thoughts on that as well.
Thanks,
Rich
Hi @caterina,
Just for some background: in pgscan, if the type is not specified or is set to auto and .NET is detected, it will scan for both .NET and npm package dependencies and include them in the SBOM. When a type other than auto is specified, pgscan will only scan for dependencies of that type. If you run two or more scans with pgscan, each scan appends its new packages to the SCA project in ProGet, allowing you to add different dependency types as needed.
I know we discussed this with your team on issue #27 in the GitHub repository and determined there were no actual differences. Are you able to provide an example case where there are differences?
Just so other users can see a snippet of the conversation:
That is a fair point to make. My thought was that including the node_modules folder in the recursive search would let us include the child dependencies used by installed packages that were not marked as dependencies in the npm package. But in my research and testing, I have found that the package-lock.json at the root of the node_modules folder includes a subset of the data in the main package-lock.json, so no extra information was added. Do your package-lock.json files under the node_modules folder have additional information the parent doesn't? Also, do your packages in that folder have a package-lock.json outside of the root of that folder?
Looking at the hidden lockfile documentation, the information in that file should be redundant, as it is only used to improve performance; but if the node_modules tree is manually changed by something other than npm, the lock file is ignored (and should probably be removed anyway). I'm inclined to just exclude files from the node_modules folder as you suggest.
I can confirm your observation. There is no extra information in our package-lock.json files under the node_modules folder. Further, we do not have any additional package-lock.json files outside of the root folder.
We have created a low-priority issue #30 to remove the node_modules scan in the future, but it has not been prioritized based on the details in issue #27. If this truly is causing an issue, we can prioritize it, but I would be interested to understand why your node_modules folder detected more dependencies.
Thanks,
Rich
Hi @w-repinski_1472,
We currently do not have any mechanism for alerting the user when an extension update is available. Our guidance is to check extensions for updates after a product upgrade or when instructed by support. In this case, it was my fault for not alerting you to upgrade the extension to fix this issue. I'm sorry for that, and I will make sure it does not happen again.
Also, many extensions are included in the install package, and those are updated automatically when the product is upgraded. So this won't be a problem with most extensions; it just so happens that the Clair extension is not an included one.
Please let me know if you have any other questions for us.
Thanks,
Rich
Hi @w-repinski_1472,
You can see extension updates by navigating to Administration > Extensions; an information block at the top of the page will indicate when extension updates are available. For the actual changes made in version 2.0.1 of the Clair extension, you can view the 2.0.1 milestone in GitHub for our Clair extension.
Thanks,
Rich
Hi @w-repinski_1472,
Can you please ensure that your Clair extension is updated to 2.0.1 as well? Some of the fixes required changes to the Clair extension directly. The two issues that were fixed in the extensions:
Looking at your screenshots, it seems those issues will be fixed with an extension update.
Thanks,
Rich
Hi @w-repinski_1472,
Thanks for the additional input. To expand this a bit more:
Our auto-assessment uses the CVSS score from the vulnerability to determine which assessment to apply automatically; the assessment type is then displayed as the label. It looks like there is a bug both in the CVSS score returned from the Clair extension and in the stored procedure that saves that information back to ProGet. If you look at the execution log from the original ticket you submitted to us, you can see at the end that an error occurred in the merge statement. Those two issues together are causing this.
We will look into what is returned from Clair to get this score set properly. As for the vulnerability overview page, we only show the latest unassessed vulnerabilities there. If you click the "view all" link at the top of that table, the full list will show the assessment information and the score.
We have not seen Clair return this message for a layer that is not a manifest/configuration layer (sorry, I call these metadata layers, which can be confusing). If there is an issue with how the layer is parsed, you will need to submit an issue to Clair, since ProGet only shows the information Clair returns for the parsing. This is how our Clair implementation works:
I think this is also related to the stored procedure I was referring to earlier.
We hope to have these issues resolved soon. I hope this clarifies things for you. Please let me know if I missed anything or if you have more questions.
Thanks,
Rich
Hi @w-repinski_1472,
Thanks for all the information. It looks like this can be broken down into a couple of issues.
The first looks like an issue in the [Vulnerabilities_UpdateExternalVulnerabilities] stored procedure that is preventing the auto-assessment from being saved; if not, it may be an issue with how the vulnerability score is calculated for Clair vulnerabilities. I created a ticket, PG-2443, to track this fix, and it should be addressed in the next maintenance release of ProGet.
As for the "Clair could not process layer..." messages, these are expected. When Clair scans the layers, it looks at all of them, including the metadata layers, and any metadata layer or other data-based layer will show this message. They are safe to ignore as long as you see "request was valid" at the end of the layer process.
When you manually assess the vulnerability, do the assessments show?
Thanks,
Rich
Hi @dan-brown_0128,
The only linkage between the SCA project and the NuGet package/feed (and other feed types when associated) is that the SCA project looks at each associated package's feed to pull the relevant license and vulnerability data for that package. ProGet then displays that information in the SCA project and creates issues if it finds problems (blocked license, blocked due to vulnerability, missing package, etc.).
Thanks,
Rich
Good catch! I created a ticket, OT-494, to track the fix. We should have this released in one of the next two versions of Otter (2022.12 or 2022.13).
Thanks,
Rich
Hi @dan-brown_0128,
Looking at your screenshot, the vulnerabilities associated with your SCA project come from the jQuery package you are using in ScaTestApp. When using the SCA feature in ProGet, your application dependencies are scanned using pgscan (or any other SBOM tool) and uploaded to ProGet. We can then look at each dependency, check it for vulnerabilities, and associate those with the project.
Your last screenshot looks like you uploaded ScaTestApp as a NuGet package. In that case, only the name of the project and the version are sent to OSS Index to see if OSS Index has any known vulnerabilities for that package; it does not look at any dependencies, files, etc. for vulnerabilities.
Please let me know if you have any questions.
Thanks,
Rich
Hi @vishal_2561,
This sounds like it could potentially be an issue with the file becoming locked at the operating system level (an anti-virus scan, a snapshot being taken, etc.). But to be sure, could you please enable "Verbose logging" on your Deploy-Artifact operation and send us the execution log again? That will give us a little more detail about the issue. If that execution is too sensitive, you could email it to us at support@inedo.com with the subject [QA-1175] - Execution log and let us know you sent it.
Thanks,
Rich
Hello,
This is most likely related to PG-2395 (fixed in ProGet 2022.30) and PG-2390 (fixed in ProGet 2023.9). We added support for handling OSS Index removing vulnerabilities from their list. Unfortunately, this brought to light the unreliability of the data returned from OSS Index: vulnerabilities are constantly removed and re-added, which was causing assessments to be cleared out. In PG-2395 and PG-2390, we updated ProGet to only add a comment when we see that OSS Index has deleted a vulnerability, so the assessment is not lost when OSS Index removes it.
Thanks,
Rich
Hi @falam_3608,
That's great to hear. The fix will also be released in ProGet 2023.11. You will not need to roll anything back for the next upgrade; InedoHub handles these changes automatically.
Thanks,
Rich
Hi @falam_3608,
Can you please run the following SQL query on your ProGet database and let me know if that fixes your issue? This will update the NpmFeedPackageTags_Extended view to fix a problem with tags and scoped packages.
IF OBJECT_ID('[NpmFeedPackageTags_Extended]') IS NOT NULL DROP VIEW [NpmFeedPackageTags_Extended]
GO
CREATE VIEW [NpmFeedPackageTags_Extended]
AS
SELECT NFPT.*,
       PNI.[PackageGroup_Name],
       PNI.[Package_Name]
  FROM [NpmFeedPackageTags] NFPT
       INNER JOIN [PackageNameIds] PNI
               ON PNI.[PackageName_Id] = NFPT.[PackageName_Id]
              AND PNI.[PackageType_Name] = 'npm'
GO
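If it's easier to run from a command line, something like this works (server name, database name, and script path are placeholders for your environment):

sqlcmd -S localhost -d ProGet -i update-npm-tags-view.sql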
Thanks,
Rich
Hi @falam_3608,
Thanks for letting us know what you found. I'm looking at that query now, and I think I see the issue. But to confirm, is this error happening for all packages or just packages with a scope (e.g., @myscope/package)?
Thanks,
Rich
Hi @scroak_6473,
Have you tried restarting your container? Also, what version of InedoCore did you attempt to install?
Thanks,
Rich
Hi @osnibjunior,
Glad to hear this is working now! I think you are right about the circular reference issue. That is most likely what caused the issue initially.
Thanks,
Rich
When you say that a user in the group cannot authenticate, can you describe what happens? Are the users constantly prompted to log in or do you see an error?
Thanks,
Rich
Hi @guyk,
It looks like jetstack does not require Helm packages to use SemVer 2 for their package versions. We have a flag to bypass packages with an invalid version or metadata, but it was hidden in ProGet 2023. I have added it back as part of ticket PG-2376, and it is set to be released in ProGet 2023.7 this Friday. I can push out a pre-release version of ProGet to get you the fix faster if you would like.
Thanks,
Rich
Looking at the code for v2022.10, nothing apparent sticks out as to why this wouldn't work. Just to verify: you do not have any other credentials (or legacy credentials) with the same name, do you? The only thing I see in the code relates to how things are returned and sorted by the function; if there happened to be two credentials with the same name, it could return the wrong credential, which would have the wrong value set.
Thanks,
Rich
Hi @MF-60085,
It looks like there is a bug in the migration process when you select a specific feed. If you re-run the migration with no feeds selected, it will run for all feeds that still need to be migrated, and that message should go away. We will fix the per-feed retry migration in ProGet 2023.6, which is due out this Friday.
Thanks,
Rich
Hi @MF-60085,
Can you please try upgrading to ProGet v2023.5? We have fixed multiple migration issues since ProGet 2023.0, including the MERGE statement error when handling NuGet packages.
Thanks,
Rich
Can you please try downgrading to a previous version of v2023 and then upgrading to the latest version again? Also, can you please verify that the web site in IIS is pointing to C:\Program Files\ProGet\Service?
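If it helps, a quick way to check the site's physical path from PowerShell (assuming the WebAdministration module is available on the server):

Import-Module WebAdministration
Get-Website | Select-Object Name, PhysicalPath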
Thanks,
Rich
Hi @MF-60085,
You can attempt to run the migration again (it won't hurt any existing data or your ability to roll back), and it may resolve the issue. What version of ProGet did you upgrade to?
Also, it may be helpful to email us the migration log so we can review it in more detail. You can send it to support@inedo.com, use the subject [QA-1121] Migration Log, and let us know you sent it so we can keep an eye out for it.
Thanks,
Rich
Sorry about that. It looks like the article was changed but not published. I have published the update.
Thanks,
Rich
The validity check of a certificate in ProGet primarily verifies that the certificate itself is valid, not whether it is valid for ProGet. Any self-signed or internal-domain certificate will be invalid by default unless the certificate or its certificate authority exists in the trusted root store on your server. If it is a purchased certificate, I would check that the certificate's chain is properly installed on your server. If your certificate is valid but requires a custom certificate chain (many do), that chain will need to be installed on the server for ProGet to validate it properly. A .pfx certificate does not store the certificate chain internally in the file. The browser handles validation slightly differently, which is most likely why it seems to work in the browser.
When it comes to the .pem file, there are many ways to generate it, but I'm guessing the certificate chain was stored internally in the .pem file, which then does not require the certificate chain to be installed on the server.
I'm speculating about the certificate chain in these cases because seeing why your certificate is not valid requires more than the screenshots you provided; I would need to see the certificate itself to truly validate this.
Lastly, when it comes to using a .pem file, .NET tends to be very picky about its format and is not as forgiving as other frameworks. If you look in the "HTTPS Binding to a Port (Advanced) (Experimental)" section of our HTTPS Support on Windows documentation, we have instructions on how to create a .pem file from a .pfx. I'm not sure if that is what you followed, but it is the simplest way we have found to generate a .pem file that works with .NET.
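For reference, the .pfx-to-.pem conversion typically boils down to an openssl command along these lines (file names are placeholders; the documentation above has the exact steps we recommend):

openssl pkcs12 -in certificate.pfx -out certificate.pem -nodes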
Hope this helps!
Thanks,
Rich
Alana was correct, the change was not merged into the 2023 release. The fix, PG-2350, will be released on Friday in ProGet 2023.4. If you need it earlier than Friday, I can push a pre-release version of ProGet 2023.4 for you. Please let me know!
Thanks,
Rich
Hi @jw,
Does this only happen with that one .snupkg or all of them? Also, could you please verify that the Symbol Server is still enabled on your feed for the "Standard (.snupkg format)" support?
Thanks,
Rich
The HTTP/S & Certificate Settings page updates the ProGet.config file, commonly stored at C:\ProgramData\Inedo\SharedConfig\ProGet.config. As long as the inedoprogetwebsvc Windows service's account has write access to that file, the page will be able to save. On fresh installs, this typically works without requiring changes. My guess is that something was changed with the executing user or the server permissions that is preventing write access to that configuration file.
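If write access does turn out to be the problem, granting the service account modify rights on the file should let the page save again; for example (the account name here is a placeholder for whatever account the inedoprogetwebsvc service runs as):

icacls "C:\ProgramData\Inedo\SharedConfig\ProGet.config" /grant "DOMAIN\ProGetSvcAccount:(M)"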
You can also manually set up HTTPS by editing the ProGet.config file directly. See the "HTTPS Binding to a Port (Advanced) (Experimental)" section of our HTTPS Support on Windows documentation for the different options.
Thanks,
Rich
Hi @msimkin_1572,
Can you please navigate to your Manage Feed page and verify that JSON-LD (v3) is enabled under the Supported API?
Thanks,
Rich
Hi @msimkin_1572,
Can you please verify that you enabled the standard symbol server on your feed? Also, can you please verify that the .snupkg file exists at the same location and with the same name (minus the extension) as the .nupkg before you run nuget push?
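For reference, as long as the .snupkg sits next to the .nupkg, a single push should pick both up (the feed URL and API key here are placeholders):

nuget push MyPackage.1.2.3.nupkg -Source https://proget.example.com/nuget/my-feed/v3/index.json -ApiKey secret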
Thanks,
Rich
Happy to hear the config change fixed your issue on the Clair container. In ProGet v2022, we moved the feed vulnerability source to the Reporting & SCA > Vulnerabilities > Configure Vulnerability Download Blocking page; you should be able to wire it up from there. I'll make sure to update our documentation with these changes as well.
Thanks,
Rich
Can you please tell me which version of ProGet you are running? Are you able to edit your vulnerability source in ProGet?
Thanks,
Rich