Posts made by dan.brown_0128
-
RE: Debian feed mirror Performance
@stefan-hakansson_8938 No, we haven't had any more luck and have put the Debian mirror back on the back burner for the time being.
-
RE: nodejs.org proxy
Unfortunately we cannot share our scripts due to intellectual property rights. The hint I can give, though, is to note the format of the `index.json` in the public dist and maintain that in your private dist.
-
RE: nodejs.org proxy
Being able to mirror the Node.js distribution listing natively would be convenient. Currently we're having to do it by running a script daily that syncs the Node.js dist with an assets directory in ProGet.
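For anyone curious, here is a minimal sketch of what such a daily sync can look like, assuming curl and jq are available. The ProGet URL, asset directory name, and API key are hypothetical placeholders, and only the linux-x64 tarball is mirrored for brevity:

```
#!/usr/bin/env bash
# Sketch: mirror the nodejs.org dist listing into a ProGet asset directory.
# PROGET, ASSET_DIR, and API_KEY are placeholders -- adjust for your instance.
set -euo pipefail

PROGET="https://proget.corp"
ASSET_DIR="nodejs"
API_KEY="..."

# Fetch the official listing and re-publish it verbatim so clients see the same shape.
curl -fsSL https://nodejs.org/dist/index.json -o index.json
curl -fsS -X PUT -H "X-ApiKey: $API_KEY" \
  --data-binary @index.json "$PROGET/endpoints/$ASSET_DIR/content/index.json"

# Mirror any release tarballs we don't already have (linux-x64 only, for brevity).
for v in $(jq -r '.[].version' index.json); do
  f="node-$v-linux-x64.tar.gz"
  # HEAD request succeeds only when the asset already exists.
  if ! curl -fsI "$PROGET/endpoints/$ASSET_DIR/content/$v/$f" >/dev/null 2>&1; then
    curl -fsSL "https://nodejs.org/dist/$v/$f" -o "$f"
    curl -fsS -X PUT -H "X-ApiKey: $API_KEY" \
      --data-binary @"$f" "$PROGET/endpoints/$ASSET_DIR/content/$v/$f"
    rm -f "$f"
  fi
done
```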
-
RE: Debian feed mirror Performance
To share with the community some things learned from a private support ticket -- there's currently a defect in the logic where, if no packages were downloaded/uploaded to a Debian-type feed, the in-memory cache is never started. This leads to very long runtimes when running `apt update`, since the response times for the `/Release` and `/InRelease` endpoints (from our experience) are at best 1 minute, at worst 10 minutes.

The workaround support gave was to manually download a package via HTTP before attempting to use apt. From our experience, this doesn't persist overnight, so it would basically need to be done daily and truly is counterintuitive to having a package manager like apt.
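For anyone hitting the same thing, the warm-up itself is easy to script; a rough sketch, where the feed URL and package path are hypothetical (any package download from the feed should do):

```
# Rough sketch of the daily cache warm-up: one throwaway package download
# before working hours. Feed URL and package path are placeholders.
# Example cron entry:
# 0 5 * * * root /usr/local/bin/proget-deb-warmup.sh
curl -fsSL -o /dev/null \
  "https://proget.corp/debian/my-deb-feed/pool/bookworm/main/all/bcal/bcal_2.4-1_all.deb"
```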
-
RE: Pagootle: pgutil, but PowerShell
Are there plans to replace the database calls with API functions to secure database controls and access?
-
Debian feed mirror Performance
Is anybody else attempting to use ProGet's Debian feeds to mirror the official Debian repositories for use in isolated environments?
I am curious what sort of performance others are seeing in that use case. Our prior tests found that mirroring via Proget was substantially slower than the official mirrors.
-
RE: OCI support?
I do get your point of how versioning/tags are a bit different between traditional package artifacts vs container images. It can definitely have good and bad effects.
In regards to tag mutability, you did bring up one method: using the SHA digest, since that will always point to a specific object. If the object changes, its hash changes. But humans don't like hashes, so tags were created.
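For illustration, the difference in practice (the digest value below is a hypothetical placeholder):

```
# A tag is a mutable pointer; a digest pins one exact manifest.
docker pull nginx:1.27               # the tag may resolve to different content over time
docker pull nginx@sha256:0123abcd... # hypothetical digest; always the same content
```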
Looking around, it does look like some registries actually support immutable tags (e.g. ECR, Harbor). Some tags make sense to be excluded from immutability -- like `latest` -- since those are dynamic by intention.
-
RE: OCI support?
Bringing this thread back alive some to link out to: https://forums.inedo.com/topic/5364/oci-support-for-assets/
-
RE: OCI Support for Assets
Hey @stevedennis -
From that quote, I get the sense that the container feeds were implemented by trying to force the process into what was built for traditional file-based packages, rather than treating container images as a different style of artifact. This would explain why it seems OCI isn't a direct drop-in fit -- because other traditional package consumers (pypi, nuget, maven, etc) do not behave the same as OCI.
As for the scalability comment - I can see your point if you tried to cram traditional package types into OCI where the client doesn't support that. Could clients like nuget support OCI for downloading the nupkg files? Sure. But they haven't so far as I know.
On the shock that for container images you have to provide the URL to access the image -- that's not all that different from traditional software packages. With those traditional package tools, you're just specifying that URL prefix as part of the client's configuration, and the client may add some "middle parts" to that URL. And actually you do run into that some with docker: if you take the common `nginx-ingress` image hosted on Docker Hub, the image name is just `nginx/nginx-ingress`, which works just fine in docker and kubernetes deployments. Secretly, that full image URL parses out to `docker.io/nginx/nginx-ingress`.

Last one -- the URL thing. Take a look at some other common registries, including the public GitHub container registry. For those, the URL is always `ghcr.io`, but they then prefix the image with the user/org (eg: https://github.com/blakeblackshear/frigate/pkgs/container/frigate/394110795?tag=0.15.1). Truthfully, ProGet could do a similar approach: `proget.corp/MyOciOrDockerFeed/image/path/here:1.2.3`
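To make the expansion concrete, a few illustrative pulls with the standard docker CLI (the last line shows the hypothetical ProGet form, not an existing feature):

```
docker pull nginx                # expands to docker.io/library/nginx:latest (official image)
docker pull nginx/nginx-ingress  # expands to docker.io/nginx/nginx-ingress:latest
docker pull ghcr.io/blakeblackshear/frigate:0.15.1  # fully qualified, no expansion
docker pull proget.corp/MyOciOrDockerFeed/image/path/here:1.2.3  # hypothetical ProGet form
```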
-
RE: OCI Support for Assets
Hi @stevedennis -- Thanks for your reply. I would, however, like to make some clarifications on some of the points you've made.
OCI is actually used for more than just transporting Container images -- in fact, helm charts are typically pulled and queried over the OCI protocol when set up as a proper helm repository. Ultimately helm charts are just a collection of files in a tarball (tgz). It is the industry norm to use OCI to interact with these repositories and query/pull chart versions and data.
Beyond container images and Helm charts, the Open Container Initiative is specifically "designed generically enough to be leveraged as a distribution mechanism for any type of content" (https://opencontainers.org/about/overview/) (emphasis added). While OCI is predominantly for container technology, they do expressly state it is also designed for other content distribution.
Implementing OCI support inside of Proget is ultimately an Inedo decision, and we respect that business decision. However, we hope you have all the facts before making that call, particularly that OCI is actively used across the artifact management segment. As a prime example, Microsoft, in their Azure Container Registry supports container images, Helm charts, SBOM, scan results, etc in their OCI compliant registry (https://learn.microsoft.com/en-us/azure/container-registry/container-registry-manage-artifact). JFrog also supports various artifacts, WebAssembly modules, tar files, etc in their product (https://jfrog.com/blog/oci-support-in-jfrog-artifactory/). Cloudsmith supports similar additional artifacts in their OCI compliant registries (https://cloudsmith.com/blog/securely-store-and-distribute-oci-artifacts).
While you did hint at the initial intention of OCI, the industry has seen benefits and done exactly what OCI wanted, "enable innovation and experimentation above and around it" (https://opencontainers.org/faq/#what-are-the-values-guiding-the-oci-projects-and-specifications).
You might check out this blog article too (https://www.loft.sh/blog/leveraging-generic-artifact-stores-with-oci-images-and-oras) that even provides an example of pushing/pulling generic files (with custom filenames) to an OCI registry. You are correct that registry entries can be tagged as desired and those tags can be set/modified (depending on approved user permissions), just as your custom upack packages can be 'repackaged' - keeping the same content but providing a new pointer label.
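As a concrete illustration of charts and generic files moving over OCI (the registry host, chart name, and file names here are hypothetical; the commands themselves are standard helm and ORAS CLI usage):

```
# Helm charts pushed/pulled as OCI artifacts (supported since Helm 3.8):
helm push mychart-1.2.3.tgz oci://registry.corp/helm-charts
helm pull oci://registry.corp/helm-charts/mychart --version 1.2.3

# Generic files pushed/pulled with ORAS, as in the article above:
oras push registry.corp/files/myfiles:2024.1 ./notes.txt ./patchset.zip
oras pull registry.corp/files/myfiles:2024.1
```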
Lastly we are aware of the BuildMaster product and have reviewed it, but we do not have any plans to switch from OctopusDeploy in the near future.
-
RE: SAML Authentication with Microsoft Active Directory
Have you found a solution to disabling users in Proget when their access has been removed, or updating their groups when updated on the backend (without needing re-login to the web app)? We're running into challenges with keeping API Key access in sync with the user's actual access since Proget's internal user database doesn't synchronize against our identity backend.
-
OCI Support for Assets
Would it be possible for Asset Directories to be accessible via OCI? This could actually remedy one of our largest challenges with using assets with our deployment tool (Octopus Deploy), but would be usable for other tools.
Currently in order to select a 'package' and version in Octopus, it must be one of the following feed types -- so we can't use upack as an option:
As a result, we often create NuGet packages for deployment assets even though the contents may not be C#-based. For example, we receive patch sets from a third-party vendor (they are zip archives of Java patches) and they can be quite large (multi-gigabyte). We're uploading them into Assets, but in our CI process we create a NuGet package so they can be selected on the deployment side.
By using OCI, we could keep the patches in the Asset directory and not have to repackage them into a NuGet (saving duplicated storage costs). Then on deployment, we could simply pick the proper version of the asset files (eg patch v2024.1, v2024.4, v2024.10, v2025.1, etc) and the deployment tool could interact with Assets natively over the OCI protocol.
All in all this would give us a similar experience to Helm and Docker chart/image version selection (currently using Azure Container Registry for OCI compliance) but for generic assets.
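A rough sketch of what that selection could look like with an OCI-capable client such as ORAS -- the feed URL and tags are hypothetical, since no such Assets endpoint exists today:

```
# CI: publish the vendor patch set as a versioned OCI artifact.
oras push proget.corp/assets-oci/vendor-patches:2024.10 ./patchset-2024.10.zip

# CD: the deployment tool pulls exactly the version selected for the release.
oras pull proget.corp/assets-oci/vendor-patches:2024.10
```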
-
RE: OCI support?
I hadn't realized that the docker and helm feeds weren't OCI compatible. We're currently using Azure Container Registry for our docker/helm storage, but have been considering changing that as our usage grows and we're needing to tackle the Vulnerability/License compliance management.
In order to use the Proget docker/helm feeds as a drop in replacement, we'd absolutely need these to be OCI compliant and usable over the OCI Protocols.
-
RE: ProGet: Auto package promotion from NuGet mirror?
In theory, having a filtered feed and promoting from that would work -- at least it would accomplish the same result.
The initial creation of the filters, at least in our case, would be quite lengthy (we have ~2500 packages in our approved feed today). Also, how scalable is the feed filtering? I could see the risk of performance issues as the number of packages in the filter grows.
I need to double check our policy/procedure on one specific point. If package X is approved at version Y, I am not certain that versions prior to Y are inherited in the approval. I know future ones are, and I think prior ones would be, but I need to verify.
-
RE: ProGet: Auto package promotion from NuGet mirror?
To piggyback on this -- this has been an idea we've been interested in as well. As part of our corporate policies, generally once a package has been approved (at a name level), all subsequent versions are OK assuming there's no vulnerabilities or license issues. Denying a request for this is very rare.
By automating version promotion, it would allow developers access to newer versions of packages sooner, making access easier and devs will be more likely to upgrade.
We thought about doing this by filters, but managing that list would quickly get out of hand.
This isn't a critical functionality for us today, more of a nice to have.
-
RE: SCA Feedback/suggestions 2024
The reason for blocking application packages like this, that have vulnerable dependencies, comes down to security governance. While yes, it can be inconvenient if a package that has worked before is now blocked, that does prevent us from introducing known vulnerabilities into our environment. In the event of an emergency deployment (ex prod rollback, etc), we could apply a temporary exemption to allow the package to still deploy -- after doing a risk assessment.
To use the log4j example again -- if we have an application that was built with a vulnerable version of log4j, nobody would want that package to get deployed again (while also remediating it anywhere it was already deployed). From what I can tell, the most effective way would be to block the download from ProGet -- if we can leverage its automatic blocking functions.
Adding the `audit` into the deployment process is definitely one way to partly add this security layer, but it'd require active implementation for all of our deployments and opens more opportunity for teams to skip or work around it. Basically it's better than nothing, yes, but it's not the most effective security enforcement measure.
-
RE: SCA Feedback/suggestions 2024
For #3 (which unfortunately is probably the biggest potential win for us), I might have an idea...
Currently when editing the Project's settings, you can specify Feeds -- which I'm understanding are the feed(s) that provide the packages outlined in the SBOM (ie the dependency source from the build).

What if at that level it was broken out to input/output feeds? The artifact `MyCorp.Package`, which resulted in the SBOM, was stored in the "internal-apps" feed, and it was built using dependencies in the "approved-nuget-feed". Now, this would assume that the project name and resultant package name are the same, but that shouldn't be too unreasonable.

Again at a high level, what we're wanting is to be able to block our custom-built packages if their dependencies (identified by SBOM) become vulnerable (or are otherwise deemed noncompliant). Since these packages will never be known by any OSS index or ProGet Security, they wouldn't be flagged by traditional vulnerability means, and that's why the SCA/SBOM is useful.
-
SCA Feedback/suggestions 2024
In re-evaluating the SCA functions of ProGet 2024, we've come up with a few functionalities that would be big benefits. Posting it publicly in case others have similar thoughts.
-
Be able to add a link to our CI build and CD deployment on a build's release (for Inedo, this was mentioned in EDO-10735)
This could probably be as simple as adding two optional fields in the "Create Build" modal and/or in the build promotion process. Of course this would also want/need to be implemented into the pgutil CLI.
-
Dependent Package's 'Usage in Projects' should show latest version in addition to prior versions (EDO-10736)
It'd be helpful to quickly see all versions of an SCA project that reference a specific version of a dependent package. Right now it's labeled for the Latest version, although on our instance it appears to not correctly show the latest.
-
Link between SCA Build and the artifact
Bringing back up this topic from last year. Having a way to link an SCA Project/build back to the artifact that corresponds to the full BOM would be great, especially if it shows the vulnerabilities of the package discovered from the BOM analysis. This would let us block downloading of packages we've built that may now contain vulnerabilities that have come up over time.
I wrote up a more narrative version of what I'm thinking:
- Imagine we have MyCorp.Package v1.0.0 which is a nuget stored in the internal-apps feed
- It has dependencies of Ext.Dep.1.1.1 and Ext.MS.Dep2.2.2 which are in the approved-nuget feed
- We upload the SBOM
- If you go to Ext.Dep or Ext.MS.Dep in approved-nuget feed you will see that each is used in the MyCorp.Package project (makes sense, those are dependencies of the package).
- But if you go to MyCorp.Package in internal-apps, you don't see it used in a project, since it's not a dependency, but rather is the project itself. Wouldn't it make sense for this to link? That way, if your package has a vulnerable dep (and the project shows vulnerabilities), you can see that at the top level (and maybe block downloading the application's package).
-
-
Infrastructure As Code Scanning -- Azure ARM/Bicep
I've noticed that other Artifact management systems are offering products for IaC Scanning -- looking for vulnerabilities, known misconfigurations, best practices, etc. This could be a feature that would be a nice supplement to ProGet.
Most of what I've seen has the IaC scanning centered around Terraform templates. I'm using ARM/Bicep since we're completely Azure based for cloud presence.
I'm not 100% sure how the other products are doing IaC scanning -- it may be along the lines of scanning a Terraform Private Registry, ensuring that modules there are up to standards. With Bicep/ARM, Azure provides Template Specs as a manner of storing and cataloging private modules that can be imported into templates. I don't think it's prudent to reinvent Template Specs in ProGet, but it would be nice to have best-practice and quality checks made against ARM/Bicep templates stored inside of artifacts in ProGet.
In our instance, the templates are typically stored inside of a nuget package, although a universal package could be used instead.
Are there any others looking into IaC scanning, particularly in an Azure + ARM/Bicep organization?
-
RE: NPM Package name case sensitivity
I wonder if it's an issue if the old `JSONPath` was pulled locally first, then `jsonpath` was pulled. Right now, we just have a workaround by disabling `npm audit` support in ProGet, which allows our builds to run, but that disables a legitimate feature because of this one edge case.

My guess is that some DB cleanup would need to happen to purge `JSONPath` and `jsonpath` from all memory?
-
RE: NPM Package name case sensitivity
Thanks for the details. We did delete `jsonpath` and `JSONPath` from our local feed, but once ProGet caches the `jsonpath` package locally, it starts to display as `JSONPath` on the ProGet site. Is there a way to force the site to recognize it as `jsonpath` instead, since v1.1.1 is the only one locally?
-
RE: NPM Vulnerability with Exception
The vulnerability has already been assessed as "Not Applicable" and that's where the `info` comes up, as shown by your code snippet. The problem is that npm does not have `info` as a classification and cannot parse it.
-
RE: NPM Package name case sensitivity
Part of the issue here is that the results from ProGet do not match those of the public registry. Yes, it is an edge case where there are two packages with the same name but different capitalization, but it's also a fairly common package.

npm audit is a step that is run by default as part of `npm install` and other standard functions. This is to advise of package updates, end of support, and vulnerabilities by priority. Simply replacing npm with pgutil is not a small feat, especially with standard common tooling already in use across the field.

The fact that our builds fail because of the mismatch is problematic, and shows that two unique packages are being mixed by ProGet. The audit caught it and prevented the two from being crossed.
-
NPM Vulnerability with Exception
We ran into an issue when an npm package has a vulnerability, but an exception has been made to allow the package download. The example we have was `static-eval` v2.1.1.

We have an exception on the PGV-2133354 vulnerability since it has been withdrawn. `npm audit` and `npm install` fail because the severity level of the vulnerability with exception is `info`, which is not a supported level by npm. The output from npm is:

npm ERR! undefined is not iterable (cannot read property Symbol(Symbol.iterator))

NPM Audit Severity Levels

And you can see in the NPM Audit code that it does not expect `info` as a severity level.

Output from ProGet npm/v1/security/advisories/bulk endpoint:

Right now we are NOT able to build with this package without disabling vulnerability detection, which is not permitted by corporate policy. How can we correct the output from ProGet so that npm will successfully build?
-
NPM Package name case sensitivity
We recently had users having issues if their npm project depended on the `jsonpath` package. Our npm feed uses a connector to the public npm registry, and we had pulled version 1.1.1 to our local ProGet. Apparently when this was done, the package was saved to ProGet as `JSONPath`.

This resulted in the packument data pull in `npm audit` failing, since the metadata JSON (at https://progetserver/npm/feedname/jsonpath) had the name `JSONPath` but the package name is all lower case.
NPM is very clear in their documentation that package names should always be all lowercase because of case sensitivity differences between OSes.
How can we fix ProGet so that we can pull the jsonpath package locally but keep the proper naming convention?
-
RE: Debian Feed (New) connector errors
I was finally able to try this with a mounted network drive (using SysInternals), and that works.
-
RE: Debian Feed - Package can only be downloaded by apt once
Quick update -- just installed 2023.30 in our test environment. At first I was still not able to download packages with `+` in the version (despite the index being correct). After clearing the feed and re-promoting packages, it does seem to work now!
-
RE: Debian Feed - Package can only be downloaded by apt once
Oh, and also to note: there is no network traffic (captured by our networking team) or from IIS logs when attempting to `apt install` unixodbc-common from ProGet (pulled to feed). Maybe apt is thinking that filename is invalid and skips it.
-
RE: Debian Feed - Package can only be downloaded by apt once
I'm starting to think this is related to package versions with a `+`. We can keep going down the road of `unixodbc-common_2.3.11-2+deb12u1_all.deb` for example.

If I disable the cache and do not pull packages to ProGet, I can download/install just fine.

If I promote the package to another feed, I cannot install it. BUT if I try a package without a `+` in the version/filename in that second feed, it does install or at least resolve (ex: bcal).

There is one big difference in the index that might be the key... The original/upstream Debian index does not urlencode the `Filename`, but it looks like ProGet does:

Debian upstream:
Filename: pool/main/u/unixodbc/unixodbc-common_2.3.11-2+deb12u1_all.deb

ProGet:
Filename: pool/bookworm/main/all/unixodbc-common/unixodbc-common_2.3.11-2%2Bdeb12u1_all.deb
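For reference, the `%2B` is just the percent-encoded `+`; a quick way to reproduce the encoding, assuming jq is available:

```
# '+' is a reserved URI character, so percent-encoding turns it into %2B.
printf 'unixodbc-common_2.3.11-2+deb12u1_all.deb' | jq -sRr '@uri'
# -> unixodbc-common_2.3.11-2%2Bdeb12u1_all.deb
```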
-
RE: Debian Feed - Package can only be downloaded by apt once
Dockerfile used:
```
FROM privatedockerrepo/python:3.8-slim

RUN apt update
RUN apt install -y wget gnupg2 apt-transport-https apt-rdepends
RUN echo 'deb https://user:pass@proget.local/debian/import-debian bookworm main contrib non-free' | \
    tee /etc/apt/sources.list.d/proget-approved-deb.list
RUN wget -qO - https://user:pass@proget.local/debian/import-debian/keys/import-debian.asc | apt-key add -
RUN mv /etc/apt/sources.list.d/debian.sources /etc/apt/sources.list.d/debian.sources.old
RUN apt update && apt install unixodbc-common

CMD ["bash"]
```
Output from first run:
```
PS> docker build .
[+] Building 42.5s (15/15) FINISHED                                docker:default
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 1.17kB
 => [internal] load metadata for privatedockerrepo/python:3.8-slim
 => [auth] python:pull token for privatedockerrepo
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [internal] load build context
 => => transferring context: 71B
 => [1/7] FROM privatedockerrepo/python:3.8-slim
 => [2/7] RUN apt update
 => [3/7] RUN apt install -y wget gnupg2 apt-transport-https apt-rdepends
 => [4/7] RUN echo 'deb https://user:pass@proget.local/debian/import-debian bookworm main contrib non-free' | tee /etc/apt/sources.list.d  0.0s
 => [5/7] RUN wget -qO - https://user:pass@proget.local/debian/import-debian/keys/import-debian.asc | apt-key add -
 => [6/7] RUN mv /etc/apt/sources.list.d/debian.sources /etc/apt/sources.list.d/debian.sources.old
 => [7/7] RUN apt update && apt install unixodbc-common
 => exporting to image
 => => exporting layers
 => => writing image sha256:...
```
Output from second run:
```
PS> docker build .
[+] Building 42.5s (15/15) FINISHED                                docker:default
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 1.17kB
 => [internal] load metadata for privatedockerrepo/python:3.8-slim
 => [auth] python:pull token for privatedockerrepo
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [internal] load build context
 => => transferring context: 71B
 => [1/7] FROM privatedockerrepo/python:3.8-slim
 => [2/7] RUN apt update
 => [3/7] RUN apt install -y wget gnupg2 apt-transport-https apt-rdepends
 => [4/7] RUN echo 'deb https://user:pass@proget.local/debian/import-debian bookworm main contrib non-free' | tee /etc/apt/sources.list.d  0.0s
 => [5/7] RUN wget -qO - https://user:pass@proget.local/debian/import-debian/keys/import-debian.asc | apt-key add -
 => [6/7] RUN mv /etc/apt/sources.list.d/debian.sources /etc/apt/sources.list.d/debian.sources.old
 => ERROR [7/7] RUN apt update && apt install unixodbc-common
------
 > [7/7] RUN apt update && apt install unixodbc-common:
0.413
0.413 WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
0.413
29.70 Get:1 https://proget.local/debian/import-debian bookworm InRelease [39.1 kB]
30.13 Get:2 https://proget.local/debian/import-debian bookworm/non-free all Packages [53.7 kB]
30.50 Get:3 https://proget.local/debian/import-debian bookworm/contrib all Packages [27.7 kB]
30.74 Get:4 https://proget.local/debian/import-debian bookworm/contrib amd64 Packages [66.1 kB]
31.02 Get:5 https://proget.local/debian/import-debian bookworm/non-free amd64 Packages [125 kB]
31.27 Get:6 https://proget.local/debian/import-debian bookworm/main amd64 Packages [10.6 MB]
36.09 Get:7 https://proget.local/debian/import-debian bookworm/main all Packages [5719 kB]
38.67 Fetched 16.7 MB in 38s (436 kB/s)
38.67 Reading package lists...
38.99 Building dependency tree...
39.08 Reading state information...
39.09 21 packages can be upgraded. Run 'apt list --upgradable' to see them.
39.09 W: https://user:pass@proget.local/debian/import-debian/dists/bookworm/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
39.10
39.10 WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
39.10
39.10 Reading package lists...
39.42 Building dependency tree...
39.51 Reading state information...
39.52 E: Unable to locate package unixodbc-common
------
Dockerfile:16
--------------------
  14 |     RUN wget -qO - https://user:pass@proget.local/debian/import-debian/keys/import-debian.asc | apt-key add -
  15 |     RUN mv /etc/apt/sources.list.d/debian.sources /etc/apt/sources.list.d/debian.sources.old
  16 | >>> RUN apt update && apt install unixodbc-common
  17 |
  18 |     CMD ["bash"]
--------------------
ERROR: failed to solve: process "/bin/sh -c apt update && apt install unixodbc-common" did not complete successfully: exit code: 100
```
-
Debian Feed - Package can only be downloaded by apt once
This is an oddly specific issue I'm finding. Background info -- fully standard, no customizations:
- ProGet is running on Windows Server
- ProGet v2023.29 (Build 17)
- New Debian connector set up with bookworm
- Debian feed called `import-deb` linked to above connector
- Debian bookworm container setup with private ProGet Debian apt source + key, public Debian feed removed

When I build the container for the first time (works on a VM or any other Debian instance), it will successfully download and install the package from ProGet (for discussion, `unixodbc-common`). This is good!

If I do another build or attempt to install the same package on another computer, apt cannot locate the package. This is bad! I can, however, still see and download the package from the ProGet UI and even via curl on the container/VM, ruling out network issues.

If I then delete that package from ProGet and attempt to install via apt -- it works again, but only once. Once the package has been pulled from the public Debian repo and saved into ProGet, it seems apt cannot find it again.
-
RE: Debian Feed (New) connector errors
That's what I suspected. We'll do some more testing on our end about UNC/mounted drives and see what comes up. Are there any debug logs I can get from the ProGet service that might give more info?
-
RE: Debian Feed (New) connector errors
@stevedennis - If we go the route of storing the sqlite DB onto a local disk on each node of a cluster, are there any risks that would come of that? Either packages being out of sync, or slowdowns?
We're still researching the network path on our end
-
RE: Debian Feed (New) connector errors
@stevedennis -- we were able to rule out antivirus and similar. Windows Defender shows no threats or actions, and nothing on our file servers.

We were able to have success by switching the Storage.DebianPackagesLibrary path to a local path (`E:\ProGetPackages\.debian` vs `\\server.corp\qa\progetpackages\.debian`).

With the local path, the connector passes the health check and I see remote packages in the feed. Unfortunately this will not work well for our production use, since we're using an HA setup.

I did try copying the connector folder (and SQLite file) back to the network share and changed the storage setting back to the network share. When I ran the health check, the SQLite file was deleted. Looks to me like there's something happening specific to the Debian2 connector and storage paths using a network location (`\\server\path` notation). Is this something you can validate too?
-
RE: Debian Feed (New) connector errors
I checked on the file share, and the SQLite database is not found. ProGet definitely can write to the directory, and does for all other feeds.
Just to confirm -- even if ProGet is using SQL Server for all other database operations, does the Debian feed connector require SQLite? Our corporate policy currently prohibits the use of SQLite, and that could very well be why we don't see the file on disk.
-
RE: Debian Feed (New) connector errors
@gdivis -- We are running ProGet on Windows
Upping the timeout doesn't seem to make a difference. The "unable to open database file" message is logged almost immediately whenever the connector healthcheck runs.
When viewing a new Deb feed with the connector added, it only shows the message "An unexpected error occurred while listing packages: unable to open database file. Additional information has been logged in the diagnostics center." and doesn't clear (24 hours).
Here's the stack trace that shows in the messages. Not sure what SQLite DB it's trying to access. The main database is SQL Server.

Details:
```
code = CantOpen (14), message = System.Data.SQLite.SQLiteException (0x800007FF): unable to open database file
   at System.Data.SQLite.SQLite3.Open(String strFilename, String vfsName, SQLiteConnectionFlags connectionFlags, SQLiteOpenFlagsEnum openFlags, Int32 maxPoolSize, Boolean usePool)
   at System.Data.SQLite.SQLiteConnection.Open()
   at Inedo.ProGet.Feeds.Debian2.Debian2ConnectorIndex.Open(String fileName, Debian2Connector connector)
   at Inedo.ProGet.Feeds.Debian2.Debian2Connector.GetIndex()
   at Inedo.ProGet.Feeds.Debian2.Debian2Connector.GetAndUpdateIndexAsync()
   at Inedo.ProGet.Feeds.Debian2.Debian2Connector.ListPackagesAsync(Nullable`1 maxCount)+MoveNext()
   at Inedo.ProGet.Feeds.Debian2.Debian2Connector.ListPackagesAsync(Nullable`1 maxCount)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
   at Inedo.ProGet.Feeds.Debian2.Debian2Feed.GetConnectorPackagesAsync(Func`2 getPackages)+MoveNext()
   at Inedo.ProGet.Feeds.Debian2.Debian2Feed.GetConnectorPackagesAsync(Func`2 getPackages)+MoveNext()
   at Inedo.ProGet.Feeds.Debian2.Debian2Feed.GetConnectorPackagesAsync(Func`2 getPackages)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
   at System.Linq.AsyncEnumerable.UnionAsyncIterator`1.MoveNextCore() in /_/Ix.NET/Source/System.Linq.Async/System/Linq/Operators/Union.cs:line 131
   at System.Linq.AsyncIteratorBase`1.MoveNextAsync() in /_/Ix.NET/Source/System.Linq.Async/System/Linq/AsyncIterator.cs:line 70
   at System.Linq.AsyncIteratorBase`1.MoveNextAsync() in /_/Ix.NET/Source/System.Linq.Async/System/Linq/AsyncIterator.cs:line 75
   at System.Linq.Internal.Lookup`2.CreateAsync(IAsyncEnumerable`1 source, Func`2 keySelector, IEqualityComparer`1 comparer, CancellationToken cancellationToken) in /_/Ix.NET/Source/System.Linq.Async/System/Linq/Operators/Lookup.cs:line 105
   at System.Linq.Internal.Lookup`2.CreateAsync(IAsyncEnumerable`1 source, Func`2 keySelector, IEqualityComparer`1 comparer, CancellationToken cancellationToken) in /_/Ix.NET/Source/System.Linq.Async/System/Linq/Operators/Lookup.cs:line 105
   at System.Linq.AsyncEnumerable.GroupedAsyncEnumerable`2.MoveNextCore() in /_/Ix.NET/Source/System.Linq.Async/System/Linq/Operators/GroupBy.cs:line 1094
   at System.Linq.AsyncIteratorBase`1.MoveNextAsync() in /_/Ix.NET/Source/System.Linq.Async/System/Linq/AsyncIterator.cs:line 70
   at System.Linq.AsyncIteratorBase`1.MoveNextAsync() in /_/Ix.NET/Source/System.Linq.Async/System/Linq/AsyncIterator.cs:line 75
   at Inedo.ProGet.Feeds.Debian2.Debian2Feed.GetLatestVersionOfEach(IAsyncEnumerable`1 packages)+MoveNext()
   at Inedo.ProGet.Feeds.Debian2.Debian2Feed.GetLatestVersionOfEach(IAsyncEnumerable`1 packages)+MoveNext()
   at Inedo.ProGet.Feeds.Debian2.Debian2Feed.GetLatestVersionOfEach(IAsyncEnumerable`1 packages)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
   at System.Linq.AsyncEnumerable.SelectEnumerableAsyncIterator`2.MoveNextCore() in /_/Ix.NET/Source/System.Linq.Async/System/Linq/Operators/Select.cs:line 221
   at System.Linq.AsyncIteratorBase`1.MoveNextAsync() in /_/Ix.NET/Source/System.Linq.Async/System/Linq/AsyncIterator.cs:line 70
   at System.Linq.AsyncIteratorBase`1.MoveNextAsync() in /_/Ix.NET/Source/System.Linq.Async/System/Linq/AsyncIterator.cs:line 75
   at System.Linq.AsyncEnumerable.<ToListAsync>g__Core|424_0[TSource](IAsyncEnumerable`1 source, CancellationToken cancellationToken) in /_/Ix.NET/Source/System.Linq.Async/System/Linq/Operators/ToList.cs:line 36
   at System.Linq.AsyncEnumerable.<ToListAsync>g__Core|424_0[TSource](IAsyncEnumerable`1 source, CancellationToken cancellationToken) in /_/Ix.NET/Source/System.Linq.Async/System/Linq/Operators/ToList.cs:line 36
   at Inedo.ProGet.Feeds.Debian2.Debian2Feed.SearchPackagesAsync(String query, Int32 maxCount, Boolean includePrerelease)
   at Inedo.ProGet.WebApplication.Pages.Packages.ListPackagesPage.PackageList.InitializeAsyncInternal()
   at Inedo.ProGet.WebApplication.Pages.Packages.ListPackagesPage.PackageList.InitializeAsync()
```
-
Debian Feed (New) connector errors
Trying to set up the new Debian feed and connector on ProGet 2023.29.
I can create the feed and connector, but the health check fails with:
Connector debian-buster health check reported error: unable to open database file
Connector Settings:
Type: Debian2
Name: debian-buster
Url: http://ftp.debian.org/debian/
I have tried restarting the service and the IIS site and app pool, but still get the same connector error. Any ideas?
-
Debian Feeds and PGVC
Does PGVC watch/monitor vulnerabilities that could be in Debian packages, or is it only on Nuget/Python/Java/etc?
-
PGVC URLs
We are looking to switch from using OSS Feeds to PGVC, but are running into challenges on whitelisting locations/services on the firewall to access the repo.
Is there a listing of URLs/IPs and ports that our ProGet instance will need access to for PGVC to work?
-
RE: Link between SCA Project and Package
@rhessinger Thanks for the clarification. Just to be sure -- there's no linkage between the SCA Project back to the NuGet package/feed, right? The two modules are basically independent of each other.
-
RE: Link between SCA Project and Package
@atripp I've set up a testing app just to experiment with. It currently only exists in one feed.
In this example, v2013.7.13.4 has issues associated from SCA (one vulnerability from 3 CVEs and one missing package).
When I go to that version within the feed, I see no known vulnerabilities. Note we don't have any OSS Index associated with this feed since it's our internally built tools - closed source.
My thought is that the vulnerabilities listed on the Project + Version/Release level should propagate down to the package in the feed as well. Is that a fair understanding of how it should work?
-
Link between SCA Project and Package
We have started evaluating the SCA / SBOM function within ProGet. When using pgscan to build/load the SBOM, I see the release and project appearing in the "Reporting & SCA" section -- with a nice breakdown of all imports and associated vulnerability risk.
As it stands right now, the two locations show conflicting data:
- the SCA/Project page says that there are vulnerabilities
but
- The Package page shows that there are none (presumably since our internally built packages aren't found on any OSS listing)
Is there any way that these findings can be propagated to the actual Package/artifact within our internal feed?