dan.brown_0128
@dan.brown_0128
Latest posts made by dan.brown_0128
-
RE: Debian feed mirror Performance
@stefan-hakansson_8938 No, we haven't had any more luck and have put the Debian mirror back on the back burner for the time being.
-
RE: nodejs.org proxy
Unfortunately we cannot share our scripts due to intellectual property rights. The hint I can give, though, is to note the format of the index.json in the public dist and maintain that format in your private dist.
-
RE: nodejs.org proxy
Being able to mirror the NodeJS distribution listing natively would be convenient. Currently we're having to do it by running a script daily that syncs the NodeJS dist with an asset directory in ProGet.
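For anyone wanting to try the same approach, here is a minimal, generic sketch (not our actual script), assuming a hypothetical ProGet server URL, an asset directory named nodejs-dist, and the asset directory content endpoint; the choice of linux-x64 tarballs and the API key handling are illustrative only:
```powershell
# Generic sketch only: mirror nodejs.org/dist into a ProGet asset directory.
# Server URL, asset directory name, and file selection are placeholders.
$proget   = "https://proget.corp"        # hypothetical ProGet server
$assetDir = "nodejs-dist"                # hypothetical asset directory
$headers  = @{ "X-ApiKey" = $env:PROGET_API_KEY }

# 1. Fetch the upstream listing and keep its exact format for clients.
$raw   = Invoke-WebRequest -Uri "https://nodejs.org/dist/index.json" -UseBasicParsing
$index = $raw.Content | ConvertFrom-Json

# 2. Publish the listing unchanged so clients can query versions from the mirror.
Invoke-RestMethod -Method Put -Headers $headers -ContentType "application/json" `
    -Uri "$proget/endpoints/$assetDir/content/index.json" -Body $raw.Content

# 3. Mirror the linux-x64 tarball for any version not already present.
foreach ($release in $index) {
    if ($release.files -notcontains "linux-x64") { continue }
    $file = "node-$($release.version)-linux-x64.tar.xz"
    $dest = "$proget/endpoints/$assetDir/content/$($release.version)/$file"
    try {
        Invoke-RestMethod -Method Head -Headers $headers -Uri $dest | Out-Null
    }
    catch {
        # Not mirrored yet: pull from nodejs.org, push to the asset directory.
        Invoke-WebRequest -Uri "https://nodejs.org/dist/$($release.version)/$file" -OutFile $file -UseBasicParsing
        Invoke-RestMethod -Method Put -Headers $headers -ContentType "application/octet-stream" `
            -Uri $dest -InFile $file
        Remove-Item $file
    }
}
```
Scheduled daily (cron or Task Scheduler running pwsh), something like this keeps the private dist in step with the public one.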
-
RE: Debian feed mirror Performance
To share with the community some things learned from a private support ticket: there's currently a defect in the logic where, if no packages have been downloaded from or uploaded to a Debian-type feed, the in-memory cache is never started. This leads to very long runtimes when running apt update, since the response times for the /Release and /InRelease endpoints (from our experience) are at best 1 minute and at worst 10 minutes.
The workaround support gave was to manually download a package via HTTP before attempting to use apt. From our experience, this doesn't persist overnight, so it would basically need to be done daily, which is truly counterintuitive to having a package manager like apt.
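In case it helps anyone hitting the same thing, this is roughly what that daily workaround looks like; the feed URL and package path below are placeholders, and any small package already in the feed will do:
```powershell
# Hypothetical daily "cache warm" for a ProGet Debian feed, per the workaround:
# fetch one package over HTTP so the feed's cache is populated before apt runs.
$feedUrl = "https://proget.corp/debian/internal-debian-mirror"   # placeholder feed URL
$package = "pool/main/h/hello/hello_2.10-3_amd64.deb"            # any small package

$tmp = [System.IO.Path]::GetTempFileName()
try {
    # The body is thrown away; the point is simply to exercise the feed endpoint.
    Invoke-WebRequest -Uri "$feedUrl/$package" -OutFile $tmp -UseBasicParsing
    Write-Host "Feed warmed at $(Get-Date)"
}
catch {
    Write-Warning "Cache warm request failed: $_"
}
finally {
    Remove-Item $tmp -ErrorAction SilentlyContinue
}
```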
-
RE: Pagootle: pgutil, but PowerShell
Are there plans to replace the database calls with API functions to secure database controls and access?
-
Debian feed mirror Performance
Is anybody else attempting to use ProGet's Debian feeds to mirror the official Debian mirrors for use in isolated environments?
I am curious what sort of performance others are seeing in that use case. Our prior tests found that mirroring via Proget was substantially slower than the official mirrors.
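For comparing numbers, this is roughly how we measure it from a Debian client; the feed URL and distribution name are placeholders:
```powershell
# Rough timing sketch from a Debian client running pwsh; URLs are placeholders.
# Time a full metadata refresh against whatever mirror sources.list points at.
$elapsed = Measure-Command { sudo apt-get update *> $null }
Write-Host ("apt-get update took {0:N1}s" -f $elapsed.TotalSeconds)

# Timing a single InRelease fetch against the feed is also a useful data point.
Measure-Command {
    Invoke-WebRequest -Uri "https://proget.corp/debian/internal-debian-mirror/dists/bookworm/InRelease" -UseBasicParsing | Out-Null
} | Select-Object TotalSeconds
```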
-
RE: OCI support?
I do get your point of how versioning/tags are a bit different between traditional package artifacts vs container images. It can definitely have good and bad effects.
In regard to tag mutability, you did bring up one method: using the SHA digest, since that will always point to a specific object. If the object changes, its hash changes. But humans don't like hashes, so tags were created.
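As a concrete illustration of pinning by digest instead of by tag (the image name and tag here are just examples):
```powershell
# Illustration only: resolve a tag to its immutable digest, then pin to it.
$image = "nginx/nginx-ingress:3.4.0"     # example reference

docker pull $image
# RepoDigests holds the registry digest(s) recorded for the pulled image.
$digestRef = docker inspect --format '{{index .RepoDigests 0}}' $image
Write-Host "Pin deployments to: $digestRef"   # e.g. nginx/nginx-ingress@sha256:...

# A re-pushed (mutated) tag can no longer change what this pulls.
docker pull $digestRef
```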
Looking around, it does look like some registries actually support immutable tags (e.g. ECR, Harbor). Some tags make sense to exclude from immutability, like latest, since those are dynamic by intention.
-
RE: OCI support?
Bringing this thread back to life to link out to: https://forums.inedo.com/topic/5364/oci-support-for-assets/
-
RE: OCI Support for Assets
Hey @stevedennis -
From that quote, I get the sense that the container feeds were implemented by trying to force the process into what was implemented for traditional file-based packages, rather than treating container images as a different style of artifact. This would explain why OCI isn't a direct drop-in fit -- other traditional package consumers (pypi, nuget, maven, etc.) do not behave the same as OCI clients.
As for the scalability comment -- I can see your point if you tried to cram traditional package types into OCI where the client doesn't support that. Could clients like nuget support OCI for downloading the nupkg files? Sure. But so far as I know, they haven't.
On the shock that for container images you have to provide the URL to access the image -- that's not all that different from traditional software packages. With those traditional package tools you're just specifying that URL prefix as part of the client's configuration, and the client may add some "middle parts" to that URL. You actually do run into this some with docker: take the common nginx/nginx-ingress image hosted on DockerHub -- the image name is just nginx/nginx-ingress, which works just fine in docker and kubernetes deployments. Secretly, that full image reference parses out to docker.io/nginx/nginx-ingress (and single-name official images like nginx expand to docker.io/library/nginx).
Last one -- the URL thing. Take a look at some other common registries, including the public GitHub container registry. For those, the URL is always ghcr.io, but they then prefix the image with the user/org (eg: https://github.com/blakeblackshear/frigate/pkgs/container/frigate/394110795?tag=0.15.1). Truthfully, ProGet could take a similar approach: proget.corp/MyOciOrDockerFeed/image/path/here:1.2.3
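To make that expansion concrete, here's a rough sketch of the short-name rules a Docker-style client applies; the function is a simplification for illustration, not anyone's actual parser:
```powershell
# Simplified sketch of how a Docker-style client expands a short image name
# into registry/namespace/repository:tag. Illustration only, not a full parser.
function Expand-ImageReference {
    param([string]$Name)

    # Split off the tag; default to "latest" when none is given.
    $tag = "latest"
    if ($Name -match '^(.+):([^/]+)$') {
        $Name = $Matches[1]; $tag = $Matches[2]
    }

    $parts = $Name -split '/'
    # A first segment containing "." or ":" (or "localhost") names a registry;
    # otherwise Docker Hub (docker.io) is assumed.
    if ($parts.Count -gt 1 -and ($parts[0] -match '[.:]' -or $parts[0] -eq 'localhost')) {
        $registry = $parts[0]
        $rest     = $parts[1..($parts.Count - 1)] -join '/'
    }
    else {
        $registry = 'docker.io'
        # Single-segment official images get the implicit "library/" namespace.
        $rest = if ($parts.Count -eq 1) { "library/$Name" } else { $Name }
    }

    "$registry/${rest}:$tag"
}

Expand-ImageReference 'nginx'                   # docker.io/library/nginx:latest
Expand-ImageReference 'nginx/nginx-ingress'     # docker.io/nginx/nginx-ingress:latest
Expand-ImageReference 'ghcr.io/blakeblackshear/frigate:0.15.1'
Expand-ImageReference 'proget.corp/MyOciOrDockerFeed/image/path/here:1.2.3'
```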
-
RE: OCI Support for Assets
Hi @stevedennis -- Thanks for your reply. I would, however, like to clarify a few of the points you've made.
OCI is actually used for more than just transporting container images -- in fact, Helm charts are typically pulled and queried over the OCI protocol when set up as a proper Helm repository. Ultimately, Helm charts are just a collection of files in a tarball (tgz). It is the industry norm to use OCI to interact with these repositories and query/pull chart versions and data.
Beyond container images and Helm Charts, the Open Container Initiative is specifically "designed generically enough to be leveraged as a distribution mechanism for any type of content" (https://opencontainers.org/about/overview/) (emphasis added). While the OCI is predominately for container technology, they do expressly state it is also designed for other content distribution.
Implementing OCI support inside of ProGet is ultimately an Inedo decision, and we respect that business decision. However, we hope you have all the facts before making that call, particularly that OCI is actively used across the artifact management segment. As a prime example, Microsoft's Azure Container Registry supports container images, Helm charts, SBOMs, scan results, etc. in its OCI-compliant registry (https://learn.microsoft.com/en-us/azure/container-registry/container-registry-manage-artifact). JFrog also supports various artifacts, WebAssembly modules, tar files, etc. in their product (https://jfrog.com/blog/oci-support-in-jfrog-artifactory/). Cloudsmith supports similar additional artifacts in their OCI-compliant registries (https://cloudsmith.com/blog/securely-store-and-distribute-oci-artifacts).
While you did hint at the initial intention of OCI, the industry has seen benefits and done exactly what OCI wanted, "enable innovation and experimentation above and around it" (https://opencontainers.org/faq/#what-are-the-values-guiding-the-oci-projects-and-specifications).
You might check out this blog article too (https://www.loft.sh/blog/leveraging-generic-artifact-stores-with-oci-images-and-oras) that even provides an example of pushing/pulling generic files (with custom filenames) to an OCI registry. You are correct that registry entries can be tagged as desired and those tags can be set/modified (depending on approved user permissions), just as your custom upack packages can be 'repackaged' - keeping the same content but providing a new pointer label.
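For reference, this is roughly what those workflows look like with the stock helm and oras CLIs; the registry, repository, chart, and file names below are placeholders:
```powershell
# Placeholder registry/repository names; assumes the helm and oras CLIs are installed.
$registry = "registry.corp"   # hypothetical OCI-compliant registry

# Helm charts over OCI: push a packaged chart, then pull a specific version.
helm registry login $registry
helm push mychart-1.2.3.tgz "oci://$registry/helm-charts"
helm pull "oci://$registry/helm-charts/mychart" --version 1.2.3

# Generic files over OCI via ORAS: any content type, tagged (and re-taggable) like an image.
oras push "$registry/artifacts/config-bundle:1.0.0" settings.yaml:application/yaml
oras pull "$registry/artifacts/config-bundle:1.0.0"
```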
Lastly, we are aware of the BuildMaster product and have reviewed it, but we do not have any plans to switch from OctopusDeploy in the near future.