Hi @udi-moshe_0021 ,
Can you provide the specific commands and error messages you're receiving? I.e., just copy/paste the entire console session with the commands you're typing and the output.
cheers
Alana
Thanks for clarifying that @rpangrazio_2287 , we'll explore that route as well.
We opted against DinD because of resource management (build servers can be rather resource-intensive) and general instability (not everything seems to work the same).
FYI - in case you haven't seen it already, BuildMaster does support Image-based Services (Containerized Builds)
Cheers,
Alana
Hi @rpangrazio_2287 ,
Thanks for sharing that solution; the general approach we arrived at was setting up an SSH agent to connect to BuildMaster's Docker host (Installing & Integrating with Docker Engine).
The approach you took is interesting; are you essentially installing Docker within the container? I assume that uses the Docker engine of the host, not like "docker in docker"?
Thanks,
Alana
Hi @udi-moshe_0021,
First and foremost, I would use the latest version of ProGet 2024. That eliminates any question of bugs that may have been fixed.
Otherwise, I'm afraid we don't have a lot of experience troubleshooting Debian client issues. What I can say is that when I followed our instructions to set up a Debian feed in ProGet with a connector to http://ftp.debian.org/debian/ (Buster), it seemed to work fine.
You may need to query the endpoints directly and see what data ProGet is generating vs archive.ubuntu.org.
Thanks,
Alana
Hi @marc-ledent_9164 ,
I'm not really sure how to debug OpenShift/Kubernetes, so I don't even know what to suggest looking at. This may be entirely unrelated to the platform; we just aren't seeing the real error. Ultimately I do think the web service isn't starting, which makes it really hard to see errors.
There is a database table in BuildMaster that might contain some information (LogMessages); it should be pretty obvious which message it is.
Another thing you could/should try is a fresh, Docker-only installation of BuildMaster 2024. That should "just work" out of the box. Then bring it to OpenShift/Kubernetes. If that works, we at least know it's some kind of configuration delta.
BuildMaster 7 was our first Linux edition, so it's possible there was some configuration that worked there but not in newer versions. But we just need a starting point/error.
Hopefully the error table has some info.
Thanks,
Alana
Hi @marc-ledent_9164 ,
I'm guessing that, if you're getting that error, it probably means that the BuildMaster service is somehow crashing on start-up and thus not running.
If this is the case, you should see some kind of error message in the service (container) console output logs.
I don't know enough about OpenShift to advise how to find these logs, but if you can do the equivalent of docker run without -d (detached), then the console output will be streamed to the console, and you can see the error messages.
Cheers,
Alana
Hi @paul-kendik_9721 ,
It looks like you're on the right track with troubleshooting -- the issue is indeed related to SSL/Certificates. This is all handled at the operating system level, which means there's nothing we can do on the application (i.e. ProGet) side of things.
Unfortunately, I've never seen this error on a modern Windows server, so I don't know exactly what to suggest as a fix. I didn't even know TLS 1.0 was still enabled on Windows. Maybe it's not even the server, but a proxy or some kind of intermediate server?
The "good news" is that this is an easily searchable problem, so I would start by searching for "Windows 2022 SSL HandshakeFailure TLS 1.0" and see what comes up. There is certainly some kind of setting on your server that is causing this problem; I just don't know what to suggest looking for.
Please let us know what you find!
Alana
@daniel-lundqvist_1790 no problem... Docker is really confusing/silly sometimes... glad we got it solved :)
Hi @daniel-lundqvist_1790 ,
Thanks, now I understand what the issue is: basically, the command history isn't lining up with the layers. This is a minor UI/display thing.
I can fix that fairly easily by grouping history entries marked with empty_layer into the following command. Here is what it looks like when grouped (ignore the red warnings; it's a dev environment):
Assuming the history entries are accurate, the bottom-most entry should be the one that generated the layer.
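In rough pseudo-code, the grouping works like this (a hypothetical Python sketch of the display logic, not ProGet's actual code):

```python
def group_history(history):
    """Group each run of empty_layer entries with the next
    layer-creating entry (a sketch of the display grouping,
    not ProGet's actual implementation)."""
    groups, pending = [], []
    for entry in history:
        pending.append(entry)
        if not entry.get("empty_layer"):
            # this entry produced a layer; it closes the group
            groups.append(pending)
            pending = []
    if pending:  # trailing metadata-only entries (e.g. a final CMD)
        groups.append(pending)
    return groups
```

Each group then displays as one layer, with the bottom-most entry being the layer-creating command.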
This will be in the next maintenance release (Friday) via PG-2848.
Thanks,
Alana
Hi @Dony-Thomas_7156 ,
You can ignore these errors. ProGet doesn't process these requests (i.e., store the content); instead, ProGet simply generates the appropriate hash of a Maven artifact when you request .ext.md5, etc. It's dynamic, not a file system.
That said, I do know that later versions of ProGet simply return a 200 so that it doesn't cause an error with Maven. I recommend upgrading to the latest ProGet 2024 and using the new Maven feeds.
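To illustrate the "dynamic, not a file system" idea (this is just a sketch, not ProGet's actual code), serving a checksum endpoint amounts to hashing the stored artifact bytes on request:

```python
import hashlib

def checksum_response(artifact_bytes: bytes, ext: str) -> str:
    """What a dynamic .md5/.sha1 endpoint would return for an
    artifact: the hex digest of the artifact's bytes.
    (Illustrative sketch, not ProGet's implementation.)"""
    algos = {"md5": hashlib.md5, "sha1": hashlib.sha1, "sha256": hashlib.sha256}
    return algos[ext](artifact_bytes).hexdigest()
```

So there's no .md5 file on disk to "store"; the response is computed from the artifact each time.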
Thanks,
Alana
Hi @marc-ledent_9164 ,
Sorry, but that seems to be an issue in v7 displaying some events.
I'm afraid there's no workaround, but you can find the data in the database (EventLogOccurences table), which might be helpful?
Thanks,
Alana
Hi @daniel-lundqvist_1790 ,
Thanks for sharing this; as you can see from the manifest file you shared, there are 24 layers in your image. These are the 24 layers that ProGet is showing on the Layers tab.
We can clearly see that there are no additional layers; the 24th layer that ProGet is showing is indeed the last (24th) entry in the manifest file:
{
  "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
  "size": 13829131,
  "digest": "sha256:b7593e740f79fdea31249ff3717f4fe50dc1925ca728bb423e56e12e2e1b4b6e"
}
The docker image history command does not show layers. It shows image history, which is orthogonal to layers. ProGet is not designed to show the image history, only the layers.
If you look at your Container Configuration File, under the "history" property, you will see the exact number of items that docker image history shows. The command is simply giving a user-friendly printout of that file.
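If it helps to check this yourself, the two counts come from different documents (a Python sketch over the raw JSON; field names are from the Docker image format, but treat the helpers as illustrative):

```python
def count_manifest_layers(manifest: dict) -> int:
    # what ProGet's "Layers" tab counts: entries in the manifest's "layers" array
    return len(manifest["layers"])

def count_history_entries(config: dict) -> int:
    # what `docker image history` prints: every "history" entry in the
    # container configuration file, including empty_layer ones
    return len(config.get("history", []))

def count_layer_creating(config: dict) -> int:
    # history entries that actually produced a filesystem layer
    return sum(1 for h in config.get("history", []) if not h.get("empty_layer"))
```

The layer-creating count from the configuration file should match the manifest's layer count; the full history count is usually larger.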
Hope that helps clarify,
Alana
Hi @daniel-lundqvist_1790,
Can you paste in the manifest file here? It's JSON, and it's going to be on the "Metadata" page in a textbox.
Thanks,
Alana
Hi @daniel-lundqvist_1790 ,
The Image Layers and Image History are two different things.
I'm not sure how to better explain this... but ProGet is not designed to show you the "Image History" -- if you want that, use the docker image history command.
Instead, ProGet's "Layers" page shows the Image Layers.
It's confusing because these things are similar, but they are in fact different.
Thanks,
Alana
Hi @daniel-lundqvist_1790,
Thanks for clarifying. These are different things...
The docker image history command displays all lines in the history section of the container configuration file (CCF), which includes both layer-creating and non-layer-creating commands. The CCF is optional; I suspect the command wouldn't work without it, but I don't know.
The "Layers" tab in ProGet displays the actual layers (i.e., the .tar.gz files on disk) that make up the image. If available, the CCF is used to augment the layers with command information.
As an example, take the Docker image for ProGet itself. You can see the CCF lists the history:
"history": [
  {
    "created": "2024-07-02T01:25:02.331012304Z",
    "created_by": "/bin/sh -c #(nop) ADD file:b24689567a7c604de93e4ef1dc87c372514f692556744da43925c575b4f80df6 in / "
  },
  {
    "created": "2024-07-02T01:25:02.745660567Z",
    "created_by": "/bin/sh -c #(nop) CMD [\"bash\"]",
    "empty_layer": true
  },
  {
    "created": "2024-07-09T14:47:21.224481352Z",
    "created_by": "ENV APP_UID=1654 ASPNETCORE_HTTP_PORTS=8080 DOTNET_RUNNING_IN_CONTAINER=true",
    "comment": "buildkit.dockerfile.v0",
    "empty_layer": true
  },
  {
    "created": "2024-07-09T14:47:21.224481352Z",
    "created_by": "RUN /bin/sh -c apt-get update \u0026\u0026 apt-get install -y --no-install-recommends ca-certificates libc6 libgcc-s1 libicu72 libssl3 libstdc++6 tzdata zlib1g \u0026\u0026 rm -rf /var/lib/apt/lists/* # buildkit",
    "comment": "buildkit.dockerfile.v0"
  },
But only two of those generated FS changes and became layers. So that's what is displayed on the "Layers" page.
Thanks,
Alana
Hi @daniel-lundqvist_1790 ,
Good question! ProGet 2024 ships with the latest version of all available extensions, so there's usually not going to be a new one unless we add ones later (e.g. GCP) or update/fix ones.
Most extensions are used by BuildMaster; for example, the Jenkins extension lets you do cool stuff like import/orchestrate Jenkins servers and deploy artifacts from them:
https://docs.inedo.com/docs/buildmaster/tools-service-integrations/buildmaster-integrations-jenkins
Thanks,
Alana
What version of ProGet are you using? I know at some point we removed the requirement to specify a content type (maybe ProGet 2023?).
pgutil is really only designed/tested with ProGet 2024+; earlier versions might work, but we just didn't build or test for those APIs. We're open to pull requests if you wanted to take a stab at getting it working on the older API (maybe it's just a matter of adding the content type header)?
Thanks,
Alana
Hi @daniel-lundqvist_1790 ,
Custom extensions in ProGet are not very common, but they do need to be manually installed; are you trying to build your own extension? We generally don't advise that for ProGet, since whatever you build would likely be a general-purpose integration that we would work with you to "adopt" (like a cloud file system, etc.).
We have some guidance on how to manually install extensions here:
https://docs.inedo.com/docs/proget/administration/extensions#manual-installation
Thanks,
Alana
Hi @daniel-lundqvist_1790 ,
Are you sure there are 36 layers in your image? You can see this in the manifest file.
I don't believe commands like LABEL and ARG change the filesystem, and thus a new layer would not be created.
The container configuration file, if it exists, may contain these extra commands. You can see the details of these files in the metadata of your image.
Thanks,
Alana
Hi @forbzie22_0253 ,
We have never seen a case where changing an IIS setting has benefited performance in any way whatsoever; instead, we have only found headaches and problems from users modifying them. Those settings are not designed for modern .NET applications, but for older technologies like .NET Framework, classic ASP, etc.
So please don't touch them :)
Thanks,
Alana
Hi @sneh-patel_0294 ,
Here is the NuGet API Documentation:
https://learn.microsoft.com/en-us/nuget/api/overview
Here is the Packages API query you would want:
https://docs.inedo.com/docs/proget/reference-api/proget-api-packages/proget-api-packages-list-versions#http-response-specification
Regardless of what API you use, you'll need to change your query strategy: query all versions of CPS.Regression from the server, filter to 9.1, and then sort by the latest that's not prerelease.
There are some sample scripts that might be helpful in our docs as well:
https://docs.inedo.com/docs/proget/reference-api/proget-api-packages/proget-api-packages-list-versions#list-all-non-prerelease-versions-powershell
Thanks,
Alana
I don't see an option to do anything like that, but we're happy to brainstorm/think of an option to add. The main issue is documentation and avoiding having 1000 options.
I see that there's a --do-not-scan-node_modules switch; maybe that could be converted to a --excludePaths={relativePathCsv} or something? So --do-not-scan-node_modules would become --excludePaths=node_modules, and you could do --excludePaths="myproj1.csproj,myproj2.csproj" or something?
Just brainstorming here, not sure if that's even feasible.
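Still brainstorming, but the hypothetical --excludePaths flag could be parsed and matched along these lines (everything here is made up for illustration; it's not an existing pgutil option):

```python
def parse_exclude_paths(value: str) -> list[str]:
    # split the hypothetical --excludePaths=... CSV value
    return [p.strip() for p in value.split(",") if p.strip()]

def is_excluded(path: str, excludes: list[str]) -> bool:
    """One possible matching rule for the hypothetical flag: exclude a
    file if any exclude entry matches one of its path components."""
    parts = path.replace("\\", "/").split("/")
    return any(e in parts for e in excludes)
```

With that rule, --excludePaths=node_modules would skip any package-lock.json under a node_modules directory, which matches what the current switch does.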
Here is the current documentation for the command
Description:
Generates a minimal SBOM from project dependencies and uploads it to ProGet
Usage:
pgutil builds scan [options]
Options:
--input=<input> (REQUIRED) Project to scan for dependencies
--project-name=<project-name> (REQUIRED) Name of the component consuming the dependencies
--version=<version> (REQUIRED) Version of the component consuming the dependencies
--api-key=<api-key> ProGet API key used to authorize access
--do-not-scan-node_modules Do not scan the node_modules directory when scanning for package-lock.json
files
--include-dev-dependencies Include npm development dependencies from the package-lock.json file in the
generated SBOM document
--include-project-references Include dependencies from referenced projects in the generated SBOM
document
--password=<password> ProGet user password used to authorize access
--project-type=<project-type> Type of the consuming project (default=library)
--scanner-type=<scanner-type> Type of project scanner to use; auto, npm, NuGet, PyPI, or Conda
(default=auto)
--source=<source> Named source or URL of ProGet
--username=<username> ProGet user name used to authorize access
-?, --help Show help and usage information
Examples:
$> pgutil builds scan --input=WebDataTool.csproj --project-name="Web Data Tool" --version=1.2.3
Hi @george_4088,
That is correct, but a brute-force attack wouldn't succeed unless an administrator used something silly like admin for their username and password for their password. You could just as easily integrate with an LDAP/Active Directory server, which will add timeouts and account lockouts to make it impossible to "crack" in our lifetime. SAML is fine too.
My point is that it's like 1000 times more likely that the API Key used to publish those Chocolatey packages would be exposed in logs, configuration files, etc. That's the attack surface you want to be careful of.
Cheers,
Alana
Hi @forbzie22_0253 ,
We do not recommend modifying this value from the defaults; in fact, we recommend you avoid using IIS altogether.
IIS works poorly with modern .NET applications, and tweaking settings will make it worse. For example, if you change "Max Worker Processes" to a value other than 1 (the default), you will likely get performance problems, since modern .NET applications are already multi-threaded and the processes will "compete" with each other.
ProGet has a built-in traffic queue that you can tweak under Admin > HTTP Settings > Web Server > edit. If you are having performance problems, then set it to 100 or so.
Cheers,
Alana
Hi @george_4088 ,
Users who are looking for MFA in our products will configure SAML to work with a login provider such as Entra ID that does MFA; we have some documentation on how to configure SAML here:
https://docs.inedo.com/docs/installation/saml-authentication/various-saml-overview
SAML is a ProGet Enterprise feature.
As an aside, we are often asked about best practices regarding MFA and public-facing repositories. I don't think MFA adds a lot of value to a product like ProGet because it's so API-key heavy, and API-based authentication obviously can't use MFA. The most important attack surface to cover is API keys. Those are often overlooked and tend to be haphazardly entered/exposed in scripts, logs, etc.
Cheers,
Alana
Hi @sneh-patel_0294 ,
This API has been deprecated for a long time. It sounds like you're writing your own custom queries... ProGet only supports the queries made by certain clients, like NuGet. See NuGet ODATA (v2) API.
While we haven't touched the ODATA (v2) code in many years, there must have been some other change that caused this behavior. As you might imagine, we don't want to open up that code and risk introducing another regression, so it'd be best for you to work around this issue.
Your best bet is to switch to the Packages API; you can also use the NuGet v3 API, which is well documented and supported by ProGet, but it's a lot harder to use.
Cheers,
Alana
Hi @it4it_9320 ,
I was able to reproduce this; it looks like the jammy/main/binary-amd64/Packages index is invalid and duplicates several packages, including this:
Package: aadsshlogin
Version: 1.0.023850001
Architecture: amd64
Section: utils
Priority: optional
Maintainer: Yancho Yanev <yyanev@microsoft.com>
Description: AAD NSS, PAM and certhandler extensions
This package installs NSS, PAM and certhandler extensions to allow SSH login for AAD users.
Conflicts: aadlogin
Depends: libc6 (>= 2.34), libcurl4 (>= 7.16.2), libpam0g (>= 0.99.7.1), libselinux1 (>= 3.1~), libsemanage2 (>= 2.0.32), libssl3 (>= 3.0.0~~alpha1), libuuid1 (>= 2.16), passwd, openssh-server (>=6.9)
Pre-Depends: grep, sed
SHA256: efad79eb58c10155710ef59171fbe73d67e765a49ce4cc4f4e3622163f4c2f84
Size: 332574
Filename: pool/main/a/aadsshlogin/aadsshlogin_1.0.023850001_amd64.deb
Package: aadsshlogin-selinux
Version: 1.0.023850001
Architecture: amd64
Section: utils
Priority: optional
Maintainer: Yancho Yanev <yyanev@microsoft.com>
Description: Selinux configuration for AAD NSS and PAM extensions.
Conflicts: aadlogin-selinux
Depends: policycoreutils (>=3.3-1), selinux-utils, selinux-policy-default
SHA256: 6a0c3277754585d81d7c1216a23fa034bca6cacef7f162aba0af301ea734fc49
Size: 2214
Filename: pool/main/a/aadsshlogin-selinux/aadsshlogin-selinux_1.0.023850001_amd64.deb
So, as a result, the error occurs. We will add some checking code for this bad index file, and plan to fix this in the upcoming maintenance release via PG-2834.
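For reference, duplicates like this can be detected by keying each stanza in the index on its identifying fields -- a rough sketch of that kind of check (not our actual code):

```python
def find_duplicate_packages(packages_index: str) -> list[tuple]:
    """Scan a Debian Packages index for (Package, Version, Architecture)
    keys that appear more than once. Sketch of the validity check
    described above, not ProGet's implementation."""
    seen, dupes = set(), []
    for stanza in packages_index.strip().split("\n\n"):
        fields = {}
        for line in stanza.splitlines():
            # continuation lines (e.g. long Descriptions) start with a space
            if ":" in line and not line.startswith(" "):
                key, _, value = line.partition(":")
                fields[key] = value.strip()
        ident = (fields.get("Package"), fields.get("Version"),
                 fields.get("Architecture"))
        if ident in seen:
            dupes.append(ident)
        seen.add(ident)
    return dupes
```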
Thanks,
Alana
Hi @frei_zs ,
Replication uses the package file's hash value, so "new" package files should be transferred when the files are updated. I know a lot of users rely on this behavior, including us every now and then. Debian is a little different than most package feeds because of how the component part works, but it should be the same.
This should be relatively easy to test/verify: modify a file inside the A-1.0.deb archive, save it as alt-A-1.0.deb, and see whether the change replicates.
If you can reproduce this on a new feed configuration, maybe it has something to do with the package files; can you send them to us? Then we will try to reproduce it in a debugging environment.
Thanks,
Alana
Can you try this on Otter 2023 to see if it makes a difference? We made some Git library changes, and that's the easiest way to confirm whether it's related to that.
Thanks,
Alana
Hi @caterina ,
This will be fixed in the next maintenance release via PG-2829. I just checked in the change.
I made a mistake when making the original change and used the ProjectBuildId instead of the ProjectId when validating whether other builds exist. So it works if the Ids match perfectly (like when I tested it).
Note this only impacts the UI, and we don't really expect users to create builds via the UI.
Thanks,
Alana
@dan-brown_0128 @scampbell_8969 thanks for the feedback!
I've added a note to our internal board for ProGet 2025 roadmap consideration; after we get through the PostgreSQL migration, we will likely focus on SCA feature improvements, but maybe there will be room for this.
Any guidance/ideas on the UI/docs would be really helpful when we come to revisit it.
Hi @forbzie22_0253 ,
That's not exposed in the UI at this time; is there a reason/use case where you'd want to use it? It's primarily intended as a kind of backup of sorts.
Thanks,
Alana
Hi @davidroberts63 ,
The Projects & Builds page (/projects) requires the Projects_View permission.
Cheers,
Alana
Hi @udi-moshe_0021 ,
If you're talking about connecting to DockerHub, the behavior is a little weird... but yes, images without a prefix need the library prefix. This is actually how the Docker client behaves behind the scenes.
When you request ubuntu, the Docker client actually requests library/ubuntu. You can edit the connector (Advanced tab) to automatically add the library prefix, which will make it behave like the Docker client.
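The rule the Docker client applies is essentially this (a sketch of the naming convention, not the connector's actual code):

```python
def normalize_repository(name: str) -> str:
    """Mimic the Docker client's behavior for official images on
    DockerHub: bare names get the implicit library/ namespace.
    (Sketch of the rule, not ProGet's connector code.)"""
    return name if "/" in name else f"library/{name}"
```

Namespaced images (like inedo/proget) pass through unchanged; only bare names get the prefix.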
Thanks,
Alana
Hi @udi-moshe_0021 ,
I would suggest using the latest ProGet version.
A "manifest unknown" is a very generic error, and the Docker client doesn't log any more details (log, error message from ProGet, etc.). I suggest using a traffic capture tool to see exactly what the problem is.
Under the hood, Docker clients can have some problems with proxies, so this could be related. DockerHub also has rate limiting, so that could be the problem you're experiencing. We just can't always see it from the Docker client alone.
Thanks,
Alana
Hi @daniel-lundqvist_1790 ,
You can select the edition you'd like to trial within the software itself, under Admin > License Key.
Thanks,
Alana
Hi @gurdip-sira_1271 ,
Based on the error message, it looks like the user doesn't have the appropriate permissions; did you set a temp directory on the SSH agent? That directory may not exist, and the user may not have permission to create it.
Thanks,
Alana
@davidroberts63 thanks for figuring that one out, that's definitely a bug...
...the box style was correct, but the enabled/disabled text looked at the wrong property.
Easy fix, difficult to spot!
Hi @marc-ledent_9164 ,
Are you referring to the Machine UID from Manual Activation?
I'm not sure what the Machine UID looks like in BuildMaster 7 on Linux, but I do recall that early versions of our products sometimes couldn't generate the string on some hardware. I think that's fixed now.
The code is supposed to be based on the CPU (vendor ID, model, family, and stepping info) and the major version of the Inedo software (e.g. 5.1, 5.3, 2022, 2023).
Thanks,
Alana
Hi @davidroberts63 ,
Thanks for digging into this further and providing those logs; looking over the code, I think you must have the license feature enabled on the feed?
This setting is on the manage feed page.
When that feature is enabled, we should see logs like:
Detecting licenses for {package}...
Found {licensesCount} licenses: {licenseCodes}
The info is also recorded in the database in the same block.
I can't say why you have other records; they may have come from other feeds, or maybe the feature was disabled later on... the PackageLicenses23 table is not feed-specific.
Anyway, let us know what you discover; it's a little weird to see this behavior, so we would like to confirm it and tweak the UI a little bit to make it clearer.
Thanks,
Alana
Hi @caterina ,
Thanks for investigating this! I just made the following changes via PG-2818:
This will go in the next maintenance release, shipping next Friday.
Thanks,
Alana
Hi @caterina ,
Thanks for the additional detail.
So I investigated this a little further, and now I see there's a bug in the method that imports the SBOM file - basically, it's just overwriting the Stage Name with "Build". That's not intentional.
It looks like the BuildStatus_Code is also reset to Active. I'm not sure if that should be the case; maybe an inactive build should not allow a new SBOM to be added... what do you think?
As for the duplicates, we know it's not possible to insert a duplicate row given the table constraint, so it means there's either whitespace in the build number (two different builds) or there's an incorrect join (build displayed twice).
Just take a look at the projectBuildId querystring parameter for the "2.0.0" builds -- that will clue us in to which case it is.
If it's whitespace, we'll have to figure out where it could be added -- and just trim before adding to the database.
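To show why whitespace would produce two seemingly identical builds, and what trimming on insert would do (a hypothetical helper, not actual BuildMaster code):

```python
def canonical_build_number(raw: str) -> str:
    """Trim stray whitespace before storing, so '2.0.0' and '2.0.0 '
    can't end up as two apparently-identical builds.
    (Sketch of the proposed fix, not BuildMaster's code.)"""
    return raw.strip()
```

The raw strings differ, so the unique constraint allows both; after trimming, they collide as intended.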
Thanks,
Alana
Hi @caterina ,
Sorry for the slow reply; I swear I replied to this... but clearly not.
As you can see from the 500/crash error (which we should obviously not do), there is a data constraint that prevents two builds within the same project and release from having the same number. Specifically, the constraint is <Project_Id, Release_Number, Build_Number>; so it's not possible to have build 3.0.0 twice -- but maybe it's in a different project or release.
As far as "Import SBOM" vs "Create Build": Import SBOM will create a Project (based on the Component name) and a Build (based on the Version) if they do not exist. Then it adds the packages within the SBOM document to the build's list of packages, and adds the SBOM document itself to the build's SBOM documents.
When importing an SBOM, nothing is deleted; it's always added. As you've seen, Create Build will definitely error out if the build already exists.
Hope that helps clarify,
Alana
Hi @daniel-lundqvist_1790 ,
Based on the problems you're trying to solve, I think ProGet sounds like a good fit.
There are quite a few tools that can generate an SBOM document in the CycloneDX format that ProGet uses; see https://github.com/CycloneDX for the most popular ones. We consider pgutil scan to be a "lightweight" version that works in most cases.
But I think you'll just want to generate your own. It's a pretty simple XML format and easy enough to generate. ProGet just acts as the SBOM repository in this case.
There shouldn't be a problem getting a trial extension; you're able to request one on MyInedo on the day of expiry.
Cheers,
Alana
Hi @davidroberts63,
Without looking at the database, it's really hard to guess; we'd be happy to investigate your database if you send us a back-up.
But maybe we can figure it out as well... behind the scenes, there is a table called PackageLicenses23 which associates a specific package version (e.g. Microsoft.Identity.Client 4.66.0) with a specific license Id (MIT). The "Unlicensed Local Packages" page uses that table to find packages (FeedPackageVersions) without an entry.
Data is added into the PackageLicenses23 table whenever a package is analyzed. So I presume that, if you go to the package and re-analyze it, the message goes away? This reanalysis should also occur on a nightly basis with the compliance check job.
If you poke around in the database, note there are PackageLicenses23_Extended and FeedPackageVersions_Extended views that won't require you to do a bunch of joins.
Thanks,
Alana
Hi @scampbell_8969,
ProGet does not have this feature; we've thought about it after some customer discussions in years past, but I don't think anyone's asked about it until now. And it wasn't the right solution for those customers.
We concluded it would be kind of complicated to document / configure / troubleshoot, especially once we got into the details and specifics. Here's some of those:
Instead we created a "latest version" compliance flag that allows you to flag packages that aren't the latest patch version. We'll see if that's popular.
Thanks,
Alana
@caterina thanks, we'll discuss this internally and get back to you soon!
Hi @caterina
If you're getting that message, it means that the package cannot be checked for "package status" (i.e. listed, deprecated) because it's not local to your feeds. If you cache the package, then it should go away.
Thanks,
Alana
Hi @udi-moshe_0021 ,
You're going to get a lot of headaches trying to use server-side virus/malware scanning with a tool like ProGet, and we do not support using such tools in conjunction with ProGet.
These types of tools are hyper-aggressive and really dumb -- they often quarantine "dangerous" files like .dll, block "malicious" files like JavaScript, or think that ProGet loading plugins is "malware behavior".
In addition, the tools are totally unnecessary, as ProGet already has vulnerability detection built in.
Cheers,
Alana
Hi @Scati,
pgutil will construct a purl using this method:
https://github.com/Inedo/pgutil/blob/c5b5e3733390b9e5dfb07752e36a3f6efaaa0f9c/pgutil/Packages/QualifierOptions.cs#L26
So I'm thinking there's something off about your URL, but I can't spot it. Maybe try:
That appears to be the order that pgutil uses...
Cheers,
Alana