    Inedo Community Forums
    Posts made by atripp

    • RE: ProGet: move storage paths

      Hi @mcascone ,

      Good question; the documentation wasn't very clear, so I just updated the docs. How does this look? :)


      If you want to change the directories your packages are stored in, you'll also need to move/copy the contents from the current location to the new location. We generally recommend:

      1. Scheduling a downtime and notifying your users
      2. Changing the desired settings in ProGet
      3. Disabling the feed or the ProGet application entirely
      4. Transferring the files to the new location
      5. Enabling ProGet again

      You can also keep ProGet online the entire time; this will just cause a number of "package file not found" errors if anyone tries to download the package before the transfer is complete.

      Depending on how many package files you have, transferring may take a significant amount of time; you may not want ProGet to be offline or for users to experience errors during the process. In this case, we recommend first mirroring the files using a tool like robocopy /MIR, running it a few times (in case packages were uploaded during the initial copy), and then changing the settings in ProGet.
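      The mirror-then-switch approach can be sketched roughly as follows (robocopy itself is the right tool on Windows; this Python stand-in and its paths are purely illustrative):

```python
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """One /MIR-style pass: copy everything from src into dst, then
    remove anything in dst that no longer exists in src."""
    dst.mkdir(parents=True, exist_ok=True)
    src_files = {p.relative_to(src) for p in src.rglob("*") if p.is_file()}
    dst_files = {p.relative_to(dst) for p in dst.rglob("*") if p.is_file()}
    for rel in src_files:  # copy new/updated package files
        target = dst / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src / rel, target)
    for rel in dst_files - src_files:  # purge files removed from the source
        (dst / rel).unlink()
```

      Running a pass like this a few times before the switchover keeps the final sync during the brief downtime very short.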

      posted in Support
      atripp
    • RE: Proget: retention policy for branches in a package

      Hi @mcascone!

      To the "simple" question about retention behavior: each retention rule starts by building a list of all packages. It loops over every package and removes items from the list based on the criteria you select. The packages not removed from the list are deleted. This ultimately has the effect of making everything an "AND" within a single rule. This means *mybranch* and *mybranch2* can be reduced to *mybranch2*.

      The rules run one after another, so the second rule would start with a new list and eliminate items based on the criteria you checked off.
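      In pseudocode terms, the elimination loop behaves something like this sketch (package names and match functions are invented for illustration):

```python
def run_retention_rule(packages, criteria):
    """Start with every package as a deletion candidate; each criterion
    removes packages that don't match it. Only packages matching ALL
    criteria remain in the list and are deleted."""
    candidates = list(packages)
    for matches in criteria:
        candidates = [p for p in candidates if matches(p)]
    return candidates  # what this rule deletes

# "mybranch" AND "mybranch2" reduces to just "mybranch2":
deleted = run_retention_rule(
    ["app-1.0-mybranch", "app-1.0-mybranch2", "app-1.0"],
    [lambda p: "mybranch" in p, lambda p: "mybranch2" in p],
)
# deleted == ["app-1.0-mybranch2"]
```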

      To the more complex question... why not just let the "dev" packages get messy? You can use a time- and/or usage-based rule, which might simplify things a lot. You can also enable differential storage on Windows, which will reduce real space consumption by around 90% or more.

      Or maybe use a different feed? Just throwing ideas out :)

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: ProGet Extension: Error initializing extensions manager

      @can-oezkan_5440 thanks for letting us know what the issue was :)

      Hopefully this is a trivial thing to fix in our code; we'll take a look and let you know!

      posted in Support
      atripp
    • RE: NuGet Basic Authentication support

      @jan-primozic_9264 thanks for posting the update!

      Please let us know if you can see a place for us to improve documentation :)

      posted in Support
      atripp
    • RE: ProGet: no groupname option

      Hi @mcascone

      Group names are optional with universal packages.

      For example, we don't use them in this feed:
      https://proget.inedo.com/feeds/BuildMasterTemplates

      Unfortunately I'm not totally sure where the issue is, or how to troubleshoot the Jenkins plugin; it was created by the community, but I think it's possible to submit a pull request if there's an issue?

      Perhaps this should check for a groupName, and not append the property if it's null?

      https://github.com/jenkinsci/inedo-proget-plugin/blob/master/src/main/java/com/inedo/proget/api/ProGetPackager.java#L78

      Are there any errors on the ProGet side?

      Thanks,
      Alana

      posted in Support
      atripp
    • RE: Whitelist npm packages licenses

      Hi @p-boeren_9744 ,

      The documentation isn't very clear 🙄

      I had to look this up myself in the code. If you set the allowed property, then a global rule is also created. Therefore, the following should work instead:

      {
        "licenseId": "package://@progress/kendo-react-grid/5.0.1/",
        "title": "package://@progress/kendo-react-grid/5.0.1/",
        "urls": [
          "package://@progress/kendo-react-grid/5.0.1/package/LICENSE.md"
        ],
        "allowedFeeds": ["NpmLicenseTest"]
      }
      

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Feature Request/Question: Azure DevOps On-Premises/Cloud integration extension

      Hi @bryan-ellis_2367 ,

      I'm not an Azure DevOps expert, but last I checked, it's not possible to add NuGet package sources other than its own ADO Packages product or the public repositories. That may just refer to "upstream sources", but I'm not totally sure.

      However, if you want to use the ADO Pipeline's built-in NuGet commands to publish packages, I guess you can set up a service connection using this?

      https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml#nuget-service-connection

      Not totally sure -- but please let us know what you find :)

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Pulling variable values into PowerShell .ps1 scripts and potentially credentials

      Hi @moriah-morgan_0490 ,

      Glad to see the environment un-scoping worked! It's definitely possible to get environment-scoping of credentials to work... but it'd probably be best to confirm what you're looking to accomplish.

      The main purpose of the environment-scoping is to enable limited management access to Otter. For example, users can edit/maintain all configuration except for production servers. Take a look at Multiple Environments per Server to see how the behavior works.

      I assume that you're running the Inedo Agent? We're still learning the exact privileges ourselves 😏

      We've seen systems restricted in very unexpected ways (and only in the field, of course, never our own environments), and they don't give any logical error messages. But we'd love to help you get this working, so we can document it.

      Here's what we know so far:

      • local administrators seem to have no problems; this is what most people do, because it's domain credentials they are interested in using and it doesn't matter if they're an admin on that particular server
      • try running the Inedo Agent as the user you wish to impersonate as; this will identify most permissions issues
      • make sure the user target has permissions to the extension cache path and appropriate root paths; see the Agent Configuration File
      • "run as a service" permission seems to be important
      • logging into the server as the impersonated user (at least once) may help for some scripts, so we've heard
      • antivirus or other security tools may block this impersonation as well

      Let us know what you find :)

      posted in Support
      atripp
    • RE: Proget: Gradle connector?

      Hi @mcascone,

      We don't have that functionality in ProGet, but it should be pretty easy to do with a pair of Invoke-WebRequest PowerShell commands :)

      You could probably parse/scrape the HTML and download / upload in bulk as well.

      Please share the script if you end up doing that, it might be useful for other usecases as well!

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Pulling variable values into PowerShell .ps1 scripts and potentially credentials

      Hi @moriah-morgan_0490 ,

      If you've created a Secure Credential (Username & Password) under Admin > Secure Credentials that's named TestCred2, then the OtterScript you presented should work. There are several challenges with getting impersonation to work on Windows in general (as I'm sure you've learned as a Windows admin), so we don't recommend using it unless you have to.

      In your Otter configuration, my guess is that there is a problem with environment scoping; that can get a bit tricky - I would just set it to use (Any) environment for now.

      As for "Secure" part of "Secure Credentials" - the password fields are stored as encrypted data, and you can only decrypt them with the Encryption Key (stored separately). They're also held in memory as securely as possible, and are "handed" to PowerShell as a PSCredential object.

      The "Allow encrypted properties to be accessed" prevents OtterScript from exposing the passwords using the $CredentialProperty() function. For example, this would cause an error unless that was checked:

      Log-Information $CredentialProperty(TestCred2, Password);
      

      However, PowerShell has no such restriction surrounding credentials. For example, you can always just do this:

      $credtest = Get-Credential
      Write-Host $credtest.GetNetworkCredential().Password
      

      Microsoft's recommended way to handle this is to use Windows Authentication.

      Our recommendation is to generally avoid passing important credentials to scripts (API keys, etc. seem fine)... but if you must, use a change process to ensure that you aren't running scripts that dump passwords like that.

      Hope that helps,

      Alana

      posted in Support
      atripp
    • RE: Bug: ProGet Asset package repository permission system is broken (Feeds_AddPackage always "not permitted")

      Hi @mail_6495 ,

      Looks like this was a regression with API Key Authentication; the uploader control improperly required an API key. This will be fixed in PG-2104 in this Friday's maintenance release.

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Proget: Gradle connector?

      Hi @mcascone

      Looking closer, it doesn't appear that https://services.gradle.org/distributions/ is a Maven repository after all (no folder structure, missing metadata XML files)? It looks like just a regular web page (HTML) with links to files that can be downloaded (i.e., there's no API).

      This seems like something you could use an asset directory for (though obviously a connector wouldn't be possible, since there's no API). They probably just prepend distributionUrl to a known file name, like gradle-7.3-bin.zip?

      The error is definitely related to the SSL/HTTPS connection from Java (Gradle) to IIS (ProGet). It's certainly something you need to configure in Java, but I'm afraid I have no idea how to do that -- it does seem to be a common question (found on Stack Overflow -- https://stackoverflow.com/questions/9210514/unable-to-find-valid-certification-path-to-requested-target-error-even-after-c)

      After you fix that, you could probably make an asset directory. Please let us know -- it would be nice to document!

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Proget: Gradle connector?

      Hi @mcascone ,

      I'm almost certain that you can just set up a Maven feed/connector for this purpose -- please let us know, I'd love to update the docs to clarify.

      You probably won't be able to "see" the packages via search (this requires an index that many repos don't have); you'll only be able to navigate to artifacts directly.

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Access prereleases? Proget 6.0.9

      Hi @janne-aho_4082 ,

      would it be possible to cache the authentication request and LDAP response for a short time

      That definitely seems possible, but that's the sort of thing we'd want to implement in the "v4" of this directory provider (not as a patch in a maintenance release). I meant to link to that last time, but here it is: https://docs.inedo.com/docs/installation-security-ldap-active-directory#ldap-ad-user-directories-versions --- but v4 is a little while out.

      switching from account credentials to api keys wouldn't happen over night

      We definitely recommend this path going forward, in particular from a security standpoint. Generally a smaller attack surface in case the API key gets leaked (compared to LDAP credentials).

      posted in Support
      atripp
    • RE: Access prereleases? Proget 6.0.9

      Hi @janne-aho_4082 ,

      Looking at your CEIP sessions, there's a lot of factors going on.

      The biggest issue is that your LDAP response is incredibly slow. We can see that a basic query to [1] find a user is taking 500-900ms, and a query to [2] find user groups is taking upwards of 7500ms. This is compounded by thousands of incoming requests, thousands of outgoing requests, relatively slow download times, and minimum hardware requirements. This all yields different/unpredictable performance, which is why you're seeing such varying results.

      All told, it looks like ~70% of the time is going to LDAP queries (each request does the find user query), ~18% is going to outbound connections, and ~8% is going to the database (most to the "get package metadata" procedure).

      There's a few "overload" points, where the OS is spending more time managing multiple things than it is doing those things, and increasing CPUs ought to help.

      So, at this point, I would recommend:

      • switching to a "Feed API Key" instead of a username:password key or "Personal API Key"
      • enabling metadata query caching on the connector

      This should yield a significant performance improvement overall. We can consider new ways of caching things in v4 of this directory provider.... but if you have this kind of latency on your LDAP queries, it's best to just use Feed API keys...

      Alana

      posted in Support
      atripp
    • RE: Can the Jenkins ProGet plugin upload to an Asset dir?

      Hi @mcascone ,

      The ProGet Jenkins Plugin is designed for creating and publishing universal packages, so it won't work for assets.

      The Asset Directory API is really simple though, and a simple PUT with curl or Invoke-WebRequest will do the trick. Hopefully that's easy enough to implement :)
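      For example, a minimal Python equivalent of that PUT might look like this (server URL, directory name, and API key are placeholders; the /endpoints/{directory}/content/{path} route and X-ApiKey header follow ProGet's Asset Directory API):

```python
import urllib.request

def upload_asset(base_url: str, directory: str, path: str, data: bytes, api_key: str) -> int:
    """PUT a file into a ProGet asset directory via the
    /endpoints/{directory}/content/{path} route, authenticating
    with an X-ApiKey header."""
    req = urllib.request.Request(
        f"{base_url}/endpoints/{directory}/content/{path}",
        data=data,
        method="PUT",
        headers={"X-ApiKey": api_key, "Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:  # raises on 4xx/5xx
        return resp.status

# e.g. upload_asset("https://proget.example.com", "MyAssets", "tools/app.zip", b"...", "my-api-key")
```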

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Pulling variable values into PowerShell .ps1 scripts and potentially credentials

      Hi @moriah-morgan_0490 ,

      We are working on documenting this all much better, so thank you for bringing it up. But the scenario you describe (using Otter as a script repository/execution center) is definitely possible and is something we are actively working on improving and making easier.

      Otter can pass in variables, read your existing Comment-Based Help, and you can then build Job Templates around the variables. We have a tutorial about that here: https://docs.inedo.com/docs/otter-create-a-gui-for-scripts-with-input-forms

      As for Secure credentials, no problem. Behind the scenes, this is handled through the $PSCredential function in OtterScript, and now that I write this, I think we should add support to Job Templates for this.

      Anyways, after uploading a script named MyScriptThatHasCredentials.ps1 to Otter, and creating a SecureCredential in Otter named defaultAdminAccount, you would just need to write a "wrapper" in OtterScript for it:

      PSCall MyScriptThatHasCredentials
      (
        User: $PSCredential(defaultAdminAccount)
      );
      

      Do you want the Otter service and/or Inedo Agent to run as a gMSA? Sure, there's no problem as long as there's access; https://inedo.com/support/kb/1077/running-as-a-windows-domain-account

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Access prereleases? Proget 6.0.9

      @janne-aho_4082 thanks!

      The timing might be okay then.

      npmjs.org will most certainly be faster -- not just because they have a massive server farm compared to you, but because their content is static and unauthenticated.

      ProGet content isn't static -- it also needs to proxy most requests to the connectors, to answer questions like "what is the latest version of this package?" Turning on metadata caching in the connector will help, but I would still expect slower response times.

      posted in Support
      atripp
    • RE: Access prereleases? Proget 6.0.9

      @janne-aho_4082 great, thanks!

      Do you know what the old times were? I really don't know if 2-3 minutes for installing 1400 packages is unreasonable... that doesn't sound so bad to me, but I don't know.

      If it's easy to try the older version, we can try to compare CEIP data on both.

      Oh and the easiest way to find your CEIP data is from the server/machine name... but it's probably best to submit to the EDO-8231 ticket since it's perhaps sensitive data.

      posted in Support
      atripp
    • RE: Access prereleases? Proget 6.0.9

      @janne-aho_4082 I'm not really sure what always-auth does, but my guess is that it first tries a request with no authorization, receives a 401, then sends the authorization header. Either way, it's probably unrelated; that initial 401 should be really quick if anonymous access isn't enabled.

      rc.4 seems to only have PG-2094 and PG-2098... both unrelated to LDAP, and pretty minor. And you'll now have a "copy" button on the console :)

      posted in Support
      atripp
    • RE: 500 Error after upgrading to 6.0.8

      Hi @albert-pender_6390 ,

      This is an internal Windows error, and it happens when another process (usually a UI window) has an open session to a hive within the Windows Registry. It's a long-standing bug/issue with COM+ services (which Active Directory uses), and is not really ProGet-specific.

      It's a side-effect of the ProGet upgrade process, which often stops/starts Windows services and IIS application pools. Ultimately, restarting will fix it (as you've noticed), but changing "Load User Profile" to "true" on the application pool is also known to fix it.

      Best,
      Alana

      posted in Support
      atripp
    • RE: BuildMaster Artifacts Overview Filtering

      @paul_6112 well, as it turns out... this was actually trivial to fix 😅

      It will make it into 7.0.19, scheduled for Feb 25th.

      posted in Support
      atripp
    • RE: Error: HttpException Server Too Busy

      Hi @galaxyunfold ,

      The "Timeout expired" errors are indeed a result of database or network connectivity issues. It's possible to create connector loops (A -> B -> C -> A) that will yield this behavior as well.

      The "server too busy" error is an internal IIS error, and it can be much more complicated. It's rarely related to load; it's more often related to performing an operation during an application pool recycle. Frequently crashing application pools will see this error often.

      There are a lot of factors that determine load, and how you configure ProGet (especially with connectors and metadata caching) makes a big difference. But in general, load starts to become a concern at around 50 engineers, and at 250+ engineers it makes sense to go load-balanced / high-availability.

      Here is some more information: https://blog.inedo.com/proget-free-to-proget-enterprise

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: BuildMaster Artifacts Overview Filtering

      Hi @paul_6112 ,

      Just FYI, selecting the "Application:" filter isn't refreshing/cascading to the list of releases or builds.

      As a workaround, you can select the Application, then hit the refresh button in your browser. This is a nontrivial update, but one we'll get fixed via BM-3777 in an upcoming release.

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Error: HttpException Server Too Busy

      Hi @galaxyunfold ,

      Based on the symptoms you're describing, it sounds like the problem is load-related. How many developers/machines are using this instance of ProGet?

      When you have a handful of engineers doing package restores with tools like npm, it's similar to a "DDoS" on the server -- the npm client tool makes hundreds of simultaneous requests to the server. And the server then has to make database connections, and often connections out to npmjs.org, etc. The network queues get overloaded, and then you get symptoms like this.

      See How to Prevent Server Overload in ProGet to learn more.

      Ultimately, load-related issues come from a lack of network resources, not CPU/RAM. You can reduce connections (throttle end-users, remove connectors, etc.), but the best bet is going with a high-availability / load-balanced configuration.

      I would also recommend upgrading, as there have been a lot of performance improvements in the 4-5 years since ProGet v4 was released.

      Alana

      posted in Support
      atripp
    • RE: Whitelist npm packages licenses

      Hi @p-boeren_9744 ,

      I added support for npm packages to treat SEE LICENSE IN as an embedded file license via PG-2085.

      It now looks like this, and blocks/allows the package:

      [screenshot]

      This will be released in this week's upcoming maintenance release.

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: How to provide ProGet feed name in Azure DevOps Docker-Compose task?

      Hi @dustin-davis_2758 ,

      I'm not really sure - I'm not familiar enough with ADO Docker Compose to help :/

      The error occurs because the container repository (image) name is incorrect; it should look like proget.initech.com/feedName/initech/repositoryName

      Generally you put this in your docker-compose.yml file, like this:

      https://docs.inedo.com/docs/docker-compose-installation-guide#example-docker-compose-configuration-file

      So that would be the first place I would look. If you have the proper image name in there, then I guess ADO might be doing something different?
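      As a quick sanity check, the expected image-reference shape can be assembled with a tiny helper (entirely hypothetical; names mirror the example above):

```python
def proget_image_name(host: str, feed: str, repository: str, tag: str = "latest") -> str:
    """Assemble a ProGet container image reference: host/feed/repositoryName:tag.
    The repository part may itself include a group, e.g. initech/repositoryName."""
    if not all((host, feed, repository, tag)):
        raise ValueError("host, feed, repository, and tag are all required")
    return f"{host}/{feed}/{repository}:{tag}"

# proget_image_name("proget.initech.com", "feedName", "initech/repositoryName")
#   -> "proget.initech.com/feedName/initech/repositoryName:latest"
```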

      Let us know what you find!

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Proget feed Nuget Package unavailable

      Hello @nmorissette_3673 ,

      I can't think of anything in ProGet that could yield this behavior (especially for one particular package), and I can't reproduce it with that package, so this is tricky to debug.

      Please try reproducing with a fresh new feed.

      1. Create NuGet Feed (nuuget), add connector to NuGet.org
      2. Navigate to /feeds/nuuget/Puma.Security.Rules/2.4.7
      3. Try to download package file /nuget/nuuget/package/Puma.Security.Rules/2.4.7

      If that works, then there's some difference between the two feeds.

      If it doesn't work, it's likely something sitting between the client and ProGet (which would be weird, but maybe a content filter/proxy).

      Let us know what you find!

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Proget feed Nuget Package unavailable

      Hi @nmorissette_3673 ,

      That's odd, but I wonder if the package file is deleted from disk, and it's a cached package?

      If that's the case, you should see a very specific message about it, like "Could not find a part of the path 'c:\LocalDev\ProGet\PackageStore.nugetv2\F1\Puma.Security.Rules\Puma.Security.Rules.2.4.7.0.nupkg'.".

      Otherwise, here's what I did to reproduce:

      1. Create NuGet Feed (nuuget), add connector to NuGet.org
      2. Navigate to /feeds/nuuget/Puma.Security.Rules/2.4.7
      3. Try to download package file /nuget/nuuget/package/Puma.Security.Rules/2.4.7

      Of course, it was no problem. If I delete the package on disk, then I get a 404 error.

      If I "delete cached package" from the Web UI and then download again, it's fine.

      Hope this helps...

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Whitelist npm packages licenses

      Well, that's an interesting way to specify an embedded license file. I don't know if it's a convention or a specification, but it seems to be a new way of handling it. It's at least somewhat documented now, which is good: https://docs.npmjs.com/cli/v8/configuring-npm/package-json#license

      Anyways, we already handle this for NuGet packages using a URL convention like this:

      • No license - packageid://Aspose.Words/21.9.0
      • File license - package://Aspose.Words/21.9.0/License\Aspose_End-User-License-Agreement.txt

      Those will be defaulted in the fields if the package specifies no license or a file license.
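      That defaulting convention could be sketched like this (an illustrative helper, not ProGet's actual code):

```python
from typing import Optional

def default_license_url(package_id: str, version: str, license_file: Optional[str]) -> str:
    """Build the default license identifier: packageid:// when the package
    declares no license, package://.../path when it embeds a license file."""
    if license_file is None:
        return f"packageid://{package_id}/{version}"
    return f"package://{package_id}/{version}/{license_file}"

# default_license_url("Aspose.Words", "21.9.0", None)
#   -> "packageid://Aspose.Words/21.9.0"
```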

      Not sure if it works for npm packages now, but it'd be relatively easy to adopt that convention, and then suggest it when the "license" field starts with "SEE LICENSE IN"...

      Anyways we'll investigate this and update in a day or two.

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Proget 6.0.4: Unauthorized: Access is denied due to invalid credentials.

      @araxnid_6067 thanks, glad it worked! I'll work on updating the documentation about this topic :)

      posted in Support
      atripp
    • RE: kubernetes scanner not showing results

      Hi @cronventis ,

      Great find - that seems to explain what we're seeing: containerd reports on containers differently than dockerd. So, we'll just search for container images based on configurationblob_digest OR image_digest 🤷

      This change was trivial, and will be in the next maintenance release (or available as a prerelease upon request) as PG-2081 - scheduled release date is Feb 11.

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Proget 6.0.4: Unauthorized: Access is denied due to invalid credentials.

      Hello,

      Are you doing an API call using PowerShell or something to delete packages? Did this happen after a recent upgrade to ProGet v6?

      This post may help: https://forums.inedo.com/topic/3418/upgrading-from-5-to-6-causes-api-key-to-stop-working/2

      We can definitely consider adding the API-key authentication back - we didn't realize it worked in the first place :)

      Please let us know if this is the issue.

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Proget 6.0.4: can't remove docker image blob via API

      @araxnid_6067 is it in the [DockerBlobs] table? If not, then ProGet doesn't know about it, and it's safe to delete.

      Otherwise, it might still be referenced by a manifest, but ProGet doesn't have that relation in the database.

      You'd have to parse [ManifestJson_Bytes] to find out. If you're comfortable with SQL, you could do a "hack" query that converts that column to a VARCHAR, then use OPENJSON or a LIKE query to search all manifests for that digest.
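      Outside of SQL, the equivalent check against a single manifest could be scripted; this sketch assumes the standard Docker image manifest layout (a config digest plus layer digests):

```python
import json

def manifest_references(manifest_bytes: bytes, digest: str) -> bool:
    """Return True if a Docker manifest references the given blob digest,
    either as its config blob or as one of its layers."""
    manifest = json.loads(manifest_bytes)
    digests = [manifest.get("config", {}).get("digest")]
    digests += [layer.get("digest") for layer in manifest.get("layers", [])]
    return digest in digests
```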

      For what it's worth, that's essentially what ProGet does during feed cleanup.

      posted in Support
      atripp
    • RE: Upgrading from 5 to 6 causes API Key to stop working

      @Stephen-Schaff I'm afraid I don't... it's a bit tricky to use, since you need to request a bearer token first and then send that in a header value.

      https://docs.docker.com/registry/spec/auth/token/#how-to-authenticate

      posted in Support
      atripp
    • RE: ProGet: Feature Request: Promoted/Repackaged flag on package listing

      Thanks @mcascone , I also added this to our "Promotion / Repackaging Visibility & Permissions Rethinking" task - sounds like something we can consider :)

      posted in Support
      atripp
    • RE: ProGet: Feature Request: Allow disabling repackaging OR promotion while keeping the other

      Hi @mcascone ,

      I admit this can be confusing and unintuitive; these features were added separately over time, and they weren't originally designed for how they're used today. We need to rethink/redesign this based on the use cases.

      I'm going to add this thread under the "promotion/repackaging workflows" topic for our next major version of ProGet. Once we know what we want to do, we may be able to implement some changes as a preview feature in v6.

      FYI, this is exactly how the big API Key changes and feed/package usage instructions came about!

      https://forums.inedo.com/topic/3204/proget-feature-request-api-key-admin-per-user

      So stay tuned :)

      posted in Support
      atripp
    • RE: Upgrading from 5 to 6 causes API Key to stop working

      Hi @Stephen-Schaff ,

      The API key changes in ProGet v6 involved changing some of the authentication code, so seeing bugs/regressions where a connected system (build/CI server) reports authentication errors is not unexpected.

      Based on the error message you're sending, it looks like you were using an X-ApiKey header to authenticate to the Docker registry API. That actually wasn't supposed to be supported (the Docker API requires token-based authentication), and it must have only worked because of a bug / unclear specification in our old authentication code...

      So the options from here:

      1. Allow anonymous access to view the feed
      2. Modify your script to use Docker's token-based authentication
      3. Rollback to v5

      We can consider adding/documenting support for the X-ApiKey header in the Docker API, but it's not possible at the moment...

      posted in Support
      atripp
    • RE: kubernetes scanner not showing results

      Hi @cronventis, just wanted to let you know that this is complicated, and it's not something we can quickly debug/diagnose.

      Based on our analysis, the data being returned from your Kubernetes API is different than our instance, and the instances we've seen in the field. Our instance's API is returning the configuration digest, but it looks like your instance is returning the manifest digest.

      Which one is correct? Why is your instance doing that? Why is ours doing this? It's a mess 🙄

      Code-wise, it would be a trivial fix in ProGet to make. Basically we just change this...

      var data = await new DB.Context(false).ContainerUsage_GetUsageAsync(Feed_Id: this.FeedId, Image_Id: this.Image.ContainerConfigBlob_Digest);
      

      ... to this...

      var data = await new DB.Context(false).ContainerUsage_GetUsageAsync(Feed_Id: this.FeedId, Image_Id: this.Image.Image_Digest);
      

      ... except that would break our instance and the others that return configuration digests.

      We're tempted to "munge" the data results (basically just concatenate both database resultsets), but it would be really nice to know (1) which is correct and (2) why one instance does one thing.

      Anyways that's our latest thought. Do you have any insight into this? This is just so bizarre.

      Well, we'll keep thinking about it on our end as we have time. Just wanted to give you a sitrep.

      Cheers,
      Alana

      posted in Support
      atripp
    • RE: Proget 6.0.4: can't remove docker image blob via API

      Hi @araxnid_6067 ,

      This behavior is expected, and it's handled via Garbage Collection for Docker Registries:

      Unlike packages, a Docker image is not self-contained: it is a reference to a manifest blob, which in turn references a number of layer blobs. These layer blobs may be referenced by other manifests in the registry, which means that you can't simply delete referenced layer blobs when deleting a manifest blob.
      This is where garbage collection comes in; it's the process of removing blobs from the package store when they are no longer referenced by a manifest. ProGet performs garbage collection on Docker registries through the "FeedCleanUp" scheduled job.

      So basically, it will get deleted when the corresponding FeedCleanUp job runs. It defaults to every night, and you can see the logs on the Admin > Manage Feed page.
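      The garbage-collection pass described above is essentially mark-and-sweep; a simplified model (digests are invented):

```python
def collect_garbage(blobs, manifests):
    """A blob is kept while at least one manifest references it (as its
    config blob or one of its layers); unreferenced blobs can be deleted."""
    referenced = set()
    for manifest in manifests:
        referenced.add(manifest["config"])
        referenced.update(manifest["layers"])
    return [digest for digest in blobs if digest not in referenced]

# Example: "sha256:ccc" is referenced by no manifest, so it's collected
deletable = collect_garbage(
    ["sha256:aaa", "sha256:bbb", "sha256:ccc"],
    [{"config": "sha256:aaa", "layers": ["sha256:bbb"]}],
)
# deletable == ["sha256:ccc"]
```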

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: Buildmaster odd issue - too many open files

      Hi @colin_0011 , that certainly is an odd issue!

      We've never seen it before, but it's coming from the library we're using (libgit2). I don't really know what it means or what's causing it (is it the number of files in the repository, etc.?), but I have a couple of ideas:

      1. Restart the server
      2. Clear the GitWorkspaces directory (C:\ProgramData\BuildMaster\Temp\Service\GitWorkspaces)
      3. Try using git.exe instead of the built-in library

      You can do #3 by setting the GitExePath parameter on the operation, or by configuring a $DefaultGitExePath variable at the server or system level in BuildMaster; this will force all Git source control operations to use the CLI instead of the built-in library.

      It's possible the bug was already fixed in a newer version of the library. What version of BuildMaster are you using?

      posted in Support
      atripp
      atripp
    • RE: 500 upon GET-ing package via node/yarn (..not a valid Base-64 string..)

      Hi @robert_3065 ,

      Glad it's working!

      Good point about the error message; it's in a kind of general place, so I just replaced that unhelpful base64 decoding message with this (via PG-2069):

      string userPassString;
      try
      {
          userPassString = context.Request.ContentEncoding.GetString(Convert.FromBase64String(authHeader.Substring("Basic ".Length)));
      }
      catch (FormatException)
      {
          throw new HttpException(400, "Invalid Basic credential (expected base64 of username:password)");
      }
      

      Not the perfect solution, but better than it is now!

      Cheers,
      Alana

      posted in Support
      atripp
      atripp
    • RE: 500 upon GET-ing package via node/yarn (..not a valid Base-64 string..)

      Hi @robert_3065,

      Based on this, I think the _auth token in your .npmrc file isn't correct. That is the token sent to http://OURSERVER/npm/npm_internal/, and it's expected to be the base64-encoded form of api:apikey or user:password.
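
      For example, here's how you could build that _auth value yourself. This is just a sketch mirroring the server-side decoding; "my-api-key" is a placeholder for your real API key or password:

      using System;
      using System.Text;

      class NpmAuthToken
      {
          static void Main()
          {
              // Placeholder credentials: substitute your real API key (api:KEY)
              // or user:password pair before encoding.
              var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("api:my-api-key"));

              // Put the printed value in .npmrc as: _auth=<token>
              Console.WriteLine(token);
          }
      }
      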

      Here's some more information about it:
      https://docs.inedo.com/docs/proget-feeds-npm#npm-token-authentication

      Npm auth isn't so intuitive, unfortunately :/

      Alana

      posted in Support
      atripp
      atripp
    • RE: Latest tag not applied (but not consistently)

      @Stephen-Schaff great to hear! And I guess another way to do it would be enabling/disabling the semver restrictions on the feed

      Let us know if it keeps happening and you can find a pattern; we'll see if we can identify what might be the cause of it.

      posted in Support
      atripp
      atripp
    • RE: chocolatey connector healthy but shows no packages

      @mcascone

      existing connection forcibly closed

      I'm afraid this is more of the same; there's some sort of network policy that's blocking this connection. It could be the way your laptop is configured, but maybe it's also happening at the HTTPS/SSL level? Anyways, the remote server (not proget.inedo.com, but some intermediate) is disconnecting at some point.

      The ProGet-5.3.43.upack would really only be useful for a manual installation, but it also might be bad/corrupt/incomplete. You could try unzipping it to see.

      Oh... I probably should have mentioned this before, but we have premade, single-exe offline installers for specific versions of ProGet: https://my.inedo.com/downloads/installers

      Here is some more information about them: https://docs.inedo.com/docs/desktophub-offline

      posted in Support
      atripp
      atripp
    • RE: chocolatey connector healthy but shows no packages

      These 403 errors are all coming from your proxy server (firewall); unfortunately we/you have no visibility on that.

      But it's clear that some requests are allowed and others aren't. Maybe it doesn't like requests that download a file called .upack. Maybe it doesn't like the agent header. Maybe it tries to scan/verify contents with a virus check? It's a total guess 🙄

      From here, your best bet is to check with IT to see if they can inspect the firewall/proxy logs.

      posted in Support
      atripp
      atripp
    • RE: Latest tag not applied (but not consistently)

      @Stephen-Schaff said "Any ideas on how I can get the latest tag to be auto applied?"

      The "virtual" tags are recomputed when a tag is added, so if you try tagging your image 1.0.2 (or something) and then delete that tag, you should see 1, 1.0, and latest all applied to that image.

      posted in Support
      atripp
      atripp
    • RE: chocolatey connector healthy but shows no packages

      Hi @mcascone,

      Our products are built with .NET 4.5.2, which uses the Windows certificate chain.

      I suspect that ZScaler is replacing the certificate, and that's causing a trust problem. You could try installing the ZScaler certificate directly in the Windows certificate store, and there are some registry tweaks/hacks that might make it work. Unfortunately I don't have any specifics on what to try.

      You should see the same errors if you log in as the service account user and try to visit the site in IE or Edge. PowerShell would also exhibit the same errors.

      In any case, I would search for something like "ZScaler certificate TLS error Windows" and whatnot, and hopefully find some specific things to try...

      Best,
      Alana

      posted in Support
      atripp
      atripp