@jim-borden_4965 that doesn't sound right; can you create a new forum post (don't want to clutter this anymore) with some more details/etc, and then we can reply/investigate. Thanks
Posts made by stevedennis
-
RE: How to delete packages with the ProGet REST API?
-
RE: Support for hybrid SAML and Local User Authentication
Hi @scusson_9923, this was implemented in ProGet 2022 :)
-
RE: Problems with Clair integration for scanning docker images
I haven't seen that error before, but based on the text ("ProxyAuthenticationRequired for layer"), I think that Clair is trying to download an external layer?
Some container image manifests (especially Windows images, though not exclusively) will point to a URL outside of the registry. This is often done for licensing reasons. What this means is that Clair (or the docker client) downloads those layers from an external URL instead of from ProGet.
I'm not familiar enough with container scanners (Clair) to know how they search for vulnerabilities; I believe it's done by looking at the packages installed on the system. Log4j is not a package installed on the system (I think), but a library used in some applications.
Cheers,
Steve -
RE: Help with Git raft in Otter
Just to let you know, Otter v2022 has been released; from here it'd be best to start a new thread if you have a specific issue -- we'll be happy to help!
Steve
-
RE: How to use custom endpoint instead of localhost for Proget Feeds
@nkumari_3548 you'll likely want to check with your network team for help on this one
ProGet is a standard web application, and will listen for all traffic on the port you've configured. If you want to access ProGet at `proget.my-company.corp`, then you'll need to have someone configure the appropriate domain/DNS records, etc. You may also need firewall changes and a certificate for HTTPS.
https://proget.inedo.com/ is our public instance of ProGet. It's not intended for direct access by users, but by tools like docker or the Inedo Hub. You can't log in to it.
-
RE: Vulnerabilities: finding affected consumers
@sebastian that's awesome, great you could figure it out!
The `Package_Versions` field is supposed to be some kind of range specifier (e.g. something like `[3.4.4-3.4.4.8)`, but I don't know the format offhand). However, we've also never seen it in the wild in any dataset; it's always just a CSV of versions.
Regarding licensing, that information is not really in the database. It's parsed from the manifest file (e.g. `.nuspec`) in the front-end. That file is stored in the database, but it's not practical to use in SQL. We've talked about building a kind of job that would normalize that into a `PackageLicenses` table, and then allow custom reporting (or show how a consumer is using it).
At some point, we'd love to get a copy of your data (a database backup if possible) so we can see some real-world consumers/consumption and build some pages from it. We do have some ideas for how to make this look/work better in ProGet, but seeing real data would be helpful. That development won't start until Q2, so maybe we'll reach out in a month or so and ask :)
-
RE: Vulnerabilities: finding affected consumers
@sebastian great, let us know what you find!
When it comes to reporting/reading data, no problem directly querying the tables. We definitely support that, and some folks have quite advanced reports that tie together various systems.
I'd just go directly to SQL Server for that, and do a `SELECT` on the appropriate views/tables. Let us know what questions/issues you have!
-
RE: Vulnerabilities: finding affected consumers
Hi @sebastian, thanks for the feedback!
These both could be pretty complex (especially the API) and would thus end up as strategic, roadmap-level items (as opposed to minor enhancements); we already have ProGet v2022 mostly planned out, so it would be a while before we could consider them.
How are your relational database / SQL skills? If you're comfortable exploring the data in there, it might give you some insight / ideas into what we can do as a low-risk, minor enhancement that would add a lot of value. Once you're able to see what is possible with the existing data, then it might be a view we can do.
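If you do end up exploring in SQL, here's a minimal sketch of the kind of join you'd write; note that the table and column names below are simplified stand-ins (the real ProGet schema differs), so treat this as a pattern rather than a copy/paste query:

```python
import sqlite3

# Hypothetical, simplified schema: the real ProGet tables (Vulnerabilities,
# FeedVulnerabilitySources, PackageDependents) have different columns, so this
# only illustrates the join pattern, not the actual schema.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Vulnerabilities (Vulnerability_Code TEXT, Package_Name TEXT);
    CREATE TABLE PackageDependents (Package_Name TEXT, Dependent_Application TEXT);
    INSERT INTO Vulnerabilities VALUES ('CVE-2021-44228', 'log4j-core');
    INSERT INTO PackageDependents VALUES ('log4j-core', 'billing-service');
    INSERT INTO PackageDependents VALUES ('log4j-core', 'reporting-service');
""")

# "Which applications consume a package that has a known vulnerability?"
rows = con.execute("""
    SELECT v.Vulnerability_Code, d.Dependent_Application
      FROM Vulnerabilities v
      JOIN PackageDependents d ON d.Package_Name = v.Package_Name
     ORDER BY d.Dependent_Application
""").fetchall()
print(rows)
# [('CVE-2021-44228', 'billing-service'), ('CVE-2021-44228', 'reporting-service')]
```

The same `SELECT`/`JOIN` shape works directly against the SQL Server views/tables once you've found the real column names.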
We don't have a "guide" on how to query the tables, but we can assist if you have questions. The database columns should be familiar; the tables of interest are `Vulnerabilities`, `FeedVulnerabilitySources` (links feeds + vulnerability sources), and `PackageDependents` (package consumers).
Thanks,
Steve -
RE: Vulnerabilities: finding affected consumers
Hi @sebastian ,
Your current workflow sounds like it's the best approach for now, albeit cumbersome; there's also the possibility of directly querying the database, and generating a kind of report with your own tooling.
This is of course something we can consider improving, but it's hard to guess where without knowing more details and having real world datasets. So definitely an opportunity for a feature request / collaboration.
Let us know as you solidify your processes and learn what data you find valuable!
Thanks,
Steve -
RE: ProGet Retention Rules: option to keep package statistics
Hi @sebastian,
There are three different package records:
- Downloads ("Record individual downloads for advanced statistics")
- Deployments ("Record where packages have been deployed")
- Usage ("Record where packages are being used") - i.e. package consumers
You can add these records for remote packages (or packages that don't even exist on the feed yet), and none of these records should be purged when you delete a package (manually or via retention). If you delete a package, and then add it back, those "old records" will reappear.
The "Download Count" is part of the "server-side package metadata", along with Listed/Unlisted Status, Tags, etc. This is one reason why "Download Count" and "Number of Downloads" can vary.
hope that helps,
Steve -
RE: ProGet Extension: Error initializing extensions manager
@can-oezkan_5440 just an update: we plan to ship this in the next maintenance release of ProGet as PG-2111
-
RE: Permissions only work when set for specific user, not a group (LDAP)
@kichikawa_2913 I'm wondering if this might be a regression with the preview feature, but I can't imagine how. I have one other idea, too...
I used the "test privileges" function and it shows that the group has View/Download permissions.
Can you clarify this? The "test privileges" function should only work with a username, not a group name. Could you share what happens when you:
- Have a specific user navigate to a package in a NuGet feed, and then try to download it from UI? Is there a specific message body you see? (outside of 403)
- Enter that same username in the "test privileges" with that particular feed? What are all the permissions you see?
After doing those, the last thing I would try is to revert to the 6.0 behavior, and see if the problem still occurs. At least that will tell us where to look...
-
RE: License blocking vs Vulnerability blocking behaviour
Great! Our recommended three-feed workflow (unapproved, approved, internal) is similar, and keeps the third-party packages in the first two feeds. This way, you can scan for vulnerabilities much more easily.
As I understand it though, to get full coverage the pgscan tool needs to be installed on every build server, and the pgscan publish... command needs to be implemented in every build?
This is correct. Dependency resolution is complex and often nondeterministic, so it can only really happen at build-time. Hopefully you can templatize pretty easily :)
Cheers,
Steve -
RE: Permissions only work when set for specific user, not a group (LDAP)
Hi @kichikawa_2913 ,
The NuGet client's behavior is based on NuGet.org, where no authentication is ever required to view/download packages. As such, it doesn't pass the API key when doing those queries; instead, you can use a username of `api` and the password of your API key.
Based on the issue though, it sounds like ProGet is unable to resolve the groups; I would use the "test privileges" function on the Tasks page to verify this. That will show you if the username can download packages or not.
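For what it's worth, the `api` username convention is just standard HTTP Basic authentication; here's a quick sketch of constructing the header yourself (the feed URL is hypothetical):

```python
import base64
import urllib.request

def basic_auth_header(api_key: str) -> str:
    # ProGet accepts a username of "api" with the API key as the password,
    # which is sent as a standard Basic credential: base64("api:<key>")
    token = base64.b64encode(f"api:{api_key}".encode()).decode()
    return f"Basic {token}"

# Build (but don't send) a request; the feed URL below is a made-up example.
req = urllib.request.Request(
    "https://proget.example.com/nuget/internal-nuget/v3/index.json",
    headers={"Authorization": basic_auth_header("secret")},
)
print(req.get_header("Authorization"))
# Basic YXBpOnNlY3JldA==
```

Seeing the exact header value can help when debugging a proxy or client that mangles credentials.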
The most common reason that groups aren't resolving is that the member is not directly in the group (i.e. they're in a group which is a member of the group), and you don't have recursive groups enabled; do note that this is really slow on some domains.
Cheers,
Steve -
RE: License blocking vs Vulnerability blocking behaviour
Good question; it sounds like you're creating a Package approval workflow, and are on the right track to thinking about "package governance".
Your observations about licenses and vulnerabilities are correct, and you could simply apply the same vulnerability source to each feed, if you want to use that feature. But I generally suggest putting it on the `approved-packages` feed (using our workflow above).
A couple of important things to note:
All packages have a license (even "no license" is a license), and a package with an unacceptable license (i.e. one you explicitly block) is always unacceptable. There are really no exceptions to this (maybe the CEO or a legal officer could override this rule?), and if someone accidentally uses an unacceptable license, it presents an immediate legal risk that you should remediate.
Very few packages have vulnerabilities, and those that do are usually acceptable to use. Most vulnerabilities won't impact your application, and those that do are usually easy to work around or aren't severe enough to warrant action. Severe vulnerabilities (i.e. where remediation is needed, like log4shell) will likely come up once or twice a year for you... if that.
When a severe vulnerability is discovered (i.e. one that you wish to block), tracking across feeds isn't really that big of a problem, compared to discovering which applications consumed that package. What I would recommend is investing in configuring package consumers (see section #2 of the article).
Cheers,
Steve -
RE: Access prereleases? Proget 6.0.9
Thanks for the update, @janne-aho_4082
1153 packages yields about 2000 requests to ProGet, and I'm guessing you have a connector to npmjs.org; that will probably yield another 1000-2000 outbound requests (e.g. "what is the latest version of `del`?"). It's possible that some of those proxied requests are timing out, and that's why you're seeing 404s. Hard to say.
Can you try using 6.0.9-rc.3 + 1.13.1-rc.10? This will have both the LDAP user caching (it wasn't fully implemented in 6.0.8) plus the "memberOf" property caching.
As a next step, if you've enabled CEIP, we can track down your sessions and find what's taking so long.
-
RE: BuildMaster Release Status
Hi @paul_6112 ,
The general philosophy that BuildMaster was designed around is this:
- A "Release" is an intended set of changes to production
- A "Build" is an attempt at implementing those changes
- When a "Build" makes it to production (i.e. the final stage in the pipeline), then the changes were applied to production, thus the "Release" has occurred
- If you want to change production again, then you create a new "Release"
This means that, after build of 1.1 is successfully deployed to production (and no rollbacks are needed, etc.), it cannot be released again. You'd need to do 1.2, or so on.
However, in practice users want to deploy 1.1 build 1234 to production, and then 1.1 build 5678 to production.
So with this, we have a few options:
- Create the releases 1.1.1234 and 1.1.5678, and then have build 1 (sounds like you're doing this)
- Create Release 1.1.0 Build 1234, Release 1.1.1 Build 5678, and use a Release Name of 1.1 (this overrides the display in nearly all places)
- Edit the pipeline, and uncheck "Mark the release and build as deployed once it reaches the final stage."; this will give you more control over changing release status
- Don't use releases at all (set Release Usage to None on advanced settings), and have the build number be 1.1.1234, 1.1.5678
Definitely open to some other ideas as well. A few users have had to "work around" this design philosophy, and we'd rather just support it!
-
RE: Access prereleases? Proget 6.0.9
Hi @janne-aho_4082 ,
That version will work on 6.0.8 as well! Please let us know, we're eager to get this improved.
Since there's a lot going on, I want to share a summary...
In some maintenance release of v5, we updated Active Directory libraries. Apparently a side effect for your domain was that querying the "memberOf" property of a LDAP user object is really slow. Who knows why.... it makes no sense. But that's LDAP. Anyways, I guess it must not have been noticed from the front-end? Hard to say.
In v6, we redid API authentication. However, we didn't use the LDAP user cache. We fixed this mostly in 6.0.8, and hopefully fully in 6.0.9-rc.3. It seems like that helped some, but querying the "memberOf" property must still be slow.
We figured out a way to cache the "memberOf" property, and then applied that to InedoCore-1.13.1-rc.10. So hopefully with the user object cached and memberOf property cached, this should be much faster.
Cheers,
Steve -
RE: Access prereleases? Proget 6.0.9
Hi @janne-aho_4082 ,
No problem, and great you can try this so quickly; PG-2096 is available in `6.0.9-rc.3`, and you can install it via the Inedo Hub: https://docs.inedo.com/docs/desktophub-overview#prerelease-product-versions
Cheers,
Steve -
RE: Proget feed Nuget Package unavailable
Unless you blurred out `/feeds` in your screenshot, the URLs are different:
- Server: `/feeds/nuget/Puma.Security.Rules/versions`
- Workstation: `/nuget/Puma.Security.Rules/versions`
Otherwise, I can't help but wonder if there's some rewrite rule, proxy server, or something else that's interfering between your workstation and the server.
Cheers,
Steve -
-
RE: Rename Asset Subfolder
Hi @martin-noack_4528 ,
This isn't currently supported, in part because it wouldn't be possible to do with cloud storage providers. It also doesn't seem to happen that often, so we didn't implement it.
Is this an operation you'd do often? We can consider adding it, though it wouldn't work for cloud storage because, for whatever reason, the only way to "rename" a folder is by moving each item individually (which can take a long time).
Cheers,
Steve -
RE: Null reference exception on nuget package from connector
Hi @claudio_9251 ,
This is most certainly another bug/quirk with Telerik's proprietary feed, and them not following the clear NuGet API specifications.
If it's easy, we'll consider trying to work around it, but otherwise they really need to be following the API specifications. Or they should migrate away from their own server, like Infragistics did ;)
Anyway, if you can share the specific HTTP request that is returning an unexpected result (it's probably some GET query that returns 404?), then we can try that against the same credentials you sent earlier, and attach a debugger to see what's going on, and whether it's an easy fix.
You can find this request by attaching Fiddler, then doing a side-by-side comparison in Visual Studio.
Cheers,
Steve -
RE: Marking packages as deprecated
No problem "resurrecting" topics! We definitely want to hear from users about feedback/feature requests.
We still haven't had anyone else ask for deprecation since this request, but I wonder if there's a better solution to solving your challenges than this feature. It sounds like you want to increase governance of your NuGet Packages, potentially with some sort of compliance in mind.
The `dotnet list package --vulnerable` command is probably not what you want for your organization; NuGet's built-in vulnerability scanning is really limited, in part because it only reports on a fraction of known package vulnerabilities (164 as of today). It also won't block packages that you deem problematic, unlike ProGet's feature.
The same is true with `dotnet list package --outdated` -- it's probably not what you want, because it relies on developers having to know (1) to run the command, and (2) what to do if there's an outdated dependency.
There are better ways to manage third-party packages (see How to Create a Package Approval Workflow for NuGet), and you'd be better served knowing who's consuming outdated packages (see Use Package Consumers to Track Dependencies).
Just some thoughts; like I said, we haven't had any demand for this feature, but these are proven solutions for improving governance of packages as organizations grow/expand their NuGet usage like you are.
Cheers,
Steve -
RE: Null reference exception on nuget package from connector
I believe that Telerik has a buggy/quirky API implementation, and sends invalid/broken metadata. It happens to work in Visual Studio, and I guess an older ProGet.
Anyways, if you can send us some credentials / instructions to connect to your Telerik feed, we will connect and attach a debugger, and trace the bad metadata.
You can email those to support at inedo dot com -- just reference [QA-743] in the subject. And please let us know when you've sent the email, because we don't monitor that address.
Cheers,
Steve -
RE: ProGet: Feature Request: Promoted/Repackaged flag on package listing
Hi @mcascone !
Interesting idea... just brainstorming here :)
Can you share the workflow you're thinking? Like... why do you want to know that a package was repackaged/promoted?
For example...
- I already know that a `-rc.xyz` version has been repackaged, because we only create `-ci.xyz` versions.
- I already know that a package has been promoted, because that's the only way these packages are in a feed
... so this information isn't so helpful to me. Unless a mistake was made, promoted packages are only in certain feeds, and all packages go through the same workflow (repackaging).
Thoughts off the top of my head...
- Promotion records exist in the database, so perhaps easier to show
- Repackaging records are inside the package (I think), so maybe harder to show
Cheers,
Steve -
-
RE: Getting proget to listen on another port
@Michael-s_5143 have you installed using the Inedo Hub?
You can change this in the configuration tab. Otherwise, this is in the Installation Configuration Files.
-
RE: ProGet: Feature Request: lock Repackaging to specific feeds, same as Promotions
@mcascone thanks so much for confirming that! This may be fixed in v6 already, but I created the issue PG-2072 just in case it's not.
We should get it shipped in the next or following maintenance release.
-
RE: ProGet: Feature Request: Customizable Notifications on events
@mcascone ha! I guess I should say "Onedrive" or whatever it's called -- basically a place to version it as simply as possible.
Just FYI --- we're a pretty small team here, and had that same "gathering dust" worry. But our SOPs are a collection of `.txt` and `.ppt` files in one of two places:
- a "training" folder, which new staff use to follow common processes (e.g. the change management process)
- a "program" folder, which describes a process someone owns and is responsible for reporting/improving upon (e.g. the release process)
It was a bit of a culture shift for everyone, but now it's great to "not have to think" about the steps (just jump back to the folder), which also benefits new staff learning a process (or someone covering for another person).
-
RE: Repository for SBOM files?
Hi @harald-somnes-hanssen_2204 ,
Just some random thoughts here...
ProGet has the Package Consumers feature, but that's not quite a traceable BOM.
We largely see BuildMaster's artifacts and metadata serving as the BOM (several customers implemented it like that), though we don't necessarily call it "software bill of materials". We probably should from a marketing/positioning standpoint :)
There is no format/standard for an SBOM file, but an Asset Directory (and its directory- and file-level metadata) could serve as such a repository. Universal Packages could as well, but I would imagine an SBOM would be an XML or JSON file or something.
Steve
-
RE: ProGet: Feature Request: lock Repackaging to specific feeds, same as Promotions
Hi @mcascone
Just to confirm, are you referring to this field?
I didn't get a chance to test (want to confirm I understand first), but the "Target Feed" field should be either:
- a dropdown of all NuGet feeds (when "Promote To Feed" is not set)
- a dropdown of two feeds (the current feed, and the "Promote To Feed")
Is that not the case? If not, let's get that fixed :)
Cheers,
Steve -
RE: ProGet: Feature Request: Customizable Notifications on events
Hi @mcascone ,
Thanks for this suggestion, too! Along the same vein as ProGet: Feature Request: native integration with GitHub to perform automated tasks, this is more "process control/workflow management", and not something ProGet is built for.
If you develop a Manual Package Promotion/Repackaging process (for example, when automation doesn't make sense), then that process should be an SOP document/checklist that lives somewhere (wiki, SharePoint, etc.), alongside other change management processes.
A bit outside our scope, but I always recommend SOPs for organizations of all sizes (even 1). They help with everything from onboarding to consistency to process improvement. It doesn't need to be anything more than a simple `.txt` file on SharePoint with some basic instructions; it can always be improved later.
BuildMaster does have this built-in for CI/CD-related processes, so a lot of the SOPs we see are mostly "follow the pipeline/prompts in BuildMaster", etc.
-
RE: ProGet: Feature Request: native integration with GitHub to perform automated tasks
Hi @mcascone,
Thanks for the feature request; this type of orchestration is something that should happen at the CI/CD pipeline level, in a tool like BuildMaster.
For example, consider our open-source `Inedo.AssetDirectories` .NET library (GitHub, Package).
In the corresponding Inedo.AssetDirectories BuildMaster application, when someone promotes the package to Release (see the Deployment Pipeline), the source code is tagged immediately after the promotion:
```
ProGet::Repack-Package
(
    PackageSource: NuGetLibraries,
    Name: Inedo.AssetDirectories,
    Version: $ReleaseNumber-rc.$BuildNumber,
    NewVersion: $ReleaseNumber
);

... snip ...

# Tag Source Code
# Tag the commit in source control using the id captured in the Build plan
{
    GitHub::Tag
    (
        From: GitHub,
        Tag: Inedo.AssetDirectories-$ReleaseNumber.$BuildNumber,
        Branch: $Branch,
        CommitHash: $CommitId
    );
}
```
You may be able to do this with ProGet's webhooks, but ultimately ProGet isn't an orchestrator (i.e. it doesn't have an execution/automation engine), and you won't get the level of reliability or flexibility needed.
-
RE: Docker Otter 3.0.13 NewJob w Custom Schedule - Invalid chron expression and EditJob w Remediate Drift - Script is required
Hi @shiv03_9800 , thanks for the detailed report on these!
We've identified these as OT-440 and OT-441, and they will be fixed in the next Otter maintenance release (3.0.14), scheduled for tomorrow.
Cheers,
Steve -
RE: Proget: deployment usage api not failing but no usage logged
It seems the package deployment endpoint is `POST`-only, but using the Native API you could use the `Packages_GetPackageDeployments` method. Down the line we can certainly consider adding a `GET` method to the deployment API... if you can help us understand what you/someone will use that info for ;) -
RE: Proget: deployment usage api not failing but no usage logged
Glad that helped @mcascone , I'll try to update the docs when I get a chance!
In older versions, that information used to be displayed more prominently, but unfortunately relatively few users seem to utilize that data, and several have moved to Package Usage. I think someone even built a usage scanner that queried that deployment history table, but we couldn't get much info beyond that...
We don't maintain that Jenkins plugin, but it is open-source and if you're comfortable enough with Java you might be able to add the required header fields?
But you may find upack.exe to be better, since it can "install" packages, which then maintains a history on the server itself.
-
RE: Proget: deployment usage api not failing but no usage logged
Hi @mcascone ,
These are two different features...
Deployment records show which servers a package has been deployed to from that instance of ProGet, at some point in the past. They are usually added when a package is downloaded, and the GET request has a special header parameter or a user agent string.
Package usage is more complex, in that it shows which servers/hosts a package (or container) is currently installed on, regardless of whether it was deployed from ProGet or not. It requires a `PackageContainerScanner` component, which is intended to bridge the gap between servers and packages.
We currently have two scanners: one that interacts with Otter's API (which can return Docker, Debian, RPM, Universal, and Chocolatey packages across your servers) and one for the Kubernetes API (which is just Docker and Helm charts).
Hope that helps,
Steve -
RE: Proget: delete all versions of a package via API
@mcascone yep, that's a good way of putting it!
-
RE: No option for NuGet package path under Advanced Settings
Hi @kichikawa_2913 ,
I think it's this way for "historic reasons" - mostly all the other feed types came later, and it seems no one ever changes these paths or noticed.
Easy enough to make it configurable, but can you share your use case? Why do you want to use something other than a single root path with all of your packages?
Anyway, I added a feature request for this (PG-2006), and we should be able to get it in the next maintenance release.
Cheers,
Steve
-
RE: Proget: delete all versions of a package via API
@mcascone sounds like some great progress!
I got a little confused at the combination of "keep only last 5 versions" plus the 30-day window. From what i've read in the docs, all conditions must pass for the item to be deleted. How do i set up, "keep only the last 5 versions, but when nothing has been requested for 30 days, delete them all"?
We could definitely improve the docs in this area, and you're right that all conditions must pass for items to be deleted. When you add the "keep only the last 5 versions" rule, there will always be, at a minimum, 5 versions of a package.
You can add a second rule, but it operates independently; more like an "OR" than an "UNLESS", I guess. Perhaps you could adjust the time windows a bit?
- Rule 1. "Delete unused versions not requested in last 10 days." AND "Keep only last 5 versions"
- Rule 2. "Delete unused versions not requested in last 60 days."
I would look at your release cycles for guidance. For example, we release our products every two weeks, though maybe we'll skip a week every now and then. So, no -ci package will be needed past 1 month. And as you said, you can just rebuild if needed.
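To make the "OR, not UNLESS" behavior concrete, here's a little sketch; this is not ProGet's actual retention engine, just the combination logic of two independent rules like the ones above:

```python
# Each entry is (version, days_since_last_request), newest version first.
versions = [("1.0.9", 2), ("1.0.8", 12), ("1.0.7", 15),
            ("1.0.6", 70), ("1.0.5", 45), ("1.0.4", 80)]

def rule_1(versions):
    # "Delete versions not requested in the last 10 days" AND "keep only the
    # last 5 versions": both conditions must hold for a version to be deleted,
    # so the newest 5 always survive this rule.
    newest_five = {v for v, _ in versions[:5]}
    return {v for v, days in versions if days > 10 and v not in newest_five}

def rule_2(versions):
    # "Delete versions not requested in the last 60 days" -- no keep-count,
    # so even one of the newest 5 can be deleted once it goes stale enough.
    return {v for v, days in versions if days > 60}

# The rules run independently, so the combined effect is an OR:
deleted = rule_1(versions) | rule_2(versions)
print(sorted(deleted))  # ['1.0.4', '1.0.6']
```

Note that `1.0.6` survives rule 1 (it's in the newest 5) but is still deleted by rule 2, which is exactly the two-rule setup sketched above.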
-
RE: Proget: delete all versions of a package via API
Hi @mcascone ,
We don't have a single API method that can be used to delete all package versions, but the `foreach` loop will do the trick!
I should add that I am doing this as the first stab at an attempt to automatically delete packages from a development feed, when the corresponding branch in github is deleted
I don't know the specifics/details of your use-case, but based on what I read, I'd recommend these guidelines:
- assuming: one GitHub repository, one project, one package you want to release
- use the same package name/group for all packages you create for this project, regardless of branch or development status
- create your "dev" packages using a prerelease version number that has a sort of `-ci.##` version (assuming you use CI to build packages)
- embed the commit id and branch in your upack metadata file, for traceability
- if you want to see which branch the package was created from using the version number alone, add a `+branch` metadata label to the version number for branches (don't do this for `master`)
- use repackaging and promotion to take your `-ci` packages to `-rc` to stable (and the desired feed)
- let retention policies automatically clean up the `-ci` packages
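The `foreach`-style deletion mentioned above can be sketched like this; the delete call is passed in as a function, since the exact ProGet API endpoint/method you'd call depends on your version (so that part is an assumption to verify against your instance's API docs):

```python
from typing import Callable, Iterable

def delete_all_versions(name: str, versions: Iterable[str],
                        delete_fn: Callable[[str, str], None]) -> int:
    """Delete every listed version of a package, one call per version."""
    count = 0
    for version in versions:
        # delete_fn would wrap your actual ProGet delete call; the endpoint
        # and auth details depend on your ProGet version, so verify those.
        delete_fn(name, version)
        count += 1
    return count

# Stand-in delete function that just records what would be deleted:
calls = []
n = delete_all_versions("MyApp.Web",
                        ["1.0.0-ci.1", "1.0.0-ci.2", "1.0.0-ci.3"],
                        lambda name, ver: calls.append((name, ver)))
print(n)  # 3
```

Wiring the same loop to your branch-deleted webhook gives you the "clean up the dev feed" automation described above.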
-
RE: Connector to ghcr.io no longer works
Hi @brett-polivka,
I haven't tried ghcr, but in my experience GitHub is really unstable on the API/integration side when it's outside of their core source/Git hosting functions (e.g. Packages are notoriously buggy), so if your PAT is okay, then this is the most likely scenario.
The Connector Health Check for Docker uses the catalog API (`/v2/_catalog`), and the response should look something like this: https://proget.inedo.com/v2/_catalog. This endpoint is particularly buggy in other repositories (especially ones that require authentication/authorization), so my guess is GitHub introduced a regression or something in their implementation.
Another possibility... `403` is access related, so it could be something on your proxy side; please also check your proxy settings, or something on your network side. -
RE: Test Instance License for ProGet?
We have a lot of customers who maintain a separate test instance of ProGet; while upgrade testing is important of course, a dedicated testing instance also lets you evaluate new ProGet feature usage patterns (such as requiring promotion workflows, etc.), try out new tools (perhaps new version of visual studio, etc.), and conduct training on ProGet usage -- all without risking/disturbing your production instance.
To keep things simple from a licensing perspective, we just treat testing instances as separate instances (and thus they require a separate license key). Many customers use a ProGet Free license for this, but of course not all the features are available. It's rare to see a second license be cost prohibitive, especially given the labor/server costs involved with maintaining a testing instance -- even ProGet Enterprise customers will have full instances just for testing and even DR purposes.
You're right --- Active Directory is usually a pain point; sometimes our code changes (we try to never touch this), but people also want to change their AD configuration (move to LDAPS, etc.). With the wrong settings, you can lock yourself out of your instance. If it's an uncommon / one-off testing case, then a temporary trial license is fine for this.
-
RE: Buildmaster Version 7.0.9 (Build 2) keeps suffering database timeouts
What version did you upgrade from? That could help trace code changes.
Does this happen only on one page (i.e. /releases)? Then it's probably related to a bad query/unexpected data, but you only have 161 releases according to your query. Not so much... an easy way to test that is by adding querystring params: `/releases?ApplicationId=2&Status=Active`, for example.
If it's easy to see queries on your RDS server, then we can see what query might be bad.
Does this happen only intermittently/randomly? If it's only you, then the problem probably isn't database/server load. And even with a ton of people, that's really rare. On very old instances with lots of retention jobs and years/gigs of data, doing an index cleanup is necessary, but I don't think that's the case here. The simplest/fastest thing I can think to do is reboot the BuildMaster server, and hope it goes away (maybe it's a weird underlying network stack thing).
Does this happen all the time (like nothing at all works on the website)? Then it's probably network related.
Thanks,
Steve -
RE: Support for Homebrew in Proget
@yogurtearl_0881 this is the first I've heard of Homebrew... at first glance, it looks like a kind of open-source/hobbyist/alternative package manager for MacOS?
-
RE: Proget dows not activate on free license
Any updates on this issue? Were you able to resolve/fix this problem?
We've had another customer who reported a very similar problem (activation of a key on our new server, using sha256 vs sha1, causes an error in an old ProGet version). But we still can't reproduce it, and now I wonder if it's related to operating system version, or another operating system patch that's missing.
Thanks,
Steve -
RE: pgscan not sending --consumer-package-source
Hi @jeff-peirson_4344 ,
That's definitely what it's intended for, so I think this must be a bug...
I haven't had a chance to reproduce or look any further, but I wanted to at least share the code ASAP...
https://github.com/Inedo/pgscan/blob/master/pgscan/Program.cs#L90
... so please feel free to look/fix yourself, but we'll also take a look in the coming days as well! Just an FYI.
Thanks,
Steve -
RE: BuildMaster Configuration File Deployment
Hi @paul-reeves_6112 ,
This is by design; Configuration File Templates were intended to simplify maintenance of Configuration File Instances by combining common things into the template. Not saying it's the "right" design, but that's the use case.
However, we can definitely consider changing the behavior, to allow you to specify the default Template or a different Template when deploying.
Could I trouble you to share the configuration files (sensitive data redacted of course), so we can see the use-case better? We really want to document the configuration files better in the coming months, and having examples like this will help us tremendously.
We also want to make sure it's the best way to solve the problem. There is also the option of using those ASP/PHP-like OtterScript snippets in configuration files, too. Maybe it's better to put that in your template? I don't know...
Lots of options, and we want to make sure we document how/when to choose which ones.
Thanks,
Steve -
RE: BuildMaster Path Browser
Hi @paul-reeves_6112 ,
The [...] moving is a bit strange and definitely shouldn't happen... but the remote browsing not working is a concern. I can see why it's not working; it has to do with how new agents are constructed behind the scenes in v7.
This feature was originally removed from v7 due to UI/JavaScript challenges, but we ultimately brought it back... but clearly this part was overlooked in testing.
Anyways we'll get it fixed pretty quickly in BM-3716 - thanks for reporting it!
Thanks,
Steve -
RE: Allow login cookies on ProGet to persist across browser restarts
Hi @hwittenborn ,
I can definitely see how this could get annoying; this has been the design of our products for quite a while, mostly for simplicity/security reasons, and there hasn't been much demand for changing it. We're definitely open, so if other users are interested we'll certainly consider it.
Most administrators prefer "short sessions" (i.e. logged out at browser close or with no activity) for their own management simplicity; if we were to add "long sessions" (the "Remember Me" checkbox using persistent cookies), then administrators would need to worry about which users are "logged in", for how long, and terminate those sessions. And then we'd have to add all the features to support that capability - so nontrivial.
Best,
Steve