Hi @steviecoaster ,
You can use pgutil for this:
pgutil settings set --name=Licensing.Key --value=XXXXXX
This is in the Configuration table FYI, which is what pgutil settings does.
Thanks,
Steve
Hi @ds_6782 ,
If you haven't already, I would make sure to check out the docs here:
https://docs.inedo.com/docs/proget/feeds/nuget/symbol-and-source-server
The NuGet client will automatically push both packages when you issue the dotnet nuget push command. You do not need to push them separately.
It looks like, for whatever reason, your NuGet client isn't doing that? We don't see this problem very often - it "just works" - and I'm not really an expert at troubleshooting NuGet client configuration.
But that's where I would start. It looks like the issue is on the client end, and somehow your client is misconfigured to not push the source packages.
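As a quick sanity check, you can also push manually; when a .snupkg is sitting next to the .nupkg, a single push command should upload both (the feed URL and key below are placeholders):
dotnet nuget push ./bin/Release/MyPackage.1.2.3.nupkg --source https://proget.example.com/nuget/internal/v3/index.json --api-key «api-key»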
Thanks,
Steve
Hi @parthu-reddy ,
The default Timeout is 10 seconds, but you can configure it on each connector.
There is no "connection pool" for NuGet -- these are just normal HTTP connections like your browser or any other web api.
Thanks,
Steve
Thanks for the additional information; so if I'm reading it correctly, it's basically a different format/style of index files?
So far as I can tell, it's been around for quite a while (2016?), but doesn't seem to be widely used or mandatory? I didn't look too deeply.
In any case, supporting one style of Debian index files is already a challenge, since we are reimplementing everything from scratch (and not using their tools). As with many package formats, the "real specs" differ a bit from the docs and can only be discovered by studying the source code and/or behavior of clients. Supporting a second format is a big investment, and doesn't make sense unless there's a compelling need (like clients deprecating the old format, etc).
If you're looking to query/manipulate packages, I would suggest checking out pgutil instead:
https://github.com/Inedo/pgutil
Thanks,
Steve
I'm not familiar with deb822 / ubuntu.sources... is that a new kind of format? Is this causing some kind of problem or issue with clients?
Thanks,
Steve
Hi @jw ,
Thanks for the detailed report; this is definitely a bug, and we'll have it fixed in the next maintenance release via PG-2975 -- planning to ship next Friday.
Thanks,
Steve
Hi @parthu-reddy,
If you (i.e. the NuGet client) request the NewtonSoft.Json-13.0.0 package, and ProGet already has that package cached, then ProGet will return the package without ever contacting NuGet.org.
However, the NuGet client will also request a list of "all versions of NewtonSoft.Json" when doing a package restore. I don't know why, that's just what it does.
In this case, ProGet will contact the connector (NuGet.org) and aggregate the remote/local results. If the connector is unreachable (as in the case above), ProGet will log a warning and instead return a list of "all (cached) versions of NewtonSoft.Json"... which will probably be all that's needed.
This is likely why no one has complained/noticed and jobs aren't failing.
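If you're curious to see it, that version list comes from the NuGet v3 "flat container" endpoint; the request the client makes during restore looks something like this (host and feed name are hypothetical):
GET https://proget.example.com/nuget/internal/v3-flatcontainer/newtonsoft.json/index.json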
Thanks,
Steve
Hi @caterina ,
In ProGet 2024+, the Project Analyzer will give some kind of warning/error, and builds 1001+ will be in an "inconclusive" state instead of "Compliant", "Warn", or "Noncompliant".
That said, ProGet 2024+ does include capabilities to auto-archive builds through use of status/pipelines, so that might be worth investigating.
Thanks,
Steve
hey @steviecoaster ,
Great idea! This will be the default starting in the next maintenance release via PG-2965
Thanks,
Steve
Hi @parthu-reddy,
This basically means that ProGet was unable to communicate with nuget.org for one reason or another. More specifically... ProGet didn't receive a timely response.
Usually it's a temporary outage or network issue, and will hopefully go away and not cause any issues.
Thanks,
Steve
Currently, ProGet uses the nexus-maven-repository-index.gz file to see what artifacts are in a Maven repository, and then downloads the files based on that. So you'll need to enable that index file by doing this, I think:
https://jfrog.com/help/r/jfrog-artifactory-documentation/maven-indexer
This only applies to Maven indexes, and that file is only used for importing/downloading a feed into ProGet. For other feed types, a different API is used.
Good news -- in an upcoming maintenance release of ProGet, we will be shifting to use the Artifactory API directly to import artifacts from a repository.
Thanks,
Steve
Hi Paddy,
Connector Filters allow/block packages by name (not version filters). The package name is FluentAssertions, not FluentAssertions:>=8.0.0. So that's why it's not behaving as you expect.
Please see Dealing with Fluent Assertions License Changes in ProGet to learn more how to address this particular issue.
Thanks,
Steve
Hi @itadmin_9894 ,
Can you give us a few more details of what you're trying to do? A connector filter is intended to allow or block a package by name; it does not filter out versions.
Thanks,
Steve
Hi @mmaharjan_0067 ,
I'm afraid we're at a loss here; as you noticed, it works fine on the command line - but there's just something "off" about your runner.
The only thing we can figure is some kind of network block or interference. We don't really have a way to test/debug this any further.
Thanks,
Steve
Thanks for the additional feedback @dan-brown_0128!
We do have a thread about OCI Registries in ProGet, as they have been requested from time to time. It might be worth posting this there, too?
But our current take is that "OCI Registries are a poorly-designed solution in search of a misunderstood problem", and that they are technologically inferior to alternatives. Here's a quote from that post:
The main issue I have is that an OCI registry is tied to a hostname, not to a URL. This is not what users expect or want with ProGet -- we have feeds. Users want to proxy public content, promote content across feeds, etc. None of this is possible in an OCI registry.
We got Docker working as a feed by "hijacking" the repository namespace to contain a feed name. Helm charts don't have namespaces, so this is a no go.
Personally I can't imagine how this is scalable. A lot of new Docker users (including me) are "shocked" that you have to include the "source" in the container name (e.g. proget.corp/my-docker-feed/my-group/my-app) -- I can't see how this could ever work when expanded to all deployment artifacts (e.g. my-artifacts.corp/my-group/my-app/service-assembly), especially considering how often everything is referenced in scripts and dependencies.
Anyways, best to continue on that other thread if you'd like to keep the discussion going; we're always open to learning more :)
Hi @james-woods_8996 ,
You are correct - in a mirror scenario, either side could publish (or delete) packages, and it would be propagated to the other. Three-way is also possible, but you do that with two two-way relationships, if that makes sense?
So it's basically A <--> B + B <--> C, and that means if you publish/delete a version on A, it will propagate like A --> B --> C.
Thanks,
Steve
Hi @dan-brown_0128,
I'm afraid that's not possible -- Asset Directories are essentially a web-based file system, whereas OCI Registries are basically Docker Registries. Based on the context of that screenshot, I'd be surprised if the OCI registry could be used for something other than a container image registry.
OCI/Docker Registries can technically store any kind of large binary object, and do so using a 256-bit digest hash (think a double-guid). These "blobs" have no names or extensions - some are "tagged" to make them human readable/accessible. These tags are not only mutable, but can be anything you'd like (v2024.1 or purple).
Either way, OCI Registries are not a suitable storage for artifacts by any means. Packages are ideal (self-contained metadata) - but at least files have names/extensions that will give you a clue as to the provenance.
That said, might I suggest a different approach for deployment to explore down the line, as you're looking to improve/optimize things:
https://inedo.com/buildmaster/vs-octopus-deploy
Thanks,
Steve
Hi @pmsensi ,
Short of using the Native API, I'm afraid we don't have a first-class API to export/import package policies and their related information.
We may consider adding that after ProGet 2025 is released.
Thanks,
Steve
Hi @pmsensi ,
The pgutil builds audit command should show the same license information you see in the UI:
https://docs.inedo.com/docs/proget/api/sca/builds/analyze
The pgutil packages audit command should also provide similar information on a package level.
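For example (names are hypothetical, and I'm going from memory on the options, so double-check with --help):
pgutil builds audit --project=MyProject --build=1.0.0
pgutil packages audit --feed=approved-nuget --package=Newtonsoft.Json --version=13.0.1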
Is that what you're looking for?
Thanks,
Steve
Hi @v-makkenze_6348 ,
This error was related to an error on our end - basically there was some bad data in the package that the InedoHub downloaded, and it caused the installer script to crash. We fixed the package on our end, so reinstalling normally should work now.
Cheers,
Steve
Hi @jimbobmcgee ,
I'm afraid we've decided to not implement OT-516 at this time; for this particular issue, we weren't comfortable with a "blind change" (i.e. coding without testing) because we kind of forgot how it all works (it's been years since it was implemented) and we don't have a readily-available testing environment for the listener/incoming agents. So it's a higher level of effort than we can put in for a community/free user.
Anyway this will shift to our roadmap for Otter 2025, where we will dedicate time to make other improvements as well.
Cheers,
Steve
Hi @steviecoaster ,
That's great... sure, go ahead and publish - no objections on our end at all :)
Actually, there is at least one PowerShell Module out there already that we occasionally point people to: https://www.powershellgallery.com/packages/ProGetAutomation
That one's been around for a long while... certainly since before we had a lot of the API endpoints that you're using. I also don't think it does user provisioning, etc. So, it'd be nice to have alternatives to share.
We're also happy to take a look, but we most definitely aren't PowerShell Module experts. So we can't give feedback on the code/architecture/structure. The Native API / StoredProcs are usually pretty stable, but with the ongoing Postgres development, we are doing some minor refactoring of the SQL Stored Procs to ensure parameter consistency.
Cheers,
Steve
Hi @caterina ,
It looks like there was an error processing the template (i.e. what's on the customize webhook tab). Can you share that, and maybe we can spot it?
It's likely a syntax error of some kind.
Thanks,
Steve
Hi @pbspec2_5732 ,
The script in the linked gist should fix the problem for you; it's not feasible/possible to try editing in the database directly due to the complexity of the model.
https://gist.github.com/apxltd/351d328023c1c32852c30c335952fabb
Thanks,
Steve
Hi @r-vanmeurs_4680 ,
This is a false positive and you can disregard it; ProGet 5.2 is not impacted by the vulnerability in jQuery for many reasons, including the fact that the vulnerable code is not used and, even if it were, ProGet is protected on the server-side from such "attacks".
Thanks,
Steve
What are you specifying for Distribution in the connector? When I tried these settings, it worked:
You need to specify one of the available distributions, which are listed here:
http://archive.ubuntu.com/ubuntu/dists/
Thanks,
Steve
Hi @caterina,
We haven't thought of adding a --stage option to the pgutil builds scan command, but of course it's possible. I know that we are exploring other options like associating builds/projects with pipelines (i.e. a sequence of stages).
So perhaps that would mean pgutil builds scan --workflow=MyWorkflow, and then MyWorkflow would start/end in different stages.
For now, we'd love to see how you utilize the stages. Maybe it means calling two commands temporarily, and we can revisit once we add new features, etc.?
For the pgutil builds create command, the project must already exist, or you'll get that error. You can create or update a project using pgutil builds projects create.
So basically, these commands would probably do what you want?
pgutil builds projects create --project=BuildStageTest
pgutil builds create --build=1.0.0 --project=BuildStageTest --stage=DesiredStage
Note that when using pgutil builds create, you can also specify a stage name, like in the example above.
Hi @arunkrishnasaamy-balasundaram_4161 ,
Thanks for clarifying!
[1] The MavenIndex file is not at all required to download artifacts from a remote Maven repository, nor to see the latest version of artifacts. In ProGet, all it allows you to do is browse remote artifact files in the ProGet UI, which typically isn't very helpful.
[2] It's not possible to change this.
[3] ProGet does not have "group repositories", but uses feeds with connectors. The model is different, and feeds with connectors will often cache packages in a lot of organizations.
[4] It's likely you will be unsuccessful in your ProGet configuration with a setup like this, or at least give your users a big headache and lots of pain/confusion. This is considered an "old mindset" for configuring artifact repositories, based on "files and folders on a share drive" rather than packages.
This "closed approach" greatly slows down development, causes duplicate code, and creates lots of other problems. Modern development approaches do not use this level of highly-granular permission. Instead, they take an innersource model. You do not need to make everything available to everyone.
However, less than 1% of your 2k projects will contain sensitive IP or data that other teams can't access - those projects should be segregated into sensitive feeds. The logic is, "if everything is sensitive, then nothing is sensitive".
[5] ProGet does not generate a "support zip file"; if we require additional information when supporting users, we ask for the specific information.
Hi @uvonceumern_6611 ,
Thanks for providing all of the additional information; based on what you shared, it looks like the file is actually being uploaded incorrectly... using a "multipart/form upload encoding" instead of a basic PUT or POST of the body contents.
Please see the Upload Asset File documentation for more information.
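In other words, the request should send the raw file bytes as the body rather than a multipart form; something like this (host, asset directory name, and API key are placeholders):
curl -X PUT --header "X-ApiKey: «api-key»" --data-binary @report.pdf "https://proget.example.com/endpoints/MyAssets/content/reports/report.pdf"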
Thanks,
Steve
Hi @arunkrishnasaamy-balasundaram_4161,
I'll do my best to answer these!
You can configure the FullMavenConnectorIndex job to run on a routine basis under Admin > Scheduled Tasks; it's not enabled by default because the Maven Central index is very large.
I'm not sure what "Context path" means?
I'm not sure what you mean by "Group" repo?
The way to handle this in ProGet is by using Policies & Rules to define noncompliant artifacts; certain versions of log4j would be considered noncompliant because it has that severe vulnerability
We do not recommend 2000 feeds in any scenario; I wonder if there's a disconnect between what a "Feed" is and what you're looking for. A feed is a place where you store all of the artifacts for a division/group of projects. The volume isn't a problem, but even a massive organization should have dozens of feeds at most.
This should show on the "History" page
There is the "Packags" page at the top that can do some cross-feed searching
Yes; see https://docs.inedo.com/docs/proget/administration/retention-rules
I'm not sure what a support zip file is.
In general you would follow the migration guides we've published; however, your existing artifact server may not be configured to allow importing. If you're running into issues, best to open a new thread on the forums and we can review/investigate
Hope this helps point you in the right direction,
Steve
Hi @husterk_2844 ,
I'm afraid we're at a loss here; no one else has reported any kind of errors like this, and I can't imagine what would even cause such a problem.
I suspect there is something off about your Docker Compose file? That seems to be the only thing different than the basic setup.
I would just try to re-follow the basic instructions we posted:
https://docs.inedo.com/docs/installation/linux/docker-guide
That's what we use to test, and lots of users install and upgrade without a problem.
Thanks,
Steve
Hi @forbzie22_0253,
There's no Windows event logged, but once the /health page is reachable, the application is ready. If you're using SQL Server and IIS on the same box, then both of those must first load before ProGet can start.
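So if you need a readiness gate in a startup script, a simple poll of that endpoint works; a minimal sketch (the URL is hypothetical):
# keep checking /health until ProGet responds with a success status
until curl --silent --fail https://proget.example.com/health > /dev/null; do sleep 5; done
echo "ProGet is ready"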
Thanks,
Steve
@v-makkenze_6348 whoops, good catch - yes thank you :)
Thanks so much Valentijn!
Looks like this was a display bug: the code on Licenses Overview was looking at UsedByPackage_Count instead of UsedByBuilds_Count. Easy fix, which will ship via PG-2774 in the next maintenance release:
As an FYI, the package with GPL-2.0 is node-forge@1.3.1 in the VicreaNpmJs feed. Looking closer, that package is dual-licensed as BSD-3, so it's not really a problem.
That said, the Licenses Overview page predates Policies, and I don't think the "License Usage Issues" makes a lot of sense anymore. The old model (block/allow) was much simpler with a basic Allow/Block rule. However, Policies are quite a bit more complicated.
We're very open to ideas on what to do in its place, or if you have any suggestions on what could be improved in general in the SCA UI. It's very easy for us to "see" what you're talking about, since we have the backup now :)
Thanks,
Steve
Hi @russell_8876,
ProGet itself does not have an upload limitation, so it's likely something else like your reverse proxy server, etc. Typically such large requests are blocked/prevented by middleware.
That said, HTTP is not a reliable protocol and should never be used for such large requests. You will run into a lot of problems trying to upload 24GB files in a single request. You'll need a new approach.
The easiest solution is to use drop paths, and then a file transfer protocol that is designed for large/reliable file transfers (most are).
Another solution is to use "upload chunking", which only asset directories support; pgutil should handle the chunking/uploading for you. If you want to use packages, you can then import that uploaded asset into a universal package feed:
https://docs.inedo.com/docs/proget/upack/proget-api-universalfeed/proget-api-universalfeed-import
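A rough sketch of the pgutil route (the asset directory name and option names here are from memory, so run pgutil assets upload --help to confirm the exact flags):
pgutil assets upload --feed=MyAssets --file=huge-backup.iso --target-folder=uploads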
-- Dean
Hi @v-makkenze_6348,
In theory, you should be able to find the noncompliant build on the Projects > Builds page, then narrow it down from there. But if you have a lot that are noncompliant, this may be difficult.
You've sent us your database in the past; would you mind uploading a BAK again? We can take a look and improve the UX so this will be discoverable. You can use an old link that we sent you a while ago, or fill out a support ticket and we'll get you a new link.
Let us know if you upload it - and we'll take a look!
Thanks,
Steve
Hi @jw,
I'm afraid I'm at a loss then...
It seems to be related to one of a few settings/issues. Based on the stack trace, I can't pinpoint which one it is, but I'll share the code - and maybe you'll see something.
I would first try to narrow it down by trying as Admin vs Non-Admin, and then see which of those issues it could be.
private static List<Notification> GetNotificationsInternal(AhHttpContext context)
{
    var notifications = new List<Notification>();

    // admin-configured banner message (shown until it expires)
    if (!string.IsNullOrWhiteSpace(ProGetConfig.Web.AdminBannerMessage) && (ProGetConfig.Web.AdminBannerExpiry == null || DateTime.UtcNow < ProGetConfig.Web.AdminBannerExpiry))
    {
        notifications.Add(new Notification(
            NotificationType.warning,
            ProGetConfig.Web.AdminBannerMessage
        ));
    }

    // the following notifications are only computed for users with the Admin_ConfigureProGet task
    if (WebUserContext.IsAuthorizedForTask(ProGetSecuredTask.Admin_ConfigureProGet))
    {
        // product update available
        if (ShowUpdateNotification(context))
        {
            notifications.Add(new Notification(
                NotificationType.update,
                InfoBlock.Success(
                    new A(Localization.Global.UpdatesAvailable) { Href = UpdatesOverviewPage.BuildUrl(returnUrl: AhHttpContext.Current.Request.Url.PathAndQuery) }
                )
            ));
        }

        // license violation detected
        if (ShowLicenseViolationNotification(context, out var violationUrl))
        {
            notifications.Add(new Notification(NotificationType.error,
                new A(Localization.Global.LicenseViolation)
                    { Href = violationUrl }
            ));
        }

        // license key expired, trial expiring, or key expiring within 45 days
        var expiresDate = Licensing.LicensingInformation.Current.LicenseKey?.ExpiresDate;
        if (expiresDate != null && expiresDate <= DateTime.Now)
        {
            notifications.Add(new Notification(NotificationType.error,
                new A(Localization.Global.KeyExpired)
                    { Href = LicensingOverviewPage.BuildUrl() }
            ));
        }
        else if (Licensing.LicensingInformation.Current?.LicenseKey?.LicenseType == ProGetLicenseType.Trial)
        {
            var days = (int)expiresDate.Value.Subtract(DateTime.Now).TotalDays;
            notifications.Add(new Notification(NotificationType.warning,
                new A(Localization.Global.TrialWillExpire(ProGetConfig.Licensing.EnterpriseTrial ? "Enterprise" : "Basic", days))
                    { Href = LicensingOverviewPage.BuildUrl() }
            ));
        }
        else if (expiresDate != null && expiresDate.Value.Subtract(DateTime.Now).TotalDays <= 45)
        {
            var days = (int)expiresDate.Value.Subtract(DateTime.Now).TotalDays;
            notifications.Add(new Notification(NotificationType.warning,
                new A(Localization.Global.KeyWillExpire(days))
                    { Href = LicensingOverviewPage.BuildUrl() }
            ));
        }

        // no extensions are present, or none of them loaded successfully
        var extensions = ExtensionsManager.GetExtensions();
        if (!extensions.Any() || extensions.All(e => !e.LoadResult.Loaded))
        {
            notifications.Add(new Notification(
                NotificationType.error,
                InfoBlock.Error(
                    new A(Localization.Global.ExtensionLoadError) { Href = Pages.Administration.Extensions.ExtensionsOverviewPage.BuildUrl() }
                )
            ));
        }

        // container restart required (Docker installs)
        if (WUtil.ShowDockerRestartMessage)
        {
            notifications.Add(
                new Notification(
                    NotificationType.warning,
                    InfoBlock.Warning(Localization.Global.ContainerRestartNeeded)
                )
            );
        }
    }

    // feeds still being accessed via deprecated NuGet ODATA (v2) queries
    var v2Notifications = ShowV2DeprecatedQueriesUsedWarning(context);
    if (v2Notifications.Any())
    {
        notifications.AddRange(v2Notifications.Select(f => new Notification(
            NotificationType.warning,
            InfoBlock.Warning(new A(ManageFeedPropertiesPage.BuildUrl(f.Feed_Id), $"{f.Feed_Name} is using deprecated ODATA (V2) Queries."))
        )));
    }

    return notifications;
}
Hi @jw ,
Can you try restarting the web application (Admin > Manage Service)? Hopefully it will be resolved after that.
Thanks,
Steve
Hi @husterk_2844 ,
Those are some strange errors and I haven't seen them before. It seems that something is wrong with SQL Server.
What's interesting/notable here is that SQL Server is saying Incorrect syntax near 'GO' on the 20 CREATE TYPES.sql script. That script hasn't changed in 10+ years... and it's also unlikely that the latest SQL Server 2022 is failing to parse/execute it, but I can't think of anything else.
The error occurring in 10 SET DATABASE PROPERTIES.sql is also peculiar; that script has a TRY/CATCH, so it's not considered a failure -- but the expected error is definitely not "ALTER DATABASE statement not allowed within multi-statement transaction."
From here, I would "play around" with different versions of SQL Server and ProGet. We've never seen anything like this, so don't really know where to start.
As for ProGet failing to start... a database failure would definitely yield that behavior, so that's not really an issue. The question really is why these database errors are occurring.
Thanks,
Steve
Hi @506576828_9736 ,
We don't have any templates or wizards for Vue projects, but you can follow the guidance on Creating a Build Script from Scratch; the gist is to restore dependencies, run the production build, and capture the output as a build artifact.
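At a high level, that looks something like this (a generic sketch of the commands, not a ready-made template):
npm ci           # restore dependencies from the lockfile
npm run build    # produce the production bundle (typically in dist/)
# then capture the dist/ folder as a build artifact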
Once you capture the artifact, you can use or customize the Deploy Artifacts to File Share Script Template
Please share your experience, as it'll help future users searching for this as well :)
Thanks,
Steve
Thanks @johnsen_7555! If I can offer some advice....
The workflow you're creating is a bit complicated, and adding in the automation component is "yet another product/process" to own/maintain. On our end, we get support inquiries from confused new administrators who notice "undocumented" behavior (i.e. not on docs.inedo.com) in ProGet.
If you're not "worried" about malicious packages, then the main risks you are mitigating are:
Both licenses and vulnerabilities are only a problem if they go to production, and keep in mind that vulnerabilities need to be monitored after a package is being used, since they are often discovered long after the package is used in your production software.
How about something like this:
- Use policies to flag risky packages (problem licenses, vulnerabilities) as noncompliant
- Block noncompliant packages from being downloaded
- Let lower-severity issues surface as warn
- Use pgutil in your CI/CD pipeline to prevent unaddressed warn packages from going to production
- Review/address warn packages as you have time
Note that, in a future version of ProGet, we intend to add more intelligence to package analysis for OSS packages. For example, we would like to say "this nunnpy package has 1 version, is recently published, has no GitHub repo, etc., and therefore is noncompliant".
Hi @johnsen_7555,
I'm not really sure I totally understand the automated workflow you want to create; you mentioned earlier having an approval process?
Are there any gaps with the workflow I mentioned? Basically two feeds (approved, unapproved), which you then use package promotion as the approval action.
We don't recommend using webhooks to automate ProGet itself. This can create some loops that will cause headaches.
Thanks,
Steve
Hi @johnsen_7555 ,
Ah ha, thanks for clarifying that!
This is the expected behavior, and the reason is a bit complex.
Unlike most package repositories, the PyPI Repository API (which a ProGet feed implements) does not provide any licensing information about packages. It's just a very basic listing of names and versions, which means that there is no license information (or description, author, etc). All of that is embedded in the package files.
However, pypi.org has a special API that ProGet queries to provide more information about a package hosted on pypi.org. This way, description and license information can be displayed on remote packages. But this API is only for pypi.org, and the pip client doesn't use it.
When you connect to another feed in ProGet, the regular API is used. And since the PyPi Repository API doesn't provide package metadata, this information isn't available. It's on our long-term roadmap to use a special API / method for ProGet->ProGet connections, but that's a ways off and requires a lot of internal refactoring.
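(For reference, pypi.org's extra metadata is available through its JSON API; a request like the one below returns the description, license, and so on for a package version. I can't say this is exactly the endpoint ProGet queries internally, but it illustrates the kind of pypi.org-only API involved.)
GET https://pypi.org/pypi/numpy/2.0.0/json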
That said, the workflow we support to accomplish what you want is as follows:
https://blog.inedo.com/python/pypi-approval-workflow/
Thanks,
Steve
Thanks for clarifying @johnsen_7555.
I'm struggling a bit to see what kind of configuration might cause this issue or reproduce the issue. Is your python-accessible feed connected directly to PyPI.org?
If you go to re-analyze the package, you should get a really long set of debug logs (no need to send them). But after you do that, can you try the download again?
Hi @johnsen_7555 ,
Sounds like you're building a sort of Python Package Approval Workflow, which is great to see.
If the user doesn't have permission to download the file, I would expect a 401 (if anonymous) or a 403 (if authenticated).
A 400 error is a bad request. It could be coming from ProGet, as ProGet will occasionally throw that message when there is unexpected input. But it could also be coming from an intermediate server that's processing the request before forwarding to ProGet.
In this case, I believe pip is simply performing a GET on the URL in the error message:
.../download/numpy/2.0.0/numpy-2.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl#sha256=6d7696c615765091cc5093f76fd1fa069870304beaccfd58b5dcc69e55ef49c1
I'm not 100% sure that's what pip is doing, but why not try a curl -v against that URL and see if you also get a 400?
If so, then you should get an error in the message body from curl; ProGet will write this out to the response stream.
If not, then you'll need to capture the traffic and see what the difference is. Maybe it's a header that's different? I'm not sure what would cause ProGet to yield a 400.
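Something like this, using the full URL from the error message (the elided part of the path is your server/feed), plus credentials if the feed requires them:
curl -v --user «user»:«password» "https://.../download/numpy/2.0.0/numpy-2.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl"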
Let us know what you find,
Steve
Hi @scott-wright_8356 ,
Once connector caching is enabled, the error pattern is not used, so we only have this warning. I added a small change via PG-2726 which will add the connector name. This will appear in the next maintenance release (2024.9), scheduled for this week.
Removing connector caching should reveal the connector name, so maybe that helps you identify it until then.
Thanks,
Steve
@jw thanks for clarifying!
The pipelines in ProGet are not really meant to track status. The main reason for the build stages is to control automatic archival, issue creation, and notifications. For example, your stable releases might stay in a "Released" stage indefinitely.
We also had (in the preview feature) three build statuses: Active, Archived, Released. We don't use Released currently, but it's definitely something we may bring back. You don't want to delete a Released build, but you would probably want to delete an Archived build.
Anyway, I'd check out the pgutil builds promote command - that way you can keep your "archival rules" in ProGet.
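For example (project/build values hypothetical; confirm the options with pgutil builds promote --help):
pgutil builds promote --project=MyApp --build=1.2.3 --stage=Archived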
Hi @jw,
We'd love to learn more - why not?
We envisioned that there would be lots and lots of builds in the Build stage (i.e. created by a CI server), and the ones released might go to a stage like Production.
Thanks