Thanks @steviecoaster! All that sounds great, especially since it won't require changing anything on our end :)
Much appreciated!!
@steviecoaster great thanks! That sounds good to me, I think having a new install option would be great.
Aside from supporting the package (which I'm not really worried about since you built it!!), the main concern I would have is keeping up with versions. We have frequent ProGet releases and pgutil
is basically on demand.
I vaguely remember there was some kind of auto-packaging thing? Or maybe I'm dreaming that?
We could add something to our deployment plan that "does something" as well.
Hi @steviecoaster ,
That's awesome! I just added steviecoaster
as a maintainer of the package, and it says pending approval.
Quasi-related, but I delisted all versions of romp
and upack
since we no longer publish/support those tools, but they are still showing up: https://community.chocolatey.org/profiles/inedo
Not sure if it takes time to reindex or whatnot?
Cheers,
Alex
Hi @mikes_4196 ,
Per the discussion above... can you tell me a bit about "we" (i.e. the size/profile of your organization) and describe how you are using winget in your organization? And why did you use WinGet instead of Chocolatey?
Per the discussion above, it does not look like a tool/ecosystem that is usable at scale. I get that it's built in to Windows, but I haven't seen a case study demonstrating a real deployment -- even within teams at Microsoft.
The "central repository" is just a giant mess of files in a GitHub repository and can't reasonably be "proxied" via a connector like other feeds. They apparently have "private repositories", but there's no documentation/guidance on how to create/use private packages for end users, so I don't think many are doing this.
But open to learning more.
Cheers,
Alex
From an API standpoint, I believe only npm packages support server-side tags. Rubygems might, but in any case we strongly advise against using them: https://blog.inedo.com/npm/smarter-npm-versioning-with-semver/
Server-side tagging is not likely something we will support in the future. Deprecation and Listing are really only there because many of the client APIs support them.
I'd need to see a strong, specific use case. If it's related to quality (e.g. dev, staging, prod), then it's a "hard no" because our solution to quality is prerelease version numbering/tagging with repackaging.
I do know that Sonatype Nexus has always supported tagging, but their repository is more of a "fileserver with server-side metadata" and ProGet is package-based. The only documented use case they have for tags has been quality, and our solution/approach is far superior.
Cheers,
Alex
Hi @kichikawa_2913 @jeff-miles_5073 @martin-helgesen_8100 ,
Good news! We've got a Terraform Feed working on version 2024.20-rc.4
and it seems ready for release:
I'd love to get a second set of eyes on our approach and the docs; this was a really interesting protocol/API to work with because there are no "Terraform Packages" - basically everything is just a pointer to a GitHub repository.
So, ProGet just packages it in a universal package and the Terraform CLI seems to be happy. What's a little unfortunate is that the hostname/feed need to be in the package, but that's also how Docker works. So I guess it can't be helped.
Thanks,
Alex
Hi @zs-dahe ,
The Package Promotion API only checks for permission against the toFeed - you don't need any other permission on the toFeed or fromFeed.
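For reference, a promotion call with such a key would look roughly like the sketch below. The endpoint and parameter names here are from memory and the host/feed names are made up, so please double-check the Package Promotion API docs before relying on them:
// rough sketch -- endpoint/parameter names are assumptions; verify against the Promotion API docs
using System;
using System.Net.Http;

var http = new HttpClient();
http.DefaultRequestHeaders.Add("X-ApiKey", "builds-user-personal-key");

// the key only needs promote permission on the target (to) feed
var response = await http.PostAsync(
    "https://proget.example.com/api/promotions/promote" +
    "?packageName=MyApp.Core&version=1.2.3&fromFeed=ci-packages&toFeed=released-packages",
    null);

Console.WriteLine(response.StatusCode);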
As far as more granular permissions go, perhaps setting up a Personal Key for something like a builds user would do the trick? That'll let you reuse the API Key and set up very granular permissions.
We decided not to duplicate that granular permission setting in the API Keys because it's already confusing enough.
Thanks,
Alex
Thanks for clarifying @matt-lowe!
some IT managers are very much stuck on the 'Microsoft' rut and are unable to see past it, no matter the cost and time it adds to most IT management
...
TBH the only way I managed to get ProGet installed was for managing PowerShell scripts for a small project and have managed to keep it ticking over with other options.
I've definitely heard that... personally I think this is where vendor/sales can really come in and make a difference.
Sometimes it's a lot harder to sell things internally... it's not so much about "giving a sales pitch" but more that decision makers can have a different relationship with sales execs.
It's one thing to blow off a vendor demo or delay a project... but team morale can get shot if you do that internally, so it's safer to stick to status quo. And of course, sales execs can challenge the status quo in a much more direct way... trying that as an employee might be uncomfortable or feel insubordinate.
Anyway, this isn't related to WinGet, but we might be able to help with the bigger picture -- it might be worth spending 15-20 minutes just talking through it and seeing if there are any paths forward from there. We're seeing a lot of movement on the WinOps/ProGet/Chocolatey front in big orgs, so clearly they're getting something over DIY solutions or WinGet.
Feel free to shoot us a note via the contact form, and we'll take it from there!
Hi @matt-lowe
You're the first person to ask about it in three years. I kinda forget it's a thing :)
Since then, feeds have become a lot easier to build... but I also don't want to implement the next Bower feed
I don't remember much interest in WinGet from our WinOps userbase... and I only occasionally see mention of it on Twitter. I think Chocolatey has a lot more traction and I get the impression that it's more capable, stable, and mature. And it's very well supported.
What's your impression? Have you been using it? What's better about WinGet over Chocolatey?
I think this somewhat recent Reddit comment captures the general sentiments about Microsoft projects like WinGet:
AFAIK, winget was/is a Microsoft led attempt to offer something instead of everyone using chocolately. Sadly though, it's not a replacement.
That is, Microsoft has an "answer", but not necessarily commitment behind winget. So, unknown how well packages sitting behind winget will be maintained. And historically, it has been under scrutiny. Is it good now? Not sure anyone can ever say for sure.
Now, there will be those that say "fixed", today's winget isn't like the absolute mess it was early on. And that might be true. But how would we (the customer) know? Again, biggest problem with Microsoft is the "not knowing". Or the "deception" of making a project look "better" and then realizing it was all just a ruse.
Up to you. Today's weather report, winget is "good". But can't really say anything about tomorrow. Oh, and thank you Microsoft for making yet another ambiguous thing (not handling its delivery, deployment or long term outlook well).
The fact that their package repository is just a stupidly large GitHub repo maintained with pull requests is concerning. The discussion with @sylvain-martel_3976 above - I think they were just considering WinGet and hadn't actually used it yet. Not sure what ever happened with that.
Anyway let me know your thoughts!
Alex
@davidroberts63 check out 2024.14... added via PG-2783 :)
Also, on the builds page, I'd recommend having a sort and/or filter ability for the Stage.
Oh yes, I could see that helping... actually, I'm not sure if it's even listed on the Build page. How many active builds do you have, BTW?
@carl-westman_8110 very happy to hear it, thanks for the kind words :)
@carl-westman_8110 it's definitely a nifty and convenient function, but a poor feature from a product standpoint.
Users don't expect functionality that bundles content like this, so they won't use it unless we push it -- and to do that, we need to articulate the benefits, show how it's a better solution than what they're doing now, and convince them to take the time to switch.
That's obviously a lot of work, on top of all the code to get it working.
I know a handful of users have found some value in virtual packages (and we use it internally), but being reminded about them now -- with about a decade more of experience behind me -- it feels like the wrong thing to focus on at the time, with limited resources.
That said, it doesn't even come close to my top 10 worst product choices... so it's probably better we built that than something else
Alex
Hi @carl-westman_8110 ,
It's a Virtual Package; they're kind of a niche solution that, in retrospect, we probably shouldn't have created.
Alex
Thanks for clarifying @sebastian
So I'm not exactly thrilled by this UI, but maybe this is fine.
What do you think?
This is a kind of "quick and dirty" page that would show up if you clicked on that GPL-2.0
license and the "# projects" number.
Here's one for the packages as well:
Hi @stuart-houston_1512 ,
If you're using almost entirely libraries that are available on PyPI.org and Anaconda.org, that makes the most sense. Users probably won't notice much of a difference.
Once libraries start being pulled through ProGet, you can start setting up policies/compliance rules to restrict packages. Or at least get warned about them.
At some point you can set up a package approval workflow, but I usually don't recommend that from "day one" - it's a bit too restrictive for end users, who are used to any package, any time.
If it's easy for you to identify first-party packages (maybe they are prefixed with MyCorp
or something), then you can bring those in with a bulk import. If no one uses (downloads) them after a while, you can delete them with retention policies.
Cheers,
Alex
Hi @stuart-houston_1512 ,
I would go with the second option, i.e. migrating packages from your internal repository to ProGet. It should be relatively easy to do this with a bulk file-system import.
The concern I would have with trying to set ProGet up as a "proxy" to an old, internal Conda repository is that ProGet doesn't really operate as a "proxy" (i.e. blindly forwarding requests), but instead aggregates results from multiple sources using an API.
The Conda API isn't very well documented, doesn't provide much metadata about packages, and an old internal server will most certainly have bugs/quirks that ProGet would never be aware of. So your end users will end up with a buggy experience.
Connecting to the official Anaconda repositories is fine, and if there are any issues/bugs (like they change the API or something), we can easily reproduce and fix it.
Alex
@sebastian I like that idea!
That information is readily linked in the database, so it's just a matter of figuring out how to get ProGet to display it.
Did you guys see the "Noncompliant packages" report (i.e. /sca/compliance-report
)?
This is by far the easiest pattern: a non-sortable list of the top 100/500/1000 items.
That means your "show packages with Apache-2.0
licenses" wouldn't show everything, but I can't imagine you'd want to do that anyway. I'm thinking, you'd want to see the 7 packages with Artistic-2.0
instead.
I'd also like to ditch the "License Usage Issues" infobox, or at least replace it with something useful. It made sense with the ProGet 2023 license rules, but with policies we cannot easily query why a package/build is noncompliant.
Good news everyone, ProGet 2024.11 now has support for pub (Dart/Flutter) feeds! I'm going to lock this topic now, but if you run into any issues please start a new topic
cc/ @aaron-bryson_0311 @fabian-droege_8705 @proget-markus-koban_7308 @jensen-chappell_6168 @harald-somnes-hanssen_2204 @bvandehey_9055
Hi everyone,
Quick sneak peek of the work-in-progress:
Cheers,
Alex
@sebastian thanks for sharing!
The issue you're describing sounds more like the "ASP.NET Firehose" problem, and less like this obnoxious driver problem. They're both really obnoxious problems
The firehose problem hit basically anyone who went from .NET Framework to .NET Core, and had a high-traffic application. Basically, Microsoft gutted all of the "request throttling" that was in ASP.NET, which took into consideration a lot of factors, from system resources to processor type to cores, etc., and then queued requests appropriately --- now it's just left for the "user" to handle.
It was on the .NET 8 roadmap to get fixed, but I guess they gave us a broken SQL driver instead.
Anyway... great to hear the request throttle did the trick. I'm thinking we should just ship with a default of 100 and leave it at that. Maybe "wrap" some of the performance errors and advise the user to adjust the limit down or something.
I wasn't quite ready to write a blog post on this, but I figured I'd journal my thoughts on the forums to start - in the event that anyone is experiencing Linux performance issues with SQL Server.
I've been "playing around" with low-spec Linux VMs (1-2 vCPU, 2-4 GB) for the purposes of creating "cloud trials" of ProGet. Eventually, I'd like to offer new users the ability to fill out a form on my.inedo.com that would automatically provision a server for them to evaluate.... but since we're paying for the resources, I want to keep our costs low.
We would never recommend such a low spec for production usage, but it's fine for testing. That is, until a powerful developer machine hits it with an onslaught of requests during a NuGet or npm restore. This means hundreds of requests per second that need to be proxied to public repositories if the packages aren't cached.
To simulate this, I wrote a basic C# console program that just downloaded 1000's of packages in parallel from NuGet/npm feeds on ProGet with no caching enabled. This means that each request will trigger database queries and outbound requests to nuget.org. It's a DoS program, basically.
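For what it's worth, here's roughly the shape of that console program (not the actual tool -- the host, feed, and package names below are placeholder assumptions):
// minimal sketch of the load test; URL format and package names are made-up placeholders
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

var http = new HttpClient();

// 1000 package download URLs pointed at an uncached ProGet feed (placeholder host/feed)
var urls = Enumerable.Range(0, 1000)
    .Select(i => $"http://proget.localhost/nuget/test-feed/package/Example.Package{i}/1.0.0");

// fire everything at once -- no throttling, roughly what a large restore does
var tasks = urls.Select(async url =>
{
    try { await http.GetByteArrayAsync(url); }
    catch (Exception ex) { Console.WriteLine($"{url}: {ex.Message}"); }
}).ToList();

await Task.WhenAll(tasks);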
My goal was to find out what combination of settings would allow ProGet to not totally crap out during evaluation usage, and maybe discover a way to warn users that ProGet is under heavy load.
WAS: Unexpected Problem: Linux SQL Server Warmup Required
Turns out the errors I encountered were entirely related to the "new" SQL Server driver that ProGet 2024 was using. Although this was in beta for well over a year, it was "only" in production for a few months by the time we incorporated it into ProGet. This driver issue impacts anyone who uses .NET on Linux.
As I mention below, I think it's pretty clear that Microsoft has effectively abandoned SQL Server. It's one thing to release something this low-quality and untested... and another to have it linger with issues like this in production for months. We will be moving to Postgres, which do not have these endemic problems.
In the meantime, I have instituted the "one year rule" for Microsoft products - meaning, unless it's been shipped in their general release channel for a year, we will not even consider using it.
I noticed that if I ran my DoS program shortly after the container started, I would see a flood of messages like this:
Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 35 - An internal exception was caught)
That was not at all expected.
After researching this a bit, it looks like this is a well-known, multi-year bug in Microsoft's SQL Server driver for Linux. It's documented in their GitHub repository, and it plagues even Microsoft's own business-line products. There's no ETA and virtually no response from Microsoft on the matter, but it's endemic in their driver.
Long story short, a low-spec Linux instance of ProGet needs to "warm up". There is some kind of bug in the client that is triggered when you try to open a bunch of connections at the same time. But once the connections are open, it seems to be fine.
In the short term, I don't know how to address it. As long as the connections are opened "slowly enough" it's fine.... maybe? But I have no idea how to control that.
A single, underspec'd machine can only handle so much load, and I wanted to find out what those limits were, and what errors I would get in a "new user evaluation" scenario.
On Linux, with my test scenario, this manifests as basically the same "error 35". I believe it's socket exhaustion, but who knows? In most production cases, we see SQL/network timeouts - this is because the database is usually filled with a ton of data and takes a bit longer to respond. In my test scenario, it wasn't.
When I added a client-side throttle - or even spaced out issuing the requests by 10ms - there were virtually no errors. If only NuGet and npm clients behaved that way...
Under Admin > HTTP/S & Certificate Settings > Web Server Configuration, you can set a value called "Concurrent Request Limit". That made all the difference after warming up.
25 Requests worked like a charm. No matter what I threw at the machine, it could handle it. The web browsing experience was terrible, though, which makes me think we may want to create a separate throttle for web vs API in a case like this.
250 Requests caused moderate errors. Performance was better while browsing the web, but I'd get an occasional 500
error.
I wish I could give better advice, but "playing with this value" seems to be the best way to solve these problems in production. For our evaluation machines, I think 25ish is fine.
I wanted to compare ProGet's built-in limiter with Nginx's rate limiting when used as a reverse proxy. I played with it a bunch, and found that the same "25" setting basically eliminated errors on my micro machine.
Here's my Nginx config file:
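# 10 MB shared zone keyed on a constant, so the 25 requests/second limit applies across all clients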
limit_req_zone global zone=global_limit:10m rate=25r/s;
server {
listen 80;
server_name localhost;
location / {
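# queue up to 500 excess requests; "nodelay" serves queued requests immediately instead of pacing them out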
limit_req zone=global_limit burst=500 nodelay;
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Interestingly enough, it also made the web browsing experience seem much better. I'll be honest, I have no idea how, but it has something to do with that nodelay
keyword, and some algorithm they use.
My conclusion is that Nginx does a better job of appearing to not be as slow, but like I mentioned earlier, having a separate queue for Web vs API requests would probably make our tiny Linux box run a lot better.
Ultimately, I think "playing" with request limiting and being aware of the "warm up" is important. A newly spun-up container may exhibit these connection errors under a sudden burst of requests until it has warmed up.
This pains me to admit as a lifelong Microsoft devotee, but it's time for us to move to Postgres. This quote from Brent Ozar, one of the world's most prominent Microsoft SQL Server consultants, sums the scenario up nicely:
"[Microsoft SQL Server] shouldn't be used for most new applications you build today. If you have an existing app built on it, you're kinda stuck, but if you're building a new application, you should use Postgres instead."
My opinion.... Microsoft has all but given up on SQL Server; this issue in particular has not only been open for two years, but it impacts their own products -- and instead of fixing it, their own product teams are simply moving to other databases
Many years back, we had a Postgres version of ProGet. We eventually dropped it because it's too much of a pain to maintain two sets of database code, and we never built the tools to migrate from one to the other.
That's something we'll have to do when going back to Postgres -- build some sort of database migrator. We'll also need to support both versions for a bit, and eventually say goodbye to SQL Server. No timeline on this, but just something I've been thinking about lately.
Anyway - just wanted to journal my thoughts, and the forums seemed like a nice place to post them - maybe I'll turn this into a blog or newsletter later :)
Alex
Hi @sebastian,
Thanks for sharing your thoughts on this! A few things to point out...
[1] The "Missing Package Problem" is not as bad in ProGet 2024, mostly because it will only apply when there's a license rule. In ProGet 2023, a "missing package" would happen even for vulnerabilities.
[2] We're working on a new feature/module (tentatively called "Remote Metadata") that is intended to routinely cache/update metadata from public repos like nuget.org, npmjs.org, etc. This feature enables two use cases:
It works somewhat independently, and basically it'll just show up on the Admin tabs as something like "Remote Metadata" where you can configure providers, URLs, etc.
I hope to have a prototype in a couple weeks and will post some details in a new forum post. As an FYI, this is something we will limit in the Free/Basic editions and have full-featured in the ProGet Enterprise product.
[3] "Package In ProGet" could be a policy rule to add after the Remote Metadata feature, though it's probably not a big deal if ProGet can detect licenses thanks to Remote Metadata
Best,
Alex
@fabian-droege_8705 thanks for the insight! I know basically nothing about the platform, so good to hear from someone familiar w/ the ecosystem.
We'll take another look in the coming weeks, and I'll post an update once we have a better idea of roadmap. Just skimming the thread here, it seems that there is a package/archive format, an API, and documentation for private repositories - so that's a huge positive
@proget-markus-koban_7308 a lot of the "chatter" I've seen on social/news over the past couple of years seems to have trended negatively towards Dart/Flutter as a platform/technology - and I haven't seen growth. It seems pretty niche still?
And then just last week or so, I saw headlines about Google laying off the Dart/Flutter team.
What do you think of the future of the platform?
Maybe I'm getting old and 14px seemed too small
In addition, we don't display a lot of textual information, so the larger font size seemed to fill out the whitespace more nicely, particularly in tables (of which there are a lot).
I was never really all that happy with how this filled out whitespace (from ProGet 2023):
I'm not thrilled about ProGet 2024, but it felt like an improvement... and most importantly, "something different" than the past few years:
That said, for ProGet 2025 I'd love to do a much more notable style refresh (logos? etc?), perhaps even some navigation tweaks. So open to ideas there
@philippe-camelio_3885 thank you, I appreciate that! No plans to give up -- it's a passion of mine, and I feel someday we'll figure out a better product marketing fit :)
We'll investigate/fix this via OT-507 in an upcoming Otter release. I suspect it's related to the SDK changes that came out of BuildMaster 2023; we do not test Git with Otter, so it's not surprising it doesn't work.
I really like Otter, but with each major version too many functions change or broken, it's frustrating
Yeah, same here; unfortunately we're struggling with "product market fit", so the product is going through a lot of changes.
The first versions (v1/v2) were designed as an improvement on the "desired state" concepts from Puppet, Chef, DSC, etc. But the whole "Infrastructure as Code" market never really "took" on Windows. And obviously "no one" uses IaC on Linux anymore thanks to Docker.
In the next versions (v3), we repositioned Otter to be "PowerShell job/script runner" and "compliance as code". This is how most customers used the product, so it seemed like there was a market there. That broke a lot of DSC stuff that hardly anyone used (sadly!)
Very open to ideas on where to take Otter; I think security/compliance monitoring is maybe the right direction, but if that's the case, we need to figure out how to get a lot more pre-built code/scripts in Otter.
To make this work smoothly, a webhook for SCA events would really be immensely helpful. Is something like that already on the 2024 SCA roadmap?
We do have a webhook notifier for "non-compliant packages found in build" planned, so perhaps this would be on the list!
When a SBOM scan is uploaded, no issues are created initially even though the UI suggests that analysis was done already. One has to run analysis a second time with the issue checkbox set for issues to be populated.
I just published some preview documentation, but the concept/model is slightly changed here:
When builds in certain stages are analyzed, an "Issue" for each noncompliant or inconclusive package will be created. These are intended to allow a human to review and override noncompliant packages.
Basically, the idea is that nearly every build will be created through a CI process and ignored until it needs to be tested. And that happens later in the release pipeline, after the build is promoted to a testing stage.
Our new guidance will be to run pgutil builds create (basically the new name for pgscan inspect) at build time, exactly like it's done now. And then later, when you deploy to a testing environment or are otherwise ready for testing, run pgutil builds promote. At that point, the issues are created.
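To make that concrete, the pipeline would look something along these lines -- the flag names here are just illustrative assumptions, so check pgutil builds --help / the docs for the actual options:
# at build time, from CI (the replacement for pgscan inspect); flags are illustrative
pgutil builds create --project=MyApp --version=1.2.3

# later, when deploying to a testing environment -- this is when the issues get created
pgutil builds promote --project=MyApp --version=1.2.3 --stage=Testing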
We were thinking of having "Unresolved Issues" present on the project overview page, and it'd be really messy if it's mostly just CI builds.
Hope that helps explain the thought process.
@jw thanks for the additional insight!
Unfortunately we simply won't have the opportunity to explore this until well past ProGet 2024, and only after we've gotten sufficient feedback from other early adopters on other gaps. I think there are other important things we need to consider as well, and handling this is much more complicated than it may seem, especially at scale and with how ProGet is configured in the field.
There are also other built-in mechanisms like policy exceptions that could easily handle System.* and runtime.* packages, as I suspect the only thing you'd worry about for those is vulnerabilities.
As an alternative, I would see if you could just write a tool/script to:
That's not optimal, but it is one thousand times easier than getting something like this working in ProGet.
Hi @jw ,
Although the released version will be able to check for vulnerabilities without needing the package metadata, reading server properties (deprecation/unlisting), checking if it's the latest patch version, doing license detection, etc. require having the package metadata.
However, the package metadata should already be in ProGet by the time you upload the sbom. When doing package restores from ProGet, the packages will be cached automatically. If that's not happening for you, make sure to clear your nuget package caches.
Ultimately, the SCA feature is designed to be used in conjunction with ProGet as a proxy to the public repositories. It's not a "stand-alone" tool, so it won't work well if packages aren't in ProGet.
The reason is, if the package metadata isn't in ProGet, it has to be searched for on a remote server. In your sample (one build, two packages), you're right... it's just a few seconds to search for that data on nuget.org. But in production, users have 1000's of active builds, each with 1000's of packages... and that *currently* takes about an hour to run an analysis.
Adding 100k's of network requests to connectors to constantly query nuget.org/npmjs.org for server metadata would add hours to that time, trigger API rate limits, and cause lots of performance headaches. Plus, this "leaks" a lot of data about package usage, which is an added security concern. This is a major issue with tools like DependencyTrack - they're basically impossible to scale like ProGet.
Thanks,
Alex
Hi @jw ,
First, the reason you're getting "Package not in feed" (which would also happen in the ProGet 2023 feature as an Issue) is because that Sqlite package has not been cached or pulled to ProGet. However, if you just click Download (and thus cache) the package, then it would be in the feed, and this would go away.
When you browse a remote package in the UI, ProGet is querying nuget.org and displaying whatever their API shows. This query/data is not cached or retained otherwise - which is why it's missing when doing an analysis.
In ProGet 2024, "missing packages" won't be issues per se. Instead, an analysis will be "Inconclusive" -- and this means that there's not enough information to complete the analysis. If your policies don't check license rules (or there's an exception for license checking of Microsoft.*
packages), then we wouldn't need the local package to analyze it - and this would be considered compliant.
However, this functionality doesn't work yet. That's just how it will work.
Alex
Hi @andy222 ,
The Git issue should be resolved; that was related to some authentication issues with newer versions of GitHub Enterprise. It broke a lot of tools across the board, apparently. Anyway, I see it just got fixed today. Check out BuildMaster 2023.10-rc.5 if you can
How about pushing a NuGet package instead? That's a much more common scenario (folks migrating from Octopus), and it's one we're going to add some good first-class support for in BuildMaster 2024.
Thanks,
Alex
Hi @andy222,
Very sorry for the frustrations here - it's frustrating for all of us too (me especially) when it doesn't work
Just to give some context here --- we made a huge investment in BuildMaster 2022/23, and one of the areas was a major improvement for how we integrate with ProGet. The feed::FeedName
and directory::AssetDirName
convention is brand new, and it's an improvement/simplification on the "Secure Resource" convention that you discovered. However, both conventions -- as well as directly specifying those values on the operation -- are still supported. There's a lot (too much?) flexibility.
Unfortunately this particular scenario / use case (downloading assets from ProGet) was simply not one that we focused on:
NuGet packages are not uncommon when coming from a TeamCity/Octopus model. Regardless, we didn't focus on this use case. We felt most new users are seeking Git, .NET, Maven, Build, CI Import, and Docker, so that's where most of our effort was focused.
That said.... I would love to make your scenario / use case something really easy to set-up, so hopefully that will give you the confidence to continue!
In this case, it's a "trivial problem" where something just didn't get "wired" up the right way. I can clearly see from the code that the ApiUrl
parameter is not being wired-up as it should be.
We'll get this fixed ASAP - it's just an extension fix. We just need to set up the scenario and make sure it works on our end first.
Cheers,
Alex
Thanks for the request; this is something that is close to requesting a new feed type, so I'll use that rubric to decide.
As we wrote in that link, new feeds can be very time-consuming to research, develop, document, maintain, etc. Like with all software, even estimating the cost is costly - so we can't really even begin the initial research until there's sufficient demand or market opportunity to justify the possible investment.
To be honest, I don't see there being much demand or any market opportunity for this. Time will tell, and maybe someone will comment on this in the future. But for now this seems really niche.
That being said - I took a quick look at the document you linked, and I don't see API docs (i.e. those missing endpoints you mentioned). Maybe it's something as simple as a basic JSON document. Maybe it's an absurdly complex and undocumented API.
However, if you can figure out how the API works, and it turns out to be something like a simple JSON/XML index file.... and you can prototype/fake that using a static file inside of a ProGet Asset Directory... then we can likely implement it quite easily.
I know that's how RPM and Helm Chart feeds got started long ago :)
Alex
Hi @sebastian ,
This will all get a pretty big overhaul in ProGet 2024. I'll share the details in the coming weeks, but here is a sneak peek:
This is what it would look like when viewing the MyFeed licensing rules:
The "Scope" refers to the name of a policy, and you can create shared policies, so this would mean shared sets of licensing rules. You can also bulk-edit license rules on a policy:
I think the new features will change your workflows a bit... maybe you'll use "Warn"? Or perhaps you won't block non-compliant packages? So for now, I'd wait and see :)
Alex
@valeon @miles-waller_2091 @olivier @It-purchasing_9924 @entro_4370
Took a bit, but CRAN (R) feeds have arrived
https://blog.inedo.com/inedo/introducing-cran-feeds-in-proget/
@mrbill @entro_4370 CRAN (R) feeds have arrived
https://blog.inedo.com/inedo/introducing-cran-feeds-in-proget/
@dima-tinte_1260 @rob-leadbeater_2457 @sdohle_3924
Debian (Apt) Connectors are here! Check out this blog article to learn more:
https://blog.inedo.com/inedo/new-debian-feeds/
@shfunke_1795 @jrottmann_6111 @sdohle_3924 @bahues_9728 @appplat_4310
Thanks for insight into this! I'm happy to report that starting in ProGet 2023.22, you can create Alpine (APK) feeds with connectors :)
@paul_6112 said in Do you plan to upgrade JQuery in a future ProGet release?:
This was picked up by nessus on BuildMaster v7
Lol wow - that's ridiculous
As I mentioned before, it's a forked library and thus not vulnerable. So I suppose you can continue reporting it as a "false positive" to whoever seems to care, and perhaps we'll also just edit the version number out to appease the security tool
Thanks @carl-westman_8110 , I appreciate the feedback!!
As someone coming from Azure Artifacts, I'd love to get your impression on our draft ProGet vs Azure Artifacts page - we're slowly starting to try to articulate the high-level differences and benefits to ProGet. But I swear marketing copy about the software is harder to write than the software itself
We also have BuildMaster vs. Azure DevOps comparison page too, though it's quite a bit more involved.
Hi @carl-westman_8110 ,
Thanks for the feature request! I'm afraid this one's a bit too niche to implement as described, and this use case isn't something we'd want to support for Universal Packages.
However, Asset Directories are a good fit for this, and one of the use cases is a Static CDN. So that means you could use it for web assets like docs if you wanted.
You'll still need to publish the docs like you would to the other webserver. And of course you could hyperlink to the document root from the universal package description as well.
Cheers,
Alex
Thanks for the feedback @philippe-camelio_3885 !
The Applications page is "ancient", and was originally designed to show "what build is in what server/environment". That was super-useful at the time, and I suppose still is, depending on the use case.
But with multiple pipelines per application (like you have now) this view isn't so useful. I'm definitely open to redesigning / rethinking some of these dashboard/aggregate pages.
This is something we can think about for v2024 (since v2023 is just a couple weeks away). I've put a note on our roadmap planning board, and may jump back here or email you directly for some feedback/insight
@shayde @sebastian really appreciate the help, we'll get this incorporated ASAP !!
Hi @jchitel_9895 ,
Thanks for the additional info! We "moved" your new topic back to this one, since we link these on this page in the docs and want to keep everything in one place: https://docs.inedo.com/docs/proget-feeds-other-types
Keep in mind that feed types are a significant initial and ongoing investment (it's a product in a product), and at first glance, Homebrew doesn't seem to make any commercial sense.
First and foremost, there doesn't seem to be a market here. Homebrew itself isn't commercialized. They tried a Kickstarter back in 2013, but it seems to remain a hobby-type project. Compare that to Chocolatey (which may be a little older, and also did a Kickstarter I think) -- they now have a decent-sized fulltime staff.
But secondly (and on a technical level), there isn't a "Homebrew Repository" or "Homebrew Server" - as you mentioned, it's Git-based - which means all it's doing is cloning Git repositories, and probably using tags and specific repo layouts to determine packages.
Cheers,
Alex
Thanks for clarifying @philippe-camelio_3885. I see the issue in the code now. I think it's been this way for quite a while
On the View Page, there seems to be some special handling for credentials:
return "ssh-rsa " + Convert.ToBase64String(cred.PublicKey);
I'm just going to delete the "ssh-rsa "
bit, since apparently that can be incorrect. That's an SDK change actually, so it'll take a bit to be reflected in the products.
On the View Secret Fields page, we're just coercing the value to a string:
return InedoLib.UTF8Encoding.GetString(bytes);
That doesn't seem right either, but I'll just leave that as is. I know we redid that page in BuildMaster and will eventually bring it to Otter.
There should probably be a special page altogether for this type of credential, instead of using the generic "Edit credentials" page. Not a big priority, but perhaps some day :)
@hwittenborn awesome! I'm not sure if you saw it yet, but we have a new API called the Common Package API; I hope to fold in Promotion, Repackaging, and Deployment under this so it can be consistent