There was a regression for the PowerShell agents that caused this behavior in some cases, but it was fixed in Otter 2.2.5 (released today), so it should now be fixed.
Let me know if this resolves your issue!
The first example you mentioned (i.e. the one to /api/management/feeds/create) is using the Feeds Management API; it looks OK to me at first glance... can you share the error message you got when invoking it? You should be able to see logged requests/responses under Admin > API Keys as well.
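If it helps isolate the problem, here is a rough sketch of that call from Python. This is only an illustration: the server URL, the key value, and the body fields (Feed_Name, FeedType_Name) are assumptions, so check the Feeds Management API documentation for the authoritative parameter names.

```python
import json
import urllib.request

# Hypothetical values -- substitute your own server and API key.
PROGET_URL = "https://proget.example.com"
API_KEY = "secret123"

# Build (but don't yet send) the create-feed request; the body
# fields below are assumptions for illustration only.
body = json.dumps({"Feed_Name": "private-npm", "FeedType_Name": "npm"}).encode("utf-8")
req = urllib.request.Request(
    PROGET_URL + "/api/management/feeds/create",
    data=body,
    headers={"Content-Type": "application/json", "X-ApiKey": API_KEY},
    method="POST",
)

# urllib.request.urlopen(req)  # uncomment to actually invoke the API
```

If the call fails, the HTTP status and response body (visible under Admin > API Keys, as noted above) are usually enough to tell whether it's an authentication problem or a malformed request body.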
The second example you mentioned (Feeds_CreateFeed) is using the Native API, which we don't really recommend when an alternative is available; it is basically a wrapper around stored procedures and the database. In this case it looks mostly correct, but the FeedType_Name is wrong: if you look at the Feeds table in the database, you'll see a universal feed is actually called ProGet there.
Anyway, please use /api/management/feeds, because it's easier to use and won't change if we update the database or stored procedures.
Quick update: there's a bug we identified with some WSMan connections that is causing different errors, but it might be related. We're going to fix this in Otter 2.2.5, shipping Friday.
Otter 2.2.2 doesn't seem to exhibit this behavior.
Long story short, your workstation is overwhelming your server with network connections.
Remember that NuGet.org not only runs on a massive web farm with dozens of load-balanced servers; it's also a static-file-based index served mostly from CDN-hosted files.
Each request you make to ProGet, on the other hand, needs to be authenticated, checked for vulnerabilities and license issues, sent to connectors, etc. And I would be surprised if your server is more powerful than your workstation.
There are some features in ProGet Basic like metadata caching that will help, but ultimately when you scale to more developers you ought to invest in better server hardware and eventually load balancing. See https://blog.inedo.com/proget-free-to-proget-enterprise
This is because Get-Http is an execute-only operation, which means it will only run if configuration has changed.
To force execute-only operations to run in configuration plans, you need to specify the execution directive to be always execute, as follows...
with executionPolicy=always
{
    ...
}
Hopefully we can better document this in the future; it's buried in the formal specification.
@knitvijay_7631 said in During build getting error (unable to clone from github):
Clone failed: unknown certificate check failure
I did a quick search on this message, and there's lots of advice on how to get this working. The problem is coming from Git, and BuildMaster is just reporting the problem. I think your best bet will just be to use HTTPS instead of SSH. It's a lot easier to configure...
But here's a post that seems to be quite popular and gives lots of tips and tricks for resolving this...
That's not surprising; as I mentioned, the problem lies with your configuration. Either you're using the wrong name and password in Otter, or the WSMan endpoint on the remote server isn't enabled.
I recommend you use the Inedo Agent.
We hope to include this ability in a future release, but it's a bit more complicated to get the details worked out. For now, uninstalling and then reinstalling (picking the version you want) will work.
Here is the underlying error message:
Can not connect to Windows servers with WSMan endpoint. Try to use credentials FQDN and Netbios, result - exception.
Basically, this means that your username/password is not being accepted. It should be something like DOMAIN\username or username@domain.local.
It could also be that WSMan isn't configured to allow these connections; this can be controlled at the domain level. I recommend you use the Inedo Agent; it's a lot easier to set up and get working.
@valeon fantastic, thanks so much! This will really help us explore; it doesn't look "too bad", and is "somewhat similar" to how Debian manages its packages.
We'll try to start hacking around with a POC in the coming weeks, hopefully; I'll post an update when I can!
Just as an update, we will be doing this:
https://inedo.myjetbrains.com/youtrack/issue/PG-1555
"Proxy npm audit requests to npmjs.org (experimental)"
Unfortunately, npm audit is a totally undocumented endpoint, and based on past experience, npm's API changes frequently and is nontrivial to reverse engineer. Moreover, npm, Inc. does not permit or support third-party access to the API that npm audit uses.
When they change that underlying API (whether to enforce the no third-parties rule, or to do something from the client), ProGet will once again be broken (or worse, provide incomplete/incorrect results). At least now, you know that this is the only supported way to handle it...
Do you have an npm Enterprise license? This might be something to work through their support channel... they don't have a partner program at this time, so getting permission or insight into how we can access this API is difficult.
ProGet does not have the concept of "public" or "private" feeds; instead, you can grant the "Anonymous" user certain permissions, including viewing and publishing packages.
ProGet Free Edition does not allow feed-level permissions, only system-level permissions. So you can grant "Anonymous" whatever rights you want, just not at a feed-level.
If you set Warn, will it automatically advance?
One idea as well... how about also setting a build variable called IsValidBuild using Set-ReleaseVariable, and then using a Variable Value Promotion Requirement (IsValidBuild = true)?
Hi all, thanks for the interest/comments; I decided to write up a page on the docs that details this.
http://inedo.com/support/documentation/proget/feeds/other-types
I'm hoping we can use this public thread to maintain the discussion on technical detail; otherwise it'll get stuck in my email, or somewhere else, and we can get everyone to chime in this way.
That said, @M-W if you've got any insight into how R/CRAN works please do share :)
The Postgres container has had a lot of performance problems at scale, and neither our customers nor our engineers could figure it out. A regular instance was fine, but our customers wanted containers.
But in the long term, maintaining two separate code bases doesn't make sense, and now that SQL Server is available, it makes sense to make the switch.
Hi, sorry on the slow reply!
I think you've got a good understanding of the situation, but a couple of comments:
- Do not share the temporary files (ServiceTempPath, WebTempPath); these should be kept on the same server as the BuildMaster web/service app. They are only used while those applications are running.
- On the backup server, make sure both the Web Application and Service are set to DISABLED, or configured with a bad username/password such that they cannot be started easily; having two identical BuildMaster instances pointing at the same database and the same agents will cause that multi-master problem you don't want to deal with :)
- We have had some customers put our products in a Windows Container, but largely the support on the Microsoft side isn't great, and it's more trouble than it's worth; we are moving towards .NET Core so that BuildMaster can run on Linux (and in Linux containers).
Does that make sense? Let me know how it goes.
And by the way, I'd love to document this better... would you be interested in helping me with this, especially once you have it running in your set up? I think it would really help the community :)
Thanks Scott; just an FYI, we spent some time investigating this, and it actually looks simpler because it separates the package metadata from the file metadata and uses XML for serialization.
We'll keep digging; since it's additive, I don't see why we couldn't get it into a maintenance release if it's as easy to do as, say, Helm. Just don't want to get stuck in a rabbit hole like PHP Composer :(
This is also being discussed here; long story short, this is a bug in Microsoft's PowerShell Gallery (and in the packages) in that they have two conflicting version numbers, and sometimes one is used, sometimes the other is used.
The docs have been updated for now: https://github.com/Inedo/inedo-docs/commit/10c42c3ad546ac2feb4748b7736db06567c7a6d6
Ugh, this sounds like a mess :(
Your comments on that GitHub issue seem spot on, and I'm not so keen on introducing quirky behavior to workaround their bugs. We made that mistake on the NuGet feeds and it made life worse for everyone, especially when the NuGet team silently fixed them (and introduced more weird behavior).
So I think we should just handle this via docs for now. I've updated the docs; feel free to suggest more changes :)
To be honest, "SCM Triggers" are in a "quasi-legacy" state, because they use the "Legacy Providers". As such, we don't list it as a feature, and we hide the button on installations without SCM Providers configured. This was done slowly over the course of two years, in various v5 versions, to gauge user reaction.
So, your inquiry is good feedback. That being said, we have not yet put a lot of effort into properly redesigning this feature. The reason is that the "general direction" has been moving towards post-commit hooks, i.e. triggering the build from the SCM server once a merge request (or similar event) happens. Even in dedicated CI tools, the preference is shifting towards this route. I think this is because the branching logic has become too complicated to follow (even for really advanced CI servers like Jenkins and TeamCity).
So I wonder, have you looked into post-commit hooks?
Otherwise, we are definitely considering a general-purpose poll/trigger feature: basically, a poll would periodically occur and, if a condition is met, trigger some sort of event. This would be shared across multiple products, and the use cases might vary by product.
One motivation to develop this feature is that it would let us really showcase the "BuildMaster goes from source to production" story, which is becoming popular again. Most people didn't want that before, but the pendulum is swinging back now that source code tools (like Git) are integrating (totally inadequate) build/release features.
In any case, none of this is all that "immediate", but I wanted to share our reasoning with you and solicit your feedback.
Of course, if you need to create "legacy components", we can help you do that as well. We don't plan on removing those from BuildMaster, just hiding them so that new users don't access them.
It seems like NuGet.org now finally supports it! So we will too; I filed PG-1246 to add it in a future maintenance release.
Good question.
Regarding Vor Security, that was a recent acquisition by Sonatype, and it's being transitioned into a new service called OssIndex. Sonatype plans to keep this going for the foreseeable future, and we have verified this with Ken Duck (formerly of Vor Security, now Sonatype employee). ProGet will continue to support it (we are renaming it as well).
Moreover, we are planning to work with Sonatype to better integrate their broader services (vulnerability scanning) with ProGet. We are also investigating Blackduck integration, though we're not entirely sure how it would work with ProGet.
Regarding "developing our own"... broadly speaking, there are two types of vulnerability scanning: static analysis of your code, and repository/database lookups.
We don't believe that static analysis has a place in a package manager; there are a handful of tools that can scan your codebase directly for this.
As for repository/database lookups, it's not really about "finding" vulnerabilities in software; it's more about aggregating databases and then translating those into machine-readable formats. This is what Sonatype, WhiteSource, etc., do, and we think more vendors will continue to innovate in this space.
But the "repository" and "scanning" are two different problems, and you should pick the best tool for each; it would almost be like saying "Microsoft makes Office, so we may as well use Visual Studio and .NET".
ProGet already has the extensibility support for this, so we should be able to integrate with new providers as they come up.
The behavior you're describing is to be expected; basically, the API key acts as an impersonation token, meaning that if you supply an API key and it's associated with a user, then it's as if you logged in as that user.
So in this case, try this: assuming myFeed doesn't allow anonymous access (you will need to restrict this from the permissions page), you will need to provide an API key or username/password to access that feed. In this case, just use the key you created.
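As a quick sketch of what that looks like in practice: the feed URL below is hypothetical, and the basic-auth variant with "api" as the username is a convention I believe ProGet supports, so treat both as assumptions to verify against the docs.

```python
import base64
import urllib.request

API_KEY = "secret123"  # the key you created
# Hypothetical URL for the myFeed example above.
FEED_URL = "https://proget.example.com/nuget/myFeed/v3/index.json"

# Option 1: send the key in the X-ApiKey header.
req_header = urllib.request.Request(FEED_URL, headers={"X-ApiKey": API_KEY})

# Option 2: HTTP basic auth, with "api" as the username and the
# key as the password (assumed convention; verify against the docs).
token = base64.b64encode(b"api:" + API_KEY.encode("utf-8")).decode("ascii")
req_basic = urllib.request.Request(
    FEED_URL, headers={"Authorization": "Basic " + token}
)

# urllib.request.urlopen(req_header)  # uncomment to actually query the feed
```

Either way, if the key's associated user has been granted view rights on myFeed, the request will behave exactly as if that user had logged in.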
Hope that helps!
It doesn't, but I think we should add it to the Release & Package Deployment... so I filed BM-3149.
So it will come in a future maintenance release, since it's additive and seems to pose minor risk.
There isn't currently, but there will be soon! Please see PG-1221
I can't imagine any reason this wouldn't be done, and it can go in the next maintenance release assuming it passes code review, etc.!
Thanks much for the specific suggestion!
Mostly it's going to be the PlanVersions table. Plan_Bytes is UTF8-encoded, so you can do "CAST(Plan_Bytes AS VARCHAR(MAX))". The ConfigurationFileInstances table may also reference it, if you use configuration file assets. IssueSources are another destination.
Note you should never directly update the database.
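For example, a read-only search could look like the sketch below. Only the PlanVersions table, the Plan_Bytes column, and the CAST come from the answer above; the Plan_Name column and the @SearchTerm parameter are my assumptions for illustration.

```python
# T-SQL for a read-only search of plans that reference some text.
# Plan_Bytes is UTF8-encoded, so CAST makes it searchable as text.
# Plan_Name and @SearchTerm are assumed names, not verified schema.
SEARCH_SQL = """
SELECT Plan_Name
  FROM PlanVersions
 WHERE CAST(Plan_Bytes AS VARCHAR(MAX)) LIKE '%' + @SearchTerm + '%'
"""

# The same UTF-8 round trip the CAST relies on, shown in Python:
plan_bytes = "Get-Http https://example.org;".encode("utf-8")
plan_text = plan_bytes.decode("utf-8")
```

Since SELECT statements don't modify anything, this stays within the "never directly update the database" rule.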
Hey Clint;
This was intentional in the infrastructure sync; credentials are a bit trickier, because of the encryption key and the fact that some credential types aren't supported.
That said, this is definitely on our roadmap and will (likely) come in the form of a new (free) product that manages multiple instances of our tools.
In the meantime, it's possible to do with a database script or a simple tool that just copies rows from one database to another (the ResourceCredentials table). We can certainly help with that if needed.