Thanks for the update! I've noted this in the docs, and linked to this discussion :)
https://github.com/Inedo/inedo-docs/commit/d24087911584bbda833314084a58c2ae1ff41c39
Hello
How to raise an error in BuildMaster for a failed SHExec?
I have an Ansible playbook that failed (RC=2), but the pipeline step shows OK when it should be in error.
I can't figure out how to manage this.
Here is the log:
...
INFO : 2020-05-03 13:46:43Z - fatal: [127.0.0.1]: FAILED! => {"changed": false, "msg": "Source template/site.conf.j2 not found"}
INFO : 2020-05-03 13:46:43Z - to retry, use: --limit @/tmp/bm/extranet-monespace/ansible/main.retry
INFO : 2020-05-03 13:46:43Z - PLAY RECAP *********************************************************************
INFO : 2020-05-03 13:46:43Z - 127.0.0.1 : ok=4 changed=2 unreachable=0 failed=1
INFO : 2020-05-03 13:46:43Z - Exit code is: 2
DEBUG: 2020-05-03 13:46:43Z - Script completed.
DEBUG: 2020-05-03 13:46:43Z - Deleting temporary script file (/tmp/otter/scripts/d2f7283d77c94cfb89151ca2e3004212)...
DEBUG: 2020-05-03 13:46:43Z - Temporary file deleted.
DEBUG: 2020-05-03 13:46:43Z - Cleaning up...
DEBUG: 2020-05-03 13:46:43Z - Deleting /tmp/otter/_E174799 on VM111021...
DEBUG: 2020-05-03 13:46:43Z - /tmp/otter/_E174799 on VM111021 deleted.
DEBUG: 2020-05-03 13:46:43Z - Cleanup complete.
Any ideas?
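One possible workaround (a sketch, not a confirmed fix — whether SHExec honors the script's exit code may depend on the BuildMaster version and the operation's settings) is to run the playbook through a small wrapper that checks the exit code itself and emits an unmistakable error line. The `run_and_check` helper and the `ansible-playbook main.yml` invocation below are hypothetical stand-ins:

```shell
#!/bin/sh
# Sketch: run a command and surface a nonzero exit code loudly.
run_and_check() {
    "$@"
    rc=$?
    if [ "$rc" -ne 0 ]; then
        # An explicit stderr line plus a nonzero return gives the
        # calling operation an unambiguous failure signal.
        echo "ERROR: '$*' failed with exit code $rc" >&2
    fi
    return "$rc"
}

# Demo with a command that succeeds; in the real plan this would be:
#   run_and_check ansible-playbook main.yml
run_and_check true && echo "step ok"   # → prints "step ok"
```

If the wrapper's nonzero exit still doesn't fail the step, that would point at the SHExec operation swallowing the code, which is worth raising separately.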
What do you think about it? We use GitLab for our internal projects, but would like to understand the choices between GOGS/Gitea and the "big players" like GitHub/GitLab.
We're very interested in helping organizations use as many different source control platforms as possible, and Gogs seems pretty interesting! So keep us in the loop on how your integration with Gogs goes; it might be worth us building a specific integration. I've also heard of Gitea, but I'm really not sure about its popularity in organizations, or whether it's worth investing in/investigating how to build better, first-class integrations into our products.
Aww, that's too bad. Unfortunately Repository Webhooks aren't exposed in the SDK at this point, so it's not nearly as easy to try building/testing it on your own. We'll definitely consider adding this in a future SDK release however.
Thanks @kvasudevan_8753 - that definitely will do it :)
@nik-reiman_6009 package interaction is done through the package-specific, third-party APIs; we only provide minimal documentation for these APIs, as they are generally already documented elsewhere. We simply implement those APIs.
So in this case, if you search for how to delete a NuGet package from the API, the first result is from Microsoft, which details how to Push and Delete packages from a NuGet API.
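For illustration (a sketch only — the feed URL and package name below are hypothetical, and the exact delete-endpoint path may differ, so verify against your feed's docs), the NuGet v2 delete is an HTTP DELETE against the package's URL, which the `nuget` CLI can also issue:

```shell
#!/bin/sh
# Hypothetical feed and package; the /package/{id}/{version} path
# follows the NuGet v2 push/delete convention.
FEED_URL="https://proget.example.com/nuget/MyFeed"
PKG_ID="MyCompany.Utils"
PKG_VER="1.2.3"

# Compose the delete URL for a given feed, package id, and version.
delete_url() {
    echo "$1/package/$2/$3"
}

echo "DELETE $(delete_url "$FEED_URL" "$PKG_ID" "$PKG_VER")"
# To actually send it (needs a valid API key):
#   curl -X DELETE -H "X-NuGet-ApiKey: $API_KEY" \
#        "$(delete_url "$FEED_URL" "$PKG_ID" "$PKG_VER")"
# Or with the nuget CLI:
#   nuget delete "$PKG_ID" "$PKG_VER" -Source "$FEED_URL" -ApiKey "$API_KEY" -NonInteractive
```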
What I mean by "republish" it is to republish it to your ProGet feed. This local package will then "mask" the connector package, and ought to behave fine.
In this case, since it's just one package behaving like this, and there's such an easy work around (download package, edit metadata, republish), it's really not worth the investment to even investigate how easy/difficult it would be to fix. If it becomes more widespread, we'll investigate.
This has to be done on NuGet and PowerShell feeds for some old and quirky packages (like ones that have a 1.01.1 and 1.1.1 version). It may be trivial, but it could introduce the exact regression you described (people using build metadata in their package.json), or other (unknown) regressions, and it's just not worth the risk to our users.
The Webhooks are GitLab and GitHub specific, but if Gogs mocks/uses the GitHub pattern it should work.
Thanks, that explains where it comes from.
Since this is the only package with a problem, I wonder if you could just "republish" it with the build-metadata number missing?
We're a bit hesitant to make a change that could impact this. For all we know, this works in older versions of the npm client, or newer versions of the npm client, or the yarn client, etc., and users are relying on this behavior.
Hello;
Sounds like there's a lot to unpack...
First, we've got some major changes in ProGet 5.3 that should help out quite a bit, but I've included a lot of resources in this post to consider as well. Consider that, as a private registry, ProGet is designed to host "private images" that you manage (which can be built from "public images"); it's really not designed as a "registry mirror/cache". These are two different use cases.
As for the authentication issues, that might be a different problem and might be worth a new thread?
But here are our current thoughts/documentation on repository mirroring in ProGet:
There is the ability to "mirror" Dockerhub, but this feature was really only designed to enable Docker, Inc. to host public registries in sensitive regions like China (so you can mirror to registry.docker-cn.com instead).
Unfortunately, the Docker client will only use the configured "mirror" for building and pulling; images are always pushed to Dockerhub.
Docker, Inc. does not seem to be interested in enabling this, and we aren't particularly keen on forking the Docker client to enable this. Thus, it seems there's no sense in ProGet supporting this anytime soon.
On Dockerhub, there are a lot of problematic images that cause a new breed of "worked on my Docker" problems in companies. openjdk:11 is a great example of this problem. The connector "problems" in ProGet are a symptom of the problem, but we strongly recommend not using "third-party" container images directly; instead, follow our "base image" development pattern.
First, and foremost, openjdk:11 currently references what's called a "fat" manifest, which is a sort of "pointer" to other manifests that will be used depending on the run-time environment of Docker. There are currently four different environment-specific images that this "fat" manifest points to, so when you build an image on top of it (e.g. FROM openjdk:11), your image will be "locked" to whatever environment configuration the build machine (or even workstation) had at the time the command was run. This may be different than the machine you run the image on.
Even more problematic, however, is that 11 is a time-sensitive tag, and what 11 refers to keeps changing. Today, it points to four different versions of four different images (as of 7 days ago), but tomorrow... it could point to 3 totally different images. Who knows? It's a repository maintained by a third party, and 11 means what they say it means, at their discretion.
The Dockerhub is meant to be the "authority" on this repository, and this is where connectors are a challenge. ProGet caches certain information about these images, but when the meaning of these things changes, it defeats the purpose of caching. If you disable caching, it should work... but what's the point then? You'd be constantly downloading huge image files.
So instead, here's how we recommend you develop a practice around third-party base images.
docker pull openjdk:11
docker tag openjdk:11 proget.mycompany.com/dev-images/mycomp/openjdk-linux64:11.0.6-rc.1
docker push proget.mycompany.com/dev-images/mycomp/openjdk-linux64:11.0.6-rc.1
Even better would be to use a Dockerfile (e.g. FROM openjdk:11) that lets you customize the image as needed, but this is a good start. Then, after you/developers are happy with that version of that image, you can "re-tag" from 11.0.6-rc.1 to 11.0.6. In ProGet 5.3, these operations are made easier, and "virtual" tags are automatically generated. But the important thing is that you treat "tags" as immutable, and provide consistent images for your users.
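A minimal sketch of that Dockerfile approach (the registry host, feed, and repository names below are hypothetical, matching the example commands above):

```shell
#!/bin/sh
# Write a small Dockerfile that pins and customizes the third-party base.
cat > Dockerfile.base <<'EOF'
FROM openjdk:11
# Company-wide customizations go here (CA certs, tooling, users, ...)
RUN mkdir -p /opt/mycompany
EOF

# Then build and push the result as your own immutable tag:
#   docker build -f Dockerfile.base \
#       -t proget.mycompany.com/dev-images/mycomp/openjdk-linux64:11.0.6-rc.1 .
#   docker push proget.mycompany.com/dev-images/mycomp/openjdk-linux64:11.0.6-rc.1
echo "wrote Dockerfile.base"
```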
Note that the repository name is mycomp/openjdk-linux64. It's important to not use only openjdk-linux64, since that name is identical to _/openjdk-linux64 and library/openjdk-linux64. The library and empty (_) namespaces are intended only for "Dockerhub-certified" images.
Docker is supposed to solve "works on my machine", but unless you treat tags as immutable, you'll keep getting "works on my Docker when I ran it," and that's even harder to debug. ProGet is "exposing" the underlying problems with these machine-specific, time-specific tags, but I guess it's better to discover that early on than in Production.
Hello, you can definitely delete packages from the API!
But please note, from the API Docs page:
API for Third-party Packages and Feed Types
In order to support third-party package format types like NuGet, npm, etc., ProGet implements a variety of third-party APIs. We only provide minimal documentation for these APIs, as they are generally already documented elsewhere. However, you can generally find the basics by searching for specific things you'd like to do with the API, such as "how to search for packages using the NuGet API" or "how to publish an npm package using the API".
hi Ryan,
Can you try Active Directory (New)? The Active Directory with Multiple Domains will be deprecated in 5.3, and the code behind the scenes is totally different anyways. So, it might behave differently.
Regardless, based on what you wrote, it seems to be a problem of NETBIOS mapping.
Basically, the LOGON_USER string contains DOMAIN\username, and ProGet needs to map DOMAIN to the actual domain. Sometimes the automatic querying doesn't work (permissions, not configured on the domain, etc.), so you can specify it in an advanced property in the Active Directory (New) provider.
So please try this if it doesn't work once switching to Active Directory (New).
Hmm, it's strange. Is this the only package and version that's acting like this? The "+" implies a sort of build-metadata in SemVer, but I'm not sure if npm supports this?
Anyways, the npmjs.org version doesn't seem to have the text 4.12.20 anywhere at all; any idea where that's coming from?
OK; at this point, I think it's going to be best to schedule some one-on-one review time with our engineering team.
I will start the process on this end, but can you please open a ticket here: https://my.inedo.com/tickets/new
You can write "6.2 UPGRADE" in the "How can we help" field, and include any other emails you'd like us to keep in touch with in the form as well. Then we'll work on getting things scheduled!
Hello;
Given the problems with the existing server, here is my recommendation:
Upgrade the database manually (run the dbupdater command from the manual install). Manual install guide for the database: https://docs.inedo.com/docs/buildmaster/installation-and-maintenance/installation-guide/manual
Backup and Restore instructions are here: https://docs.inedo.com/docs/buildmaster/installation-and-maintenance/backing-up
Glad to see you made your own templates!
So, there are two types of templates:
Setup Templates, which are the JSON-based templates that give you access to quick settings
Application Templates, which contain plans, pipelines... and setup templates
I just want to confirm, when you say "Apply Template", do you mean the latter (Application Template)?
If you can let me know about the error, such as a stack trace or log file, I can investigate!
That error message looks like it's the PipelinePage, not the Extensions page...
I think you've got a severe configuration problem on the server still. Between multiple databases, and "impossible" errors, I think you should try this...
Start with a totally new server.
Install BuildMaster 6.1, fresh installation
ensure it works (run the tutorial application)
Restore your database to that server.
From there, go to Admin > Extensions, and install what extensions you need.
All of the steps look correct!
I assume that global::Proget BuildMasterTemplate contains a domain username/password with access to the feed?
The first thing I would try is to (temporarily) change global::Proget BuildMasterTemplate to your name/password. If that doesn't work, then grant Anonymous access to the feed.
If it still doesn't work, then I think the problem is that Integrated Authentication is enabled on IIS. Even though it's not enabled in ProGet, IIS will still force authentication.
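One way to check this (a sketch; the feed URL below is hypothetical, and challenge headers can vary by setup) is to look at the `WWW-Authenticate` header the server sends back: a `Negotiate`/`NTLM` challenge points at IIS-level Integrated Authentication answering first, while ProGet itself issues a `Basic` challenge.

```shell
#!/bin/sh
# Against a real server you'd run something like:
#   curl -s -o /dev/null -D - https://proget.mycompany.com/nuget/MyFeed/
# and inspect the WWW-Authenticate header; this helper classifies it.
classify_challenge() {
    case "$1" in
        *Negotiate*|*NTLM*) echo "iis-integrated" ;;
        *Basic*)            echo "proget-basic" ;;
        *)                  echo "no-challenge" ;;
    esac
}

classify_challenge "WWW-Authenticate: Negotiate"   # → iis-integrated
```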
I'm sorry, I misunderstood; I thought your instance was working. Now that I see the older messages, I see it's not.
OK ...
What happens when you visit /applications/4? The error message you've shared appears to be from /all-applications.
Great, you've got 6.1 running.
So, BuildMaster v6.2 removes all legacy features, and the upgrade is blocked until you run the legacy feature detector and it says no features are detected. See the BuildMaster 6.2 Upgrade Notes.
I'm concerned about the state of your server. How about making a new server for BuildMaster 6.2, and then migrating your applications as per the guidance in the kb article? Then, you won't have more server problems in the future.
As for that script, I'm a little concerned with it. If it failed to run, then it means the BUILDNUMBER and RELEASENUMBER columns in the database may be too short (new length is 50 characters, old length was ... I forgot... but shorter). It's not a huge problem and would likely go undetected; it would just give an error if you made a build number that was between the old length and 50 characters.
Anyways, if you follow the plan of making a new BuildMaster 6.2 server, then migrating your applications per that article, then you won't have to worry about it.
It sure is! They use the same directory structure, and each product ignores what the other product doesn't use.
FYI: in our future vision, we want to make UPack-based rafts, so you can deploy a raft to a ProGet instance, and download/manage versioned, read-only versions of it.
Can you try this instead? select top 10 executed_date, Script_Guid, Script_Name from __BuildMaster__DbSchemaChanges2 order by Executed_Date DESC
Can you try to restart the web application (in IIS, or restart the Integrated Web Server), then visit the home page? I think something might be cached... that error doesn't make much sense.
How about browsing to /applications/4/ (since I saw Application with ID = 4 in the database screenshot).
So, if I understand correctly, using Email for username and PAT for password causes a problem. I'm guessing this is happening during a Bearer token request.
Would you be able to run ProGet through a proxy server, so you can see the requests that are being made, and reproduce those requests? Alternatively, could you use the code I provided to request a Bearer token?
It's possible that you have an intermediate proxy server that's generating that 400 error, or it's a bug on Microsoft's end. The Microsoft documentation could also be wrong. But let's try to see if we can work-around or get information to Microsoft to fix it.
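For reference, a sketch of the standard registry token handshake (the realm and service values below are Docker Hub's, shown only as an illustration — substitute whatever the server's 401 challenge actually reports):

```shell
#!/bin/sh
# Build the token-request URL from the 401 challenge's realm/service,
# then request a Bearer token with the same credentials the client uses.
token_url() {
    realm="$1"; service="$2"; repo="$3"
    echo "${realm}?service=${service}&scope=repository:${repo}:pull"
}

URL=$(token_url "https://auth.docker.io/token" "registry.docker.io" "library/openjdk")
echo "$URL"
# With credentials (user + PAT), the actual request would be:
#   curl -s -u "$USER:$PAT" "$URL"    # the JSON response carries the token
```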
Can you try...
dbupdater.exe Update . /conn="Data Source=WIN-JG8E2BQKINK\BUILDMASTER;Initial Catalog=BuildMaster;Integrated Security=SSPI;" /init=yes
Looking at dbupdater, the arguments are dbupdater update <script-path> <connection-string>, so I think the "." will include the current path.
Hmm... that's odd. I don't know. Could be a regression in the Legacy BuildMaster agent, or an unrelated problem.
Can you just upgrade it to the Inedo Agent?
https://inedo.com/support/kb/1039/comparison-of-buildmaster-agents
Can you try running inedosql instead of dbupdater?
Seems like an old link; here it is: https://my.inedo.com/buildmaster/versions
Thanks; based on this, there's only one application? Is that correct? Seems like it might be off...
Anyways the database definitely wasn't upgraded. I'm guessing, during upgrade, it was pointed to a different database? Hard to guess...
You can manually upgrade the database:
https://docs.inedo.com/docs/buildmaster/installation-and-maintenance/installation-guide/manual
There may have been an error as well (look in the __BuildMaster__DbSchemaChanges2 table).
Hello; thanks to your package, we were able to reproduce this, and it's going to be fixed as PG-1695 in the next maintenance release. Thanks!
Hello, these still function in the same manner, so I think it's configuration related.
Can you check to make sure the agent's temp directory is...
Thanks
FYI; this has been released already as a main feed type, no longer a pre-release :)
I'm going to lock the thread from here to make it easier to submit new issues if anyone has them
Something's definitely not right. I'm guessing some manual adjustments were done on the server, perhaps moving around some files or something, at some point. It's probably not in a good state, and it's a bit strange to see two SQL Servers installed, anyways.
From here, your best bet is to figure out what state your BuildMaster server is actually in: BuildMaster is a standard ASP.NET application and Windows Service, and you can ensure its configuration is correct here:
The Agents installed on the remote servers are probably Inedo Agents, and if not, they should be:
Your best bet might be to rollback your server at this point.
Oh I see, then this actually just seems like a bug to me :)
I thought the problem was that BuildMaster didn't have the feature of "Dependent Roles". Anyways, I've logged the bug as BM-3588, and we'll try to fix this in an upcoming maintenance release.
In your example, TEST should have GSRole, SRole, and FRole in BuildMaster.
The sync should still work... are you getting an error? If you go from Otter -> BuildMaster, then it just won't copy over the role dependency relationships, but you should still have the roles and role variables, etc.
My guess is that you also have SQL Server 2005 installed? It's possible to have multiple instances of SQL Server installed, and an instance named INEDO is installed by default w/ the installer. You'll want to upgrade that instance.
BuildMaster doesn't support role dependencies; that's an Otter feature only.
The upgrade process itself is quite easy. But do note that for v6 you'll need to also update extensions; the admin UI will guide you on doing this. It's documented here: https://inedo.com/support/kb/1163/buildmaster-6-1-upgrade-notes
You can just download the Inedo Hub, then click Upgrade; you won't be able to select 6.2 from the list of versions to upgrade, so just install the latest 6.1 (which will be the default).
Neat! Would you mind sharing it?
We are trying to build up content libraries that show you how to do stuff like this... such as this BuildMaster and Terraform content, which does use Modules, but also establishes a nice CI/CD pattern.
Getting an idea of how to do this w/ playbooks would be nice :)
Hello; BuildMaster 6.2 is a "really big" upgrade (perhaps, "biggest ever"), so please take care when upgrading. Most users have had no problems.
First place I'd start is here:
https://docs.inedo.com/docs/buildmaster/installation-and-maintenance/legacy-features/62-migration
Here's more detailed info about the upgrade: https://inedo.com/support/kb/1766/buildmaster-6-2-upgrade-notes
Long story short, just upgrade to v6.1 first, make sure you're not using legacy features, then you should be ready to go to BuildMaster 6.2 :)
This isn't supported through an API; you could, however, just set up a retention policy that automatically deletes the cache as space is needed.
Hello;
The Native API is for low-level, system functions, and it's "all or nothing". If you give someone access to the Native API, you are effectively making them an administrator, as they can also change permissions and grant admin privileges. So, I don't think you want this. Instead, you'll want to use the Debian API endpoint that we implement.
It's a third-party API format
In order to support third-party package format types like NuGet, npm, etc., ProGet implements a variety of third-party APIs. We only provide minimal documentation for these APIs, as they are generally already documented elsewhere. However, you can generally find the basics by searching for specific things you'd like to do with the API, such as "how to search for packages using the NuGet API" or "how to publish an npm package using the API".
So in this case, I recommend searching for "how to view and download apt packages".
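For instance, a hypothetical sketch of what that usually looks like on the client side — the feed URL layout and the distribution/component names below are assumptions, so take the real values from the feed's usage instructions in ProGet rather than from this example:

```shell
#!/bin/sh
# Hypothetical apt source line for a ProGet Debian feed.
FEED_URL="https://proget.example.com/debian/internal-apt"
SOURCE_LINE="deb $FEED_URL stable main"
echo "$SOURCE_LINE"
# To register it and pull a package:
#   echo "$SOURCE_LINE" | sudo tee /etc/apt/sources.list.d/proget.list
#   sudo apt-get update
#   apt-get download my-package    # or: sudo apt-get install my-package
```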
Thanks, just confirming it was received, and I've attached it to the internal ticket. Stay tuned!
Hello; a WeTransfer would be great. I can download it and attach it to the internal ticket for this forum post! I don't have a local Python environment, so it's best to just get the files and manually upload them.
Can you try the 5.2.29 docker image? There seems to have been a configuration error.
Thanks!! We'd love to learn more.
By the way, with Otter, we have a general plan to make UPack-based "rafts" and allow users to download them from a feed. This way, we can make a community feed of rafts. Easier to build and work with than extensions, I think.
We do have a lot of users who configure Linux servers, but their usage doesn't seem much more involved than ensuring packages, files, and directories. You may be able to get a lot accomplished with just doing that? I'm not sure... happy to learn and help though!
The first error is a known issue, it's related to browsing for packages in the UI, and it's something we've already addressed. I'm not sure about the second error.
As for the package, can you provide a package file that we can actually upload to a test instance of ProGet?
Basically, I want to take a package file, then upload it from the UI, then try to download it from the UI. By not involving Python tools, we can eliminate a lot of problems and simplify finding a solution.
Ah, thank you very much for the additional information. So, then it seems like setting it to AuthenticatedUsers is a bug. OK, that makes sense. So, I changed it.
Can you try it and let me know if it works?
Instructions on installing a new extension: https://docs.inedo.com/docs/proget/administration/extensions#manual-install
Pre-release of AWS Extension: https://proget.inedo.com/feeds/PrereleaseExtensions/inedox/AWS/1.0.4-RC.3
Then, if it's ok, we can release it.
Thanks.
Alana
Hello;
A 500 error should be logged in Admin > Diagnostic Center, so if you can check there and find it, that should help us identify what could be causing it.
Is it only that package, or all packages? If it seems to be package-specific, please share the package with us (or a version that still breaks, but has sensitive information removed), so we can try to reproduce the bug and fix it.