Hi @chris-f_7319,
I think Dan was looking at the upcoming ProGet v2022 file structure. Can you please try:
docker exec -it proget /usr/local/proget/service/ProGet.Service resetadminpassword
Thanks,
Rich
You will want to specify the "Domain to Search" as gcloud.dom,LDAPuser. For the secure credential, you will want to use just a username and password, unless the user logs in with a suffix other than @gcloud.dom.
I think the issue is with the bind DN. BuildMaster will connect to LDAP/AD using the root OU. If you require a CN and OU to be specified, that will not work out of the box. Are those needed to connect to your domain controller?
Thanks,
Rich
To specify a username/password to use to communicate with your domain, you need to:
1. Create a secure credential for your domain account (ex: ADDomainCreds)
2. Edit your Active Directory (LDAP) user directory
3. Set the domain controller option to Specific List
4. Add an entry in the format <DOMAIN_SUFFIX>,<CREDENTIAL_NAME> (ex: kramerica.local,ADDomainCreds)
The domain suffix is typically your domain's DNS name (ex: kramerica.local), but if not, enter the IP address of your domain controller. Please let me know if that works for you.
Thanks,
Rich
Thanks for sending over the information. I have identified the issue, OT-472, and we plan to release a fix in Otter 2022.3 that is due out next Friday. I'll let you know if anything changes.
Thanks,
Rich
Can you share your OtterScript for that operation? Or does this happen when you try to add that OtterScript script as an operation?
Thanks,
Rich
We released a new version of our scripting extension, Scripting 2.0.1. Updating the extension or upgrading Otter to 2022.02 should fix the issue for you.
Thanks,
Rich
If you navigate to Administration -> Raft Repositories, then click "browse" to the right of your Git repository, does your raft load or do you get an error?
Also a couple of notes:
Please give these a try (including browsing in your git raft) and see if that works for you.
Thanks,
Rich
Hi @kichikawa_2913,
Can you show me what your extensions page looks like?
Also, when you start your container, if you watch the output, do you see any errors when loading the extensions?
Thanks,
Rich
Hi @fabian_7019,
Thanks for bringing this to our attention. I was able to recreate the error and we will get this fixed in the next version of Otter, 3.0.25, which is due out next Friday. This looks to be an issue only when there is a single raft repository set up in Otter. As a workaround, if you create a second raft repository under Administration -> Raft Repositories, you will then be able to create new folders in the default raft.
The fix is being tracked in ticket OT-461.
Thanks,
Rich
@gurdip-sira_1271 said in Help with Git raft in Otter:
Could I not just do a git commit to the repo and then use the scripts in Otter?
Yes, you can commit your scripts directly to Git by adding them to the Scripts folder, and you can modify them directly in your Git repository or use the editor directly in Otter. We also just added a new text editor based on Monaco (the same editor as VS Code) and a new visual editor for OtterScript, available as preview features in the latest version of Otter (3.0.24).
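For example, here is a minimal sketch of committing a script straight into a raft repository's Scripts folder (the repository URL and deploy.ps1 are hypothetical placeholders):
# Clone the raft repository, drop a script into its Scripts folder, and push
git clone https://git.example.com/otter-rafts.git
cd otter-rafts
New-Item -ItemType Directory -Force -Path Scripts | Out-Null   # ensure the folder exists
Copy-Item ..\deploy.ps1 -Destination Scripts\
git add Scripts/deploy.ps1
git commit -m "Add deploy.ps1 to the raft"
git push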
We don't have direct end-to-end steps on adding scripts directly in Git because that has not been the most typical way users have used this feature. Typically, a user adds the script via Otter and then edits it in Git.
The other way Git rafts are used is with Git branches: a raft is created for each branch, so editing and testing of scripts happen in one branch while the production scripts are stored in another. This can become tricky, though, when calling scripts from different rafts in OtterScript.
Hope this helps!
Thanks,
Rich
I think the safer option would be to upload your scripts on the "Scripts" page in Otter. Navigate to the "Scripts" page, click "Add Script", and then select "Upload Scripts & Assets". You can then select your Git raft and bulk upload all your script files. This way, they all get put in the correct folder automatically.
Thanks,
Rich
Just as a follow-up on the solution: this error came from a Git repository monitor. The Docker image for BuildMaster does not include git out of the box, so you will need to either install git in the running container or add a BuildMaster agent/SSH server and run the monitor there. The other issue was that the repository monitor was using a secure resource from a specific application, so the repository monitor needs to specify the application it uses as well.
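If you go the container route, here is a rough sketch (the container name buildmaster and an apt-based image are assumptions on my part):
# Install git inside the running container; note this will not survive the container being recreated
docker exec -it buildmaster sh -c "apt-get update && apt-get install -y git"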
Hi @luke_1024,
It looks like the error happens when trying to decompress your control file in the .deb package. Something that cargo-deb is doing compresses and attaches that differently than dpkg-deb does. I took a quick look through cargo-deb's docs, but couldn't find anything to specify the compression. Something to try would be using the --fast flag when running cargo deb.
Please let me know if --fast fixes it. If not, I will set aside some time next week to debug through these packages more.
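For reference, that would just be (run from your crate's root):
# per cargo-deb's docs, --fast trades compression ratio for build speed
cargo deb --fast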
Thanks,
Rich
Thanks for confirming that for me. How often do you see this issue? Is it always for the same application or does it happen on any application?
Thanks,
Rich
Do you have anything that could be blocking connections to my.inedo.com? I'm curious if something is preventing the connection and causing it to hang. Also, if you open up your browser's dev tools, do you see any errors in the console?
Thanks,
Rich
Can you please try to restart your container and then attempt to activate again? Sometimes I have seen that the key is cached and it does not clear correctly, so a simple restart fixes it.
Thanks,
Rich
Can you please tell me what version of BuildMaster you are running?
Thanks,
Rich
Hi @luke_1024,
Could you also try using gz compression instead of xz compression on your control and data archive? I don't know cargo-deb, but from what I see in their docs, it looks like you need to do this:
deb_contents.push(compress::gz(&control_archive, &control_base_path)?);
deb_contents.push(compress::gz(&data_archive, &data_base_path)?);
Thanks,
Rich
Hi @luke_1024,
Would you be able to share the .deb file created by cargo-deb and the one created by dpkg-deb -b? I think I see where the issue is, but that would allow me to confirm it. You can share these via email by sending them to support@inedo.com with a subject of [QA-816] Package File Examples.
Thanks,
Rich
Hi @luke_1024,
I just wanted to let you know that we are currently researching your issue and we should have more information for you tomorrow.
Thanks,
Rich
Hello @nuno-ildefonso_8876,
Can you please tell me what version of BuildMaster you are using and what version of the GitLab extension you have installed?
Thanks,
Rich
Hi @pariv_0352,
The "FeedCleanup" job will do two main things; clearing the Multipart Upload Temp files and running retention rules. For Docker feeds, it will also run the deletion of blobs, but only for Feed specific blobs. If you are using common blob storage (which is enabled by default in feeds created in ProGet 5.3+), then that is where the "DockerGarbageCollection" job comes into play. That one handles cleaning up/deleting the blobs that are shared across multiple feeds.
Hope this helps!
Thanks,
Rich
Can you please tell me what version of BuildMaster you are using and what version of the BuildMaster agent you have installed? Also, if you change the BuildMaster agent service on your remote server to Automatic (Delayed Start) instead of Automatic, do you see the same issue?
Thanks,
Rich
Hi @NUt ,
Thank you for bringing this to our attention. This bug, PG-2126, will be fixed in ProGet 6.0.12. Going forward, it will allow you to configure everything and even test it via the "Test User Directories" button, but it will only allow you to log in using the built-in user directory and the username/password login option when using ProGet Free.
Thanks,
Rich
I think I found the issue. I have fixed it as part of OT-458 and it will be released in Otter 3.0.22 on Friday, April 8th.
Thanks,
Rich
Thanks for sending it over. Please give me a bit of time to review it and I'll let you know what I find!
Thanks,
Rich
Would you be able to send us your Job Template JSON for us to review? You can get the JSON by:
You can send it to support@inedo.com with an email subject of [QA-805] Job Template, and please let me know when you send it.
Thanks,
Rich
Glad to hear it! Please let us know if you have any other questions.
Thanks,
Rich
Could you please try to add a new Template Variable and let me know if you can edit that? It will tell me whether I need to focus on the variable or the template itself for the null error.
Thanks,
Rich
Is this happening for all template variables in all job templates? If you add a new variable, does it work then?
Thanks,
Rich
Hi @mcascone,
Thanks for following up and letting us know this worked!
Thanks,
Rich
Hi @kichikawa_2913,
Can you please try clearing the NuGet cache on your computer and see if the problem persists? You should be able to do this by clicking the "Clear All NuGet Cache(s)" button shown in the screenshot you sent.
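If it's easier, the same thing can be done from the command line (assuming the .NET SDK is installed):
# Clears all local NuGet caches (http-cache, global-packages, temp, plugins-cache)
dotnet nuget locals all --clear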
Thanks,
Rich
I believe we have identified the issue, PG-2121, that is causing this. The fix will be included in ProGet 6.0.11, which is due out next week.
Thanks,
Rich
Hi @kichikawa_2913,
Could you please open another topic for the Selenium.WebDriver.ChromeDriver issue? I think that is unrelated to the authentication issue.
Thanks,
Rich
Hello,
Sorry for the delay on this. Would you be able to send me the JSON from:
Just change the domain name of your TFS server and change packagename_1 to be the name of your package.
Thanks,
Rich
Are you able to see the log from the error the first time the upgrade failed? That log would most likely show us the script with the error.
The other option is to download inedosql and run the command:
inedosql.exe errors --all --connection-string="Data Source=(local);Initial Catalog=ProGet;Integrated Security=False;User ID=ProGet;Password=xxx"
That will tell us which scripts failed and their errors.
Thanks,
Rich
It looks like this didn't get posted to the Support category and it was missed. Yes, ProGet can be moved to Azure. We actually have a guide on how to move ProGet to Azure. Please let us know if you have any other questions!
Thanks,
Rich
Is your Azure DevOps NuGet feed using the v2 or v3 API endpoint? If v3, can you send me the index.json for your feed? You can send this to support@inedo.com with the subject of "[QA-786] Azure DevOps Null Reference" to keep it private.
I first want to make sure that your on-premises version is not missing one of the endpoints that ProGet uses.
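In case it helps, here is a quick way to capture it with PowerShell (the feed URL below is a placeholder; substitute your own organization and feed names):
# Fetch the v3 service index and save it to a file
$uri = 'https://pkgs.dev.azure.com/yourorg/_packaging/yourfeed/nuget/v3/index.json'
Invoke-RestMethod -Uri $uri | ConvertTo-Json -Depth 10 | Out-File "C:\temp\index.json"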
Thanks,
Rich
Hi @janne-aho_4082,
Thank you for providing the extra details. When you click the Clear Cache option in the tasks, it clears the permissions cache, which means that calls to AD will need to be made again for the impersonated user and their AD groups. There is also a timeout, set in the Advanced Settings, for how long to cache users; the default is 60 minutes, but you may want to make sure that value was not changed. The cache is also cleared anytime permissions/restrictions are added or removed.
As for your testing environment: is it in the cloud? If so, is it in the same region as your Active Directory server? If not, AD performance can sometimes degrade. You also mentioned that the test environment seems to lose the cache about 20-30 seconds later; what are your application pool's settings for recycling and idle time-out? If those are too aggressive, they will cause the permissions cache to reset as well.
How long did it take previously in production to run these tests?
Thanks,
Rich
Hi @janne-aho_4082,
Thanks for providing us with the stack trace. It looks like we may have found the issue. Would you be able to install the InedoCore 1.31.1-RC.10 pre-release extension and see if that fixes your issue?
Thanks,
Rich
Hello @nmorissette_3673,
Can you please tell us what version of ProGet you are running?
Also, when you navigate to the package in the UI and get the 404 error, do you see any error messages show up in the diagnostics center?
Thanks,
Rich
Hi @cronventis,
I just wanted to let you know that I'm still researching this. Could you tell me what versions of Kubernetes and Rancher are installed?
Thanks,
Rich
Hi @cronventis,
I took a look through your tables and there is definitely something a bit odd going on with your Kubernetes output. Could you tell me what version of Kubernetes you are running?
Would it be possible to see the output from your Kubernetes API? Again, this is something you can send us via support@inedo.com with the subject of [QA-729] Kubernetes API. To get the image list, you can simply run this PowerShell against your Kubernetes API:
# Request the pod list (up to 999 pods) from the Kubernetes API and save it as JSON
$uri = [System.Uri]'http://localhost:8080/api/v1/pods?limit=999'
$response = Invoke-RestMethod -Method GET -Uri $uri.ToString()
$response | ConvertTo-Json | Out-File "C:\temp\response.json"
Just change http://localhost:8080 to your Kubernetes API host and port.
Thanks,
Rich
Hi @cronventis,
Thanks for sending this over to us! I can confirm we have received it and we are currently looking into this. I'll let you know when we have more information.
Thanks,
Rich
Hi @mcascone,
Sorry about that. I'll get that backported to ProGet 5.3 also. This will be tracked in ticket PG-2074.
Thanks,
Rich
Hi @Michael-s_5143,
Glad to hear that post helped you out. We have updated our documentation to include information about the ASPNETCORE_URLS environment variable.
Thanks,
Rich
Hi @pwe,
Thanks for updating us that you were able to figure this out. I have updated our documentation to include information about changing the internal port number.
Thanks,
Rich
Hi @cronventis,
Would you be able to send over your Docker tables from your ProGet database to us? They won't contain any of the actual layers of the images, just the digests and names so we can see where the disconnect is happening.
The easiest way to do this would be to:
1. Create a new schema in your ProGet database named pgexport
2. Run the following SQL to copy the tables into that schema:
SELECT * INTO [pgexport].[DockerBlobs] FROM [DockerBlobs]
SELECT * INTO [pgexport].[DockerImageLayers] FROM [DockerImageLayers]
SELECT * INTO [pgexport].[DockerImages] FROM [DockerImages]
SELECT * INTO [pgexport].[DockerRepositoryTags] FROM [DockerRepositoryTags]
SELECT * INTO [pgexport].[ContainerUsage] FROM [ContainerUsage]
3. Export the tables in the pgexport schema and send it to us
If the backup is pretty small, you can just email it to support@inedo.com with the subject of [QA-729] Database Export, or I can create a OneDrive share link you can upload to.
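If you'd rather script those copies, here is a rough PowerShell sketch (assumes the SqlServer module, a local SQL Server instance, a database named ProGet, and that the pgexport schema already exists):
# Copy each Docker-related table into the pgexport schema
Import-Module SqlServer
$tables = 'DockerBlobs','DockerImageLayers','DockerImages','DockerRepositoryTags','ContainerUsage'
foreach ($t in $tables) {
    Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'ProGet' -Query "SELECT * INTO [pgexport].[$t] FROM [$t]"
}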
Would you be able to do that?
Thanks,
Rich