Hi @brett-polivka,
Is only the health check failing? Are you still able to pull or search for images?
Also, do you see any errors in the Diagnostics center?
Thanks,
Rich
Hi @RobIII,
We received the email; please give us a little bit of time to review it, and we will get back to you soon!
Thanks,
Rich
Hi @paul-reeves_6112,
No problem! Always happy to help.
Thanks,
Rich
Hi @paul-reeves_6112,
That's ok. It went through our internal testing and everything looked good so we pushed it to production with the last release. Thanks for verifying this for us!
Thanks,
Rich
Hi @paul-reeves_6112,
Thanks! Always happy to help. Please let us know if you find anything else!
Thanks,
Rich
Hi @harald-somnes-hanssen_2204,
Thanks for sending this over to us. It is very helpful! We will definitely be discussing this further as a team!
Thanks,
Rich
Hi @rbenfield_1885,
It looks like you are using the integrated web server, and this error occurs when it attempts to register the port on Windows. Please make sure that the user running ProGet is an administrative user and that the port you are using for ProGet is not already in use.
Can you share the error that you are getting when you are trying to upgrade to ProGet 5.3?
Thanks,
Rich
Hi @rbenfield_1885,
Are you able to share the error you are seeing in the Event Viewer after upgrading to 5.2.32? Is that InedoHub error from when you upgraded to 5.2.32, or from when you tried to upgrade to 5.3.34?
Thanks,
Rich
Hi @paul-reeves_6112,
I think I have found the issue. I have created a pre-release extension, InedoCore 1.12.3-RC.1, that includes the fix. Could you please try to install this version and see if it fixes the issue for you?
You can follow our manual extension installation guide to either manually install the extension or to change BuildMaster to use our pre-release extension feed.
Thanks,
Rich
Hi @paul-reeves_6112,
Thanks for following up. We will take a look at this and see what is happening. Hang tight!
Thanks,
Rich
Hi @pumin_0299 and @jndornbach_8182,
This issue has been resolved, and the fix will be released later this week in ProGet 5.3.34.
Thanks,
Rich
Hi @maxim_mazurok,
I'm glad to hear that repulling the package fixed this. Do you have an anti-virus installed on your ProGet server? Is it possible to check that to see if the package was quarantined?
Thanks,
Rich
Hi @maxim_mazurok,
Is this a cached package from a connector, or did you upload this package directly? Has this version always been a problem, or is this new? I think the best option may be to reupload this package, or delete it and repull it from your connector (if you are using one).
The error you are seeing can also happen when the file is a 0-byte tgz file. My guess is that either the file failed to download/upload to ProGet or the zip file contents were quarantined by your local anti-virus.
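If you can get at the file on disk, a quick integrity check will tell you whether it is empty or corrupt. This is just a sketch; the demo archive below only illustrates the commands, and you would point the same two tests at your real package file.

```shell
# Create a tiny valid .tgz just to demonstrate the check, then verify it is
# non-empty and that tar can list its contents. Run the last line against
# your real package file instead of demo.tgz.
echo "hello" > demo.txt
tar -czf demo.tgz demo.txt
test -s demo.tgz && tar -tzf demo.tgz >/dev/null && echo "archive OK"
```

If the file is 0 bytes, `test -s` fails; if the contents were stripped or corrupted, `tar -tzf` fails.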
Thanks,
Rich
Hi @paul-reeves_6112,
Thanks for taking a look. I corrected the color and improved the spacing in BuildMaster 7.0.7.
Thanks,
Rich
Hi @paul-reeves_6112,
Good catch! We will be fixing this as part of BM-3725 and it will release in BuildMaster 7.0.7.
Thanks,
Rich
Hi @internalit_7155,
After updating your license in ProGet, did you restart the web server (or application pool if using IIS)? If not, could you please try to restart your web server (or application pool if using IIS)?
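If ProGet is hosted in IIS, recycling the application pool from an elevated command prompt is a quick way to do this. This is just a sketch; "ProGet" is an assumed pool name, so substitute whatever your pool is actually called.

```shell
# Recycle the ProGet application pool in IIS (run from an elevated
# Command Prompt). "ProGet" is an assumed pool name.
%windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"ProGet"
```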
Thanks,
Rich
Hi @liuhan907_8630,
I don't believe the health check is the reason for the error. Is it only gcr.io that is having this issue? Does this happen on all images for the gcr? Are you able to pull the image directly from gcr?
Also, has this image been pulled before? If so, there could be a bad cached image. You can delete the cached image by hovering over the button in the upper right, clicking Delete Cached Image, and trying again.
Thanks,
Rich
Hi @internalit_7155,
I know you have attempted the upgrade already, but in the future you can go to https://my.inedo.com/downloads and click the Upgrade Guidance & Change Notes button. That will allow you to enter the versions you are upgrading from and to, and view the upgrade path. Here is the result for your upgrade, ProGet 3.7.6 to 5.3.33: first upgrade to ProGet 5.2.32, then to 5.3.33.
Looking through your error, I have found this to be a known issue when upgrading a very old version of the database to our new tools directly. We actually recently fixed this bug in inedosql.
The best way to resolve this is to restore your database back to the 3.7.6 version and use the traditional installer to upgrade to ProGet 5.2.32 first. This will also allow you to log in and convert your NuGet feed to the new format. Once you upgrade to that version and convert your NuGet feed, then upgrade to ProGet 5.3.33. For ProGet 5.3.33, I would recommend using Inedo Hub to perform the upgrade. We have deprecated the traditional installer and it will no longer be generated in ProGet 6+.
Hope this helps!
Thanks,
Rich
Hi @paul-reeves_6112,
Thanks for bringing this up to us. I have fixed the styling, and it will be updated in BuildMaster 7.0.6, which is expected to be released later this week.
Thanks,
Rich
Hi @paul-reeves_6112,
Thanks for checking this for me. I was able to get this resolved and it will be released later this week in BuildMaster 7.0.6.
Thanks,
Rich
Hi @paul-reeves_6112,
Thanks for bringing this up to us. Let me see what I can do about that. Quick question though, when you shrink the window, is it still 4 and 5 that have the issue or does it start shifting the problem down?
Thanks,
Rich
Hello @paul-reeves_6112,
I just pushed it to RC now. Can you please give it another try?
Thanks,
Rich
Hi @kichikawa_2913,
Thanks for the information. I have updated our Docker Troubleshooting Guide to include details about root-less containers. We are also discussing internally if we want to change the default port to be > 1024 going forward.
Thanks,
Rich
Hi @kichikawa_2913,
That's great to hear! Just to make sure, you left in the --expose=8080, correct?
For the LDAPS thing, that makes complete sense. The SSL certs are at a root level and there is nothing we can do to change that. But I will note that there is an undocumented feature I added to bypass certificate validation for LDAPS. If you navigate to Administration -> Change User Directory -> Advanced -> Active Directory (NEW) and then select Use LDAPS and Bypass LDAPS Certificate Validation, that will allow you to use LDAPS while bypassing any certificate errors in the process.
Thanks,
Rich
Hi @kichikawa_2913,
I have an idea on how to accomplish this. It looks like Docker allows you to expose ports in the run command. Here is what I'm thinking should work. The first thing to do is to map a volume to /usr/share/Inedo/SharedConfig. In my example, I'll map it to SharedConfig.
So here would be the steps to try:
echo '<?xml version="1.0" encoding="utf-8"?><InedoAppConfig><ConnectionString Type="SqlServer">'"$SQL_CONNECTION_STRING"'</ConnectionString><WebServer Enabled="true" Urls="http://*:8080/"/></InedoAppConfig>' > SharedConfig/ProGet.config
podman run -d --userns=keep-id -v proget-packages:/var/proget/packages -v SharedConfig:/usr/share/Inedo/SharedConfig -v /etc/pki/ca-trust/source/anchors:/usr/local/share/ca-certificates:ro --expose=8080 -p 8080:8080 --name=proget -e ASPNETCORE_URLS='http://+:8080' -e SQL_CONNECTION_STRING='Server=SERVERNAME;Database=ProGet;User ID=USERNAME;Password=PASSWORD' -e TZ='America/New_York' -i -t proget.inedo.com/productimages/inedo/proget:5.3.32 /bin/bash
Can you please give that a try?
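As a quick sanity check before starting the container, you can generate and inspect the config file locally. This is just a sketch; the connection string here is a placeholder, not a real server.

```shell
# Generate the ProGet.config XML locally and confirm it contains the expected
# web-server URL before mounting it into the container. The connection string
# is a placeholder.
mkdir -p SharedConfig
SQL_CONNECTION_STRING='Server=SERVERNAME;Database=ProGet;User ID=USERNAME;Password=PASSWORD'
echo '<?xml version="1.0" encoding="utf-8"?><InedoAppConfig><ConnectionString Type="SqlServer">'"$SQL_CONNECTION_STRING"'</ConnectionString><WebServer Enabled="true" Urls="http://*:8080/"/></InedoAppConfig>' > SharedConfig/ProGet.config
grep -o 'Urls="http://\*:8080/"' SharedConfig/ProGet.config
```

If the `grep` prints the `Urls` attribute, the file is well-formed enough for the container to pick up the 8080 binding.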
Thanks,
Rich
Hi @kichikawa_2913,
Thanks for following up with that information. I was doing a little more research on podman this morning, and it looks like this is a problem with the internal port. I have actually found quite a few people who claim that port 80 inside a container will not work in a root-less image, but looking at the podman documentation, their example uses port 80 inside the container. Let me see if there is an easy way to change that port binding on Linux and I'll get back to you shortly.
While I'm checking on that, can you try something for me? It looks like podman also needs a protocol to bind the port to. Could you try the following and let me know if that works?
podman run -d --userns=keep-id -v proget-packages:/var/proget/packages -v /etc/pki/ca-trust/source/anchors:/usr/local/share/ca-certificates:ro -p 8080:80/tcp --name=proget -e SQL_CONNECTION_STRING='Server=SERVERNAME;Database=ProGet;User ID=USERNAME;Password=PASSWORD' -e TZ='America/New_York' -i -t proget.inedo.com/productimages/inedo/proget:5.3.32 /bin/bash
Thanks,
Rich
Hi @kichikawa_2913,
I'm not very familiar with Podman root-less containers. Does it expose all ports that are internal to the container externally?
I think the issue may be the changing of the internal port, which we have configured to be port 80. I do not think ASPNETCORE_URLS will actually change the port the site runs on internally, because we host the Kestrel server within our service and tell it to use port 80. Have you tried running the command like this:
podman run -d --userns=keep-id -v proget-packages:/var/proget/packages -v /etc/pki/ca-trust/source/anchors:/usr/local/share/ca-certificates:ro -p 8080:80 --name=proget -e SQL_CONNECTION_STRING='Server=SERVERNAME;Database=ProGet;User ID=USERNAME;Password=PASSWORD' -e TZ='America/New_York' -i -t proget.inedo.com/productimages/inedo/proget:5.3.32 /bin/bash
After a bit of research, it looks like you will get that error when you are trying to bind to a host port that is already in use. So make sure nothing else is using port 8080 on the host.
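One quick way to check for a conflict on the host (a sketch; `ss` ships with iproute2 on most modern Linux distributions):

```shell
# List any listener already bound to host port 8080; if nothing is found,
# the port is free for the container mapping.
ss -tln | grep ':8080' || echo "port 8080 is free"
```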
The messenger endpoint URL is a URL that is only used internally in the container. You should not run into any conflicts with the default port that is used since it is not exposed externally. If you want to test using a different port, you can connect to your SQL database and run the following stored procedure. Make sure to update to the port you want to try.
EXEC [Configuration_SetValue] 'Service.MessengerEndpoint', 'tcp://127.0.0.1:6001'
Please let me know what you find!
Thanks,
Rich
Unfortunately, npm does not include any parameters you can pass to bypass the install cache. You can try deleting %appdata%\npm-cache before each run and see if that helps. I have also seen this caused by npm verifying a package's hash against the one stored in package-lock.json, but that would only break if an anti-virus or something deleted a file out of the zip.
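A cross-platform way to clear the cache between runs, rather than deleting the directory by hand (a sketch):

```shell
# Clear npm's local cache, then verify its state. This avoids hard-coding the
# cache path, which differs between Windows and Linux/macOS.
npm cache clean --force
npm cache verify
```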
Have you tried upgrading your NPM client recently? How long has this issue existed?
Thanks,
Rich
Thanks! I received the log and took a look. The only error I see before the EPERM error is a warning saying warn tar ENOENT: no such file or directory, open for the core-js module. What I think is happening is that the node_modules directory is becoming corrupted, which is a pretty common occurrence. Typically you just run rm -rf node_modules && npm install and that fixes it.
As for the EPERM error, this really looks like something is locking the files. Do you have any sort of anti-virus or anything watching that directory? Can you try ignoring that directory and testing again to see if you get the error still? Does this happen every time or just sporadically?
I know you mentioned that this didn't happen when switching to npmjs.org, but this could also be an issue with the npm cache and how it links to the installed files. When switching repositories, npm will create a different cache folder and link from a different location. I think it happens to just be a coincidence and you will most likely hit a similar issue after using npmjs.org for a bit.
Thanks,
Rich
Typically, when you see an EPERM error, it is caused by the OS. You can find quite a bit about it on Google and Stack Overflow; users most often hit it on Windows machines with indexed drives. Although I want to believe that the npm client is your issue, I have a feeling that something else is happening first and the EPERM error occurs during cleanup of the file, especially since you are saying that npmjs.org does not have the problem. I noticed at the end of the error it says "A complete log of the run exists at"; would you be able to send that log over to us?
If you don't feel comfortable attaching it to this post, you can send it to support@inedo.com and put [Support QA-609] in the subject. Just please let me know that you emailed it so I can keep an eye out for it.
Thanks,
Rich
Hi @hwittenborn,
Could you please share a screenshot of what you are seeing? When I click on the package version, I see a Delete Package button when I hover over the Download Package multi-button. See below:
Thanks,
Rich
Hi @Fred and @hwittenborn,
You can actually configure Docker to use insecure HTTP registries. As it states in our documentation, you can register a host and port as an insecure registry, which tells your Docker client to use HTTP instead of HTTPS. A good way to rule out the ProGet container would be to configure your Docker daemon to use insecure registries pointing to the HTTP port of your ProGet container and try to push that way. For example, if your ProGet container is serving HTTP on port 80 at host proget.domain.local, add this to your Docker daemon config (or the settings in Docker Desktop on Windows and Mac):
{
"registry-mirrors": [],
"insecure-registries": [
"proget.domain.local:80"
],
"debug": false,
"experimental": false
}
Then, if your repository name is proget.domain.local:80/my/imagename, your push command would look like:
docker push proget.domain.local:80/my/imagename:tagname
That will push the image over HTTP instead of HTTPS.
Is this only an issue in Visual Studio? Have you tried to push your image using the command line?
Thanks,
Rich
Hi @hwittenborn,
Thanks for the extra information. It is very much appreciated!
Are you saying the file fails to get added if you save it under the filename {feed-name}.pub?
I actually used .gpg as the file extension, but honestly, your solution is much better. The file extension doesn't actually matter that much; once the key is added using apt-key, it just extracts the contents and stores them in the keyring. You can see this by running sudo apt-key list.
I was more stating that, in order for it to work for me on Ubuntu 20.04, I had to have everything named as the lowercase feed name. Although, I'm not sure if the lowercase part actually matters. For example, for a feed named defaultdebian, I ran this:
wget -qO - http://proget.localhost/debian-feeds/defaultdebian.pub | sudo apt-key add -
echo "deb http://proget.localhost/ defaultdebian main" | sudo tee /etc/apt/sources.list.d/defaultdebian.list
sudo apt update
As a test, I changed the tee target from /etc/apt/sources.list.d/defaultdebian.list to /etc/apt/sources.list.d/proget-defaultdebian.list, and that wouldn't work for me; it caused a bunch of warnings when I ran sudo apt update. For some reason this only mattered in newer versions of apt.
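After `sudo apt update` succeeds, one quick way to confirm apt is actually reading from the feed (a sketch; the hostname comes from the example above and should be replaced with your own server):

```shell
# Show which sources apt is using and filter for the ProGet server; if the
# feed registered correctly, its URL appears in the policy output.
apt-cache policy | grep proget.localhost
```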
All in all, I will update the documentation to correct your command (adding the missing '-' ) and to include the updates to the naming. Hopefully, this will be clearer going forward for other users.
Thanks for all of your help!
Thanks,
Rich
Hi @hwittenborn,
I recently ran into similar problems with the latest version of apt on Ubuntu; this didn't seem to be an issue in older versions of apt. I determined that naming is very important when using Debian feeds, and this is how I was able to get it to work:
wget -O "{feed-name}.gpg" http://{proget-server}/debian-feeds/{feed-name}.pub && sudo apt-key add "{feed-name}.gpg"
echo "deb http://{proget-server}/ {feed-name} {component-name}" | sudo tee /etc/apt/sources.list.d/{feed-name}.list
sudo apt update
Basically, I had to download the pub file as {feed-name}.gpg and then add that key to apt. Then, when registering it in sources.list.d, I had to name the file {feed-name}.list. In my experience, if they are not all named the same way, running sudo apt update produces a bunch of warnings and the packages never actually show up as available to install.
I'm not sure if this is the same thing you were seeing, but I was waiting on a couple of customers to confirm that running the commands this way it all worked before updating the documentation.
My guess is that when you used wget -qO - http://{proget-server}/debian-feeds/{feed-name}.pub | sudo apt-key add -, the key was added under the name {feed-name}.gpg, or whatever extension apt converts it to. When you ran echo "deb http://{proget-server}/ {feed-name} {component-name}" | sudo tee /etc/apt/sources.list.d/{proget-deb}.list, what did you use for {proget-deb}.list? Was it your feed name?
Thanks,
Rich
Hi @Fred,
The message I'm pulling out of this error is The plain HTTP request was sent to HTTPS port. This indicates that either the Docker client is sending a non-SSL request to an SSL port (like http://proget.com:443, where 443 is bound to SSL) or you have a bad forward of the host and port in your NGINX config. I recently did some testing on this, and this is the NGINX configuration that worked for me: https://docs.inedo.com/docs/https-support-on-linux
Thanks,
Rich
Hi @hwittenborn,
Thanks for the additional information. We talked about this in our team meeting this week and we decided we are going to clean up when restarts are needed and we are hoping to give better messaging to the user when a container needs to be restarted. Some of the restart requirements are leftover from previous requirements that have since changed. So hopefully we can make this process a bit easier in the future.
Thanks,
Rich
Hi @hwittenborn,
I took a deeper look into this and it looks to work as expected. When it comes to Docker containers, we can't automatically restart the website like we can on Windows. So anytime that a setting is modified in the Advanced Settings or an extension is installed or removed, the docker container for ProGet will need to be restarted. In older versions of the image, we attempted to display a message to restart the container, but that implementation did not always work as designed so it was removed. I have forwarded this over to our products team to review any possible solutions that we could make in ProGet 6 to make this more obvious. I'm also working on ways to update our documentation to make this a bit clearer.
Thanks,
Rich
Hi @hwittenborn,
Let me do some testing on my side and I'll let you know what I find. What OS is your Docker engine running on?
Thanks,
Rich
Hi @hwittenborn,
Am I correct to say that the 502 errors are not with the NGINX proxy in front of it as well? Is this only when you are uninstalling the extensions? If not, what other settings are you changing when you see this issue?
Thanks,
Rich
Hi @Stephen-Schaff,
In newer versions of ProGet, we improved performance by caching users' privileges. This was especially necessary when using Active Directory based user directories. It can cause issues similar to this one, where it takes some time before a privilege update is picked up; this is especially noticeable in ProGet HA clusters. The caching timeout is configurable in Administration -> Advanced Settings by altering the Web.PrivilegeCacheExpiration value. By default, we set it to 30 minutes. There are areas that should refresh the cache on save, but those only work in single instances of ProGet, and I'm guessing the refresh was not triggered in this case.
Hope this helps!
Thanks,
Rich
Hi @Fred,
I think I may have found the issue. Can you include proxy_set_header Host $http_host; in your location block and see if that fixes your issue?
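For reference, a minimal location block with that header might look like the following. This is only a sketch; the upstream address and the extra forwarding headers are assumptions, not taken from your config.

```nginx
# Proxy ProGet through NGINX; 127.0.0.1:8080 is an assumed upstream address.
location / {
    proxy_pass http://127.0.0.1:8080;
    # Preserve the original host/port so ProGet builds correct URLs.
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```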
Thanks,
Rich
Hi @Fred,
It definitely can be settings in your Nginx settings, but nothing is jumping out at me. I am by no means an Nginx expert, but everything looks normal. I know we have quite a few users using Nginx with ProGet, so we know this is a working combination.
Just to get the easy Docker nuances out of the way first, I just want to verify that your certificate is not a self-signed certificate or generated by an internal certificate authority (there are things you need to set in the Docker client to get that to work). Also are you able to push images to ProGet using the command line? If not, are you able to send over the output of the CLI?
One other thing to try is to set the Web.BaseUrl in Administration -> Advanced Settings to your HTTPS URL (ex: https://proget.xxx.com).
Thanks,
Rich
Hi @kichikawa_2913,
Sorry, this is actually expected. When we release our products, they include the extensions that were released at the time of the product release. In this case, I released this version of InedoCore after we did the product release.
If you look at our documentation for upgrading your Docker image, the command includes --volumes-from=proget-old. This automatically migrates the volumes created by the previous version of ProGet, which will keep the updated extension (as long as the previously installed extension is newer than the included extension version).
Also, in Administration -> Advanced Settings, you can change Extensions.ExtensionsPath to a mapped path; that accomplishes the same thing (if the version in that directory is newer than the included one) and gives you easier access to the extension files.
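As a sketch of the mapped-path approach: the host path and in-container path below are illustrative assumptions, and should match whatever value you set for Extensions.ExtensionsPath.

```shell
# Mount a host directory for extensions into the container, then point
# Extensions.ExtensionsPath at the in-container path. Both paths are
# placeholders for illustration only.
docker run -d --name=proget \
  -v /opt/proget/extensions:/var/proget/extensions \
  -v proget-packages:/var/proget/packages \
  -e SQL_CONNECTION_STRING='Server=SERVERNAME;Database=ProGet;User ID=USERNAME;Password=PASSWORD' \
  proget.inedo.com/productimages/inedo/proget:5.3.32
```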
Thanks,
Rich
Hi @kichikawa_2913,
I'm sorry for not including that in my previous comment. You are correct 1.10.7 is the new version of the InedoCore extension that includes these fixes for LDAP and LDAPS on Docker.
Thanks,
Rich
Hi @kichikawa_2913,
I just released this extension to production now. Please let me know if you don't see an official release.
Thanks,
Rich
Hi @kichikawa_2913,
Did those users still experience the long initial login? Or did that go away?
Thanks,
Rich
Hi @kichikawa_2913,
You should just be able to remove the custom port, uncheck LDAPS, and then restart your container; that should remove LDAPS from your AD configuration.
Thanks,
Rich
Hi @kichikawa_2913,
Thanks for the information, and I'm glad you can log in now. Let me dig in and see what I can find on these Anti-CSRF errors. Normally they happen because a reverse proxy is not properly forwarding headers. Do you have a reverse proxy (like Nginx or Apache) sitting in front of this container? Also, once users are logged in, does ProGet seem to run fine, or does each page request take a while to load?
Would you be willing to test this without LDAPS so we can see if it is an LDAPS issue or not?
Thanks,
Rich
Hi @kichikawa_2913,
I'm going to attempt to recreate your error on my system and see if I can find the issue. LDAPS on Docker has not been a popular option with our customers so far due to the complexity of managing the AD certificates. I really appreciate your patience in working through this with us.
While I try to recreate this, could you try using an incognito browser window and see if you are able to load the site and log in? Can you also try restarting your container again? And can you please verify the LDAPS connection is still working with the openssl command again?
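For reference, the usual openssl check against an LDAPS endpoint looks something like this (the host name is a placeholder for your domain controller):

```shell
# Test the TLS handshake against the LDAPS port (636) on a domain controller
# and print the certificate chain. Replace dc.example.local with your DC.
openssl s_client -connect dc.example.local:636 -showcerts </dev/null
```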
Thanks,
Rich