Hi @aneequaye_1868,
Would you be able to run an ls -l to check your folder permissions? I have seen some recent issues where Linux reports unusual errors when a folder doesn't have the proper read/write/delete permissions.
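For example, assuming your packages live under /var/proget/packages (substitute your actual storage path):
[~]$ ls -l /var/proget/packages
The first column of each row shows the read/write/execute permissions for the owner, group, and other users, and the third and fourth columns show which user and group own the file.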
Thanks,
Rich
Hi @msimkin_1572,
What feed types are you using? Some of our feed types have a built-in process to verify this for you.
Thanks,
Rich
Thanks for clarifying that for me. The issue did exist in 5.2.32, but the scenario to recreate it was different. Now that I understand that the blob is not in the database (I apologize for missing that before), I don't think the clear-cache issue caused this.
Just to verify, when you searched for the blob digest in the database, did you limit the search to that FeedId, or did you search for any blobs with that digest? Could you send me the SQL query you used?
Also, can you check whether this feed is configured to use shared storage? You can find that by navigating to the Manage Feed page, selecting the Storage & Retention tab, then clicking the Configure link to the right of Blob Storage.
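For reference, the shape of query I have in mind is something like the following; the table and column names here are illustrative placeholders only, not the actual ProGet schema:
[~]$ sqlcmd -d ProGet -Q "SELECT * FROM DockerBlobs WHERE Blob_Digest = '2622b3cbec4c3f908fde9a413e48eca0145887f5c0719a4384d2e862978270b0' AND Feed_Id = 42"
A digest-only search (without the FeedId filter) could match a blob that belongs to a different feed.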
Thanks,
Rich
I'm sorry for the confusion. If the clear cache was run in 5.3.12 or lower, it may have removed the shared blob from your storage location but left the link in the database. Upgrading to 5.3.13 removes the possibility of deleting a blob that is still in use from your storage location. Can you verify whether 2622b3cbec4c3f908fde9a413e48eca0145887f5c0719a4384d2e862978270b0 exists in your storage location? You can also check by going to an image that has that blob layer and looking at the Layers tab. If it has a red exclamation mark next to the layer, then it was most likely removed from the storage location.
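A quick way to check from the ProGet server, assuming a Linux host (substitute your actual Storage.DockerBlobStorageLibrary path):
[~]$ ls /var/proget/blobs | grep 2622b3cb
If nothing comes back, the blob file is gone from disk even though the database still references it.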
Thanks,
Rich
Hi @viceice,
That error is safe to ignore. It is a known bug, and we are looking to fix it in an upcoming version of ProGet. The ticket tracking the fix for the log message is PG-1841.
Long story short, the ProGet service correctly detected that the product wasn't activated, and then logged that message. But it was doing it every time it accessed license information, which is on every connector health check, replication run, etc.
Activation happens automatically as soon as someone visits the Web application, and re-activation is required after upgrading certain versions.
Thanks,
Rich
I think I see the issue here. There was a bug, PG-1832, in the clear Docker connector cache feature that was patched in ProGet 5.3.13. The clear cache would fail when a blob was shared between a cached image and a local image. The side effect was that it would delete the blob from the storage location but fail to remove the reference from the database. This causes our API to think the blob exists, but then fail to find it on disk.
This is why a new feed works: you are pushing a new layer that doesn't already exist in the feed. If you copy the blob named 2622b3cbec4c3f908fde9a413e48eca0145887f5c0719a4384d2e862978270b0 from the new feed's storage location to the old feed's storage location, then the push will most likely work. You can find the storage location by checking Storage.PackagesRootPath, Storage.DockerRepositoryLibrary, and Storage.DockerBlobStorageLibrary in Administration -> Advanced Settings. You may also need to check the Manage Feed settings for feed-level storage, or to see if your registry is using Shared Storage.
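As a sketch, with placeholder paths (use the values from your Advanced Settings or Manage Feed page instead):
[~]$ cp /var/proget/feeds/newfeed/blobs/2622b3cbec4c3f908fde9a413e48eca0145887f5c0719a4384d2e862978270b0 /var/proget/feeds/oldfeed/blobs/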
Thanks,
Rich
The fix for the docker login issue is scheduled to be released in ProGet 5.3.14 on October 30, 2020. I will let you know if anything changes.
Thanks,
Rich
Are you using any docker connectors on this feed? If so, have you cleared the cache on these recently? Also, if you create a new feed, then push the image, does that error happen?
Thanks,
Rich
Would you be able to switch to the progetmono image and see if you still have this issue? I would like to rule out .NET Core as a cause.
Thanks,
Rich
Thank you for all the extra information. I believe I have found the culprit, but I will need to work with the team internally to determine the proper fix. It looks like the call to /v2/ that tells Docker it needs to authenticate is not returning the proper headers. This is a .NET Core-only issue.
The reason that Docker is the only affected feed type comes down to how the Docker client/API works. The client walks through the API flow looking for very specific errors and headers from the server; that is how it knows what to do next. In the case of auth, it first hits /v2/ and looks for a 401 Unauthorized with specific headers telling it to redirect to /v2/_auth and to pass an Authorization header. That is not happening. I will update you when we have scheduled this for release.
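To illustrate, here is roughly what that first handshake should look like against a working registry (the host name is a placeholder):
[~]$ curl -i http://proget.example.com/v2/
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="http://proget.example.com/v2/_auth",service="proget.example.com"
In the affected .NET Core builds, the 401 comes back without the WWW-Authenticate header, so the Docker client never knows where to authenticate.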
Thanks,
Rich
Currently, the only way to transfer all docker images automatically between feeds is by using feed replication, which requires an enterprise license.
One manual option is to promote the image from your old feed to your new feed. You would need to do that on each image manually though.
Is your ProGet instance installed on Windows, or are you using a Docker image-based version of ProGet?
Thanks,
Rich
Just to verify, you are running the command line as an administrator, correct? Can you also verify that the user you are trying to run the service as has the correct password and is not locked out on the domain?
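If it is a domain account, a quick way to check the lockout state from any domain-joined machine (substitute the actual service account name):
net user SERVICEACCOUNT /domain
Look at the "Account active" line in the output; it will read Locked if the account is locked out.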
Thanks,
Rich
Does this happen for all images or just the one image? Do you see any errors in the console from docker?
Thanks,
Rich
Is this the error from ProGet or from the Docker client? Do you see any other errors in the ProGet Diagnostic Center when you try to push an image?
Thanks,
Rich
Hi @viceice,
Thanks for confirming that for me. I ran into a few issues on my test server with SQL 2014 and TLS 1.2 support, so you may want to make sure that TLS 1.2 is enabled on that VM. SQL Server 2014 and Windows Server 2012 R2 both had some unique issues with the TLS 1.2 conversion (both needed updates before it could be enabled). Although this did not cause a hang for me, it did cause my authentication to fail, so it is worth checking.
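If you want to check the Windows side, the standard SCHANNEL registry keys control the TLS 1.2 state; note that these keys may not exist at all if the OS defaults are in use:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v Enabled
reg query "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v Enabled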
Thanks,
Rich
Hi Phillipe,
I created an issue on GitHub, #8, to track adding this to the extension. I'll reply back once a CI version of the extension is released.
Thanks,
Rich
Hi @viceice,
I'm having trouble recreating this issue. I tested by setting up a SQL Server 2014 instance with updates installed to bring it to version 12.0.6372.1. I have tried both the proget and the progetmono images at version 5.3.12. Both of them boot up and run all the scripts completely fine. I have even tested using your exact docker command, changing only the SQL connection string. So far I cannot get it to hang. Is there anything else unique about your network configuration? Do you have just a standard install of SQL Server? Is there a proxy or VPN that requests are routed through to connect to your SQL Server?
Thanks,
Rich
Hi @p-bruch_5023,
Thanks for bringing this to our attention. I have created a ticket, PG-1838, to track a fix for this. It will be released this Friday as part of ProGet 5.3.13.
Thanks,
Rich
The .NET Core version is definitely more efficient than the mono version. One of the biggest issues with mono is how it manages web requests: mono reimplemented its web client and connection pool from scratch due to the complexity and internal dependencies of the .NET Framework, which is probably part of the reason you see the mono version tax the server a bit. As for NuGet querying all 40 packages in ProGet, we don't have much control over that. NuGet queries all sources in the list until it finds a match; it checks custom sources first and falls back to NuGet.org last. There is an interesting ticket on NuGet's GitHub page discussing source priority (https://github.com/NuGet/Home/issues/3676).
Thanks,
Rich
I was finally able to recreate the issue. We created a ticket, ILIB-98, to track it. The fix will be released on Friday with ProGet 5.3.13.
Thanks,
Rich
You can reduce the logging by adding some environment variables to your docker start command. The variables you will need to add are:
-e 'Logging__LogLevel__Default=Warning'
-e 'Logging__LogLevel__Microsoft=Warning'
-e 'Logging__LogLevel__Microsoft.Hosting.Lifetime=Warning'
You can also set them to Error if you want even less logging.
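Putting it together, a start command would look something like this (the port, name, and tag here are placeholders; keep whatever you use today and just add the -e flags):
[~]$ docker run -d --name proget -p 8624:80 \
  -e 'Logging__LogLevel__Default=Warning' \
  -e 'Logging__LogLevel__Microsoft=Warning' \
  -e 'Logging__LogLevel__Microsoft.Hosting.Lifetime=Warning' \
  proget.inedo.com/productimages/inedo/proget:latest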
Thanks,
Rich
Have you tried uploading a new NuGet package and then downloading it? I would just like to rule out the directory and file renaming. I'm running the core version locally and I'm not currently able to recreate the authentication issues in my tests.
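For example, with a placeholder feed URL and key:
[~]$ nuget push MyPackage.1.0.0.nupkg -Source http://proget.local/nuget/internal/ -ApiKey {APIKEY}
[~]$ nuget install MyPackage -Source http://proget.local/nuget/internal/
If the push succeeds but the install prompts for credentials or returns a 401, that points at authentication rather than the renamed files.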
Thanks,
Rich
Hi @viceice,
Nothing is sticking out to me as an issue. Your hardware looks fine (without knowing everything else running on the server/kube pods).
It looks like it is hanging when trying to load the current database schema version. The code for this component does not differ between mono and .NET Core. Is there anything different between your progetmono and progetcore start scripts in Kubernetes? I would put some emphasis on the networking piece in Kubernetes.
Thanks,
Rich
If you upload a new package, can you then download it? That would be the best way to verify that the casing change did not affect authentication.
Also, what user directory are you using? Are you using the built-in user directory, Active Directory, or LDAP Legacy?
Thanks,
Rich
Hi @jim-pg_1173,
I was able to identify the issue and created a ticket to track the fix, PG-1831; it will be released in the next version of ProGet. ProGet 5.3.13 will be released this Friday, October 9th, 2020. I will reply back if anything changes.
Thanks,
Rich
Hi @jim-pg_1173,
Thanks for posting this here. The GitHub ticket is very helpful for this. Let me do some investigating and I'll get back to you about this.
Thanks,
Rich
I tested the scenario where I set up anonymous with view only and a user with download only, and it successfully authenticated and downloaded the package for me. Would you be able to share the custom tasks you are using for the anonymous user and the user that has download only?
Here is what I set up:
View Only
Download Only
If you give the user View & Download rights, does it work?
Thanks,
Rich
Hi @viceice,
There shouldn't be any issue with your SQL Server version, but I will do some testing on that version to see if I can recreate the issue. SQL Server 2014 is technically deprecated by Microsoft, but we are not using any functionality that wouldn't work with it. I'll let you know if I find anything on that front.
As for the 100% CPU usage, I'm going to reach out to my colleagues internally to see if there are any extra debug settings that can be enabled, but I do not think there are. Is the Docker host throttling what hardware the container can use? If so, what are the restrictions? Also, are you able to share the hardware specs of your Docker host?
Thanks,
Rich
Hi @viceice,
I'm having trouble recreating this issue. Could you try using the IP address as the SQL Server data source instead of db02? There isn't any difference in the startup between the .NET Core version and the mono version (outside of the base image, that is). If you let it run, do you get a timeout or a "cannot find server" error?
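For example, assuming you pass the connection string to the container via the SQL_CONNECTION_STRING environment variable, swapping db02 for the IP would look like this (the IP and credentials here are placeholders):
-e SQL_CONNECTION_STRING='Data Source=192.168.1.50; Initial Catalog=ProGet; User ID=ProGetUser; Password={PASSWORD}'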
Thanks,
Rich
Hi @viceice,
Can you tell me how many feeds you have and what types those feeds are? Also, does your ProGet database user have db_owner on the ProGet database or just the ProGetUser_Role?
Thanks,
Rich
Here is a knowledge base article for migrating feeds. Although the majority of it is probably not helpful, the very last section talks about how to migrate a Maven feed. You could probably use that to migrate any Maven feeds to blob storage. It may be easier for Maven to rename the feed and set an alternate name for backwards compatibility. Pulled from our Feed docs:
Renaming a feed will also change its API endpoint URL. As of ProGet 5.2.19, you can create "Alternate Names" for a feed by going to Manage Feed > Rename, then clicking the Set Alternate Names link in the warning dialog. Alternate feed names essentially provide multiple endpoint URLs for a feed and are useful for keeping backwards compatibility with old names when renaming feeds.
That should cover all the feeds except Docker. Do you have any Docker feeds configured? Are there any other feeds you are having trouble migrating?
Thanks,
Rich
Please see our documentation for cloud storage. There is a subsection for migrating a feed to cloud storage.
Thanks,
Rich
Thanks for testing this out! I will get it released as a production version today!
Thanks,
Rich
Yes, that is what I'm referring to. Just to confirm, that is the same URL that your connectors use?
Thanks,
Rich
Hi @mikhael_3947,
I have updated our Docker documentation to include this information about using a proxy with ProGet. I have also included more information about insecure registries and using self-signed certificates with Docker registries.
Thanks,
Rich
Hi Mike,
For the connector issue, can you try setting the Web.BaseUrl in your Advanced Settings to your DNS name? Please let me know if that works.
For the errors logging the license violation, I'm going to need to dig in a little more on that. I'll let you know when I have more information.
Thanks,
Rich
I did some searching around, and it seems a number of users have the same problem on Windows 7. It looks like the FTP server is not using the system's time zone, and I could not find any way to set it. Unfortunately, FTP does not give us a way to determine which time zone to use dynamically.
I have created a new CI build, 1.0.1-CI.4, that gives you the ability to use the current date and time as the file modified date when BuildMaster cannot parse it from the FTP server. You can enable it on the FTP operation by going to the Advanced tab and setting the Use current date on error property to true.
Could you please update the extension, set the Use current date on error property, and give that a try?
Thanks,
Rich
The best way is to check the date/time settings on each server and tell me which time zone is selected on each.
Thanks,
Rich
Are you able to pull successfully using npm and ProGet? Also, does your API key have the Feed API right enabled, or if you are impersonating a user, does that user have the ability to publish packages?
Also, when you set your NPM auth using:
[~]$ npm config set always-auth=true
[~]$ npm config set _auth={ENCODEDAPIKEY}
Are you base64 encoding your API key using the format api:{APIKEY}? For example, if my API key is FakeApiKey, I would base64 encode api:FakeApiKey, which gives YXBpOkZha2VBcGlLZXk=. So the commands to run would be:
[~]$ npm config set always-auth=true
[~]$ npm config set _auth=YXBpOkZha2VBcGlLZXk=
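If you have the base64 utility available, you can generate the encoded value like this:
[~]$ echo -n 'api:FakeApiKey' | base64
YXBpOkZha2VBcGlLZXk=
(The -n flag matters; a trailing newline would change the encoded output.)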
Alternatively, you could use npm adduser to log in. Here are some examples:
If you ran the command to make ProGet your default repo: npm adduser --always-auth
If you are using multiple repos: npm adduser --registry=http://progetrepo/feedname --always-auth
If you are using scoped repos: npm adduser --registry=http://progetrepo/feedname --scope=@inedo --always-auth
This way uses a username and password. If you want to use an API key instead, use api as the username and the API key as the password.
Hope this helps!
Thanks,
Rich
Are the server BuildMaster is running on and the FTP server using different cultures? For example, is BuildMaster en-EU and the FTP server en-US? It looks like the FTP server is sending the date as M-d-yy, but BuildMaster is expecting d-M-yy, so a date like 10-9-20 would be read as September 10th instead of October 9th.
Thanks,
Rich
Hi @msimkin_1572,
It looks like your application pool user does not have read/write access to C:\ProgramData\ProGet. Could you please verify access to that folder and its child folders?
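You can dump the current permissions with the built-in icacls tool to see which accounts have access:
icacls "C:\ProgramData\ProGet" /t
(The /t switch recurses into child folders; it can be chatty, so drop it to check just the top level first.)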
Thanks,
Rich
Hi @msimkin_1572,
Looking at the Azure DevOps documentation here, you should be able to generate a personal access token (PAT) and connect using any username and the PAT as the password. In ProGet 5.2, only the NuGet v2 API is supported, so make sure to follow the instructions for connecting with a NuGet v2 client. In ProGet 5.3 and later, we have added NuGet v3 support.
Could you give that a try?
Thanks,
Rich
Did you download the extension and copy it to the Extensions.ExtensionsPath folder manually, or did you update Extensions.UpdateFeedUrl to point to the PreRelease Extensions URL? In either case, try restarting BuildMaster's site and service. If you copied it manually, then upon restart you should see the version change. If you updated the feed URL, then you should see an update available for your FTP extension.
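For the manual route, it is just a file copy plus a service restart; the file name, folder, and service name below assume a default install, so check your actual Extensions.ExtensionsPath value first:
copy FTP.upack "C:\ProgramData\BuildMaster\Extensions"
net stop INEDOBMSVC
net start INEDOBMSVC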
Thanks,
Rich
I'm not seeing anything that would cause this. I created a CI version of the FTP extension that includes more logging around the date parsing. Would you be able to install version 1.0.1-CI.2? You can follow our documentation for installing extensions manually.
Once you have it installed, could you please repost the error output?
Thanks,
Rich
Hi @dilshaat_6115,
Glad you got it working! Just some extra information for you: the component name depends on the component name of the package you uploaded. In my case, I uploaded a package to ProGet using the main component, so I had to use deb http://192.168.55.103:8624/ hms-ubuntu main. Here is an example of how my package looked in ProGet.
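For anyone following along, wiring that up on the client looks like this (using the URL and feed name from my test setup above; the sources.list.d file name is arbitrary):
[~]$ echo 'deb http://192.168.55.103:8624/ hms-ubuntu main' | sudo tee /etc/apt/sources.list.d/proget.list
[~]$ sudo apt-get update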
Thanks,
Rich
Let me dig into this a bit and see if I can see anything that could be going on.
Thanks,
Rich
Hi @marcin-kawalerowicz_5163 ,
Do you have a proxy set up in front of ProGet? If so, please check the maximum request size limits in your proxy.
Also, what version of ProGet and what version of Docker are you using?
Thanks,
Rich
Hi @bvandehey_9055,
The URL you are using looks correct. If you click the Download button and the package actually downloads, that verifies the connection succeeded. If it was failing to connect to WhiteSource, you would see a page that looks like this:
If you want to verify that ProGet is communicating with WhiteSource, I would just put in a bad value for WhiteSource and attempt to download the package from the ProGet UI. If you get a similar error to above, that verifies the communication to WhiteSource.
Are you using the Product Name or the Product Token in the Product field in the configuration? I would try to use the product token first.
If all of that is setup, then it is most likely an issue with the rules set up within WhiteSource.
Thanks,
Rich