Hi @jblaine_9526 ,
I can't find anything about SAML group claims on our internal roadmap... is there a ticket/forum post about it that I missed?
Cheers,
Alana
Hi @inok_spb ,
There won't be any issue in disabling that trigger. It's basically like a "foreign key constraint" and just performs data-validation checks. However, I suspect that's where the problem is, so please give it a shot and let us know.
We haven't had any other reports of this, tried to reproduce it on our own, or fixed it... so it wouldn't be surprising if the issue is still there.
Cheers,
Alana
Hi @bushman_3007,
Can you clarify the request a bit more, i.e. why do you want to delete soft-deleted directories?
I don't know offhand why directories are soft-deleted, but I suspect it has to do with preserving versioning history.
Thanks,
Alana
Hi @mcascone ,
This is a bug, thanks for the report!
ProGet "soft deletes" items, but it seems that when you go to recreate a directory, it's not being reset to "not deleted". We'll get this fixed in the upcoming maintenance release of ProGet 2022.4, scheduled for August 12: PG-2173 (FIX: Deleted asset directory items cannot be created).
Cheers,
Alana
Hi @inok_spb
I think you're right, something must be deadlocking.
There was a database change in ProGet 6.0.16 that basically involved creating a large transaction to handle a race condition.
https://inedo.myjetbrains.com/youtrack/issue/PG-2140
We haven't heard of any other reports of this deadlock (particularly from our paid users)... and unfortunately deadlock issues are really hard and time-consuming to reproduce and track down (especially with Docker using shared layers).
So as a free user, we'd really appreciate any other info / help you can provide to track this down and fix :)
The issue would most certainly be in database code, which is viewable/editable in SSMS.
One quick thing to try -- can you just disable TR__DockerRepositoryTags__ValidateImage? That might be the culprit... and if so, we can always add it to the stored procedures as well.
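If it helps, here's one way to do that from PowerShell -- just a sketch, and note that the table and database names below are assumptions (check the trigger's definition in SSMS first):

# Sketch: assumes the trigger lives on a DockerRepositoryTags table in a database named ProGet
Invoke-Sqlcmd -ServerInstance "localhost" -Database "ProGet" -Query "DISABLE TRIGGER [TR__DockerRepositoryTags__ValidateImage] ON [dbo].[DockerRepositoryTags];"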
Cheers,
Alana
Thanks much @brett-polivka !
That's a great tip! That just might be it, and I can see the code here just uses Uri, as opposed to a SAS URI:
https://github.com/Inedo/inedox-azure/blob/master/Azure/InedoExtension/FileSystems/AzureFileSystem.cs#L91
For that error, can you see what ProGet logged in Admin > Diagnostic Center?
Hi @a-diessl ,
To simplify things, I'd recommend just hosting ProGet in IIS and using ACME/Let's Encrypt... if you don't already have HTTPS.
I can't say why the domain is being set; that's just something the hosting framework seems to do, and it hadn't been an issue (for Linux users) until now.
We'll definitely consider changing it if it becomes more of an issue...
Cheers,
Alana
Thanks for the detailed investigation!
You're right, the cookie domain seems to be the problem.
We are now using .NET 6 on Windows, and I'm guessing that's how you're hosting this? It would have behaved the same on Linux (which was using .NET 5 for a long time).
The cookie domain comes from the hosting framework and doesn't use Web.BaseUrl. In general, we don't recommend using that anymore; we prefer the X-Forwarded-* headers instead.
In this case, can you try setting a header value on your reverse proxy, X-Forwarded-Host: proget.example.com? On NGINX, that's typically a proxy_set_header X-Forwarded-Host $host; directive. Then it should work.
I haven't tested it, but since this is what's generally done on NGINX proxies, I suspect it won't be a problem.
Hi @brett-polivka ,
In ProGet 2022, we upgraded to a completely new version of the Azure SDK, since the old version had long been deprecated. So it's not unexpected to see issues... though we didn't see any in our testing :)
I'll do my absolute best to help! Our code doesn't really deal with authentication or access rules; that's all handled by the Azure SDK, which uses that connection string.
I searched "CannotVerifyCopySource", and a lot of SDK users (including tools like AzCopy) report this issue when switching to the new SDK/API. I don't know Azure well enough to understand it, but some said it had something to do with the "storage account firewall".
Beyond that, I'm not totally sure how to troubleshoot, but the first thing that comes to mind is the connection string. I'm not sure what to look for, but I know those can get quite complex... maybe you're using an option that behaves differently in the old vs. new SDK? Maybe you need to specify something in the connection string now?
Here's some docs on connection strings:
https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string
The second thing I would try is to create a totally new Azure Blob Container, using the default settings, and a very basic connection string. That should work (that's what we do), and then try to compare/contrast the differences between the containers.
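For reference, a basic connection string (the stock format the Azure portal generates) looks like this; the account name and key are placeholders:

DefaultEndpointsProtocol=https;AccountName=«account-name»;AccountKey=«account-key»;EndpointSuffix=core.windows.net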
Please let me know what you find!
Cheers,
Alana
I'm not sure what else to change... how about setting the application pool's ".NET CLR Version" to "No Managed Code" (with the "Integrated" pipeline mode)?
The Inedo Hub should automatically do that for you.
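If you'd rather do it from an elevated command prompt, something like this should work -- a sketch that assumes your application pool is named "ProGet" (check IIS Manager for the real name); an empty managedRuntimeVersion means "No Managed Code":

%windir%\system32\inetsrv\appcmd.exe set apppool "ProGet" /managedRuntimeVersion:""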
Thanks,
Alana
In this case, you'll want to switch to IIS hosting:
https://docs.inedo.com/docs/various-iis-switching-to-iis
You may find it easier to uninstall/reinstall as well (note: the database will remain the same, and nothing is deleted).
Cheers,
Alana
This is configured outside of Otter, and is handled by the web server - either IIS (Windows) or nginx (Linux/Docker).
Getting HTTPS on Windows is "relatively easy" -- you can use something like win-acme to install a Let's Encrypt certificate, or you can install one issued by your organization. We don't yet have step-by-step documentation on how to do this, but it's something we're writing.
Here are the instructions for Linux:
https://docs.inedo.com/docs/https-support-on-linux
Is that helpful? Please let me know :)
Cheers,
Alana
Hi @Panda ,
That sounds like a great idea, but I'm afraid we won't have the resources to add this in the coming weeks (we have some really cool stuff planned for BuildMaster 2022 that we're working on now).
The easiest way to do local development on the extension is to get the source code from GitHub, then build it into a DLL. Then, construct a new version of the Subversion.upack file with your DLL, and replace the one in the existing extensions folder. From there, restart the web/service, and your version should load instead. There's a rough sketch of this below.
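For example, something like this in PowerShell -- just a sketch; the DLL name, extensions path, and service name are all assumptions, so adjust them to your install (a .upack is just a zip file):

# Sketch: replace the DLL inside a copy of the existing upack, then swap it in
Copy-Item "C:\ProgramData\BuildMaster\Extensions\Subversion.upack" .\Subversion.zip
Compress-Archive -Path .\bin\Release\Subversion.dll -Update -DestinationPath .\Subversion.zip
Rename-Item .\Subversion.zip Subversion.upack
Copy-Item .\Subversion.upack "C:\ProgramData\BuildMaster\Extensions\" -Force
Restart-Service INEDOBMSVC    # service name is an assumption; restart the web app too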
If you add properties to SvnRepositoryMonitor.cs that look like the other properties, they should show up in the UI and be configurable.
Let us know what you come up with, and we can definitely merge in the code and release a new version with your changes.
Cheers,
Alana
Hi @bbrown2_8761 ,
It's a bug in Visual Studio's NuGet client, I'm afraid, and it's not something we can realistically work around.
You could post it in the NuGet issues, and they may fix it in a future version: https://github.com/NuGet/Home/issues
It's really easy to reproduce, and they've definitely fixed issues like this in the past.
Cheers,
Alana
Hi @Panda ,
I'm not really familiar with SVN, but I'll do my best to help :)
It looks like our code just assumes that there is a /branches directory, and that's why it's crashing.
The repository monitor will enumerate the branches to find the most recent revision numbers, and then trigger builds as appropriate.
The code that enumerates branches seems to make the following call, svn ls --xml "«repo-url»" "branches/", and then parse the results. But it's failing, because svn is erroring with E200009.
So it would require some sort of code change to get this working. Perhaps a flag on the repository monitor, or something like that? If you have any ideas, maybe it's something we can try -- it's easy to get pre-release versions out there.
Cheers,
Alana
Hi @bbrown2_8761 ,
I was able to reproduce this issue, though it's not happening 100% of the time.
So far as I can tell, it's a bug in the NuGet client that .NET Framework uses. Maybe it's a result of the "bad version" in the feed?
We can see that Visual Studio is requesting the registration index, and then complains the package isn't found. However, it's clearly in the index.
The registration index lists all versions of the package, and you can find it on this URL:
https://(redacted)/nuget/approved-nuget/v3/registrations/selenium.webdriver.chromedriver/index.json
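If it's useful, you can quickly list the versions ProGet reports with PowerShell -- a sketch that assumes the index isn't paged (the server name is a placeholder):

# Sketch: flattens the (non-paged) registration index into a list of version strings
$index = Invoke-RestMethod "https://«proget-server»/nuget/approved-nuget/v3/registrations/selenium.webdriver.chromedriver/index.json"
$index.items.items.catalogEntry.version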
You will see 103.0.5060.5300 in that list, and there's no reason NuGet should say it's not found.
There is one key difference between ProGet's registration index and NuGet.org's: paging. Here's NuGet's index:
https://api.nuget.org/v3/registration5-semver1/selenium.webdriver.chromedriver/index.json
When the registration index is paged, the client will look for the appropriate page. I'm guessing it crashes if there's a bad version in there, or something?
Anyways... I don't think we can do much to work around this issue, and I guess you have a suitable workaround: don't use the Install-Package command. That seems to be the only thing that's broken.
Cheers,
Alana
Hi @mcascone ,
Thanks for the feedback, that's great to hear. We'd love to hear what your download instructions look like, so we can share examples of how to use it.
We don't have support for this yet, but it will come in the next (first!) maintenance release of ProGet 2022, currently scheduled for July 8:
https://inedo.myjetbrains.com/youtrack/issue/PG-2157
Cheers,
Alana
Hi @Chester0 ,
That screenshot appears to be the place where you configure environment variables? There's just one environment variable, and it's SQL_CONNECTION_STRING. The -e indicates to the Docker client that what follows is an environment variable; it isn't part of the environment variable name.
The rest are other configuration options that instruct the Docker client to work in one way or another; they're not environment variables. This is kind of complex to configure unless you really know Docker inside and out, and know how to apply the concepts to another engine.
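For example, in a plain docker run, that one environment variable is passed like this (the image tag and connection string are placeholders):

docker run -d --name proget -p 80:80 -e SQL_CONNECTION_STRING="«your-connection-string»" inedo/proget:latest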
You may wish to use Lightsail and Windows. The costs are about the same, and the Inedo Hub is much easier to work with:
https://docs.inedo.com/docs/proget-how-to-install-on-aws-lightsail
Cheers,
Alana
Hi @Chester0 ,
I'm really not familiar with AWS Fargate, but it looks like this is how you're supposed to pass environment variables: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html
It's likely you'll need to specify other configuration differently. Our documentation breaks down what each of the configuration parameters are: https://docs.inedo.com/docs/installation-linux-docker-guide
Once you get up and running, we'd love to hear how you got things set up :)
Cheers,
Alana
This error message is likely related to some sort of proxy server; you can adjust proxy settings under Admin > Proxy.
Cheers,
Alana
Hi @bbrown2_8761 ,
That package should be fine; are you sure it's in your approved-nuget feed?
If you navigate to /feeds/approved-nuget/Selenium.WebDriver.ChromeDriver/103.0.5060.5300 in the Web UI, then you should be able to see and download the package.
When you try to download from the UI, what happens?
Thanks,
Alana
Hi @dward_2542 ,
Thanks for finding the specific package.
The issue is that ProGet does not consider 96.0.4664.1800-beta.2 to be a valid NuGet package version, and gives an error message when trying to access it. See the NuGet Package Versioning documentation to see what's supported.
Most NuGet tools and versions of the NuGet client will not support that version number either.
The easiest way to resolve the issue is to use a newer version of that package.
Cheers,
Alana
Hi @arozanski_1087 ,
Fortunately, sacrificing an albino goat won't be necessary.
Simply deleting the package using the UI will suffice to remove it from a feed. Unlisting or running retention rules are not necessary.
I'll share some information to help troubleshoot. Basically, there are three types of packages -- Local, Cached, and Remote:
Remote and Cached packages have the radio-tower icon.
Local and Cached packages are stored on the feed, and even if you remove the connector, cached packages will still be in the feed. This is by design.
Remote packages come from the connector associated with the feed. They cannot be removed (though you could filter them out if you really want). The metadata is not cached (unless you configure that on the connector), which means the data is always "live". When you remove all connectors from a feed, there will be no Remote packages.
Based on all this, it sounds like that package is coming from one of your connectors to Mainline. I would just remove the connectors from the feed until you find which one it's in.
Cheers,
Alana
Hi @john-selkirk ,
Another customer has more details on this... it seems that newer versions of dotnet nuget silently ignore publishing symbols unless certain metadata is in the feed index: https://github.com/NuGet/Home/issues/11871
We may have a fix for this via PG-2154, which is scheduled for July 8's maintenance release. Not sure if it will fix it, but let's hope!
In the meantime, the PUT will work fine though.
Alana
Hi @john-selkirk , hi @alansbraga_8192 ,
If you haven't seen it already, we've done our best to capture how to configure all this:
https://docs.inedo.com/docs/proget-feeds-nuget-symbol-and-source-server
Unfortunately it's not as intuitive as we'd like. In general, we recommend the "embedded" approach for symbols, to keep things simple.
The dotnet nuget push command has a few bugs with regard to symbol packages, such as ignoring the symbol-source argument. Unfortunately this is beyond our control, as it's maintained by Microsoft.
If you run a tool like Fiddler, you can clearly see it's not even trying to PUT the package. Other times it will try to push it to symbols.nuget.org, but silently fail.
nuget.exe seems to be more reliable, but you can also just use curl.exe or PowerShell to PUT the package using the NuGet API:
PUT https://proget.local/nuget/feed-name/package
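For example, with curl.exe -- a sketch where the feed URL, package file, and API key are placeholders:

curl.exe -X PUT -H "X-NuGet-ApiKey: «api-key»" -F "package=@MyPackage.1.2.3.nupkg" https://proget.local/nuget/feed-name/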
Cheers,
Alana
I wasn't able to reproduce the error using the UI, but I was able to figure out what's causing it and reproduce the error state using the API.
Basically, you have two pipelines named "Standard «redacted» Service Pipeline". One pipeline may be global, and one may be part of the application. But in any case, the validation logic is not allowing this to save because it detects a duplicate.
To resolve this, delete one of the pipelines.
Cheers,
Alana
Thanks, received! We'll debug/investigate this today or tomorrow.
Cheers,
Alana
I think this may be specific to your pipeline data. You should be able to go to the pipeline in "edit view" and share the JSON document with us.
You can send that to support at inedo dot com with [QA-874] in the subject (so we can find it), or just open a new ticket. If you do send it to the support email, please let us know here that you sent it, since we don't regularly check that inbox.
Thanks,
Alana
Hi @borisov_1556
"You say that ProGet doesn't modify files after uploading, only during it. But can ProGet strip and therefore modify packages after uploading in case of triggering re-indexing in the 'symbol server' settings?"
When you enable any of the "strip" options, ProGet will modify the file during the download process, when it's requested by a client (web browsers, nuget.exe, etc.). The zip file is basically rewritten on the fly while being sent to the client requesting it. This modified file is not saved to disk or persisted inside of ProGet.
The symbol serving reindexing is unrelated.
"Do you mean that not only the content of a file, but also the compression type of every file in the .nupkg will be preserved as-is, and ProGet handles that explicitly?"
Correct. ProGet effectively just "deletes" the "stripped" files entries from the archive file while it's being streamed. The existing file entries are not modified.
A file entry in a zip archive contains the compression type. So to "change" an entry's compression, you would need to "delete" it from the archive, and then "add" it back to the archive.
So, the point is... something else must be modifying your zip file; ProGet is not. You can verify this by comparing the hash of the file you originally uploaded against the hash of the file you download back from ProGet.
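For example, in PowerShell (the file names are placeholders):

# Identical hashes mean ProGet stored and served the file unmodified
Get-FileHash .\original.nupkg, .\redownloaded.nupkg -Algorithm SHA256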
Maybe nuget.exe is doing a modification? It's really hard to guess, but I would continue the investigation by finding whatever could be modifying the file.
Cheers,
Alana
The method name is NpmFeeds_SetTagVersion, so I think the error you're seeing is about case sensitivity in the method name.
Hi @borisov_1556 ,
Thanks for the detailed information.
First off, ProGet does not modify NuGet package files after you upload them.
If you enabled any of the "Strip" options, then ProGet will stream a modified archive file during download; this will have a different hash and be a different file. However, this wouldn't alter how existing items are stored in the archive file.... those would be streamed as-is.
ProGet also has a "repackage" feature, which will create a new package altogether. But I doubt you're using this, and you shouldn't use it with public / third-party packages; it's intended for CI/CD pipelines.
I'm not sure what's going on, but I would investigate how you're uploading the files. Maybe you're re-signing them? But otherwise, if you just download a package file from nuget.org, then upload it to ProGet, then download it again... it's going to be an identical file.
Let us know what you find!
Cheers,
Alana
Thanks for sharing that!
Based on the snippet you sent, my guess is that this was originally for an npm package, and someone tried to make it work for a NuGet package? But that would have never worked, b/c NuGet packages can't be tagged... so this is really just a broken code snippet that never worked.
To tag npm packages, I recommend just using the client:
https://docs.npmjs.com/cli/v8/commands/npm-dist-tag
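For example (the package, version, and feed URL are placeholders):

npm dist-tag add my-package@1.2.3 latest --registry https://proget.local/npm/feed-name/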
If you want to use the native API, then note that the parameters in that snippet are also wrong; NpmFeeds_SetTagVersion takes a different set of parameters.
Hi @arozanski_1087,
Do you mean that you are currently tagging npm packages via the API (NpmFeeds_SetTagVersion), but that you wish to tag NuGet packages?
I ask because there is no such method as NugetFeeds_SetTagVersion, at least not in ProGet v6 or any other version that I'm aware of.
It's also not possible to tag NuGet packages -- the reason is that package tags are stored in the manifest file, so you would need to re-upload the package file. npm tags, however, are stored as metadata on the server (not in the package manifest file).
The method NpmFeeds_SetTagVersion is part of the ProGet Native API; you can find which methods are available at /reference/api inside your instance of ProGet.
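For reference, Native API methods are invoked as a POST to /api/json/«method-name». Here's a rough sketch in PowerShell -- and to be clear, the parameter names below are hypothetical; check /reference/api for the real ones:

# Sketch only: parameter names are hypothetical -- see /reference/api in your instance
Invoke-RestMethod -Method Post -Uri "https://proget.local/api/json/NpmFeeds_SetTagVersion" `
  -Headers @{ "X-ApiKey" = "«api-key»" } `
  -Body @{ Feed_Id = 1; Package_Name = "my-package"; Tag_Name = "latest"; Version_Text = "1.2.3" }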
Cheers,
Alana
Just jumping in here, but I think you need the .exe:
docker exec -it proget /usr/local/proget/service/ProGet.Service.exe resetadminpassword
The .exe isn't needed to run an executable on Windows, but I think it's required for Linux.
Oh wonderful!
If you're able/interested to try it out, then we should easily be able to add a checkbox in the UI for it and send you a patch/prerelease to try.
That's how many of the other options got added
Alana
I'm not really sure I totally understand the issue, or how to resolve it... but I'll explain how a GET manifest request to your URL would work.
Basically, ProGet returns whatever manifest file you uploaded, or whatever was downloaded via the connector. We do not generate the manifest files, or pay attention to the Accept request header.
I understand that application/vnd.docker.distribution.manifest.list.v2+json is a "fat manifest", which is basically a manifest that points to other manifests for different architectures. So you don't know exactly which image you'll get until you install it on the machine.
So if you want to inspect layers, you would first need to decide which manifest to use, I guess.
Cheers,
Alana
Hi @james-traxler_1560 ,
Thanks for testing that!
It will be released in v6.0.16, which is planned to ship on Friday June 10th.
Cheers,
Alana
Hi @marc-ledent_9164 ,
This isn't something that our LDAP/AD integration supports on Linux at this time. It's possible using the Windows-version, which has an LDAP integration that uses different libraries (that only work on Windows).
We do plan on rewriting the LDAP/AD integration with a different library to allow this level of customization, but it's not something we can do right away. It'll be later this year.
Cheers,
Alana
@ehsan-bahrami91_9979 great!
In that case, can you use 6.0.16-rc.1 by following these instructions on Windows:
https://docs.inedo.com/docs/desktophub-overview#prerelease-product-versions
Or if it's on Linux, just install the image with that tag.
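That would be something like this -- assuming the image tag matches the version string:

docker pull inedo/proget:6.0.16-rc.1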
After installing that version, it should do the trick. Please let me know :)
@ehsan-bahrami91_9979 thanks for confirming that!
There was a regression/issue with recording this information, but it will be resolved via PG-2143 in 6.0.16 (shipping next Friday). If you'd like me to prepare a patch/prerelease, let me know!
We'll definitely keep that in mind; we currently use the AWS SDK, which means we don't work at the HTTP-level and can't easily control the requests.
Basically our code does this:
// Roughly how the copy is done; the request is presumably passed to the AWS SDK's
// IAmazonS3.CopyObjectAsync -- paths/ACL/encryption come from the feed's storage settings
await s3Client.CopyObjectAsync(new CopyObjectRequest
{
    SourceBucket = this.BucketName,
    SourceKey = this.BuildPath(sourceName),
    DestinationBucket = this.BucketName,
    DestinationKey = this.BuildPath(targetName),
    CannedACL = this.CannedACL,
    ServerSideEncryptionMethod = this.EncryptionMethod,
    StorageClass = this.StorageClass
});
If you know of any way to configure the SDK to send a different request, we'd be happy to try that out!
Cheers,
Alana
Can you tell me what version of ProGet you're using? It should be in the bottom-right corner.
Thanks,
Alana
I'm glad you could solve the issue! I'm not very familiar with Ceph, but it sounds like it works with the S3 API, which is what we use to talk to AWS. I added a brief note to the docs about Ceph/RGW usage, but if you have suggestions to improve it, please let me know :)
As for the "Use server-side encryption" option, the checkbox sets the option on the API Request (e.g. CopyObjectRequest) to use AWS Server-side Encryption. ProGet does not encrypt the data.
Cheers,
Alana
Hi @chris-f_7319 ,
If you follow the instructions to use the reset tool, it should reset the password and solve the problem:
https://docs.inedo.com/docs/various-ldap-troubleshooting
(proget-installation-directory)\Service> .\ProGet.Service.exe resetadminpassword
Cheers,
Alana
This error is happening at the SSL-level, which is managed by the operating system (platform). There's nothing you can configure in ProGet to resolve this, so you'll need to make some changes to the operating system.
Can you tell me, are you using ProGet for Windows, or on Linux (Docker)? In either case, you'll need to trust the root certificate, but the ways to do this are a bit different.
In Windows, the easiest way is to install the certificate using the UI, into Certificates (Local Computer) > Trusted Root Certification Authorities. You can then verify that the installation was successful by trying to navigate to a URL in your S3 bucket.
In Linux/Docker, it's a bit trickier; in general, you'll want to copy the .crt into /usr/local/share/ca-certificates and then run /usr/sbin/update-ca-certificates. There are a few ways to do this, but one common way is to build a Docker image on top of our image. You can also shell into the running container (docker exec) and handle it there.
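For example, from the host -- a sketch where the container name and certificate file are placeholders:

docker cp ./my-root-ca.crt proget:/usr/local/share/ca-certificates/my-root-ca.crt
docker exec proget /usr/sbin/update-ca-certificates
docker restart proget

Note that this won't survive recreating the container, which is why building an image on top of ours is the more durable route.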
Cheers,
Alana
Thanks so much for verifying all of this. I talked to the team about this, and wanted to share an update, and further the conversation. We'd love to get your opinion.
This is very clearly a "pretty big bug" that Microsoft hasn't fixed after many years, which is surprising to see. It doesn't seem to be reported or discussed much.
At this point, we're really unclear on the future of PowerShell DSC. From our research, it never got widespread usage in the community because of the complexity of Pull servers, MOF, etc. Ultimately, most Windows admins just chose to use regular PowerShell scripts to set up servers, because they're simpler to understand and use.
Microsoft seems to have demoted it to a "community" project as well. It looks like they have some effort they plan to invest in it (based on a 2021 blog post), but there haven't been recent updates or activity.
It seems that the PowerShell team is really focused on PowerShell Core now, and DSC isn't so much a fit? Maybe they will fix the remoting bug in Core? But who knows.
There are several alternatives:
Ultimately, you (and our users) want to use automation to solve problems, and PowerShell DSC is just one tool to get there. It's convenient because there are many resources available.
We could invest in working around this bug, but it's not trivial. In the same amount of time, we could improve a lot of Otter operations and write/document a lot of PowerShell scripts.
From a marketing/user standpoint, we don't know how many people are using PowerShell DSC, or whether they would even find that Otter adds value.
So getting your ideas/opinions would be appreciated :)
Cheers,
Alana
Thanks for all of the info!
"We are using ProGet Version 5.3.28 (Build 16), and calling the API directly."
Ah, that explains it. We really only test this with the Docker client, which wouldn't trigger this issue.
"Usage scenario: removing or renaming tags for a manifest. As there is no specific route for this use case, we are DELETE-ing the manifest through the v2 docker API and re-adding (with PUT) the new/remaining tags."
Is this a Docker API limitation? We were thinking of adding an API for tag management for our own CI/CD platform (BuildMaster), but we just worked around it, and no one ever asked.
"we are concurrently executing 6 PUT requests:"
This is probably where the issue is. Looking at the database code, there is an opportunity for this to happen between a DELETE and INSERT statement.
It's pretty easy to fix; can you give it a shot? If it works in your test, then I'll commit the change. But note this will ship in v6.0.15, so unless you upgrade to that version or later, you won't have the patched code.
https://inedo.myjetbrains.com/youtrack/issue/PG-2140
If you download the file attached to the above link, you can run it against your SQL Server database.
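For example, from PowerShell with the SqlServer module -- a sketch; the server, database, and script file names are placeholders:

# Runs the attached script against the ProGet database
Invoke-Sqlcmd -ServerInstance "localhost" -Database "ProGet" -InputFile .\PG-2140.sql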
Thanks,
Alana
Hmm, that's really weird; the next thing I would try is to just restart all the machines (including the BuildMaster server). That's easy to do, and the issue might go away. It could be a software error in the Windows networking stack.
It's also possible that there's some intermediate network device that's interfering -- a router, QoS device, etc. That's harder to diagnose and would require some tracing... starting by making sure that the RST packet is actually being received on the BuildMaster server (that's usually what triggers the error message), and then figuring out which device is sending it, and why.
Our code doesn't work at the packet level, but if the remote server (Inedo Agent) is where the RST packets are coming from, it likely has to do with some obscure Windows setting... maybe even firmware on the network card. But at least it's something to look at.
Cheers,
Alana
Hi @james-traxler_1560 ,
We can help with this...
In general, the error makes sense if you're trying to re-add an image that's already there... but can you describe how you're doing that? I.e., are you using the Docker client to push the image, or using the API directly to execute the PUT request?
What is the version of ProGet that you're using?
Can you give a full stack trace (if there is an error)?
Thanks,
Alana