Hi @nselezneva_7646,
Am I correct that you had that exact issue with the example packages you sent over to us? Would you be able to send me just the upack.json files from the packages you are testing with?
Thanks,
Rich
I took a look at your example and I was able to push the packages to my feed without issue. If I tried to push the package a second time, I would then get that "Feeds_OverwritePackage" error, but that is expected.
Are you getting this error when you try to push it initially? Are there other packages in the feed? If there are other packages in the feed, are there any with the same name but a different group name?
Thanks,
Rich
I can confirm that we have received your email. Please give us a bit of time to review it and we will get back to you soon!
Thanks,
Rich
Hi @Justinvolved,
The message that says "currently contains 0 items" is just telling you that the folder the artifact is deploying to contains 0 files. After that, it will deploy the artifact files. If you enable verbose logging (setting Verbose: true), you will see all the files transferred from the artifact to the working directory.
Thanks,
Rich
Hi @Justinvolved,
We can definitely improve the error message when trying to deploy a config file when an instance is not found. Would you be able to share your OtterScript with us for how you are deploying the config files? Also, how do you have your deployment target configured (single server, group, environment, etc...) on your stage?
Thanks,
Rich
Hi @justin_2990,
We are actually in the process of developing dedicated npm operations, but we do not have anything ready as of yet. The easiest way to call npm commands is to use the Exec operation in OtterScript. Due to how the npm CLI writes its output, you need to add ErrorOutputLogLevel: Warning to the Exec operation. Here is an example of the npm install and npm publish commands:
set $NpmPath = C:\Program Files\nodejs\npm.cmd;
set $NodePath = C:\Program Files\nodejs\node.exe;
# Install Dependencies
Exec
(
FileName: $NpmPath,
Arguments: install,
WorkingDirectory: ~\Source,
ErrorOutputLogLevel: Warning
);
# Publish Package
Exec
(
FileName: $NpmPath,
Arguments: publish Source,
WorkingDirectory: ~\,
ErrorOutputLogLevel: Warning
);
When it comes to ProGet::Scan, it should work with all npm packages. It just reads the package-lock.json and records the dependencies in ProGet. You can see our implementation on the pgscan GitHub repository. If that doesn't work, you can always use a tool like CycloneDX to generate an SBOM and upload it to ProGet via the SCA API which has an endpoint for importing an SBOM file directly.
One last thing, you mentioned that you are using ProGet. You can create an OtterScript module to register ProGet as your package source for npm. I do this with the following:
ConfigureNpmRegistry OtterScript Module
##AH:UseTextMode
module ConfigureNpmRegistry<$NpmPath, $ResourceName, $CredentialName>
{
set $ProGetNpmRegistry = $SecureResourceProperty($ResourceName, ServerUrl);
Exec
(
FileName: $NpmPath,
Arguments: config set registry $ProGetNpmRegistry,
WorkingDirectory: ~\,
ErrorOutputLogLevel: Warning
);
set $AuthToken = $SecureCredentialProperty($CredentialName, Token);
PSCall Base64Encode
(
Text: api:$AuthToken,
EncodedText => $AuthKey
);
Exec
(
FileName: $NpmPath,
Arguments: config set always-auth true,
WorkingDirectory: ~\,
ErrorOutputLogLevel: Warning
);
Exec
(
FileName: $NpmPath,
Arguments: config set _auth $AuthKey,
WorkingDirectory: ~\,
ErrorOutputLogLevel: Warning,
LogArguments: false
);
Exec
(
FileName: $NpmPath,
Arguments: config set email support@inedo.com,
WorkingDirectory: ~\,
ErrorOutputLogLevel: Warning
);
}
I also had to add a PowerShell script to handle the base64 encoding of the credentials:
<#
.SYNOPSIS
Base64 Encodes a string
.PARAMETER Text
Text to be encoded
.PARAMETER EncodedText
Encoded text string
#>
param(
[Parameter(Mandatory=$true)]
[string]$Text,
[ref]$EncodedText
)
$Bytes = [System.Text.Encoding]::UTF8.GetBytes($Text)
$EncodedText = [Convert]::ToBase64String($Bytes)
I then call this using:
# Setup registry
call ConfigureNpmRegistry
(
NpmPath: $NpmPath,
ResourceName: global::ProGetNpmRepo,
CredentialName: global::ProGetNpmCredentials
);
These are all operations we plan to build into the npm extension, but they are the current workaround until we get that extension up and running. I hope this helps! Please let me know if you have any questions.
Thanks,
Rich
Can you please try upgrading your upack CLI to the latest version and see if that resolves the problem for you?
Thanks,
Rich
Can you please tell me what version of the upack CLI you are using? You can find that by running the command upack version.
I tested by creating a new Universal Packages feed with a feed usage type of PrivateOnly and the upack CLI 3.0.1.3 and I was able to push without issue. Can you also please answer a couple of other things for me?
Thanks,
Rich
Hi @jimthomas1_7698,
Thanks for bringing this to our attention, we will get that updated!
Thanks,
Rich
Hi @22marat22_9029,
It turns out that we had a compatible library from our Conda implementation that I was able to use for zstd compression with Debian. I have created a ticket, PG-2242, to track the fix, and it will be released in the next release of ProGet, 2022.14.
Thanks,
Rich
Hi @22marat22_9029,
Currently, we do not support the zstd compression format (tar.zst) for the control or the data file. We only support .tar, .tar.gz, and .tar.xz for the data and control files, with the addition of .tar.bz2 and .tar.lzma for the data file. This is currently a limitation of the third-party package we use to read tar files. Hopefully, this is something we can add support for in the future.
Thanks,
Rich
We have created a ticket, PG-2233, to fix this issue in ProGet. We are not exactly sure what is causing the issue as of yet, but we are able to recreate it and are working on a fix. We are currently targeting Nov 18th for a release date on the fix.
Thanks,
Rich
I think you are looking for the npm search API. You would make a call to your npm feed using http://{proget-server}/npm/{feedName}/-/v1/search?text={Package}. It also looks like http://{proget-server}/npm/{feedName}/-/all will show you all local packages stored in ProGet (it does not include remote packages from connectors). Please also note that the search API returns results in a paged fashion; you will need to use the from query string parameter to offset the results and get the following pages.
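Since the results come back in pages, a small helper can generate the offsets up front. This is a sketch assuming the standard npm search paging parameters (text, size, from); the server and feed names are placeholders:

```python
def search_pages(server: str, feed: str, text: str,
                 size: int = 20, pages: int = 3) -> list[str]:
    """Build paged npm search URLs; `from` offsets into the result set."""
    base = f"http://{server}/npm/{feed}/-/v1/search"
    return [f"{base}?text={text}&size={size}&from={page * size}"
            for page in range(pages)]

# Placeholder server and feed names for illustration.
for url in search_pages("proget.example.com", "npm-feed", "lodash"):
    print(url)
```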
I hope this helps! Please let us know if you have any other questions.
Thanks,
Rich
Thanks for giving us an update and letting us know that fixed your issue.
Thanks,
Rich
Hi @torgabor_4445 and @sdohle_3924,
I dug deeper into GitLab connectors and it looks like there are two ProGet issues here and a configuration issue.
I have added fixes for PG-2210 and PG-2211 and they will release tomorrow in ProGet 2022.9. Once you upgrade to 2022.9 and make the configuration updates, you should be able to pull GitLab packages via a connector.
Thanks,
Rich
Hi @torgabor_4445,
Can you please try something for me?
https://gitlab.com/api/v4/projects/12345/packages/npm/:_authToken
If that does not fix it, I have another idea, but I will need to check a few more things offline.
Thanks,
Rich
What version of BuildMaster do you have installed? Also, which extensions are you trying to update that are currently giving you that error?
Thanks,
Rich
Hi @tkolb_7784,
If you are hosting on a Windows machine, the easiest solution right now is to migrate your server to use IIS and then add an SSL binding to your site. If you do not want to purchase a new certificate and a self-signed certificate is too much work, you can use Let's Encrypt and configure it via win-acme.
If you do not want to use IIS, then you will need to use a reverse proxy to handle SSL connections. Any reverse proxy can be used, and a pretty simple one to configure is stunnel. Most reverse proxies can also be used with Let's Encrypt.
If you are hosting on Linux (Docker), then you will need to use a reverse proxy to handle SSL connections. We have a documentation page for different Linux-based reverse proxies, including an example for setting up NGINX. These reverse proxies also support Let's Encrypt.
Please let me know if you have any questions.
Thanks,
Rich
Hello,
This is related to a known issue that's been addressed in ProGet 6.0.19 and ProGet 2022.5, so your best bet is to upgrade and the issue will be resolved :)
The underlying cause is a few packages that have exceeded 2.2 billion downloads.
If the upgrade is impossible/difficult immediately, you can disable the connector as a workaround. Alternatively, you could block those packages with a connector filter and then upload them to your feed so that the counts won't come through the connector.
Thanks,
Rich
Hi @mike_2282,
Thank you for the feedback. I have made some changes to the documentation to address most of the issues you have brought up. Thanks again for the feedback!
Thanks,
Rich
Hi @mike_2282,
Thanks for alerting us to this. I'll get that added to our documentation. Please let us know the other issues you find and hopefully we can get those addressed in our documentation also.
Thanks,
Rich
Hi @v-makkenze_6348,
Can you please try to install the .NET 6.0 Web Hosting Bundle and see if that resolves your issue? Inedo Hub typically handles installing this, but I'm guessing that it did not detect your site in IIS.
Thanks,
Rich
Hi @pariv_0352,
I was able to fix the issue, PG-2160, and it will be released next week in ProGet 2022.2. If you would like to use the fix earlier, you can install a pre-release version of ProGet by installing ProGet 2022.0.2-CI.7 or higher. We have a walk-through on how to install prerelease products in our documentation.
Please let me know if you have any questions.
Thanks,
Rich
Hi @pariv_0352,
Thanks for all the information. I have been able to recreate the issue and I'm currently looking into it. I'll have an update for you soon.
Thanks,
Rich
I did some more research on this and I think I have been able to reproduce this issue. I have created a ticket, OT-477, to track the fix for this issue. I expect to have this fixed within the next couple of versions of Otter (2022.5 or 2022.6). It is possible that this is an issue with the git extension directly, but I will update you when I have a release date for the fix.
Thanks,
Rich
Would you be able to share a screenshot of your Git repository so we can see the file and folder structure? You can send them to support@inedo.com if you do not feel comfortable sharing on here. If you do email it to us, please include [QA-882] in the subject.
Thanks,
Rich
There are a couple of things that can cause these to go out of sync. If you navigate to Administration -> Raft Repositories and then click Browse to the right of your Git raft, do you see the scripts showing up in there? Also, which Git provider (GitHub, GitLab, etc...) are you using?
Thanks,
Rich
Hi @aries66_2180,
I just set up a clean ProGet 6.0.16 Docker instance and tested this, and it seems to be working for me. Can you send me an example image you are trying to push that fails? Also, would you be able to use a tool like Wireshark or Fiddler to record your requests when trying to push an image, and email it over to us at support@inedo.com with a subject of [QA-878] Docker push error?
Thanks,
Rich
Hi @aries66_2180,
If this happens on a clean Docker feed that is not set up to use common blob storage, then this is likely an issue with the image itself. Do all images have this issue or just that specific image? Are you using the Docker CLI to create this image or a different third-party tool?
Thanks,
Rich
Hi @aries66_2180,
We typically have seen this when there is a corrupted or orphaned blob in the database. Could you create a new temporary Docker feed, disable common blob storage, and attempt to push the image to that feed?
Thanks,
Rich
Hi @aries66_2180,
Can you please try running the "DockerGarbageCollection" scheduled task and then attempting the push again? This will do the same thing as the feed cleanup task, but for the common blob storage blobs.
Thanks,
Rich
Hi @aries66_2180,
Can you please check if you have common blob storage enabled for your Docker feed? You can do this by navigating to your Docker registry, clicking the "Manage Feed" button in the upper right, and then selecting the "Storage & Retention" tab. There, you should see a block indicating whether it is enabled.
Thanks,
Rich
Hi @chris-f_7319,
I'm sorry about that, I didn't realize the version of ProGet you are running was based on Mono (we switched to .NET 5 later in the lifecycle of ProGet 5.3). The command you will want to use is:
sudo docker exec -it proget mono /usr/local/proget/service/ProGet.Service.exe resetadminpassword
If mono doesn't work, try using /usr/bin/mono instead of mono.
Thanks,
Rich
Hi @phillip-t_2200,
Azure SQL Managed Instances should work fine. We don't have any direct test cases for them, but based on our SQL implementation, we don't anticipate any issues.
Thanks,
Rich
Hi @chris-f_7319,
The command that you show is different from the command I sent. Can you confirm you tried:
docker exec -it proget exec /usr/local/proget/service/ProGet.Service resetadminpassword
It has a slightly different syntax than what you provided. Also, which Docker host are you running?
Thanks,
Rich
Hi @chris-f_7319,
I think Dan was looking at the upcoming ProGet v2022 file structure. Can you please try:
docker exec -it proget exec /usr/local/proget/service/ProGet.Service resetadminpassword
Thanks,
Rich
You will want to specify the "Domain to Search" as gcloud.dom,LDAPuser. For the secure credential, you will want to use just a username and password, unless the user logs in with a suffix other than @gcloud.dom.
I think the issue is with the binddn. BuildMaster will connect to LDAP/AD using the root OU. If you require a CN and OU to be specified, that will not work out of the box. Are those needed to connect to your domain controller?
Thanks,
Rich
To specify a username/password to use to communicate with your domain, you need to:
1. Create a Username & Password secure credential (ex: ADDomainCreds)
2. Set your user directory to Active Directory (LDAP)
3. Set "Domains to Search" to Specific List
4. Enter your domain in the format <DOMAIN_SUFFIX>,<CREDENTIAL_NAME> (ex: kramerica.local,ADDomainCreds)
5. For the domain controller host, your domain name should work if DNS can resolve it (ex: kramerica.local), but if not, enter the IP address of your domain controller.
Please let me know if that works for you.
Thanks,
Rich
Thanks for sending over the information. I have identified the issue, OT-472, and we plan to release a fix in Otter 2022.3 that is due out next Friday. I'll let you know if anything changes.
Thanks,
Rich
Can you share your OtterScript for that operation? Or does this happen when you try to add that OtterScript script as an operation?
Thanks,
Rich
We released a new version of our scripting extension, Scripting 2.0.1. You should be able to update the extension or upgrade Otter to 2022.02 and this should fix the issue for you.
Thanks,
Rich
If you navigate to Administration -> Raft Repositories, then click "browse" to the right of your Git repository, does your raft load or do you get an error?
Also a couple of notes:
Please give these a try (including browsing your Git raft) and see if that works for you.
Thanks,
Rich
Hi @kichikawa_2913,
Can you show me what your extensions page looks like?
Also, when you start your container, if you watch the output, do you see any errors when loading the extensions?
Thanks,
Rich
Hi @fabian_7019,
Thanks for bringing this to our attention. I was able to recreate the error, and we will get this fixed in the next version of Otter, 3.0.25, which is due out next Friday. This appears to be an issue only when there is a single raft repository set up in Otter. As a workaround, if you create a second raft repository under Administration -> Raft Repositories, you will then be able to create new folders in the default raft.
The fix is being tracked in ticket OT-461.
Thanks,
Rich
@gurdip-sira_1271 said in Help with Git raft in Otter:
Could I not just do a git commit to the repo and then use the scripts in Otter?
Yes, you can commit your scripts directly to Git by adding them to the Scripts folder. You can modify your scripts directly in your Git repository, or you can use the editor directly in Otter. We also just added a new text editor based on Monaco (the same editor as VS Code) and a new visual editor for OtterScript, available as a preview feature in the latest version of Otter (3.0.24).
We don't have direct end-to-end steps on adding scripts directly in Git because that has not been the most typical way users have used this feature. Typically, a user adds the script via Otter and then edits it in Git.
The other way Git rafts are used is with Git branches. A raft is created for each Git branch, where the editing and testing of scripts are done in one branch and the production scripts are stored in another. This can become tricky, though, when calling scripts from different rafts in OtterScript.
Hope this helps!
Thanks,
Rich
I think the safer option would be to upload your scripts on the "Scripts" page in Otter. Navigate to the "Scripts" page, click "Add Script", and then select "Upload Scripts & Assets". You can then select your Git raft and bulk upload all your script files. This way, they all get put in the correct folder automatically.
Thanks,
Rich
Just as a follow-up on the solution: this error came from a Git repository monitor. The Docker image for BuildMaster does not include Git out of the box, so you will need to either install Git on the running container or add a BuildMaster agent/SSH server and run the monitor there. The other issue was that the repository monitor was using a secure resource from a specific application, so the repository monitor needs to specify the application it uses as well.
Hi @luke_1024,
It looks like the error happens when trying to decompress the control file in your deb package. cargo-deb compresses and attaches it differently than dpkg-deb does. I took a quick look through cargo-deb's docs but couldn't find anything to specify the compression. Something to try would be using the --fast flag when running cargo deb.
Please let me know if --fast fixes it. If not, I will set aside some time next week to debug through these packages more.
Thanks,
Rich
Thanks for confirming that for me. How often do you see this issue? Is it always for the same application or does it happen on any application?
Thanks,
Rich