Hi @shenghong-pan_2297 ,
What version of ProGet are you using? There is no ProGet 5.8 :)
I tested this in ProGet 2023 and there are no issues
Alana
Hi @sebastian ,
What setting do you have for unassessed vulnerabilities? I.e., under SCA > Vulnerabilities > "Vulnerability Download Blocking Configuration" - I'd like to see the Global rule and any Feed-specific rules (if they exist).
Also, the "Manage Vulnerability Sources" is kind of confusing.
Multiple vulnerability sources are definitely a little weird/confusing, esp if you're familiar w/ ProGet 6 and earlier...
Feeds still need to be associated with a vulnerability source, but we now call this association "download blocking".
Hi @sebastian
There is no plan to add user-configurable scheduled job capabilities to ProGet, and it's unlikely we would consider that, since they are really hard to support. We do have our Otter product that's designed for that.
However, in ProGet 2022, we considered a periodic "check" for packages in a feed against the source; the use case was "is a newer patch version available" - and if so, then an issue would be raised about using an out-of-date package. We obviously didn't implement that.
But it seems we could take a similar approach and then also check for unlisting/deprecation as well. This might be something that comes up in our ProGet 2024 planning.
But in either case, it still involves lots and lots of web calls to check each package against the source - so I would start with a script and see what you find out.
Thanks,
Alana
Hi @rmusick_7875 ,
Unless you build your own GUI client, I don't think what you're doing is going to be possible or feasible to implement; dependencies need to be in the same feed.
I suppose you could try "Unlisting" the packages, but I don't know if the Chocolatey GUI client uses the Listed indicator to determine whether a package should be displayed.
Cheers,
Alana
Hi @dan-brown_0128 ,
It's hard to say exactly what's going on without seeing the specifics, but I think I might know the cause.
In ProGet, Projects & Releases are not associated with feeds, only package IDs. This means that, if you have the same package in multiple feeds that have SCA features enabled, ProGet will pick one of those "at random" and link to it in the UI - and I guess this selection is wrong in your case? That is, if you navigate to another feed with that package, does it show the vulnerabilities you are seeking?
If you disable the "SCA Feature" on the Feed Management page, then it should link correctly.
Thanks,
Alana
When text is written to the stderr stream, Otter will interpret this as an error. Unfortunately a few tools (including git) like to write to the stderr, even though it's not an error, and will use the exit code to indicate an error instead.
There are a handful of ways to deal with this:
- ErrorOutputLogLevel in the SHExec operation
- try/catch/force normal in OtterScript
- Redirecting, which is strongly recommended; you can do it with 2>&1 in your script, then test the exit code of the tool and write to the error stream yourself so Otter can pick it up as an error (see the sketch below)
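For example, a rough sketch of the redirect approach (aside from ErrorOutputLogLevel, the SHExec parameter names here are from memory, so double-check them against the docs):
SHExec
(
    Text: >>
if ! git fetch origin 2>&1; then
    echo "git fetch failed" 1>&2
    exit 1
fi
>>,
    ErrorOutputLogLevel: Warning
);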
Cheers,
Alana
There's no difference between Enterprise and Basic edition with regards to feed behavior like this.
I'm not totally certain on the inner workings of --skip-duplicate in NuGet, but I believe that it simply suppresses/ignores errors related to pushing. You would have to review the HTTP traffic using a tool like Fiddler to be sure.
I would check permissions in ProGet; it's most likely that your credentials (API Key) on one server have the Feed_OverwritePackage permission, and therefore no error is thrown when you push the package.
Hope that helps!
Alana
Hi @philippe-camelio_3885 , thanks for reporting this; definitely looks like a regression of some kind. We'll get it fixed ASAP via BM-3860 -- but it sounds like you found a temporary work-around for now :)
Hi @Justinvolved ,
You should be able to see some kind of error message in the Windows Event Log, but my guess is that the account doesn't have access to the BuildMaster SQL Server database... you'll need to grant that.
Cheers,
Alana
Hi @bardmorgan_7142 ,
That error seems to be a UI-related error with the Build Script Template editor. Basically that __ahempty value is meant to trigger a validation warning to force you to pick a package source.
When you edit the build script, what do you have set for the package source drop-downs? It should be a ProGet feed, at least for publishing. But it's not required.
Also, if you "Edit As OtterScript", you should be able to see where the __ahempty is being added to the script, and hopefully fix. Seeing taht would help us debug it :)
Cheers,
Alana
Unfortunately I have no idea what format gpg is looking for, or how these could be used locally by Debian. We mostly know the API/repo format, not so much the client tooling.
We use the BouncyCastle encryption library. We simply use that byte array like this:
var keys = new PgpSecretKeyRingBundle(this.Data.SecretKeys);
using (var output = new MemoryStream())
{
    using (var armor = new ArmoredOutputStream(new UndisposableStream(output)))
    {
        if (!detached)
        {
            // for clearsigned (non-detached) signatures, emit the original text first
            armor.BeginClearText(HashAlgorithmTag.Sha512);
            armor.Write(data, 0, data.Length);
            armor.EndClearText();
        }

        // sign the data with each secret key in the bundle
        foreach (PgpSecretKeyRing ring in keys.GetKeyRings())
        {
            var key = ring.GetSecretKey();
            var signer = new PgpV3SignatureGenerator(key.PublicKey.Algorithm, HashAlgorithmTag.Sha512);
            signer.InitSign(PgpSignature.CanonicalTextDocument, key.ExtractPrivateKeyRaw(null));
            signer.Update(data, 0, data.Length);
            signer.Generate().Encode(armor);
        }
    }

    // strip CR bytes so the armored output uses Unix-style line endings
    return output.ToArray().Where(b => b != '\r').ToArray();
}
Beyond that, no idea how they work. Probably not very helpful, but just FYI!
Hi @hwittenborn ,
For Linux/Docker, it's passed-in as an environment variable:
https://docs.inedo.com/docs/installation-linux-supported-environment-variables
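For example, when running the container (a sketch; I'm writing the variable name from memory, so double-check it against that page):
docker run -d -e PROGET_ENCRYPTION_KEY=<your-hex-key> ... proget.inedo.com/productimages/inedo/proget:latest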
It's possible you don't have one, and in that case, the data won't be encrypted. Note that SecretKeys is just a byte[] stored as base64.
Alana
@hwittenborn there isn't a way to download the keys once created; it's probably not something we'll add to the UI in the near future since it doesn't seem trivial
The keys are indeed stored in the SecretKeys, but they're encrypted using the EncryptionKey stored in your ProGet configuration file.
You can try to decrypt them with this method (note: use EncryptionMode.Aes128)
/// <summary>
/// Decrypts data using the specified encryption mode.
/// </summary>
/// <param name="data">The data to decrypt.</param>
/// <param name="mode">Method used to decrypt the data.</param>
/// <returns>Decrypted data.</returns>
/// <exception cref="ArgumentNullException"><paramref name="data"/> is null.</exception>
/// <exception cref="ArgumentOutOfRangeException"><paramref name="mode"/> is invalid.</exception>
public byte[] Decrypt(byte[] data, EncryptionMode mode)
{
    if (data == null)
        throw new ArgumentNullException(nameof(data));

    byte[]? key = mode switch
    {
        EncryptionMode.Aes128 => this.Aes128Key ?? throw new InvalidOperationException("Cannot decrypt value; there is no legacy encryption key defined."),
        EncryptionMode.Aes256 => this.Aes256Key ?? throw new InvalidOperationException("Cannot decrypt value; there is no AES256 encryption key defined."),
        EncryptionMode.None => null,
        _ => throw new ArgumentOutOfRangeException(nameof(mode))
    };

    if (key == null)
        return data;

    // the IV is split across the payload: the first 8 bytes and the last 8 bytes
    var iv = new byte[16];
    Buffer.BlockCopy(data, 0, iv, 0, 8);
    Buffer.BlockCopy(data, data.Length - 8, iv, 8, 8);

    // everything between those two 8-byte halves is the ciphertext
    using var buffer = new MemoryStream(data.Length - 16);
    buffer.Write(data, 8, data.Length - 16);
    buffer.Position = 0;

    using var aes = Aes.Create();
    aes.Key = key;
    aes.IV = iv;
    aes.Padding = PaddingMode.PKCS7;

    using var cryptoStream = new CryptoStream(buffer, aes.CreateDecryptor(), CryptoStreamMode.Read);

    // the plaintext length is prefixed to the decrypted stream
    var output = new byte[SlimBinaryFormatter.ReadLength(cryptoStream)];
    int bytesRead = cryptoStream.Read(output, 0, output.Length);
    while (bytesRead < output.Length)
    {
        int n = cryptoStream.Read(output, bytesRead, output.Length - bytesRead);
        if (n == 0)
            throw new InvalidDataException("Cannot decrypt value; stream ended prematurely.");
        bytesRead += n;
    }

    return output;
}
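For example, decrypting the stored value might look like this (a hypothetical sketch: secretKeysBase64 stands in for the SecretKeys value from the database, and config for whatever object exposes the Decrypt method above with your Aes128Key loaded from the ProGet configuration file):
// decode the base64 database value, then decrypt with the legacy AES-128 key
var encrypted = Convert.FromBase64String(secretKeysBase64);
var decrypted = config.Decrypt(encrypted, EncryptionMode.Aes128);
File.WriteAllBytes("secret-keys.bin", decrypted);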
Hi @Justinvolved,
You can create Global pipelines and scripts, which you can then use across applications.
You can see this in action here, in one of our extensions:
https://buildmaster.inedo.com/applications/18/scripts/all?global=False
Click on "Global (shared)" to see all the scripts. Same is on the Pipeliens page.
Cheers,
Alana
Hi @cole-bagshaw_3056 ,
That message is displayed when the framework (operating system) method Directory.Exists returns false. This means that ProGet cannot access that directory, typically due to permissions or other access problems. No detail is provided to us beyond that.
Unfortunately I don't have any tips/troubleshooting ideas on why a user/application would not be able to "see" a directory.
My guess is that it has something to do with your mounting configuration, and how you've mounted the volume in Docker. You may need to SSH into the container and see where the mapped volume actually is, etc. But that's just a guess...
Cheers,
Alana
Hi @guyk ,
Can you bypass the Squid proxy and go directly to ACR? I saw a blog post a while ago where someone said something about proxies being an issue:
https://faultbucket.ca/2022/05/aks-image-pull-failed-from-proget/
Thanks,
Alana
Hi @chuck-buford_5284 ,
Thanks for letting me know; can you tell me which queries were deadlocked? We shouldn't see any deadlocks, but I guess it'd be possible for SELECT * FROM [NuGetFeedPackageVersions_Extended] to deadlock on itself, depending on the query plan SQL Server uses.
There are a few other things we can try, but we can't repro this at all, even in a test lab that's just hammering the database. Are you using SQL Server Express (i.e. what Inedo Hub installs by default)? It should work the same of course...
Caching packages uses that "CreatePackage" method, so it's basically the same thing as installing a package, I suppose.
Cheers,
Alana
Hi @PMExtra ,
Looks like this didn't make it to the 2023 codebase; I've just merged it in via PG-2388 (shipping this Friday in ProGet 2023.8).
Cheers,
Alana
@mness_8576 thanks! We definitely welcome feedback on the UI/UX - this is a new feature, so there's a lot of room to improve :)
Hi @rochishgvv_4077 ,
There is no way to continuously sync to a NuGet feed.
You can create a "Connector" to your GitLab package registry, so that the packages are always on demand: https://docs.inedo.com/docs/proget-feeds-connector-overview
You can download all the packages from a Connector using a Feed Downloader:
https://docs.inedo.com/docs/proget-feed-importing
You can configure GitLab to push to ProGet. We don't have any info on how to do that, however.
Cheers,
Alana
Hi @mness_8576 ,
For now, archiving is the way to do it. Looking at the code too, it doesn't look like there's even an API to do it...
Here is the code that BuildMaster uses, which clearly just sets the archive flag:
/// <summary>
/// Creates or updates the specified release with the specified data.
/// </summary>
public async Task EnsureRelease(string projectName, string releaseNumber, string? releaseUrl, bool? active, CancellationToken cancellationToken = default)
{
    using var response = await this.http.PostAsJsonAsync(
        "api/sca/releases",
        new
        {
            project = projectName,
            version = releaseNumber,
            url = releaseUrl,
            active
        },
        cancellationToken
    ).ConfigureAwait(false);

    response.EnsureSuccessStatusCode();
}
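So calling it with active set to false is what archives the release; for example (a sketch, where client is a hypothetical wrapper exposing the method above):
// archive release 1.2.3 of MyProject by clearing the active flag
await client.EnsureRelease("MyProject", "1.2.3", releaseUrl: null, active: false);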
Otherwise, we don't have automated deletion or retention policies for archived SCA releases; they don't take up much space (relatively speaking), and we didn't want to commit to retention rules so early on in the feature.
If they become a problem (UI, performance, etc.), it's easy enough to delete a bunch via SQL for the time being... and that'll help us learn how to create policies. And then we can add it to the API and all that :)
Cheers,
Alana
Hi @scroak_6473 ,
I'm not sure what the issue is, and the errors are very peculiar. I assume that everything works until you restore the database?
After restoring the database, there are a few paths I would check under Admin > Advanced Settings:
- C:\Program Files\ProGet\Extensions
- C:\ProgramData\ProGet\Extensions
- C:\ProgramData\ProGet\ExtensionsCache

Hi @chuck-buford_5284 ,
Thanks; that's exactly what I would have looked for in the file, so thanks for sending that.
How often are these coming up? What sort of hardware are you working with? Are you able to reproduce this consistently?
The query pattern implies that there's heavy usage while uploading or deleting packages; we spotted some potential issues earlier, but wanted to wait to confirm something else.
We have some optimized versions of FeedPackageVersions_DeletePackageVersion and FeedPackageVersions_CreateOrUpdatePackageVersion, but we didn't ship them just yet. Can you try them out? Just run the attached queries in https://inedo.myjetbrains.com/youtrack/issue/PG-2387
Cheers,
Alana
Hi @chuck-buford_5284 ,
I'm surprised to see these on v2023.7, but it's an issue we're working through (it's an entirely new indexing system).
Can you provide us with your deadlock reports?
It should be on your SQL Server, under Management > Extended Events > Sessions > system_health > package0.event_file. Then you can click on Filter (or CTRL-R) and add a filter for Field name = xml_deadlock_report.
Ultimately what we're looking for are the XML files, specifically which two queries are deadlocking.
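If you'd rather skip the UI, a standard system_health query like this should also pull the reports directly (plain SQL Server, nothing ProGet-specific):
-- read deadlock reports from the built-in system_health Extended Events session
SELECT CAST(event_data AS xml) AS deadlock_report
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
WHERE object_name = 'xml_deadlock_report';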
Here are some screenshots on how to do that:
https://www.mssqltips.com/sqlservertip/6430/monitor-deadlocks-in-sql-server-with-systemhealth-extended-events/
Cheers,
Alana
Hi @jimbobmcgee ,
The "Bad handshake" error is occurring (as you probably guessed) while trying to establish SSL tunnel. This basically occurs at the operating system level, and is usually a total mystery. The underlying error seems to be coming from SSPIWrapper.AcquireCredentialsHandle, and is just this:
(0x8009030D): The credentials supplied to the package were not recognized
I have absolutely no idea what that means, but... there's a ton of things I found via searching. A few articles suggest that it means a lack of access to read the private key.
Maybe it's in the wrong folder (on the Otter server)? I think it needs to be in the "Personal" folder of "Machine" certificates, but it sounds like it's definitely something wrong with Otter reading the certificate...
Let us know if you find out more!
Hope that helps,
Alana
Hi @jimbobmcgee,
We don't currently support this via the API; it's not terribly complex/risky to do, but also not trivial.
I'll add this to the "Otter 2023 wishlist", and it would probably also get pushed to BuildMaster as well, since we want the API to be the same in both products.
FYI - we would most certainly prioritize/implement this for a paying user (*wink* *wink*), but for now we'll keep it on the internal wishlist.
@jimbobmcgee thanks for all the help on this! While not a complicated fix, it might not be trivial - so we put this on our internal BuildMaster 2023 board. We're in active development, and will review/fix this there, then look to backport it to earlier versions of InedoCore.
You're more than welcome to give it a stab as well...
And feel free to submit a PR if you get something working; lot easier to review/test that way :)
Quick comments...
I assume that fileOps.NewLine uses the platform of the current for server, rather than the platform on which Otter is running
Correct -- it's related to the agent that server uses. For PowerShell Agent, Local Agent, and Inedo Agent, NewLine returns Environment.NewLine. For SSH Agent, it's \n.
However, this probably isn't accurate anymore; I suspect the Inedo Agent value is incorrect if BuildMaster or Otter is running on Linux and the Inedo Agent is running on Windows. But I'm not sure.
public bool RawMode { get; set; }
Good point --- and to be honest, I was a little surprised by the fact that these operations even rewrote newlines. It probably came from a support request... my guess is for when you're passing in literals or something :)
In any case, we should probably switch to an enumeration like the template operation has (TemplateNewLineMode), but with a fourth option (None) in addition to those.
Anyways -- we'll review a bit later if we don't hear back otherwise :)
Hi @jimbobmcgee,
I fixed this via OT-493 ("Custom Server Targeting should be selectable if the script type is OtterScript"); it was a regression in the job template editor.
Am I right in saying that for server is only expected to work for Custom server targeting, though?
Correct; as of Otter v3 it's no longer supported in other scenarios.
And for server can take a scalar variable argument, such as...
Also correct, so the code you shared should work!
Hi @MF-60085 FYI just sent you an email :)
Let us know when you upload and we'll investigate from there!
Hi @MF-60085 ,
This message is pretty strange, and I'm not quite sure how it's possible.
Are you okay sending us your database? That's going to be easiest.
It's probably too big to attach, but you can send it using a "large file transfer" type service. Or we can email you a secure link to our SharePoint server to upload it.
Cheers,
Alana
Just to confirm... is the issue just on the "Show Secret Fields" page?
SSH keys are a little weird, in that they're stored in binary; so I think that page is mistakenly trying to display them as a string (hence the block characters).
Cheers,
Alana
@jimbobmcgee said in Basic arithmetic in OtterScript:
I can only prompt for variables with a job template, but the option for Custom server targeting is not available in job templates.
That doesn't seem right! I'll investigate and report back :)
Hi @jimbobmcgee
To be honest this is pretty old code and I'm not entirely sure how it works :)
I wonder if this is the issue in Apply-Template?
https://github.com/Inedo/inedox-inedocore/blob/master/InedoCore/InedoExtension/Operations/General/ApplyTemplateOperation.cs#L90
I'm guessing the original string must have \r\n in it or something.
Here's the Create-File code, where there seems to be another replacement:
https://github.com/Inedo/inedox-inedocore/blob/master/InedoCore/InedoExtension/Operations/Files/CreateFileOperation.cs#L77
I didn't have much time to look closely, but wanted to share the code and see if you spotted anything!
Cheers,
Alana
Hi @hwittenborn ,
The API should output valid data, but I see there was a typo in the documentation: serviceStatus is either OK or Error, and the status detail will contain the error text.
Cheers,
Alana
Hello, if the ProGet server is running (and didn't crash), and you don't see any errors on the ProGet server... then the issue is between the server and the client.
You can try to trace this using a tool like Wireshark or Fiddler, but to be honest the error could be anything, including a bad WiFi/network connection. It's not ProGet- or Chocolatey-specific, so you can search broadly for "how to troubleshoot unable to connect to remote server" for lots and lots of tips on how to resolve such an error.
Cheers,
Alana
Hi @jhaas_7815,
The message "Unable to connect to the remote server" basically means that Chocolatey client can't connect to ProGet at https://xxx:8625/nuget/xxx/'. If you entered the same URL in your browser, I would expect a similar error.
I would check to make sure that the ProGet web site is running. You may need to restart IIS, etc. There's probably some kind of error on the IIS side, but it's really hard to guess what it is.
Cheers,
Alana
Hi @jimbobmcgee ,
You're right, you would have to switch to localhost; however, you wouldn't have to "switch back". In other words, this would work fine:
for server MyLinuxServer
{
    set $MyResult = "";
    for server localhost
    {
        set $MyResult = $PSEval((5 + 3) * 4 / 3^0.5);
    }
}
You could wrap it in a Module, so you could do this:
call MyMaths(compute: (5 + 3) * 4 / 3^0.5, results => $MyResults);
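The module itself might look something like this (a rough sketch; double-check the module and output-parameter syntax against the OtterScript docs):
module MyMaths<$compute, out $results>
{
    for server localhost
    {
        set $results = $PSEval($compute);
    }
}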
Otherwise, if you were so inclined, you could create a variable function in C# that could handle this as you'd expect. But that's quite complex (parsing it, finding a parsing library, etc.), and we're hesitant to explore this any further as a result.
Cheers,
Alana
Hi @philippe-camelio_3885 , there's definitely a bug... it was just hard to reproduce.
The issue was that Otter was looking at the wrong credential to determine if function access was allowed...
Hi @ForgotMyUsername ,
I can't reproduce #1, and I don't have enough details (specific error messages, stack traces, etc.) to guess what the issue could be. Same for #2 - I would need to see the specific OtterScript and the errors you're getting to take a guess at where the problem is.
One thing that would be helpful is if you could create step-by-step reproduction information. For example:
- TOM + JERRY
- XYZ

Short of that, stack traces and error logs are really important. There are a lot of components working together, and it's hard to guess based on general descriptions :)
Cheers,
Alana
@philippe-camelio_3885 thanks for confirming that it was broken, I took a closer look and figured it out :)
It will be shipped in the next maintenance release as OT-492 (and also in BuildMaster via BM-3846)
@ForgotMyUsername great news, thanks for letting us know :)
Hi @ForgotMyUsername ,
I'm not sure I know where the issue is... can you provide the OtterScript? Or the specific errors/logs that triggered that message?
The with isolation option is supported, and there's a checkbox for it on the Advanced tab of the General Block.
Cheers,
Alana
I can't reproduce this at all...
- test credential with Username = myuser, Password = mypass, "function usage allowed" checked
- Credtest.otter with the contents below

So it seems to be working as expected. And as Rich mentioned, the code seems fine and hasn't been changed.
Here's the script:
set $CredentialUser = $SecureCredentialProperty(test, Username);
set $CredentialPwd = $SecureCredentialProperty(test, Password);
Log-Information User is $CredentialUser, Password is $CredentialPwd;
Actually, we reviewed this for ProGet 2023, but ultimately decided not to do it.
The main reasons were:
(1) it wouldn't work well for remote packages, since we don't have the package file locally
(2) different behavior for remote and local/cached packages is confusing to explain and adds a support burden
(3) it was not trivial with the way README was implemented on NuGet and our abstractions
(4) not in demand - we've had one other request (5 years ago) for this, and it was before the spec was finalized
I suppose you could say that "demand has doubled", but probably best to wait if we have more requests for this... given that it's not trivial and the behavior may be quirky.
Cheers,
Alana
Hi @k-gosciniak_5741 ,
We've got no clue what's causing that, but it's definitely some kind of IIS configuration issue. Ultimately, I don't think the request for that file is being passed to ProGet; IIS is trying to find the file on disk instead (which it obviously won't).
I would guess that if you put a file on disk, that file would be served via that URL. It's just a guess.
Other things to look at... MIME mappings; I've seen those do strange things. Maybe there are other IIS modules interfering. It's really hard to guess.
It might just be easiest to switch to the Integrated Web Server; you can just uninstall/reinstall ProGet. That way, IIS won't be used at all.
Cheers,
Alana
I don't know where the "extra commas" came from, but Otter/BuildMaster wouldn't have generated them, as far as I can tell. It's not like there's a flag we can pass to our JSON library to say "make invalid format".
But in any case, the API response looks valid to me, and it seems to be importing okay?
The Property "Value" was not erxpected message is what you would get if you tried to import the "bad" (extra commas) JSON document.
Maybe try syncing again? The sync is really just an automated version of the manual process, and uses the same code base.
Cheers,
Alana
... hmm that's really weird. Those commas aren't showing up on my instance.
Can you go to Admin > Infrastructure > Export; what do you see showing up there?
That uses the same JSON-serialization code as the API.
You can also copy/paste that JSON into Admin > Infrastructure > Import, and do a kind of manual sync.
Hi @k-gosciniak_5741 ,
That's weird; I would try to restart the application pool. Then, try switching from "Classic" to "Integrated" (or vice versa) on the application pool. Both should work... but "Integrated" seems to work best.
Cheers,
Alana
Thanks for clarifying!
I'm not sure why that won't work as a "normal" user, but I'm guessing it has to do with the /etc path or something.
But in any case... I don't think it's possible, at least with our knowledge of the underlying technologies. We rely on the "subprotocol" of SFTP for most file-based operations, and we rely on libssh2 to handle SFTP communication. How those protocols/libraries work is kind of a black box at that level.
On the plus side, you should be able to write "ensure" scripts using Bash :)
Cheers,
Alana
I would expect the "could not be loaded as JSON" message if the endpoint was giving some kind of error.
What happens when you query:
https://otter-2023.ocapiat.fr/api/infrastructure/all/list?key=*****
Is there a kind of error there?
Thanks,
Alana