@mcascone thanks for the heads up! It was quite a big migration effort, but ultimately it will be a lot easier to maintain.
Looks like the UPack and Romp documentation got missed during the migration :( We will add it to the new site ASAP
Hello; can you share this file with us?
Feel free to email it to support at inedo dot com, but please add [QA-586]
to the subject, so we can track this internally.
@arozanski_1087 no problem!
And by the way, the pgscan tool is open source, so if you see opportunities to improve it, or want to develop something on your own, please don't hesitate to use the sources - https://github.com/Inedo/pgscan
@coskun_0070 glad you were able to narrow this down some more
There shouldn't be a problem using remote SQL Server instances with the Inedo Hub; this is definitely a scenario we test and support. If you were to put in a bogus connection string, it should raise an error during the "Validating Configuration..." step.
My new guess is that something is blocking the connection later down the line, which is causing this unexpected edge case? I don't know...
@philippe-camelio_3885 I had a chance to look at this a little more closely, but just can't get this to reproduce as a problem. First, I want to point out that no variable is defined on INTEGRATION:
"environments": [
{
"name": "INTEGRATION",
"variables": {}
},
{
"name": "PRODUCTION",
"variables": {}
},
{
"name": "DEVELOPPEMENT",
"variables": {
"SQLInstances": "@(BOFRCT, ITRRCT, XTRRCT)"
}
}
],
Anyways, I imported your infrastructure (converted everything to localServer), deleted the variable from the role, and added it to INTEGRATION.
On VM008004 (in DEV), I got the expected error: Could not resolve variable %SQLInstanceParams. If I delete the variable (SQLInstances) on the server, that variable is resolved because it's defined in DEV, too.
On VM008007 (in INT), I got the (different) expected error: Key StaticPorts not present in the map.
When I delete @SQLInstances (which is defined on the server), then I get the error "Could not resolve variable @SQLInstances." because it's not defined on the environment.
@coskun_0070 that's really strange.
Unfortunately, I don't know what we can do from here. I know it's frustrating to hear that, but you're currently the only one experiencing this, and it's only on your server...
Perhaps it's failing while accessing the local package registry? It's a guess...
@coskun_0070 unfortunately (or fortunately) we only have one user who's experiencing this (you), and we can't invest the one-on-one time to help further.
Can you try the ProcMon route? If you monitor events from the InedoHub process, you'll see files downloaded from proget.inedo.com to a temp folder, and then extracted. If you can monitor what happens to those extracted files, it will give you a clue as to what's deleting them.
@coskun_0070 unfortunately there's no other code we can add
It's very clear to us where the problem is occurring; the package files (zip files) are extracted to a folder. No error occurs during the extraction. And then a moment later, the files are not there.
Can you use a tool like ProcMon to see all that's happening behind the scenes? You may have something else interfering with the files after extraction
https://docs.microsoft.com/en-us/sysinternals/downloads/procmon
FYI; we're going to investigate this some more, and will try to see why it's happening here, but not for me
Hi @arozanski_1087,
Whatever consumer-package-name and consumer-package-source you set as parameters to pgscan is what the packages will be associated with.
Basically, the utility has the effect of doing this...
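Roughly speaking, the effect is something like this in C# terms. This is an illustrative sketch only: the endpoint URL, payload shape, and variable names below are my shorthand, not the exact ProGet API, so check the pgscan sources for the real call.

using System.Net.Http;
using System.Text;
using System.Text.Json;

// pgscan detects your application's package dependencies, then records your
// application as the "consumer" of each one in ProGet.
var consumerPackageName = "MyApp";          // the consumer-package-name you passed
var consumerPackageVersion = "1.0.0";       // the consumer-package-version you passed
var consumerPackageSource = "MyNuGetFeed";  // the consumer-package-source you passed

var detectedDependencies = new[] { (Name: "Newtonsoft.Json", Version: "12.0.3") };

using var http = new HttpClient();
foreach (var dep in detectedDependencies)
{
    var record = new
    {
        packageName = dep.Name,
        packageVersion = dep.Version,
        consumerPackageName,
        consumerPackageVersion,
        consumerPackageSource
    };
    await http.PostAsync(
        "https://proget.example.com/api/dependents/record", // hypothetical URL
        new StringContent(JsonSerializer.Serialize(record), Encoding.UTF8, "application/json"));
}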
To get the behavior you want, you may need to call pgscan multiple times with different "consumer" information, or even modify the tool / customize something that calls the API directly.
Good point about the delete button; I added a change (PG-1957) where we'll get that as a UI addition.
Hello,
This typically comes from having one ProGet Free instance connect to another ProGet instance.
You can clear the license violations here: https://docs.inedo.com/docs/proget/administration/license
Cheers,
Alana
Hello,
I'm not able to reproduce this case at all
The message "This operation is only valid when run against an SSH agent." is logged if you try to run SHEXec
against a non-linux server, and I can consistently reproduce that.
But when I switch to an SSH server, it works totally fine...
I wonder if there's something simpler at play, like the wrong $ServerName is in context or something? Nothing seems that way from the code you shared, but... can you try a very simple repro case, like an OtterScript that looks like...
for server myLinuxServer
{
    SHExec echo hello world;
}
Thanks,
Alana
Hi @arozanski_1087, happy to help!
Hopefully we can update the documentation with these improvements.
What actually belongs in the consumer-package-name and consumer-package-version fields?
This is supposed to represent your application. pgscan will detect the packages that your application uses, like Newtonsoft.JSON, and the version of that package.
How does pgscan handle subprojects and submodules when I call it from the .sln level?
When you point pgscan to a .sln file, it will parse the file and look for projects. Under each project, the tool will look for packages.config (which is the older style project format) and then project.assets.json (which is the newer style).
How do I remove dependencies from packages once I register them?
This isn't currently supported it seems (I don't see a delete button in the UI), but if you don't mind going to the database, you can just do DELETE [PackageDependents] and then all the rows are cleared.
Hello;
Thanks for reporting this bug/layout issue! I just made a simple change (PG-1956) to fix this, and it will be available in the next maintenance release (5.3.29)
Hello;
Basically, this is failing very early on in the installation process, during the "package extraction" step.
This would most likely be caused by only one of two things:
It might also be related to temporary file locking, so try rebooting to see if it helps.
Otherwise, check what could be preventing those package files from being extracted; it's typically antivirus quarantine, so check the log files for that.
Please let us know what you find!
FYI; in https://forums.inedo.com/topic/3088/ the user said they disabled Windows Defender and it worked
@joshuagilman_1054 that's a really large Chocolatey file (NuGet package), so you may want to rethink your approach. It'll cause some pain across the board as you try to download and install that file as well. Perhaps have your Chocolatey package download an asset that you've stored in ProGet instead?
In general, large files are tricky to publish reliably over a single HTTP request. This is true across the board, even when uploading files to places like Amazon S3; those rely on a chunked uploading process... but the NuGet API doesn't support that.
Otherwise, there's no limit imposed by ProGet itself, and you've found the settings that ASP.NET imposes. There could be some other limitation happening, but it's hard to say where; apparently it varies by operating system version, and it might even be middleware (like a proxy/firewall).
The message "there must be exactly one package" is unexpected; I would instead expect 'request length exceed". In any case, that message just means that no valid files were attached to the request, which can happen if it was suddenly cut off.
All told, when it comes to really large files (even asset directories), a Drop Path approach may be easiest to use.
An ExecuteOperation is the simplest Operation class available. You just implement the ExecuteAsync method, and that code is run by the execution engine when the Operation is invoked in your OtterScript. This (and all operations) have an ExecuteCommandLineAsync helper method, which sends a command line to the server in context.
To interact with the server in context, you need to use the Agent property on the executionContext that's passed into the ExecuteAsync method. Because there are a lot of agent types and versions (Inedo Agent, Inedo Agent on Linux, PowerShell Agent, SSH Agent, Local Agent, etc.), you can use the TryGetService method to see if the agent supports what you want to do. Not all agents support all services. I think you've already seen how this works.
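For instance, a bare-bones operation might look roughly like this. The names and signatures here are from memory, so treat this as a sketch and check the SDK docs; IFileOperationsExecuter is one of the agent services.

using System.Threading.Tasks;
using Inedo.Agents;
using Inedo.Diagnostics;
using Inedo.Extensibility;
using Inedo.Extensibility.Operations;

[ScriptAlias("Ensure-ExampleDirectory")]
public sealed class EnsureExampleDirectoryOperation : ExecuteOperation
{
    public override async Task ExecuteAsync(IOperationExecutionContext context)
    {
        // Ask the agent in context for its file-operations service;
        // not every agent type supports every service.
        var fileOps = context.Agent.TryGetService<IFileOperationsExecuter>();
        if (fileOps == null)
        {
            this.LogError("The agent in context doesn't support file operations.");
            return;
        }

        await fileOps.CreateDirectoryAsync(@"C:\Example");
        this.LogInformation("Directory created (or already present).");
    }
}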
One of the agent services available is IRemoteJobExecuter. This essentially just performs a long-running task on the remote server, via the agent. For this service to be supported, the agent must support .NET; I think all agents do at this point (even SSH) thanks to .NET Core.
A RemoteJob is the class used to describe what this long-running task is. It contains information about what you want to do, has its own ExecuteAsync method that will run on the server, and can stream log messages back to BuildMaster/Otter. When defining a RemoteJob, you need to serialize/deserialize everything on your own.
For example, if your job simply wanted to add two numbers together, you'd need to Serialize the two numbers, then Deserialize them, then serialize a response, and deserialize the response. It's a bit complex.
This is where the RemoteExecutionOperation comes in.
It has a "lifecycle"of three methods:
BeforeRemoteExecuteAsync
(optional, happens on BuildMaster/Otter server)RemoteExecuteAsync
(required, executes on remote server)AfterRemoteExecuteAsync
(optional, happens on BuildMaster/Otter server)Hope all this helps!
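Here's that sketch; the class and method names are from memory (in recent SDK versions I believe the base class is spelled RemoteExecuteOperation), so double-check against the SDK before copying:

using System.Threading.Tasks;
using Inedo.Diagnostics;
using Inedo.Extensibility;
using Inedo.Extensibility.Operations;

[ScriptAlias("Add-Numbers")]
public sealed class AddNumbersOperation : RemoteExecuteOperation
{
    [ScriptAlias("A")]
    public int A { get; set; }

    [ScriptAlias("B")]
    public int B { get; set; }

    // Optional: runs on the BuildMaster/Otter server, before the job is dispatched.
    protected override Task BeforeRemoteExecuteAsync(IOperationExecutionContext context)
        => base.BeforeRemoteExecuteAsync(context);

    // Required: runs on the remote server; inputs and the return value are
    // serialized/deserialized for you, unlike with a raw RemoteJob.
    protected override Task<object> RemoteExecuteAsync(IRemoteOperationExecutionContext context)
    {
        this.LogDebug($"Adding {this.A} + {this.B} on the remote server...");
        return Task.FromResult<object>(this.A + this.B);
    }

    // Optional: runs back on the BuildMaster/Otter server with the result.
    protected override Task AfterRemoteExecuteAsync(object result)
    {
        this.LogInformation($"Sum was {result}.");
        return Task.CompletedTask;
    }
}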
That message is basically the result of a bug in our error-handling logic; an error is occurring while displaying the error. This can happen due to certain IIS or server settings, and in later versions you should see a more appropriate message.
It's hard to say what the problem is, but if you didn't change anything, I would just reboot, and the problem might go away. Changing App Pool settings can also help (like Classic -> Integrated or vice versa).
You can also try upgrading to see the underlying message. v4.6 is pretty old anyways.
@philippe-camelio_3885 thanks!
How many environments is that server in? Just one environment, or multiple? Can you try it with just one environment, if multiple?
If it still doesn't work, then I'd like to try to reproduce it. Can you share your infrastructure json file (Admin > Export Infrastructure)? If it's sensitive info, then don't worry -- you can just send it to support at inedo dot com (instead of posting publicly), with [QA-568] in the subject. I can fish it out of the inbox then, and attach it to our internal tracker.
Thanks @Stephen-Schaff! FYI we are currently investigating this, and I hope to hear back soon. I'm starting to think that maybe this is the issue, but I'd like to confirm and fix it first...
Hmmm... I did a quick test of environment-scoped variable resolution, and it seems to work fine for me, but it can get a bit complex with nested environments and multiple environments per server, so there might be a bug.
Is "INTEGRATION" a nested environment? How many environments is your server in?
Thanks!
@philippe-camelio_3885 thanks for sharing all the logs!
Can you confirm... which version of the Scripting extension are you using? I see @apxltd made a pre-release version (1.10.2-rc.3) but it's not yet released.
Just a guess, but v46 required you to explicitly set an Instance Name in the product, and I'm guessing you didn't do that. This can cause some strange behaviors, and maybe that's what happened here.
From here, I recommend just reinstalling v49 on the server manually. Alternatively, you could upgrade to v49 first (you can manually enter a URL in Otter for the installation package).
FYI: in v49, instancing is automatic, based on IP address.
@ashah_4271 can you share your OtterScript? and show specifically what you'd like to do? Happy to help if we can!
Hi @Stephen-Schaff,
An error like that should be better reported by Docker (that XML is their format), but it also should have appeared in the diagnostic center, since it's an unexpected server error. It's likely suppressed by mistake, for expected errors like "tag not found" and the like.
As for why the error occurred, it's hard to say -- but definitely a bug. Do both of your feeds have the same storage type? Under Manage Feed > Storage?
There is some complexity with "common blob storage", so it's possibly related to that. If you can share info, we can try to repro and then fix.
@viceice thanks for reporting that. It's unrelated. The error message I see (in Admin > Diagnostic Center) is implying that the problem is related to our database structure.... perhaps from installing a pre-release version? We use fairly unstable versions for our internal production environments :)
Hi @m-janssen_0802,
We've identified a potential fix for this (PG-1942), and have already pushed a code change as pre-release version 5.3.27-rc.15.
This is installable via the Inedo Hub when you configure the installation source as https://proget.inedo.com/upack/PrereleaseProducts/ - this can be changed with the little [config] button in the Inedo Hub, bottom-left corner.
Thanks,
Alana
It's strange indeed. I wonder if it's related to a locally cached version you have?
I can't seem to find any documentation, but I believe PowerShell expects to receive only the latest version available unless a specific version is specified (this is based on the fact that if more than one package is found, it errors). Looking further into ProGet returning two results, we see that it is returning the latest non-local version as well as the latest local version. I think if ProGet were configured to only return the latest version, that would work better.
Behind the scenes, PowerShell is calling the FindPackagesById() NuGet API method, which is supposed to return all versions. You should see almost identical results to:
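For example, something like this (the server, feed, and module names here are placeholders for your own):

https://«proget-server»/nuget/«feed-name»/FindPackagesById()?id='«module-name»'&semVerLevel=2.0.0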
You'll notice the same with InvokeBuild as well; all versions are returned from the API, as expected. So this is why I think there are multiple repositories (a local one?), or caching, or some other thing with the client. I know you can install a module locally, in PowerShell, by copying the module file to a directory?
Sorry, I'm not really good at debugging the PowerShellGet client, but hopefully we can figure it out...
Cheers,
Alana
Hi @Michael-poutre_3915 !
I'm not a PowerShell expert by any means, but there's nothing "quirky" about the ExchangeOnlineManagement package or data, so I don't think that's the problem.
What you're doing should work fine, however; it seems like very basic usage, and it's just odd to see WARNINGS about the module being "matched"...
I searched the error "Unable to install, multiple modules matched" and found lots of results.
Lots of suggestions, but this error seems to happen if there are multiple repositories configured.... and apparently it's possible to have multiple repositories named ProGet?
So I'm wondering if this is ultimately some sort of client configuration quirk/bug?
Hi @harald-somnes-hanssen_2204,
Unfortunately this got pretty complex, pretty quick due to the way these are stored (in ranges, per package type).
In order to clean up vulnerabilities, we would need to scan all feeds of that type, then its packages, and then do version comparisons. So it would need to be a separate Vulnerability Cleanup job (as you originally suggested), and not a retention rule.
It's unfortunately not an easy engineering task on our end, especially since it could be quite resource-intensive, depending on the number of vulnerabilities/packages.
All the mass-clicking seems really annoying for sure, but we need to evaluate the engineering costs/effort of this feature against the benefits and alternatives. For now, we should wait to see if anyone else expresses interest or issues with managing vulnerabilities.
I also want to note that we do have a project on the horizon for a kind of multi-selecting UI table (where you could do bulk operations on selected items), and perhaps that would help here instead.
Hi @Stephen-Schaff,
What you're doing should work, but this appears to be a bug due to the special handling required for Docker image promotion via the API - so I've logged this as PG-1939 - we should get it fixed in the next maintenance release.
@philippe-camelio_3885 a failure there is rather peculiar, and the error message just looks like a generic "can't connect" error. The agent upgrade hasn't really even started yet; it's just some initial prep of downloading the file.
That message is occurring when making a very basic "direct connection" to the Inedo Agent (as opposed to the more complicated, "pass through" connection to the Otter Agent that the Inedo Agent manages).
The other time a "direct connection" is made is during the "Server Checker" task runner. That runs on service startup, and then every hour, or when you trigger it manually (Admin > Service).
Anyway, you could check it there. So, it could just be bad timing? Maybe the server is actually inactive, or the agent got shut off?
Ah ok, I was able to reproduce this behavior; it was unfortunately a 3.0 regression from the Legacy ResourceCredentials migration... but it's relatively easy to fix as OT-413 -- it's already scheduled for Otter 3.0.5 (next Friday), but we could make it available as a patch/pre-release version if you'd prefer?
You're right, there haven't been any RPM changes... the error message is implying that ProGet is rejecting the data you're sending for some reason, "Unable to parse package header. The supplied package may be an invalid RPM file".
It might be a .NET 5 issue, just based on the version numbers you told me. Since ProGet v5.3.12, the inedo/proget image is hosted using the .NET Core runtime. Previously, it was hosted using the Mono runtime. The inedo/progetmono image is available up to v5.3.19.
Cheers,
Alana
Hi @Stephen-Schaff,
Docker is pretty wonky, and technically doesn't support multiple registries per host.
A Docker Registry is tied to a host (like myproget.mydomain.net), not a URL. A Docker Registry's catalog API endpoint is always «host-name»/v2/_catalog, but I guess your third-party tool isn't so particular, and does a basic concatenation against the field.
This silly implementation makes no sense for ProGet -- so we just work-around that by requiring a "Namespace" on your Docker Registries that starts with the feed name.
So technically, you have a Docker Registry at «host-name» that hosts a bunch of Docker Repositories named «feed-name»/«repo-name».
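For example, a client pulls an image like this (all names are placeholders):

docker pull «host-name»/«feed-name»/«repo-name»:«tag»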
I'm afraid your only option here is to do what Docker wants you to -- which is create a user with restricted permissions to the containers you want.
@philippe-camelio_3885 sorry this has been frustrating
There is a compatibility shim that should have picked this up... but after the migration from ResourceCredentials to SecureResources and SecureCredentials, we no longer use the type name to qualify credentials.
Long story short, this should fix it...
set $CredentialUser = $CredentialProperty(myaccount, Username);
set $CredentialPwd = $CredentialProperty(myaccount, Password);
@afd-compras_2365 ProGet implements the NuGet Server API, so you can use ProGet as a private NuGet server; we did not design that API, we simply implement it
Here is the same API call on the NuGet gallery with a package that doesn't exist:
https://www.nuget.org/api/v2/FindPackagesById()?id='NotRealPackageFakeFAke'&semVerLevel=2.0.0
I'm not sure I understand the dependency question with nuget.exe, though... but in a case like this, you can just put those packages in your feeds, I believe? Then you won't get update issues.
@brett-polivka sorry about that, there's always room to improve the testing, and you just got "unlucky" with a few edge cases that slipped through the cracks.
But not to worry -- you can just downgrade the extension by following the manual installation instructions; version 1.9.0 of the Azure extension will still use the old libraries - https://proget.inedo.com/feeds/Extensions/inedox/Azure/1.9.0
@afd-compras_2365 that's the proper behavior of the /FindPackagesById() API endpoint; as far as why that's the proper behavior, you'd probably have to track down the folks at Microsoft who designed the API over ten years ago and ask them ;)
That particular API returns a resultset; if the set has 0 items, then there are no packages with that ID found.
@joshuagilman_1054 I don't really know PowerShell myself (today I learned you can do classes)... but behind the scenes it's .NET, and that means we can use nullable value types.
I tried [int?] (nullable shortcut syntax) and [Nullable<int>] (generics shortcut syntax), but PowerShell isn't so happy with either. So the long way it is...
[System.Nullable``1[[System.Int32]]] $myInt = 0
echo "myInt is $myInt "
$myInt = $null
echo "myInt is $myInt "
$myInt = 1000
echo "myInt is $myInt "
That should do the trick for you, and is close to our JSON Model anyways.
@joshuagilman_1054 said in API expects null instead of 0 for integer values:
The only option I have is to manually check for these edge cases and convert the zeroes into nulls before sending off the JSON which seems rather silly. Is there a specific reason the API won't accept a zero in this case?
Well, the specific reason is due to data integrity rules; the TriggerDownload_Count is just defined as a nullable, non-zero, positive integer. Using -1 would also yield the same error.
And as a general practice, we don't convert user input to NULL, or add extra code to make it work in a case like yours -- we just let the data integrity rules raise errors like that.
In our own code, we'll do what you do (and pass a zero or empty string around, for example); so in a case like that, we just have a function like NullIf(value,0) or NullIf(value,"") before attempting an internal API call. So that would probably be the best bet.
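In C#, that kind of helper is trivial; something like this (illustrative, not our exact code):

int? triggerDownloadCount = Util.NullIf(0, 0); // null instead of 0
System.Console.WriteLine(triggerDownloadCount?.ToString() ?? "null");

static class Util
{
    // Convert a zero/empty "sentinel" value to null before handing it to the API,
    // so a data-integrity rule like "nullable, non-zero, positive" is satisfied.
    public static int? NullIf(int value, int sentinel) => value == sentinel ? (int?)null : value;
    public static string NullIf(string value, string sentinel) => value == sentinel ? null : value;
}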
Hello,
That API will only delete package metadata from the database, not from disk. It's mostly intended for internal use, and probably shouldn't be exposed in the API. In any case, we don't store the @ internally, so if you change @myscope to myscope it should work.
Note that npm doesn't provide a way to delete packages, and we never implemented it. There hasn't been any demand for it to date, as people don't really delete packages programmatically - but you're definitely welcome to submit a feature request and help us understand why it'd be of value (like the workflow you use that requires deleting packages, etc).
Alana
@Stephen-Schaff ah that's too bad, quite frustrating! Any luck setting up a test server, so it doesn't inconvenience production?
I don't know why it didn't work, but it's not what I would have expected. We had been assuming this was failing in the TryParseLoginUserName method, which is where that NETBIOS mapping occurs. It seems to be working fine, which is surprising to see.
Instead, it seems to be failing in TryGetUserAsync, which calls the TryGetPrincipal method. The TryGetUser method is called in a bunch of places (and when it returns null for an authenticated user, you'll get that "can't find user" error), but it's also used on the "Configure User Directory" page, when you hit the "test get user" button.
You showed that you tested the connectivity using "test search", but there's no good reason one query (get) would work but not the other (search). That doesn't make a lot of sense to me. I'm thinking another test from that page is in order.
Here's the (messy) code for /debug/integrated-auth.
WriteLine($"Id:\t\t{domain.Id}");
{
var messages = new List<string>();
WriteLine("---------");
var ad = WebUserContext.CurrentUserDirectory;
ad.MessageLogged +=
(s, e) => messages.Add(e.Message);
var parsedLogonUser = ad.TryParseLogonUser(context.Request.ServerVariables["LOGON_USER"]);
if (parsedLogonUser == null)
WriteLine("Could not parse LOGON_USER.");
else
WriteLine("LOGON_USER parsed as: " + parsedLogonUser.Name);
var user = await ad.TryGetUserAsync(context.Request.ServerVariables["LOGON_USER"]);
if (user == null)
WriteLine("Username not found.");
else
WriteLine($"Username:\t\t{user.Name}");
WriteLine("Additional messages:");
foreach (var m in messages)
WriteLine(" - " + m);
}
Here's the (messy) code for the "Test" button next to "Test get user" on that page:
var btnTestGetUser = new PostBackButtonLink("Test", () =>
{
    var log = new StringBuilder();
    try
    {
        instance = instance ?? (UserDirectory)Activator.CreateInstance(this.Type);
        editor.WriteToInstance(instance);
        instance.MessageLogged += (s, e) => log.AppendLine($"[{e.Level}] {e.Message}");

        var principal = instance.TryGetUser(txtTestUser.Value);
        if (principal == null)
        {
            divSearchResults.Controls.Add(InfoBox.Warning(new P("User ", new Element("code", txtTestUser.Value), " not found.")));
            return;
        }
        else
        {
            divSearchResults.Controls.Add(InfoBox.Success(
                new P("User ", new Element("code", txtTestUser.Value), " found: "),
                new Ul(
                    new Li("Name: ", principal.Name ?? ""),
                    new Li("EmailAddress: ", principal.EmailAddress ?? ""),
                    new Li("DisplayName: ", principal.DisplayName ?? "")
                )
            ));

            if (!string.IsNullOrEmpty(txtTestUserGroup.Value))
            {
                if (principal.IsMemberOfGroup(txtTestUserGroup.Value))
                    divSearchResults.Controls.Add(InfoBox.Success(new P("Member of ", new Element("code", txtTestUserGroup.Value))));
                else
                    divSearchResults.Controls.Add(InfoBox.Warning(new P("Is not member of ", new Element("code", txtTestUserGroup.Value))));
            }
        }
    }
    catch (Exception ex)
    {
        divSearchResults.Controls.Add(InfoBox.Error(new P($"Error: {ex.Message}")));
    }

    if (log.Length > 0)
        divSearchResults.Controls.Add(new Element("textarea", log.ToString()) { Style = "width:500px; height:50px;" });

    divSearchResults.Visible = true;
});
Lots of code, but I wanted to share both of these, so we're looking at exactly the same thing, if you need it.
**Can you try testing "get user" again (not "search user") using that page? You will most certainly see the exact same set of error messages.**
If this is the case, then the problem is most definitely related to credentials/permissions, and really doesn't seem to be related to NETBIOS alias, after all.
Next steps.
I hate that last step... but there's no reason on earth why this same, basic query that's run by the same C# code using the same credentials would work in one environment (desktop app on one server) but not another (web app)
Thanks for making the pull request @viceice !!
Well, I can see the issue... 5.3.24.7 != 5.3.24
Okay, so the next version should work. Diagnostics.DbVersion is set in the build/release process, so we just changed the code that sets it; next time it should work.
@viceice it was shipped in 5.3.23, so please give it a shot and let us know!
We didn't test it in a Kubernetes deployment (I hope we can get great docs on that some day!), but in the meantime I did add this to the documentation:
https://github.com/Inedo/inedo-docs/commit/358be6d03160ff569d791532f14cc5f05012b2a8
If you have any suggestions, especially on how to improve the docs, please share them or submit a PR to our docs :)
@Stephen-Schaff thanks for clarifying, I misunderstood!
Let me explain how integrated auth works. Basically, IIS/Windows Auth only provides ProGet with something like INEDO\username. However, INEDO is not a domain name; it's a NETBIOS alias. To query a directory, you need the real domain name (in our case, it's inedo.local).
To find the domain name, the global catalog for the domain server is queried to determine any mappings, but this can sometimes fail due to permission errors; this is why you get a "User not found" error. The legacy provider relies on DNS resolution, which was incorrect.
As an alternative, you can provide a list of key/value pairs that map NETBIOS names to domain names (one per line), e.g. KRAMUS=us.kramerica.local. If any value is specified, the automatic query is not performed, so all NETBIOS names must be specified.
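For example, using the names from this thread (yours will differ):

KRAMUS=us.kramerica.local
INEDO=inedo.local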
There are a lot of details on the [Advanced LDAP Configuration](https://docs.inedo.com/docs/various/ldap/advanced) page to consider, but basically the reason it wasn't working was that the NETBIOS mappings could not be resolved correctly.
In any case, that's the field to work with: the NETBIOS Mapping.
@JonathanEngstrom Otter 3.0.3 is scheduled for Friday, so please check it out then :)
So first, we do take breaking/deprecating things pretty seriously; we put a ton of effort into documenting and helping users migrate from BuildMaster Legacy Features, for example, and even built in tools, etc. to do it.
Why did we break it in Otter? Well, we normally wouldn't break something like PSEnsure -- but this feature saw almost no usage. I think only 3-4 customers made use of it, plus some community folks like yourself. We already talked to those customers, and figured... early adopters in the community might ask ;)
One reason it saw so little usage is that the old PSEnsure required two scripts (Collect and Configure) and lots of messy parameters. So reworking it around a single script was always on our list.
Our original plan was to make it backwards compatible, but that proved to be technically unfeasible. From a training/documentation standpoint, we wanted to make PSCall, PSVerify, and PSEnsure all work very consistently, and we just went with PSEnsureScripts...
If there were more users of the feature, we would have been a lot more careful and either automated or carefully documented a migration plan. But in this case, we figured it was a major version change (so breaking changes are expected), and we can act reactively (like this) and help migrate as needed.
A couple of customers will have a lot of OtterScript Configurations to migrate, but it's just a search/replace of PSEnsure to PSEnsureScripts for them...
Thanks @JonathanEngstrom!
Could not resolve variable $ErrorActionPreference.
There could be a regression with Variable resolution? Can you let us know how/where this is configured, so we can try a repro?
PSEnsure that I test with and has always worked
This was a big change. We really liked the name PSEnsure but hated how it worked, and we don't want people to use it anymore. So we decided to rename it to PSEnsureScripts and redefine the behavior.
PSEnsureScripts
(
    Key: Simple Test,
    Value: True,
    Collect: >>
        if (Test-Path C:\Temp2) { $true } else { $false }
    >>,
    Configure: >>
        New-Item -Path C:\ -Name Temp2 -ItemType Directory -Verbose
        Write-Output "Make C:\Temp directory"
    >>
);
<#
.DESCRIPTION
Verifies that the specified HotFix is installed
.AHCONFIGKEY
Simple Test
.AHEXECMODE
$ExecMode
#>
if ($ExecMode -eq "Configure") {
    New-Item -Path C:\ -Name Temp2 -ItemType Directory -Verbose
    Write-Output "Make C:\Temp directory"
} else {
    if (Test-Path C:\Temp2) { return $true } else { return $false }
}
PS Love the new look of Otter 3 :D
Thanks! And it's too bad you haven't tried Otter 3.0.3 yet (shipping this week, Friday); we now have cute character artwork in the onboarding steps
And for reference, here are the descriptions of the Added Help values you can use.
.AHDESIREDVALUE
This is what you wish the configuration value to be. When not specified, $true is used.
.AHCURRENTVALUE
This is the actual value of the configuration. When not specified, the script's return (output) is used.
.AHCONFIGKEY
This is the "configuration key" used by the script, which is a string that uniquely identifies configuration on a server. It's like a file on disk (a file is uniquely identified by its name), or the name of an IIS Application pool (an application pool is unqiuely identified by its name). Optional. When not specified, the name of the script is used.
.AHVALUEDRIFTED
This is an indicator as to whether the value is considered drifted. When not specified, it's a basic comparison of the desired and current values.
.AHEXECMODE
This is either "Collect" or "Configure", and is only used on PSEnsure operations; it will be ignored (or set to Collect, depending on what's easier to code) on PSVerify. Using a PSEnsure without a .AHEXECMODE will cause an error.
The Additional Help items can be specified as a value or a variable; variables will simply start with a $.
That's strange; we haven't heard this before, and I don't think that ProGet sets the domain in the cookie. Can you check other URLs, to make sure the host is actually being forwarded, such as the NuGet package index?
Here are some examples to help guide you: https://forums.inedo.com/topic/3037/how-to-configure-the-proget-free-with-self-connector/13
@Stephen-Schaff if you're using the legacy directory (i.e. not the Active Directory (LDAP) one), then there were no changes to that, or anything that uses that code, that would yield behavior like this.
There were a lot of non-functional changes to the Active Directory (LDAP) directory, namely that we upgraded the version of the Microsoft libraries we're using. A few users of these libraries with legacy features from ancient (i.e. Windows 2000) domains have reported some problems. Maybe you're describing such a problem?
Because it's obviously impossible for us to reproduce these oddities, we have a debugging tool available, but it requires Visual Studio to compile/run at this time: https://github.com/Inedo/inedox-inedocore/tree/master/InedoCore/AD.Tester
Not sure if that's helpful though.