TIL that PowerShell can use internal CLR generic reference type names like that! But really, please don't do that...
[System.Nullable``1[[System.Int32]]]
[Nullable[int]]
... much easier to read
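For anyone curious, a quick way to confirm in a PowerShell session that the two spellings resolve to the same CLR type (a sketch; nothing here is Inedo-specific):

```powershell
# Both type literals resolve to the same CLR type, System.Nullable<int>;
# the type-accelerator form is just easier on the eyes.
[System.Nullable``1[[System.Int32]]] -eq [Nullable[int]]

# The FullName is identical either way:
[Nullable[int]].FullName
```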
You can ignore this error; for whatever reason, the NuGet client unexpectedly terminated the connection, and as a result ProGet stopped writing bytes. Nothing to worry about.
The Diagnostic Center isn't for proactive monitoring; it's more for diagnostic purposes. So unless users are reporting a problem, you don't need to check it.
Hi @jimbobmcgee ,
Thanks for all the details; we plan to review/investigate this via OT-518 in an upcoming maintenance release, likely in the next few two-week cycles.
-- Dean
Hi @andreas-unverdorben_1551 ,
npmjs.org primarily serves static content and runs on massive server farms in Microsoft's datacenters.
Your ProGet server is much less powerful and does not serve static content. Not only is every request dynamic (authentication, authorization, vulnerability checking, license checking, etc.), but most requests (such as "what is the latest version of package X?") need to be forwarded to npmjs.org and aggregated with local data.
So, a much less powerful server doing a lot more processing is going to be a little slower ;)
Running ProGet in a server cluster will certainly help.
Cheers,
Dean
Hi @kichikawa_2913,
We see multiple connectors pretty often, and it rarely presents a problem.
The main downside comes in the overhead of aggregation; for some queries, like "list all package versions", each connector needs to be queried and the results aggregated. So it could cause performance issues for very high-traffic feeds - at least that's what we see on the support side of things.
However, if you plan on using a package-approval workflow, then it won't be a problem, as your approved-npm feed wouldn't have any connectors.
Hope that gives some insight,
Dean
@joel-shuman_8427 thanks for the heads up!
I just updated it
https://github.com/Inedo/inedo-docs/blob/master/CONTRIBUTING.md
Hi mwatt_5816,
BuildMaster does support "release-less" builds, though you may need to enable it under the application's Settings > Configure Build & Release Features > Set Release Usage to optional. That will allow you to create a build that's not associated with a release.
It's also possible to do "ad-hoc" builds (i.e. builds with no pipeline), but we don't make it easy to do in the UI because it's almost always a mistake (once you already have pipelines configured). So in your case, I think you should create a secondary pipeline for this purpose.
-- Dean
Hi @scusson_9923 ,
Sorry, I misunderstood; I thought you were doing PSExec.
In this case you are just executing the pwsh process, so you need to figure out how to have that process return an exit code.
I don't know if that's the same as powershell.exe, but an AI told me "To return an error code from a PowerShell script, the exit statement followed by the desired error code should be used."
So I guess exit -1 or something like that?
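For what it's worth, a minimal sketch of a script that does this (the script name and the work inside are hypothetical):

```powershell
# deploy.ps1 -- hypothetical script name
# Do some work; on failure, return a non-zero exit code to the calling process.
try {
    # ... deployment work here ...
    exit 0
}
catch {
    Write-Error $_
    exit 1   # non-zero signals failure to whatever launched pwsh
}
```

The calling process (e.g. pwsh -File deploy.ps1) then sees that value as the process exit code, which is what an Exec-style operation checks.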
-- Dean
Hi @scusson_9923 ,
Exec (or Execute-Process) runs an operating system process, so that's where the return code comes in.
If you're doing something like a PSCall, then you can create output parameters/variables in the script, and then test those values.
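A rough sketch of how that can look (the script, parameter, and variable names are made up, and the exact PSCall output-mapping syntax may differ; check the OtterScript docs):

```powershell
# check-service.ps1 -- hypothetical script called via PSCall
param(
    [string]$ServiceName
)
# A variable set here can be surfaced as an output to the plan, e.g.:
#   PSCall check-service(ServiceName: MyService, Status => $svcStatus);
$Status = (Get-Service -Name $ServiceName).Status
```

In the plan, you'd then test $svcStatus with an if block rather than relying on the process exit code.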
-- Dean
Hi @procha_8465 ,
I'm afraid we'll need a bit more information here to help you. There are a lot of changes between ProGet 5.2 and ProGet 2024 and between older/newer versions of the npm client.
If you can put together a reproduction case, ideally on a new instance of ProGet 2024, that'll help us determine what you're trying to do and how to help.
-- Dean
Hi @f-medini_8369,
Here is our documentation on how to use Prometheus:
https://docs.inedo.com/docs/installation/logging/installation-prometheus
-- Dean
If you haven't seen it already, I'd check out How Files and Packages Work in ProGet for Artifactory Users.
Long story short is, you should consider a more modern approach than the Maven-based file/folder that Artifactory uses. Many have found a lot of success with Universal Feeds & Packages.
We don't maintain a TeamCity plugin, but it's really easy to create, publish, and deploy packages using the pgutil command line (i.e. pgutil upack create); see HOWTO: Create Universal Packages
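As a rough illustration of the shape of this (the flags, feed name, and file names below are from memory and may differ; check pgutil's built-in help and the HOWTO above):

```shell
# Sketch: create a universal package from a folder, then push it to a ProGet feed.
# Flag names and the feed/package names here are illustrative, not authoritative.
pgutil upack create --name=MyApp --version=1.0.0 --source-directory=./output
pgutil packages upload --feed=universal-feed --input-file=MyApp-1.0.0.upack
```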
Hope that helps!
-- Dean
Hi @uwer_4638 ,
The underlying issue is that you're making a "NuGet v2 API" request to your ProGet feed, which ProGet is then forwarding to connectors, and DevExpress does not support NuGet API V2.
So, you'll need to track down whatever is making that request (perhaps you're using an old endpoint URL), or simply disable the V2 API on your feed. This will cause an error on the client, and should show you pretty quickly what's making that outdated call.
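For reference, the two API styles look different in the endpoint URL itself (the hostname and feed name here are placeholders):

```
# NuGet v2 (OData) endpoint -- the older style:
https://proget.example.com/nuget/my-feed/

# NuGet v3 endpoint -- note the /v3/index.json suffix:
https://proget.example.com/nuget/my-feed/v3/index.json
```

If a client is configured with the first form, that would explain the v2 traffic.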
-- Dean
@layfield_8963 great news, that was a really strange error. The UI upload uses some kind of chunking and file appending, so it sounds like that was it.
Hi @scusson_9923,
In this case, you'll likely want to select 5 as the type.
For reference, here are the valid types:
//
// Summary:
// Specifies the type of a raft item.
//
// Remarks:
// All types except BinaryFile and TextFile are "regulated" and only allow well-known
// files; for example,
public enum RaftItemType
{
//
// Summary:
// A role configuration plan.
RoleConfigurationScript = 1,
//
// Summary:
// [Unincluded] A script with .otter syntax is preferred
OrchestrationPlan = 2,
//
// Summary:
// [Unincluded] A script with .otter syntax is preferred
Module = 3,
//
// Summary:
// A script.
Script = 4,
//
// Summary:
// An unclassified binary file.
//
// Remarks:
// BinaryFiles cannot be edited in a text editor, compared, etc; they are always
// treated as raw content
BinaryFile = 5,
//
// Summary:
// A deployment plan.
DeploymentScript = 6,
//
// Summary:
// An unclassified text file.
//
// Remarks:
// TextFiles can be edited in the UI, may have lines replaced on deploy, and can be
// used as templates
TextFile = 7,
//
// Summary:
// A pipeline.
Pipeline = 8,
//
// Summary:
// [Unincluded] Feature is deprecated
ReleaseTemplate = 9,
//
// Summary:
// A job template.
JobTemplate = 10,
//
// Summary:
// Files used with build tools like Dockerfile.
BuildFile = 11
}
I'm not sure if TextFile (7) will work; in Otter it was intended to be used as a text template, which means lines in it may be replaced on deploy. You may need to play around and see what works.
-- Dean
Hi @scusson_9923 ,
What is the file you are uploading? What happens when you upload through the UI?
Can you share the PowerShell snippet you're using?
What are you specifying for RaftItemType_Code?
-- Dean