@lm thanks for letting us know that it worked!
I'm glad you were able to find the issue on GitHub. Looking closer, it is a NuGet client bug; I posted an update on the issue. The only workaround is what you described (setting a separate API key).
Hi @cshipley_6136,
We aren't trying to pass the buck here, but given the symptoms, it's almost certainly not a software problem I'm afraid. Under the hood, ProGet uses Microsoft's SQL Server driver/components, which uses the operating system's networking drivers/components to communicate to the server. In this case, the error is originating in the operating system's networking components.
Based on the symptoms (intermittent network-level errors), it's almost certainly a problem with the network hardware, components, or configuration. Since the problem is intermittent, running nslookup or similar commands on the container won't really identify any issues; those commands also communicate on different protocols/ports than SQL Server.
We're not really experts at troubleshooting network problems, but we have seen a few over the years. Have you had a chance to bring this up with your Network/Operations team? What have they tried or investigated so far?
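If it helps your network team, a quick sanity check that exercises the same transport as SQL Server is a plain TCP connection to the database port. Here's a rough Python sketch (the host name and port 1433 are placeholders for your environment); since the problem is intermittent, it's most useful run in a loop from the container:

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a raw TCP connection (the same transport SQL Server uses)
    and report success or failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # "sql-server-host" and 1433 are placeholders for your environment.
    print(tcp_check("sql-server-host", 1433))
```

Logging the result every few seconds over a day or so can show your operations team exactly when the drops happen.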
Cheers,
Alana
Hi @lm , thanks for the feedback!
I'm not sure about #1, but I was curious about #2, and spotted a typo in our database updating code.
If you run this against your ProGet database, it should unblock you:
GRANT EXECUTE ON TYPE::[IndexedSymbolEntry] TO [ProGetUser_Role]
This will also be fixed in ProGet 2022.21 as PG-2287.
Cheers,
Alana
Hi @cshipley_6136 ,
In general, when the database has temporary problems (timeouts, unavailability, etc.), then errors will be shown to users (like the above) when they make a web request. Every web request is a new try, basically, and will cause the same error. But once the database is back up, new requests should work fine.
However, if the application fails during initialization (i.e. the first web request after starting the service), then it effectively requires restarting the application (container/service). This scenario is tricky to work around.
Otherwise, what was the nature of the user auth issue? If it was an incorrect password, then that would require fixing the connection string (which is usually passed as an environment variable) and then restarting the container.
Hope that helps...
Alana
Hi @cshipley_6136 ,
Thanks for the additional information; we were able to figure out what the underlying issue was with your help. Essentially, it was a combination of command caching and some other factors that caused this false-positive behavior. The error should have eventually been triggered after a little while, but it's hard to say.
In any case, we've changed this to use a one-minute cache, so this kind of error will be detected much quicker. Either databaseStatus will say Error, or the handler will return a 500 status.
This will be fixed in PG-2284, which is scheduled for this Friday's maintenance release (2022.20).
Cheers,
Alana
Thanks again for these reproduction instructions! I confirmed the behavior very easily, though it doesn't look like a trivial fix (at least to me).
We are targeting PG-2278 for the upcoming maintenance release (Friday), but it might be delayed to the following maintenance release (Feb 24) if other priorities come up. Of course we can provide a prerelease/patch version as soon as we code the fix :)
Thanks for the additional insight! To clarify... I know very little about how caching works, and I was just reporting what changed recently so we can track down where to look :)
So just to confirm... you're saying that this used to work in ProGet 6.0, but it's not working after you upgraded to ProGet 2022? If that's the case, then it would very likely be the platform change.
@pfeigl said in Assets do not return Last-Modified header (anymore?):
Anyways, I guess our question simply is: Is it reasonable for you to (re-)add this header in a future version? It feels like a simple change, as the asset UI already shows this exact field.
Yes, we just need to track down exactly what the issue is :)
Our platform code does seem to look for an If-Modified-Since header, and then sends a 304 if the dates are within a minute of each other. So I guess that works.
But this code, when sending response headers, looks pretty suspicious to me... I wonder if it should be setting the Last-Modified header instead of Date.
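For illustration only (this is not our actual platform code), a conditional-GET handler along those lines might look like this in Python: compare If-Modified-Since against the asset's timestamp with a one-minute tolerance, return a 304 on a match, and always send Last-Modified:

```python
from datetime import datetime, timedelta
from email.utils import format_datetime, parsedate_to_datetime
from typing import Dict, Optional, Tuple

# Clients whose date is within this window of the true timestamp get a 304.
TOLERANCE = timedelta(minutes=1)

def handle_conditional_get(if_modified_since: Optional[str],
                           last_modified: datetime) -> Tuple[int, Dict[str, str]]:
    """Return (status, headers) for an asset whose content last changed
    at `last_modified` (a timezone-aware UTC datetime)."""
    # The Last-Modified header is always sent, so the client can cache.
    headers = {"Last-Modified": format_datetime(last_modified, usegmt=True)}
    if if_modified_since:
        try:
            client_date = parsedate_to_datetime(if_modified_since)
            if abs(last_modified - client_date) <= TOLERANCE:
                return 304, headers
        except (TypeError, ValueError):
            pass  # unparseable or zone-less date: just serve normally
    return 200, headers
```

The one-minute tolerance mirrors the behavior described above; a stricter implementation would compare exact timestamps.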
This use case isn't very common (and isn't one we necessarily designed for), so it's not so intuitive to do in the UI. To handle this, you can create a second vulnerability source, and then use that source in the UI. Let us know how that goes!
Cheers,
Alana
Hi @Justinvolved ,
Hundreds of configuration files in one application -- definitely too much. I'd probably seek a different solution if that's the case. But having one or a few per application is okay.
There is an Http-Post Operation that you can probably use to make API calls. Conceptually it's similar to the PowerShell Invoke-WebRequest cmdlet.
I'm not sure what the configuration file would look like (array? map variable?), but there's also a variable function, $FileContents(), that could read a file, and an $Eval() function that can convert text into variables.
That said... it might be a bit challenging to do all this in OtterScript; it's not really designed for this. You may be better off writing a global PowerShell script that processes input from a configuration file you deploy to the working directory. That would also be easier to test.
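As a rough illustration of that approach (shown in Python here, though the same idea applies to a PowerShell script; the JSON format and the build_api_calls step are just assumptions for the sketch):

```python
import json
from pathlib import Path

def load_config(path: str) -> dict:
    """Read a JSON configuration file that was deployed to the
    working directory alongside the script."""
    return json.loads(Path(path).read_text(encoding="utf-8"))

def build_api_calls(config: dict) -> list:
    """Hypothetical processing step: turn each key/value pair into a
    payload for an API call (replace with your real logic)."""
    return [{"setting": key, "value": value} for key, value in config.items()]
```

Keeping the file-reading and the processing in separate functions like this makes the script easy to unit-test without actually deploying anything.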
Cheers,
Alana
Thanks @e-rotteveel_1850 , these repro steps will be very helpful to debug/fix the problem.
With platforms we know little about (Python, Conda), figuring out the repro steps is often the hardest part.
Please stay tuned; we'll post an update once we identify or fix the problem.
Hi @e-rotteveel_1850 ,
I haven't really tested this, just reporting on information I found in our notes :)
The problem that we fixed via PG-2220 was reproducible as follows:
feeds/MyCondaFeed/ca-certificates/versions
It also gave errors in the ProGet API. But the underlying issue was related to unexpected ("invalid") metadata from the remote Conda repository's API (index files), specifically with sorting (comparing) those leading zeros.
Python specifications give me a headache, but it has something to do with the PEP-440 normalization rules not being followed under the hood.
Regardless, it sounds like this is a different bug...
Can you identify how we can repro/fix (without using the Conda client)?
Cheers,
Alana
Hi @pfeigl ,
The handler for asset file downloading hasn't changed recently.
The last major changes were in ProGet 6.0, where (among other things), the ability to control client-side caching was added. The change you found (PG-2068) fixed a bug related to UTC/Local time differences in those cache headers.
In ProGet 2022, we changed the overall platform (.NET Framework -> .NET6). The platform is what's responsible for reading/responding to cached/head requests.
I'm not sure what the behavior was prior to ProGet 2022... but if you're finding that caching isn't working as expected, I would inspect the cache control headers and see if you can find what the underlying issue is. So far as we can tell/test, it's working as it's supposed to now.
Cheers,
Alana
Whoops, posted too fast :)
@e-rotteveel_1850 said in Uploading ca-certificates (2023 version) to a ProGet conda feed does not work:
I tested with Proget 2022.0.19, but the file is still renamed to "2023.1.10"
The package store is internal to ProGet, and we don't support accessing or modifying those files directly. The folder structure or naming of the files won't impact usage in the ProGet UI or API.
Hi @e-rotteveel_1850 ,
The underlying problem is that ca-certificates has invalid package versions, at least according to Conda's own versioning specification. 2022.07.19 is not supposed to be permitted in a repository... and yet it's there. ProGet follows the Conda specification, which says packages with leading zeros should be "normalized" to 2022.7.19. I guess they treat their specs more like "guidance" than "specifications".
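For illustration, the leading-zero normalization rule itself is simple; here's a minimal Python sketch of it (dotted numeric segments only; this is not a full implementation of Conda's or PEP 440's version spec):

```python
def normalize_version(version: str) -> str:
    """Drop leading zeros from each numeric segment of a dotted
    version string, e.g. '2022.07.19' -> '2022.7.19'. Non-numeric
    segments are left untouched; this only sketches the one rule
    discussed here, not a complete version parser."""
    return ".".join(
        str(int(part)) if part.isdigit() else part
        for part in version.split(".")
    )
```

Comparing the normalized forms (rather than the raw strings) is what avoids the sorting problems mentioned above.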
In any case, as you noted, this was addressed in a newer version (PG-2220 in ProGet 2022.10), so upgrading should take care of it.
Cheers,
Alana
This is a known bug (#430) in Find-Package; basically this only works with NuGet.org, and doesn't work with other repositories.
I would suggest replying to the issue; it's been a few years since anyone commented on the bug, so bumping it miiiiiight get Microsoft to consider fixing it.
Cheers,
Alana
Thanks for clarifying @Justinvolved
I was able to reproduce this issue; it looks like it was a regression with "releaseless builds". Easy fix... it'll be in the next maintenance release as BM-3808. The release is scheduled for this Friday :)
Cheers
Hi @OtterFanboy ,
Thank you for the feedback :)
I've added items for [1] OT-480 and [4] OT-481; they seem like relatively easy changes.
Can you send a screenshot for [3], so we can better visualize what your environment page looks like? It would be helpful to see this page with real-data, since our test data typically doesn't use nested environments.
Regarding [2], folders were a late addition to Otter 2022, and unfortunately the functionality is quite limited (i.e. move didn't get added in). This omission is annoying, but Git-based rafts are often used, and it's easy to move files in Git. We would definitely like to add this, but it's not trivial... because we need to do different operations for Git-based rafts. Otherwise, I logged the analyzer error as a bug (OT-482), because it shouldn't issue an error.
I will discuss with the team when we can prioritize these issues. If it's really easy, maybe we can do it in the next maintenance release window.
Cheers,
Alana
Thanks much for providing that query @sebastian !
And yes -- the data is "halfway there" in ProGet 2022 (maybe 20% there?), but "Packages" (which are spread across different tables like NuGetPackages, NpmPackages, etc.) aren't tied to licenses just yet.
But with the database refactoring we are planning in 2023, it's going to be a lot easier to get and display this information, especially for SCA-related things.
There are also going to be vulnerability improvements - please stay tuned :)
Hi @lm , just to clarify...
How do you use symbol files? Is it to "browse / step-in to code" using a debugger in Visual Studio?
If so, caching/proxying symbols wouldn't help with those items, since the PDB is really just providing a URL to the public internet.
Cheers,
Alana
Hi @marc-ledent_9164 , it's a little hidden.
If you Edit Pipeline > Edit Details, there will be a "Delete" button in the bottom-left corner of that dialog.
Cheers,
Alana
Hi @philipp-jenni_7195 ,
Hmmm, that's basically the same thing I tried a while back...
Can you email/send the package files in this example? mypackage-3.0.0.upack and mypackage-3.0.1.upack. If you email them to support at inedo dot com with a subject of [QA-976], we will be able to find it. Please let me know when you email it, since we don't watch that inbox.
Cheers,
Alana
Hi @Justinvolved ,
That is definitely a bug in the Operation, as there shouldn't be a NullReference exception like that.
Just to confirm... in the first example, you created a build without a release? I.e. you just "directly" selected a pipeline? But then, it worked when you created a build in a release?
Cheers,
Alana
Hi @Justinvolved ,
Ah, thanks for clarifying that! Yes... they should be Testing to follow everything else. Pipeline templates are a brand-new feature :)
This was a trivial change, and will be updated in the next version as BM-3805.
Cheers,
Alana
Hi @Justinvolved ,
In this case, I would recommend staging the artifact files first, and then transferring them. Here is the OtterScript you can use for that:
Deploy-Artifact;
Transfer-Files
(
ToDirectory: c:\websites\the-website-root,
Exclude: Images\**
);
Cheers,
Alana
Since you're a free user, we're a bit limited in how much we can investigate this, and it's unlikely we'd be able to see anything from the database or Docker config. We've seen this message come up from time to time, and it's been related to the two edge cases I mentioned:
We have yet to see this happen with a newly-created feed.
Perhaps you can set up something on AWS Lightsail, or another very inexpensive hosting platform? Using that, we could investigate much more easily.
Hello @itpurchasing_0730,
It sounds like you set things up correctly, but it seems like Chocolatey is behaving strangely; if you added a source, then you shouldn't be prompted for the credentials.
In general, you should just need to run the choco source add command:
https://docs.chocolatey.org/en-us/choco/commands/sources
Then you won't be prompted again. I would try removing your sources, and adding them back.
You may also want to use a tool like Fiddler Classic to monitor the traffic that Chocolatey is making; unfortunately we can't tell from just that diagnostic log. That will let you see exactly what URLs are being used... and the underlying problem might be related to something like a proxy server.
Cheers,
Alana
Hi @philipp-jenni_7195 ,
This is still a mystery to us. Since you say you can reproduce it with a new feed, can you provide us with a step-by-step guide using a new feed and a package file to use? Then we will try your steps.
The steps should basically be:
If the server is publicly accessible (or you can create one that is), we can also log-in and attempt to reproduce it on your server.
Cheers,
Alana
Thanks for letting us know @jeff-peirson_4344 ; we'll see if we can better detect and improve this error.
Ah - so it is schema related.
Well, it's definitely compatible with SQL 2019... and we also have no idea why or how that happened. We don't see other users having this issue.
We also tested that 00. EnsureDbo.sql works on just about every configuration we could imagine:
What does SELECT SCHEMA_NAME() return?
Any idea why the following isn't working for you?
IF (SCHEMA_NAME() <> 'dbo') BEGIN
DECLARE @SQL NVARCHAR(MAX) = 'ALTER USER [' + CURRENT_USER + '] WITH DEFAULT_SCHEMA = [dbo]'
PRINT 'Changing default schema ("' + SCHEMA_NAME() + '") to dbo via: ' + @SQL
EXEC sp_executesql @SQL
PRINT 'Schema changed.'
END
FYI, here are Microsoft's docs on SCHEMA_NAME:
https://learn.microsoft.com/en-us/sql/t-sql/functions/schema-name-transact-sql
Thanks for sharing the details; from here, I would recommend just working with inedosql.exe and the SqlScripts.zip file that's in the ProGet.SqlScripts-22.0.17 package. You can just run inedosql update after extracting the SqlScripts.zip file.
The error is from a while ago (14.07.2020 14:41:44), so I guess it's nothing to worry about. Perhaps it's the same underlying issue?
There's definitely something wrong... I just have no idea what it is. The only guess is schema/permissions, but you've already checked it.
I will describe to you what's happening, and hopefully you can troubleshoot with your database team better, especially since you'll have all the scripts that are run...
The script 0.1.AddStoredProcInfo.sql does the following:
- __StoredProcInfo
- __AddStoredProcInfo
- __GetStoredProcInfo
- __AddStoredProcInfo
Then later, the 1.ApiKeys_CreateOrUpdateApiKey.sql script is executed. This script starts by executing __AddStoredProcInfo again, but it's called with the [dbo].[__AddStoredProcInfo] prefix instead.
So I think the issue is schema-related.
But as I mentioned, we try to mitigate this earlier with 00. EnsureDbo.sql. This script works like so:
IF (SCHEMA_NAME() <> 'dbo') BEGIN
DECLARE @SQL NVARCHAR(MAX) = 'ALTER USER [' + CURRENT_USER + '] WITH DEFAULT_SCHEMA = [dbo]'
PRINT 'Changing default schema ("' + SCHEMA_NAME() + '") to dbo via: ' + @SQL
EXEC sp_executesql @SQL
PRINT 'Schema changed.'
END
Anyway, please let us know what you find. We'd love to make these scripts work in more scenarios, and not cause problems like this.
I don't think there have been any changes to the Asset Directory in v2022 that would have caused anything like this.
Do you have a full stack trace of that message? It sounds like an operating system error, and if that's the case, it wouldn't make a lot of sense that the same action works from the Web UI, since it's the same code. So it's probably unrelated to ProGet - some weird environmental thing.
It could also be an "error handling an error" that yields an incorrect message, but we'd want to see the full stack trace for that.
Cheers,
Alana
Unfortunately, it looks like there's strange "corruption" with your database. Objects that should exist don't seem to exist.
For example, __AddStoredProcInfo is a stored procedure that's dropped/created early on in the update process. You should see something like "INFO: Executing untracked script OBJECTS/4.PROCEDURES/0.1.AddStoredProcInfo.sql..." above all the other items.
PackageVersions is a table that's created in v2022, before running all the stored procedure creation scripts. That is a very simple CREATE TABLE script, so I can't imagine how it failed.
Any ideas? Is there some strangeness with DB schemas or anything? The installer should crash if your user isn't part of the dbo schema, but we've also seen that detection fail because of schema aliasing configuration on the server. That's a very rare, obscure setting.
Did you have past failed installations? You can use the inedosql tool to find out what errors exist in the database: https://github.com/Inedo/inedosql#errors
You can find inedosql.exe within the Manual Install Files for your version: https://my.inedo.com/downloads/installers
Hi @rob-leadbeater_2457 ,
It sounds like you'll need to grant yourself (UserB) "sysadmin" access. You should probably also grant the local Administrators group access as well while you're at it :)
The easiest way to do this is with the SQL Server management tools, since it's all GUI-driven. That's a free download/install from Microsoft, and I would recommend installing it on the server anyways. But it's also possible to do it with the command line/scripts.
Microsoft has some guidance on how to do this:
https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/connect-to-sql-server-when-system-administrators-are-locked-out?view=sql-server-ver16
Once you've given yourself sysadmin, you should have no issues.
Alana
Hi @k-gosciniak_5741 ,
We plan to implement that in ProGet 2023 - it requires a major change, so we won't be able to do it in a maintenance release.
There is no release date for ProGet 2023 yet, but please check in the coming months :)
It will most certainly happen in the first half of the year, as we've already started planning/development.
Cheers,
Alana
Hi @pmsensi ,
That is the general process you'll want to use... basically just assign the licenses as you need them. In general, as part of a package approval workflow:
https://blog.inedo.com/nuget/package-approval-workflow
There are over 300k packages on nuget.org (5M+ versions), and growing. So that's a lot of packages. ProGet does not download a list of these packages, but displays live data from the NuGet.org feed.
Hopefully that helps :)
Cheers,
Alana
Hi @pmsensi ,
Yes, you can set this up as a licensing rule - to block packages with unknown licenses.
Reporting & SCA > Licenses > manage license types > Manage Default License Rules
:)
Cheers,
Alana
Hi @pmsensi ,
Oh I see! In this case, I think your "Feed Usage" setting is currently set to "Private". You should set this to "Public" packages; then the licensing will be displayed/configurable.
Alana
Hi Pedro,
There are multiple ways that an author can specify a license on a NuGet package:
https://blog.inedo.com/nuget/nuget-license-expressions
Or, a package author can specify no license at all. If the author chooses "file" as the license type, then ProGet will only be able to "see" this license if the package is in ProGet - either as a Cached or Local package.
For example, the SmartInspect package has a "file" type of license agreement:
So in this case, you want to read the "embedded license file", then assign a license agreement code to it.
Note that, if a package file has not been downloaded yet, then it will appear to ProGet as having no license at all. This is a NuGet API limitation.
Cheers,
Alana
Hi @jimthomas1_7698 ,
What should target #0 be? (Deployment Target currently says 'Build to localhost', is that what target #0 refers to?)
This message could definitely be clarified; but it basically means that Deployment Target isn't set for the first stage. If you see "Build to localhost" on the pipeline overview page, I'm guessing you didn't "Commit" the changes (save) -- it's at the top of the page. You have to explicitly save the pipeline that you're editing.
Where in the Publish command do I specify the Resource Group, Resource Name and Subscription? Or will BuildMaster pull those azurewebsite publish parameters from the project's Properties/PublishProfiles .pubxml file?
I'm not familiar enough with azurewebsite publish to be honest... but under the hood, the DotNet::Publish operation calls dotnet publish. So if your project is configured to use the PublishProfiles... then maybe it will work?
You can pass additional arguments into DotNet::Publish (which will get passed directly to dotnet publish) using the AdditionalArguments parameter.
FYI: Deploying to Azure Websites is a Deployment Script Template we intend to create later. It's unfortunately a little complicated to do, since it primarily relies on a
Which, if any, of the documentation can I rely on for help?
We put that "Documentation Renovation in Progress" warning on the pages that are outdated; there aren't too many of them with that warning... and we're making our way through them one page at a time.
In any case, don't hesitate to ask questions - it is often an opportunity for us to improve our software or documentation.
Hi @kenneth-garza_2882 ,
That should be the case; under the hood, ProGet is using this API:
https://learn.microsoft.com/en-us/windows/win32/fileio/creating-and-using-a-temporary-file
According to the docs for GetTempPath, the first path found will be used:
So it seems there are many ways to specify this.
Hi @kenneth-garza_2882 ,
ProGet uses temp files for a number of things, including buffering uploads like this.
This shouldn't cause any space problems, as ProGet will delete these files upon successful use, and Windows can automatically clean up files that failed to upload. Relatively speaking, it's a small amount of temporary space compared to everything else in there.
If you're worried about using a system drive for temporary files, you can change the App Pool User's Profile path, or just the temporary path:
https://www.howtogeek.com/285710/how-to-move-windows-temporary-folders-to-another-drive/
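As an aside, you can see the same pattern — the temp directory resolved from environment variables, and files deleted after successful use — with a quick Python sketch (illustrative only; this isn't ProGet's code):

```python
import os
import tempfile

# Show which directory temporary files land in; Python resolves this
# from environment variables (TMPDIR/TEMP/TMP), falling back to a
# platform default, much like the Windows GetTempPath search order.
print(tempfile.gettempdir())

# Buffer some data through a temp file; it is removed automatically on
# close, which mirrors the "delete upon successful use" behavior.
with tempfile.NamedTemporaryFile(delete=True) as tmp:
    tmp.write(b"uploaded package bytes")
    tmp.flush()
    buffered_path = tmp.name
    assert os.path.exists(buffered_path)

# Once the upload is done, the buffer file is gone.
assert not os.path.exists(buffered_path)
```

Running this before and after changing the environment variables is an easy way to confirm which drive the temporary files will actually use.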
Cheers,
Alana
Hi @philipp-jenni_7195 ,
I'm afraid I can't reproduce this, and we've reviewed this code already. Only packages that exist will require the Feeds_OverwritePackage privilege.
If you can provide me with a step-by-step guide using a new feed and a package file to use, I will try your steps.
The steps should basically be:
Cheers,
Alana
We have seen a few edge cases that will cause this behavior:
In this case, you should be able to upload the package from the UI, then delete it; after that, it should work again.
Cheers,
Alana
@itpurchasing_0730 there is; see Advanced > Web.HideHomePageFromAnonymousUser
Hi @p-pawlowski_8446 ,
I'm sorry but I'm not totally sure what you're asking; we're not familiar with JumpCloud.
Are you looking for how to publish a Chocolatey package? We have a step-by-step on how to create a private Chocolatey repository, but not on creating packages...
https://docs.inedo.com/docs/proget-howto-private-chocolatey-repository
@itpurchasing_0730 whoops, looks like a parameter was missing (-d to set the database name):
docker exec -it inedo-sql /opt/mssql-tools/bin/sqlcmd \
-S localhost -U SA -P '«YourStrong!Passw0rd»' \
-d ProGet -Q '«sql command here»'
That should do the trick I hope!
@itpurchasing_0730 you can find that in /reference/api in your instance; that's the Native API.
Hello,
The Docker API is supposed to only support token-based bearer authentication, but in previous versions (v5) it also worked with Basic auth.
There's a sample script on this page that shows how you can authenticate:
https://docs.inedo.com/docs/proget-docker-semantic-versioning
Cheers,
Alana
@kaushal141992_6976 what Network error are you receiving? That's a large file to upload, so it's hard to say where the error is.
IIS has a hard-coded limit of 4GB, but the integrated web server and Docker do not.
Hi Ryan,
If you're unable to login to ProGet as any user, then the issue is cookie-related.
After you successfully authenticate on the log-in page, ProGet will send your browser a cookie with an authentication ticket, and redirect to the home page. If you're still "Anonymous" after logging in, then your browser is not sending the cookie back to ProGet.
Make sure you disable any privacy/cookie blockers.
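To illustrate the handshake (purely a sketch, not ProGet's implementation), here's a tiny Python server/client pair showing that a client which returns the cookie is recognized, while one that drops cookies stays anonymous on every request:

```python
import http.cookiejar
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AuthDemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/login":
            # Successful auth: issue a cookie holding the "ticket".
            self.send_response(200)
            self.send_header("Set-Cookie", "session=ticket123")
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            # Home page: the user is only recognized if the browser
            # sends the cookie back on this request.
            cookie = self.headers.get("Cookie", "")
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"user" if "session=ticket123" in cookie else b"anonymous")

    def log_message(self, *args):
        pass  # keep request logging quiet

server = HTTPServer(("127.0.0.1", 0), AuthDemoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

# A client that keeps cookies (like a normal browser) stays logged in...
browser = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(http.cookiejar.CookieJar()))
browser.open(base + "/login").read()
logged_in = browser.open(base + "/").read()

# ...while a client that drops cookies appears anonymous every time.
anonymous = urllib.request.urlopen(base + "/").read()

print(logged_in, anonymous)
server.shutdown()
```

A privacy extension that strips the Set-Cookie or Cookie header puts the browser in the second (anonymous) situation, which is exactly the symptom described above.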
Cheers,
Alana