Thanks Alana,
I think I've developed a workable pattern for the vendor's versioning system. Are any of the fields searchable in the User Interface, whether pre-defined or custom?
I like the setup of Universal Packages; however, the vendor installer packages I deal with (spread across .zip, .iso, .exe, and .msi) don't have friendly version names - certainly not semantic versioning.
Their software versions typically look like this: 2024 SP3 R2 P1, where the base version would be something like 2024; then, as the software changes without a new major release, the service pack, revision, and patch numbers increase. I could account for this if semantic versioning allowed four segments, but it only allows three. Is there a better way to store their files than using Universal packages (I could use the Asset feed, but Universal looked cleaner)?
I do like the Universal packages because I was going to bundle the PDFs with the installers.
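To show the pattern I'm working with, here's a quick Python sketch of how I'd normalize the four vendor segments into something sortable and SemVer-ish (function and group names are my own, not anything from the vendor or ProGet):

```python
import re

# Parse vendor versions like "2024 SP3 R2 P1" into four numeric segments.
# Missing segments (e.g. plain "2024") default to 0 so versions sort correctly.
VERSION_RE = re.compile(
    r"^(?P<base>\d{4})"
    r"(?:\s+SP(?P<sp>\d+))?"
    r"(?:\s+R(?P<rev>\d+))?"
    r"(?:\s+P(?P<patch>\d+))?$"
)

def parse_vendor_version(text: str) -> tuple[int, int, int, int]:
    m = VERSION_RE.match(text.strip())
    if m is None:
        raise ValueError(f"unrecognized version: {text!r}")
    return tuple(int(m.group(g) or 0) for g in ("base", "sp", "rev", "patch"))

def to_semverish(text: str) -> str:
    # Fold the fourth segment into a pre-release-style suffix so the result
    # fits a three-part major.minor.patch scheme.
    base, sp, rev, patch = parse_vendor_version(text)
    return f"{base}.{sp}.{rev}-p{patch}"
```

So "2024 SP3 R2 P1" would become "2024.3.2-p1" - not pretty, but deterministic.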
Hey Alana,
No worries. I have a similar issue with an Ansible deployment in Kubernetes with permissions. So I have a special file share now for images that don't support changing the user/group that processes run as.
Basically, the file share exposed to and consumed by the image is configured (when using squashing) to have certain permissions. We can make Docker containers or Kubernetes deployments compatible by specifying the securityContext to tell it which user & group to run the pod as, and some containers allow specifying a user ID and group ID as environment variables instead. This is just to control the Linux permissions for folder/file access.
Somewhere in the image setup, there's configuration forcing it to access /var/proget/database as the user "postgres" with the group "root", and it tries to change permissions on that folder at some point. If the image supported the securityContext or the user/group ID environment variables, it would run under the specified user/group instead - but your image would need to be able to run under those permissions, and it would probably take some work to reconfigure the image to allow that, and to test it.
For now, it works. It would take some evaluation to see if your image would support changing the user/group away from the current postgres/root.
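For illustration, the two mechanisms look like this in a deployment spec (a sketch only; PUID/PGID is a convention used by some images, e.g. LinuxServer.io ones - not every image honors it, and the image name here is a placeholder):

```yaml
spec:
  securityContext:        # option 1: kubelet starts the main process as this UID/GID
    runAsUser: 1024
    runAsGroup: 100
  containers:
    - name: app
      image: example/app:latest     # placeholder image
      env:                          # option 2: the image's entrypoint drops privileges itself
        - name: PUID
          value: "1024"
        - name: PGID
          value: "100"
```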
It's been a minute since I was able to get to this.
So, I was able to get the database directory to work with the configuration below. The data mount is a new volume that doesn't have squashing enabled. In reviewing the container configuration, it wants /var/proget/database to be set up with 101:0 permissions. I wasn't able to replicate this with squashing, so I created a new volume to set these permissions explicitly. The package, logs, backup, and certs mounts are all set up with squashing enabled (though I don't think I actually need the certs - I'll have to double-check whether the free version supports SSL). The package directory works; I'll be able to see tomorrow whether the backup works. I think the logs work too, but nothing is logged in the UI yet, so I guess I'll have to wait.
While this does work, could we not implement a UID or GID env var, or follow the securityContext, so that the pod definition controls permissions, rather than the container strictly forcing 101:0 on the database directory?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proget
  namespace: proget
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proget
  template:
    metadata:
      labels:
        app: proget
    spec:
      # securityContext:
      #   runAsUser: 1024
      #   runAsGroup: 100
      #   runAsNonRoot: false
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        runAsNonRoot: false
      containers:
        - name: proget
          image: proget.inedo.com/productimages/inedo/proget:latest
          ports:
            - containerPort: 8624
              name: http
          volumeMounts:
            - name: proget-data
              mountPath: /var/proget/database
            - name: proget-package
              mountPath: /var/proget/packages
            - name: proget-logs
              mountPath: /var/proget/logs
            - name: proget-backup
              mountPath: /var/proget/backups
            - name: proget-certs
              mountPath: /etc/ssl/certs
      volumes:
        - name: proget-data
          persistentVolumeClaim:
            claimName: proget-data
        - name: proget-package
          persistentVolumeClaim:
            claimName: proget-package
        - name: proget-logs
          persistentVolumeClaim:
            claimName: proget-logs
        - name: proget-backup
          persistentVolumeClaim:
            claimName: proget-backup
        - name: proget-certs
          persistentVolumeClaim:
            claimName: proget-certs
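An alternative to pre-creating the volume with 101:0 could be an initContainer that fixes ownership before the main container starts. This is a sketch only - it would sit under template.spec alongside containers, and it assumes the cluster allows a root init container and the volume isn't squashed (on a squashed NFS share the chown would still fail, which is the whole problem):

```yaml
      initContainers:
        - name: fix-database-perms
          image: busybox:1.36
          command: ["sh", "-c", "chown 101:0 /var/proget/database"]
          securityContext:
            runAsUser: 0            # must actually run as root for chown to succeed
          volumeMounts:
            - name: proget-data
              mountPath: /var/proget/database
```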
I waited for 25.11 to get the permission check logic in place. Now that that's resolved, there appears to be a new issue; based on the error alone I can't be 100% sure what it is, although I'm betting it's tied to permissions.
The instance shows healthy, but it fails to run due to a connection string issue. See error:
Updating certificates in /etc/ssl/certs...
142 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /usr/local/proget
Initializing embedded database...
fail: Microsoft.Extensions.Hosting.Internal.Host[11]
Hosting failed to start
System.InvalidOperationException: The ConnectionString property has not been initialized.
at Npgsql.ThrowHelper.ThrowInvalidOperationException(String message)
at Npgsql.NpgsqlConnection.Open(Boolean async, CancellationToken cancellationToken)
at Npgsql.NpgsqlConnection.Open()
at Inedo.ProGet.Data.PostgresDatabaseContext.CreateConnection() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E597550\Src\src\ProGet\Data\PostgresDatabaseContext.cs:line 58
at Inedo.ProGet.Data.VirtualDatabaseContext.PostgresContext.CreateConnection() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E597550\Src\src\ProGet\Data\VirtualDatabaseContext.cs:line 49
at Inedo.ProGet.Data.VirtualDatabaseContext.CreateConnection() in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E597550\Src\src\ProGet\Data\VirtualDatabaseContext.cs:line 24
at Inedo.Data.DatabaseContext.ExecuteInternal(String storedProcName, GenericDbParameter[] parameters, DatabaseCommandReturnType returnType)
at Inedo.Data.DatabaseContext.ExecuteNonQuery(String storedProcName, GenericDbParameter[] parameters)
at Inedo.Data.DatabaseContext.ExecuteScalar[TResult](String storedProcName, GenericDbParameter[] parameters, Int32 outParameterIndex)
at Inedo.ProGet.Data.DB.Context.CheckSqlServerDbo(Nullable`1 IsOwner_Indicator) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E597550\Src\src\ProGet\obj\Release\net8.0\linux-x64\InedoLib.Analyzers\InedoLib.Analyzers.DatabaseContextGenerator\DB.g.cs:line 2191
at Inedo.ProGet.Data.DB.CheckSqlServerDbo(Nullable`1 IsOwner_Indicator) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E597550\Src\src\ProGet\obj\Release\net8.0\linux-x64\InedoLib.Analyzers\InedoLib.Analyzers.DatabaseContextGenerator\DB.g.cs:line 183
at Inedo.ProGet.Data.DatabaseMan.CheckConnection(Boolean asyncWithRetryMode, CancellationToken cancellationToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E597550\Src\src\ProGet\Data\DatabaseMan.cs:line 62
at Inedo.ProGet.Service.ProGetService.OnStartAsync(CancellationToken cancellationToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E597550\Src\src\ProGet\Service\ProGetService.cs:line 40
at Inedo.ProGet.Service.ProGetService.ExecuteAsync(CancellationToken stoppingToken) in C:\Users\builds\AppData\Local\Temp\InedoAgent\BuildMaster\192.168.44.60\Temp\_E597550\Src\src\ProGet\Service\ProGetService.cs:line 26
at Microsoft.Extensions.Hosting.Internal.Host.<StartAsync>b__15_1(IHostedService service, CancellationToken token)
at Microsoft.Extensions.Hosting.Internal.Host.ForeachService[T](IEnumerable`1 services, CancellationToken token, Boolean concurrent, Boolean abortOnFirstException, List`1 exceptions, Func`3 operation)
And the config (note: I added the SSL certificate directory as a mounted volume to avoid a different permission problem when running the certificate updates).
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proget
  template:
    metadata:
      labels:
        app: proget
    spec:
      securityContext:
        runAsUser: 1024
        runAsGroup: 100
        runAsNonRoot: false
      containers:
        - name: proget
          image: proget.inedo.com/productimages/inedo/proget:latest
          ports:
            - containerPort: 8624
              name: http
          volumeMounts:
            - name: proget-data
              mountPath: /var/proget/database
            - name: proget-package
              mountPath: /var/proget/packages
            - name: proget-backup
              mountPath: /var/proget/backups
            - name: proget-certs
              mountPath: /etc/ssl/certs
      volumes:
        - name: proget-data
          persistentVolumeClaim:
            claimName: proget-data
        - name: proget-package
          persistentVolumeClaim:
            claimName: proget-package
        - name: proget-backup
          persistentVolumeClaim:
            claimName: proget-backup
        - name: proget-certs
          persistentVolumeClaim:
            claimName: proget-certs
The chown fails no matter what because it needs to be run as sudo.
I could set the permissions on the folder, but I'd have to create a new network share, because I'm mapping all Kubernetes connections to 1024:100 to keep the pods consistent. For the most part, this fixes any folder access issues a pod might have.
securityContext:
  runAsUser: 1024
  runAsGroup: 100
  runAsNonRoot: false
I chose NFS over iSCSI and SMB because it is more standard than the other two, and easier to manage than iSCSI. The shares were originally set up without squashing, but then it became a mess to keep permissions on the NAS matching the pods. It's been far easier telling a few pods to use the UID/GID than to manage both the deployments & the NAS permissions.
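For reference, the squash mapping on the NAS side amounts to something like this in /etc/exports terms (the path and subnet are placeholders; my NAS exposes the same options through its UI):

```
# /etc/exports - map every client UID/GID on this share to 1024:100
/export/k8s  10.0.0.0/24(rw,sync,all_squash,anonuid=1024,anongid=100)
```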
You can't chown within the containers without using sudo, which is why this fails. I can run chown on the NAS to get the permissions to match (I'll have to test tomorrow; DHCP is being a pain on the new servers being added).
For your last question, I would have assumed the configured UID/GID would take over for the postgres user, making the chown irrelevant at that point. I'm actually running into a similar problem with the AWX Helm chart, because it forcefully runs chmod 755 and chown 1000 without allowing any way to bypass it. The permissions are already set, so there's no need to change them.
When the pod starts, there's this error message:
Initializing embedded database...
chown: changing ownership of '/var/proget/database': Operation not permitted
When inside the pod, we can see that the permissions are properly set, as also verified on the NAS and via the K8s config.
ls -la /var/proget
total 0
drwxr-xr-x. 7 root root 82 Aug 22 02:52 .
drwxr-xr-x. 1 root root 20 Aug 22 02:52 ..
drwxrwxrwx. 1 1024 users 0 Aug 22 02:49 backups
drwxrwxrwx. 1 1024 users 0 Aug 22 02:54 database
drwxr-xr-x. 2 root root 6 Aug 22 02:52 extensions
drwxrwxrwx. 1 1024 users 0 Aug 22 02:49 packages
drwxr-xr-x. 2 root root 6 Aug 22 02:52 ssl
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proget
  namespace: proget
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proget
  template:
    metadata:
      labels:
        app: proget
    spec:
      securityContext:
        runAsUser: 1024
        runAsGroup: 100
        runAsNonRoot: false
      containers:
        - name: proget
          image: proget.inedo.com/productimages/inedo/proget:latest
          ports:
            - containerPort: 8624
              name: http
          volumeMounts:
            - name: proget-data
              mountPath: /var/proget/database
            - name: proget-package
              mountPath: /var/proget/packages
            - name: proget-backup
              mountPath: /var/proget/backups
      volumes:
        - name: proget-data
          persistentVolumeClaim:
            claimName: proget-data
        - name: proget-package
          persistentVolumeClaim:
            claimName: proget-package
        - name: proget-backup
          persistentVolumeClaim:
            claimName: proget-backup
This appears to be something in ProGet's setup that is trying to perform a chown, which is not allowed on the NFS share.