Configure security contexts
Overview
Important
These procedures apply only to ITRS Analytics installations deployed using Helm.
ITRS Analytics supports flexible security context configuration to meet your organization’s security policies, including compliance with Kubernetes Pod Security Standards and admission controllers such as Gatekeeper and Kyverno.
Security contexts control how containers run, including:
- User and group IDs — the IDs assigned to the container’s main process.
- Linux capabilities — which kernel capabilities are granted or dropped.
- Privilege escalation — whether processes can gain additional privileges.
- File system groups — the group assigned to every volume attached to the pod.
Manual recovery for user ID changes
Warning
Changing `runAsUser` after installation requires manual intervention and temporarily relaxing security policies. This procedure must be performed with guidance from ITRS Support.
If you need to change runAsUser on an existing installation with strict security policies (Pod Security Admission, Gatekeeper, or Kyverno), see Manual recovery: Change user IDs with strict security policies for the complete recovery procedure.
As a best practice, configure the correct runAsUser and fsGroup before initial deployment to avoid this complexity.
Advanced security configuration
For more information about security context configuration, see Using admission controllers and enabling optional logs collection which covers integration with admission controllers such as Gatekeeper, Kyverno, and Pod Security Admission (PSA), as well as the optional logs collection feature.
Security context default configuration
ITRS Analytics uses the following default security contexts:
Pod-level (.spec)
runAsUser: 10000
runAsGroup: 10000
supplementalGroups: [10000]
fsGroup: 10000
fsGroupChangePolicy: OnRootMismatch
Container-level (.spec.containers[] or .spec.initContainers[])
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop: [ALL]
seccompProfile:
type: RuntimeDefault
These defaults are suitable for immediate deployment on most Kubernetes clusters and adhere to the Pod Security Standard (PSS) at the Restricted level. The Restricted level enforces strong, least-privilege policies, such as prohibiting privileged containers and disallowed capabilities. This configuration is also compatible with Gatekeeper, Kyverno, and OpenShift Security Context Constraints.
For more information, see Pod Security Standards.
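To check that a target namespace enforces the Restricted level before deploying, the Pod Security Admission labels can be applied or dry-run tested. The namespace name below is a placeholder:

```shell
# Enforce the Restricted Pod Security Standard on the target namespace
kubectl label namespace <namespace> pod-security.kubernetes.io/enforce=restricted
# A server-side dry run warns about any existing workloads that would violate the level
kubectl label --dry-run=server --overwrite namespace <namespace> \
  pod-security.kubernetes.io/enforce=restricted
```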
When customization is needed
You may need to customize security contexts if:
- Your organization requires specific UID/GID ranges (different from default 10000).
- You’re deploying on OpenShift with project-specific UID ranges.
- Compliance policies mandate specific UIDs outside the default range.
Note
Configure security contexts before initial deployment. Changing `runAsUser` after installation requires manual intervention and temporarily relaxing security policies. See Manual recovery: Change user IDs with strict security policies for more information. If you change `runAsUser`, you must also change `fsGroup` at the same time. See the Troubleshooting section for complete details on valid security context changes.
Configuration
Basic configuration
You can customize security contexts for ITRS Analytics during installation by specifying the required values in the Helm values files for the Platform and each app.
iax:
securityContext:
pod:
runAsUser: 5000
runAsGroup: 5000
supplementalGroups: [5000]
fsGroup: 5000
fsGroupChangePolicy: OnRootMismatch
container:
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop: [ALL]
seccompProfile:
type: RuntimeDefault
This configuration applies only to the ITRS Analytics Platform components. Each app has its own Helm chart, so you must configure security contexts separately in the values file for every app.
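As a sketch, assuming the values above are saved in a file named values-security.yaml, they can be applied at install or upgrade time. The release, chart, and namespace names are placeholders, not actual ITRS chart names:

```shell
# Hypothetical invocation - substitute your actual release, chart, and namespace
helm upgrade --install <release> <chart> \
  --namespace <namespace> \
  -f values-security.yaml
```

Repeat the same step with each app’s own values file, since security contexts are configured per chart.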
Configuration reference
Pod security context

| Field | Description | Type | Example |
|---|---|---|---|
| `runAsUser` | Assign this user ID to the main process. | Number | `5000` |
| `runAsGroup` | Assign this group ID to the main process. | Number | `5000` |
| `supplementalGroups` | Additional group IDs to assign to the main process. | List of numbers | `[5000, 6000]` |
| `fsGroup` | Change group ownership of mounted volumes. | Number | `5000` |
| `fsGroupChangePolicy` | When to change volume ownership. | String | `OnRootMismatch` |
Container security context

| Field | Description | Type | Example |
|---|---|---|---|
| `runAsNonRoot` | (Recommended) Require a non-root user. | Boolean | `true` |
| `allowPrivilegeEscalation` | (Required for PSA “Restricted”) Prevent gaining additional privileges. | Boolean | `false` |
| `capabilities.drop` | (Recommended) Linux capabilities to remove. | List of strings | `[ALL]` |
| `seccompProfile.type` | (Required for PSA “Restricted”) Seccomp profile for syscall filtering. | String | `RuntimeDefault` |
Deployment scenarios
Configure custom user and group ID (UID/GID) ranges
Use this procedure when your organization has security policies that require ITRS Analytics to run with specific UID/GID ranges (for example, 5000-6000).
1. Determine the UID/GID range required by your organization’s security policy.

2. Edit your ITRS Analytics configuration to include the following security context settings:

       spec:
         securityContext:
           pod:
             runAsUser: 5000
             runAsGroup: 5000
             supplementalGroups: [5000]
             fsGroup: 5000
             fsGroupChangePolicy: OnRootMismatch
           container:
             runAsNonRoot: true
             allowPrivilegeEscalation: false
             capabilities:
               drop: [ALL]
             seccompProfile:
               type: RuntimeDefault

3. Replace `5000` with your organization’s required UID/GID values.

4. Apply the configuration changes.
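After the pods restart, a quick way to confirm the new identity took effect; the pod and namespace names are placeholders:

```shell
# Confirm the main process runs with the configured UID/GID
kubectl exec <pod-name> -n <namespace> -- id
# Expect output similar to: uid=5000 gid=5000 groups=5000
```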
Deploy with OpenShift-assigned UIDs
Use this procedure when deploying ITRS Analytics on Red Hat OpenShift Container Platform. OpenShift automatically assigns UID ranges to projects for enhanced security isolation.
1. Find your project’s UID range:

       oc describe project <namespace> | grep uid-range
       # Example output: openshift.io/sa.scc.uid-range: 1000680000/10000

2. Configure using the range start as `fsGroup`:

       spec:
         securityContext:
           pod:
             # Don't specify runAsUser - OpenShift assigns automatically
             fsGroup: 1000680000
           container:
             runAsNonRoot: true
             allowPrivilegeEscalation: false
             capabilities:
               drop: [ALL]
             seccompProfile:
               type: RuntimeDefault

3. After applying your configuration, verify the security contexts are working:

       # Check pod security context
       kubectl get pod <pod-name> -n <namespace> -o yaml | grep -A 10 "securityContext:"
       # Verify process UID/GID inside container
       kubectl exec <pod-name> -n <namespace> -- id
       # Check for policy violations (if using Gatekeeper)
       kubectl get constraints -A

   Expected output:

   - Process runs with your specified UID/GID
   - No policy violations
   - All pods in `Running` state
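The uid-range annotation encodes the range as `<start>/<size>`. A quick way to derive the `fsGroup` value and the last valid UID from it, sketched in Python (the function name is illustrative):

```python
def parse_uid_range(annotation: str) -> tuple[int, int]:
    """Parse an openshift.io/sa.scc.uid-range annotation of the form <start>/<size>.

    Returns (first_uid, last_uid); fsGroup is typically set to first_uid.
    """
    start, size = annotation.split("/")
    first = int(start)
    return first, first + int(size) - 1

first, last = parse_uid_range("1000680000/10000")
print(first, last)  # 1000680000 1000689999
```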
Troubleshooting
My pods don’t start after changing the security context
This issue can occur due to several reasons:
- The UID/GID is outside the allowed range for your cluster.
- There are conflicting pod security policies enforced by admission controllers.
- There are permission issues with attached volumes.
To resolve the issue:
1. Check the pod events:

       kubectl describe pod <pod-name>

2. Verify that your UID/GID is within the cluster’s allowed range.

3. Review admission controller logs to identify any policy violations.
Why am I seeing “Permission denied” errors in the logs?
These errors occur when the application cannot write to certain directories or volumes. Common causes include an incorrectly set fsGroup or a UID/GID mismatch with the volume ownership.
To resolve the issue:
1. Ensure `fsGroup` is set and matches your security policy.

2. If using OpenShift, set `fsGroup` to match the project’s UID range. See Deploy with OpenShift-assigned UIDs.

3. Check volume permissions by running:

       kubectl exec <pod> -- ls -la /data
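To pinpoint a mismatch, compare the process identity with the volume’s ownership. This assumes the container image provides the `id` and GNU `stat` utilities:

```shell
# UID/GID of the running process
kubectl exec <pod> -n <namespace> -- id
# Owner, group, and mode of the data directory
kubectl exec <pod> -n <namespace> -- stat -c '%u:%g %a %n' /data
```

If the group reported by `stat` differs from the process’s groups, `fsGroup` is not being applied as expected.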
Why is my pod creation being blocked by Gatekeeper or Kyverno policies?
This usually happens when the admission controller detects policy violations. Common causes include:
- Missing required security context fields
- Not dropping all capabilities
- Using an incorrect UID/GID range
To resolve the issue:
1. Check which constraints are being violated:

       kubectl get constraints -A

2. Review the details of the specific constraint:

       kubectl describe <constraint-type> <constraint-name>

3. Update your pod’s security context to comply with the policy requirements.
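For Kyverno, the equivalent checks list the policies and their violation reports. The resource names assume a standard Kyverno installation:

```shell
# List Kyverno cluster-wide policies
kubectl get clusterpolicies
# Policy reports show pass/fail results per namespace
kubectl get policyreports -A
```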
Why do my OpenShift pods fail with “User not allowed”?
This error occurs when OpenShift rejects pods due to permission issues. Common causes include:
- Specifying a `runAsUser` that conflicts with the Security Context Constraints (SCC).
- Using the wrong `fsGroup` for the project.
To resolve the issue:
1. Remove the `runAsUser` setting and allow OpenShift to assign it automatically.

2. Verify the project UID range:

       oc describe project <namespace> | grep uid-range

3. Set `fsGroup` to the start of the project’s UID range.
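To confirm which SCC admitted a pod, OpenShift records it in an annotation. The pod and namespace names are placeholders:

```shell
# Shows the SCC that admitted the pod, e.g. "restricted-v2"
oc get pod <pod-name> -n <namespace> \
  -o jsonpath='{.metadata.annotations.openshift\.io/scc}'
```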
Why are my pods being blocked with NET_ADMIN or NET_RAW capability violations?
This typically occurs when an admission controller blocks pods due to violations of strict security policies. Common causes include:
- Using Linkerd service mesh without CNI mode enabled.
- Linkerd injecting privileged `linkerd-init` containers that conflict with policies enforced by Pod Security Admission, Gatekeeper, Kyverno, or OpenShift SCCs.
To resolve the issue:
If you are using Linkerd with strict security policies, you must configure Linkerd in CNI mode. This involves:
- Installing the Linkerd CNI plugin before deploying the Linkerd control plane.
- Enabling CNI mode during Linkerd installation by setting `--set cniEnabled=true`.
- Following the complete setup instructions in the Linkerd CNI documentation.
CNI mode moves traffic redirection from privileged pod init containers to a node-level DaemonSet, allowing your application pods to remain compliant with “drop all capabilities” policies.
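A sketch of the CNI-mode installation order using the Linkerd CLI; verify the exact commands and flags against the Linkerd documentation for your version:

```shell
# 1. Install the CNI plugin (runs as a node-level DaemonSet)
linkerd install-cni | kubectl apply -f -
# 2. Install the control plane with CNI mode enabled
linkerd install --set cniEnabled=true | kubectl apply -f -
```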