Configure security contexts

Overview

Important

These procedures apply only to ITRS Analytics installations deployed using Helm.

ITRS Analytics supports flexible security context configuration to meet your organization’s security policies, including compliance with Kubernetes Pod Security Standards and admission controllers such as Gatekeeper and Kyverno.

Security contexts control how containers run, including:

  • The user and group IDs that container processes run as
  • Group ownership of mounted volumes
  • The Linux capabilities available to containers
  • Whether processes can gain additional privileges
  • The seccomp profile used for syscall filtering

Manual recovery for user ID changes

Warning

Changing runAsUser after installation requires manual intervention and temporarily relaxing security policies. This procedure must be performed with guidance from ITRS Support.

If you need to change runAsUser on an existing installation with strict security policies (Pod Security Admission, Gatekeeper, or Kyverno), see Manual recovery: Change user IDs with strict security policies for the complete recovery procedure.

As a best practice, configure the correct runAsUser and fsGroup before initial deployment to avoid this complexity.

Advanced security configuration

For more information about security context configuration, see Using admission controllers and enabling optional logs collection, which covers integration with admission controllers such as Gatekeeper, Kyverno, and Pod Security Admission (PSA), as well as the optional logs collection feature.

Security context default configuration

ITRS Analytics uses the following default security contexts:

Pod-level (.spec)

runAsUser: 10000
runAsGroup: 10000
supplementalGroups: [10000]
fsGroup: 10000
fsGroupChangePolicy: OnRootMismatch

Container-level (.spec.containers[] or .spec.initContainers[])

runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
  drop: [ALL]
seccompProfile:
  type: RuntimeDefault

These defaults are suitable for immediate deployment on most Kubernetes clusters and adhere to the Pod Security Standard (PSS) at the Restricted level. The Restricted level enforces strong, least-privilege policies, such as prohibiting privileged containers and disallowing the addition of Linux capabilities. This configuration is also compatible with Gatekeeper, Kyverno, and OpenShift Security Context Constraints.

For more information, see Pod Security Standards.
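Because the defaults meet the Restricted level, you can deploy into a namespace that enforces it. A minimal sketch using the standard Pod Security Admission labels (the namespace name here is illustrative):

```shell
# Enforce the Restricted Pod Security Standard on the target namespace.
# Pods that do not meet the standard are rejected at admission time.
kubectl label namespace itrs-analytics \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest
```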

When customization is needed

You may need to customize security contexts if:

  • Your organization’s security policy requires specific UID/GID ranges.
  • You are deploying on OpenShift, which assigns UID ranges to each project.
  • Your cluster’s admission controllers enforce specific user or group ID requirements.

Note

Configure security contexts before initial deployment. Changing runAsUser after installation requires manual intervention and temporarily relaxing security policies. See Manual recovery: Change user IDs with strict security policies for more information.

If you change runAsUser, you must also change fsGroup at the same time. See the Troubleshooting section for complete details on valid security context changes.

Configuration

Basic configuration

You can customize security contexts for ITRS Analytics during installation by specifying the required values in the Helm values files for the Platform and each app.

iax:
  securityContext:
    pod:
      runAsUser: 5000
      runAsGroup: 5000
      supplementalGroups: [5000]
      fsGroup: 5000
      fsGroupChangePolicy: OnRootMismatch
    container:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ALL]
      seccompProfile:
        type: RuntimeDefault

This configuration applies only to the ITRS Analytics Platform components. Each app has its own Helm chart, so you must configure security contexts separately in the values file for every app.
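For example, assuming the values above are saved as platform-values.yaml, the settings can be applied at install time. The release, chart, and namespace names below are placeholders; substitute the names used in your environment:

```shell
# Install (or upgrade) the Platform chart with the custom security contexts.
helm upgrade --install itrs-analytics itrs/iax \
  --namespace itrs-analytics \
  --values platform-values.yaml

# Repeat for each app, using that app's chart and its own values file.
```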

Configuration reference

Pod security context

Field Description Type Example
runAsUser Assign this user ID to the main process. Number 5000
runAsGroup Assign this group ID to the main process. Number 5000
supplementalGroups Additional group IDs to assign to the main process. List of numbers [5000, 6000]
fsGroup Change group ownership of mounted volumes. Number 5000
fsGroupChangePolicy When to change volume ownership. String OnRootMismatch

Container security context

Field Description Type Example
runAsNonRoot (Recommended) Require non-root user. Boolean true
allowPrivilegeEscalation (Required for PSA “Restricted”) Prevent gaining additional privileges. Boolean false
capabilities.drop (Recommended) Linux capabilities to remove. List of strings [ALL]
seccompProfile.type (Required for PSA “Restricted”) Seccomp profile for syscall filtering. String RuntimeDefault

Deployment scenarios

Configure custom user and group ID (UID/GID) ranges

Use this procedure when your organization has security policies that require ITRS Analytics to run with specific UID/GID ranges (for example, 5000-6000).

  1. Determine the UID/GID range required by your organization’s security policy.

  2. Edit your ITRS Analytics configuration to include the following security context settings:

    spec:
      securityContext:
        pod:
          runAsUser: 5000
          runAsGroup: 5000
          supplementalGroups: [5000]
          fsGroup: 5000
          fsGroupChangePolicy: OnRootMismatch
        container:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
          capabilities:
            drop: [ALL]
          seccompProfile:
            type: RuntimeDefault
    
  3. Replace 5000 with your organization’s required UID/GID values.

  4. Apply the configuration changes.
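After applying the change, a quick way to confirm that the processes picked up the new IDs (expected values follow the 5000 example above):

```shell
# Confirm that the main process runs with the configured UID/GID.
# With runAsUser/runAsGroup/fsGroup set to 5000, `id` should report
# uid=5000, gid=5000, and 5000 among the groups.
kubectl exec <pod-name> -n <namespace> -- id
```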

Deploy with OpenShift-assigned UIDs

Use this procedure when deploying ITRS Analytics on Red Hat OpenShift Container Platform. OpenShift automatically assigns UID ranges to projects for enhanced security isolation.

  1. Find your project’s UID range.

    oc describe project <namespace> | grep uid-range
    # Example output: openshift.io/sa.scc.uid-range: 1000680000/10000
    
  2. Configure using the range start as fsGroup:

    spec:
      securityContext:
        pod:
          # Don't specify runAsUser - OpenShift assigns automatically
          fsGroup: 1000680000
        container:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
          capabilities:
            drop: [ALL]
          seccompProfile:
            type: RuntimeDefault
    
  3. After applying your configuration, verify the security contexts are working:

    # Check pod security context
    kubectl get pod <pod-name> -n <namespace> -o yaml | grep -A 10 "securityContext:"
    
    # Verify process UID/GID inside container
    kubectl exec <pod-name> -n <namespace> -- id
    
    # Check for policy violations (if using Gatekeeper)
    kubectl get constraints -A
    

    Expected output:

    • Process runs with your specified UID/GID
    • No policy violations
    • All pods in Running state

Troubleshooting

My pods don’t start after changing the security context

This issue can occur for several reasons, including a UID/GID outside the cluster’s allowed range, an admission controller policy violation, or a volume permission mismatch.

To resolve the issue:

  1. Check the pod events using:

    kubectl describe pod <pod-name>
    
  2. Verify that your UID/GID is within the cluster’s allowed range.

  3. Review admission controller logs to identify any policy violations.
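Step 3 depends on which admission controller you run. As an illustration, with default installations the controller logs can usually be fetched as shown below; the namespaces and deployment names vary by version and install method, so adjust them for your cluster:

```shell
# Gatekeeper (default installation):
kubectl logs -n gatekeeper-system deployment/gatekeeper-controller-manager

# Kyverno (default installation):
kubectl logs -n kyverno deployment/kyverno-admission-controller
```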

Why am I seeing “Permission denied” errors in the logs?

These errors occur when the application cannot write to certain directories or volumes. Common causes include an incorrectly set fsGroup or a UID/GID mismatch with the volume ownership.

To resolve the issue:

  1. Ensure fsGroup is set and matches your security policy.

  2. If using OpenShift, set fsGroup to match the project’s UID range. See Deploy with OpenShift-assigned UIDs.

  3. Check volume permissions by running:

    kubectl exec <pod> -- ls -la /data
    

Why is my pod creation being blocked by Gatekeeper or Kyverno policies?

This usually happens when the admission controller detects policy violations. Common causes include a runAsUser value outside the allowed range, a missing required security context field, or capabilities that the policy does not permit.

To resolve the issue:

  1. Check which constraints are being violated.

    kubectl get constraints -A
    
  2. Review the details of the specific constraint.

    kubectl describe <constraint-type> <constraint-name>
    
  3. Update your pod’s security context to comply with the policy requirements.

Why do my OpenShift pods fail with “User not allowed”?

This error occurs when OpenShift rejects pods due to permission issues. The most common cause is a runAsUser value that falls outside the project’s assigned UID range.

To resolve the issue:

  1. Remove the runAsUser setting and allow OpenShift to assign it automatically.

  2. Verify project UID range.

    oc describe project <namespace> | grep uid-range
    
  3. Set the fsGroup to the start of the project’s UID range.

Why are my pods being blocked with NET_ADMIN or NET_RAW capability violations?

This typically occurs when an admission controller blocks pods that violate strict security policies. A common cause is a service mesh such as Linkerd, whose proxy init containers request the NET_ADMIN and NET_RAW capabilities to redirect traffic.

To resolve the issue:

If you are using Linkerd with strict security policies, you must configure Linkerd in CNI mode. This involves:

  1. Installing the Linkerd CNI plugin before deploying the Linkerd control plane.
  2. Enabling CNI mode during Linkerd installation by setting --set cniEnabled=true.
  3. Following the complete setup instructions in the Linkerd CNI documentation.

CNI mode moves traffic redirection from privileged pod init containers to a node-level DaemonSet, allowing your application pods to remain compliant with “drop all capabilities” policies.
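As a sketch, the steps above can be performed with the Linkerd CLI; check the Linkerd CNI documentation for the exact flags that match your Linkerd version:

```shell
# 1. Install the CNI plugin DaemonSet before the control plane.
linkerd install-cni | kubectl apply -f -

# 2. Install the control plane with CNI mode enabled, so data-plane
#    pods no longer need an init container with NET_ADMIN/NET_RAW.
linkerd install --linkerd-cni-enabled | kubectl apply -f -
```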
