Using admission controllers and enabling optional logs collection

Important

These procedures apply only to ITRS Analytics installations deployed using Helm.

Please contact ITRS Support for guidance before performing this procedure.

This topic covers advanced security context configuration, including integration with admission controllers, optional features with specific security requirements, and frequently asked questions.

Integration with admission controllers

Gatekeeper

ITRS Analytics is fully compatible with Gatekeeper policies, allowing you to enforce Kubernetes security constraints across your deployments. When using Gatekeeper, you can apply common policy templates, such as those from the community gatekeeper-library, to enforce these constraints.

To ensure smooth operation, configure the security contexts in your ITRS Analytics workloads to comply with the constraints defined in your Gatekeeper policies. This alignment ensures both security compliance and proper functioning of your applications.
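
For illustration, here is a minimal constraint of the kind you might already enforce, based on the community gatekeeper-library (the template, constraint name, and namespace shown here are assumptions about your environment, not objects shipped with ITRS Analytics):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPAllowPrivilegeEscalationContainer
metadata:
  name: deny-privilege-escalation
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["itrs"]

A constraint like this is compatible with the core platform defaults, but it would block the optional log-collector described later in this topic, which requires allowPrivilegeEscalation: true.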

Kyverno

Similar to Gatekeeper, configure security contexts to match your Kyverno policies.
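
As a sketch, a Kyverno cluster policy that requires every container to run as a non-root user might look like the following (the policy name, scope, and Audit action are illustrative assumptions, not something shipped with ITRS Analytics):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: Audit    # switch to Enforce once all workloads comply
  rules:
    - name: check-containers-run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Containers must set securityContext.runAsNonRoot to true."
        pattern:
          spec:
            containers:
              - securityContext:
                  runAsNonRoot: true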

Pod Security Admission (PSA)

Pod Security Admission (PSA) is Kubernetes’ built-in admission controller, introduced in v1.23, that enforces Pod Security Standards on pods. It provides a mechanism to control the security posture of pods at creation time, helping prevent misconfigurations and ensuring compliance with security best practices.

ITRS Analytics supports all three PSS levels: privileged, baseline, and restricted.

For the PSS restricted level, the default ITRS Analytics configuration already includes all of the fields the profile requires.
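
In sketch form, the container-level fields checked by the restricted profile look like this (values mirror the Kubernetes Pod Security Standards rather than a literal excerpt from the ITRS Analytics values file):

securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault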

To enable PSS enforcement at the restricted level for your namespace:

kubectl label namespace <namespace> \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted
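
You can confirm the labels were applied by listing them on the namespace (an illustrative check):

kubectl get namespace <namespace> --show-labels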

Optional features: Logs collection

ITRS Analytics provides an optional logs collection feature that enables collection of container logs from all nodes in the cluster. This is implemented via a DaemonSet named log-collector. By default, this feature is disabled and must be explicitly enabled if needed.

Warning

Logs collection has specific security requirements and may not be compatible with all cluster security policies:

  • The feature is not compatible with Pod Security Standards (PSS) set to restricted or baseline levels.
  • It is only compatible with PSS set to the privileged level.
  • It is not compatible with Gatekeeper policies that enforce a drop-all capability constraint.

Before enabling logs collection, ensure your cluster meets these security requirements to avoid deployment issues or runtime failures.

Understanding the requirements

The log-collector DaemonSet has specific requirements to function correctly:

  1. hostPath volumes — needed to read container log files directly from the host filesystem, including paths such as /var/log/containers, /var/log/pods, and /var/lib/docker/containers (see the sketch after this list).
  2. allowPrivilegeEscalation: true — required for the Java binary used in the collector, which relies on file capabilities.
  3. DAC_OVERRIDE capability — allows the collector to read log files that are owned by other users.
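
For illustration, here is a trimmed sketch of how such hostPath mounts typically appear in a DaemonSet spec (the volume and container names are illustrative, not the actual chart values):

spec:
  template:
    spec:
      containers:
        - name: log-collector
          volumeMounts:
            - name: varlogpods
              mountPath: /var/log/pods
              readOnly: true
            - name: varlogcontainers
              mountPath: /var/log/containers
              readOnly: true
      volumes:
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers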

These settings violate standard Pod Security Standards (PSS) at both the restricted and baseline levels, and also conflict with typical Gatekeeper policies that enforce a drop-all capability constraint.

When to enable log collection

Enable the log-collector DaemonSet only if all of the following conditions are met:

  • You need container logs from every node in the cluster collected centrally by ITRS Analytics.
  • The target namespace runs with PSS at the privileged level, or PSA is not enforced.
  • No Gatekeeper or Kyverno policies enforce a drop-all capability constraint or block hostPath volumes and privilege escalation.

When to keep log collection disabled

Keep the log-collector DaemonSet disabled if any of the following apply:

  • Your namespace enforces PSS at the restricted or baseline level.
  • Gatekeeper or Kyverno policies enforce a drop-all capability constraint, or disallow hostPath volumes or privilege escalation.
  • You do not need node-level container log collection.

Configuration

Logs collection is controlled by the collection.logs.enabled parameter.

For Helm installations, you can enable it by setting the parameter in the Helm values files for the platform.

To enable it via the IAXPlatform resource (only if security policies allow):

apiVersion: itrsgroup.com/v1
kind: IAXPlatform
metadata:
  name: iax-production
spec:
  collection:
    logs:
      enabled: true
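
If you manage the setting through the platform Helm values file instead, the same flag sits under the collection.logs key (a minimal sketch, consistent with the securityContext example below):

collection:
  logs:
    enabled: true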

The log-collector uses a dedicated security context (separate from other platform components):

collection:
  logs:
    securityContext:
      pod:
        runAsUser: 10000
        runAsGroup: 10000
        supplementalGroups: [10000]
        fsGroup: 10000
        fsGroupChangePolicy: OnRootMismatch
      container:
        runAsNonRoot: true
        allowPrivilegeEscalation: true      # Required for setcap binary
        capabilities:
          add: [DAC_OVERRIDE]                # Required to read other users' files
          drop: [ALL]
        seccompProfile:
          type: RuntimeDefault

Note

You cannot override this security context to make logs collection work with PSS restricted or baseline. The requirements are fundamental to how the log-collector accesses host filesystem logs.

Verification

If you enable logs collection, verify it’s working:

# Check DaemonSet status
kubectl get daemonset log-collector -n itrs

# Check log-collector pods (should be 1 per node)
kubectl get pods -n itrs -l app=log-collector

# Check logs are being collected
kubectl logs -n itrs -l app=log-collector --tail=20

If pods fail with security policy violations:

  1. Verify PSS level: kubectl get namespace itrs -o jsonpath='{.metadata.labels.pod-security\.kubernetes\.io/enforce}'
  2. Check Gatekeeper constraints: kubectl get constraints -A
  3. Disable logs collection by setting collection.logs.enabled: false (see the sketch below).
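
For example, on a Helm installation the flag can be turned off in place (the release and chart names here are placeholders):

helm upgrade <release-name> <itrs-analytics-chart> \
  --namespace itrs \
  --reuse-values \
  --set collection.logs.enabled=false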

Best practices for logs collection

By default, logs collection should remain disabled. Enable it only if you have specific requirements that necessitate it and you have implemented the appropriate security policies to mitigate potential risks.

It is important to understand the security trade-offs involved: enabling logs collection may require relaxing certain security policies, which could increase the system’s exposure. Always evaluate the necessity of logs collection against the associated security implications before enabling this feature.

Troubleshooting

Which security context parameters can I safely change after installation?

Critical rule: If you change runAsUser, you must also change fsGroup at the same time. This is required for all stateful workloads (PostgreSQL, TimescaleDB, etcd, Kafka) to maintain correct file ownership and permissions.

Parameters safe to change independently:

Parameters that must change together: runAsUser and fsGroup (see the example below).

Example of a valid change:

# Before (initial installation)
runAsUser: 10000
fsGroup: 10000

# After (valid change)
runAsUser: 5000
fsGroup: 5000  # MUST change when runAsUser changes

Important

Changing runAsUser post-installation requires manual intervention and temporarily relaxing security policies. See Manual recovery: Change user IDs with strict security policies for the complete procedure. It is strongly recommended to configure these parameters correctly during initial deployment to avoid this complexity.

Can I use different security contexts for different components?

Platform pods all run with the same security context, so it is not possible to configure different security contexts for individual platform components such as etcd or Kafka.

However, each app has its own security context and technically could be configured differently. This is strongly discouraged, as it can compromise the overall security posture.

What happens if I don’t specify a security context?

ITRS Analytics applies sensible defaults, including UID 10000, drop all capabilities, and running as a non-root user, across all platform workloads and apps. These defaults are sufficient for most cluster environments, so specifying a custom security context is usually not necessary.
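
Expressed as values, those defaults correspond roughly to the following (a sketch assembled from the settings described in this topic, not a literal excerpt of the chart defaults):

securityContext:
  pod:
    runAsUser: 10000
    runAsGroup: 10000
    fsGroup: 10000
  container:
    runAsNonRoot: true
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
    seccompProfile:
      type: RuntimeDefault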

Can I use UID 0 (root)?

While it is technically possible to use UID 0, it is not recommended. Many clusters enforce security policies that may block root access. It is best to use a non-root UID for security and compliance reasons.

How do I find my cluster’s allowed UID range?

The method depends on your cluster type. Standard Kubernetes clusters do not restrict UIDs unless an admission policy imposes a range, while OpenShift assigns each namespace an allowed UID range.
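
For example, on OpenShift the allowed range for a namespace is published as an annotation, which you can read directly (a common check, not specific to ITRS Analytics):

kubectl get namespace <namespace> \
  -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}'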

Does changing the security context require downtime?

Yes. Updating the security context requires the pods to be recreated, so you should plan to perform this change during a maintenance window to minimize impact.

Can I use Linkerd service mesh with strict security policies?

Yes, but you need to configure Linkerd in CNI mode.

The default Linkerd installation uses linkerd-init containers that require NET_ADMIN and NET_RAW capabilities, which conflict with policies that enforce drop all capabilities (such as Pod Security Admission, Gatekeeper, or Kyverno). CNI mode removes the need for these privileged init containers by handling traffic redirection at the node level via a DaemonSet.
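
A typical sequence with the Linkerd CLI looks like the following (check the Linkerd documentation for the exact commands and flags for your version):

# Install the Linkerd CNI plugin DaemonSet first
linkerd install-cni | kubectl apply -f -

# Then install the control plane with CNI mode enabled
linkerd install --linkerd-cni-enabled | kubectl apply -f -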

For more details, consult the Linkerd CNI documentation.

["ITRS Analytics"] ["User Guide", "Technical Reference"]

Was this topic helpful?