Configuration reference

Important

This information refers to the previous Helm install method for the Obcerv Platform. If you want to install using the more streamlined Kubernetes Off-the-Shelf (KOTS) method, see the updated installation overview.

ITRS provides sample configuration files for installing an Obcerv instance. You can download sample scenarios from Sample configuration files.

Advanced configuration settings are all optional; you do not need to change the settings described here for a default installation of Obcerv.

Resource settings

For each Obcerv workload there is a resources parameter that defines the resource requests and limits for the pods in that workload. We recommend setting these parameters for all applications; otherwise, some pods may consume all available resources on your Kubernetes cluster.

The provided configuration examples set baseline requests and limits for all workloads. These can be altered as needed, for example:

kafka:
  resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"

Timescale volume sizing

By setting certain parameters, you can control the sizing and distribution of persistent data volumes for Timescale. The following are sample values:

- dataDiskSize: "50Gi"
- timeseriesDiskCount: 4
- timeseriesDiskSize: "10Ti"
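
In a values file, these settings might be grouped as follows. Note that nesting them under the timescale block is an assumption here; confirm the exact placement against the sample configuration files:

timescale:
  dataDiskSize: "50Gi"
  timeseriesDiskCount: 4
  timeseriesDiskSize: "10Ti"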

There are two types of data stored in Timescale: timeseries data and non-timeseries data. By default, all data is stored on the data volume, which is sized by the dataDiskSize parameter, and the timeseriesDiskCount parameter is set to 0. However, timeseries data can instead be stored on isolated volumes that may use a different storage class and can be scaled when more storage is needed.

When the timeseriesDiskCount parameter is set to a value greater than 0, all timeseries data is distributed among the timeseries data volumes, each of which has the size defined by the timeseriesDiskSize parameter. This is helpful when a cloud provider constrains the maximum volume size (for example, AWS imposes a maximum volume size of 16Ti).

To calculate the total disk volume size utilised by Timescale, use the following formula:

dataDiskSize + (timeseriesDiskSize * timeseriesDiskCount)
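
For example, with the sample values shown above:

50Gi + (10Ti * 4) = 40Ti + 50Gi ≈ 40.05Ti total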

Timescale retention settings

To adjust metric data retention, update the timescale.retention parameter. Values use the format <number><unit>, such as 5d or 45m. The supported time units are:

- m  (minutes)
- h  (hours)
- d  (days)
- mo (months)
- y  (years)

timescale:
  retention:
    metrics:
      chunkSize: 3h
      retention: 60d
      compressAfter: 3h
  ...

Kafka retention settings

The kafka.defaultRetentionMillis parameter controls how long data is retained in Kafka topics. It defaults to 21600000 (6 hours).
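
For example, to double the retention period to 12 hours (43200000 milliseconds):

kafka:
  defaultRetentionMillis: 43200000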

Timescale or Kafka on reserved nodes

For larger deployments, you may want to run Timescale, Kafka, or both on reserved Kubernetes nodes. This can be achieved by applying labels and taints to the nodes, and tolerations and a nodeSelector to the workloads.

The following example shows how to deploy Timescale on reserved nodes:

  1. Add a label to the Timescale nodes in the manifest or using kubectl (see the kubectl sketch after this list):

    instancegroup: timescale-nodes
    
  2. Add a taint to the Timescale nodes in the manifest or using kubectl (see the kubectl sketch after this list):

    dedicated=timescale-nodes:NoSchedule
    
  3. Set the following in your parameters file:

    timescale:
    
      nodeSelector:
        # only schedule on nodes that have this label
        instancegroup: timescale-nodes
    
      tolerations:
      # must match the tainted node setting 
      - key: dedicated
        operator: Equal
        value: timescale-nodes
        effect: NoSchedule
    
  4. (Optional) For Obcerv to collect pod logs from the reserved nodes, the following tolerations are required so that the logs agent can run on these nodes:

    collection:
      daemonSet:
        tolerations:
        # must match the tainted node setting 
        - key: dedicated
          operator: Equal
          value: timescale-nodes
          effect: NoSchedule
        - key: dedicated
          operator: Equal
          value: kafka-nodes
          effect: NoSchedule
    
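
The label and taint from steps 1 and 2 can also be applied with kubectl. A minimal sketch, assuming a node named node-1 (a hypothetical name; repeat for each reserved node):

kubectl label nodes node-1 instancegroup=timescale-nodes
kubectl taint nodes node-1 dedicated=timescale-nodes:NoSchedule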

Ingestion services

Obcerv supports ingesting data via two gRPC endpoints, both served from the same external hostname (as defined by ingestion.externalHostname) and port (443).

In production environments, you can disable one of the two ingestion services to slightly reduce the CPU and memory footprint of the ingestion pod. To do this, set the ingestion.internalEnabled or ingestion.otelEnabled parameter to false; both default to true.
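
For example, to disable the OpenTelemetry ingestion service while keeping the internal ingestion service enabled:

ingestion:
  internalEnabled: true
  otelEnabled: false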

Note

You cannot disable both ingestion services at the same time.

OpenTelemetry ingestion service

While raw traces are currently not stored, the OpenTelemetry ingestion service in the Obcerv Platform captures the following metrics (where <span> is the span name):

When pointing your instrumentation to the Obcerv OpenTelemetry API, make sure to:

For more information, refer to the instrumentation guide from OpenTelemetry.

Kubernetes log collection

Obcerv is configured to collect logs for Kubernetes pods and containers in the configured namespaces; by default, only the Obcerv installation namespace is collected. This behavior is controlled by the following setting:

collection:
  logs:
    enabled: true
    # Restrict collection to specific namespaces. If empty, only the Obcerv installation namespace
    # will be collected. To collect all, specify "*".
    namespaces: []
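
For example, to collect logs from all namespaces rather than just the Obcerv installation namespace:

collection:
  logs:
    enabled: true
    namespaces: ["*"]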

Note

For the log collection process to function properly, the obcerv-operator Kubernetes service account must have appropriate permissions to create resources with read and write access to hostPath volumes. The process for granting these permissions may vary depending on your Kubernetes distribution.
["Obcerv"] ["User Guide", "Technical Reference"]

Was this topic helpful?