Sample configuration for a micro-sized cluster with AWS ALB Ingress controller (micro, no HA)

Download this sample micro-sized cluster configuration with the AWS ALB Ingress controller, provided by ITRS for installations with High Availability (HA) disabled.
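
After downloading, pass the file to Helm when installing Obcerv. A minimal sketch, in which the chart reference, values file name, and namespace are placeholders for your environment (see the Obcerv installation guide for the exact commands):

    helm install obcerv <obcerv-chart> -f obcerv-micro-alb.yaml -n <namespace>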

# Example Obcerv configuration for a micro-sized instance handling 10k entities, 500k time series, 3k metrics/sec,
# with no HA and the AWS ALB Ingress controller.
#
# The resource requests total ~6 cores and ~29 GiB memory and do not include optional Linkerd resources.
#
# Approximate disk requirements for 7 days of retention, depending on the amount and type of data being ingested:
# - Timescale:
#   - 100 GiB data disk
#   - 40 GiB WAL disk
# - Kafka: 100 GiB
# - Loki: 5 GiB
# - etcd: 1 GiB
# - Downsampled Metrics:
#   - Raw: 2 GiB
#   - Bucketed: 2 GiB
#
# Approximate disk requirements for the default retention, depending on the amount and type of data being ingested
# (see the example values after this list):
# - Timescale:
#   - 500 GiB data disk
#   - 40 GiB WAL disk
# - Kafka: 100 GiB
# - Loki: 10 GiB
# - etcd: 1 GiB
# - Downsampled Metrics:
#   - Raw: 5 GiB
#   - Bucketed: 5 GiB
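#
# For example, to size this configuration for the default retention instead, the corresponding values further
# below would change roughly as follows (a sketch based on the figures above; the Kafka, WAL, and etcd sizes
# stay the same):
#   timescale.dataDiskSize: "500Gi"
#   loki.diskSize: "10Gi"
#   downsampledMetricsStream.diskSize: "5Gi"
#   downsampledMetricsStream.bucketedDiskSize: "5Gi"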
#
# The AWS Load Balancer Controller is required in order to support external ingestion.  This example assumes version
# 2.3.0 or later is installed. See https://kubernetes-sigs.github.io/aws-load-balancer-controller/.
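#
# If the controller is not yet installed, a typical installation uses the eks-charts Helm repository (a sketch
# only; the cluster name is a placeholder and the required IAM role/service account setup is omitted, see the
# controller documentation linked above):
#   helm repo add eks https://aws.github.io/eks-charts
#   helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
#     -n kube-system --set clusterName=<your-cluster-name>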
#
# The AWS Load Balancer Controller requires annotations for each ingress configured below.
# Be sure to change the certificate ARN and group name.  The group name can be any unique value (for example,
# use the same value you set for externalHostname), but it must be the same for the `apps` and `iam` ingresses.
#
# The `alb.ingress.kubernetes.io/target-type` annotation controls how traffic is routed to pods.  The simplest
# option ("ip") is used below.  If this is not supported in your cluster, the default setting of "instance" must be
# used instead and all services backed by each ingress must be changed to NodePort instead of the default ClusterIP.
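#
# A sketch of switching one backing service to NodePort with kubectl (the namespace and service names are
# placeholders; note that Helm upgrades may revert manual patches, so prefer a chart setting for the service
# type if one is available):
#   kubectl -n <obcerv-namespace> patch svc <service-name> -p '{"spec":{"type":"NodePort"}}'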
#

apps:
  externalHostname: "obcerv.mydomain.internal"
  ingress:
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:...
      alb.ingress.kubernetes.io/group.name: obcerv.mydomain.internal
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/target-type: ip
ingestion:
  externalHostname: "obcerv-ingestion.mydomain.internal"
  ingress:
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/backend-protocol-version: GRPC
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:...
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
iam:
  ingress:
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:...
      alb.ingress.kubernetes.io/group.name: obcerv.mydomain.internal
      alb.ingress.kubernetes.io/ssl-redirect: "443"
      alb.ingress.kubernetes.io/target-type: ip
kafka:
  resources:
    limits:
      memory: "3Gi"
    requests:
      memory: "3Gi"
  diskSize: "100Gi"
timescale:
  dataDiskSize: "100Gi"
  walDiskSize: "40Gi"
  resources:
    limits:
      memory: "6Gi"
    requests:
      memory: "6Gi"
loki:
  diskSize: "5Gi"
downsampledMetricsStream:
  diskSize: "2Gi"
  bucketedDiskSize: "2Gi"

# For higher-volume installations, it is recommended to run additional sinkd replicas.
#sinkd:
#  replicas: 2
#  rawReplicas: 2
["Obcerv"] ["User Guide", "Technical Reference"]

Was this topic helpful?