Helm Deployment Guide

This guide provides comprehensive instructions for deploying the Locust Kubernetes Operator using its official Helm chart.

Quick Start

For experienced users, here are the essential commands to get the operator running:

helm repo add locust-k8s-operator https://abdelrhmanhamouda.github.io/locust-k8s-operator/
helm repo update
helm install locust-operator locust-k8s-operator/locust-k8s-operator \
  --namespace locust-system --create-namespace

Installation

Prerequisites

  • A running Kubernetes cluster (e.g., Minikube, GKE, EKS, AKS).
  • Helm 3 installed on your local machine.

Step 1: Add the Helm Repository

First, add the Locust Kubernetes Operator Helm repository to your local Helm client:

helm repo add locust-k8s-operator https://abdelrhmanhamouda.github.io/locust-k8s-operator/

Next, update your local chart repository cache to ensure you have the latest version:

helm repo update

Step 2: Install the Chart

You can install the chart with a release name of your choice (e.g., locust-operator).

Default Installation:

To install the chart with the default configuration, run:

helm install locust-operator locust-k8s-operator/locust-k8s-operator \
  --namespace locust-system --create-namespace

Installation with a Custom Values File:

For more advanced configurations, it's best to use a custom values file. Create a file named my-values.yaml and add your overrides:

# my-values.yaml
replicaCount: 2

locustPods:
  resources:
    limits:
      cpu: "2000m"
      memory: "2048Mi"
    requests:
      cpu: "500m"
      memory: "512Mi"

If you are migrating from v1, the old configuration format continues to work via the chart's compatibility shims:

# my-values.yaml (old v1 format - still works via compatibility shims)
replicaCount: 2

config:
  loadGenerationPods:
    resource:
      cpuLimit: "2000m"
      memLimit: "2048Mi"

Then, install the chart, specifying your custom values file and a target namespace:

helm install locust-operator locust-k8s-operator/locust-k8s-operator \
  --namespace locust-system \
  --create-namespace \
  -f my-values.yaml

Verifying the Installation

After installation, you can verify that the operator is running correctly by checking the pods in the target namespace:

kubectl get pods -n locust-system

You should see a pod with a name similar to locust-operator-b5c9f4f7-xxxxx in the Running state.

To view the operator's logs, run:

kubectl logs -f -n locust-system -l app.kubernetes.io/name=locust-k8s-operator

Configuration

The following tables list the configurable parameters of the Locust Operator Helm chart and their default values.

v2.0 Changes

The v2 Helm chart has been updated for the Go operator. Java-specific settings (Micronaut, JVM) have been removed. Backward compatibility shims are provided for common settings.

Deployment Settings

| Parameter | Description | Default |
| --- | --- | --- |
| replicaCount | Number of replicas for the operator deployment. | 2 |
| image.repository | The repository of the Docker image. | lotest/locust-k8s-operator |
| image.pullPolicy | The image pull policy. | IfNotPresent |
| image.tag | Overrides the default image tag (defaults to the chart's appVersion). | "" |
| image.pullSecrets | List of image pull secrets. | [] |
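As an illustration, the deployment settings above could be overridden to pin the image to a specific tag and pull from a private registry. The tag and secret name below are placeholders, not values from the chart:

```yaml
# my-values.yaml (sketch; tag and secret name are hypothetical)
replicaCount: 2

image:
  repository: lotest/locust-k8s-operator
  pullPolicy: IfNotPresent
  tag: "v2.1.0"                    # placeholder; defaults to the chart's appVersion when ""
  pullSecrets:
    - name: my-registry-secret     # placeholder secret in the release namespace
```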

Kubernetes Resources

| Parameter | Description | Default |
| --- | --- | --- |
| k8s.clusterRole.enabled | Deploy with a cluster-wide role (true) or a namespaced role (false). | true |
| serviceAccount.create | Specifies whether a service account should be created. | true |
| serviceAccount.name | The name of the service account to use. If empty and serviceAccount.create is true, a name is generated using the release name. If serviceAccount.create is false, defaults to default. | "" |
| serviceAccount.annotations | Annotations to add to the service account. | {} |
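For example, to restrict the operator to a namespaced role and reuse a pre-existing service account, an override along these lines should work (the account name is illustrative):

```yaml
k8s:
  clusterRole:
    enabled: false                 # namespaced role instead of cluster-wide

serviceAccount:
  create: false
  name: existing-operator-sa       # hypothetical pre-created service account
```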

Operator Resources

The Go operator requires significantly fewer resources than the Java version:

| Parameter | Description | Default |
| --- | --- | --- |
| resources.limits.memory | Operator memory limit. | 256Mi |
| resources.limits.cpu | Operator CPU limit. | 500m |
| resources.requests.memory | Operator memory request. | 64Mi |
| resources.requests.cpu | Operator CPU request. | 10m |

Feature Toggles

| Parameter | Description | Default |
| --- | --- | --- |
| leaderElection.enabled | Enable leader election for HA deployments. | true |
| metrics.enabled | Enable Prometheus metrics endpoint. | false |
| metrics.port | Metrics server port. | 8080 |
| metrics.secure | Use HTTPS for metrics endpoint. | false |
| webhook.enabled | Enable conversion webhook (requires cert-manager). | false |
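Putting these toggles together, a sketch of an HA deployment with the Prometheus metrics endpoint enabled might look like:

```yaml
leaderElection:
  enabled: true      # keep enabled when replicaCount > 1

metrics:
  enabled: true
  port: 8080
  secure: false      # plain HTTP; set to true for HTTPS
```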

Webhook Configuration (optional)

The conversion webhook is intended for clusters where both the old and new CRD versions are present. The following settings apply when webhook.enabled is set to true:

| Parameter | Description | Default |
| --- | --- | --- |
| webhook.port | Webhook server port. | 9443 |
| webhook.certManager.enabled | Use cert-manager for TLS certificate management. | true |

Note

The conversion webhook requires cert-manager to be installed in your cluster for automatic TLS certificate management.
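Assuming cert-manager is already installed in the cluster, enabling the conversion webhook would be a small override:

```yaml
webhook:
  enabled: true
  port: 9443
  certManager:
    enabled: true    # cert-manager issues and rotates the webhook's TLS certificate
```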

Locust Pod Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| locustPods.resources.requests.cpu | CPU request for Locust pods. | 250m |
| locustPods.resources.requests.memory | Memory request for Locust pods. | 128Mi |
| locustPods.resources.requests.ephemeralStorage | Ephemeral storage request for Locust pods. | 30M |
| locustPods.resources.limits.cpu | CPU limit for Locust pods. Set to "" to unbind. | 1000m |
| locustPods.resources.limits.memory | Memory limit for Locust pods. Set to "" to unbind. | 1024Mi |
| locustPods.resources.limits.ephemeralStorage | Ephemeral storage limit for Locust pods. | 50M |
| locustPods.affinityInjection | Enable affinity injection from CRs. | true |
| locustPods.tolerationsInjection | Enable tolerations injection from CRs. | true |
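As a sketch, the override below raises the Locust pod requests and uses the empty-string convention from the table above to remove the CPU limit:

```yaml
locustPods:
  resources:
    requests:
      cpu: "500m"
      memory: "256Mi"
    limits:
      cpu: ""              # empty string unbinds the CPU limit
      memory: "2048Mi"
  affinityInjection: true
  tolerationsInjection: true
```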

Metrics Exporter

| Parameter | Description | Default |
| --- | --- | --- |
| locustPods.metricsExporter.image | Metrics Exporter Docker image. | containersol/locust_exporter:v0.5.0 |
| locustPods.metricsExporter.port | Metrics Exporter port. | 9646 |
| locustPods.metricsExporter.pullPolicy | Image pull policy for the metrics exporter. | IfNotPresent |
| locustPods.metricsExporter.resources.requests.cpu | CPU request for metrics exporter. | 100m |
| locustPods.metricsExporter.resources.requests.memory | Memory request for metrics exporter. | 64Mi |
| locustPods.metricsExporter.resources.requests.ephemeralStorage | Ephemeral storage request for metrics exporter. | 30M |
| locustPods.metricsExporter.resources.limits.cpu | CPU limit for metrics exporter. | 250m |
| locustPods.metricsExporter.resources.limits.memory | Memory limit for metrics exporter. | 128Mi |
| locustPods.metricsExporter.resources.limits.ephemeralStorage | Ephemeral storage limit for metrics exporter. | 50M |

Tip

When using OpenTelemetry (spec.observability.openTelemetry.enabled: true), the metrics exporter sidecar is not deployed.

Job Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| locustPods.ttlSecondsAfterFinished | TTL for finished jobs. Set to "" to disable. | "" |
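For example, to have Kubernetes clean up finished test jobs an hour after they complete:

```yaml
locustPods:
  ttlSecondsAfterFinished: 3600    # finished jobs are deleted after one hour
```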

Kafka Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| kafka.enabled | Enable Kafka configuration injection. | false |
| kafka.bootstrapServers | Kafka bootstrap servers. | localhost:9092 |
| kafka.security.enabled | Enable Kafka security. | false |
| kafka.security.protocol | Security protocol (SASL_SSL, SASL_PLAINTEXT, etc.). | SASL_PLAINTEXT |
| kafka.security.saslMechanism | SASL mechanism. | SCRAM-SHA-512 |
| kafka.security.jaasConfig | JAAS configuration string. | "" |
| kafka.credentials.secretName | Name of secret containing Kafka credentials. | "" |
| kafka.credentials.usernameKey | Key in secret for username. | username |
| kafka.credentials.passwordKey | Key in secret for password. | password |
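A sketch of a secured Kafka setup that sources credentials from a pre-created secret; the bootstrap address and secret name below are placeholders:

```yaml
kafka:
  enabled: true
  bootstrapServers: kafka.example.com:9092   # placeholder address
  security:
    enabled: true
    protocol: SASL_SSL
    saslMechanism: SCRAM-SHA-512
  credentials:
    secretName: kafka-credentials            # hypothetical secret holding both keys
    usernameKey: username
    passwordKey: password
```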

OpenTelemetry Collector (Optional)

Deploy an OTel Collector alongside the operator:

| Parameter | Description | Default |
| --- | --- | --- |
| otelCollector.enabled | Deploy OTel Collector. | false |
| otelCollector.image | Collector image. | otel/opentelemetry-collector-contrib:0.145.0 |
| otelCollector.replicas | Number of collector replicas. | 1 |
| otelCollector.resources.requests.cpu | CPU request for collector. | 50m |
| otelCollector.resources.requests.memory | Memory request for collector. | 64Mi |
| otelCollector.resources.limits.cpu | CPU limit for collector. | 200m |
| otelCollector.resources.limits.memory | Memory limit for collector. | 256Mi |
| otelCollector.config | OTel Collector configuration (YAML string). | See values.yaml |
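To deploy the bundled collector with the chart's default pipeline, a minimal override might be:

```yaml
otelCollector:
  enabled: true
  replicas: 1
  resources:
    limits:
      cpu: "200m"
      memory: "256Mi"
  # otelCollector.config accepts a full collector configuration as a YAML
  # string; leave it unset to keep the default pipeline from values.yaml.
```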

Pod Scheduling

| Parameter | Description | Default |
| --- | --- | --- |
| nodeSelector | Node selector for scheduling the operator pod. | {} |
| tolerations | Tolerations for scheduling the operator pod. | [] |
| affinity | Affinity rules for scheduling the operator pod. | {} |
| podAnnotations | Annotations to add to the operator pod. | {} |
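For instance, to pin the operator pod to a dedicated node pool and tolerate its taint (the label and taint keys below are illustrative):

```yaml
nodeSelector:
  workload-type: operators         # hypothetical node label

tolerations:
  - key: "dedicated"               # hypothetical taint on the node pool
    operator: "Equal"
    value: "operators"
    effect: "NoSchedule"
```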

Backward Compatibility

The following v1 paths are still supported via helper functions:

| Old Path (v1) | New Path (v2) |
| --- | --- |
| config.loadGenerationPods.resource.cpuRequest | locustPods.resources.requests.cpu |
| config.loadGenerationPods.resource.memLimit | locustPods.resources.limits.memory |
| config.loadGenerationPods.affinity.enableCrInjection | locustPods.affinityInjection |
| config.loadGenerationPods.kafka.* | kafka.* |

Removed Settings

The following Java-specific settings have been removed and have no effect in v2:

  • appPort - Fixed at 8081
  • micronaut.* - No Micronaut in Go operator
  • livenessProbe.* / readinessProbe.* - Fixed probes on /healthz and /readyz

Upgrading the Chart

To upgrade an existing release to a new version, use the helm upgrade command:

helm upgrade locust-operator locust-k8s-operator/locust-k8s-operator -f my-values.yaml

Uninstalling the Chart

To uninstall and delete the locust-operator deployment, run:

helm uninstall locust-operator

This command will remove all the Kubernetes components associated with the chart and delete the release.

Next Steps

Once the operator is installed, you're ready to start running performance tests! Head over to the Getting Started guide to learn how to deploy your first LocustTest.