
API Reference

This document provides a complete reference for the LocustTest Custom Resource Definition (CRD).

Overview

| Property | Value |
|---|---|
| Group | locust.io |
| Kind | LocustTest |
| Versions | v2 (recommended), v1 (deprecated) |
| Short Name | lotest |
| Scope | Namespaced |

The v2 API provides a cleaner, grouped configuration structure with new features.

Spec Fields

Root Fields

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| image | string | Yes | - | Container image for Locust pods (e.g., locustio/locust:2.43.3) |
| imagePullPolicy | string | No | IfNotPresent | Image pull policy: Always, IfNotPresent, Never |
| imagePullSecrets | []LocalObjectReference | No | - | Secrets for pulling from private registries (specify as `- name: secret-name`) |
| master | MasterSpec | Yes | - | Master pod configuration |
| worker | WorkerSpec | Yes | - | Worker pod configuration |
| testFiles | TestFilesConfig | No | - | ConfigMap references for test files |
| scheduling | SchedulingConfig | No | - | Affinity, tolerations, nodeSelector |
| env | EnvConfig | No | - | Environment variable injection |
| volumes | []corev1.Volume | No | - | Additional volumes to mount |
| volumeMounts | []TargetedVolumeMount | No | - | Volume mounts with target filtering |
| observability | ObservabilityConfig | No | - | OpenTelemetry configuration |
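Only image, master, and worker are required. A minimal spec relying on the defaults above might look like this (the resource name and locustfile path are illustrative):

```yaml
apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: minimal-test                # illustrative name
spec:
  image: locustio/locust:2.43.3
  master:
    command: "--locustfile /lotest/src/test.py --host https://example.com"
  worker:
    command: "--locustfile /lotest/src/test.py"
    replicas: 3
```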

MasterSpec

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| command | string | Yes | - | Locust command seed (e.g., --locustfile /lotest/src/test.py --host https://example.com) |
| resources | corev1.ResourceRequirements | No | From operator config | CPU/memory requests and limits |
| labels | map[string]string | No | - | Additional labels for master pod |
| annotations | map[string]string | No | - | Additional annotations for master pod |
| autostart | bool | No | true | Start test automatically when workers connect |
| autoquit | AutoquitConfig | No | {enabled: true, timeout: 60} | Auto-quit behavior after test completion |
| extraArgs | []string | No | - | Additional command-line arguments |

WorkerSpec

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| command | string | Yes | - | Locust command seed (e.g., --locustfile /lotest/src/test.py) |
| replicas | int32 | Yes | - | Number of worker replicas (1-500) |
| resources | corev1.ResourceRequirements | No | From operator config | CPU/memory requests and limits |
| labels | map[string]string | No | - | Additional labels for worker pods |
| annotations | map[string]string | No | - | Additional annotations for worker pods |
| extraArgs | []string | No | - | Additional command-line arguments |

AutoquitConfig

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| enabled | bool | No | true | Enable auto-quit after test completion |
| timeout | int32 | No | 60 | Seconds to wait before quitting after test ends |

TestFilesConfig

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| configMapRef | string | No | - | ConfigMap containing test files |
| libConfigMapRef | string | No | - | ConfigMap containing library files |
| srcMountPath | string | No | /lotest/src | Mount path for test files |
| libMountPath | string | No | /opt/locust/lib | Mount path for library files |
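As an illustration, a ConfigMap holding a locustfile can be referenced via configMapRef; with the default srcMountPath, its keys appear under /lotest/src (the ConfigMap name and script contents here are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-test-scripts             # illustrative name
data:
  test.py: |
    # mounted at /lotest/src/test.py by the default srcMountPath
    from locust import HttpUser, task

    class Smoke(HttpUser):
        @task
        def index(self):
            self.client.get("/")
```

The LocustTest spec would then reference it with `testFiles.configMapRef: my-test-scripts`.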

SchedulingConfig

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| affinity | corev1.Affinity | No | - | Standard Kubernetes affinity rules |
| tolerations | []corev1.Toleration | No | - | Standard Kubernetes tolerations |
| nodeSelector | map[string]string | No | - | Node selector labels |

EnvConfig

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| configMapRefs | []ConfigMapEnvSource | No | - | ConfigMaps to inject as environment variables |
| secretRefs | []SecretEnvSource | No | - | Secrets to inject as environment variables |
| variables | []corev1.EnvVar | No | - | Individual environment variables |
| secretMounts | []SecretMount | No | - | Secrets to mount as files |

ConfigMapEnvSource

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| name | string | Yes | - | ConfigMap name |
| prefix | string | No | - | Prefix to add to all keys (e.g., APP_) |

SecretEnvSource

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| name | string | Yes | - | Secret name |
| prefix | string | No | - | Prefix to add to all keys |

SecretMount

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| name | string | Yes | - | Secret name |
| mountPath | string | Yes | - | Path to mount the secret |
| readOnly | bool | No | true | Mount as read-only |
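Putting the EnvConfig pieces together, a spec.env block might combine variable injection with a file mount. This sketch assumes illustrative ConfigMap and Secret names:

```yaml
env:
  configMapRefs:
    - name: app-config              # keys injected as-is
  secretRefs:
    - name: api-credentials
      prefix: "API_"                # keys injected as API_<key>
  variables:
    - name: LOG_LEVEL
      value: "INFO"
  secretMounts:
    - name: tls-client-cert        # illustrative secret
      mountPath: /etc/locust/tls
      readOnly: true
```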

TargetedVolumeMount

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| name | string | Yes | - | Volume name (must match a volume in volumes) |
| mountPath | string | Yes | - | Path to mount the volume |
| subPath | string | No | - | Sub-path within the volume |
| readOnly | bool | No | false | Mount as read-only |
| target | string | No | both | Target pods: master, worker, or both |
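The target field restricts a mount to one pod role. For instance, a results volume could be mounted only on the master (the volume and claim names are illustrative):

```yaml
volumes:
  - name: results
    persistentVolumeClaim:
      claimName: results-pvc        # illustrative claim
volumeMounts:
  - name: results                   # must match the volume name above
    mountPath: /results
    target: master                  # omit or set to "both" to mount on all pods
```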

ObservabilityConfig

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| openTelemetry | OpenTelemetryConfig | No | - | OpenTelemetry configuration |

OpenTelemetryConfig

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| enabled | bool | No | false | Enable OpenTelemetry integration |
| endpoint | string | If enabled | - | OTel collector endpoint (e.g., otel-collector:4317) |
| protocol | string | No | grpc | Protocol: grpc or http/protobuf |
| insecure | bool | No | false | Use insecure connection |
| extraEnvVars | map[string]string | No | - | Additional OTel environment variables |
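As a sketch, a non-TLS http/protobuf exporter configuration would look like this (the endpoint is illustrative; 4318 is the conventional OTLP/HTTP port):

```yaml
observability:
  openTelemetry:
    enabled: true
    endpoint: "otel-collector.monitoring:4318"   # illustrative endpoint
    protocol: "http/protobuf"
    insecure: true
```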

Status Fields

| Field | Type | Description |
|---|---|---|
| phase | string | Current lifecycle phase: Pending, Running, Succeeded, Failed |
| observedGeneration | int64 | Most recent generation observed by the controller |
| expectedWorkers | int32 | Number of expected worker replicas (from spec) |
| connectedWorkers | int32 | Approximate number of connected workers (from Job.Status.Active) |
| startTime | metav1.Time | When the test transitioned to Running |
| completionTime | metav1.Time | When the test reached Succeeded or Failed |
| conditions | []metav1.Condition | Standard Kubernetes conditions (see below) |

Note

connectedWorkers is an approximation derived from the worker Job's active pod count. It may briefly lag behind actual Locust worker connections.

Phase Lifecycle

stateDiagram-v2
    [*] --> Pending: CR Created
    Pending --> Running: Master Job active
    Running --> Succeeded: Master Job completed
    Running --> Failed: Master Job failed
    Pending --> Failed: Pod health check failed (after grace period)
    Running --> Failed: Pod health check failed (after grace period)

| Phase | Meaning | What to do |
|---|---|---|
| Pending | Resources are being created (Service, master Job, worker Job). Initial state after CR creation. Also set during recovery after external resource deletion. | Wait for resources to be scheduled. Check events if stuck. |
| Running | Master Job has at least one active pod. Test execution is in progress. startTime is set on this transition. | Monitor worker connections and test progress. |
| Succeeded | Master Job completed successfully (exit code 0). completionTime is set. | Collect results. CR can be deleted or kept for records. |
| Failed | Master Job failed, or pod health checks detected persistent failures after the 2-minute grace period. completionTime is set. | Check pod logs and events for failure details. Delete and recreate to retry. |

The operator waits 2 minutes after pod creation before reporting pod health failures. This prevents false alarms during normal startup activities like image pulling, volume mounting, and scheduling.

Condition Types

Ready

| Status | Reason | Meaning |
|---|---|---|
| True | ResourcesCreated | All resources (Service, Jobs) created successfully |
| False | ResourcesCreating | Resources are being created |
| False | ResourcesFailed | Test failed, resources in error state |

WorkersConnected

| Status | Reason | Meaning |
|---|---|---|
| True | AllWorkersConnected | All expected workers have active pods |
| False | WaitingForWorkers | Initial state, waiting for worker pods |
| False | WorkersMissing | Some workers not yet active (shows N/M count) |

TestCompleted

| Status | Reason | Meaning |
|---|---|---|
| True | TestSucceeded | Test completed successfully |
| True | TestFailed | Test completed with failure |
| False | TestInProgress | Test has not finished |

PodsHealthy

| Status | Reason | Meaning |
|---|---|---|
| True | PodsHealthy | All pods running normally |
| True | PodsStarting | Within 2-minute grace period (not yet checking) |
| False | ImagePullError | One or more pods cannot pull container image |
| False | ConfigurationError | ConfigMap or Secret not found |
| False | SchedulingError | Pod cannot be scheduled (node affinity, resources) |
| False | CrashLoopBackOff | Container repeatedly crashing |
| False | InitializationError | Init container failed |

SpecDrifted

| Status | Reason | Meaning |
|---|---|---|
| True | SpecChangeIgnored | CR spec was modified after creation. Changes are ignored. Delete and recreate to apply. |

Info

The SpecDrifted condition only appears when a user edits the CR spec after initial creation. It serves as a reminder that tests are immutable.

Checking Status

# Quick status overview
kubectl get locusttest my-test

# Detailed status with conditions
kubectl get locusttest my-test -o jsonpath='{.status}' | jq .

# Watch phase changes in real-time
kubectl get locusttest my-test -w

# Check specific condition
kubectl get locusttest my-test -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

# Check worker connection progress
kubectl get locusttest my-test -o jsonpath='{.status.connectedWorkers}/{.status.expectedWorkers}'

CI/CD Integration

Use kubectl wait to integrate LocustTest into CI/CD pipelines. The operator's status conditions follow standard Kubernetes conventions, making them compatible with any tool that supports kubectl wait.

GitHub Actions example:

name: Load Test
on:
  workflow_dispatch:

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Apply test
        run: kubectl apply -f locusttest.yaml

      - name: Wait for test completion
        run: |
          kubectl wait locusttest/my-test \
            --for=jsonpath='{.status.phase}'=Succeeded \
            --timeout=30m

      - name: Check result
        if: failure()
        run: |
          echo "Test failed or timed out"
          kubectl describe locusttest my-test
          kubectl logs -l performance-test-name=my-test --tail=50

      - name: Cleanup
        if: always()
        run: kubectl delete locusttest my-test --ignore-not-found

Generic shell script example:

#!/bin/bash
set -e

# Apply test
kubectl apply -f locusttest.yaml

# Wait for completion (either Succeeded or Failed)
echo "Waiting for test to complete..."
while true; do
  PHASE=$(kubectl get locusttest my-test -o jsonpath='{.status.phase}' 2>/dev/null)
  case "$PHASE" in
    Succeeded)
      echo "Test passed!"
      exit 0
      ;;
    Failed)
      echo "Test failed!"
      kubectl describe locusttest my-test
      exit 1
      ;;
    *)
      echo "Phase: $PHASE - waiting..."
      sleep 10
      ;;
  esac
done

Complete v2 Example

apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: comprehensive-test
spec:
  image: locustio/locust:2.43.3
  imagePullPolicy: IfNotPresent

  master:
    command: "--locustfile /lotest/src/test.py --host https://api.example.com --users 1000 --spawn-rate 50 --run-time 10m"
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
      limits:
        memory: "512Mi"
        cpu: "500m"
    labels:
      role: master
    autostart: true
    autoquit:
      enabled: true
      timeout: 120

  worker:
    command: "--locustfile /lotest/src/test.py"
    replicas: 10
    resources:
      requests:
        memory: "512Mi"
        cpu: "500m"
      limits:
        memory: "1Gi"
        cpu: "1000m"
    labels:
      role: worker

  testFiles:
    configMapRef: my-test-scripts
    libConfigMapRef: my-lib-files

  scheduling:
    nodeSelector:
      node-type: performance
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "performance"
        effect: "NoSchedule"

  env:
    secretRefs:
      - name: api-credentials
        prefix: "API_"
    configMapRefs:
      - name: app-config
    variables:
      - name: LOG_LEVEL
        value: "INFO"

  volumes:
    - name: test-data
      persistentVolumeClaim:
        claimName: test-data-pvc

  volumeMounts:
    - name: test-data
      mountPath: /data
      target: both

  observability:
    openTelemetry:
      enabled: true
      endpoint: "otel-collector.monitoring:4317"
      protocol: "grpc"
      extraEnvVars:
        OTEL_SERVICE_NAME: "load-test"
        OTEL_RESOURCE_ATTRIBUTES: "environment=staging,team=platform"

LocustTest v1 (Deprecated)

Deprecated

The v1 API is deprecated and will be removed in v3.0. Use v2 for new deployments. See the Migration Guide for upgrade instructions.

Spec Fields (v1)

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| masterCommandSeed | string | Yes | - | Command seed for master pod |
| workerCommandSeed | string | Yes | - | Command seed for worker pods |
| workerReplicas | int32 | Yes | - | Number of worker replicas (1-500) |
| image | string | Yes | - | Container image |
| imagePullPolicy | string | No | IfNotPresent | Image pull policy |
| imagePullSecrets | []string | No | - | Pull secrets |
| configMap | string | No | - | ConfigMap for test files |
| libConfigMap | string | No | - | ConfigMap for library files |
| labels | PodLabels | No | - | Labels with master and worker maps |
| annotations | PodAnnotations | No | - | Annotations with master and worker maps |
| affinity | LocustTestAffinity | No | - | Custom affinity structure |
| tolerations | []LocustTestToleration | No | - | Custom toleration structure |

v1 Example

apiVersion: locust.io/v1
kind: LocustTest
metadata:
  name: basic-test
spec:
  image: locustio/locust:2.43.3
  masterCommandSeed: "--locustfile /lotest/src/test.py --host https://example.com"
  workerCommandSeed: "--locustfile /lotest/src/test.py"
  workerReplicas: 3
  configMap: test-scripts

Kubectl Commands

# List all LocustTests
kubectl get locusttests
kubectl get lotest  # short name

# Describe a LocustTest
kubectl describe locusttest <name>

# Watch status changes
kubectl get locusttest <name> -w

# Delete a LocustTest
kubectl delete locusttest <name>

Printer Columns

When listing LocustTests, the following columns are displayed:

| Column | Description |
|---|---|
| NAME | Resource name |
| PHASE | Current phase (Pending/Running/Succeeded/Failed) |
| WORKERS | Requested worker count |
| CONNECTED | Connected worker count |
| IMAGE | Container image (priority column) |
| AGE | Time since creation |