Monitor test status and health

The operator reports test status through standard Kubernetes status fields and conditions. This guide shows you how to monitor test execution, detect failures, and integrate with CI/CD pipelines.

How the operator reports test status

The operator updates .status on your LocustTest CR throughout its lifecycle (see the example query after this list):

  • Phase: Current state (Pending → Running → Succeeded/Failed)
  • Conditions: Detailed health indicators (Ready, WorkersConnected, PodsHealthy, etc.)
  • Worker counts: Expected vs connected workers
  • Timestamps: Start time and completion time
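
To take a quick snapshot of all of these fields at once, dump the status block (my-test is a placeholder name; jq is optional and only used for readability):

kubectl get locusttest my-test -o jsonpath='{.status}' | jq .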

Watch test progress

Monitor phase changes in real-time:

kubectl get locusttest my-test -w

Expected output:

NAME      PHASE      WORKERS   CONNECTED   AGE
my-test   Pending    5                     2s
my-test   Pending    5         0           5s
my-test   Running    5         3           12s
my-test   Running    5         5           18s
my-test   Succeeded  5         5           5m32s

Output columns explained:

Column    | Description
NAME      | LocustTest resource name
PHASE     | Current lifecycle phase
WORKERS   | Requested worker count (from spec)
CONNECTED | Active worker pods (approximation from Job status)
AGE       | Time since CR creation

Phase progression

Tests move through these phases:

stateDiagram-v2
    [*] --> Pending: CR created
    Pending --> Running: Master pod active
    Running --> Succeeded: Test completed (exit 0)
    Running --> Failed: Master failed or pods unhealthy
    Running --> Failed: Pod health check failed (after grace period)
    Pending --> Failed: Pod health check failed

Phase     | Meaning                                                            | What to do
Pending   | Resources creating (Service, Jobs), pods scheduling                | Wait for resources to schedule. Check events if stuck >2 minutes.
Running   | Master pod active, test executing                                  | Monitor worker connections and test progress.
Succeeded | Master job completed successfully (exit code 0)                    | Collect results. CR can be deleted or kept for records.
Failed    | Master job failed or pods unhealthy (after 2-minute grace period)  | Check pod logs and events. Delete and recreate to retry.

Grace period

The operator waits 2 minutes after pod creation before reporting pod health failures. This prevents false alarms during image pulls and startup.

Check status conditions

Conditions provide detailed health information beyond the phase.

View all conditions:

kubectl get locusttest my-test -o jsonpath='{.status.conditions}' | jq .

Example output:

[
  {
    "type": "Ready",
    "status": "True",
    "reason": "ResourcesCreated",
    "message": "All resources created successfully"
  },
  {
    "type": "WorkersConnected",
    "status": "True",
    "reason": "AllWorkersConnected",
    "message": "5/5 workers connected"
  },
  {
    "type": "PodsHealthy",
    "status": "True",
    "reason": "PodsHealthy",
    "message": "All pods running normally"
  }
]

Key condition types

Ready

Indicates whether test resources were created successfully.

Status | Reason            | Meaning
True   | ResourcesCreated  | All resources (Service, Jobs) created successfully
False  | ResourcesCreating | Resources are being created
False  | ResourcesFailed   | Test failed, resources in error state
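
To read just this condition's current reason, for example when scripting a readiness check (my-test is a placeholder name):

kubectl get locusttest my-test -o jsonpath='{.status.conditions[?(@.type=="Ready")].reason}'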

WorkersConnected

Tracks worker connection progress.

Status | Reason              | Meaning
True   | AllWorkersConnected | All expected workers have active pods
False  | WaitingForWorkers   | Initial state, waiting for worker pods
False  | WorkersMissing      | Some workers not yet active (message shows N/M count)

Note

connectedWorkers is an approximation from Job.Status.Active. It may briefly lag behind actual Locust master connections.

PodsHealthy

Detects pod-level failures (crashes, scheduling issues, image pull errors).

Status | Reason               | Meaning
True   | PodsHealthy          | All pods running normally
True   | PodsStarting         | Within 2-minute grace period (not yet checking health)
False  | ImagePullError       | One or more pods cannot pull container image
False  | ConfigurationError   | ConfigMap or Secret referenced in CR not found
False  | SchedulingError      | Pod cannot be scheduled (node affinity, insufficient resources)
False  | CrashLoopBackOff     | Container repeatedly crashing
False  | InitializationError  | Init container failed

Check a specific condition:

kubectl get locusttest my-test -o jsonpath='{.status.conditions[?(@.type=="PodsHealthy")]}'

TestCompleted

Indicates whether the test has finished and the outcome.

Status | Reason         | Meaning
True   | TestSucceeded  | Test completed successfully (master exited with code 0)
True   | TestFailed     | Test completed with failure
False  | TestInProgress | Test is still running
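
Because TestCompleted turns True for both outcomes, one workable CI pattern is to wait on this condition and then read the phase to decide pass or fail. A minimal sketch (adjust the resource name and timeout to your setup):

# Block until the test finishes, whether it passed or failed
kubectl wait locusttest/my-test --for=condition=TestCompleted --timeout=30m

# Then read the phase to decide the outcome
kubectl get locusttest my-test -o jsonpath='{.status.phase}'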

SpecDrifted

Appears when the CR spec is edited after creation and the test has already moved past the Pending phase.

Status | Reason            | Meaning
True   | SpecChangeIgnored | Spec was modified after creation. Changes ignored. Delete and recreate to apply.
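
To check whether a drift was flagged on a running test (empty output means the condition is not set):

kubectl get locusttest my-test -o jsonpath='{.status.conditions[?(@.type=="SpecDrifted")].message}'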

Detect pod failures

When PodsHealthy=False, the operator detected a problem with test pods.

Get condition details:

kubectl describe locusttest my-test

Look for the PodsHealthy condition in the Status section. The message field explains what failed.

Failure message format:

Messages follow the pattern: {FailureType}: {N} pod(s) affected [{pod-names}]: {error-detail}

Example failure messages:

  • ImagePullError: 1 pod(s) affected [my-test-master-abc12]: ErrImagePull
  • ConfigurationError: 3 pod(s) affected [my-test-worker-def34, my-test-worker-ghi56, my-test-worker-jkl78]: Secret "api-creds" not found
  • SchedulingError: 2 pod(s) affected [my-test-worker-mno90, my-test-worker-pqr12]: 0/3 nodes available: insufficient cpu
  • CrashLoopBackOff: 1 pod(s) affected [my-test-master-stu34]: CrashLoopBackOff

View pod states directly:

The operator applies two label selectors to test pods:

Label                                      | Selects                          | Example
performance-test-name=<cr-name>            | All pods (master + workers)      | kubectl get pods -l performance-test-name=my-test
performance-test-pod-name=<cr-name>-<role> | Specific role (master or worker) | kubectl get pods -l performance-test-pod-name=my-test-worker

kubectl get pods -l performance-test-name=my-test

Check pod logs for errors:

# Master logs
kubectl logs job/my-test-master

# Worker logs (first worker pod)
kubectl logs job/my-test-worker --max-log-requests=1

Common failure scenarios

Symptom                                   | Likely cause                                | How to investigate
Phase stuck in Pending                    | Pods not scheduling                         | kubectl describe pod for scheduling errors
PodsHealthy=False with ImagePullError     | Wrong image name or missing imagePullSecret | Check image name in spec, verify secret exists
PodsHealthy=False with ConfigurationError | Missing ConfigMap or Secret                 | Verify referenced resources exist: kubectl get configmap,secret
Phase transitions to Failed immediately   | Master pod crashed on startup               | Check master logs for Python errors in locustfile
Workers never connect                     | Network policy or firewall                  | Verify workers can reach master service on port 5557
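
A short triage sequence that covers most of the rows above (a sketch; substitute your CR name and namespace):

# Operator's view: phase, conditions, and events recorded on the CR
kubectl describe locusttest my-test

# Pod-level detail: scheduling errors, image pull failures, crash reasons
kubectl describe pods -l performance-test-name=my-test

# Recent cluster events, newest last, filtered to this test's objects
kubectl get events --sort-by=.lastTimestamp | grep my-test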

CI/CD integration

Use kubectl wait to block until test completion. The operator follows Kubernetes condition conventions, making it compatible with standard CI/CD tools.

GitHub Actions example

name: Load Test
on:
  workflow_dispatch:

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Apply test
        run: kubectl apply -f locusttest.yaml

      - name: Wait for test completion
        run: |
          kubectl wait locusttest/my-test \
            --for=jsonpath='{.status.phase}'=Succeeded \
            --timeout=30m

      - name: Check result
        if: failure()
        run: |
          echo "Test failed or timed out"
          kubectl describe locusttest my-test
          kubectl logs -l performance-test-name=my-test --tail=50

      - name: Cleanup
        if: always()
        run: kubectl delete locusttest my-test --ignore-not-found

Generic shell script

#!/bin/bash
set -e

# Apply test
kubectl apply -f locusttest.yaml

# Wait for completion (Succeeded or Failed)
echo "Waiting for test to complete..."
while true; do
  # "|| true" keeps the loop alive under "set -e" if a transient API error occurs
  PHASE=$(kubectl get locusttest my-test -o jsonpath='{.status.phase}' 2>/dev/null || true)
  case "$PHASE" in
    Succeeded)
      echo "Test passed!"
      exit 0
      ;;
    Failed)
      echo "Test failed!"
      kubectl describe locusttest my-test
      kubectl logs job/my-test-master --tail=50
      exit 1
      ;;
    Pending|Running)
      echo "Phase: $PHASE - waiting..."
      sleep 10
      ;;
    *)
      echo "Unknown phase: $PHASE"
      sleep 10
      ;;
  esac
done

Wait patterns:

# Wait for specific phase
kubectl wait locusttest/my-test --for=jsonpath='{.status.phase}'=Succeeded --timeout=30m

# Wait for condition
kubectl wait locusttest/my-test --for=condition=Ready --timeout=5m

# Check if test completed (success or failure)
PHASE=$(kubectl get locusttest my-test -o jsonpath='{.status.phase}')
if [ "$PHASE" = "Succeeded" ]; then
  echo "Test passed"
elif [ "$PHASE" = "Failed" ]; then
  echo "Test failed"
  exit 1
fi

Check worker connection progress

Monitor how many workers have connected to the master:

kubectl get locusttest my-test -o jsonpath='{.status.connectedWorkers}/{.status.expectedWorkers}'

Example output: 5/5 (all workers connected)
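
If a script needs to block until every worker is up, a small polling loop over these two status fields works. A minimal sketch (assumes the fields shown above; my-test is a placeholder name):

# Poll until connectedWorkers matches the (non-empty) expectedWorkers count
while true; do
  EXPECTED=$(kubectl get locusttest my-test -o jsonpath='{.status.expectedWorkers}')
  CONNECTED=$(kubectl get locusttest my-test -o jsonpath='{.status.connectedWorkers}')
  if [ -n "$EXPECTED" ] && [ "$CONNECTED" = "$EXPECTED" ]; then
    echo "All $EXPECTED workers connected"
    break
  fi
  echo "Workers connected: ${CONNECTED:-0}/${EXPECTED:-?} - waiting..."
  sleep 5
done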

View WorkersConnected condition:

kubectl get locusttest my-test -o jsonpath='{.status.conditions[?(@.type=="WorkersConnected")]}'

If workers aren't connecting:

  1. Check worker pod status:

    kubectl get pods -l performance-test-pod-name=my-test-worker
    

  2. Verify master service exists:

    kubectl get service my-test-master
    

  3. Check worker logs for connection errors:

    kubectl logs job/my-test-worker --max-log-requests=1 | grep -i connect
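
If the service exists but workers still cannot connect, confirm it actually has endpoints and exposes the Locust master port (5557, as noted under Common failure scenarios). The my-test-master service name follows the naming pattern used earlier and may differ in your setup:

kubectl get endpoints my-test-master
kubectl get service my-test-master -o jsonpath='{.spec.ports}'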