# Configure resource limits and requests

Resource configuration ensures your load tests have the resources they need without consuming excessive cluster capacity.
## Prerequisites
- Locust Kubernetes Operator installed
- Basic understanding of Kubernetes resource requests and limits
## Set global defaults via Helm
Configure default resources for all tests during operator installation:
```yaml
# values.yaml
locustPods:
  resources:
    requests:
      cpu: "250m"              # Guaranteed CPU
      memory: "128Mi"          # Guaranteed memory
      ephemeralStorage: 30M    # Scratch space for logs and temp files
    limits:
      cpu: "1000m"             # Maximum CPU
      memory: "1024Mi"         # Maximum memory
      ephemeralStorage: 50M    # Prevents runaway disk usage from evicting the pod
```
Install or upgrade the operator:
```bash
helm upgrade --install locust-operator locust-k8s-operator/locust-k8s-operator \
  --namespace locust-system \
  -f values.yaml
```
These defaults apply to all Locust pods unless overridden in individual CRs.
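To confirm which defaults the release is using, one option is to read back the user-supplied values (the release name and namespace here match the install command above):

```bash
helm get values locust-operator -n locust-system
```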
## Configure per-test resources
Override defaults for specific tests using the v2 API. Master and worker pods can have different resource configurations:
```yaml
apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: resource-optimized-test
spec:
  image: locustio/locust:2.43.3
  testFiles:
    configMapRef: my-test
  master:
    command: "--locustfile /lotest/src/test.py --host https://api.example.com"
    resources:
      requests:
        memory: "256Mi"   # Master needs less memory
        cpu: "100m"       # Master is not CPU-intensive
      limits:
        memory: "512Mi"
        cpu: "500m"
  worker:
    command: "--locustfile /lotest/src/test.py"
    replicas: 10
    resources:
      requests:
        memory: "512Mi"   # Workers need more memory for load generation
        cpu: "500m"       # Workers are CPU-intensive
      limits:
        memory: "1Gi"
        cpu: "1000m"
```
Apply the configuration (the example below assumes the manifest is saved as `resource-optimized-test.yaml`):
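```bash
# Filename is an assumption; point this at whichever file holds the CR above
kubectl apply -f resource-optimized-test.yaml
```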
## Resource precedence chain
The operator resolves resource values using a 3-level precedence chain:
| Priority | Source | Scope | Merge behavior |
|---|---|---|---|
| 1 (highest) | CR-level (`spec.master.resources` / `spec.worker.resources`) | Per-test | Complete override: replaces the entire `resources` block |
| 2 | Helm role-specific (`masterResources` / `workerResources`) | All tests | Field-level fallback: individual fields fall back to unified defaults |
| 3 (lowest) | Helm unified (`resources`) | All tests | Base defaults for every pod |
Example: Given these Helm values:
```yaml
locustPods:
  resources:            # Level 3: unified defaults
    requests:
      cpu: "250m"
      memory: "128Mi"
  workerResources:      # Level 2: role-specific override
    requests:
      memory: "512Mi"   # Only memory is overridden
```
Workers get `cpu: "250m"` (from unified) and `memory: "512Mi"` (from role-specific). If a CR also sets `spec.worker.resources`, it replaces the entire block.
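As an illustration only, the effective worker defaults produced by merging the two Helm levels above would be:

```yaml
# Effective worker defaults after field-level merging (illustrative)
requests:
  cpu: "250m"      # from the unified defaults (level 3)
  memory: "512Mi"  # from workerResources (level 2)
```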
## Disable CPU limits for performance tests
CPU limits can cause throttling in performance-sensitive tests. Disable them by omitting the CPU limit field:
```yaml
apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: no-cpu-limit-test
spec:
  image: locustio/locust:2.43.3
  testFiles:
    configMapRef: my-test
  master:
    command: "--locustfile /lotest/src/test.py --host https://api.example.com"
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
      limits:
        memory: "512Mi"
        # No CPU limit - allows maximum performance
  worker:
    command: "--locustfile /lotest/src/test.py"
    replicas: 10
    resources:
      requests:
        memory: "512Mi"
        cpu: "500m"
      limits:
        memory: "1Gi"
        # No CPU limit - workers can use all available CPU
```
When to disable CPU limits:
- High-throughput performance tests (>5000 RPS)
- Benchmarking scenarios where you need maximum performance
- Tests with bursty traffic patterns
Risk: Pods can consume all available CPU on the node, potentially affecting other workloads. Use with node affinity to isolate tests on dedicated nodes.
## Resource sizing guidelines
Master pod:
- CPU: 100-500m (master coordinates, doesn't generate load)
- Memory: 256-512Mi (depends on test complexity and UI usage)
- Usually 1 replica
Worker pod:
- CPU: 500-1000m per worker (depends on test script complexity)
- Memory: 512Mi-1Gi per worker (depends on data handling)
- Scale workers based on user count (see Scale worker replicas)
Example sizing for 1000 users:
```yaml
master:
  resources:
    requests:
      memory: "256Mi"
      cpu: "200m"
    limits:
      memory: "512Mi"
      cpu: "500m"
worker:
  replicas: 20   # ~50 users per worker
  resources:
    requests:
      memory: "512Mi"
      cpu: "500m"
    limits:
      memory: "1Gi"
      # CPU limit omitted for performance
```
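At these requests, the 20 workers alone reserve 20 × 500m = 10 CPU cores and 20 × 512Mi = 10Gi of memory, so the cluster needs at least that much schedulable capacity before the test can start.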
## Verify resource configuration
Check actual resource specs on running pods:
```bash
# Get master pod name
MASTER_POD=$(kubectl get pod -l performance-test-pod-name=resource-optimized-test-master -o jsonpath='{.items[0].metadata.name}')

# Verify resource configuration
kubectl describe pod $MASTER_POD | grep -A 10 "Limits:\|Requests:"
```
Expected output (approximate; the values mirror the master resources set in the CR above):
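```
Limits:
  cpu:     500m
  memory:  512Mi
Requests:
  cpu:     100m
  memory:  256Mi
```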
## Monitor resource usage
Check actual resource consumption:
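One way to watch live consumption is `kubectl top` (requires metrics-server). The worker label below follows the same naming pattern as the master label used earlier and is an assumption:

```bash
# Master pod usage (reuses $MASTER_POD from the previous step)
kubectl top pod $MASTER_POD

# Worker pod usage; label assumed to follow the <test-name>-worker pattern
kubectl top pod -l performance-test-pod-name=resource-optimized-test-worker
```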
If pods consistently hit memory limits, they'll be OOMKilled. If they hit CPU limits, they'll be throttled (slower performance).
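A quick way to spot OOM kills is to check each container's last termination reason; this is a generic kubectl sketch (with the same assumed worker label), not an operator-specific feature:

```bash
# Prints the last termination reason (e.g. OOMKilled) for each worker pod's containers
kubectl get pods -l performance-test-pod-name=resource-optimized-test-worker \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}'
```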
## What's next
- Scale worker replicas — Calculate worker count for high-load scenarios
- Use node affinity — Run resource-intensive tests on dedicated nodes
- Configure tolerations — Schedule tests on high-performance node pools