Use node selector for simple node targeting⚓
Target specific nodes using simple label matching with a node selector, the easiest way to control pod placement.
Prerequisites⚓
- Locust Kubernetes Operator installed
- Permission to label cluster nodes
When to use node selector⚓
Use node selector when:
- You need simple label matching (key=value)
- All conditions are AND (all labels must match)
- You want the simplest configuration
Use node affinity when:
- You need OR logic (match any of multiple labels)
- You need soft preferences (preferred but not required)
- You need complex expressions (In, NotIn, Exists, DoesNotExist)
See Use node affinity for advanced scenarios.
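For illustration, a soft preference (something nodeSelector cannot express) looks roughly like the sketch below. The disktype label is only an example, and the full syntax is covered in Use node affinity:
# Sketch: prefer SSD nodes but fall back to other nodes if none are available.
scheduling:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd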
Label your nodes⚓
Add labels to nodes:
# Label for SSD storage
kubectl label nodes node-1 disktype=ssd
# Label for performance environment
kubectl label nodes node-1 environment=performance
# Label multiple nodes
kubectl label nodes node-2 disktype=ssd environment=performance
kubectl label nodes node-3 disktype=ssd environment=performance
# Verify labels
kubectl get nodes --show-labels | grep disktype
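To see which nodes already carry a label, or to remove a label applied by mistake, the following commands may help (node names are illustrative):
# List only nodes that have the disktype=ssd label
kubectl get nodes -l disktype=ssd
# Show the label value as a column for all nodes
kubectl get nodes -L disktype
# Remove the label (the trailing "-" deletes the key)
kubectl label nodes node-1 disktype-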
Configure node selector⚓
Add scheduling.nodeSelector to your LocustTest CR:
apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: nodeselector-test
spec:
  image: locustio/locust:2.43.3
  testFiles:
    configMapRef: my-test
  master:
    command: "--locustfile /lotest/src/test.py --host https://api.example.com"
  worker:
    command: "--locustfile /lotest/src/test.py"
    replicas: 5
  scheduling:
    nodeSelector:
      disktype: ssd  # Only schedule on nodes with this label
Apply the configuration:
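# Filename is illustrative; use the file where you saved the manifest above
kubectl apply -f nodeselector-test.yaml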
Multiple labels (AND logic)⚓
Require multiple labels on nodes:
apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: multi-label-selector
spec:
  image: locustio/locust:2.43.3
  testFiles:
    configMapRef: my-test
  master:
    command: "--locustfile /lotest/src/test.py --host https://api.example.com"
  worker:
    command: "--locustfile /lotest/src/test.py"
    replicas: 10
  scheduling:
    nodeSelector:
      disktype: ssd             # Must have SSD
      environment: performance  # AND must be performance environment
Nodes must have both labels to be selected.
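A quick way to confirm that at least one node satisfies both conditions is a comma-separated selector, which also ANDs the labels:
# Lists only nodes carrying both labels
kubectl get nodes -l disktype=ssd,environment=performance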
Example: High-performance nodes⚓
Target high-performance node pool:
1. Label your high-performance nodes:
kubectl label nodes perf-node-1 performance-tier=high
kubectl label nodes perf-node-2 performance-tier=high
kubectl label nodes perf-node-3 performance-tier=high
2. Configure test to use labeled nodes:
apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: high-perf-test
spec:
  image: locustio/locust:2.43.3
  testFiles:
    configMapRef: performance-test
  master:
    command: "--locustfile /lotest/src/test.py --host https://api.example.com"
  worker:
    command: "--locustfile /lotest/src/test.py"
    replicas: 20
  scheduling:
    nodeSelector:
      performance-tier: high  # Only high-performance nodes
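Before starting a 20-worker test, it is worth checking that the labeled pool is large enough. A rough capacity check might look like this (the column selection is illustrative):
# Nodes in the high-performance pool with their allocatable CPU and memory
kubectl get nodes -l performance-tier=high \
  -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory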
Example: AWS instance type targeting⚓
Target specific EC2 instance types:
apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: aws-instance-test
spec:
  image: locustio/locust:2.43.3
  testFiles:
    configMapRef: my-test
  master:
    command: "--locustfile /lotest/src/test.py --host https://api.example.com"
  worker:
    command: "--locustfile /lotest/src/test.py"
    replicas: 10
  scheduling:
    nodeSelector:
      node.kubernetes.io/instance-type: c5.2xlarge  # Compute-optimized
Note: This only matches one instance type. For multiple types, use node affinity with the In operator, as sketched below.
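For reference, a node affinity sketch that accepts several instance types could look like this (the instance types listed are illustrative):
scheduling:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node.kubernetes.io/instance-type
                operator: In
                values:  # any of these instance types is acceptable
                  - c5.2xlarge
                  - c5.4xlarge
                  - c6i.2xlarge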
Example: Zone-specific deployment⚓
Keep tests in a specific availability zone:
apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: zone-specific-test
spec:
  image: locustio/locust:2.43.3
  testFiles:
    configMapRef: my-test
  master:
    command: "--locustfile /lotest/src/test.py --host https://api.example.com"
  worker:
    command: "--locustfile /lotest/src/test.py"
    replicas: 10
  scheduling:
    nodeSelector:
      topology.kubernetes.io/zone: us-east-1a  # Only zone 1a
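To check which zones your nodes are in before pinning the test, the zone label can be shown as a column:
# Shows each node's availability zone
kubectl get nodes -L topology.kubernetes.io/zone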
Verify node placement⚓
Check that pods are scheduled on the correct nodes:
# Show pod-to-node mapping
kubectl get pods -l performance-test-name=<test-name> -o wide
# Check labels on nodes where pods are running
NODE=$(kubectl get pod -l performance-test-pod-name=<test-name>-master -o jsonpath='{.items[0].spec.nodeName}')
kubectl get node $NODE --show-labels | grep disktype
Expected: All pods running on nodes with matching labels.
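To see how workers are distributed across the labeled nodes, the node names can be counted; the label key reuses the one shown above:
# Count pods per node for the test
kubectl get pods -l performance-test-name=<test-name> \
  -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort | uniq -c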
Troubleshoot scheduling failures⚓
If pods remain Pending:
Common issue:
Warning FailedScheduling 0/5 nodes are available: 5 node(s) didn't match Pod's node affinity/selector
Causes:
- No nodes with matching labels.
  Fix: Label at least one node with the required key=value pair.
- Typo in the label key or value.
  Fix: Ensure spelling and case match exactly; label matching is case-sensitive.
- Insufficient capacity on the labeled nodes.
  Fix: Add more labeled nodes or reduce resource requests.
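The following checks help confirm which cause applies (pod name and label value are illustrative):
# Inspect the pending pod's events for the FailedScheduling reason
kubectl describe pod <pod-name> | grep -A 10 Events:
# Confirm at least one node carries the expected label
kubectl get nodes -l disktype=ssd
# Double-check the exact labels on a candidate node
kubectl get node <node-name> --show-labels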
Compare with node affinity⚓
Node selector:
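scheduling:
  nodeSelector:
    disktype: ssd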
Equivalent node affinity:
scheduling:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd
Node selector is simpler. Node affinity is more powerful.
Feature flag
Node affinity injection requires the ENABLE_AFFINITY_CR_INJECTION environment variable to be enabled on the operator (Helm default: locustPods.affinityInjection: true).
Combine with other scheduling⚓
Node selector works with tolerations:
apiVersion: locust.io/v2
kind: LocustTest
metadata:
  name: selector-toleration-test
spec:
  image: locustio/locust:2.43.3
  testFiles:
    configMapRef: my-test
  master:
    command: "--locustfile /lotest/src/test.py --host https://api.example.com"
  worker:
    command: "--locustfile /lotest/src/test.py"
    replicas: 10
  scheduling:
    nodeSelector:
      disktype: ssd  # Simple label matching
    tolerations:
      - key: dedicated
        operator: Equal
        value: load-testing
        effect: NoSchedule  # Tolerate taint on SSD nodes
Feature flag
Tolerations injection requires the corresponding Helm value to be enabled on the operator.
See Configure tolerations for details.
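For context, the toleration above would match a taint applied like this (node name and taint key are illustrative):
# Taint dedicated load-testing nodes so that only tolerating pods schedule there
kubectl taint nodes node-1 dedicated=load-testing:NoSchedule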
What's next⚓
- Use node affinity — Complex scheduling with OR logic and preferences
- Configure tolerations — Schedule on tainted nodes
- Scale worker replicas — Calculate capacity for labeled nodes