# Advanced topics
Basic configuration is not always enough to satisfy performance-test needs, for example when working with Kafka and MSK. Below is a collection of advanced topics. This list will keep growing as the tool matures.
## Kafka & AWS MSK configuration
Generally speaking, Kafka is used in a locustfile exactly as it would be used anywhere else in a cloud context. Thus, no special setup is needed for performance testing with the Operator.

That said, if an organization uses Kafka in production, chances are that authenticated Kafka is in use. One of the main providers of such a managed service is AWS, in the form of MSK. To that end, the Operator has out-of-the-box support for MSK.

To enable performance testing with MSK, a central/global Kafka user can be created by the "cloud admin" or the team responsible for the Operator deployment within the organization. The Operator can then be configured to inject that user's configuration as environment variables into all generated resources. The test can use those variables to authenticate with the Kafka broker.
| Variable Name | Description |
|---|---|
| `KAFKA_BOOTSTRAP_SERVERS` | Kafka bootstrap servers |
| `KAFKA_SECURITY_ENABLED` | - |
| `KAFKA_SECURITY_PROTOCOL_CONFIG` | Security protocol. Options: `PLAINTEXT`, `SASL_PLAINTEXT`, `SASL_SSL`, `SSL` |
| `KAFKA_SASL_MECHANISM` | Authentication mechanism. Options: `PLAINTEXT`, `SCRAM-SHA-256`, `SCRAM-SHA-512` |
| `KAFKA_USERNAME` | The username used to authenticate Kafka clients with the Kafka server |
| `KAFKA_PASSWORD` | The password used to authenticate Kafka clients with the Kafka server |
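As an illustration, a locustfile can read these variables to configure whatever Kafka client it uses. The following is a minimal sketch, assuming the `kafka-python` package and a hypothetical topic name; the exact client library and the convention that `KAFKA_SECURITY_ENABLED` holds the string `true` are assumptions of the example, not requirements of the Operator.

```python
import os

from kafka import KafkaProducer  # assumption: kafka-python; any Kafka client works


def build_producer() -> KafkaProducer:
    """Build a producer from the environment variables injected by the Operator."""
    config = {"bootstrap_servers": os.environ["KAFKA_BOOTSTRAP_SERVERS"]}

    # Assumption for this sketch: the flag holds the string "true" when enabled.
    if os.getenv("KAFKA_SECURITY_ENABLED", "").lower() == "true":
        config.update(
            security_protocol=os.environ["KAFKA_SECURITY_PROTOCOL_CONFIG"],
            sasl_mechanism=os.environ["KAFKA_SASL_MECHANISM"],
            sasl_plain_username=os.environ["KAFKA_USERNAME"],
            sasl_plain_password=os.environ["KAFKA_PASSWORD"],
        )

    return KafkaProducer(**config)


producer = build_producer()
producer.send("example-topic", b"hello from a Locust worker")  # hypothetical topic
producer.flush()
```

In a real test, the producer would typically be created once per worker process and reused inside Locust tasks.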
## Dedicated Kubernetes Nodes
To run test resources on dedicated Kubernetes node(s), the Operator supports deploying resources with Affinity and Taint Tolerations.
### Affinity
This allows generated resources to have specific Affinity options.
!!! note

    The Custom Resource Definition spec is designed with modularity and extensibility in mind. Although only a specific set of Kubernetes Affinity options is supported today, extending that support as needs arise is a streamlined process. If additional support is needed, don't hesitate to open a feature request.
#### Affinity Options
The specification for affinity is defined as follows:

```yaml
apiVersion: locust.io/v1
...
spec:
  ...
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        <label-key>: <label-value>
        ...
...
```
#### Node Affinity
This optional section causes generated pods to declare specific Node Affinity so that the Kubernetes scheduler becomes aware of the requirement.

The implementation from the Custom Resource perspective is strongly influenced by the Kubernetes native definition of node affinity. However, it is deliberately simplified to make affinity easier for users to work with.

The `nodeAffinity` section supports declaring node affinity under `requiredDuringSchedulingIgnoredDuringExecution`, meaning that any declared affinity labels must be present on nodes in order for resources to be deployed on them.
Example:
In the example below, generated pods declare 3 required labels (keys and values) that must be present on nodes before the pods are scheduled.
```yaml
apiVersion: locust.io/v1
...
spec:
  ...
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeAffinityLabel1: locust-cloud-tests
        nodeAffinityLabel2: performance-nodes
        nodeAffinityLabel3: high-memory
  ...
...
```
### Taint Tolerations
This optional section allows deployed pods to declare specific taint toleration(s). The feature is also modeled to follow the Kubernetes native definition closely.
#### Spec breakdown & example
```yaml
apiVersion: locust.io/v1
...
spec:
  ...
  tolerations:
    - key: <string value>
      operator: <"Exists", "Equal">
      value: <string value> #(1)!
      effect: <"NoSchedule", "PreferNoSchedule", "NoExecute">
...
```

1. Optional when `operator` is set to `Exists`.
```yaml
apiVersion: locust.io/v1
...
spec:
  ...
  tolerations:
    - key: taint-A
      operator: Equal
      value: ssd
      effect: NoSchedule
    - key: taint-B
      operator: Exists
      effect: NoExecute
  ...
...
```
## Usage of a private image registry
Images from a private image registry can be used through the various methods described in the Kubernetes documentation. One of those methods relies on setting `imagePullSecrets` for pods. The operator supports this: simply set the `imagePullSecrets` option in the deployed custom resource. For example:
```yaml
apiVersion: locust.io/v1
...
spec:
  image: ghcr.io/mycompany/locust:latest #(1)!
  imagePullSecrets: #(2)!
    - gcr-secret
  ...
```

1. Specify which Locust image to use for both master and worker containers.
2. [Optional] Specify an existing pull secret to use for master and worker pods.
## Image pull policy
Kubernetes uses the image tag and pull policy to control when the kubelet attempts to download (pull) a container image. The pull policy can be defined through the `imagePullPolicy` option, as explained in the Kubernetes documentation. When using the operator, the `imagePullPolicy` option can be configured directly in the custom resource. For example:
```yaml
apiVersion: locust.io/v1
...
spec:
  image: ghcr.io/mycompany/locust:latest #(1)!
  imagePullPolicy: Always #(2)!
  ...
```

1. Specify which Locust image to use for both master and worker containers.
2. [Optional] Specify the pull policy to use for containers within master and worker pods. Supported options: `Always`, `IfNotPresent`, and `Never`.
## Automatic Cleanup for Finished Master and Worker Jobs
Once load tests finish, master and worker jobs remain in Kubernetes. You can set a time-to-live (TTL) value in the operator's Helm chart so that finished jobs become eligible for cascading removal once the TTL expires. This means that master and worker jobs and their dependent objects (e.g., pods) will be deleted.

Note that setting a TTL will not delete `LocustTest` or `ConfigMap` resources.
To set a TTL value, override the key `ttlSecondsAfterFinished` in `values.yaml`:
```yaml
...
config:
  loadGenerationJobs:
    # Either leave empty or use an empty string to avoid setting this option
    ttlSecondsAfterFinished: 3600
...
```
You can also use Helm's CLI arguments: `helm install ... --set config.loadGenerationJobs.ttlSecondsAfterFinished=0`.
Read more about the `ttlSecondsAfterFinished` parameter in the official Kubernetes documentation.
### Kubernetes Support for `ttlSecondsAfterFinished`
Support for the `ttlSecondsAfterFinished` parameter was added in Kubernetes v1.12. If you're deploying the Locust operator to a Kubernetes cluster that does not support `ttlSecondsAfterFinished`, you may leave the Helm key empty or use an empty string. In that case, job definitions will not include the parameter.