Getting started

Only a few simple steps are needed to get a test up and running in the cluster. The following is a step-by-step guide on how to achieve this.

Step 1: Write a valid Locust test script

For this example, we will be using the following script:

demo_test.py
from locust import HttpUser, task

class User(HttpUser): # (1)!
    @task #(2)!
    def get_employees(self) -> None:
        """Get a list of employees."""
        self.client.get("/api/v1/employees") #(3)!
  1. Class representing users that will be simulated by Locust.
  2. One or more tasks that each simulated user will perform.
  3. HTTP call to a specific endpoint.

Note

To be able to run performance tests effectively, an understanding of Locust, the underlying load generation tool, is required. All tests must be valid Locust tests.

Locust provides very good and detailed documentation, which can be found here.
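
For reference, a minimal sketch of how the same script could be extended is shown below, adding a wait time between tasks and a second, weighted task. The /api/v1/employee/{id} endpoint and the task weights are assumptions made for this sketch, not something required by the Operator.

demo_test_extended.py
from locust import HttpUser, task, between

class User(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks.
    wait_time = between(1, 3)

    @task(3)  # picked roughly 3x as often as the task below
    def get_employees(self) -> None:
        """Get a list of employees."""
        self.client.get("/api/v1/employees")

    @task
    def get_employee(self) -> None:
        """Get a single employee (assumed endpoint); stats are grouped under one name."""
        self.client.get("/api/v1/employee/1", name="/api/v1/employee/{id}")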

Step 2: Write a valid custom resource for LocustTest CRD

A simple custom resource for the previous test can look like the following example.

To streamline this step, intensive-brew can be used. It is a simple CLI tool that converts a declarative YAML file into a compatible LocustTest Kubernetes custom resource.

locusttest-cr.yaml
apiVersion: locust.io/v1 #(1)!
kind: LocustTest #(2)!
metadata:
  name: demo.test #(3)!
spec:
  image: locustio/locust:latest #(4)!
  masterCommandSeed: #(5)!
    --locustfile /lotest/src/demo_test.py
    --host https://dummy.restapiexample.com
    --users 100
    --spawn-rate 3
    --run-time 3m
  workerCommandSeed: --locustfile /lotest/src/demo_test.py #(6)!
  workerReplicas: 3 #(7)!
  configMap: demo-test-map #(8)!
  1. API version based on the deployed LocustTest CRD.
  2. Resource kind.
  3. The name field is used by the Operator to infer the names of test-generated resources. While this value is insignificant to the Operator itself, it is important to keep a good naming convention here, since it helps in tracking resources across the cluster when needed.
  4. Image to use for the load generation pods.
  5. Seed command for the master node. The Operator will append to this seed command all operational parameters needed for the master to perform its job, e.g. ports, rebalancing settings, timeouts, etc.
  6. Seed command for the worker nodes. The Operator will append to this seed command all operational parameters needed for the workers to perform their job, e.g. ports, master node URL, master node ports, etc.
  7. The number of worker nodes to spawn in the cluster.
  8. [Optional] Name of the configMap to mount into the pod.

Other options

Labels and annotations

You can add labels and annotations to generated Pods. For example:

locusttest-cr.yaml
apiVersion: locust.io/v1
...
spec:
  image: locustio/locust:latest
  labels: #(1)!
    master:
      locust.io/role: "master"
      myapp.com/testId: "abc-123"
      myapp.com/tenantId: "xyz-789"
    worker:
      locust.io/role: "worker"
  annotations: #(2)!
    master:
      myapp.com/threads: "1000"
      myapp.com/version: "2.1.0"
    worker:
      myapp.com/version: "2.1.0"
  ...
  1. [Optional] Labels are attached to both master and worker pods. They can later be used to identify pods belonging to a particular execution context, which is useful, for example, when tests are deployed programmatically: a launcher application can query the Kubernetes API for specific resources.
  2. [Optional] Annotations are likewise attached to both master and worker pods. They can be used to include additional context about a test, for example configuration parameters of the software system being tested.

Both labels and annotations can be added to the Prometheus configuration, so that metrics are associated with the appropriate information, such as the test and tenant IDs. You can read more about this on the Prometheus documentation site.
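
For example, assuming the labels shown above, a launcher application (or a human operator) could use a label selector to list all pods belonging to a particular test run:

  • kubectl get pods -l myapp.com/testId=abc-123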

Step 3: Deploy the Locust k8s Operator in the cluster

The recommended way to install the Operator is by using the official Helm chart. Documentation on how to do that is available here.

Step 4: Deploy test as a configMap

For the purposes of this example, the demo_test.py test previously demonstrated will be deployed into the cluster as a configMap that the Operator will mount to the load generation pods.
To deploy the test as a configMap, run the below command, following the template kubectl create configmap <configMap-name> --from-file <your_test.py>:

  • kubectl create configmap demo-test-map --from-file demo_test.py
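
If desired, the result can be verified before moving on by printing the configMap back (add a namespace flag if you are not working in the default namespace):

  • kubectl get configmap demo-test-map -o yaml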

Fresh cluster resources

Fresh cluster resources are allocated for each running test, meaning that tests DO NOT have any cross impact on each other.

Step 5: Start the test by deploying the LocustTest custom resource

Deploying a custom resource signals to the Operator the desire to start a test, and thus the Operator starts creating and scheduling all needed resources.
To do that, deploy the custom resource following this template kubectl apply -f <valid_cr>.yaml:

  • kubectl apply -f locusttest-cr.yaml
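
As an optional sanity check, the same file can be used to confirm that the custom resource was accepted by the API server:

  • kubectl describe -f locusttest-cr.yaml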

Step 5.1: Check cluster for running resources

At this point, it is possible to check the cluster; all required resources will be running based on the configuration passed in the custom resource.

The Operator will create the following resources in the cluster for each valid custom resource:

  • A Kubernetes Service for the master node so it is reachable by the worker nodes.
  • A Kubernetes Job to manage the master node.
  • A Kubernetes Job to manage the worker nodes.
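
A quick way to inspect these resources is to list them directly; their names are inferred from the custom resource name, demo.test in this example:

  • kubectl get pods,jobs,services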

Step 6: Clear resources after test run

In order to remove the cluster resources after a test run, simply delete the custom resource, and the Operator will react to this event by cleaning the cluster of all related resources.
To delete a resource, run the below command following this template kubectl delete -f <valid_cr>.yaml:

  • kubectl delete -f locusttest-cr.yaml
