
Kubernetes setup guide

This is a step-by-step guide to packaging your software, pushing it to the Antithesis registry, and running it in our Kubernetes environment.

If you’re a Kubernetes user, follow this guide. If you’re not currently running your software with Kubernetes, check out the Docker setup guide.

If you’re trying to learn to use Antithesis, our tutorials walk you through it.

Before getting started

Make sure you have a container registry and credentials – or contact us to request them.

Antithesis runs your software in Kubernetes, using the same manifest files you use to deploy and run your code in production. So it’s helpful to have a working understanding of Kubernetes manifests and the resources they define.

This guide assumes that your software is already containerized. If you need some help with that, please follow our containerization guide.

Here’s something important to keep in mind: Antithesis runs your system in a hermetic simulation environment — meaning no internet access at all. (Think of it as giving your software an internet sabbatical.) Anything your system needs must either be packaged into the environment or mocked appropriately.

Need help?

If you run into trouble, or simply want to make sure your testing is as thorough as possible, our Solutions Engineering team would be happy to help – email us at: support@antithesis.com or join our Discord.

1. Kubernetes orchestration

The Kubernetes cluster

We run your software in a single-node K3s cluster. K3s is a lightweight, fully compliant, certified Kubernetes distribution.

What’s included

Your software and all of its dependencies must be included with your manifests and images.

What’s not included

  • Flannel: K3s’s default CNI. In our single-node cluster, pod-to-pod and pod-to-service communication continues to work on a local bridge network.
  • Traefik: K3s’s built-in Ingress controller. Ingress objects will have no effect.
  • ServiceLB: K3s’s service load balancer. Services of type LoadBalancer will be stuck in Pending.
  • metrics-server: Provides resource metrics. Features like kubectl top and HorizontalPodAutoscaler will not work.
  • IPv6

Package your manifests

We expect you to deliver a container image that contains a specific directory structure of what to run. We call this the config image.

Inside this image, simply include a top-level manifests/ folder containing all Kubernetes manifests needed to deploy your software and its dependencies. Depending on your setup, you may need to include additional files and folders in this image.

Here’s an example top-level directory structure inside the image:

.
├── manifests/          # Top-level folder of all your manifests
|   ├── deployment.yaml # Must be valid k8s manifests
|   ├── service.yaml    # Each file may contain one or more resources
|   └── ingress.yaml
└── some_other_file     # Anything needed for setup

Using Helm

Our environment currently requires raw Kubernetes manifests, and helm template allows you to render any chart into manifests ready to be deployed in our environment.

Here is an example workflow.

  1. Add the chart repo.
$ helm repo add my-repo https://charts.example.com/
$ helm repo update

If your chart is local and not hosted somewhere, just use its path, e.g. ./charts/my-chart.

  2. Render manifests by overriding values as needed, and output to the manifests/ directory to keep all rendered manifests together in one place.
$ helm template my-release my-repo/my-chart \
  --set image.tag=v2.0 \
  -f values-prod.yaml -f secrets.yaml \
  --output-dir manifests/

Useful flags and tips:

  • --include-crds to render CustomResourceDefinitions alongside templates when a chart defines CRDs.
  • -n, --namespace <ns> bakes the target namespace into names and manifests.
  • --version <x.y.z> pins an exact chart version from a repo.
  • For local charts with dependencies on other charts, run helm dependency update <chart-dir> before templating. This downloads (or updates) those dependency charts into the charts/ subdirectory of your local chart, so that helm template will render everything correctly.
  • helm template runs entirely locally and does not need or connect to a Kubernetes cluster.

This process allows you to leverage Helm charts while providing our system with the raw Kubernetes manifests it needs to orchestrate your application.

Include external dependencies

Because the Antithesis environment has no internet access, every dependency your software needs must also be deployed alongside your software.

These dependencies fall into two categories:

Services
These are databases, queues, caches, or other components your application relies on.

  • If you own the service (e.g., a custom database), build a container image, upload it to the Antithesis registry, and include a manifest to deploy it.
  • For popular third-party services (Redis, Kafka, MongoDB, MySQL, etc.), public container images are usually available on registries like Docker Hub or Quay, and Helm charts are often listed on Artifact Hub. In these cases, reference the public image in your manifests or use a Helm chart to generate them, and add those manifests to the manifests/ folder.
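For instance, a manifest for a Redis dependency might look like the following sketch (the image tag, names, and ports are illustrative; pin the exact image you push or mirror):

```yaml
# Illustrative sketch: a single-replica Redis Deployment and Service.
# Image, names, and ports are assumptions -- adapt them to your setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: docker.io/redis:7
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
```

Drop a file like this into manifests/ alongside the manifests for your own services.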

Mocks
These are stand-ins for services that don’t have container images available or that you don’t need in their full production form.

  • Some providers publish mocks, such as stripe-mock.
  • General tools like Localstack simulate many AWS services.

Include manifests for these mocks in the manifests/ folder just like any other dependency.

Handling external dependencies offers more details and a list of commonly used mocks.

Build the config image

Once you have everything working locally, create the config image. You’ll send this image, along with all your other service container images, in a later step (Pushing your containers). Antithesis will extract all the files from the config image to run your system. Creating a tagged config image will also help you maintain a versioning system for your configurations.

Create a Dockerfile in the root of your working directory. Copy the manifests/ folder and any other files to add them to a scratch image:

FROM scratch
# All your Kubernetes manifests need to be in this folder.
COPY manifests/ /manifests/
# Include any other files your setup needs, such as licenses.
COPY license.file /license.file

Note that Dockerfile comments must start at the beginning of a line; a # placed after an instruction is treated as part of its arguments and will break the build.

Test locally

We suggest you test that the manifests you’re planning to give us come up locally, using K3s for local testing to match our environment as closely as possible.

You can also test on any other conformant Kubernetes distribution, such as kind or minikube, but be aware that some things may work on your machine and not on ours; testing with K3s minimizes that risk. See Kubernetes Best Practices for more ways to reduce the chances of this happening.

To test locally, images must be available to your local cluster. Your distribution will have documentation for how to make a local image accessible to your cluster. For example, with K3s you can use the command k3s ctr images import your-image-name.tar to do this.

We use kapp to deploy all your manifests with a single command, which allows us to avoid making assumptions about ordering and dependencies. It applies them in a logical order and watches all resources until they become ready. For best results, we recommend you do the same during your local testing.

Here’s an example of how to deploy your manifests with kapp:

$ kapp deploy -a app-name -f manifests/ --yes

Test in isolation

In our environment, your application runs with no external connectivity. Testing this locally allows you to catch hidden dependencies before they break in our cluster.

A simple default-deny NetworkPolicy is a good start, but is not sufficient. NetworkPolicies apply only to Pod traffic once the Pod is scheduled, and don’t block the kubelet from pulling images. As a result, misconfigurations such as imagePullPolicy: Always or latest tags will still succeed on an internet-connected local cluster but fail in our air-gapped cluster.
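For reference, a minimal default-deny egress policy looks like the sketch below. It blocks Pod egress in whatever namespace it’s applied to, but, per the caveat above, it will not stop the kubelet from pulling images:

```yaml
# Sketch: deny all egress from Pods in this namespace.
# Remember: this does NOT block the kubelet's image pulls.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}      # selects every Pod in the namespace
  policyTypes:
    - Egress           # with no egress rules listed, all egress is denied
```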

To properly simulate isolation, you must ensure both Pod traffic and image pulls cannot reach the internet. Some approaches include:

  • Disconnect your machine from the internet.
  • Configure your cluster to pull images only from a local registry, blocking external registries. For example, see the K3s private registry docs.
  • Use firewall rules (e.g. iptables or nftables) to block outbound traffic from your cluster except for loopback and your Pod/Service CIDRs. This simulates the same isolation your workloads will face in our environment.

Preload all required images into your cluster before disconnecting from the internet. For example, in K3s you can do this with:

$ k3s ctr images import your-image.tar

Once you’ve blocked external access, verify that:

  • Pods can still talk to each other through Services.
  • Your application runs successfully using only local images and services, with no hidden internet dependencies.

2. Provide a basic test template

To test your system and find bugs, Antithesis needs to exercise your software. We do this using a test template – code that makes your software do something. There’s a lot you can do with test templates (see our explainer or our tutorial on this for more), but to test your setup, you can use any of your existing tests.

To do this, use the following naming conventions to enable Antithesis to detect and run your test. These conventions should be followed exactly.

  1. Create a directory called /opt/antithesis/test/v1/quickstart in any of your containers.

  2. Paste an existing integration test into an executable named singleton_driver_<your_test_name>.<extension> in the directory you just created. Make sure your executable has an appropriate shebang in the first line, e.g. #!/usr/bin/env bash
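For example, a minimal bash test template might look like this sketch (the body is a placeholder; substitute one of your real integration tests):

```shell
#!/usr/bin/env bash
# Hypothetical minimal test template: singleton_driver_smoke.sh
# The shebang and the exit code are what matter: exit 0 on success,
# non-zero on failure.
set -euo pipefail

# Placeholder check -- replace with a real test, e.g. a request against
# one of your Services.
result="ok"

if [ "$result" = "ok" ]; then
    echo "smoke test passed"
else
    echo "smoke test failed" >&2
    exit 1
fi
```

Remember to make the file executable (chmod +x) when you copy it into your container.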

Now you’ll need to validate that your system can find the test template you just defined – details here. The easiest way to do this is to get the name of your running pod and then call kubectl exec to run your test.

$ kubectl exec <pod_name> -- /opt/antithesis/test/v1/quickstart/singleton_driver_<your_test_name>

3. (Optional) Add a ready signal for fuzzing

Once your system is fully set up and ready for testing, Antithesis needs a way to know it can begin fuzzing. This is what we call a ready signal.

We monitor for two types of signals:

  1. Orchestration completion: The successful completion of kapp deploy, which only reports success once all Kubernetes resources are ready.
  2. A Custom Ready Signal: You can emit this yourself (setup_complete via our SDK or JSONL).

Precedence: Antithesis begins testing as soon as the first signal is received.

  • If you emit a custom signal before orchestration completes, fuzzing begins immediately and the later orchestration signal is ignored.
  • If you don’t emit a custom signal, we wait until orchestration completes successfully (kapp deploy succeeds) to begin fuzzing.

Antithesis only expects to receive one setup_complete message from any of the containers in your system. Antithesis will treat the first such message sent by any running process as its signal to begin testing and injecting faults. Emitting further setup_complete messages has no effect, but if your system isn’t actually ready when the first one is sent, this can lead to unexpected problems.

For most use cases, the easiest and recommended approach is to rely on Orchestration completion, i.e. the successful completion of kapp deploy. This works best if your pods are configured with sensible readinessProbes, which is the standard way you tell Kubernetes that your pod is ready to accept traffic. By relying on this signal, you ensure that fuzzing begins only after all your core Kubernetes resources are in a ready state, providing a stable starting point for testing.
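That signal is only as reliable as your probes. Here’s a sketch of an HTTP readinessProbe on a container spec (the /healthz path and port 8080 are assumptions; use whatever health endpoint your service actually exposes):

```yaml
# Sketch: container spec fragment with an HTTP readiness probe.
containers:
  - name: my_app
    image: my_app:my_tag
    readinessProbe:
      httpGet:
        path: /healthz     # assumed health endpoint
        port: 8080         # assumed container port
      initialDelaySeconds: 5
      periodSeconds: 10
```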

kapp will only emit its success message when all of the resources it watches become ready.

For example:

  • Deployments: Considered ready when unavailableReplicas = 0.
  • StatefulSets: kapp uses their update strategy and pod readiness.
  • Pods: Considered ready when they are Running and their readiness checks pass.

See the kapp docs for more details on how kapp determines readiness for different resource types.

When should I emit a ready signal?

There are two cases where you should or might want to emit a ready signal rather than rely on Kubernetes resources being ready:

  • Post-startup tasks: when you have tasks that run outside the Kubernetes resource lifecycle (for example priming a database or creating test users).
  • Intentional early testing: when you want fuzzing to begin before all Kubernetes resources are ready, either because testing can safely start early or to intentionally exercise failure modes caused by partial or incorrect startup.

In these cases, use our SDKs to emit the ready signal. If you can’t use the SDK, you can also append a JSONL message to $ANTITHESIS_OUTPUT_DIR/sdk.jsonl. In our environment, we ensure that this variable and the directory it points to always exist.

{"antithesis_setup": { "status": "complete", "details": {"message": "Set up complete - ready for testing!" }}}

The message must be a single line with no embedded newlines, because the file is parsed as JSONL. More details on this syntax here.
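Without an SDK, emitting the signal can be as simple as this shell sketch (the /tmp fallback exists only so the snippet runs outside our environment, where the variable is always set):

```shell
# Sketch: emit setup_complete by appending one JSONL line.
# In the Antithesis environment, ANTITHESIS_OUTPUT_DIR is always set;
# the /tmp default below is just for trying this locally.
: "${ANTITHESIS_OUTPUT_DIR:=/tmp}"

printf '%s\n' '{"antithesis_setup": { "status": "complete", "details": {"message": "Set up complete - ready for testing!"}}}' \
  >> "${ANTITHESIS_OUTPUT_DIR}/sdk.jsonl"
```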

4. Push your containers

When you become a customer, we configure a container registry for you and send you a credential file $TENANT_NAME.key.json.

To authenticate to your container registry, run the following command:

$ cat $TENANT_NAME.key.json | docker login -u _json_key https://us-central1-docker.pkg.dev --password-stdin

Now you’re locally authenticated to the registry and can run all other Docker commands as normal.

Push your containers and config image to: us-central1-docker.pkg.dev/molten-verve-216720/$TENANT_NAME-repository/

For example, if your local image is named my_app, you’d tag it as it’s referenced in your manifests and push it as follows:

$ docker tag my_app:my_tag us-central1-docker.pkg.dev/molten-verve-216720/$TENANT_NAME-repository/my_app
$ docker push us-central1-docker.pkg.dev/molten-verve-216720/$TENANT_NAME-repository/my_app:my_tag

5. Run your first test

You can use this webhook endpoint to kick off a test run using the username and password we sent you when you became a customer.

curl --fail -u 'user:password' \
-X POST https://<tenant>.antithesis.com/api/v1/launch/basic_k8s_test \
-d '{"params": { "antithesis.description":"basic_k8s_test on main",
    "antithesis.duration":"30",
    "antithesis.config_image":"config_image_with_tag",
    "antithesis.images":"my_images_with_tags", 
    "antithesis.report.recipients":"foo@email.com;bar@email.com"
    } }'

The antithesis.images parameter should be a list of all images and their tags that are referenced in your manifests. If an image comes from a public registry, specify its full path (e.g. "antithesis.images":"docker.io/bitnamilegacy/etcd:3.5").

Do not pass the config image in antithesis.images; it should only be passed to antithesis.config_image.

All lists of parameters should be ; delimited. For more information on these and other parameters, please consult our webhook reference.

Since you’re just learning the ropes here, we’ve set Antithesis up to test for 30 minutes, but once you’re up and running you’ll be able to specify a longer testing duration. You can also get results through other channels, e.g. via a Slack or Discord integration.

6. Do some reading

Within an hour, you’ll receive an email with a link to a triage report. We suggest you read about the triage report and test properties while you wait!

Congratulations – you’re now set up with Antithesis!

From here, we suggest exploring the rest of the documentation.

Later on, you’ll probably also want to configure your CI system to automate the process of building your software and kicking off webhooks.
