Kubernetes setup guide

This is a step-by-step guide to packaging your software, pushing it to the Antithesis registry, and running it in our Kubernetes environment.

If you’re a Kubernetes user, follow this guide. If you’re not currently running your software with Kubernetes, check out the Docker setup guide.

If you’re trying to learn to use Antithesis, our tutorials walk you through it.

Before getting started

  1. Make sure you have a container registry and credentials – or contact us to request them.

  2. Antithesis runs your software in Kubernetes, using the same manifest files you use to deploy and run your code in production, so it helps to have a working understanding of Kubernetes manifests and how they are used to deploy your system.

    This guide assumes that your software is already containerized. If you need some help with that, please follow our containerization guide.

  3. Antithesis runs your system in a hermetic simulation environment — meaning no internet access at all. Anything your system needs must either be packaged into the environment or mocked appropriately.

Need help?

If you run into trouble, or simply want to make sure your testing is as thorough as possible, our solutions engineering team would be happy to help – email us at: support@antithesis.com or join our Discord.

tl;dr

  1. Know the environment

    • Single-node K3s cluster.
    • Provide manifests and images for everything that runs inside the cluster – we run without internet access.
  2. Package manifests

    • Collect all Kubernetes manifests into a manifests/ folder.
      • Use helm template to render manifests from charts.
      • Ensure all images are fully qualified with registry and tags/digests.
    • Build a config image (e.g. FROM scratch; COPY manifests/ /manifests/).
  3. Test locally

    • Simulate isolation (no internet connectivity).
    • Use K3s (recommended) or another K8s distro (kind, minikube).
    • Deploy manifests/ with kapp (recommended) or kubectl apply -f.
  4. Add a test template

    • Place an executable test in /opt/antithesis/test/v1/quickstart/ in a pod running indefinitely (preferably a dedicated client pod).
    • Run the test locally with kubectl exec.
  5. (Optional) Add a ready signal

    • By default, testing begins when all Kubernetes resources are ready.
    • Optionally, emit a signal from the SDK to start testing earlier.
  6. Push your images

    • Tag and push all container images, including the config image, to the Antithesis registry.
    • Ensure images are fully qualified, exactly as referenced in the manifests.
  7. Run your first test!

    • Trigger a test with the provided webhook endpoint.
    • Review the triage report you’ll receive by email.

The Kubernetes Environment

We run your software in a single-node K3s cluster. K3s is a lightweight, fully-compliant and certified Kubernetes distribution.

What’s included

Your software and all of its dependencies must be included with your manifests and images.

What’s not included

  • Flannel: K3s’s default CNI. In our single-node cluster, pod-to-pod and pod-to-service communication continues to work on a local bridge network.
  • Traefik: K3s’s built-in Ingress controller. Ingress objects will have no effect.
  • ServiceLB: K3s’s service load balancer. Services of type LoadBalancer will be stuck in Pending.
  • metrics-server: Provides resource metrics. Features like kubectl top and HorizontalPodAutoscaler will not work.
  • IPv6
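
Once you’ve collected your manifests (next section), a quick text search can flag dependencies on these missing components. This is only a rough heuristic, not a guarantee:

$ grep -rnE 'kind: Ingress|type: LoadBalancer' manifests/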

Package your manifests

We expect you to deliver a container image containing a specific directory structure that describes what to run. We call this the config image.

Inside this image, you must include a manifests/ folder at the root of the image filesystem, containing all Kubernetes manifests needed to deploy your software and its dependencies.

Important:

  • All images in your manifests must be fully qualified: include registry, name, and tag or digest.
  • For private images, the registry will be the Antithesis internal repository:
    us-central1-docker.pkg.dev/molten-verve-216720/$TENANT_NAME-repository
    (referred to as {INTERNAL_REPOSITORY} throughout this document).
  • Public images (e.g. from Docker Hub or Quay) must also be fully qualified with their registry path, e.g. docker.io/bitnamilegacy/etcd:3.5.

Example: directory structure inside the config image

.
├── manifests/          # Must be at the root of the config image
│   ├── deployment.yaml # Valid Kubernetes manifests
│   ├── service.yaml    # Each file may contain one or more resources
│   └── ingress.yaml
└── some_other_file     # Any additional files needed for setup

Using Helm

Our environment currently requires raw Kubernetes manifests; helm template lets you render any chart into manifests that are ready to deploy there.

Here is an example workflow.

  1. Add the chart repo.
$ helm repo add my-repo https://charts.example.com/
$ helm repo update

If your chart is local and not hosted somewhere, just use its path, e.g. ./charts/my-chart.

  2. Render manifests by overriding values as needed, and output to the manifests/ directory to keep all rendered manifests together in one place.
$ helm template my-release my-repo/my-chart \
  --set image.tag=v2.0 \
  -f values-prod.yaml -f secrets.yaml \
  --output-dir manifests/

Useful flags and tips:

  • --include-crds to render CustomResourceDefinitions alongside templates when a chart defines CRDs.
  • -n, --namespace <ns>: bakes the target namespace into resource names and manifests. Remember to also include manifests for any Namespace resources you need.
  • --version <x.y.z> pins an exact chart version from a repo.
  • For local charts with dependencies on other charts, run helm dependency update <chart-dir> before templating (see the example after this list). This downloads (or updates) those dependency charts into the charts/ subdirectory of your local chart, so that helm template renders everything correctly.
  • helm template runs entirely locally and does not need or connect to a Kubernetes cluster.
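
For example, rendering a local chart that has its own chart dependencies might look like this (paths and names here are illustrative):

$ helm dependency update ./charts/my-chart
$ helm template my-release ./charts/my-chart \
  --include-crds \
  --output-dir manifests/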

This process allows you to leverage Helm charts while providing our system with the raw Kubernetes manifests it needs to orchestrate your application.

Include external dependencies

Because the Antithesis environment has no internet access, every dependency your software needs must also be deployed alongside your software.

These dependencies fall into two categories:

Services
These are databases, queues, caches, or other components your application relies on.

  • If you own the service (e.g., a custom database), build a container image, upload it to the Antithesis registry, and include a manifest to deploy it.
  • For popular third-party services (Redis, Kafka, MongoDB, MySQL, etc.), public container images are usually available on registries like Docker Hub or Quay, and Helm charts are often listed on Artifact Hub. In these cases, reference the public image in your manifests or use a Helm chart to generate them, and add those manifests to the manifests/ folder.
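
As a sketch, a minimal Deployment for a Redis dependency using a fully qualified public image might look like this (the image version, replica count, and labels are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: docker.io/library/redis:7.2   # fully qualified public image
          ports:
            - containerPort: 6379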

Mocks
These are stand-ins for services that don’t have container images available or that you don’t need in their full production form.

  • Some providers publish mocks, such as stripe-mock.
  • General tools like Localstack simulate many AWS services.

Include manifests for these mocks in the manifests/ folder just like any other dependency.

Handling external dependencies offers more details and a list of commonly used mocks.

Build the config image

Once you have everything working locally, create the config image. You’ll push this image along with all your other service container images in a later step (Push your images). Antithesis extracts all the files from the config image to run your system. Tagging your config image also helps you keep your configurations versioned.

Create a Dockerfile in the root of your working directory. Copy the manifests/ folder and any other files to add them to a scratch image:

FROM scratch
COPY manifests/ /manifests/     # All your Kubernetes manifests must be in this folder
COPY license.file /license.file # Include any other files your setup needs
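
Then build and tag the config image. The name and tag below are only an example, matching the {INTERNAL_REPOSITORY} placeholder used elsewhere in this guide; use whatever fits your versioning scheme:

$ docker build -t {INTERNAL_REPOSITORY}/config:1.0 .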

Test locally

We suggest you verify locally that the manifests you plan to give us come up correctly, using K3s for local testing to match our environment as closely as possible.

You can also test on any other valid Kubernetes distribution, such as kind or minikube, but be aware that some things may then work on your machine but not on ours; using K3s avoids most of these discrepancies. See Kubernetes Best Practices for more info on how to minimize the chances of this happening.

To test locally, images must be available to your local cluster. Your distribution will have documentation for how to make a local image accessible to your cluster. For example, with K3s you can use the command k3s ctr images import your-image-name.tar to do this.
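
For example, if you build your images with Docker, you might export an image to a tarball and import it into K3s like this (the image name is illustrative):

$ docker save -o my-app.tar my-app:1.0
$ k3s ctr images import my-app.tar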

We use kapp to deploy all your manifests with a single command, which allows us to avoid making assumptions about ordering and dependencies. It applies them in a logical order and watches all resources until they become ready. For best results, we recommend you do the same during your local testing.

Here’s an example of how to deploy your manifests with kapp:

$ kapp deploy -a app-name -f manifests/ --yes

Test in isolation

In our environment, your application runs with no external connectivity. Testing this locally allows you to catch hidden dependencies before they break in our cluster.

A simple default-deny NetworkPolicy is a good start, but is not sufficient. NetworkPolicies apply only to Pod traffic once the Pod is scheduled, and don’t block the kubelet from pulling images. As a result, misconfigurations such as imagePullPolicy: Always or latest tags will still succeed on an internet-connected local cluster but fail in our air-gapped cluster.
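
For reference, a minimal default-deny egress policy looks like the sketch below (apply one per namespace you use); remember that this alone does not block image pulls:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}        # selects every Pod in the namespace
  policyTypes:
    - Egress             # with no egress rules listed, all egress is denied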

To properly simulate isolation, you must ensure both Pod traffic and image pulls cannot reach the internet. Some approaches include:

  • Disconnect your machine from the internet.
  • Configure your cluster to pull images only from a local registry, blocking external registries. For example, see the K3s private registry docs.
  • Use firewall rules (e.g. iptables or nftables) to block outbound traffic from your cluster except for loopback and your Pod/Service CIDRs. This simulates the same isolation your workloads will face in our environment.
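
A rough sketch of the firewall approach, run on the host and using the default K3s Pod and Service CIDRs (adjust these to match your cluster):

$ sudo iptables -A OUTPUT -o lo -j ACCEPT               # keep loopback traffic
$ sudo iptables -A OUTPUT -d 10.42.0.0/16 -j ACCEPT     # default K3s Pod CIDR
$ sudo iptables -A OUTPUT -d 10.43.0.0/16 -j ACCEPT     # default K3s Service CIDR
$ sudo iptables -A OUTPUT -j REJECT                     # block everything else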

Preload all required images into your cluster before disconnecting from the internet. For example, in K3s you can do this with:

$ k3s ctr images import your-image.tar

Once you’ve blocked external access, verify that:

  • Pods can still talk to each other through Services.
  • Your application runs successfully using only local images and services, with no hidden internet dependencies.

Provide a basic test template

To test your system and find bugs, Antithesis needs to exercise your software. We do this using a test template – code that makes your software do something. There’s a lot you can do with test templates (see our explainer or our tutorial on this for more), but to test your setup, you can use any of your existing tests.

To do this, use the following naming conventions to enable Antithesis to detect and run your test. These conventions should be followed exactly.

  1. Create a directory called /opt/antithesis/test/v1/quickstart in any of your containers.

  2. Paste an existing integration test into an executable named singleton_driver_<your_test_name>.<extension> in the directory you just created. Make sure your executable has an appropriate shebang in the first line, e.g. #!/usr/bin/env bash
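
For example, a minimal test template might look like the following sketch (the service name and endpoint are hypothetical; substitute one of your own tests):

#!/usr/bin/env bash
# singleton_driver_healthcheck.sh — hypothetical example test template
set -euo pipefail

# Exercise the system: here, just hit a (hypothetical) in-cluster service endpoint.
curl --fail http://my-service:8080/health
echo "health check passed"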

Now you’ll need to validate that your system can find the test template you just defined – details here. The easiest way to do this is to get the name of your running pod and then call kubectl exec to run your test.

$ kubectl exec <pod_name> -- /opt/antithesis/test/v1/quickstart/singleton_driver_<your_test_name>

(Optional) Add a ready signal for fuzzing

Once your system is fully set up and ready for testing, Antithesis needs a way to know it can begin fuzzing. This is what we call a ready signal.

We monitor for two types of signals:

  1. Orchestration completion: The successful completion of kapp deploy, which only reports success once all Kubernetes resources are ready.
  2. A Custom Ready Signal: You can emit this yourself (setup_complete via our SDK or JSONL).

Precedence: Antithesis begins testing as soon as the first signal is received.

  • If you emit a custom signal before orchestration completes, fuzzing begins immediately and the later orchestration signal is ignored.
  • If you don’t emit a custom signal, we wait until orchestration completes successfully (kapp deploy succeeds) to begin fuzzing.

Antithesis only expects to receive one setup_complete message from any of the containers in your system. Antithesis will treat the first such message sent by any running process as its signal to begin testing and injecting faults. Emitting further setup_complete messages has no effect, but if your system isn’t actually ready when the first one is sent, this can lead to unexpected problems.

For most use cases, the easiest and recommended approach is to rely on Orchestration completion, i.e. the successful completion of kapp deploy. This works best if your pods are configured with sensible readinessProbes, which is the standard way you tell Kubernetes that your pod is ready to accept traffic. By relying on this signal, you ensure that fuzzing begins only after all your core Kubernetes resources are in a ready state, providing a stable starting point for testing.
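
For example, a readinessProbe on one of your containers might look like this (the path and port are placeholders for your own health endpoint):

containers:
  - name: app
    image: {INTERNAL_REPOSITORY}/app:1.0
    readinessProbe:
      httpGet:
        path: /healthz    # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10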

kapp will only emit its success message when all of the resources it watches become ready.

For example:

  • Deployments: Considered ready when unavailableReplicas = 0.
  • StatefulSets: kapp uses their update strategy and pod readiness.
  • Pods: A Pod is considered ready when its phase is Running.

See the kapp docs for more details on how kapp determines readiness for different resource types.

When should I emit a ready signal?

There are two cases where you might want to emit a ready signal yourself rather than rely on Kubernetes resources being ready:

  • Post-startup tasks: when you have tasks that run outside the Kubernetes resource lifecycle (for example priming a database or creating test users).
  • Intentional early testing: when you want fuzzing to begin before all Kubernetes resources are ready, either because testing can safely start early or to intentionally exercise failure modes caused by partial or incorrect startup.

In these cases, use our SDKs to emit the ready signal. If you can’t use the SDK, you can also append a JSONL message to $ANTITHESIS_OUTPUT_DIR/sdk.jsonl. In our environment, we ensure that this variable and the directory it points to always exist.

{"antithesis_setup": { "status": "complete", "details": {"message": "Set up complete - ready for testing!" }}}

The message must fit on a single line (details must not contain or end with a newline), because the file is parsed as JSONL. More details on this syntax here.
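
For instance, a setup script running inside one of your containers could emit the signal like this (a sketch of the JSONL approach described above):

echo '{"antithesis_setup": { "status": "complete", "details": {"message": "Set up complete - ready for testing!" }}}' \
  >> "$ANTITHESIS_OUTPUT_DIR/sdk.jsonl"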

Push your images

When you become a customer, we configure a container registry for you and send you a credential file $TENANT_NAME.key.json.

To authenticate to your container registry, run the following command:

$ cat $TENANT_NAME.key.json | docker login -u _json_key https://us-central1-docker.pkg.dev --password-stdin

Now you’re locally authenticated to the registry and can run all other Docker commands as normal.

Push your custom and non-public images (including your config image) to: us-central1-docker.pkg.dev/molten-verve-216720/$TENANT_NAME-repository/

Images that are publicly available (e.g. docker.io/bitnamilegacy/etcd:3.5) can be referenced directly in your manifests with their fully qualified registry path; you do not need to copy them into the Antithesis registry.

Requirements

  • All images in your manifests must be fully qualified.
  • Manifests must reference images exactly as they exist in the registry, fully qualified with registry and tag/digest.

Example: here’s a pod spec with both a private image and a public image:

spec:
  containers:
    - name: app
      image: {INTERNAL_REPOSITORY}/app:1.0
    - name: etcd
      image: docker.io/bitnamilegacy/etcd:3.5

In this case, the public image docker.io/bitnamilegacy/etcd:3.5 can be referenced directly.

The private image for app:1.0 must be uploaded to the Antithesis registry:

$ docker tag app:1.0 {INTERNAL_REPOSITORY}/app:1.0
$ docker push {INTERNAL_REPOSITORY}/app:1.0

Run your first test

Use this webhook endpoint to kick off a test run with the username and password we sent you when you became a customer:

curl --fail -u 'user:password' \
  -X POST https://<tenant>.antithesis.com/api/v1/launch/basic_k8s_test \
  -d '{"params": {
    "antithesis.description":"basic_k8s_test on main",
    "antithesis.duration":"30",
    "antithesis.config_image":"${INTERNAL_REPOSITORY}/config:1.0",
    "antithesis.images":"${INTERNAL_REPOSITORY}/app:1.0;docker.io/bitnamilegacy/etcd:3.5",
    "antithesis.report.recipients":"foo@email.com;bar@email.com"
  }}'
  • The antithesis.images parameter must list all images exactly as referenced in your manifests, fully qualified.
  • Do not include the config image in antithesis.images; it should only be passed to antithesis.config_image.
  • Multiple images are ; delimited, as shown above.
  • See the webhook reference for more information.

Since you’re just learning the ropes here, we’ve set Antithesis up to test for 30 minutes, but once you’re up and running you’ll be able to specify a longer testing duration. You can also get results through other channels, e.g. via a Slack or Discord integration.

Review the triage report

Within an hour, you’ll receive an email with a link to a triage report. We suggest you read about the triage report and test properties while you wait!

Congratulations – you’re now set up with Antithesis!

From here, we suggest exploring the rest of the documentation.

Later on, you’ll probably also want to configure your CI system to automate the process of building your software and kicking off webhooks.
