
# pytest-kubernetes

A lightweight pytest plugin for managing local Kubernetes clusters (minikube, k3d, kind)
pytest-kubernetes is a lightweight pytest plugin that makes managing (local) Kubernetes clusters a breeze. You can easily spin up a Kubernetes cluster with one pytest fixture and tear it down again.
The fixture comes with some simple functions to interact with the cluster, for example kubectl(...), which lets you run typical kubectl commands against this cluster without worrying about the kubeconfig on the test machine.
This plugin can be installed from PyPI:

```shell
pip install pytest-kubernetes
# or, with Poetry
poetry add -D pytest-kubernetes
```

Note that this package provides entrypoint hooks so it is loaded automatically by pytest.
pytest-kubernetes expects the following components to be available on the test machine:

- kubectl
- minikube (optional, for minikube-based clusters)
- k3d (optional, for k3d-based clusters)
- kind (optional, for kind-based clusters)

Please make sure they are installed to run pytest-kubernetes properly.
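The plugin does not install these binaries for you. As a quick preflight check before a test session (this helper is ours, not part of pytest-kubernetes; it only uses the standard library), you can verify which of them are on PATH:

```python
import shutil

def missing_binaries(required=("kubectl",), optional=("minikube", "k3d", "kind")):
    """Return (missing required, missing optional) binaries not found on PATH."""
    missing_required = [b for b in required if shutil.which(b) is None]
    missing_optional = [b for b in optional if shutil.which(b) is None]
    return missing_required, missing_optional

# Example: fail fast before the test session if kubectl is absent,
# and warn if no cluster provider (minikube/k3d/kind) is installed.
required_missing, optional_missing = missing_binaries()
```

Only one of the optional providers needs to be present; pytest-kubernetes picks an available one automatically (see the priority order below).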
The k8s fixture provides access to an automatically selected Kubernetes provider, depending on which tools are available on the host. The priority is: k3d, kind, minikube-docker, minikube-kvm2.
The fixture passes a manager object of type AClusterManager.
It provides the following interface:
- kubectl(...): Execute a kubectl command against this cluster (defaults to dict as the return format)
- apply(...): Apply resources to this cluster, either from a YAML file or a Python dict
- load_image(...): Load a container image into this cluster
- wait(...): Wait for a target and a condition
- port_forwarding(...): Port-forward a target
- logs(...): Get the logs of a pod
- version(): Get the Kubernetes version of this cluster
- create(...): Create this cluster (pass special cluster arguments with options: List[str] to the CLI command)
- delete(): Delete this cluster
- reset(): Delete this cluster (if it exists) and create it again

The interface provides proper typing and should be easy to work with.
Example
```python
def test_a_feature_with_k3d(k8s: AClusterManager):
    k8s.create()
    k8s.apply(
        {
            "apiVersion": "v1",
            "kind": "ConfigMap",
            "data": {"key": "value"},
            "metadata": {"name": "myconfigmap"},
        },
    )
    k8s.apply("./dependencies.yaml")
    k8s.load_image("my-container-image:latest")
    k8s.kubectl(
        [
            "run",
            "test",
            "--image",
            "my-container-image:latest",
            "--restart=Never",
            "--image-pull-policy=Never",
        ]
    )
```
This cluster will be deleted once the test case is over.
Please note that you need to set "--image-pull-policy=Never" for images that you loaded into the cluster via the k8s.load_image(...) function (see example above).
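Because apply() also accepts plain Python dicts (as in the example above), manifests can be built programmatically. A small sketch, assuming you want to generate similar ConfigMaps for several tests (the helper name is ours, not part of the plugin):

```python
def configmap(name: str, data: dict, namespace: str = "default") -> dict:
    """Build a ConfigMap manifest as a plain dict, ready for k8s.apply(...)."""
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name, "namespace": namespace},
        "data": data,
    }

manifest = configmap("myconfigmap", {"key": "value"})
# Inside a test using the k8s fixture:
# k8s.apply(manifest)
```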
The k8s_manager fixture provides a convenient factory method, similar to the util select_provider_manager (see below), to construct prepared Kubernetes clusters.

```python
k8s_manager(name: Optional[str] = None) -> Type[AClusterManager]
```

In contrast to select_provider_manager, k8s_manager is sensitive to pytest arguments from the command line or configuration file. It allows you to override the standard configuration via the --k8s-kubeconfig-override argument to use an external cluster for this test run. It makes development a breeze.
Example
The following recipe creates a k3d cluster (for example with k3d cluster create --config k3d_cluster.yaml) if it is not already running, and reuses the existing k3d cluster otherwise. This is used in Gefyra.
```python
@pytest.fixture(scope="module")
def k3d(k8s_manager):
    k8s: AClusterManager = k8s_manager("k3d")("gefyra")
    # ClusterOptions() forces pytest-kubernetes to always write a new kubeconfig file to disk
    cluster_exists = k8s.ready(timeout=1)
    if not cluster_exists:
        k8s.create(
            ClusterOptions(api_version="1.29.5"),
            options=[
                "--agents",
                "1",
                "-p",
                "8080:80@agent:0",
                "-p",
                "31820:31820/UDP@agent:0",
                "--agents-memory",
                "8G",
            ],
        )
    if "gefyra" not in k8s.kubectl(["get", "ns"], as_dict=False):
        k8s.kubectl(["create", "ns", "gefyra"])
        k8s.wait("ns/gefyra", "jsonpath='{.status.phase}'=Active")
    else:
        purge_gefyra_objects(k8s)
    os.environ["KUBECONFIG"] = str(k8s.kubeconfig)
    yield k8s
    if cluster_exists:
        # delete existing bridges
        purge_gefyra_objects(k8s)
        k8s.kubectl(["delete", "ns", "gefyra"], as_dict=False)
    else:
        # we delete this cluster only when created during this run
        k8s.delete()
```
This example allows you to run test cases either against an automatic ephemeral cluster or against a "long-living" cluster.
To run local tests without losing time in the setup and teardown of the cluster, you can follow these steps:

1. Create a k3d cluster, for example from a config file: k3d cluster create --config k3d_cluster.yaml
2. Export its kubeconfig: k3d kubeconfig get gefyra > mycluster.yaml
3. Run pytest --k8s-kubeconfig-override mycluster.yaml --k8s-cluster-name gefyra --k8s-provider k3d -s -x tests/

pytest-kubernetes uses pytest marks for specifying the cluster configuration for a test case.
Currently, the following settings are supported: provider, cluster_name, and keep.
Example
```python
@pytest.mark.k8s(provider="minikube", cluster_name="test1", keep=True)
def test_a_feature_in_minikube(k8s: AClusterManager):
    ...
```
To write custom Kubernetes-based fixtures in your project you can make use of the following util functions.
select_provider_manager

This function returns a subclass of AClusterManager that is not yet instantiated or wrapped in a fixture.
Remark: Do not use this if you can use the fixture k8s_manager instead (see above).

```python
select_provider_manager(name: Optional[str] = None) -> Type[AClusterManager]
```

The returned class gets called with the init parameters of AClusterManager, i.e. the cluster_name: str.
Example
```python
@pytest.fixture(scope="session")
def k8s_with_workload(request):
    cluster = select_provider_manager("k3d")("my-cluster")
    # if minikube should be used
    # cluster = select_provider_manager("minikube")("my-cluster")
    cluster.create()
    # init the cluster with a workload
    cluster.apply("./fixtures/hello.yaml")
    cluster.wait("deployments/hello-nginxdemo", "condition=Available=True")
    yield cluster
    cluster.delete()
```
In this example, the cluster remains active for the entire session and is only deleted once pytest is done.
Note the yield notation, which is the pattern preferred by pytest to express cleanup tasks for this fixture.
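The setup/teardown split around yield can be illustrated without a real cluster. A minimal sketch, using a stand-in class (FakeCluster is hypothetical, purely for illustration) instead of a provider manager:

```python
import pytest

class FakeCluster:
    """Stand-in for a cluster manager, used only to illustrate the lifecycle."""
    def __init__(self):
        self.state = "absent"
    def create(self):
        self.state = "running"
    def delete(self):
        self.state = "deleted"

@pytest.fixture(scope="session")
def cluster():
    c = FakeCluster()
    c.create()   # setup: runs before the first test that requests the fixture
    yield c      # tests run while the fixture is "paused" here
    c.delete()   # teardown: runs after the last test of the session

def test_cluster_is_running(cluster):
    assert cluster.state == "running"
```

Everything before yield is setup, everything after it is teardown; pytest runs the teardown part even if tests fail, which is why delete() is safe to place there.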
You can pass a cluster config file in the create method of a cluster:
```python
cluster = select_provider_manager("k3d")("my-cluster")
# bind ports of this k3d cluster
cluster.create(
    cluster_options=ClusterOptions(
        cluster_config=Path("my_cluster_config.yaml")
    )
)
```
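For k3d, the file passed as cluster_config is a regular k3d config file. A minimal sketch of what such a file might look like (the exact fields and apiVersion depend on your k3d version; consult the k3d configuration documentation):

```yaml
# my_cluster_config.yaml -- minimal k3d config (values are examples)
apiVersion: k3d.io/v1alpha5
kind: Simple
servers: 1
agents: 1
ports:
  - port: 8080:80
    nodeFilters:
      - agent:0
```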
The different providers expect different kinds of configuration files (for minikube, the settings are managed via the minikube config command). An example can be found in the fixtures directory of this repository.

You can pass more options using kwargs['options']: List[str] to the create(options=...) function when creating the cluster, like so:
```python
cluster = select_provider_manager("k3d")("my-cluster")
# bind ports of this k3d cluster
cluster.create(options=["--agents", "1", "-p", "8080:80@agent:0", "-p", "31820:31820/UDP@agent:0"])
```
Please find more examples in tests/vendor.py in this repository. These test cases are written as users of pytest-kubernetes would write test cases in their projects.