Logs pointed in every direction. Unit tests passed. Staging worked once, then fell apart. The only thing left to try was running the real thing, end to end, against the same Kubernetes we would ship to production. That’s when integration testing with kubectl stopped being optional and became the only way forward.
Integration testing confirms that the moving parts of your system work together in the real world, not just in theory. In Kubernetes, that means testing against live clusters. It means applying manifests with kubectl, watching workloads spin up, probing services, and checking every dependency in one sweep. Every container, config map, secret, and ingress gets validated—not by mocks, but by the cluster itself.
The workflow is simple at its core:
- Spin up an ephemeral Kubernetes cluster or target your staging namespace.
- Deploy your application exactly as in production using kubectl apply.
- Run your integration test suite against live endpoints.
- Tear everything down when done.
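The loop above can be sketched as a single script. This is a minimal sketch, assuming an ephemeral cluster created with kind; the cluster name, the `manifests/` directory, the `my-app` deployment, and the `run-integration-tests.sh` runner are all placeholders, not part of the original text:

```shell
#!/usr/bin/env sh
set -eu

# Skip gracefully where no cluster tooling exists (e.g. a sandboxed runner).
command -v kind >/dev/null 2>&1 || { echo "kind not installed; skipping"; exit 0; }

CLUSTER=integration-test            # hypothetical cluster name

# 1. Spin up an ephemeral Kubernetes cluster.
kind create cluster --name "$CLUSTER"

# 4. Guarantee teardown even if a later step fails.
trap 'kind delete cluster --name "$CLUSTER"' EXIT

# 2. Deploy exactly as in production (manifests/ is a placeholder path).
kubectl apply -f manifests/

# 3. Wait for the workload, then run the suite against live endpoints.
kubectl rollout status deployment/my-app --timeout=120s
./run-integration-tests.sh          # placeholder for your test runner
```

Registering the teardown as an EXIT trap right after cluster creation is what makes reruns cheap: a failed test run still leaves you with a clean slate.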
This process catches network misconfigurations, RBAC gaps, unhealthy pods, and resource issues that unit tests never see. You can script it in CI pipelines, using kubectl commands to deploy, check pod status, and fetch logs in automated steps. Reruns are quick if you keep manifests and tests versioned together.
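One way such a CI step might look, assuming a staging namespace and an `app=my-app` pod label (both hypothetical), is to gate the test run on pod readiness and surface logs on failure:

```shell
#!/usr/bin/env sh
set -eu

# Skip gracefully where kubectl is unavailable (e.g. a sandboxed runner).
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not installed; skipping"; exit 0; }

NAMESPACE=staging          # assumption: tests target a staging namespace
APP_LABEL=app=my-app       # assumption: pods carry this label

# Block until pods are Ready; --timeout bounds flaky waits in CI.
kubectl wait --namespace "$NAMESPACE" --for=condition=Ready \
  pod -l "$APP_LABEL" --timeout=120s || {
    # On failure, dump status and recent logs so the CI run is debuggable.
    kubectl get pods -n "$NAMESPACE" -l "$APP_LABEL" -o wide
    kubectl logs -n "$NAMESPACE" -l "$APP_LABEL" --tail=100
    exit 1
}
```

Dumping `kubectl get pods` and `kubectl logs` inside the failure branch means a red pipeline already contains the evidence, so you rarely need to reach for the cluster by hand.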