The first time you try to run Argo Workflows on Microk8s, it feels deceptively easy until something subtle breaks. Maybe the controller can’t reach the Kubernetes API or the service account token vanishes between namespaces. That tension—simple setup, hidden friction—is exactly what makes this pairing worth understanding.
Argo Workflows is a container-native workflow engine for Kubernetes, built for defining repeatable pipelines and CI jobs as code. Microk8s, on the other hand, is a lightweight Kubernetes distribution that fits on laptops, edge nodes, or air‑gapped clusters. Together they create a small but mighty automation cluster you can spin up anywhere. The trick is giving workflows cluster‑level visibility without poking holes in security.
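A minimal setup sketch, assuming a snap-based Microk8s install; the release tag below is illustrative, so substitute whatever Argo Workflows version you are targeting:

```shell
# Install Microk8s and enable the addons this setup leans on
sudo snap install microk8s --classic
microk8s enable dns hostpath-storage

# Install Argo Workflows into its own namespace
microk8s kubectl create namespace argo
microk8s kubectl apply -n argo \
  -f https://github.com/argoproj/argo-workflows/releases/download/v3.5.8/install.yaml

# Verify the controller and server pods come up
microk8s kubectl get pods -n argo
```

Once `workflow-controller` and `argo-server` report Running, the cluster is ready to accept workflows.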
How Argo Workflows Integrates with Microk8s
In this setup, Microk8s acts as both the orchestrator and the runtime. Argo Workflows plugs in through its controller’s service account, using Kubernetes RBAC to enforce permissions. Every workflow becomes a Kubernetes custom resource, execution status surfaces through native Kubernetes events, and logs stream straight from the step pods. No fragile external queues, no messy credential sharing. Just YAML, Pods, and clarity.
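Concretely, a workflow is just a custom resource you can submit like any other manifest. A minimal sketch (image and message are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-     # each submission gets a unique name
  namespace: argo
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: busybox
        command: [echo, "hello from Microk8s"]
```

Because the manifest uses `generateName`, submit it with `microk8s kubectl create -f hello.yaml` (not `apply`), then watch its phase with `microk8s kubectl get wf -n argo`.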
The controller watches Workflow resources in its namespace and talks to Microk8s’ internal API server with its service account token. User-facing identity can flow through OIDC if you choose, often mapped to enterprise providers like Okta or Google Workspace. From there, Argo submits pods directly via the Microk8s API, executes steps in sequence, then cleans up resources automatically. It feels like running CI pipelines inside a lab‑grade sandbox.
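The OIDC wiring lives in the Argo Server’s SSO configuration, set through the workflow-controller ConfigMap. A hedged sketch, where the issuer URL, secret name, and redirect host are all placeholders for your own provider:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  sso: |
    # Placeholder issuer; point this at your Okta/Google Workspace tenant
    issuer: https://accounts.google.com
    clientId:
      name: argo-sso-secret   # Kubernetes Secret holding the OIDC client ID
      key: client-id
    clientSecret:
      name: argo-sso-secret
      key: client-secret
    redirectUrl: https://argo.example.com/oauth2/callback
```

The Argo Server must also be started with `--auth-mode sso` for this block to take effect; without it, the server falls back to its default auth mode.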
Tips for Reliable Argo Workflows on Microk8s
If a workflow fails to authenticate, check your Microk8s RBAC bindings: the Argo controller needs permissions to create and watch pods in the namespaces where workflows run. Rotate credentials through Kubernetes Secrets, not custom configs. And to avoid losing traceability after restarts, persist workflow logs: an artifact repository is the durable option, though on a single node, hostPath volumes from the hostpath-storage addon will do.
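For the RBAC check, the usual culprit is the service account the workflow pods run as, not the controller itself. A sketch of a namespace-scoped Role and binding, with illustrative names (the stock Argo install ships its own roles, so treat this as a debugging reference, not a replacement):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow-executor    # hypothetical name for illustration
  namespace: argo
rules:
  # The executor sidecar reads and patches its own pod
  - apiGroups: [""]
    resources: [pods]
    verbs: [get, watch, patch]
  # Recent Argo versions report step outputs via WorkflowTaskResults
  - apiGroups: [argoproj.io]
    resources: [workflowtaskresults]
    verbs: [create, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workflow-executor-binding
  namespace: argo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: workflow-executor
subjects:
  - kind: ServiceAccount
    name: default            # or whichever SA your workflows specify
    namespace: argo
```

If a workflow hangs with permission errors in the wait container’s logs, comparing its service account against a binding like this one is a quick way to spot the gap.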