Your build just failed again. The container pulled fine, but your cluster never saw the update. You double-check secrets, service accounts, and Git triggers. Everything looks right, yet nothing deploys. That loop of “works on my laptop” has a new villain: access drift between Bitbucket and k3s.
Bitbucket manages your source code and CI pipelines. K3s is Kubernetes that fits in your pocket. Marrying them creates a lean automation stack for edge clusters or lightweight test environments. The magic comes when your repository events directly apply manifests to k3s with confidence that identity, permissions, and context stay in sync.
The pair works best through a simple logic path: Bitbucket triggers a pipeline whenever you push to main. That pipeline uses a service identity with just enough privilege to talk to your k3s API. The cluster applies your deployment YAML and reports status back. It feels like a full CI/CD engine, stripped of every slow, noisy part. The trick is controlling who can do what, and where.
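That logic path can be sketched as a minimal `bitbucket-pipelines.yml`. This is an illustration, not a prescription: the image, the `KUBE_CONFIG_B64` variable name, the manifest path, and the deployment name are all assumptions you would swap for your own.

```yaml
# bitbucket-pipelines.yml -- minimal sketch of a push-to-main deploy.
pipelines:
  branches:
    main:
      - step:
          name: Deploy to k3s
          image: bitnami/kubectl:latest   # any image with kubectl works
          script:
            # KUBE_CONFIG_B64 is a secured repository variable holding a
            # base64-encoded kubeconfig scoped to a single namespace.
            - echo "$KUBE_CONFIG_B64" | base64 -d > kubeconfig
            - kubectl --kubeconfig=./kubeconfig apply -f k8s/deployment.yaml
            - kubectl --kubeconfig=./kubeconfig rollout status deployment/myapp
```

Note the explicit `--kubeconfig` flag rather than relying on whatever `KUBECONFIG` happens to be set in the runner environment.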
To keep things smooth, handle RBAC in k3s as if it were an IAM policy. Use service accounts mapped to namespaces, not global tokens floating around in pipeline variables. Rotate your secrets often and store them in Bitbucket's secured variables, or better, use an identity-aware proxy that injects short-lived credentials on demand. One expired token is cheaper than one leaked secret.
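A namespace-scoped identity for the pipeline might look like the following sketch. The names (`bitbucket-deployer`, `apps`) and the verb list are illustrative assumptions; grant only what your pipeline actually does.

```yaml
# ServiceAccount plus a Role limited to one namespace -- no cluster-wide token.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bitbucket-deployer
  namespace: apps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: apps
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bitbucket-deployer
  namespace: apps
subjects:
  - kind: ServiceAccount
    name: bitbucket-deployer
    namespace: apps
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```

For the short-lived-credential approach, `kubectl create token bitbucket-deployer -n apps --duration=1h` issues a bound token that expires on its own, which pairs nicely with the rotation advice above.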
Run into pipeline errors like `x509: certificate signed by unknown authority`? That's usually a missing CA bundle in your runner container, not a problem with k3s itself. Fix it once by baking the cluster's CA into your image. Another common issue is an incorrect KUBECONFIG path; explicit paths beat environment guesses every time.
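Baking the cert in can be as small as a two-line image layer. This sketch assumes a Debian-based kubectl image with `update-ca-certificates` available, and that you've copied the CA from the k3s server node (it lives under `/var/lib/rancher/k3s/server/tls/` on a default install) into your build context.

```dockerfile
# Runner image with the k3s cluster CA trusted system-wide.
FROM bitnami/kubectl:latest
USER root
# server-ca.crt: the self-signed CA that k3s uses to sign its API server cert.
COPY server-ca.crt /usr/local/share/ca-certificates/k3s-ca.crt
RUN update-ca-certificates
USER 1001
```

Build it once, push it to your registry, and point the pipeline step's `image:` at it; the TLS error disappears for every future run.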