The logs were clean, the YAML was clean, the cluster was healthy. But the silent killer was a constraint you forgot was there. In Kubernetes, kubectl is the tool you trust. It’s the bridge between you and your cluster. But it’s also where constraints surface — the rules that block, guard, and control everything that runs.
Running kubectl without understanding constraints is like shipping code without tests. You may get away with it once, but it will break later — and you won’t know why. The constraints kubectl runs into are not just flags or limits. They can be policy rules, admission controllers, resource caps, node selectors, role-based access limits, or full-on OPA Gatekeeper policies. These rules decide which pods live, which pods die, and which pods never reach the scheduler.
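As a concrete sketch — the namespace and values here are hypothetical — a ResourceQuota is one of the quieter constraints. Once it exists in a namespace, the quota admission controller rejects any pod that doesn’t declare resource requests, long before scheduling:

```yaml
# Hypothetical quota in a "payments" namespace.
# With requests.cpu / requests.memory quotas in force, pods created
# without explicit resource requests are rejected at admission time.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: payments
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "10"
```

The failure mode is the subtle part: the rejection comes from the API server, so kubectl apply fails immediately, while the deployment controller retrying the same create shows up only as events on the ReplicaSet.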
The trick is that constraints hide in many layers. A single kubectl apply command travels through:
- Local client configuration
- API server admission phases
- Namespace quotas and limits
- Cluster-wide policy engines
- Mutating and validating webhooks
- Role-based access rules
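One way to probe most of these layers at once, without creating anything, is a server-side dry run. This sketch assumes a reachable cluster; pod.yaml and the payments namespace are placeholders:

```shell
# A server-side dry run sends the object through the full admission
# chain (quotas, mutating/validating webhooks, policy engines) but
# never persists it, so constraint rejections surface immediately.
kubectl apply --dry-run=server -f pod.yaml

# RBAC can be checked before you even send the request:
kubectl auth can-i create pods --namespace payments
```

A client-side dry run (--dry-run=client) only validates locally and never touches the admission phases, which is exactly why it can pass while the real apply fails.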
If a constraint blocks your pod, the error may point you to the wrong place. You have to trace it. Use kubectl describe to read the events. Use kubectl get with -o yaml to see the object state. Check kubectl api-resources to confirm the resource type. If you use Gatekeeper or Kyverno, inspect the constraint templates and definitions.
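The tracing steps above look roughly like this in practice — the pod name, namespace, and the presence of Gatekeeper are assumptions for the example:

```shell
# Read the events attached to the failing object.
kubectl describe pod my-app -n payments

# Dump the full stored state, including defaults and webhook mutations.
kubectl get pod my-app -n payments -o yaml

# Confirm the resource types you expect actually exist in this cluster.
kubectl api-resources | grep -i constraint

# With Gatekeeper installed, inspect the templates and the
# constraints instantiated from them.
kubectl get constrainttemplates
kubectl get constraints
```

Comparing the stored object from kubectl get -o yaml against the manifest you applied is often the fastest way to spot a mutating webhook at work.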