Kubernetes network policies are absolute on paper but fragile in practice. They look simple in YAML, yet they operate within a web of cluster configuration, CNI plugin behavior, and namespace differences. A “deny all” policy in one environment can allow unexpected traffic in another if defaults, labels, or controllers diverge. This is why so many engineers find themselves debugging packet traces at 2 a.m.
A network policy in Kubernetes is not just the YAML spec you write. It is the result of user config and cluster-level settings coming together—or colliding. The way your cluster enforces ingress and egress rules depends on:
- Which CNI plugin you run, and its specific feature set.
- Namespace defaults and label selectors in your manifests.
- How policies interact when multiple are applied to the same pod.
- Whether your cluster allows by default (as vanilla Kubernetes does until a policy selects a pod) or denies by default.
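As a concrete illustration of the last point, here is a minimal default-deny ingress policy (the `example` namespace is hypothetical). It looks deceptively simple, but it only changes anything if your CNI plugin actually enforces NetworkPolicy:

```yaml
# Deny all ingress traffic to every pod in the "example" namespace.
# Caveat: this takes effect only if the cluster's CNI plugin enforces
# NetworkPolicy; some plugins silently ignore these objects entirely.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: example
spec:
  podSelector: {}    # empty selector = every pod in this namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all ingress is denied
```

Note that the policy is namespace-scoped: applying it in one namespace says nothing about the rest of the cluster.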
Even small differences (an extra label, a missing namespace, a default rule set by someone months ago) can change a policy's effective behavior. This is the core reality of Kubernetes network policies: they are user-config dependent, and the same manifest can mean different things in different clusters.
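To make the label problem concrete, consider a sketch like the following (all names are hypothetical). The allow rule only matches pods that carry the exact label it selects on:

```yaml
# Allow ingress to "api" pods only from pods labeled role=frontend.
# If a frontend Deployment ships without that label (or with a near-miss
# like role: front-end), its traffic is dropped in a namespace that has a
# default-deny policy -- and allowed in a namespace that has none.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: example
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
```

The same manifest is therefore strict in one cluster and effectively a no-op in another, depending entirely on what else is deployed around it.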