The firewall was silent, but the cluster was bleeding data.
Kubernetes network policies decide exactly which pods can talk to which. They control ingress and egress traffic for the pods they select, and they are enforced by the cluster's CNI plugin rather than by Kubernetes itself, so the plugin must support them. Without them, every pod is a potential open door. With them, you can lock down paths so only authorized services connect.
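A concrete starting point is a default-deny policy. The sketch below assumes a namespace named payments; the name is illustrative. Once applied, nothing reaches the pods in that namespace, and nothing leaves, until a later policy opens a path.

```yaml
# Minimal sketch: deny all ingress and egress for every pod in the
# "payments" namespace (namespace name is illustrative).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments
spec:
  podSelector: {}          # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```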
gRPC changes the traffic pattern. It runs over HTTP/2, keeps connections open for long-lived streams, and usually listens on its own port rather than 80 or 443. Network policies operate at the connection level, so rules written with only the usual HTTP ports in mind will silently block gRPC flows. The key is to select your gRPC pods by their labels and explicitly allow the exact ports they listen on.
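For the policies that follow to line up with reality, the gRPC server has to actually expose its port. Here is a minimal sketch of such a Deployment; the names, labels, and image are assumptions for illustration only.

```yaml
# Sketch of the server side: the gRPC container declares the port that the
# network policies below will reference. Names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-grpc
  namespace: payments
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-grpc
  template:
    metadata:
      labels:
        app: payments-grpc
    spec:
      containers:
        - name: server
          image: registry.example.com/payments-grpc:1.0.0   # illustrative image
          ports:
            - name: grpc
              containerPort: 50051
              protocol: TCP
```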
A Kubernetes network policy for gRPC must list port 50051, or whatever custom port your servers use, in its rules. Use a podSelector to target the pods running the gRPC servers. In the ingress section, add from rules that reference the client pods, and include a namespaceSelector if those clients live in other namespaces. Testing is critical: apply the policy, run a gRPC health check from an allowed client and from a blocked one, and watch for refused connections.
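Here is a sketch of such a policy. It assumes the server pods carry an app: payments-grpc label, an in-namespace client is labeled app: checkout, and client namespaces are labeled team: storefront; all of these names are illustrative.

```yaml
# Sketch: allow gRPC clients to reach the payments servers on port 50051.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-grpc-clients
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-grpc          # the pods running the gRPC server
  policyTypes:
    - Ingress
  ingress:
    - from:
        # clients in the same namespace
        - podSelector:
            matchLabels:
              app: checkout
        # any pod in a namespace labeled team=storefront
        - namespaceSelector:
            matchLabels:
              team: storefront
      ports:
        - protocol: TCP
          port: 50051             # the gRPC port; change if your servers differ
```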
Prefix-based isolation matters in large microservice architectures. Many teams encode a service-family prefix in pod labels, and network policies can use that prefix through matchLabels selectors, so only pods carrying the matching prefix can initiate calls. For example, a matchLabels block like "service-prefix": "payments" isolates one gRPC service family from all others. Note that network policies match labels and ports, not gRPC method paths; the prefix lives in your labeling convention, not in the request.
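A minimal sketch of that isolation, assuming the service-prefix label convention is applied to both servers and their permitted callers:

```yaml
# Sketch: only pods carrying the same "service-prefix: payments" label may
# reach the payments gRPC servers. The label key is a team convention,
# not a Kubernetes built-in.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-prefix-only
  namespace: payments
spec:
  podSelector:
    matchLabels:
      service-prefix: payments
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              service-prefix: payments
      ports:
        - protocol: TCP
          port: 50051
```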
Combining network policies with this prefix strategy prevents cross-service chatter, reduces the blast radius of a compromise, and keeps compliance audits clean. Remember the default behavior: a pod is unrestricted until some policy selects it, and once selected, anything not explicitly allowed is denied. Define every allowed path explicitly, including egress.
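Egress rules close the other half of the loop. The sketch below assumes a checkout client in a storefront namespace that should only ever dial the payments gRPC servers; the namespace label team: payments and all other names are illustrative. The DNS rule keeps service-name resolution working once egress is locked down.

```yaml
# Sketch: the checkout pods may only open gRPC connections to the payments
# servers, plus DNS. Everything else outbound is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: checkout-egress-grpc-only
  namespace: storefront
spec:
  podSelector:
    matchLabels:
      app: checkout
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:        # combined selectors: pods labeled
            matchLabels:            # app=payments-grpc in namespaces
              team: payments        # labeled team=payments
          podSelector:
            matchLabels:
              app: payments-grpc
      ports:
        - protocol: TCP
          port: 50051
    - ports:                        # allow DNS so service names still resolve
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```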
Do not leave gRPC pods open to the cluster unless required. Apply namespace isolation, prefix-based selectors, and port-specific rules. Audit them after any deployment changes.
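Namespace isolation can be expressed as one small policy. The sketch below, again using the illustrative payments namespace, accepts ingress only from pods inside that namespace; because network policies are additive, it can coexist with the earlier rule that admits labeled client namespaces.

```yaml
# Sketch: pods in "payments" accept traffic only from the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}          # any pod in this namespace; no
                                   # namespaceSelector, so others are excluded
```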
Secure your Kubernetes network policies for gRPC with precision. See it live in minutes at hoop.dev.