The cluster was quiet until the packets hit the wire. In Kubernetes, silence breaks when workloads start talking. Postgres speaks in its own binary protocol, and when you need to proxy that traffic securely, you find yourself staring at the intersection of Kubernetes Network Policies and raw database connections.
Network Policies in Kubernetes define how pods communicate, controlling ingress and egress at the IP and port level. That model maps cleanly onto short-lived HTTP requests. Postgres is different: it runs its binary protocol over a single long-lived TCP connection, so every packet belongs to a stateful conversation. Blocking or allowing traffic means thinking about entire sessions, not individual requests.
If you’re proxying Postgres inside Kubernetes, you need to keep three layers in view:
- The NetworkPolicy rules — These decide whether your pod can even open a TCP connection to Postgres.
- The proxy service — This might be a sidecar, a standalone pod, or a managed service. It terminates the incoming connection, then initiates a new one to the database.
- The binary protocol stream — The proxy must relay messages exactly as sent, without corrupting auth handshakes, prepared statements, or replication streams.
Misconfigurations show up fast. A NetworkPolicy that blocks egress from the proxy pod kills connections instantly. Policies that allow only certain CIDRs can prevent the proxy from reaching your StatefulSet or an external Postgres host; `ipBlock` rules in particular are intended for traffic outside the cluster, and matching ephemeral pod IPs by CIDR is fragile. Liveness and readiness probes can also be affected if they use TCP checks that a policy blocks.
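To make the first failure mode concrete, here is a sketch of an egress policy that lets the proxy reach the database. All names here are assumptions for illustration: a proxy pod labeled `app: pg-proxy` and a Postgres StatefulSet whose pods carry `app: postgres`, both in a hypothetical `app` namespace.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-proxy-egress   # hypothetical name
  namespace: app             # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: pg-proxy          # hypothetical proxy pod label
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres  # hypothetical StatefulSet pod label
      ports:
        - protocol: TCP
          port: 5432
```

One caveat: once a pod is selected by any egress policy, all other egress is denied. If the proxy resolves the database by hostname, it also needs an egress rule to the cluster DNS service on port 53, or the dial will fail before a TCP connection is even attempted.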