Pods were ready, traffic was flowing, but Postgres connections hit a wall. The problem wasn’t the database. It was the Kubernetes Ingress.
Most Kubernetes Ingress controllers focus on HTTP routing. They handle web traffic well, but anything outside HTTP—like the Postgres binary protocol—often fails or needs ugly workarounds. Postgres speaks its own protocol on TCP port 5432. HTTP-based ingress paths can’t understand it, can’t stream it efficiently, and sometimes can’t proxy it at all. If you want routing, TLS termination, or external access for Postgres inside Kubernetes, this is a hard constraint.
This is where native TCP load balancing in Kubernetes comes into play. Instead of forcing the Postgres binary protocol over HTTP, you use an ingress or a proxy that handles raw TCP directly. Some Ingress controllers support this with extra configuration. ingress-nginx, for example, reads a dedicated ConfigMap (pointed to by its `--tcp-services-configmap` flag) that maps external ports to Kubernetes service ports. Traefik supports TCP routers alongside its HTTP routers. HAProxy Ingress can do pure TCP proxying with SNI-based routing. All of these approaches bypass HTTP entirely.
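As a concrete illustration, here is a minimal sketch of the ingress-nginx approach. The namespace `db` and the Service name `postgres` are assumptions for the example; the ConfigMap name must match whatever the controller's `--tcp-services-configmap` flag points at.

```yaml
# Sketch: map external port 5432 on the ingress-nginx controller
# to a Postgres Service. Assumes a Service "postgres" in namespace "db".
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format: "<external-port>": "<namespace>/<service>:<service-port>"
  "5432": "db/postgres:5432"
```

The controller must be started with `--tcp-services-configmap=ingress-nginx/tcp-services` for this mapping to take effect; once it reloads, it proxies raw TCP on port 5432 straight to the Postgres Service, with no HTTP layer in between.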
For Postgres, this matters. TCP proxying preserves connection handling, streaming responses, and performance. It lets you connect from outside the cluster using standard Postgres clients, without rewriting application logic or tunneling. It also ensures that authentication, SSL, and other protocol-level features work as intended. When Kubernetes services are fronted by a TCP-capable ingress, Postgres just works—inside and outside the cluster.
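For traffic to actually arrive from outside the cluster, the ingress controller's own Service also needs to expose the TCP port. A sketch, assuming an ingress-nginx deployment fronted by a cloud LoadBalancer (names and labels here are illustrative and should match your actual installation):

```yaml
# Sketch: expose TCP 5432 on the ingress controller's LoadBalancer Service.
# The name, namespace, and selector labels are assumptions based on a
# typical ingress-nginx install; adjust them to your deployment.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
      protocol: TCP
```

With the port exposed, a standard client such as `psql -h <load-balancer-ip> -p 5432` connects as if the database were running on any ordinary host, and protocol-level features like SSL negotiation and SCRAM authentication pass through untouched.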