Picture this: your team’s new service talks over raw TCP instead of HTTP, and suddenly your shiny API gateway turns into a grumpy doorman who only speaks REST. You still need visibility, control, and security across those TCP connections. That is exactly where Kong TCP Proxies come into play.
Kong TCP Proxies extend Kong Gateway beyond HTTP and gRPC to handle raw TCP streams. They let you apply the same security and traffic policies that protect your APIs to non-HTTP protocols like PostgreSQL, Redis, or SMTP. In short, you gain governance for everything that speaks over TCP, not just web traffic.
A Kong TCP Proxy works by establishing a stream route. Instead of matching on URL paths or headers, it inspects connection metadata such as source IP, destination port, or TLS SNI. You can attach plugins just as you would to normal HTTP routes, but they execute at the TCP layer. That means TLS termination, rate limiting, or mTLS authentication can be enforced before a connection ever reaches your backend. Once configured, Kong becomes the universal policy engine across all network traffic types.
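As a rough sketch, a stream route fronting a PostgreSQL backend might look like this in Kong's declarative (decK) format. The hostnames, ports, and SNI here are illustrative, not part of any real deployment:

```yaml
_format_version: "3.0"

services:
  - name: postgres-service          # upstream database, not an HTTP API
    protocol: tcp                   # Kong speaks raw TCP to the backend
    host: pg.internal.example.com   # illustrative internal hostname
    port: 5432
    routes:
      - name: postgres-stream-route
        protocols: [tls]            # Kong terminates TLS, forwards plain TCP
        snis: [pg.example.com]      # match on the SNI the client presents
        destinations:
          - port: 5432              # must correspond to a stream_listen port
```

Because the route matches on SNI rather than IP alone, several TCP services can share one listener while keeping distinct policies.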
When integrating TCP proxies into an existing environment, the real work is less about configs and more about flow. Each connection first hits Kong’s stream listener, which evaluates routing rules and executes plugin chains. Identity can be bound to the connection through mutual TLS, or, for protocols that tunnel HTTP, by pairing with OpenID Connect. This tight mapping prevents implicit trust based on IP addresses. Kong handles the transport, your policy defines the logic, and both stay independent of app code.
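The stream listener itself is enabled in `kong.conf` (or via the `KONG_STREAM_LISTEN` environment variable). A minimal sketch, with illustrative ports (the `ssl` flag marks a port where Kong terminates TLS):

```
# kong.conf — enable stream listeners for non-HTTP traffic
# 5432 terminates TLS before proxying; 6379 passes raw TCP through
stream_listen = 0.0.0.0:5432 ssl, 0.0.0.0:6379
```

Without at least one `stream_listen` entry, stream routes are never evaluated, so this is the first thing to check when a TCP route silently fails to match.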
Best practices:

- Keep stream plugins lightweight. A TCP proxy should never become a bottleneck.
- If latency spikes, check TLS renegotiation and upstream DNS lookups first.
- Rotate secrets frequently and, when possible, integrate identity via systems like Okta or AWS IAM. That eliminates static credential sprawl.
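As an example of a lightweight stream policy, a connection-level allowlist is about as cheap as enforcement gets. This decK fragment uses Kong's ip-restriction plugin, which supports the tcp and tls protocols; the route name and CIDR range are illustrative:

```yaml
plugins:
  - name: ip-restriction
    route: postgres-stream-route    # illustrative stream route name
    protocols: [tcp, tls]
    config:
      allow:
        - 10.0.0.0/8                # only internal clients may connect
```

Checks like this run once at connection time rather than per request, so they add essentially no per-packet overhead to the proxied stream.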