You finally have a sleek microservice stack running. Then someone asks if it can be securely exposed over raw TCP instead of just HTTP. You shrug. Reverse proxies are supposed to make this easy, but most tools choke once you leave port 443 territory. That is where Caddy TCP Proxies earn their keep.
Caddy is known for effortless HTTPS, but its layer4 app (distributed as the caddy-l4 module) extends that flexibility to any TCP or UDP stream. Instead of hacking together stunnel or nginx stream modules, you can define a layer4 route that forwards raw traffic to your upstream. The magic lies in the same configuration logic that powers its HTTP reverse proxy — with automatic TLS, load balancing, and clean connection lifecycle control baked in.
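As a minimal sketch of that idea — assuming Caddy is built with the caddy-l4 module, and treating the listen port and the `10.0.0.10:5432` upstream as placeholder values — a layer4 server that blindly forwards TCP to a backend might look like:

```json
{
  "apps": {
    "layer4": {
      "servers": {
        "tcp_passthrough": {
          "listen": [":5432"],
          "routes": [
            {
              "handle": [
                {
                  "handler": "proxy",
                  "upstreams": [
                    { "dial": ["10.0.0.10:5432"] }
                  ]
                }
              ]
            }
          ]
        }
      }
    }
  }
}
```

Note the shape: a named server, a listen address, and routes whose handlers do the work — the same server/route structure Caddy uses for HTTP, just one layer lower.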
Think of it as a transport concierge. Caddy listens on one end, validates and optionally decrypts TLS, then passes the payload through to your backend service. Databases, game servers, MQTT brokers, SSH bastions — anything speaking TCP can ride behind it. With the right automation, you get consistent identity enforcement across all those protocols, not just the ones wrapped in web requests.
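To make the "decrypt, then pass through" flow concrete, here is a hedged sketch of a route that terminates TLS and hands the plaintext stream to a backend — say an MQTT broker. It assumes the caddy-l4 module, that Caddy's tls app already manages a certificate for the hostname clients connect to, and that `:8883` and the loopback broker address are placeholders:

```json
{
  "apps": {
    "layer4": {
      "servers": {
        "mqtt_tls": {
          "listen": [":8883"],
          "routes": [
            {
              "match": [ { "tls": {} } ],
              "handle": [
                { "handler": "tls" },
                {
                  "handler": "proxy",
                  "upstreams": [
                    { "dial": ["127.0.0.1:1883"] }
                  ]
                }
              ]
            }
          ]
        }
      }
    }
  }
}
```

The `tls` matcher peeks at the ClientHello without consuming it; the `tls` handler then terminates the handshake, so the backend only ever sees plain MQTT.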
How do Caddy TCP Proxies actually work?
At runtime, Caddy routes raw traffic through routes whose matchers inspect incoming connections (peeking at initial bytes without consuming them) and whose handlers forward the stream to configured destinations. You can define multiple upstreams for load balancing or failover. Because the proxy integrates with Caddy’s TLS stack, certificate management and session resumption come for free.
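The multiple-upstreams case can be sketched as a proxy handler fragment — the `load_balancing` and `selection` field names follow caddy-l4's proxy handler, and the two Redis-style addresses are placeholders for your own backends:

```json
{
  "handler": "proxy",
  "upstreams": [
    { "dial": ["10.0.0.11:6379"] },
    { "dial": ["10.0.0.12:6379"] }
  ],
  "load_balancing": {
    "selection": { "policy": "round_robin" }
  }
}
```

This fragment drops into a route's `handle` array; if one upstream fails its dial, the proxy can try the next rather than dropping the client.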
A few best practices matter here. Enforce identity at the proxy edge; for raw TCP that usually means mutual TLS with short-lived client certificates from an internal CA, or identity brokered through a platform like Okta or AWS IAM. Rotate secrets often. Use access logs that record session fingerprints — client certificate serials, TLS fingerprints — not just source IPs. When things go wrong, you’ll know who connected and why.
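Requiring client certificates at the edge can be sketched with a TLS handler fragment. The `client_authentication` fields follow Caddy's TLS connection policy structure; the CA bundle path is a placeholder for wherever your short-lived client CA lives:

```json
{
  "handler": "tls",
  "connection_policies": [
    {
      "client_authentication": {
        "mode": "require_and_verify",
        "trusted_ca_certs_pem_files": ["/etc/caddy/client-ca.pem"]
      }
    }
  ]
}
```

With `require_and_verify`, connections without a certificate chaining to that CA never reach the proxy handler at all — identity is enforced before a single payload byte is forwarded.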