You spend half your morning watching tiny service calls crawl through traces, wondering which hop stole your latency budget. When the mesh starts whispering about retries and the API gateway shrugs, you need something that speaks both languages. That’s where Kong and Linkerd finally make sense together.
Kong excels at controlling who gets in. It handles north-south traffic, authentication, rate limits, and policy enforcement. Linkerd takes care of the inside conversation, giving your east-west traffic mutual TLS, retries, load balancing, and crisp metrics. Alone, they’re strong. Combined, they act like a well-trained security team with perfect hearing.
When Kong-Linkerd integration is done right, identity follows every request. Requests enter through Kong, which validates tokens using OIDC or your corporate SSO. Linkerd then keeps that identity intact, securing each hop with mTLS and verifying workload certificates. You get one verified identity chain from browser to pod. That clarity changes debugging from guesswork to geometry.
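At the edge, that token validation is typically configured declaratively. A minimal sketch using the Kong Ingress Controller's `KongPlugin` resource is shown below; the `openid-connect` plugin is a Kong Enterprise feature, and the issuer URL and client ID here are placeholders for your own identity provider:

```yaml
# Sketch: enable OIDC token validation at the Kong edge.
# Assumes Kong Enterprise's openid-connect plugin; the issuer
# and client_id values are hypothetical placeholders.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: oidc-auth
plugin: openid-connect
config:
  issuer: https://sso.example.com/.well-known/openid-configuration
  client_id:
    - my-app-client
```

Attach the plugin to an Ingress or Service with the `konghq.com/plugins: oidc-auth` annotation, and Kong rejects unauthenticated traffic before it ever reaches the mesh.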
Think of the workflow as a relay race. Kong starts with the baton of user identity. Linkerd runs the rest of the track, ensuring every service that touches the request is known, trusted, and logged. No hidden runners, no “it worked on staging.”
How do I connect Kong and Linkerd?
You deploy Kong at the cluster boundary to manage external access. Linkerd runs as a lightweight data plane inside Kubernetes. Configure Kong to route authenticated traffic through Linkerd’s injected sidecars, and align certificate authorities between the two. Most teams use OIDC via Okta or AWS IAM for identity consistency.
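The key wiring step is getting Linkerd's sidecar into Kong's own pods, so traffic leaving the gateway is already inside the mesh's mTLS envelope. Linkerd's real injection annotation makes this a one-line change; a minimal sketch, assuming Kong is deployed into a namespace named `kong`:

```yaml
# Annotate the namespace where Kong runs so the Linkerd
# proxy injector adds a sidecar to every pod created there.
# linkerd.io/inject is Linkerd's standard annotation; the
# namespace name "kong" is an assumption about your layout.
apiVersion: v1
kind: Namespace
metadata:
  name: kong
  annotations:
    linkerd.io/inject: enabled
```

After applying this, restart Kong's deployment so its pods are re-created with the sidecar; from then on, every hop from the gateway to your services is mutually authenticated by the mesh.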