What looked like a routing tweak became a Kubernetes Ingress feedback loop that consumed every request, every pod, every ounce of available capacity. It wasn’t a crash from bad code. It was the architecture folding in on itself.
Kubernetes Ingress is powerful, but it is also a sharp edge. When routes point back into themselves—through rewrites, recursive rules, or wildcard expansions—the ingress controller becomes a loop generator. The feedback is instant. Requests pile on. Latency spikes. Horizontal Pod Autoscalers see a traffic surge and spin up more pods, which feed even more requests back into the same loop. The cluster self-amplifies its own problem.
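A minimal sketch of how a wildcard rule can close the loop. The hostnames (`*.example.com`, `ingress.example.com`) and the `web` service are illustrative assumptions, not taken from any real incident: the wildcard host also matches the controller's own external hostname, so if any pod behind `web` calls that public hostname, the request re-enters this same rule.

```yaml
# Hypothetical Ingress (ingress-nginx assumed). The wildcard host
# "*.example.com" also captures the controller's own external
# hostname, e.g. ingress.example.com, so traffic from the backend
# to that hostname loops back through this rule.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: catch-all
spec:
  ingressClassName: nginx
  rules:
    - host: "*.example.com"   # also matches ingress.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web     # assumed service name
                port:
                  number: 80
```

Pinning the rule to an explicit host such as `shop.example.com` instead of the wildcard removes the controller's own endpoint from the match set.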
Most teams don’t see it coming because everything looks normal for a few seconds. Then CPU burns hot. Thread counts explode. Outbound connections saturate. Observability tools report spikes in traffic without clear source attribution. By the time someone guesses it’s an Ingress problem, the damage has spread across services.
Preventing an Ingress feedback loop starts with clarity in routing definitions. Every path, every hostname, and every rewrite rule should be explicit, with no assumptions. Avoid patterns that capture your own ingress controller endpoints. Test changes in isolated environments before production. Use request tracing to see whether any service sends traffic back into the cluster through the same ingress path.
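The "avoid patterns that capture your own ingress controller endpoints" check can be automated. A minimal sketch, assuming you can list your Ingress host patterns and the controller's external hostnames (the function name and the example hosts are hypothetical, not a standard tool):

```python
from fnmatch import fnmatch

def find_self_capturing_rules(ingress_hosts, controller_endpoints):
    """Flag Ingress host patterns that would also match the ingress
    controller's own external endpoints -- the precondition for a
    routing loop where backend traffic re-enters the same entry point."""
    findings = []
    for pattern in ingress_hosts:
        # Kubernetes wildcard hosts use a leading '*' label,
        # e.g. '*.example.com'; fnmatch handles that shape.
        for endpoint in controller_endpoints:
            if fnmatch(endpoint, pattern):
                findings.append((pattern, endpoint))
    return findings

# Hypothetical rules and controller hostname for illustration:
rules = ["shop.example.com", "*.example.com"]
endpoints = ["ingress.example.com"]
print(find_self_capturing_rules(rules, endpoints))
```

Running such a check in CI against rendered manifests turns a production feedback loop into a failed pipeline step, which is a far cheaper place to catch it.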