I filed my first Kubernetes Ingress feature request after a night of chasing down broken routes in production. It wasn’t a bug. It was a missing capability that every team I knew had hacked around in some fragile way.
Kubernetes Ingress has been one of the most divisive parts of the platform. It sits right at the edge, controlling traffic flow into your cluster, yet it feels like the slowest part of Kubernetes to evolve. Engineers want more: richer routing rules, native support for modern load balancing patterns, better observability, and first-class integration with service meshes.
The official feature set still lags behind what actual workloads demand. Sure, you can stack annotations, deploy custom controllers, or swap in an alternative like Gateway API. But these are patches, not solutions. Each workaround adds complexity, increases the learning curve for new contributors, and makes automation harder.
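To see what that workaround tax looks like in practice, here is a hedged sketch of the annotation sprawl a typical team ends up with. The names (`web`, `api`, `app.example.com`) are hypothetical, and the annotation keys shown are specific to the ingress-nginx controller; other controllers use different, incompatible keys for the same behavior, which is exactly the portability problem:

```yaml
# Hypothetical example of controller-specific annotation stacking.
# None of these behaviors are part of the core Ingress spec; moving to
# a different controller means rewriting every annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api
                port:
                  number: 80
```

Note that even the path rewrite forces `pathType: ImplementationSpecific`, so the manifest's meaning depends entirely on which controller happens to be installed.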
The biggest pain point remains flexibility. Today’s Ingress rules can handle basic path and host matching, but advanced HTTP routing—like weighted traffic splits, header-based routing, or real-time failover—is often pushed into external systems. That breaks the promise of Kubernetes as a unified control plane.
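For contrast, here is a hedged sketch of a weighted traffic split expressed natively in Gateway API's HTTPRoute, something the core Ingress resource cannot describe at all. The names (`web-split`, `example-gateway`, `web-stable`, `web-canary`) are hypothetical:

```yaml
# Hypothetical 90/10 canary split using Gateway API (gateway.networking.k8s.io/v1).
# The weight field is part of the spec itself, not a controller-specific annotation.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-split
spec:
  parentRefs:
    - name: example-gateway
  hostnames:
    - app.example.com
  rules:
    - backendRefs:
        - name: web-stable
          port: 80
          weight: 90
        - name: web-canary
          port: 80
          weight: 10
```

Because the split lives in the route resource rather than in annotations, it is portable across any conformant Gateway API implementation.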
Development velocity is another issue. The community works hard, but major Ingress changes require long proposal and approval cycles. It’s why so many of us have written and abandoned multiple Kubernetes Ingress feature requests over the years. By the time a feature is accepted and shipped, many teams have already moved on to their own load-balancer-level fixes.