The moment your analytics team asks for Kubernetes cluster metrics inside Metabase, you realize how messy access control can get. AWS EKS is built for managed containers. Metabase is built for insight through data exploration. Connecting them is simple in theory, but unless permissions are handled right, you’ll drown in IAM errors and half-broken dashboards.
EKS provides the infrastructure, scaling, and isolation your workloads need. Metabase gives you the friendly face on top of that data, letting teams query, visualize, and share. When integrated correctly, Metabase becomes the window into what EKS is doing beneath the surface: pods, logs, costs, and anything else your metrics system emits.
At its core, an EKS Metabase setup works like this: a database such as RDS, or a data warehouse fed by workloads running in EKS, exposes data that Metabase reaches through a controlled network boundary. Metabase authenticates using IAM roles or OIDC, so every query runs under defined permissions. The real trick is wiring identity so analysts never need cluster-level credentials.
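One common way to wire that identity on EKS is IAM Roles for Service Accounts (IRSA), where the Metabase pod assumes an IAM role through the cluster's OIDC provider instead of holding static credentials. A minimal sketch, with a hypothetical account ID, namespace, and role name:

```yaml
# Kubernetes ServiceAccount annotated for IRSA.
# The account ID, namespace, and role ARN below are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metabase
  namespace: analytics
  annotations:
    # EKS injects temporary credentials for this role into any pod
    # that runs under this service account.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/metabase-readonly
```

Point the Metabase Deployment at this service account and the pod receives short-lived credentials automatically, which is what lets analysts stay out of the cluster's credential path entirely.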
Start by defining clear AWS IAM roles mapped to Metabase's Kubernetes service account. Limit policies to read-only access for the tables or metrics sources analysts actually need. Next, wire your network layer so Metabase runs in a private subnet alongside EKS; that keeps traffic inside your AWS perimeter without relying on external connections. Use security groups to control which connections cross boundaries, and rotate secrets regularly using AWS Secrets Manager or another vault service.
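The read-only role attached to that service account can be kept narrow. A sketch of such a policy, assuming RDS IAM database authentication and a Secrets Manager path for the connection secret (the account ID, region, resource IDs, and the `metabase_reader` database user are all hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowIamDbAuthConnect",
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-EXAMPLEID/metabase_reader"
    },
    {
      "Sid": "AllowConnectionSecretRead",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:metabase/db-*"
    }
  ]
}
```

Note that read-only access to the data itself is enforced by database grants on the `metabase_reader` user; the IAM policy only controls who may connect and fetch the secret.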
If your setup relies on Okta or another identity provider, configure OIDC integration so SSO users map neatly to IAM roles. This avoids manual RBAC duplication inside EKS and Metabase. If dashboards stall on queries or drop connections, check your proxy settings and TLS certificates. EKS ingress misconfigurations are a common culprit.
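The SSO-to-IAM mapping usually lives in the role's trust policy: the role trusts an OIDC identity provider registered in IAM, and a condition restricts which tokens may assume it. A sketch under assumed names (the Okta domain and the `metabase-analytics` audience are placeholders for your own provider and client ID):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/your-org.okta.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "your-org.okta.com:aud": "metabase-analytics"
        }
      }
    }
  ]
}
```

Because the trust condition keys on the token's audience, adding or removing an analyst happens in the identity provider, not in IAM or in Metabase, which is exactly the RBAC duplication this avoids.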