Picture this. Your internal analytics dashboard is humming along in Jetty, hosting Metabase behind a few layers of reverse proxies and access control. Then someone asks, “Can we share this with the data team, but not the contractors?” That’s the moment every engineer realizes how fragile DIY access controls can be.
Jetty is a lightweight Java-based HTTP server and servlet container. Metabase is an open-source data visualization tool designed for human-speed queries, not enterprise-grade permission systems. Together they make a fast data app, but without careful setup you’ll either overprotect (and kill flow) or underprotect (and wake up to an audit nightmare). A good integration means identity-aware routing, consistent session handling, and zero manual token juggling.
The typical Jetty–Metabase pairing works like this. Jetty sits in front as the entry point, whether it runs in a container or as a standalone microservice. It handles OIDC or SAML authentication through Okta, Azure AD, or another identity provider. Once a user's session is verified, requests are forwarded to Metabase with a signed header or cookie carrying that user's identity and role. Metabase then enforces its own internal permissions—collections, dashboards, SQL queries—based on that identity context. The entire flow happens in milliseconds, yet it determines who gets to see what data.
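The signed-header step is the part people most often get wrong, so here is a minimal sketch of one way to do it: an HMAC-SHA256 signature over the identity payload, computed at the proxy and verified downstream. The payload shape (`email|role`) and class name are illustrative assumptions, not Metabase's actual SSO API—in production you would more likely use a standard JWT with a shared secret.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Sketch: sign the identity payload Jetty attaches before proxying to
// Metabase, so the downstream app can detect a forged or altered header.
public class IdentityHeaderSigner {
    private final SecretKeySpec key;

    public IdentityHeaderSigner(String sharedSecret) {
        this.key = new SecretKeySpec(
            sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256");
    }

    // payload is e.g. "alice@example.com|analyst" (illustrative format)
    public String sign(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return payload + "."
                + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Verify a header value produced by sign(); the constant-time
    // comparison avoids leaking signature bytes through timing.
    public boolean verify(String headerValue) {
        int dot = headerValue.lastIndexOf('.');
        if (dot < 0) return false;
        String payload = headerValue.substring(0, dot);
        return MessageDigest.isEqual(
            sign(payload).getBytes(StandardCharsets.UTF_8),
            headerValue.getBytes(StandardCharsets.UTF_8));
    }
}
```

Because only the proxy and Metabase share the secret, a client that bypasses Jetty cannot mint a valid identity header—which is the whole point of the flow described above.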
To keep it stable, a few practices matter. Always terminate TLS at Jetty so identity headers never travel unencrypted between hops. Keep your OIDC tokens short-lived and enforce refresh through Jetty middleware. Map users by email claim rather than by mutable username. And never embed secrets in Metabase configs; route them through environment variables or an external vault.
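Three of those practices can be sketched in a few lines of policy code. The 15-minute lifetime, the claim name "email" (standard in OIDC), and the environment variable name MB_SIGNING_SECRET are all illustrative assumptions, not official Metabase or Jetty settings.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;

// Sketch of session hygiene for the proxy layer: short-lived sessions,
// identity keyed on the email claim, secrets pulled from the environment.
public class SessionPolicy {
    // Short-lived by policy; expired sessions force a refresh upstream.
    static final Duration MAX_AGE = Duration.ofMinutes(15);

    // True while the token is within its allowed lifetime.
    public static boolean isFresh(Instant issuedAt, Instant now) {
        return !issuedAt.plus(MAX_AGE).isBefore(now);
    }

    // Key the user on the email claim, never on a mutable username.
    public static String identityKey(Map<String, String> claims) {
        String email = claims.get("email");
        if (email == null || email.isBlank()) {
            throw new IllegalArgumentException("missing email claim");
        }
        return email.toLowerCase();
    }

    // Secrets come from the environment (or a vault), never config files.
    public static String signingSecret() {
        String secret = System.getenv("MB_SIGNING_SECRET");
        if (secret == null) {
            throw new IllegalStateException("MB_SIGNING_SECRET not set");
        }
        return secret;
    }
}
```

Keeping these checks in one small class means the freshness window and claim mapping are changed in exactly one place when policy shifts.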
Key benefits that make teams stick with this setup: