You finally wired Metabase to Databricks, but the queries crawl, the permissions drift, and nobody remembers who granted what. You are not alone. Connecting a visualization tool to a data platform is easy until compliance shows up asking for lineage, access history, and one clean story about how the data moved.
Databricks runs the heavy analytics, blending notebooks, governance, and data pipelines under one unified lakehouse. Metabase sits on top, giving analysts and executives simple dashboards they can understand without SQL. Used together, they turn raw data into stories, provided identity and access control are treated as first-class citizens.
The logic starts at the credential layer. Databricks issues personal access tokens or uses SSO through OIDC, often federated by providers like Okta or Azure AD. Metabase, for its part, connects to Databricks via a JDBC driver, translating user queries into Spark SQL. The integration challenge is avoiding shared passwords and opaque service accounts that linger far too long. The clean answer is to tie each dashboard query back to an authenticated identity with scoped permissions.
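To make the credential layer concrete, here is a minimal sketch of how a Databricks JDBC connection string is assembled with personal-access-token auth (`AuthMech=3`, user `token`, PAT as password). The host, HTTP path, and token values are placeholders, not real endpoints:

```python
import os

def databricks_jdbc_url(host: str, http_path: str, token: str) -> str:
    """Build a Databricks JDBC URL using personal-access-token auth.

    AuthMech=3 tells the Databricks JDBC driver to authenticate with
    UID=token and the PAT supplied as the password.
    """
    return (
        f"jdbc:databricks://{host}:443/default;"
        f"transportMode=http;ssl=1;"
        f"httpPath={http_path};"
        f"AuthMech=3;UID=token;PWD={token}"
    )

# Placeholder values; in Metabase these map to the Host, HTTP Path,
# and Personal Access Token fields of the Databricks connection form.
url = databricks_jdbc_url(
    "adb-1234567890.12.azuredatabricks.net",
    "/sql/1.0/warehouses/abc123",
    os.environ.get("DATABRICKS_TOKEN", "dapi-example"),
)
```

Keeping the token in an environment variable (or a secret store) rather than in the URL literal is what keeps this pattern out of the "shared password" trap.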
A repeatable configuration looks like this: Metabase lives inside your VPC, connects over a private endpoint, and uses short-lived tokens fetched through a script or managed secret store. Databricks enforces catalog-level controls through Unity Catalog and logs every query event. You now have traceability, rotation, and auditability without manual approvals clogging Slack.
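The "short-lived tokens fetched through a script" step can be sketched with the Databricks OAuth client-credentials flow, in which a service principal exchanges its ID and secret at the workspace `/oidc/v1/token` endpoint for a temporary access token. The workspace URL and credentials below are hypothetical, and error handling is omitted for brevity:

```python
import base64
import json
import urllib.request

def build_token_request(workspace: str, client_id: str,
                        client_secret: str) -> urllib.request.Request:
    """Prepare an OAuth client-credentials request against the
    Databricks workspace token endpoint (/oidc/v1/token)."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return urllib.request.Request(
        f"{workspace}/oidc/v1/token",
        data=b"grant_type=client_credentials&scope=all-apis",
        headers={
            "Authorization": f"Basic {creds}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )

def fetch_token(workspace: str, client_id: str, client_secret: str) -> str:
    """Exchange service-principal credentials for a short-lived token,
    which a rotation job can then write into Metabase's secret store."""
    req = build_token_request(workspace, client_id, client_secret)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

A scheduled job calling `fetch_token` and updating the stored credential gives you rotation without anyone pasting tokens into a UI.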
Common pain points appear when access tokens expire or when Metabase tries to reuse a session against a cluster that has shut down. Monitor connection status and set health checks to restart connections or refresh tokens periodically. Define roles clearly in Databricks and map them to Metabase groups. The less your engineers SSH into a dashboard server, the better your sleep.
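The refresh-before-expiry logic behind such a health check can be a few lines. This is an illustrative sketch, and the 10-minute margin is an arbitrary default rather than a Databricks or Metabase setting:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Refresh the token well before it expires, so Metabase never presents
# a stale credential to a warehouse that has cycled underneath it.
REFRESH_MARGIN = timedelta(minutes=10)

def needs_refresh(expires_at: datetime,
                  now: Optional[datetime] = None) -> bool:
    """True when the token is within REFRESH_MARGIN of its expiry."""
    now = now or datetime.now(timezone.utc)
    return now >= expires_at - REFRESH_MARGIN
```

A cron job or sidecar that calls `needs_refresh` on each connection's token and rotates the ones that return `True` keeps dashboards alive without anyone logging into the server.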