You spin up an Azure ML workspace, build a few models, and then hit a wall: reports live elsewhere. The data scientists are in notebooks, the product team is in dashboards, and access control feels like a guessing game. That’s where wiring Azure ML to Metabase changes everything.
Azure Machine Learning (Azure ML) handles your managed compute, training pipelines, and deployment endpoints. Metabase turns data into readable dashboards without heavy SQL. Together, they give teams visibility from training metrics to production inference logs. The catch is secure, consistent access between them — a detail too many teams hack together with service principals and screenshots of tokens.
Here’s what the integration really needs. Azure ML hosts experiment and model data in storage accounts or Azure SQL. Metabase connects to those same stores via JDBC or service credentials. The identity thread is Azure AD (now Microsoft Entra ID): you configure role-based access control (RBAC) so the service principal Metabase uses can only read the permitted datasets. It’s not complex, but it deserves care: one misstep, and you either break automation or expose too much data.
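To make the JDBC path concrete, here is a minimal sketch of the connection string Metabase’s SQL Server driver would use to reach Azure SQL with a service principal. The server name, database name, client ID, and secret are hypothetical placeholders; in practice the secret should come from a vault, not source code.

```python
# Sketch: assemble an Azure SQL JDBC URL that authenticates with an
# Azure AD service principal instead of a SQL login. All identifiers
# below (server, database, client id, secret) are made-up placeholders.

def build_jdbc_url(server: str, database: str,
                   client_id: str, client_secret: str) -> str:
    """Build an Azure SQL JDBC URL using service-principal authentication."""
    props = {
        "database": database,
        "encrypt": "true",                  # TLS is required by Azure SQL
        "trustServerCertificate": "false",
        "authentication": "ActiveDirectoryServicePrincipal",
        "user": client_id,                  # app (client) ID of the Metabase SP
        "password": client_secret,          # client secret, not a user password
    }
    options = ";".join(f"{k}={v}" for k, v in props.items())
    return f"jdbc:sqlserver://{server}.database.windows.net:1433;{options}"

url = build_jdbc_url("mlmetrics-srv", "mlmetrics",
                     "00000000-0000-0000-0000-000000000000",
                     "secret-from-key-vault")
```

In Metabase’s admin UI the same properties land in the connection form and the “additional JDBC options” field; the point is that the identity is the registered application, so revoking or scoping it happens in Azure AD, not in the dashboard tool.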
When set up properly, the flow looks clean. Azure ML logs results to its datastore. Metabase retrieves aggregates on schedule and displays them with the same identity constraints applied in Azure. Permissions mirror your RBAC roles instead of hardcoded SQL filters. That’s the moment dashboards stop leaking secrets and start reflecting production state.
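For a feel of what “Metabase retrieves aggregates” means in practice, here is a small sketch of run metrics being flattened into the kind of table a dashboard card reads. The experiment name, run IDs, and metric values are invented stand-ins for what Azure ML logging would actually produce.

```python
# Sketch: flatten per-run metrics (hypothetical data) into the tabular
# export a BI tool queries, plus the aggregate a dashboard card shows.
from statistics import mean

runs = [
    {"experiment": "churn-model", "run_id": "r1", "metrics": {"auc": 0.91, "f1": 0.78}},
    {"experiment": "churn-model", "run_id": "r2", "metrics": {"auc": 0.93, "f1": 0.81}},
]

def aggregate(runs: list[dict], metric: str) -> float:
    """Mean of one metric across runs -- what a summary card displays."""
    return round(mean(r["metrics"][metric] for r in runs), 4)

# One row per (experiment, metric): the shape Metabase can chart directly.
rows = [
    {"experiment": "churn-model", "metric": m, "value": aggregate(runs, m)}
    for m in ("auc", "f1")
]
```

Whether the export lands in an Azure SQL table or a structured blob, keeping it pre-flattened like this means Metabase users never need raw experiment payloads, only the columns RBAC already permits.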
The short version: to connect Azure ML and Metabase securely, store results in an Azure data service (such as SQL Database, or Blob Storage with structured exports), register a restricted Azure AD application for Metabase, assign it read-only permissions, and link the two via JDBC or the Azure SQL connector. That preserves auditability and least-privilege access.
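The “read-only permissions” step comes down to two T-SQL statements run against the Azure SQL database. The sketch below generates them for a hypothetical app display name; `CREATE USER ... FROM EXTERNAL PROVIDER` maps the Azure AD identity into the database, and `db_datareader` caps it at SELECT.

```python
# Sketch: generate the T-SQL a DBA runs so the Metabase service principal
# can read, and only read, the metrics database. The display name
# "metabase-reader" is a hypothetical Azure AD app registration.

def readonly_grant(app_display_name: str) -> list[str]:
    """T-SQL to map an Azure AD app into the DB with read-only access."""
    return [
        f"CREATE USER [{app_display_name}] FROM EXTERNAL PROVIDER;",
        f"ALTER ROLE db_datareader ADD MEMBER [{app_display_name}];",
    ]

statements = readonly_grant("metabase-reader")
```

Because the grant names an Azure AD principal rather than a SQL login, rotating the secret or disabling the app in Azure AD immediately cuts Metabase’s access, with no password sprawl to clean up.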