You deploy dashboards, data models, and persistent volumes. Someone asks for read-only access at 4 p.m. You realize it’s buried inside three layers of Kubernetes YAML, a PVC, and a forgotten RBAC rule. Metabase and Portworx together can fix that, but only if they’re wired with intent rather than guesswork.
Metabase is the clean, human face of your data. Portworx is the durable spine that keeps that data alive under failure, scale, or migration. When they work together, the result is predictable analytics that survive cluster cycles and the occasional late-night patch.
So how do you make them behave like teammates instead of strangers? The key is identity and storage orchestration. Metabase runs as a service inside Kubernetes, and Portworx manages the persistent data layer beneath it. Connect Metabase’s container to a Portworx volume, map user roles to storage access through Kubernetes ServiceAccounts, and you get isolation per dataset without manual PVC juggling. Each dashboard reads from storage that remains consistent across node restarts or scaling events, and audit trails remain intact because permissions are bound to identity, not instance ID.
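As a minimal sketch of that wiring, the manifests below define a Portworx-backed StorageClass, a PVC, and a single-replica Metabase StatefulSet bound to a ServiceAccount. The names (`px-metabase-sc`, `metabase-data`, the `metabase` ServiceAccount) and the replication and sizing parameters are illustrative assumptions, not prescribed values:

```yaml
# StorageClass backed by the Portworx CSI provisioner
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-metabase-sc          # illustrative name
provisioner: pxd.portworx.com
parameters:
  repl: "2"                     # Portworx keeps two replicas of each volume
  io_profile: "db_remote"       # profile tuned for database workloads
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metabase-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: px-metabase-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: metabase
spec:
  serviceName: metabase
  replicas: 1
  selector:
    matchLabels: { app: metabase }
  template:
    metadata:
      labels: { app: metabase }
    spec:
      serviceAccountName: metabase   # identity, so permissions survive restarts
      containers:
        - name: metabase
          image: metabase/metabase:latest
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: metabase-data
              mountPath: /metabase-data
      volumes:
        - name: metabase-data
          persistentVolumeClaim:
            claimName: metabase-data
```

Because the PVC is Portworx-backed, a node restart or reschedule reattaches the same volume to the new pod, which is what keeps dashboards consistent across scaling events.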
When configuring, start with secrets management. Store Metabase’s database credentials in Kubernetes Secrets, and encrypt the Portworx volumes beneath them through your cluster’s KMS integration, whether AWS KMS or HashiCorp Vault. Rotate credentials regularly and prefer short-lived tokens. If you rely on Okta or another OIDC provider, use those identities to scope access: analytics users see sanitized datasets, admins see raw logs. You’ll avoid the common trap of over-provisioned storage claims that expose sensitive data.
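A sketch of the credential wiring, assuming Metabase’s application database is an external Postgres (the `MB_DB_*` variables are Metabase’s documented environment variables; the Secret name, service name, and values are placeholders to rotate or replace with Vault-injected tokens):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: metabase-db-creds       # placeholder name
type: Opaque
stringData:
  username: metabase_app        # rotate regularly; prefer short-lived
  password: change-me           # credentials injected by Vault/KMS
---
# In the Metabase container spec, pull credentials from the Secret
# instead of hard-coding them in the manifest:
env:
  - name: MB_DB_TYPE
    value: postgres
  - name: MB_DB_HOST
    value: metabase-postgres    # placeholder service name
  - name: MB_DB_PORT
    value: "5432"
  - name: MB_DB_DBNAME
    value: metabase
  - name: MB_DB_USER
    valueFrom:
      secretKeyRef:
        name: metabase-db-creds
        key: username
  - name: MB_DB_PASS
    valueFrom:
      secretKeyRef:
        name: metabase-db-creds
        key: password
```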
Quick answer: The easiest way to connect Metabase with Portworx is to deploy Metabase in Kubernetes using a StatefulSet that points to a Portworx-backed PersistentVolumeClaim. Map user roles with RBAC, secure secrets, and let Portworx handle failover automatically.
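The role mapping in the quick answer can be sketched as a namespaced Role plus RoleBinding. The names and the `analytics` namespace are illustrative; the binding grants the Metabase ServiceAccount read access to its own claim and nothing else:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: metabase-storage-reader   # illustrative name
  namespace: analytics            # placeholder namespace
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    resourceNames: ["metabase-data"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metabase-storage-reader
  namespace: analytics
subjects:
  - kind: ServiceAccount
    name: metabase
    namespace: analytics
roleRef:
  kind: Role
  name: metabase-storage-reader
  apiGroup: rbac.authorization.k8s.io
```

Binding permissions to the ServiceAccount rather than a pod or instance ID is what keeps the audit trail intact when pods are rescheduled.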