A data scientist opens their notebook only to find storage latency creeping in again. The workflow hums, but datasets stutter. Somewhere, Kubernetes volumes and policy mappings are misaligned, and the clock ticks louder with each retry. That’s the everyday friction the Domino Data Lab and Portworx pairing exists to erase.
Domino Data Lab gives enterprises a governed data science platform that wraps model development, versioning, and reproducibility into one environment. Portworx handles persistent storage for containerized applications with remarkable resilience and self-healing volume orchestration. Together, they form a stack that keeps AI workloads predictable even when clusters move or scale. Domino runs the experiments. Portworx keeps the bits alive.
The real trick is the integration logic. When Domino’s compute environments need fast, reliable access to massive datasets, Portworx provides dynamic volume provisioning through Kubernetes StorageClasses. Every notebook spin-up grabs the right storage without manual mounts or fragile NFS paths. A sane combination of RBAC controls, namespaces, and OIDC-based identity keeps data isolated per user or team, so you never mix research prototypes with production models again.
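As a sketch of what that dynamic provisioning looks like, here is a Portworx-backed StorageClass. The class name is hypothetical and the replication and IO-profile values are illustrative assumptions, not Domino defaults; tune them to your cluster:

```yaml
# Hypothetical StorageClass for Domino workloads backed by Portworx.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: domino-portworx
provisioner: pxd.portworx.com   # Portworx CSI provisioner
parameters:
  repl: "2"          # keep two replicas of each volume for resilience
  io_profile: "db"   # favor database-style random IO (illustrative)
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

With a class like this in place, notebook pods get volumes provisioned on demand instead of relying on pre-created mounts.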
How do I connect Domino Data Lab and Portworx?
You install the Portworx CSI driver in Domino’s underlying Kubernetes cluster and set a Portworx-backed StorageClass as the cluster default. Domino can then request persistent volumes directly via Portworx, abstracting storage details behind the platform’s workspace settings. The setup takes minutes and keeps model execution reproducible across upgrade cycles.
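Concretely, marking a Portworx StorageClass as the cluster default means any PersistentVolumeClaim that omits a class binds to it automatically. A minimal sketch, with hypothetical class, claim, and namespace names:

```yaml
# Mark the (hypothetical) Portworx-backed class as the cluster default.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: domino-portworx
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: pxd.portworx.com
---
# A claim with no storageClassName set is then provisioned by Portworx.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workspace-data
  namespace: domino-compute
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```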
Best practice: map your organization’s IAM roles to Domino’s internal groups before attaching Portworx volumes. This clarifies audit trails and locks down who can touch which dataset. Use rotation-friendly secrets management, ideally tied to Okta or AWS IAM, to avoid orphaned credentials. Once the RBAC layer is aligned, storage policies apply consistently across environments.
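One way to enforce the per-team isolation described above is a namespaced Kubernetes Role bound to a group asserted by your OIDC provider. All names here are illustrative assumptions:

```yaml
# Hypothetical Role restricting PVC access to a single team's namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pvc-reader
  namespace: team-research
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list"]
---
# Bind the Role to a group claim from the identity provider (e.g. Okta).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pvc-reader-binding
  namespace: team-research
subjects:
  - kind: Group
    name: research-team   # group name asserted via OIDC (assumed)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pvc-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the binding targets a group rather than individual users, credential rotation on the identity-provider side never leaves stale Kubernetes permissions behind.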