Your data team’s storage layer should be invisible until it fails. Then it becomes everyone’s problem. That’s the tension the pairing of Domino Data Lab and Longhorn resolves: persistent volume management you can actually trust when workloads scale or crash in spectacular fashion.
Domino Data Lab provides the enterprise platform for reproducible, secure data science. Longhorn, an open-source distributed block storage system built for Kubernetes, handles the low-level persistence so notebooks, experiments, and models survive pod restarts, node drains, and chaotic cluster upgrades. When you combine them, you get a repeatable compute environment with durable storage baked in, rather than bolted on.
In this setup, Domino’s project spaces map to Kubernetes namespaces. Each workspace spins up persistent volumes backed by Longhorn. Longhorn replicates each volume synchronously across replicas on different nodes, so writes survive the loss of a replica even when infrastructure goes sideways. The workflow is elegant: Domino declares a PVC, Longhorn’s engine fulfills it, and the data scientist keeps coding without worrying which node their data lives on. That’s the kind of problem you only appreciate once you’ve lost a terabyte to a faulty detach.
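As a minimal sketch of that flow, the claim Domino would submit is an ordinary Kubernetes PVC pointed at a Longhorn-backed storage class. The claim name and namespace here are illustrative; `longhorn` is the default StorageClass name Longhorn installs.

```yaml
# Hypothetical PVC of the kind a Domino workspace would create.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workspace-data        # illustrative claim name
  namespace: domino-compute   # illustrative Domino namespace
spec:
  accessModes:
    - ReadWriteOnce           # a Longhorn block volume attaches to one node at a time
  storageClassName: longhorn  # Longhorn's default StorageClass
  resources:
    requests:
      storage: 20Gi
```

Once the claim binds, Longhorn provisions the volume and handles replica placement; the workspace pod mounts it like any other PVC.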
Security teams appreciate that Longhorn’s operations flow through standard Kubernetes RBAC. Use fine-grained roles to control who can attach volumes or take snapshots. Tie these rules to your identity provider, whether Okta or Azure AD, so volume access reflects real corporate roles. Rotate service account tokens regularly so storage credentials don’t outlive the human sessions that justified them. Domino picks up that security context cleanly through its OIDC or AWS IAM integration.
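As one hedged sketch of what that fine-grained control can look like, a Role scoped to Longhorn’s custom resources (which live in the `longhorn.io` API group, in the `longhorn-system` namespace) can limit who may operate on volumes and snapshots. The role name and the `storage-admins` group are illustrative; the group is whatever your identity provider maps real storage operators into.

```yaml
# Illustrative Role limiting volume/snapshot operations to one group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: longhorn-volume-operator   # illustrative name
  namespace: longhorn-system       # where Longhorn's CRs live
rules:
  - apiGroups: ["longhorn.io"]
    resources: ["volumes", "snapshots"]
    verbs: ["get", "list", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: longhorn-volume-operator-binding
  namespace: longhorn-system
subjects:
  - kind: Group
    name: storage-admins           # illustrative: group mapped from your IdP
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: longhorn-volume-operator
  apiGroup: rbac.authorization.k8s.io
```

Because the binding targets a group rather than individual users, access changes land in the identity provider, not in cluster manifests.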
Troubleshooting the pair usually comes down to policy alignment. When Longhorn reports a volume stuck attaching, check whether Domino’s pod is stuck in Pending. Nine times out of ten the storage class doesn’t match the namespace permissions. Fixing it means updating the PVC definition, not hacking around with manual mounts. Keep your storage classes consistent across Domino environments to avoid ghost volumes.
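One way to keep classes consistent is a single version-controlled StorageClass applied identically to every cluster Domino runs on. A sketch, assuming Longhorn’s CSI driver: `driver.longhorn.io` is Longhorn’s provisioner, and `numberOfReplicas` and `staleReplicaTimeout` are standard Longhorn StorageClass parameters; the class name is whatever your environments agree on.

```yaml
# Shared StorageClass, applied to every Domino cluster so PVCs
# resolve the same way everywhere.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated      # illustrative shared name
provisioner: driver.longhorn.io  # Longhorn's CSI provisioner
parameters:
  numberOfReplicas: "3"          # replicas spread across nodes
  staleReplicaTimeout: "2880"    # minutes before a failed replica is reclaimed
allowVolumeExpansion: true
reclaimPolicy: Delete
```

With one definition in source control, a PVC that binds in staging binds the same way in production, and the ghost-volume class of mismatch disappears.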