Picture a data scientist waiting thirty minutes for a Kubernetes environment to spin up while a cluster admin scrambles through YAML files. That's the kind of wait Domino Data Lab Kubler was built to eliminate. It's the behind-the-scenes conductor that keeps enterprise data science fast, repeatable, and compliant on top of Kubernetes.
Domino Data Lab hosts the data science workbench, while Kubler handles the heavy lifting of Kubernetes cluster lifecycle management. Together, they turn what used to be a tedious manual setup of nodes, permissions, and versions into an orchestrated process that stays aligned with corporate security and resource policies. Kubler sits between the infrastructure team and the data scientists, abstracting away just enough of Kubernetes to make complex workloads reproducible and safe.
When Domino Data Lab Kubler is correctly configured, you get an environment factory. Kubler builds, updates, and retires clusters automatically based on policies and templates. Domino then uses those clusters as compute backends for model training, notebooks, and pipelines. Kubler can also integrate with your identity provider to apply consistent access controls through OpenID Connect, Okta, or AWS IAM roles. The result: no one touches unapproved infrastructure, and no rogue clusters survive past their expiration dates.
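The "retire clusters past their expiration dates" idea above boils down to a TTL policy check. The sketch below is a minimal, hypothetical illustration of that logic, not Kubler's actual API: the `expired_clusters` function and the `ttl_hours` field are assumptions made for the example, and a real controller would act on live cluster records rather than an in-memory list.

```python
from datetime import datetime, timedelta

def expired_clusters(clusters, now):
    """Return the names of clusters whose time-to-live has elapsed.

    Each cluster record is a dict with a creation timestamp and a
    TTL in hours (field names are hypothetical, for illustration).
    """
    return [
        c["name"]
        for c in clusters
        if now - c["created"] > timedelta(hours=c["ttl_hours"])
    ]

# Example: one training cluster past its 12-hour TTL, one notebook
# cluster still within its window.
clusters = [
    {"name": "train-a", "created": datetime(2024, 1, 1, 0, 0), "ttl_hours": 12},
    {"name": "nb-b", "created": datetime(2024, 1, 1, 20, 0), "ttl_hours": 8},
]
print(expired_clusters(clusters, datetime(2024, 1, 2, 0, 0)))  # ['train-a']
```

A policy engine like the one described would run a check of this shape on a schedule and tear down whatever it returns, which is what keeps expired environments from lingering.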
Integration workflow
Start by defining cluster blueprints in Kubler that map to Domino workspace requirements. Associate them with approved base images and system roles. Domino pulls those definitions through its admin panel, so every environment request matches the right Kubernetes class, version, and storage settings. Logging and metrics flow back through Kubler’s control plane, giving DevOps a single view of active and idle workloads.
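Conceptually, the workflow above is a matching step: an environment request from Domino is resolved against the approved blueprints by Kubernetes class, version, and storage. The sketch below illustrates that idea only; the function name, the dict fields (`k8s_class`, `k8s_version`, `storage_gb`), and the blueprint shapes are all hypothetical stand-ins, not Kubler's real schema.

```python
def match_blueprint(request, blueprints):
    """Return the name of the first blueprint that satisfies a request.

    A blueprint matches when its Kubernetes class and version equal the
    request's, and it offers at least the requested storage. Returns
    None when no approved blueprint fits (i.e. the request is denied).
    """
    for bp in blueprints:
        if (bp["k8s_class"] == request["k8s_class"]
                and bp["k8s_version"] == request["k8s_version"]
                and bp["storage_gb"] >= request["storage_gb"]):
            return bp["name"]
    return None

# Example catalog of two approved blueprints.
blueprints = [
    {"name": "gpu-small", "k8s_class": "gpu", "k8s_version": "1.28", "storage_gb": 100},
    {"name": "cpu-large", "k8s_class": "cpu", "k8s_version": "1.28", "storage_gb": 500},
]

request = {"k8s_class": "cpu", "k8s_version": "1.28", "storage_gb": 200}
print(match_blueprint(request, blueprints))  # cpu-large
```

The payoff of this structure is the guarantee described earlier: a request that matches nothing gets no cluster at all, so every environment that does launch conforms to an approved definition.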
Best practices