Picture a data scientist with five tabs open, SSH tunnels stacked like pancakes, and a notebook that refuses to sync because a kube policy expired. That is a normal Tuesday before CentOS and Domino Data Lab learn to cooperate. Once they do, access control stops being art and starts being policy.
CentOS gives you a predictable Linux environment built for longevity. Domino Data Lab turns that foundation into a governed, collaborative platform for modeling and experimentation. Combined, they strike a rare balance: hardened OS predictability with flexible data science agility. The secret is mapping identity and execution controls so they actually talk to each other.
In this setup, CentOS provides a stable container or VM base for Domino's executor nodes. Each container inherits system-level security settings and SELinux enforcement. Domino manages compute environments on top of that, executing workspace sessions through containers or jobs pinned to those CentOS bases. The handshake happens through authentication metadata, not static keys. When Domino is connected to an SSO provider like Okta or Azure AD via OIDC, users authenticate once, and the claims on their token cascade permissions through Domino's model registry and file stores automatically.
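To make the cascade concrete, here is a minimal sketch of how group claims on a validated OIDC token might map to project-level roles. The group names, role strings, and function are all illustrative assumptions, not Domino's actual API or role vocabulary.

```python
# Hypothetical mapping from IdP group claims (e.g. Okta groups) to
# Domino-style project roles. All names here are illustrative.
GROUP_TO_ROLE = {
    "ds-admins": "ProjectOwner",
    "ds-researchers": "Contributor",
    "ds-viewers": "ResultsConsumer",
}

def roles_from_claims(claims: dict) -> set:
    """Derive project roles from the 'groups' claim of a validated OIDC token."""
    groups = claims.get("groups", [])
    return {GROUP_TO_ROLE[g] for g in groups if g in GROUP_TO_ROLE}

claims = {"sub": "alice@example.com", "groups": ["ds-researchers", "ds-viewers"]}
print(sorted(roles_from_claims(claims)))  # ['Contributor', 'ResultsConsumer']
```

The point of routing everything through token claims is that revoking a user's group in the IdP revokes their downstream roles everywhere at once, with no keys to chase down.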
The logic is straightforward. CentOS enforces access at the system level, Domino tags and tracks each project run, and your identity provider mediates who can do what. The result is traceable activity across every notebook cell or API hit. No more shadow credentials or stale sudo privileges.
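The traceability claim boils down to a simple invariant: every run carries the authenticated identity, so OS logs and Domino control-plane logs can be joined on user and run ID. The sketch below models that record shape; the field names are assumptions for illustration, not Domino's actual log schema.

```python
# Illustrative audit record: each run is tagged with the SSO identity,
# so system-level and platform-level logs can be correlated.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class RunAuditRecord:
    run_id: str
    user: str       # identity from the SSO provider, never a shared account
    project: str
    action: str     # e.g. "notebook.execute", "api.predict"
    timestamp: str  # UTC, ISO 8601

def audit(run_id: str, user: str, project: str, action: str) -> RunAuditRecord:
    """Build an audit record stamped with the current UTC time."""
    return RunAuditRecord(
        run_id, user, project, action,
        datetime.now(timezone.utc).isoformat(),
    )

record = audit("run-42", "alice@example.com", "churn-model", "notebook.execute")
print(asdict(record))
```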
A few best practices make this integration clean and sustainable. Use role-based access controls at both layers, not just one. Rotate secrets through a central vault rather than hardcoding tokens in environment files. Make sure Domino jobs inherit network policies so outgoing connections stay inside expected CIDRs. Audit logs from both the OS and the Domino control plane tell you exactly who touched what and when.
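The CIDR rule above is easy to sanity-check in code. Here is a minimal sketch using Python's standard ipaddress module; the allow-listed ranges are example private networks, not values from any real deployment.

```python
# Minimal egress check: is a destination IP inside the expected CIDRs?
# The CIDR list is an example; substitute your deployment's ranges.
import ipaddress

ALLOWED_CIDRS = [
    ipaddress.ip_network(c) for c in ("10.0.0.0/8", "172.16.0.0/12")
]

def egress_allowed(dest_ip: str) -> bool:
    """Return True if dest_ip falls inside any allow-listed network."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in ALLOWED_CIDRS)

print(egress_allowed("10.4.2.7"))  # inside 10.0.0.0/8 -> True
print(egress_allowed("8.8.8.8"))   # public address -> False
```

In practice this check lives in network policy, not application code, but the same membership test is what the policy engine evaluates on every outbound connection.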