You just finished wiring up your Buildkite pipelines when someone asks for experiments to trigger on every merge. Then the data team mentions compliance. Suddenly your CI/CD job needs controlled access to Domino Data Lab, secure credentials, and traceability. It should be easy, but everyone knows “should” is a trap.
Buildkite is the engineer’s CI/CD Swiss Army knife. It runs builds with the flexibility of your own infrastructure while keeping the control plane managed. Domino Data Lab, on the other hand, is where enterprise data science lives: reproducible experiments, GPU workloads, governed datasets, and a single pane of glass from research to production. They solve different problems, but together they unlock a consistent delivery loop for machine learning systems.
To integrate them, think in terms of identity, environment, and automation. Buildkite pipelines trigger your jobs; those jobs run on agents that must call Domino’s API, push artifacts, or spin up compute environments. Each touchpoint needs identity. Instead of dumping service tokens into pipeline secrets like it’s 2015, use scoped access tied to your identity provider — Okta or AWS IAM with OIDC are common patterns. With Domino’s fine-grained project permissions, Buildkite can publish models or run validations without letting every agent act like an admin.
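Here is a minimal sketch of that identity handoff. It assumes your Buildkite agents run a recent agent version (which ships the `buildkite-agent oidc request-token` command) and that your Domino deployment is configured to accept the resulting token as a bearer credential; `DOMINO_HOST` and the audience value are placeholders for your own setup.

```python
import os
import subprocess

# Assumption: your Domino deployment's URL, injected via the environment.
DOMINO_HOST = os.environ.get("DOMINO_HOST", "https://domino.example.com")


def fetch_oidc_token(audience: str) -> str:
    """Ask the local Buildkite agent for a short-lived OIDC token scoped
    to `audience`. The token is minted per job, so nothing static is
    stored in pipeline secrets or agent config."""
    result = subprocess.run(
        ["buildkite-agent", "oidc", "request-token", "--audience", audience],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()


def domino_headers(token: str) -> dict:
    """Build request headers for Domino's REST API from a bearer token."""
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
```

A job step would call `fetch_oidc_token("domino")` at start, pass the result through `domino_headers`, and make its API calls with those headers. The token expires on its own; there is nothing to rotate or revoke by hand.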
Once authentication is clean, map environments. Domino supports workspace automation via API endpoints, so Buildkite can invoke reproducible jobs with versioned configs. The result is a traceable handoff from training to testing to release. Failed builds trace back to a specific experiment, not a mystery container running under someone’s user ID.
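The traceability piece mostly comes down to what you put in the job request. A sketch of that payload, assuming Buildkite's standard `BUILDKITE_COMMIT` and `BUILDKITE_BUILD_NUMBER` environment variables are available in the step; the field names here approximate Domino's job-start API and should be checked against your deployment's API docs, and the project id is hypothetical:

```python
import os
from typing import Any


def job_payload(
    project_id: str,
    command: list[str],
    commit: str,
    build_number: str,
) -> dict[str, Any]:
    """Assemble a Domino job-start payload that pins the exact commit and
    carries the Buildkite build number, so a failed build traces back to a
    specific experiment run rather than a mystery container."""
    return {
        "projectId": project_id,
        "runCommand": command,
        # Pin the run to the commit Buildkite just tested.
        "mainRepoGitRef": {"type": "commitId", "value": commit},
        # Title the run after the build that triggered it, for the audit trail.
        "title": f"buildkite-{build_number}",
    }


payload = job_payload(
    project_id="abc123",  # hypothetical Domino project id
    command=["python", "train.py"],
    commit=os.environ.get("BUILDKITE_COMMIT", "HEAD"),
    build_number=os.environ.get("BUILDKITE_BUILD_NUMBER", "local"),
)
```

POST that payload to Domino's jobs endpoint with the short-lived headers from the identity step, and every run in Domino carries its Buildkite provenance.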
Here’s the fast answer for most readers:
How do I connect Buildkite and Domino Data Lab?
Use OIDC-based service accounts in Domino and configure Buildkite pipelines to request temporary tokens at job start. This ensures short-lived credentials, clear audit trails, and no static secrets lying around in logs or agents.