You can tell when a platform is under stress. Dashboards lag, logs flood in unreadable bursts, and someone mutters that the data scientists broke staging again. That’s usually the moment someone opens Kibana inside Domino Data Lab and realizes the connection between experiment tracking and elastic log observability deserves an upgrade.
Domino Data Lab gives data teams reproducibility, secure compute environments, and centralized experiment management. Kibana gives everyone else a way to read what's actually happening under all that Python and Spark. Put the two together and you get visibility across the whole ML lifecycle, from training to deployment logs, but only if identity, permissions, and index routing line up correctly.
Most teams start by connecting Domino's internal logging to an Elastic stack. The pipeline works, but it's easy to lose traceability between a model run and its container logs. The fix is to sync identity metadata from Domino projects into Kibana index patterns. Each run, notebook, or API job carries a tag that maps to an analyst's domain account. When Kibana queries Elastic, the resulting dashboards stay scoped to only those projects the user should see. It's simple role-based access control, enforced by structure rather than guesswork.
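One lightweight way to carry that metadata is to attach it at the source, inside the run itself, so every log line ships to Elastic already tagged. The sketch below uses Python's standard `logging` module; the environment variable names (`DOMINO_PROJECT_NAME`, `DOMINO_STARTING_USERNAME`, `DOMINO_RUN_ID`) are illustrative assumptions, so check what your Domino deployment actually injects into run containers.

```python
import json
import logging
import os


class DominoTagFilter(logging.Filter):
    """Attach Domino project/user metadata to every log record.

    Env var names are assumptions; verify them against your deployment.
    """

    def filter(self, record):
        record.domino_project = os.environ.get("DOMINO_PROJECT_NAME", "unknown")
        record.domino_user = os.environ.get("DOMINO_STARTING_USERNAME", "unknown")
        record.domino_run_id = os.environ.get("DOMINO_RUN_ID", "unknown")
        return True


class JsonLineFormatter(logging.Formatter):
    """Emit one JSON object per line, easy for Filebeat/Logstash to ingest."""

    def format(self, record):
        return json.dumps({
            "message": record.getMessage(),
            "level": record.levelname,
            "domino.project": getattr(record, "domino_project", None),
            "domino.user": getattr(record, "domino_user", None),
            "domino.run_id": getattr(record, "domino_run_id", None),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonLineFormatter())

logger = logging.getLogger("model-run")
logger.addFilter(DominoTagFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("training started")
```

Because the tags travel with each record, downstream index routing and Kibana filtering can key on `domino.project` without any join back to Domino's API.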
For authentication, Domino typically federates through an enterprise IdP like Okta or Azure AD. Kibana sits behind the same OIDC provider, so sign-on reuses session tokens with no password shuffle. Map Domino roles to Kibana Spaces, and you've got cross-platform observability without leaking internal datasets. Rotating API keys every 30 days keeps auditors happy and supports your SOC 2 posture.
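On the Elastic side, pointing Kibana at the same IdP means defining an OIDC realm in `elasticsearch.yml` and referencing it from `kibana.yml`. A minimal sketch follows; the realm name, client ID, and all `example.com` endpoints are placeholders for whatever your IdP issues.

```yaml
# elasticsearch.yml -- OIDC realm (names and URLs are placeholders)
xpack.security.authc.realms.oidc.oidc1:
  order: 2
  rp.client_id: "kibana-oidc"
  rp.response_type: "code"
  rp.redirect_uri: "https://kibana.example.com/api/security/oidc/callback"
  op.issuer: "https://login.example.com"
  op.authorization_endpoint: "https://login.example.com/oauth2/v1/authorize"
  op.token_endpoint: "https://login.example.com/oauth2/v1/token"
  op.jwks_path: "op_jwks.json"
  claims.principal: sub
  claims.groups: groups

# kibana.yml -- tell Kibana to log in through that realm
xpack.security.authc.providers:
  oidc.oidc1:
    order: 0
    realm: "oidc1"
```

With the `claims.groups` mapping in place, group claims from the IdP can drive Elasticsearch role mappings, which is how Domino roles end up scoped to specific Kibana Spaces.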
Quick answer: To integrate Domino Data Lab with Kibana, route logs from Domino’s internal Elastic indices to your enterprise Elastic cluster, tag them by project and user, then apply the same OIDC configuration to both services for unified login and scoped visibility. That’s enough to get secure, contextual dashboards of every model run.
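Once the tags and shared login are in place, scoping a dashboard is just a query. Assuming the tag fields sketched earlier (`domino.project`, `domino.user`), a Kibana KQL filter for one project's errors might look like:

```
domino.project : "churn-model" and level : "ERROR"
```

Saved searches built on filters like this inherit the Space's permissions, so analysts only ever see logs for runs they own.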