The first time your data scientists and performance engineers argue about test environments, you realize the problem isn’t people; it’s plumbing. You cannot fix chaos in distributed performance testing without controlling who touches what, when, and how. That’s where pairing Domino Data Lab with LoadRunner walks in.
Domino Data Lab runs experiments, models, and heavy compute jobs across secure, policy‑governed clusters. LoadRunner, from Micro Focus (now part of OpenText), is the battle‑tested tool for simulating real user load at scale. Together, they turn guesswork into measurable performance truth. Domino handles reproducibility and data lineage. LoadRunner hits the endpoints until they sweat.
Linking the two means scientists can validate models under realistic load while operations teams keep credentials, environments, and audit trails intact. Instead of each engineer reinventing test setups, you create one integration that runs anywhere, whether on AWS, Azure, or an on‑prem stack still humming behind a firewall.
The integration flow looks like this. Domino schedules and tracks every run, attaching LoadRunner scripts as first‑class artifacts. Authentication rides through your identity provider, often via SAML or OIDC, so no loose tokens float around Slack. LoadRunner executes across the chosen infrastructure and sends back metrics automatically tagged to the experiment metadata. The result is a verifiable lineage: code, config, and load profile all stitched together.
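A minimal sketch of that flow, assuming a LoadRunner controller (or a thin wrapper service in front of it) that exposes an HTTP API for starting runs and fetching results; the base URL, endpoint paths, and payload fields below are placeholders rather than the documented LoadRunner Enterprise API, and the Domino environment variables may vary by deployment:

```python
"""Launch a load test from inside a Domino run and tag results with experiment metadata.

Illustrative sketch: LR_CONTROLLER_URL, the /tests/.../runs endpoints, and the payload
fields stand in for whatever API your LoadRunner controller or wrapper actually exposes.
"""
import json
import os
import time

import requests  # assumes requests is available in the Domino compute environment

CONTROLLER_URL = os.environ["LR_CONTROLLER_URL"]  # injected via the Domino environment
API_TOKEN = os.environ["LR_API_TOKEN"]            # short-lived token from your IdP, never hard-coded

# Domino exposes run metadata as environment variables; attaching them as tags
# gives every LoadRunner result a verifiable link back to the experiment.
experiment_tags = {
    "domino_run_id": os.environ.get("DOMINO_RUN_ID", "local"),
    "domino_project": os.environ.get("DOMINO_PROJECT_NAME", "unknown"),
    "load_profile": "steady_state_500vu",  # placeholder profile name
}

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {API_TOKEN}"})

# Kick off the load test, stitching the experiment tags onto the run.
resp = session.post(
    f"{CONTROLLER_URL}/tests/checkout-api/runs",
    json={"duration_minutes": 15, "tags": experiment_tags},
    timeout=30,
)
resp.raise_for_status()
run_id = resp.json()["run_id"]

# Poll until the run finishes, then persist metrics as a Domino artifact so they
# are versioned alongside the code, config, and load profile that produced them.
while True:
    status = session.get(f"{CONTROLLER_URL}/runs/{run_id}", timeout=30).json()
    if status["state"] in ("finished", "failed"):
        break
    time.sleep(30)

with open("results/loadrunner_metrics.json", "w") as fh:
    json.dump(status["metrics"], fh, indent=2)
```

Because the script runs as a Domino job, the metrics file lands in the tracked results of that run, and the tags make the lineage queryable from either side.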
To keep this pairing healthy, align RBAC in Domino with project roles used by LoadRunner controllers. Rotate secrets using the same mechanism that handles model credentials. Monitor throughput by pipeline, not by ego—if a test exceeds SLA thresholds, celebrate finding the bottleneck early instead of late on launch day.
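One lightweight way to enforce that last point is to fail the Domino run whenever the collected metrics breach an agreed SLA, so the bottleneck surfaces in the pipeline rather than on launch day. A sketch, assuming the metrics file written in the earlier example and threshold values that are purely illustrative:

```python
"""Fail the pipeline when LoadRunner metrics breach SLA thresholds (example values only)."""
import json
import sys

# Thresholds are illustrative; replace them with your actual SLA.
SLA = {"p95_response_ms": 800, "error_rate_pct": 1.0}

with open("results/loadrunner_metrics.json") as fh:
    metrics = json.load(fh)

breaches = {
    name: (metrics.get(name), limit)
    for name, limit in SLA.items()
    if metrics.get(name, 0) > limit
}

if breaches:
    for name, (observed, limit) in breaches.items():
        print(f"SLA breach: {name}={observed} exceeds limit {limit}")
    sys.exit(1)  # non-zero exit marks the Domino run as failed

print("All SLA thresholds met.")
```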