Your data team just spun up a fresh Databricks cluster. Models are training, dashboards look sharp, and everything sings until someone can’t access the endpoint they need. That’s the moment Databricks ML Jetty earns its keep.
Jetty is the lightweight web server embedded inside Databricks Machine Learning. It handles requests, manages session-level security, and serves your ML endpoints without forcing you to bolt on custom proxy layers. Most teams never think about Jetty until access policies, model serving, or compliance audits show up. At that point, Jetty’s job becomes clear—it is the quiet middle layer keeping identity, data, and apps synchronized.
In plain terms, Databricks ML Jetty balances two hard things: efficient inference and secure service delivery. It wraps each incoming request inside Databricks’ ML runtime and uses your configured identity providers for authentication. Jetty translates routing rules, token scopes, and role-based constraints so the same infrastructure that trains models can safely expose them to production consumers.
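To make the scope-and-role translation concrete, here is a minimal sketch of how a check of this kind behaves conceptually. The route paths, scope names, and `is_authorized` helper are illustrative assumptions, not Databricks or Jetty APIs: the point is only that a token's scopes gate which routes it may reach.

```python
# Illustrative sketch only: the paths, scope names, and helper below are
# hypothetical, not Databricks or Jetty APIs.
REQUIRED_SCOPES = {
    "/serving-endpoints/churn-model/invocations": {"serving:query"},
    "/serving-endpoints/churn-model/config": {"serving:manage"},
}

def is_authorized(path, token_scopes):
    """Allow a request only when the token carries every scope the route requires."""
    required = REQUIRED_SCOPES.get(path)
    if required is None:
        return False  # unknown routes are denied by default
    return required.issubset(token_scopes)

# A query-scoped token can invoke the model but not change its configuration.
assert is_authorized("/serving-endpoints/churn-model/invocations", {"serving:query"})
assert not is_authorized("/serving-endpoints/churn-model/config", {"serving:query"})
```

In a real deployment the mapping lives in your workspace permissions rather than a dictionary, but the deny-by-default shape is the part worth copying.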
Integration workflow
The flow is simple once you break it down. Users or automated agents authenticate through OIDC or SAML providers such as Okta or Azure AD. Jetty captures those credentials, validates them against Databricks workspace permissions, and then issues secure session tokens. Each ML endpoint runs as a managed servlet inside Jetty, so the request lifecycle, from handshake to prediction, stays consistent across environments. Logs roll into Databricks monitoring, metrics feed Grafana or CloudWatch, and nothing leaks beyond the defined policies.
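From the caller's side, that whole lifecycle collapses into one authenticated HTTP request. The sketch below builds such a request with Python's standard library; the workspace URL, endpoint name, and token value are placeholders you would supply from your own workspace, and the `/serving-endpoints/{name}/invocations` URL shape follows the Databricks model-serving REST API.

```python
import json
import urllib.request

def build_invocation_request(host, endpoint_name, token, records):
    """Build a POST request for a served model endpoint.

    host, endpoint_name, and token are placeholders; the bearer token is
    the session credential the workspace validates on each call.
    """
    url = f"{host}/serving-endpoints/{endpoint_name}/invocations"
    body = json.dumps({"dataframe_records": records}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical workspace URL, endpoint name, and token -- not sent anywhere here.
req = build_invocation_request(
    "https://example.cloud.databricks.com",
    "churn-model",
    "dapi-placeholder",
    [{"tenure": 12, "plan": "pro"}],
)
```

Dispatching `req` with `urllib.request.urlopen` (or swapping in `requests`) returns the prediction payload; everything else in the lifecycle happens server-side.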
Best practices
Map Databricks user roles directly to Jetty servlet security policies to reduce shadow permission creep. Rotate service tokens regularly with an external secrets manager such as AWS Secrets Manager or HashiCorp Vault. If audit compliance matters, enable Jetty’s request logging and push those logs to a SOC 2–ready data lake for analysis.
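The rotation advice is easy to sketch as a refresh-before-expiry wrapper. The `RotatingToken` class below is a hypothetical helper, not a Databricks, AWS, or Vault API: `fetch_secret` stands in for whatever read call your secrets manager exposes (a thin wrapper around boto3's `get_secret_value`, or a Vault KV read), and the 15-minute TTL is an assumed rotation window.

```python
import time

class RotatingToken:
    """Cache a service token and re-read it from the secrets manager
    once the assumed TTL elapses. Rotation itself happens in the
    secrets manager; this wrapper just avoids using a stale copy.
    """

    def __init__(self, fetch_secret, ttl_seconds=900):
        self._fetch = fetch_secret      # callable returning the current secret string
        self._ttl = ttl_seconds
        self._token = None
        self._fetched_at = 0.0

    def get(self):
        now = time.time()
        if self._token is None or now - self._fetched_at >= self._ttl:
            self._token = self._fetch()
            self._fetched_at = now
        return self._token

# Usage with a stand-in fetcher; in practice fetch_secret would call
# your secrets manager instead of returning a constant.
token = RotatingToken(lambda: "dapi-placeholder")
header = {"Authorization": f"Bearer {token.get()}"}
```

Because callers always go through `get()`, rotating the secret upstream propagates within one TTL with no code changes on the client side.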