You know the moment. Logs are clean. Services are humming. Then someone pings you: “Can I get temporary access to Jetty?” Another follows: “My service can’t publish to NATS.” Suddenly you’re knee-deep in manually passing tokens and flipping ACLs that should have been automated hours ago.
Jetty handles the web layer: requests, sessions, and HTTPS. NATS handles messaging: fast, lightweight pub/sub for everything from telemetry to microservice communication. Both are brilliant at what they do. But linking them through consistent identity and access control is where things usually get messy, and that's where understanding how Jetty and NATS fit together saves your day.
At its core, a Jetty-and-NATS setup means Jetty serving as a secure edge or internal proxy for services that use NATS as their communication fabric. Instead of letting each client hold credentials for both systems, Jetty can authenticate via OIDC or SAML against your identity provider (Okta, AWS IAM, or anything modern), issue short-lived tokens, and hand those down to NATS through standardized claims. This pattern shrinks your attack surface and simplifies compliance under SOC 2 or ISO frameworks because identity becomes auditable across layers.
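To make the claims hand-off concrete, here is a minimal sketch in plain Java of pulling a claim out of a JWT's payload segment. The claim name `nats_permissions` is a hypothetical example, not a standard claim, and this toy code deliberately skips signature verification; in a real deployment Jetty (or a JOSE library it delegates to) must verify the token before any claim is trusted.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ClaimExtractor {
    // Extract a single string claim from the payload segment of a JWT.
    // WARNING: no signature check here -- illustration only.
    static String extractClaim(String jwt, String claim) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) throw new IllegalArgumentException("not a JWT");
        String payload = new String(
            Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        // Naive JSON scan; a real service would use a proper JSON parser.
        Matcher m = Pattern
            .compile("\"" + Pattern.quote(claim) + "\"\\s*:\\s*\"([^\"]*)\"")
            .matcher(payload);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Hand-built unsigned token, for illustration only.
        String header = Base64.getUrlEncoder().withoutPadding()
            .encodeToString("{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));
        String payload = Base64.getUrlEncoder().withoutPadding()
            .encodeToString(
                "{\"sub\":\"svc-billing\",\"nats_permissions\":\"billing.>\"}"
                    .getBytes(StandardCharsets.UTF_8));
        String jwt = header + "." + payload + ".";
        System.out.println(extractClaim(jwt, "sub"));              // svc-billing
        System.out.println(extractClaim(jwt, "nats_permissions")); // billing.>
    }
}
```

The extracted claim is then what you would map to a NATS subject permission (for example, letting `svc-billing` publish only under `billing.>`), so the messaging layer enforces what the identity layer asserted.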
Think of it like a backstage pass system. Jetty checks who you are; NATS handles what you can send. Together they replace static credentials with dynamic permissions tied to real users or machines. No more SSHing into boxes to rotate secrets; the rotation happens automatically as tokens expire and refresh.
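That expire-and-refresh cycle can be sketched as a small token cache that re-fetches shortly before expiry, so callers never hold a stale credential. The `Supplier` below stands in for whatever call your identity provider exposes; the lifetimes and the class itself are assumptions for illustration, not a specific Jetty or NATS API.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class RefreshingToken {
    private final Supplier<String> issuer; // fetches a fresh token from the IdP
    private final Duration ttl;            // token lifetime
    private final Duration skew;           // refresh this long before expiry
    private String token;
    private Instant expiresAt = Instant.MIN;

    RefreshingToken(Supplier<String> issuer, Duration ttl, Duration skew) {
        this.issuer = issuer;
        this.ttl = ttl;
        this.skew = skew;
    }

    // Returns a valid token, transparently re-issuing once the current
    // one is within `skew` of its expiry.
    synchronized String get() {
        if (Instant.now().isAfter(expiresAt.minus(skew))) {
            token = issuer.get();
            expiresAt = Instant.now().plus(ttl);
        }
        return token;
    }

    public static void main(String[] args) {
        int[] issued = {0};
        RefreshingToken t = new RefreshingToken(
            () -> "tok-" + (++issued[0]),
            Duration.ofSeconds(60), Duration.ofSeconds(10));
        System.out.println(t.get()); // first call issues a token
        System.out.println(t.get()); // second call reuses the cached one
    }
}
```

Wiring a wrapper like this into your NATS connection options means reconnects pick up a fresh token automatically, which is the whole point of replacing static secrets.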
Best Practices to Keep Jetty and NATS Locked Down