Picture this: a self‑hosted Confluence instance grinding under plugin sprawl, while Jetty hums quietly beneath it, handling every web request like an overworked traffic cop. Most admins never think about Jetty. They just know Confluence runs on it. But understanding how Confluence Jetty works—and how to tune it—can turn your Atlassian stack from sluggish to sharp.
Confluence is Atlassian’s knowledge base engine, built on Java and thick with plugins. Jetty is the lightweight HTTP server embedded inside it. Together, they power the browser connections, API calls, and authentication flows you depend on. When properly configured, Jetty is fast, secure, and surprisingly flexible. When ignored, it can be the quiet cause of latency, memory leaks, and misbehaving threads.
The heart of the pairing is the request lifecycle. A client hits Confluence, Jetty accepts the request, maps it to a servlet, and manages the persistent connection. Its thread pools, session stores, and TLS configuration decide how predictable your performance remains under load. Simple parameters like maxThreads and idleTimeout define whether that Monday‑morning scrum drags or flies.
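As a sketch, those knobs live in Jetty's start.ini. The property names below follow Jetty 9.4's threadpool module; the values are illustrative starting points, not recommendations:

```ini
# start.ini -- assumes Jetty 9.4 threadpool module property names.
--module=threadpool
jetty.threadPool.minThreads=10
# Cap concurrent request-handling threads; size to CPU count and load profile.
jetty.threadPool.maxThreads=200
# Idle threads are reclaimed after this many milliseconds.
jetty.threadPool.idleTimeout=60000
```

If the pool is exhausted, requests queue rather than fail, so a too-low maxThreads shows up as latency first and errors only later.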
For teams integrating single sign‑on or external identity through services like Okta or AWS IAM Identity Center, Jetty sits between the browser and Confluence’s authentication filters. Correct proxy and header handling are critical here. Miss one X‑Forwarded header and SSO breaks in the ugliest way possible. Map your reverse‑proxy settings carefully and test the full OIDC flow before production.
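A minimal sketch of the proxy side, assuming nginx in front of Confluence; the hostname and backend port are placeholders for your own values:

```nginx
# Hypothetical reverse-proxy block; names and ports are placeholders.
server {
    listen 443 ssl;
    server_name confluence.example.com;

    location / {
        proxy_pass http://127.0.0.1:8090;
        # Preserve the original host and scheme so redirect and
        # callback URLs built during the SSO flow stay correct.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

On the Jetty side these headers only take effect if forwarded-header handling is enabled (in Jetty 9 that is the http-forwarded module, which installs a ForwardedRequestCustomizer); without it, Jetty sees http and the internal port, and OIDC redirects break.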
A common optimization pattern is to externalize Jetty’s configuration and automate restarts. Use separate logs for request and error output, rotate them aggressively, and monitor active threads. Jetty will reward that discipline with cleaner debug trails and lower resource spikes.
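The request-log half of that pattern can also be expressed in start.ini. Again the property names follow Jetty 9.4's requestlog module and the retention value is an illustrative assumption:

```ini
# start.ini -- assumes Jetty 9.4 requestlog module property names.
--module=requestlog
# One file per day; the date pattern in the name drives rotation.
jetty.requestlog.filePath=logs/yyyy_mm_dd.request.log
# Rotate aggressively: keep two weeks, then delete.
jetty.requestlog.retainDays=14
```

Keeping access entries out of the error log means a spike in 5xx responses is visible at a glance instead of being buried in routine traffic.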