Picture this: your API gateway is humming at scale, traffic spikes are hitting from every direction, and someone asks who exactly accessed that internal endpoint five minutes ago. If your APIs run behind Apigee, you're leaning on Jetty to keep request handling reliable, secure, and auditable. That's where the quiet power of Apigee Jetty shows up.
Apigee uses Jetty as its embedded application server. Jetty is lightweight, fast, and fits neatly into the distributed architecture Google Cloud favors for Apigee. It acts as the runtime container behind the proxy endpoints you expose. Together, Apigee and Jetty manage the entire lifecycle of an API call, from authentication and policy enforcement to routing and metrics export. You get centralized control without hand-coding every last HTTP decision.
The integration logic is simple but clever. Jetty handles low-level request parsing, connection pooling, and thread management. Apigee layers on identity, quotas, and dynamic routing based on defined policies. That means you can scale out your API clusters while letting Jetty keep the plumbing tight and Apigee keep the governance. Think of Jetty as the dependable bouncer and Apigee as the policy manager deciding who gets in and how long they can stay.
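The bouncer-and-policy-manager split above can be sketched in plain Java. This is a conceptual illustration only, not Apigee's actual API: the class and method names (`PolicyChain`, `handle`, the backend URL) are assumptions invented for the example. It shows the order of gateway-level checks, identity first, then quota, then dynamic routing, while the real transport work (parsing, pooling, threads) stays with the embedded server.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Conceptual sketch of gateway policy layering (illustrative names,
// not Apigee's real API): identity check, then quota, then routing.
public class PolicyChain {
    private final Map<String, String> apiKeys;      // api key -> client id
    private final Map<String, AtomicInteger> quotaUsed = new ConcurrentHashMap<>();
    private final int quotaLimit;

    public PolicyChain(Map<String, String> apiKeys, int quotaLimit) {
        this.apiKeys = apiKeys;
        this.quotaLimit = quotaLimit;
    }

    /** Returns the backend target URL, or null if the request is rejected. */
    public String handle(String apiKey, String path) {
        String client = apiKeys.get(apiKey);            // identity enforcement
        if (client == null) return null;                // reject: unknown caller
        int used = quotaUsed
            .computeIfAbsent(client, c -> new AtomicInteger())
            .incrementAndGet();                         // quota enforcement
        if (used > quotaLimit) return null;             // reject: over quota
        return "https://backend.internal" + path;       // dynamic routing
    }

    public static void main(String[] args) {
        PolicyChain gw = new PolicyChain(Map.of("key-123", "alice"), 100);
        System.out.println(gw.handle("key-123", "/orders"));   // routed
        System.out.println(gw.handle("wrong-key", "/orders")); // null (rejected)
    }
}
```

The point of the ordering is that transport concerns never mix with governance: the server hands a parsed request up, and the policy layer decides admission before any backend call is made.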
To configure Apigee Jetty well, focus on isolation and visibility. Map your RBAC model carefully so that only a small set of operators can modify Jetty configs. Rotate TLS certificates and OIDC secrets regularly. Monitor thread pools so the server doesn't start queuing or throttling under load. And if an enterprise identity provider such as Okta or Azure AD is already issuing OIDC tokens, connect it directly through Apigee's security policies so Jetty only ever sees authenticated traffic.
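On the thread-pool point: Jetty's `QueuedThreadPool` exposes gauges for total threads, idle threads, and queued jobs that are worth alerting on. As a hedged, stdlib-only analogue (so it runs without Jetty on the classpath), the sketch below polls a `java.util.concurrent.ThreadPoolExecutor` for the same kind of saturation signal; the `PoolMonitor` class and the 80% threshold are assumptions for illustration, not an Apigee or Jetty default.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative monitoring sketch: a stdlib stand-in for watching
// Jetty's QueuedThreadPool. Alert when most workers are busy or
// requests are already waiting in the queue.
public class PoolMonitor {
    /** True when the pool is near saturation and new requests will queue. */
    public static boolean nearSaturation(ThreadPoolExecutor pool, double threshold) {
        double busyRatio = (double) pool.getActiveCount() / pool.getMaximumPoolSize();
        return busyRatio >= threshold || !pool.getQueue().isEmpty();
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));
        System.out.println(nearSaturation(pool, 0.8)); // idle pool: false
        pool.shutdown();
    }
}
```

Polling a busy ratio plus queue depth catches both failure modes: a pool that is maxed out, and a pool that looks healthy on thread count but is silently accumulating backlog.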
Quick Answer: Apigee Jetty runs as the embedded web server inside Apigee, handling every HTTP connection and enforcing gateway-level policies. It optimizes performance by combining low-latency request handling with centralized API management.