You just shipped a new API. It runs fast, logs cleanly, and your CDN handles global reach. Then your security team asks where identity enforcement happens. You look at your Caddy reverse proxy and the new Fastly Compute@Edge service scripts and wonder: could these two actually work together?
They can, and when they do, the result is fast, policy-aware traffic routing that feels invisible to developers.
Caddy is a modern web server and proxy that handles TLS, routing, and authentication with strong defaults and clean configuration. Fastly Compute@Edge extends CDN logic into programmable runtime decisions that run milliseconds from the user. Together, they turn your infrastructure into a fast, globally distributed identity-aware access layer. In practical terms, Caddy authenticates and normalizes requests while Compute@Edge evaluates logic closer to the edge, reducing latency and risk.
Imagine requests hitting Fastly first. Compute@Edge evaluates user tokens, geo rules, or custom headers, then passes valid traffic to Caddy. Caddy terminates TLS, checks upstream permissions, and serves content or APIs cleanly. Audits now live at both ends: global edge logs and local proxy metrics that can be correlated request by request.
The integration workflow relies on clear identity flows. Fastly enforces front-door validation, while Caddy handles internal authentication, validating client certificates or tokens issued by OIDC providers such as Okta or Auth0. This dual model eliminates the need to mirror policies across environments: you write the logic once, and it is enforced everywhere.
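On the Caddy side, one common way to wire in an OIDC-aware verifier is the `forward_auth` directive, which sends each request to an auth service before proxying it upstream. A minimal sketch, assuming a hypothetical internal verifier at `auth.internal:9091` and backend at `backend.internal:8080`:

```
api.example.com {
	# Ask the auth service to validate the request first.
	forward_auth auth.internal:9091 {
		uri /verify
		# Pass verified identity on to the backend.
		copy_headers X-User-Email X-User-Groups
	}
	reverse_proxy backend.internal:8080
}
```

If the verifier returns a 2xx response, Caddy proxies the request and copies the listed identity headers downstream; any other status short-circuits the request.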
Fast, secure traffic routing with Caddy and Fastly Compute@Edge
To connect both, assign a dedicated Fastly service to the Caddy instance’s external endpoint. Configure Compute@Edge to evaluate authorization headers and propagate identity metadata downstream. Caddy reads those headers and maps them to internal policy decisions. Certificate rotation, rate limiting, and per-region authentication now operate automatically without scripts scattered across stacks.
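The header-to-policy mapping on the Caddy side can be expressed with request matchers. A sketch, assuming the Fastly service sets hypothetical `X-Edge-User-Id` and `X-Edge-User-Role` headers and that backend hostnames are placeholders:

```
api.example.com {
	# Reject anything that arrived without edge-set identity metadata.
	@missingIdentity not header X-Edge-User-Id *
	respond @missingIdentity "unauthorized" 403

	# Route by the role Compute@Edge propagated downstream.
	@admins header X-Edge-User-Role admin
	reverse_proxy @admins admin-backend.internal:8080
	reverse_proxy backend.internal:8080
}
```

Note that this only stays safe if Caddy accepts traffic solely from the Fastly service (for example via mTLS or IP allowlisting), so the identity headers cannot be spoofed by clients connecting directly.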