You know that sinking feeling when your cloud access scripts break right before a deploy? That moment when a token expired, an RBAC rule changed, or someone “cleaned up” a service principal that your pipeline still needed? Azure Storage Caddy exists to make that pain a memory instead of a recurring meeting topic.
At its core, Azure Storage handles blobs, files, queues, and tables at scale. Caddy is a modern, automation-friendly web server and reverse proxy that brings declarative configuration, automatic HTTPS, and policy control to the edge. Put the two together and you get authenticated, encrypted, and auditable file delivery that, once set up correctly, feels effortless.
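As a rough picture of the shape this takes, here is a minimal Caddyfile sketch with Caddy terminating TLS in front of a Blob Storage endpoint. The hostname and storage account name are placeholders, and a real deployment would add the authentication layer described below:

```caddyfile
# Hypothetical sketch: the hostname and account name are illustrative.
assets.example.com {
    # Caddy obtains and renews the TLS certificate automatically.
    reverse_proxy https://mystorageaccount.blob.core.windows.net {
        # Present the storage account's hostname to the upstream.
        header_up Host mystorageaccount.blob.core.windows.net
    }
}
```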
The integration flow
Start by thinking of Azure Storage as the vault and Caddy as the key master. You connect Azure credentials or managed identities to Caddy through a storage-aware plugin. Caddy reads its configuration from the environment, signs requests with your chosen identity (an Azure Managed Identity, a Service Principal, or an OIDC token), and serves or caches objects over built-in HTTPS. The workflow maps credentials to containers behind well-defined routes, so developers never handle raw keys and access aligns directly with RBAC in Azure AD (now Microsoft Entra ID).
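The request-signing step can be illustrated in miniature. The stdlib-only Python sketch below mimics the shape of shared-key signing for storage requests (an HMAC-SHA256 over a canonical request string, keyed with the account key); Azure's real canonicalization rules are considerably more involved, and the canonical string and key here are purely illustrative.

```python
import base64
import hashlib
import hmac

def sign_request(string_to_sign: str, account_key_b64: str) -> str:
    """Toy version of shared-key signing: HMAC-SHA256 over a
    canonical request string, keyed with the decoded account key."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# Illustrative canonical string: verb, resource path, date header.
canonical = "GET\n/mycontainer/report.pdf\nx-ms-date:Mon, 01 Jan 2024 00:00:00 GMT"
fake_key = base64.b64encode(b"not-a-real-account-key").decode("ascii")
signature = sign_request(canonical, fake_key)
```

The useful property is that the signature is deterministic for a given key and request, so the server can recompute and compare it without the key ever traveling with the request.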
When you update assets or rotate secrets, Caddy picks up the latest tokens automatically: no manual restarts, and no secrets embedded in CI pipelines. Every request that touches your container is logged with traceable origin context.
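The refresh behavior boils down to caching a token and fetching a new one shortly before it expires. This is a simplified stdlib sketch of that pattern, not Caddy's actual implementation; the fetcher standing in for a managed-identity token request is hypothetical.

```python
import time
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Token:
    value: str
    expires_at: float  # epoch seconds

class RefreshingTokenSource:
    """Caches a token and fetches a new one shortly before expiry,
    so callers never present a stale credential."""

    def __init__(self, fetch: Callable[[], Token], skew: float = 300.0):
        self._fetch = fetch
        self._skew = skew  # refresh this many seconds before expiry
        self._cached: Optional[Token] = None

    def get(self) -> str:
        now = time.time()
        if self._cached is None or now >= self._cached.expires_at - self._skew:
            self._cached = self._fetch()  # e.g. call the identity endpoint
        return self._cached.value

# Hypothetical fetcher standing in for a managed-identity token request.
counter = {"fetches": 0}

def fake_fetch() -> Token:
    counter["fetches"] += 1
    return Token(value=f"token-{counter['fetches']}", expires_at=time.time() + 3600)

source = RefreshingTokenSource(fake_fetch)
first = source.get()
second = source.get()  # served from cache; no second fetch
```

The early-refresh skew matters in practice: without it, a token that expires mid-request produces exactly the pre-deploy failures described at the top of this article.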
Setup tips that save hours
Keep storage access scoped to the least privilege required: a single overly broad SAS token can expose an entire container. Prefer Azure role-based access control (RBAC) instead; the built-in Storage Blob Data Reader or Storage Blob Data Contributor roles often suffice for data access. Enable versioning and soft delete so you can recover from accidental overwrites or deletions. Finally, keep Caddy's configuration declarative so changes are easy to review and roll back in Git.
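One way to make "least privilege" concrete in review tooling is to derive the narrowest role from the verbs a pipeline actually needs. The role names below are real Azure built-in roles, but the permission mapping is a deliberately simplified illustration (real RBAC uses fine-grained action strings), and the helper is hypothetical.

```python
# Simplified map of Azure built-in roles to data-plane verbs.
# Real RBAC roles grant fine-grained actions; this is an illustrative subset.
ROLE_PERMISSIONS = {
    "Storage Blob Data Reader": {"read", "list"},
    "Storage Blob Data Contributor": {"read", "list", "write", "delete"},
}

def least_privileged_role(needed: set) -> str:
    """Pick the narrowest built-in role that covers the requested verbs."""
    candidates = [
        (len(perms), role)
        for role, perms in ROLE_PERMISSIONS.items()
        if needed <= perms
    ]
    if not candidates:
        raise ValueError("no built-in role covers %r" % needed)
    return min(candidates)[1]  # smallest permission set wins
```

A read-only asset pipeline resolves to the Reader role, and only a pipeline that actually writes gets Contributor, which keeps role assignments reviewable alongside the declarative Caddy config.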