When your deployment depends on both GitOps and resilient storage, there’s a moment where YAML meets raw performance. That’s the moment ArgoCD and Ceph meet. One manages desired state, the other keeps every bit of your data safe even when disks fail. Put them together and your cluster starts behaving like it knows what it’s doing.
ArgoCD is Kubernetes’ control freak in the best way. It continuously syncs your applications to the exact version defined in Git, forever enforcing your declared intent. Ceph, on the other hand, handles distributed storage with high durability and flexible replication. Using ArgoCD to manage Ceph (usually through the Rook operator) turns your storage layer into version-controlled infrastructure: upgrades, configuration changes, and even CRD updates roll through controlled pipelines rather than late-night manual edits.
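As a sketch, an ArgoCD Application tracking the Rook-Ceph operator Helm chart might look like the following. The chart repo URL is Rook's public Helm repository; the pinned version and namespace names are illustrative, so adjust them for your environment:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rook-ceph-operator
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.rook.io/release  # Rook's public Helm repo
    chart: rook-ceph
    targetRevision: v1.13.1                  # pin the operator version in Git (illustrative)
  destination:
    server: https://kubernetes.default.svc
    namespace: rook-ceph
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
    syncOptions:
      - CreateNamespace=true
```

With `selfHeal` enabled, any out-of-band `kubectl edit` gets reverted on the next reconciliation, which is exactly the enforcement behavior described above.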
When integrating ArgoCD and Ceph, think in terms of desired-state flows. ArgoCD watches your Ceph Helm charts or manifests stored in Git. On each commit, it pulls, diffs, and applies them so the cluster matches Git; Ceph’s operator then handles the actual cluster orchestration. Identity flows through Kubernetes and ArgoCD RBAC, and if you’re using an OIDC provider such as Okta, you can map developer groups directly to permissions for creating or modifying storage pools. The result: predictable operations with no click-heavy dashboards or tribal scripts.
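The group-to-permission mapping lives in ArgoCD's RBAC ConfigMap. A minimal sketch, assuming a hypothetical Okta group named `okta-storage-admins` and Applications whose names start with `rook-ceph-` in the `default` project:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    # Grant a storage-admin role view and sync rights over the Ceph apps
    p, role:storage-admin, applications, get, default/rook-ceph-*, allow
    p, role:storage-admin, applications, sync, default/rook-ceph-*, allow
    # Map the OIDC group claim to that role
    g, okta-storage-admins, role:storage-admin
  policy.default: role:readonly  # everyone else gets read-only
```

Because this ConfigMap is itself a manifest, the access policy can live in the same Git repo as the storage it governs.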
One common gotcha is secret management. Ceph keys, S3 access credentials, and encryption tokens should never live inside Git. Instead, place them in an external secret store and reference them from your manifests, for example via a tool like External Secrets Operator or Sealed Secrets. ArgoCD syncs the references, not the secrets. Rotation becomes safer and reproducible, and your SOC 2 auditor sleeps better.
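To illustrate the pattern with External Secrets Operator: the manifest below is safe to commit because it contains only references. The `SecretStore` name and the key path in the backing vault are hypothetical:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: ceph-s3-credentials
  namespace: rook-ceph
spec:
  refreshInterval: 1h          # re-fetch hourly, so rotation propagates
  secretStoreRef:
    name: vault-backend        # hypothetical SecretStore pointing at your vault
    kind: ClusterSecretStore
  target:
    name: ceph-s3-credentials  # the Kubernetes Secret the operator will create
  data:
    - secretKey: AccessKey
      remoteRef:
        key: storage/ceph-s3   # path in the external store (illustrative)
        property: access_key
    - secretKey: SecretKey
      remoteRef:
        key: storage/ceph-s3
        property: secret_key
```

ArgoCD syncs this ExternalSecret object; the operator materializes the actual Secret in-cluster, and the credential values never touch Git history.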
Quick snippet answer:
To connect ArgoCD and Ceph, store your Ceph cluster manifests or Helm releases in Git, configure ArgoCD to track that repo, and rely on Ceph’s operator to reconcile resources. This makes your storage lifecycle Git-driven, versioned, and cleanly auditable.
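Once the operator is running, the storage resources themselves live in Git too. As a minimal sketch, a replicated Rook block pool (pool name is illustrative; the namespace assumes Rook's default):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host  # spread replicas across distinct hosts
  replicated:
    size: 3            # keep three copies of every object
```

Changing `size` in a pull request, reviewing it, and letting ArgoCD sync it is the Git-driven storage lifecycle in practice: the diff is visible, the change is attributable, and the rollback is a revert.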