You just deployed a service on GKE that needs a Postgres connection, and now you are knee-deep in service accounts, private IPs, and firewall rules. Congratulations, you’ve hit the classic “connect Cloud SQL to Google Kubernetes Engine without losing your mind” puzzle. The good news: it’s solvable, and once you know the flow, it’s not scary at all.
Cloud SQL handles managed databases elegantly, freeing you from the patch-and-backup treadmill. Google Kubernetes Engine runs your workloads at scale without making you manage individual VMs. Together they give you a clean separation between application logic and persistence, as long as you connect them securely and efficiently. That connection, however, is where most teams bang their heads.
The key insight is identity. Kubernetes pods don't magically inherit Google Cloud IAM credentials. Each pod needs a trusted way to prove who it is before getting access to Cloud SQL. The clean solution is Workload Identity. It maps a Kubernetes service account to a Google service account, which then holds the necessary IAM roles for Cloud SQL. Once bound, your app can use the Cloud SQL Auth Proxy or a direct private-IP connection without storing secrets in manifests or ConfigMaps.
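The binding takes three steps: grant the Google service account the Cloud SQL role, allow the Kubernetes service account to impersonate it, and annotate the Kubernetes service account with the mapping. A sketch with hypothetical names (my-project, my-gsa, my-namespace, my-ksa — substitute your own):

```shell
# 1. Give the Google service account access to Cloud SQL.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-gsa@my-project.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"

# 2. Let the Kubernetes service account impersonate the Google one.
gcloud iam service-accounts add-iam-policy-binding \
  my-gsa@my-project.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"

# 3. Annotate the Kubernetes service account so GKE knows the mapping.
kubectl annotate serviceaccount my-ksa \
  --namespace my-namespace \
  iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com
```

This assumes Workload Identity is already enabled on the cluster and node pool; without that, the annotation is silently ignored.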
The proxy sits next to your application, exchanges short-lived IAM tokens with Google Cloud, and wraps every database connection in TLS. You can deploy it as a sidecar container or as a DaemonSet. Either way, it handles authentication, rotates ephemeral certificates automatically, and exposes a local port your application connects to. By the time a request leaves the cluster, it already carries the right identity proof. No keys, no leaks.
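The sidecar pattern above looks roughly like this in a Deployment manifest — a sketch, assuming the hypothetical names my-app, my-ksa, and an instance connection name my-project:us-central1:my-instance:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-ksa      # bound to a Google SA via Workload Identity
      containers:
      - name: my-app
        image: my-app:latest
        env:
        - name: DB_HOST
          value: "127.0.0.1"          # the app talks to the local proxy, not the DB
        - name: DB_PORT
          value: "5432"
      - name: cloud-sql-proxy
        image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.0
        args:
        - "--private-ip"              # use the instance's private address
        - "--port=5432"
        - "my-project:us-central1:my-instance"
        securityContext:
          runAsNonRoot: true
```

The application never sees database credentials or certificates; it just connects to localhost, and the proxy container does the IAM handshake.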
If something fails, check roles first. The Google service account needs roles/cloudsql.client. For private IP setups, verify that the GKE nodes run in the VPC that is peered with Cloud SQL through private services access. When latency spikes, look for regional mismatches between the cluster and the instance. Keep all resources in the same region whenever possible.
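Two quick checks cover most of these failures — again with hypothetical names (my-project, my-instance):

```shell
# Who actually holds roles/cloudsql.client in this project?
gcloud projects get-iam-policy my-project \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/cloudsql.client" \
  --format="value(bindings.members)"

# Does the instance have a private IP, and in which region does it live?
gcloud sql instances describe my-instance \
  --format="value(region, ipAddresses[].ipAddress)"
```

If your Google service account is missing from the first list, or the second shows no private address (or a different region than your cluster), you have found the problem.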