You just want your PostgreSQL database running on Google Kubernetes Engine (GKE) to be reachable when it should be, and invisible when it shouldn’t. Yet many teams end up cobbling together service accounts, static secrets, and manual IP whitelists that age about as well as milk in a heat wave. There’s a better pattern.
At its core, GKE orchestrates containers, while PostgreSQL powers reliable, relational data. Together, they can run data-driven services at scale. The challenge is access. Who gets to connect? How do you handle rotation, auditing, and secure connections without reconfiguring pods every sprint? Done right, your GKE workloads talk to PostgreSQL using short-lived, identity-aware credentials that enforce least privilege automatically.
The integration workflow looks like this: GKE workloads use Workload Identity to impersonate a Google service account instead of carrying raw credentials. That account maps to a role in Cloud SQL or self-hosted PostgreSQL, often via IAM or OIDC. Connections are made over private IP or through a proxy sidecar such as the Cloud SQL Auth Proxy. Permissions are scoped to the job or namespace, so your CI tasks can write metrics while your API pods only read. The result is clean separation with no lingering database users.
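A minimal sketch of that wiring, assuming hypothetical names throughout (project `my-project`, namespace `api`, Kubernetes service account `api-sa`, Google service account `pg-client`); adapt them to your environment:

```shell
# 1. Create a dedicated Google service account for database access.
gcloud iam service-accounts create pg-client \
  --project=my-project

# 2. Grant it only the Cloud SQL client role (permission to connect,
#    nothing more).
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:pg-client@my-project.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"

# 3. Allow the Kubernetes service account in namespace "api" to
#    impersonate the Google service account via Workload Identity.
gcloud iam service-accounts add-iam-policy-binding \
  pg-client@my-project.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:my-project.svc.id.goog[api/api-sa]"

# 4. Annotate the Kubernetes service account so pods using it receive
#    short-lived tokens for the Google service account.
kubectl annotate serviceaccount api-sa \
  --namespace=api \
  iam.gke.io/gcp-service-account=pg-client@my-project.iam.gserviceaccount.com
```

Pods running under `api-sa` can then reach the database through a Cloud SQL Auth Proxy sidecar, with no key file mounted anywhere.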
Best practices help keep this setup stable:
- Use Workload Identity instead of secret-mounted keys. Tokens are short-lived, so there are no long-lived credentials to leak or bake into images.
- Keep PostgreSQL roles minimal. Avoid using “postgres” for everything; map Kubernetes service accounts to purpose-built roles.
- Rotate credentials automatically through IAM or a delegated secrets manager.
- Centralize network egress rules. Fine-grained service accounts are only half the security story; routing matters too.
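The role-minimization advice above can be made concrete in SQL. A sketch, assuming a hypothetical `app` database with a `metrics` table, roles `api_reader` and `ci_writer`, and Cloud SQL IAM authentication already enabled for the service account:

```shell
# Run once as an administrator; all names are illustrative.
psql -h 10.0.0.5 -U postgres -d app <<'SQL'
-- Read-only role for API pods.
CREATE ROLE api_reader NOLOGIN;
GRANT CONNECT ON DATABASE app TO api_reader;
GRANT USAGE ON SCHEMA public TO api_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO api_reader;

-- Write role for CI tasks that record metrics.
CREATE ROLE ci_writer NOLOGIN;
GRANT CONNECT ON DATABASE app TO ci_writer;
GRANT USAGE ON SCHEMA public TO ci_writer;
GRANT INSERT, UPDATE ON metrics TO ci_writer;

-- With Cloud SQL IAM authentication, grant the purpose-built role to
-- the IAM database user that the Google service account maps to
-- (the service account email minus ".gserviceaccount.com").
GRANT api_reader TO "pg-client@my-project.iam";
SQL
```

Because the IAM user holds only a membership in a purpose-built role, revoking access is a single `REVOKE`, and no shared `postgres` password ever reaches a pod.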
Benefits you can count on: