Your Kubernetes app just worked flawlessly in staging, but once deployed on GKE it started throwing “AccessDenied” from S3. Somewhere between the pod, the service account, and your S3 bucket, the trust line broke. You can feel the pain of every engineer who’s fought that same invisible permission chain.
Google GKE S3 integration sounds odd at first. After all, GKE runs on Google Cloud, and S3 lives in AWS. But in multi-cloud environments, this pairing shows up constantly. You might rely on S3 for data lakes, logs, or backups, while your workloads scale on GKE. The trick is bridging those identities securely, without passing around long‑lived keys or leaking credentials between clouds.
The workflow is simpler than it looks. Each GKE workload can run under a Kubernetes service account linked to a Google Workload Identity. That identity can then obtain short-lived AWS credentials by assuming an IAM role through OIDC federation. The OIDC token from Google acts as proof of identity for AWS. In practice, the pod presents its OIDC token to AWS STS (`AssumeRoleWithWebIdentity`), receives temporary scoped credentials, and then talks to S3. No static keys, no manual rotation, no environment variables that keep you up at night.
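On the AWS side, the piece that makes this exchange work is the IAM role's trust policy. Here is a minimal sketch of one, built as a Python dict so the structure is easy to see; the account ID, OIDC provider path, and service account name are all hypothetical placeholders you would replace with your own:

```python
import json

# Hypothetical identifiers -- substitute your AWS account ID, the OIDC
# provider you registered for the cluster, and your namespace/service account.
ACCOUNT_ID = "123456789012"
OIDC_PROVIDER = "container.googleapis.com/v1/projects/p/locations/l/clusters/c"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{ACCOUNT_ID}:oidc-provider/{OIDC_PROVIDER}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                # Lock the role to a single Kubernetes service account so
                # other workloads in the cluster cannot assume it.
                "StringEquals": {
                    f"{OIDC_PROVIDER}:sub": "system:serviceaccount:default:my-app"
                }
            },
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The `Condition` block is what scopes the role down: without it, any token issued by that OIDC provider could assume the role, not just your one service account.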
When it breaks, check the boring stuff first. Make sure the Kubernetes service account annotation matches the federated AWS IAM role ARN exactly, character for character. Verify that the OIDC issuer URL in the IAM trust policy matches the token's `iss` claim, including the scheme and any trailing path. And never, ever reuse credentials across namespaces. Those are the small details that cut off your access faster than any firewall.
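When comparing the trust policy against what the pod actually presents, it helps to look at the token's claims directly. A small sketch of that debugging step, using only the standard library and a fabricated token (a real one would come from the projected token file inside the pod); note this decodes without verifying the signature, which is fine for inspection but nothing else:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload of a JWT so its claims can be inspected.
    This does NOT verify the signature -- debugging use only."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWTs strip off.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64url(obj: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# Fabricated header.payload.signature token standing in for the real
# projected token; the claim values here are hypothetical.
claims = {
    "iss": "https://container.googleapis.com/v1/projects/p/locations/l/clusters/c",
    "aud": "sts.amazonaws.com",
    "sub": "system:serviceaccount:default:my-app",
}
token = f"{b64url({'alg': 'RS256'})}.{b64url(claims)}.sig"

decoded = decode_jwt_payload(token)
# The iss claim must match the OIDC provider URL registered in the IAM
# trust policy exactly; the sub claim must match the StringEquals condition.
print(decoded["iss"])
print(decoded["sub"])
```

A mismatch in any of these three claims, even a trailing slash in the issuer, produces exactly the opaque "AccessDenied" this post opened with.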
Quick answer: To connect Google GKE to S3, use OIDC-based federation between your GKE service account and AWS IAM role. This issues temporary credentials that allow workloads in GKE to access S3 without storing secrets.
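Inside the pod, the wiring can stay minimal because AWS SDKs already know how to perform the web-identity exchange when two standard environment variables are present. A sketch of that wiring, with a hypothetical role ARN and token path (in practice you would set these in the pod spec, not in application code):

```python
import os

# Hypothetical values -- substitute your IAM role ARN and the path where
# the OIDC token is projected into the pod's filesystem.
os.environ["AWS_ROLE_ARN"] = "arn:aws:iam::123456789012:role/gke-s3-reader"
os.environ["AWS_WEB_IDENTITY_TOKEN_FILE"] = "/var/run/secrets/tokens/oidc-token"

# With these two variables set, AWS SDKs (boto3, the Go SDK, the CLI, ...)
# call sts:AssumeRoleWithWebIdentity on your behalf and refresh the
# temporary credentials automatically -- no access keys anywhere in the pod.
```

That automatic refresh is the payoff of the whole setup: credentials expire on their own schedule, and nothing long-lived ever touches the cluster.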