Kubectl to AWS S3 with Read-Only IAM Roles in Kubernetes
Your terminal waits. You need kubectl to pull data from AWS S3, but your role is read-only and the clock is ticking.
When Kubernetes workloads interact with S3, IAM roles define the scope. Many engineers try full-access policies first. That approach is unnecessary and dangerous, and it slows compliance reviews. The right approach is granting a read-only AWS S3 role, binding it cleanly to your Kubernetes service account, and keeping credentials out of pods.
Start with the IAM policy. Use s3:GetObject and s3:ListBucket permissions only. Attach this policy to a role. In EKS, enable IAM Roles for Service Accounts (IRSA). This lets your pod assume the role directly via its service account annotation.
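A minimal read-only policy might look like the following sketch; the bucket name is a placeholder you would swap for your own. Note that s3:ListBucket applies to the bucket ARN, while s3:GetObject applies to the objects inside it.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::your-bucket-name"
    },
    {
      "Sid": "ReadObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```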
Example process:
- Create the IAM role with trust policy for the EKS OIDC provider.
- Attach the read-only S3 policy.
- Add the annotation eks.amazonaws.com/role-arn to the Kubernetes service account (a minimal sketch follows this list).
- Deploy your pod with that service account.
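As a rough sketch of these steps, the role's trust policy and the annotated service account might look like this. The account ID, OIDC provider ID, region, namespace, and names are all placeholders you would replace with your cluster's values.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890:sub": "system:serviceaccount:default:s3-reader"
        }
      }
    }
  ]
}
```

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader                # placeholder service account name
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/s3-read-only  # placeholder role ARN
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-client                # placeholder pod name
  namespace: default
spec:
  serviceAccountName: s3-reader  # binds the pod to the IRSA-enabled service account
  containers:
    - name: awscli
      image: amazon/aws-cli:latest
      command: ["sleep", "infinity"]
```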
From there, kubectl exec into the pod and run aws s3 ls s3://your-bucket-name/. No hardcoded keys. No risk of write actions.
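Assuming the pod above is named s3-client, the check might look like this. The read succeeds with the injected web-identity credentials, and a write attempt should be rejected because the policy has no s3:PutObject permission.

```bash
# Open a shell in the pod (pod name and namespace are placeholders)
kubectl exec -it s3-client -- /bin/sh

# Inside the pod: listing the bucket works, no static keys required
aws s3 ls s3://your-bucket-name/

# A write attempt should fail with an AccessDenied error for PutObject
aws s3 cp /tmp/test.txt s3://your-bucket-name/test.txt
```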
For non-EKS clusters, use projected service account tokens and temporary IAM sessions, but keep the principle the same: minimal policy, short-lived credentials, no local secrets.
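One hedged sketch of that setup is a projected service account token volume pointed at by the AWS SDK's standard web-identity environment variables; the audience, token path, and role ARN here are assumptions, and your cluster's OIDC issuer must already be registered as an identity provider in IAM.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: s3-client-oidc           # placeholder pod name
spec:
  serviceAccountName: s3-reader  # placeholder service account
  containers:
    - name: awscli
      image: amazon/aws-cli:latest
      command: ["sleep", "infinity"]
      env:
        # Standard AWS SDK/CLI variables for AssumeRoleWithWebIdentity
        - name: AWS_ROLE_ARN
          value: arn:aws:iam::111122223333:role/s3-read-only   # placeholder role ARN
        - name: AWS_WEB_IDENTITY_TOKEN_FILE
          value: /var/run/secrets/tokens/s3-token
      volumeMounts:
        - name: s3-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: s3-token
      projected:
        sources:
          - serviceAccountToken:
              path: s3-token
              audience: sts.amazonaws.com   # assumed audience; must match the IAM provider config
              expirationSeconds: 3600       # short-lived token, rotated by the kubelet
```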
Restricting to read-only reduces blast radius, passes security audits faster, and keeps production stable. The method is straightforward but often overlooked.
Want to see it live without days of YAML tuning? Try it in minutes at hoop.dev and get kubectl to AWS S3 read-only roles working with zero friction.