The pod started fine, then your logs filled with AccessDenied.
Working with Kubernetes, AWS S3, and read-only roles should be simple. Yet time and again, the handoff between kubectl and AWS IAM trips teams up. You just want your pods to pull objects from S3 without risking a write, a delete, or even a list in the wrong bucket. That means nailing two things: scoping the IAM policy correctly, and wiring your workload to assume that role inside the cluster.
A proper read-only role for S3 starts with precision in IAM. At minimum, the role's policy must allow s3:GetObject, scoped to the exact bucket and prefixes your workload needs; avoid wildcards in the resource ARN unless you have no other choice. Grant s3:ListBucket only if the workload must enumerate keys — many read paths don't require it, and leaving it out shrinks the attack surface. Note that the two actions target different resources: s3:GetObject applies to object ARNs (the bucket ARN plus /prefix/*), while s3:ListBucket applies to the bucket ARN itself, so they belong in separate policy statements.
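A minimal policy along these lines might look as follows. The bucket name `example-app-data` and the `reports/` prefix are placeholders — substitute your own. The optional `s3:prefix` condition narrows ListBucket to the same prefix the workload can read:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadObjects",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-app-data/reports/*"
    },
    {
      "Sid": "ListReportsPrefixOnly",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-app-data",
      "Condition": {
        "StringLike": { "s3:prefix": ["reports/*"] }
      }
    }
  ]
}
```

If your read path never lists keys, drop the second statement entirely.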
Next, bind that role to a Kubernetes service account. On Amazon EKS — or any cluster whose OIDC issuer is registered with AWS IAM — IAM Roles for Service Accounts (IRSA) lets you annotate a service account with the role ARN, so pods assume the role via web identity federation instead of static credentials. Apply the service account manifest with kubectl (against the right context), then reference it by name in your deployment spec.
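A sketch of that wiring, with hypothetical names throughout (`s3-reader`, `report-fetcher`, the account ID, and the role name are all placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: default
  annotations:
    # IRSA: EKS's mutating webhook injects web-identity credentials
    # for this role into pods that use the service account.
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/s3-read-only
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: report-fetcher
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: report-fetcher
  template:
    metadata:
      labels:
        app: report-fetcher
    spec:
      serviceAccountName: s3-reader   # pods assume the read-only IAM role
      containers:
        - name: app
          image: amazon/aws-cli:latest
          command: ["aws", "s3api", "get-object",
                    "--bucket", "example-app-data",
                    "--key", "reports/latest.json",
                    "/tmp/latest.json"]
```

Apply it with `kubectl apply -f` against the intended context, then confirm the pod picked up the role: `kubectl exec` into it and check that `AWS_ROLE_ARN` and `AWS_WEB_IDENTITY_TOKEN_FILE` are set in the environment.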