Your build just passed in Bitbucket, but now the real work begins. You need to push artifacts somewhere safe, encrypted, and durable. Enter Amazon S3. It is reliable, cheap, and trusted by just about every DevOps engineer alive. Still, connecting Bitbucket to S3 can feel like a puzzle of credentials, IAM roles, and brittle YAML syntax.
Bitbucket handles CI/CD with pipelines that automate testing, linting, and deployment. S3 stores build results and static assets behind AWS’s tough security model. Together, they let you move artifacts from your source repo to long-term storage without touching local machines. The challenge is building that bridge in a way that stays auditable and does not bleed secrets all over your logs.
A proper Bitbucket S3 integration starts with identity. Instead of pasting long-lived AWS keys into your pipeline variables, use OpenID Connect (OIDC) so Bitbucket can request temporary credentials. Bitbucket acts as a trusted identity provider to AWS, which grants time-limited access through an IAM role and the AWS Security Token Service (STS). The exchange is clean and fully automated, and it prevents the classic “orphaned key in repo” problem that keeps SOC 2 auditors awake at night.
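Here is a minimal sketch of that flow in bitbucket-pipelines.yml. The role ARN and account ID are placeholders you would swap for your own. The step-level `oidc: true` flag makes Bitbucket expose the token in `BITBUCKET_STEP_OIDC_TOKEN`, and the AWS CLI exchanges it for temporary credentials automatically once the standard `AWS_ROLE_ARN` and `AWS_WEB_IDENTITY_TOKEN_FILE` environment variables are set.

```yaml
pipelines:
  default:
    - step:
        name: Authenticate to AWS via OIDC
        oidc: true                # ask Bitbucket to mint an OIDC token for this step
        image: amazon/aws-cli
        script:
          # Placeholder role ARN -- replace with the role you created for Bitbucket.
          - export AWS_ROLE_ARN=arn:aws:iam::123456789012:role/bitbucket-deploy
          - export AWS_WEB_IDENTITY_TOKEN_FILE=$(pwd)/web-identity-token
          - echo "$BITBUCKET_STEP_OIDC_TOKEN" > "$AWS_WEB_IDENTITY_TOKEN_FILE"
          # The CLI calls sts:AssumeRoleWithWebIdentity behind the scenes.
          - aws sts get-caller-identity
```

No secrets are stored anywhere: the token lives only for the duration of the step.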
Next, make permissions explicit. Define which buckets the pipeline can write to and under which key prefixes. For example, restrict uploads to s3://my-build-artifacts/$BITBUCKET_BRANCH so each branch only touches its own prefix. When a merge hits main, the pipeline uploads, tags, and prunes older builds automatically. No shell hacks, no manual cleanup.
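The IAM policy attached to the pipeline's role can spell this out. A sketch, assuming the bucket name from above; the branch-level separation itself comes from the upload path the pipeline uses, since IAM cannot see the $BITBUCKET_BRANCH variable directly:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WriteAndPruneArtifacts",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectTagging", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-build-artifacts/*"
    },
    {
      "Sid": "ListArtifactBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-build-artifacts"
    }
  ]
}
```

The s3:DeleteObject permission is what allows the pipeline to prune older builds; drop it if you handle retention with a bucket lifecycle rule instead.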
If your pipeline fails with AccessDenied, compare the audience and identity provider URL in your AWS configuration against the values Bitbucket shows in the repository's OpenID Connect settings. They must match exactly, character for character. Misaligned claims cause most OIDC authentication errors.
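Concretely, the mismatch usually lives in the IAM role's trust policy. A sketch with placeholder workspace and audience values; copy the real provider URL and audience string from your repository's OpenID Connect settings page rather than typing them by hand:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/api.bitbucket.org/2.0/workspaces/my-workspace/pipelines-config/identity/oidc"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "api.bitbucket.org/2.0/workspaces/my-workspace/pipelines-config/identity/oidc:aud": "ari:cloud:bitbucket::workspace/00000000-0000-0000-0000-000000000000"
        }
      }
    }
  ]
}
```

If the Federated ARN or the aud condition differs from what Bitbucket actually sends, STS rejects the token and the pipeline sees AccessDenied.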