You run a build. It spins up fine on your laptop, but the moment you push to TeamCity, your AWS stack crumbles. Keys expire, roles drift, CloudFormation templates fail, and you’re left staring at logs thicker than a legal brief. This is the pain of integrating AWS CloudFormation with TeamCity—the dance between automation and authentication.
AWS CloudFormation defines infrastructure. TeamCity executes pipelines. Each does its job well, yet when combined, small authentication gaps and state mismatches can waste hours of DevOps time. Tying them together is the difference between reproducible infrastructure and “why is staging different again?”
The integration works best when your CI runner can provision AWS resources safely without permanent credentials. The core mechanic is identity exchange: TeamCity calls AWS STS to assume a role and receives temporary credentials. AWS CloudFormation then runs under that assumed role, building stacks exactly as defined, with the same permission limits in every environment.
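A minimal sketch of that exchange, using only the standard library: one helper builds the parameters for an `sts:AssumeRole` call, another maps the temporary credentials in the response to the environment variables the AWS CLI and SDKs read. The role ARN and session-name convention are illustrative assumptions, not fixed names.

```python
# Hypothetical role ARN for illustration; substitute your own deploy role.
DEPLOY_ROLE_ARN = "arn:aws:iam::123456789012:role/teamcity-cfn-deploy"

def assume_role_request(build_id: str, duration_seconds: int = 3600) -> dict:
    """Parameters for an sts:AssumeRole call. Naming the session after the
    TeamCity build ties every CloudTrail entry back to the build that made it."""
    return {
        "RoleArn": DEPLOY_ROLE_ARN,
        "RoleSessionName": f"teamcity-build-{build_id}",
        "DurationSeconds": duration_seconds,  # credentials expire automatically
    }

def session_env(assume_role_response: dict) -> dict:
    """Map the Credentials block of an AssumeRole response to the environment
    variables that the AWS CLI and SDKs pick up in a build step."""
    creds = assume_role_response["Credentials"]
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }
```

In a TeamCity build step, the request dict would be passed to STS (for example via `boto3.client("sts").assume_role(**assume_role_request(build_id))`), and the resulting variables exported before running `aws cloudformation deploy`.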
To connect the two, start with IAM. Define a dedicated CloudFormation role that only TeamCity can assume. Give it just enough policy to deploy your stacks, and nothing more. Replace static access keys with OpenID Connect (OIDC) federation or short-lived tokens. Once TeamCity pipelines can generate these sessions, every build inherits identical access levels, with no human-managed keys involved.
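The "only TeamCity can assume" part lives in the role's trust policy. A sketch of generating one for OIDC federation, assuming your TeamCity server is registered as an OIDC identity provider in IAM; the provider ARN, issuer host, and claim values below are placeholders:

```python
import json

def teamcity_trust_policy(provider_arn: str, issuer_host: str,
                          audience: str, subject: str) -> str:
    """Trust policy for the deploy role: only web-identity tokens from the
    named OIDC provider, with matching audience and subject claims, may
    assume it. All concrete values here are illustrative assumptions."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Federated": provider_arn},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    # IAM derives condition key names from the provider's
                    # hostname; adjust to match your OIDC issuer.
                    f"{issuer_host}:aud": audience,
                    f"{issuer_host}:sub": subject,
                },
            },
        }],
    }
    return json.dumps(policy, indent=2)
```

Pinning the `sub` claim is what scopes assumption to one TeamCity project or pipeline rather than the whole server.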
Need to tune it? Watch your logs. Failed stack creations often trace back to permission scope or region mismatches. Confirm that stack operations run under the same role your TeamCity build agent assumes. Add tagging conventions so deployed resources show their originating build ID. This helps with traceability and teardown, something auditors and SOC 2 reviews both adore.
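That tagging convention can be a small helper that every deploy step shares. A sketch in the shape CloudFormation's `Tags` parameter expects (a list of `Key`/`Value` pairs); the key names are a suggested convention, not a standard:

```python
def build_resource_tags(build_id: str, project: str, environment: str) -> list:
    """Tags recording which CI build created a stack, in the list-of-dicts
    shape CloudFormation's Tags parameter expects."""
    tags = {
        "teamcity:build-id": build_id,   # ties the stack to one CI run
        "teamcity:project": project,     # which pipeline deployed it
        "environment": environment,      # e.g. staging vs production
    }
    return [{"Key": key, "Value": value} for key, value in tags.items()]
```

Because CloudFormation propagates stack-level tags to the resources it creates, a single tagged deploy call is usually enough to make teardown and audit queries ("show me everything build 987 created") straightforward.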