Your CI job just failed again because someone forgot to rotate an AWS key. Meanwhile, your Kubernetes clusters keep changing IPs faster than your Terraform can keep up. That’s usually the moment an engineer starts muttering about “just wiring TeamCity directly to EKS.” Thankfully, that pairing of TeamCity and Amazon EKS is exactly what you need for controlled builds and dependable deployments.
Amazon EKS handles the orchestration side. It runs container workloads and abstracts most cluster management. TeamCity executes your build pipelines with all the knobs modern CI demands. Together, they let you automate everything from Docker image builds to rolling updates across multiple namespaces. The magic happens when TeamCity talks securely to your EKS API using the same identity your developers trust.
The simplest integration flow looks like this: TeamCity authenticates with AWS using a dedicated IAM role mapped through OIDC. That identity grants scoped permission to interact with your EKS cluster. Once authorized, the agent can run kubectl commands, deploy Helm charts, or kick off canary rollouts without static secrets. Instead of storing keys in TeamCity, you rely on IAM federation and Kubernetes RBAC to control access in real time.
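As a concrete sketch of that federation, here is roughly what the IAM role’s trust policy could look like when the TeamCity agent itself runs inside EKS and assumes the role via the cluster’s OIDC provider (IRSA). The account ID, provider ID, region, and the `teamcity/teamcity-agent` service account name are all placeholders for your own values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234:sub": "system:serviceaccount:teamcity:teamcity-agent",
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

The `sub` condition is the important part: it pins the role to one specific service account, so only that agent identity can assume it, and STS issues short-lived credentials on every assume call.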
Be strict about those RBAC mappings. Give each TeamCity project its own service account tied to a precise role. Avoid broad system:masters bindings just to make a job “finally work.” Rotate access tokens automatically or use short-lived credentials issued by AWS STS. If a pipeline fails mid-deploy, clean up jobs immediately so stale tokens cannot be reused. Engineers often skip these steps only once.
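A per-project binding along those lines might look like the following, assuming a hypothetical `web-app` namespace and a `teamcity-web-app` service account. The role grants only what a typical deploy job needs, nothing cluster-wide:

```yaml
# Namespace-scoped role for one TeamCity project's pipelines.
# Names here ("web-app", "teamcity-web-app", "ci-deployer") are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: web-app
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch", "update"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-binding
  namespace: web-app
subjects:
  - kind: ServiceAccount
    name: teamcity-web-app
    namespace: web-app
roleRef:
  kind: Role
  name: ci-deployer
  apiGroup: rbac.authorization.k8s.io
```

Because it is a `Role` rather than a `ClusterRole`, a compromised pipeline token can touch only its own namespace, which is exactly the blast-radius limit you want.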
Quick answer:
To connect TeamCity with AWS EKS, configure an OIDC identity provider in IAM, assign a role with minimal EKS permissions, then reference that role in TeamCity’s build agent configuration. This removes static key storage and gives you auditable, short-lived access for every pipeline run.
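The last step of that flow, the agent actually calling the cluster, can be wired through a kubeconfig `exec` entry so that `kubectl` fetches a fresh STS-backed token on every invocation instead of reading a stored secret. A minimal sketch, with the cluster name, region, and role ARN as placeholders:

```yaml
# Kubeconfig user entry for the TeamCity agent; no static token is stored.
# Each kubectl call shells out to `aws eks get-token` for short-lived credentials.
users:
  - name: teamcity-eks
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args:
          - eks
          - get-token
          - --cluster-name
          - prod-cluster
          - --region
          - us-east-1
          - --role-arn
          - arn:aws:iam::111122223333:role/teamcity-eks-deployer
```

Every pipeline run then produces its own expiring token, which is what makes the access auditable and revocable per role rather than per shared key.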