You have containers running in Amazon EKS and messages flying through Azure Service Bus. Now you just need them to trust each other without handing out static keys like Halloween candy. The goal is simple: connect compute in AWS with messaging in Azure, securely and with the least human drama possible.
Amazon EKS gives you managed Kubernetes clusters that scale with demand and integrate neatly with AWS IAM for workload identity. Azure Service Bus, on the other hand, manages reliable message delivery across microservices. One handles orchestration, the other handles communication. Pair them and your services can publish, subscribe, and process events across clouds as if parity were built in.
The trick lies in identity. You can’t just drop credentials into a ConfigMap and hope no one notices. Instead, use OpenID Connect (OIDC) federation between EKS and Azure AD to grant short‑lived access tokens to your pods. Each pod presents a projected service account token signed by the cluster’s OIDC issuer; Azure AD matches its claims against a federated credential and exchanges it for an Azure access token before allowing messages through Service Bus. The result is authentication that feels automatic and ephemeral rather than brittle.
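Under the hood, that exchange is a standard OAuth 2.0 client-credentials request where the pod’s service account token serves as the client assertion. Here is a minimal sketch of the request Azure AD expects; the `tenant_id` and `client_id` values are placeholders, and in a real pod you would read the assertion from the projected token file rather than pass it in:

```python
from urllib.parse import urlencode

AZURE_TOKEN_URL = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
SERVICE_BUS_SCOPE = "https://servicebus.azure.net/.default"

def build_token_request(tenant_id: str, client_id: str, sa_token: str) -> tuple[str, str]:
    """Build the client-credentials request that trades a Kubernetes
    service account token (used as the client assertion) for a
    short-lived Azure access token scoped to Service Bus."""
    url = AZURE_TOKEN_URL.format(tenant=tenant_id)
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,  # the Azure AD app registration backing the federation
        "scope": SERVICE_BUS_SCOPE,
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": sa_token,  # the pod's projected OIDC token
    }
    return url, urlencode(body)
```

In practice you would let the `azure-identity` library’s `WorkloadIdentityCredential` perform this exchange for you; the sketch just shows what it is doing on your behalf.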
Workflow: how EKS meets Service Bus
The setup flows like this. EKS exposes an OIDC issuer for your cluster. Azure AD trusts that issuer through a federated identity credential and grants delegated access to a defined Service Bus namespace. EKS workloads exchange their projected service account tokens for short‑lived Azure access tokens, call Azure APIs with the retrieved token, and push or process messages. Everything is auditable and no long‑lived secret remains. It sounds fancy, but it’s just JSON Web Tokens doing honest work.
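What Azure AD actually validates are the `iss`, `sub`, and `aud` claims inside the pod’s token. The sketch below decodes a JWT payload without verifying the signature, which is handy for debugging; the issuer URL, namespace, and service account names are illustrative placeholders, and real validation of course checks the signature too:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode a JWT payload (no signature check) to inspect the
    iss/sub/aud claims Azure AD matches against its federated credential."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a sample unsigned token shaped like a projected EKS service account token.
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).decode().rstrip("=")
claims = {
    "iss": "https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE",   # cluster issuer
    "sub": "system:serviceaccount:payments:bus-publisher",          # namespace:serviceaccount
    "aud": "api://AzureADTokenExchange",                            # federation audience
}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
sample_token = f"{header}.{payload}."

print(decode_jwt_claims(sample_token)["sub"])
# → system:serviceaccount:payments:bus-publisher
```

If a token exchange fails, comparing these three claims against the federated credential you configured in Azure AD is usually the fastest way to find the mismatch.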
Best practices for cross‑cloud identity
Map each Kubernetes service account to the narrowest Azure role it needs. Over‑permissioned roles are the fastest path to regret. Rotate trust metadata regularly, the same way you patch cluster nodes. Keep Service Bus SAS keys sealed away for emergencies only. And if you route messages across private endpoints, verify DNS resolution from within the EKS network before assuming the issue is Azure’s fault. It usually isn’t.
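That DNS check is easy to script from inside a pod. A minimal sketch: resolve the namespace hostname and confirm every address is private, which is what you would expect when a Service Bus private endpoint is wired up correctly. The hostname here is a hypothetical example:

```python
import ipaddress
import socket

def all_private(addrs: set[str]) -> bool:
    """True only if every resolved address is a private-range IP."""
    return bool(addrs) and all(ipaddress.ip_address(a).is_private for a in addrs)

def resolves_privately(hostname: str) -> bool:
    """Resolve the host from wherever this runs (ideally inside the pod
    network) and check that no public address leaks through."""
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return all_private({info[4][0] for info in infos})

# Example (hypothetical namespace): run this from a pod in the cluster.
# resolves_privately("mybus.servicebus.windows.net")
```

If this returns False inside the cluster, the problem is your VPC DNS forwarding or private DNS zone, not Service Bus itself.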