You have a microservice running on k3s that needs to talk to DynamoDB. It should be simple, yet every time you add a new namespace or node, you end up wrestling with IAM credentials. The goal isn’t just to make it work once, but to make it stable, auditable, and easy for the next engineer to repeat.
DynamoDB is the low-latency, fully managed NoSQL database that powers half of AWS land. k3s is a stripped-down Kubernetes distribution that is a natural fit for edge or development clusters. Together, they give you a fast control plane for stateful logic that still talks to cloud-native storage. The challenge is identity: who gets to access what, and how do you teach small clusters to play nicely with AWS IAM?
The usual pattern begins with storing long-lived AWS access keys as Kubernetes Secrets, but that's fragile: static keys leak, linger, and never rotate themselves. A better approach uses short-lived credentials from an identity provider such as Okta, or from AWS IAM Roles Anywhere. Map them into your pods through federation, not static secrets. That way, your workloads call DynamoDB directly using assumed roles, and rotation happens automatically.
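As a sketch of the federated path: assuming the pod has a projected service-account token mounted at a path like `/var/run/secrets/tokens/sa-token` (the path, role ARN, and session name below are all hypothetical), the pod exchanges that token for temporary credentials via STS `AssumeRoleWithWebIdentity` rather than reading a static key:

```python
from pathlib import Path

# Hypothetical values -- adjust to your cluster's token projection and IAM role.
TOKEN_PATH = Path("/var/run/secrets/tokens/sa-token")
ROLE_ARN = "arn:aws:iam::123456789012:role/dynamodb-reader"

def build_assume_role_request(token: str, session_name: str) -> dict:
    """Parameters for STS AssumeRoleWithWebIdentity, e.g. passed to boto3's
    sts.assume_role_with_web_identity(**params). No long-lived keys involved;
    STS hands back credentials that expire on their own."""
    return {
        "RoleArn": ROLE_ARN,
        "RoleSessionName": session_name,
        "WebIdentityToken": token,
        "DurationSeconds": 3600,  # short-lived on purpose
    }

# In the pod, the token comes from the projected volume:
#   token = TOKEN_PATH.read_text().strip()
# Here a placeholder JWT stands in for it.
params = build_assume_role_request("eyJhbGciOi...", "orders-service")
```

The returned credentials (access key, secret key, session token) are what the AWS SDK then uses to sign DynamoDB calls.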
To tie DynamoDB and k3s together cleanly, think of the identity flow rather than the data flow. A pod requests a token from your identity provider. The identity provider verifies its service account and hands back a temporary AWS credential. The pod uses that to sign its DynamoDB requests. No persistent keys, no hidden YAML time bombs.
If roles or policies feel tangled, start small. Create one IAM role per service type, not per pod. Keep the principle of least privilege but avoid overfragmentation. Rotate client-side tokens every few hours. Monitor failed auth attempts and map them to cluster events. That’s where policy meets observability, and it saves midnight debugging later.
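The rotation advice reduces to a refresh-ahead check: renew the credential some margin before it expires, not at expiry, so in-flight DynamoDB requests never race a dying token. A minimal sketch, where the ten-minute margin is an assumption to tune against your token TTL:

```python
import datetime as dt
from typing import Optional

REFRESH_MARGIN = dt.timedelta(minutes=10)  # assumed margin; tune per TTL

def needs_refresh(expiration: dt.datetime,
                  now: Optional[dt.datetime] = None) -> bool:
    """True when the credential is within REFRESH_MARGIN of expiry and
    should be re-fetched from the identity provider."""
    now = now or dt.datetime.now(dt.timezone.utc)
    return expiration - now <= REFRESH_MARGIN

now = dt.datetime(2024, 1, 1, 12, 0, tzinfo=dt.timezone.utc)
fresh = now + dt.timedelta(hours=3)      # nowhere near expiry
stale = now + dt.timedelta(minutes=5)    # inside the margin
```

Run this check on a timer or before each batch of requests; a refresh that fails is exactly the failed-auth signal worth correlating with cluster events.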