You know the feeling. You’ve got a BigQuery job that needs credentials, and somewhere in your CI pipeline those secrets are sitting in plaintext like an open soda next to your keyboard. AWS Secrets Manager promises to clean that up, but now you’re wondering how to make it talk cleanly to BigQuery without breaking your data workflow.
AWS Secrets Manager handles sensitive keys and tokens inside AWS, rotating them automatically and logging every access through CloudTrail. BigQuery, on the other hand, expects a service account key or federated identity before it will authenticate and run queries. Stitch the two together and you get secure, versioned credential handling with zero human babysitting.
The basic flow starts with AWS Secrets Manager holding the JSON payload of your GCP service account key. Your pipeline or app retrieves that secret through AWS IAM permissions at runtime, not build time, and injects the credential into an environment variable or process memory, never committing it to disk. BigQuery then uses it to run queries, stream inserts, or exports. That means no hardcoded keys in Terraform, no expired tokens baked into Docker images, just predictable, logged, ephemeral authentication.
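That flow can be sketched in a few lines of Python. This is a minimal illustration, not a drop-in implementation: it assumes `boto3` and `google-cloud-bigquery` are installed, and the secret ARN, region, and the `SecretString` field of the Secrets Manager response follow the standard `GetSecretValue` shape. The parsing helper is pure, so it works without any cloud access; only the client builder at the bottom talks to AWS and Google.

```python
import json


def service_account_info(get_secret_value_response: dict) -> dict:
    """Parse the JSON service-account key out of a Secrets Manager
    GetSecretValue response. Everything stays in memory; nothing
    is written to disk."""
    info = json.loads(get_secret_value_response["SecretString"])
    # Sanity check: a GCP service-account key always carries these fields.
    for field in ("type", "project_id", "private_key", "client_email"):
        if field not in info:
            raise ValueError(f"secret is missing required field {field!r}")
    return info


def bigquery_client_from_secret(secret_arn: str, region: str):
    """Fetch the secret at runtime and build an authenticated BigQuery
    client. Imports are lazy so the parsing helper above stays usable
    without boto3 or the Google libraries installed."""
    import boto3
    from google.cloud import bigquery
    from google.oauth2 import service_account

    sm = boto3.client("secretsmanager", region_name=region)
    info = service_account_info(sm.get_secret_value(SecretId=secret_arn))
    creds = service_account.Credentials.from_service_account_info(info)
    return bigquery.Client(credentials=creds, project=info["project_id"])
```

`Credentials.from_service_account_info` is the piece that makes the "never touches disk" claim work: it builds credentials from an in-memory dict, so you never need to materialize a key file for `GOOGLE_APPLICATION_CREDENTIALS`.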
If you’re wiring this up in production, start by mapping IAM roles to specific BigQuery jobs or datasets. Give each role a unique secret ARN in AWS. Use short TTLs, automate rotation, and restrict decryption rights. When something breaks, CloudWatch logs and BigQuery audit logs will tell you which principal touched what, and when, a gift during 2 a.m. pager duty.
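"Restrict decryption rights" in practice means scoping the role's policy to a single secret ARN rather than `secretsmanager:*`. A sketch of such a policy, with a hypothetical account ID and secret name (Secrets Manager appends a random suffix, hence the trailing wildcard):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyBigQueryJobSecret",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:bq-etl-sa-key-*"
    }
  ]
}
```

If the secret is encrypted with a customer-managed KMS key rather than the default one, the role also needs `kms:Decrypt` on that key; scope it the same way.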
A short answer many engineers search for: To connect AWS Secrets Manager to BigQuery, store your GCP service account key as a secret in AWS, grant IAM permissions to the runtime environment, retrieve the secret securely at execution, and let BigQuery authenticate using that short-lived credential. Nothing persists to disk; everything is logged. That’s the clean pattern.
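The "short-lived" half of the pattern can be enforced client-side with a small TTL cache, so rotated secrets are picked up quickly without hammering the Secrets Manager API on every query. A sketch, with the fetch callable and TTL value as illustrative assumptions:

```python
import time


class SecretCache:
    """Cache retrieved secrets briefly. A short TTL means a rotated
    secret is re-fetched within ttl_seconds, while repeated lookups
    inside the window never hit the network."""

    def __init__(self, fetch, ttl_seconds: float = 300.0):
        self._fetch = fetch        # callable: secret ARN -> secret string
        self._ttl = ttl_seconds
        self._cache = {}           # ARN -> (fetched_at, value)

    def get(self, arn: str) -> str:
        now = time.monotonic()
        hit = self._cache.get(arn)
        if hit is not None and now - hit[0] < self._ttl:
            return hit[1]          # still fresh, serve from memory
        value = self._fetch(arn)   # stale or missing: re-fetch
        self._cache[arn] = (now, value)
        return value
```

In the retrieval flow described above, `fetch` would wrap the `GetSecretValue` call; keeping the cache in memory, not on disk, preserves the "nothing persists" property.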