You deploy your container to Cloud Run and need persistent storage that treats your data with the durability and access control it deserves. Stateless apps move fast until they hit a wall called “state.” That is where Portworx enters the picture, bringing resilient, Kubernetes-grade storage logic to Google’s managed, serverless platform.
Cloud Run handles scale without a single node to babysit. Portworx, on the other hand, speaks fluent volume orchestration across clusters. Combined, they let you run stateless and stateful workloads side by side with identity-aware policies controlling who touches your data. The goal is fast deployment without surrendering auditability or compliance.
The integration works through a few clean layers. Cloud Run manages compute instances dynamically, while Portworx abstracts storage so volumes can follow containers wherever they go. You treat volumes like cloud-native citizens, attaching them through configuration synced with IAM or OIDC identity. That alignment matters since permissions now translate between the runtime and storage tiers automatically. When done correctly, developers never see a ticket queue—they just get storage that respects their cloud identity.
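To make the volume-follows-container idea concrete, here is a minimal sketch of generating a Knative-style Cloud Run service spec that pins the runtime identity to a dedicated service account and mounts storage. All names (service, image, service account, NFS endpoint, export path) are hypothetical, and it assumes the Portworx volume is exported over NFS (for example, a sharedv4 volume), since Cloud Run mounts NFS shares rather than CSI volumes directly.

```python
# Sketch: a Cloud Run (Knative) service manifest as a plain dict, with a
# Portworx-backed NFS volume and an explicit runtime service account.
# Names and paths below are illustrative assumptions, not real endpoints.

def cloud_run_service_spec(name, image, service_account, nfs_server, nfs_path):
    """Return a Knative-style service manifest binding identity and storage."""
    return {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "spec": {
                    # The identity Cloud Run presents to IAM/OIDC at runtime.
                    "serviceAccountName": service_account,
                    "containers": [{
                        "image": image,
                        "volumeMounts": [{"name": "data", "mountPath": "/mnt/data"}],
                    }],
                    "volumes": [{
                        # Assumes the Portworx volume is exported over NFS.
                        "name": "data",
                        "nfs": {"server": nfs_server, "path": nfs_path},
                    }],
                }
            }
        },
    }

spec = cloud_run_service_spec(
    name="orders-api",
    image="gcr.io/demo/orders:1.0",
    service_account="orders-api@demo.iam.gserviceaccount.com",
    nfs_server="px-endpoint.internal",
    nfs_path="/exports/orders-vol",
)
```

Because identity and storage live in the same spec, redeploying the service anywhere keeps the permission mapping intact: the storage tier sees the same service account no matter which instance mounts the volume.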
To keep things tight, map your Cloud Run service account to Portworx’s RBAC model. If your team uses Okta or AWS IAM, link those identities via OpenID Connect and rotate the resulting tokens often. Secrets should live in an encrypted secrets manager rather than plain environment variables. Audit trails should feed into whatever log pipeline you trust most. It sounds dull, but it is exactly what keeps SOC 2 auditors smiling.
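The identity mapping and rotation discipline above can be sketched in a few lines. The claim and role names here are illustrative (check your Portworx security configuration for the exact token schema), and the rotation window is an assumption; the point is that each Cloud Run identity maps to exactly one storage role, and tokens are refreshed well before expiry.

```python
import time

# Sketch: derive a Portworx-style RBAC claim set from a Cloud Run service
# account, and decide when the OIDC token carrying it should be rotated.
# Claim/role names are illustrative assumptions, not a confirmed schema.

def px_claims(service_account, role, ttl_seconds, now=None):
    """Build JWT-style claims binding one runtime identity to one storage role."""
    issued = int(now if now is not None else time.time())
    return {
        "sub": service_account,   # identity asserted by Cloud Run via OIDC
        "roles": [role],          # e.g. a read-write volume-access role
        "iat": issued,
        "exp": issued + ttl_seconds,
    }

def needs_rotation(claims, now=None, skew=300):
    """Rotate before expiry (with skew) so in-flight requests never fail auth."""
    current = now if now is not None else time.time()
    return current >= claims["exp"] - skew

claims = px_claims(
    "orders-api@demo.iam.gserviceaccount.com",
    role="system.user",
    ttl_seconds=3600,
    now=1_000_000,
)
```

A scheduled job (or the deploy pipeline itself) can call `needs_rotation` and mint a fresh token from the secrets manager, so no long-lived credential ever sits in an environment variable.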
Quick answer: You connect Cloud Run and Portworx by binding Cloud Run service identities to Portworx volume policies using OIDC or IAM. That gives each deployment a secure, consistent data footprint without manual permission setup.