Every engineer who’s deployed a container on Google Cloud Run has hit the same quiet mystery: what port is this thing actually listening on? The app runs locally on 8080, but Cloud Run keeps whispering something about $PORT. Get that detail wrong and your container boots fine, then disappears into a black hole of failed health checks.
Cloud Run Port defines the single inbound port your container must listen on for HTTP traffic from the platform. You cannot pick two, you cannot ignore it, and you definitely should not hardcode a different one in production. Cloud Run injects the value at container startup through the PORT environment variable, and your app must respect it. Simple rule, deceptively easy to forget.
When you deploy, Cloud Run sets an environment variable named PORT in each revision's containers. It defaults to 8080, but the value is configurable per service, so it isn't guaranteed. Your web server reads this value and starts listening on it; the platform then routes requests through its load balancer to that port. One container, one port, built for consistency across languages, frameworks, and CI/CD pipelines.
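The whole pattern fits in a few lines. A minimal sketch using only Python's standard library (the 8080 fallback covers local runs where Cloud Run hasn't set PORT; the handler and function names are illustrative):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_port() -> int:
    # Cloud Run sets PORT at container startup; fall back to 8080
    # for local development where the variable isn't injected.
    return int(os.environ.get("PORT", "8080"))

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer on whatever port the platform assigned.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

def serve() -> None:
    # Bind to 0.0.0.0 so traffic routed into the container can reach us.
    HTTPServer(("0.0.0.0", get_port()), HealthHandler).serve_forever()
```

In the container image, the entrypoint simply calls serve(); no port number ever appears in the code or the Dockerfile.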
So why does it matter? Modern infrastructure thrives on predictable interfaces. The moment you introduce dynamic routing or background jobs that bind to random ports, scaling breaks. Cloud Run Port removes that guesswork. It locks your container into a contract: receive here, respond fast, shut down gracefully.
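The last clause of that contract, shutting down gracefully, usually amounts to catching SIGTERM, which Cloud Run sends before it stops an instance. A minimal Python sketch, where the handler body is a placeholder for your own cleanup:

```python
import signal
import sys

def handle_sigterm(signum, frame):
    # Cloud Run sends SIGTERM before an instance shuts down; flush logs
    # and finish in-flight work here, then exit with a success code.
    sys.exit(0)

# Register the handler at process startup.
signal.signal(signal.SIGTERM, handle_sigterm)
```

Without this, the process is simply killed when the grace period ends, taking any in-flight requests with it.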
Common Pitfalls and Quick Fixes
If your service starts but returns connection errors, check that your framework actually reads the environment variable. In Node.js, bind to process.env.PORT; in Python, read os.environ["PORT"]. In Go, call os.Getenv("PORT"); in Rust, use std::env::var("PORT"). Skip that step and you'll be debugging "connection refused" messages that make you question reality.
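One way to make that failure loud instead of silent is to validate the variable at boot. A hypothetical resolve_port helper, sketched in Python (the name and error message are this sketch's, not an official API):

```python
import os

def resolve_port(default: int = 8080) -> int:
    # Hypothetical helper: read PORT and fail loudly on a bad value,
    # rather than booting on the wrong port and flunking health checks.
    raw = os.environ.get("PORT")
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        raise RuntimeError(f"PORT is set but not numeric: {raw!r}")
```

A RuntimeError at startup shows up immediately in the revision's logs, which is far easier to diagnose than a health check timing out.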