The heartbeat was fine. Health checks passed. Yet the gRPC calls kept dying in silence. The reason was simple: the gRPC internal port was wrong.
When a gRPC service starts, it listens on a specific port. That port can be internal, hidden behind load balancers, firewalls, or service meshes. If the internal port is misconfigured, your service might appear healthy while quietly rejecting traffic. Debugging it without knowing the exact port mapping can waste hours.
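A quick way to rule out this class of problem is a raw TCP connect check against the port you believe the server binds, before any gRPC tooling enters the picture. Here is a minimal sketch in Python using only the standard library; the host and port are placeholders you would swap for your own deployment:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: bind a throwaway listener on an ephemeral port, then probe it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
_, port = listener.getsockname()

print(port_is_open("127.0.0.1", port))   # True: something is listening
listener.close()
print(port_is_open("127.0.0.1", port))   # False: connection refused
```

If this check fails against the internal port while your health endpoint still reports green, the mismatch described above is the first thing to suspect.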
The gRPC internal port is not always the same as what you set in your public-facing configuration. In containerized setups, Kubernetes Services, or sidecar proxies, the internal port is the number your gRPC server actually binds and listens on. The external port is what the world sees. A mismatch between these can cause gRPC connection failures even when logs show no errors.
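In Kubernetes this split shows up as `port` versus `targetPort` on a Service. A sketch of the shape (the service name, selector, and port numbers are hypothetical; substitute your own):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: payments-grpc        # hypothetical service name
spec:
  selector:
    app: payments
  ports:
    - name: grpc
      port: 443              # external: the port clients dial
      targetPort: 50051      # internal: the port the gRPC server binds
```

If `targetPort` does not match the port in the server's own listen configuration, clients get connection failures while the server itself logs nothing, because traffic never reaches it.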
Checking and locking down the gRPC internal port early prevents cascading failures. In Kubernetes, confirm the container listens on the port you expect. Use readiness probes that hit the correct internal port, not just the external service port. In bare-metal or VM deployments, make sure your reverse proxy forwards to the port the server actually binds; otherwise connections are refused at the TCP level before gRPC ever sees them.
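For the readiness-probe point, Kubernetes has a native gRPC probe type (stable since 1.27) that you can aim directly at the internal port. A hedged fragment, assuming the server binds 50051 and serves the standard gRPC health-checking protocol:

```yaml
readinessProbe:
  grpc:
    port: 50051              # probe the port the server binds, not the Service port
  initialDelaySeconds: 5
  periodSeconds: 10
```

Because the kubelet probes the container directly, this catches a bad `targetPort` or a server bound to the wrong port, which an external check routed through the Service would mask.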