The request hit your desk: secure authentication, minimal friction, zero guesswork. You open the spec. It’s OpenID Connect (OIDC). And buried inside the config is one detail most teams overlook—the internal port.
OpenID Connect wraps OAuth 2.0 in a standardized identity layer. It moves JSON Web Tokens (JWTs) over HTTPS through well-defined endpoints (authorization, token, userinfo) advertised in the provider's discovery document. But when you run services inside a private network, the internal port for OIDC flows becomes more than a number: it's a control point.
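To make those endpoints concrete, here is a minimal sketch of reading them out of a discovery document. The issuer and endpoint URLs are illustrative placeholders, not a real provider; in practice you would fetch this JSON from `https://<issuer>/.well-known/openid-configuration`.

```python
import json

# A trimmed OIDC discovery document. The issuer and endpoint
# URLs below are hypothetical examples, not a real IdP.
DISCOVERY_JSON = """
{
  "issuer": "https://idp.example.com",
  "authorization_endpoint": "https://idp.example.com/oauth2/authorize",
  "token_endpoint": "https://idp.example.com/oauth2/token",
  "userinfo_endpoint": "https://idp.example.com/oauth2/userinfo"
}
"""

def load_endpoints(raw: str) -> dict:
    """Extract the three core OIDC endpoints from a discovery document."""
    doc = json.loads(raw)
    return {
        "authorize": doc["authorization_endpoint"],
        "token": doc["token_endpoint"],
        "userinfo": doc["userinfo_endpoint"],
    }

endpoints = load_endpoints(DISCOVERY_JSON)
print(endpoints["token"])  # https://idp.example.com/oauth2/token
```

Whatever URLs appear here are what your relying party will actually call, which is exactly why internal routing has to agree with them.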
The internal port determines how the identity provider (IdP) and your service exchange data behind firewalls. Misconfigure it and the redirect URI breaks, the token exchange fails, or worse, the service binds to the wrong interface and is reachable from outside the private network. In containerized deployments, the internal port maps to the container port in the pod or Service spec. In cloud setups, traffic reaches it through a load balancer that terminates TLS.
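In Kubernetes, for instance, that mapping lives in the Service spec. A minimal sketch, assuming an app that binds to 8080 inside its container (the names here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: auth-app          # hypothetical service name
spec:
  selector:
    app: auth-app
  ports:
    - name: https
      port: 443           # port the Service exposes inside the cluster
      targetPort: 8080    # container port the app actually binds
```

The redirect URI registered with the IdP points at whatever fronts this Service (often an Ingress terminating TLS on 443), while `targetPort` must match the app's real bind, or callbacks die at the last hop.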
For OIDC, your internal port is not automatically the public one. The IdP may redirect back to 443 externally, while inside, the app binds to 8080, 3000, or another fixed high port. The endpoints in the discovery document must resolve to addresses your service can actually reach from where it runs. Match the internal port to the service binding in your configs, and make sure your network policy allows IdP traffic through that exact port.
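The "match the binding" rule is easy to enforce at startup. A sketch of such a fail-fast check, assuming the app knows its bind port and its internal redirect URI (the function name and URIs are hypothetical):

```python
from urllib.parse import urlparse

def check_port_alignment(bind_port: int, redirect_uri: str) -> None:
    """Fail fast if the OIDC redirect URI doesn't target the bound port."""
    parsed = urlparse(redirect_uri)
    # urlparse yields None for the port when the URI omits it, so fall
    # back to the scheme default: https implies 443, http implies 80.
    uri_port = parsed.port or (443 if parsed.scheme == "https" else 80)
    if uri_port != bind_port:
        raise ValueError(
            f"redirect URI targets port {uri_port}, "
            f"but the app binds {bind_port}"
        )

# Aligned: the app binds 8080 and the internal redirect URI agrees.
check_port_alignment(8080, "http://auth-app.internal:8080/oidc/callback")

# Misaligned: the URI implies 443 while the app binds 8080; raises.
try:
    check_port_alignment(8080, "https://auth-app.internal/oidc/callback")
except ValueError as err:
    print(err)
```

Running a check like this once at boot turns a silent routing mismatch into an immediate, readable error instead of a failed token exchange in production.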