You ran kubectl get services and saw the LoadBalancer type staring back. It promised external access, but you still had questions: How does kubectl work with a LoadBalancer? What happens under the hood? Why isn’t your app reachable yet?
A Kubernetes LoadBalancer service tells the cluster to provision an external IP so traffic can hit your pods from the public internet. When you run:
kubectl expose deployment my-app \
--type=LoadBalancer \
--port=80 \
--target-port=8080
you create a service that forwards requests to the right pods, even if they scale up or down. Behind the scenes, your cloud provider’s API creates a network load balancer and assigns it an address. Kubernetes updates the service with that external IP once it’s ready.
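The same service can be declared in a manifest instead of created imperatively. A minimal sketch, using the name and ports from the command above (the `app: my-app` selector label is an assumption — it must match the labels on the deployment's pods):

```yaml
# Declarative equivalent of the kubectl expose command above.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app        # assumed label; must match the pods' labels
  ports:
    - port: 80         # port the load balancer listens on
      targetPort: 8080 # port the container serves on
```

Apply it with kubectl apply -f service.yaml. The manifest form is easier to version-control and review than a one-off expose command.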
Check it like this:
kubectl get svc my-app
If EXTERNAL-IP shows <pending>, your cloud provider hasn’t finished allocating it. This step depends on the integration between Kubernetes and the provider (the cloud controller manager). On bare metal or in unsupported environments, the field will stay <pending> indefinitely; you’ll need something like MetalLB or another load-balancer implementation.
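To watch the allocation complete without re-running the command, or to pull just the address once it appears, something like this works (these are standard kubectl flags; the service name my-app is carried over from above, and these commands need a live cluster):

```
# Re-poll until EXTERNAL-IP flips from <pending> to a real address
kubectl get svc my-app --watch

# Once allocated, extract just the external IP. Some providers
# (e.g. AWS) return a hostname instead; query .hostname there.
kubectl get svc my-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

The jsonpath form is handy in scripts that wait for the endpoint before running smoke tests.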
Using a LoadBalancer with kubectl is the simplest way to make your Kubernetes service available to the world. It’s direct: define your service, wait for the external IP, and start sending traffic.
Still, LoadBalancer services have limits. Each one typically provisions its own cloud load balancer and external IP, which gets inefficient and costly as services multiply. For routing multiple apps through a single IP, an Ingress controller is often the better choice. But when you need fast, one-service-to-public access, kubectl plus a LoadBalancer is the cleanest path.
If you want to skip complex setup and watch a running service with a public endpoint in minutes, try it with hoop.dev. Push your code, create the service, and see it live fast. No waiting. No wrestling with cloud settings. Just your app, online.