Managing Kubernetes LoadBalancer Services with kubectl

A Kubernetes LoadBalancer service routes external traffic to your pods through a cloud provider's network load balancer. When you run:

kubectl expose deployment my-app --type=LoadBalancer --name=my-service

you create a service that is assigned a public IP (or hostname) and directs traffic to the deployment's pods. The --type=LoadBalancer flag tells Kubernetes to request a load balancer from the cluster's cloud integration: AWS Elastic Load Balancing, Google Cloud Load Balancing, Azure Load Balancer, or another supported provider.
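The same service can also be created declaratively with a manifest. A minimal sketch, assuming the deployment's pods carry the label app: my-app and the container listens on port 8080 (both placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app        # must match the pod labels set by the deployment
  ports:
    - port: 80         # port exposed by the load balancer
      targetPort: 8080 # port the container listens on

Apply it with kubectl apply -f service.yaml. The declarative form is easier to keep in version control than the one-off expose command.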

You can check the status with:

kubectl get services

Look at the EXTERNAL-IP column. A value of <pending> means the cloud provider hasn't finished provisioning the load balancer. Once an IP or hostname appears, you can connect to the service from outside the cluster.

Scaling services with a LoadBalancer is straightforward. The load balancer automatically distributes traffic to healthy pods. Scale with:

kubectl scale deployment my-app --replicas=5

Kubernetes updates the endpoints list, and the load balancer begins routing to the new pods without downtime.
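The replica count can also be pinned declaratively in the deployment manifest instead of changed imperatively; a fragment sketch (the value is illustrative):

spec:
  replicas: 5   # desired pod count; Kubernetes reconciles toward this

Applying the manifest with kubectl apply keeps the scaled count from being silently reverted the next time the file is deployed.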

For production, add readiness probes to the deployment spec so Kubernetes removes pods that fail the probe from the service's endpoints, and therefore from the load balancer rotation:

readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
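For orientation, here is where that probe sits inside a deployment's container spec. This is a sketch: the image name, labels, and ports are placeholders, not values from the original setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest      # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10

Note that the probe belongs to an individual container, not to the deployment or service itself.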

To remove a LoadBalancer service:

kubectl delete service my-service

This tells the cloud provider to release the external IP and delete the load balancer.

Understanding kubectl commands for LoadBalancer services is critical for controlling ingress, scaling workloads, and delivering high availability without manual intervention. Use them to deploy, inspect, and tear down services with confidence.

See how this works end-to-end and get a LoadBalancer up in minutes at hoop.dev.