Kubernetes Services for Absolute Beginners — LoadBalancer

Shubham Agarwal
3 min read · Dec 10, 2021


What are Kubernetes Services?

  • Services enable communication between various components within and outside of the application.
  • They help us connect applications with other applications or with users.
  • A Service is an object, just like a Pod, Deployment, or ReplicaSet.

As we discussed in our previous blog, the NodePort service helps us make an external-facing application available on a port on the worker nodes.

So let’s take the example of two front-end applications, say an exam app and a result app.

The pods of our application are hosted on the worker nodes in the cluster.

So let’s say we have a four-node cluster, and to make the applications accessible to external users, we created services of type NodePort.

The NodePort service helps in receiving traffic on a port on the nodes and routing that traffic to the respective Pods.
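
For reference, the NodePort service for the exam app could look something like this (just a sketch; the selector label app: exam-app is an assumption about how the exam Pods are labelled):

apiVersion: v1
kind: Service
metadata:
  name: exam-service
spec:
  type: NodePort
  selector:
    app: exam-app        # assumed label on the exam Pods
  ports:
    - targetPort: 80     # port the container listens on
      port: 80           # port of the service inside the cluster
      nodePort: 31110    # high port (30000-32767) opened on every node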

From our local machine we can access either of these two applications using the IP of any of the nodes and the high port the service is exposed on. Even if your Pods are hosted on only two of the nodes, they will still be accessible on the IPs of all the nodes in the cluster, like:

http://192.168.10.2:31110    http://192.168.10.2:30081
http://192.168.10.3:31110    http://192.168.10.3:30081
http://192.168.10.4:31110    http://192.168.10.4:30081
http://192.168.10.5:31110    http://192.168.10.5:30081

But what URL would you give to your end users to access the applications? End users don’t want to use an IP:port combination to reach an application; they need a single URL like exam.example.com or result.example.com. So how do you achieve that?

One way to achieve this is to create a new VM for the load balancer, install and configure a suitable load balancer on it, such as HAProxy or NGINX, and then configure the load balancer to route traffic to the underlying nodes.
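
To get a feel for how much manual work that is, a hand-rolled NGINX configuration on that load-balancer VM might look roughly like this (a sketch only; the upstream name is made up, while the node IPs, port, and hostname are the example values used above):

# Hypothetical NGINX config on a separate load-balancer VM:
# forward exam.example.com to the NodePort (31110) on every worker node.
upstream exam_nodes {
    server 192.168.10.2:31110;
    server 192.168.10.3:31110;
    server 192.168.10.4:31110;
    server 192.168.10.5:31110;
}

server {
    listen 80;
    server_name exam.example.com;

    location / {
        proxy_pass http://exam_nodes;
    }
}

And we would need a similar block again for result.example.com, plus updates every time nodes are added or removed.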

Hmm, but setting up all of that external load balancing, and then maintaining and managing it, can be a tedious task.

And this is where the LoadBalancer service comes into play. If we are on a supported cloud platform like GCP, AWS, or Azure, we can use the native load balancer of that cloud platform.

Kubernetes has support for integrating with the native load balancers of certain cloud providers and configuring them for us.

LoadBalancer Service

So all we need to do is set the service type for the front-end services to LoadBalancer instead of NodePort in the service-definition file.

apiVersion: v1
kind: Service
metadata:
  name: exam-service
spec:
  type: LoadBalancer
  selector:
    app: exam-app        # assuming the exam Pods carry this label
  ports:
    - targetPort: 80
      port: 80
      nodePort: 31110
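
With that in place, we apply the file and let Kubernetes (on a supported cloud) provision the external load balancer for us. The file name below is just an assumption for this example:

# create/update the service (file name assumed for this example)
kubectl apply -f exam-service.yaml

# on a supported cloud, the EXTERNAL-IP column moves from <pending>
# to the address of the provisioned load balancer
kubectl get service exam-service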

Remember, this only works with supported cloud platforms; GCP, AWS, and Azure are definitely supported.

So if you set the type of service to LoadBalancer in an unsupported environment like VirtualBox, it would have the same effect as setting it to NodePort; it won’t do any kind of external load balancer configuration.

That’s it for the LoadBalancer service. Thanks for reading :)

Refer to other useful Kubernetes articles:

How kubectl apply command works?

Kubernetes Services for Absolute Beginners — NodePort

Kubernetes Services for Absolute Beginners — ClusterIP

Labels and Selectors in Kubernetes

Kubernetes workflow for Absolute Beginners
