What are microservices? Kubernetes for scalability in microservices.

Armish Munir
5 min read · Oct 24, 2022


You might have heard the cool term “microservices” without fully understanding it. Let’s talk about microservices: I’ll explain what microservices are, and why and how we use them. But before getting into that, you should know what monolithic architecture is and why the industry moved toward microservices architecture.

Monolithic Architecture:

Before microservices architecture, the standard way of developing an application was monolithic architecture, which means all the components of the application (basically the whole code) are part of a single unit.

Amazon app (monolithic arch)

For example, take Amazon as an application built using a monolithic architecture: all of its parts, i.e. user auth, shopping cart, products, payments, etc., would live in a single code base.

Challenges & drawbacks of monolithic architecture:

Everything is developed and deployed as a single unit, which creates a programming-language barrier: your whole application must be written in a single language with one technology stack. Another of the many drawbacks of monolithic architecture is that the whole team has to coordinate constantly and be very careful not to affect each other’s work. Also, if one component of the application changes, you must redeploy the whole application, which means repeating this redundant work on every update.

In case of high traffic on a holiday, if you want to scale up a single component of the application, say the payments module, you can’t. You have to scale the whole application, which means higher cost and lower scalability. There are also difficulties when services need different dependency versions, and a bug in any module can potentially bring down the whole application.

What are microservices?

With microservices we break an application down into multiple smaller, independent services. Now you might have questions: how do we divide our application into microservices, how big should each one be, what goes where, how many services do I need to create? Let’s answer them.

We split the code, creating each microservice on the basis of business functionality, not technical functionality.

In terms of size, each microservice should do one isolated job. You don’t have a microservice doing multiple jobs, e.g. one service handling both payments and notifications; those should be managed by different microservices. Microservices should be self-contained and independent from each other, which means each service is developed, managed, and scaled independently, without any team, language, or technology dependency on any other service.

Kubernetes:

Kubernetes provides a new set of abstractions that go well beyond basic container deployment and enable you to focus on the big picture. Previously the focus was on deploying applications to individual machines, which locks you into limited workflows. Kubernetes abstracts away the individual machines and treats the entire cluster like a single logical machine.

The easiest way to get started with Kubernetes is the kubectl run command. The command below launches a single instance of the Nginx container.

$ kubectl run nginx --image=nginx:1.10.0
# the output will be a response as follows:
deployment "nginx" created

In Kubernetes, all containers run in what’s called a pod. You can use the following command to view the running Nginx container:

$ kubectl get pods

Pods:

What are pods? Let’s discuss. Pods represent a logical application: a pod holds a collection of one or more containers. Generally, if you have multiple containers with a hard dependency on each other, they are packaged together inside a single pod.

Kubernetes doesn’t run containers directly; instead it wraps one or more containers into a higher-level structure called a pod. All containers in the same pod share the same resources and local network. How can we create pods? Pods can be created using pod configuration files:

cat pods/monolith.yaml
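The contents of such a pod configuration file might look roughly like this. This is a minimal sketch, not the article’s actual file: the image name and port are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: monolith
  labels:
    app: monolith
spec:
  containers:
    - name: monolith
      # hypothetical image name for illustration
      image: example/monolith:1.0.0
      ports:
        - containerPort: 80
```

You would then create the pod with kubectl create -f pods/monolith.yaml.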

Let’s get back to our topic. The Nginx container is running. We can expose it outside of Kubernetes using the command:

$ kubectl expose deployment nginx --port=80 --type=LoadBalancer

Behind the scenes, Kubernetes created an external load balancer with a public IP address attached to it. Any client who hits that public IP address will be routed to the pods behind the service. In this case that would be the Nginx pod.
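The Service object that kubectl expose generates behind the scenes is roughly equivalent to the manifest below. This is a sketch under stated assumptions: the selector label follows the run=&lt;name&gt; convention that older kubectl run versions applied to their pods.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer      # provisions an external load balancer with a public IP
  selector:
    run: nginx            # assumed label; matches the pods created by kubectl run
  ports:
    - port: 80
      targetPort: 80
```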

Scaling:

Before getting into scaling: we have already created a deployment. Let’s see how we can create deployments for each service. Deployments are a declarative way to say what goes where; a deployment drives the current state toward the desired state. Deployments use a Kubernetes concept called replica sets to ensure that the current number of pods equals the desired number.

$ cat deployments/auth.yaml                # examine the auth deployment config file
$ kubectl create -f deployments/auth.yaml  # create the auth deployment
$ kubectl create -f services/auth.yaml     # create the auth service
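The deployment config file being examined might look roughly like the sketch below. The image name and labels are assumptions for illustration, not the article’s actual file.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1                 # desired number of pods; the replica set enforces this
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          # hypothetical image for illustration
          image: example/auth:1.0.0
          ports:
            - containerPort: 80
```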

Scaling is done by updating the replicas field in our deployment manifest. The deployment creates a replica set to handle pod creation, deletion, and updates; the deployment owns and manages its replica sets for us.

$ kubectl get replicasets                         # view the current replica sets
$ kubectl get pods -l "app=delance,track=stable"
$ kubectl apply -f deployments/delance.yaml
$ kubectl get replicasets
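Scaling up here means editing the replicas field in the manifest before running kubectl apply. Assuming a deployment named delance being scaled to three pods, the change is a one-line edit:

```yaml
spec:
  replicas: 3   # was 1; on apply, the replica set creates two additional pods
```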

In the end, we have multiple copies of our application service running in Kubernetes, and a single front-end service proxying traffic to all three pods. This allows us to share the load and scale our containers with Kubernetes.
