Node.js Load Balancer Using NGINX

NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. It started out as a web server designed for maximum performance and stability. In addition to its HTTP server capabilities, NGINX can also function as a proxy server for email (IMAP, POP3, and SMTP) and as a reverse proxy and load balancer for HTTP, TCP, and UDP servers.

In this article, we will learn how to use NGINX as a load balancer to handle multiple requests and distribute incoming traffic across Node.js servers.

What is load balancing?

Load balancing is an approach to distributing incoming traffic across multiple application servers.

Why load balancing?

As your application becomes popular, the number of users grows, and with it the volume of traffic and requests. At that point we need a load balancer in front of the application to handle the incoming requests and traffic, improve performance, reduce response time, and achieve higher availability.

The load balancer sits between the client and the application servers and decides which server each request should go to. The decision can be made using different algorithms:

  • Round-robin (default)
  • Hash
  • IP hash
  • Least connections
  • Least time

We will discuss these methods after the implementation part.

Implementation

We will create a simple HTTP server and run multiple instances of this application using Docker.

Steps

  • Develop an HTTP server and build an image for this app.
  • Run three instances of this application as Docker containers.
  • Build an image for NGINX with the load balancing configuration.
  • Run the NGINX image and test load balancing via the browser.

load balancer docker and nodejs

Note: Make sure that Docker is installed on your machine; if it is not installed, visit this link and install Docker.

To summarize the advantages of using Docker for your application: containers speed up your deployments, make your application portable across platforms, produce lightweight images, simplify maintenance, and scale easily.

  • Create a new project loadBalancer and inside this directory run the following command: npm init -y
  • Install the express package by running the following command: npm i express
  • Create a new file app.js; inside this file we will create the HTTP server

in loadBalancer/app.js


const express = require("express");

const app = express();
const PORT = process.env.PORT || 4000;
// Each instance identifies itself via the "server" environment variable.
const server = process.env.server || "GateWay";

app.get("/", (req, res) => {
    res.send(`This request was redirected to server ${server}`);
});

app.listen(PORT, () => {
    console.log(`Server running at port ${PORT}`);
});

  • Create a Dockerfile, which is used to build the Docker image for this application

in loadBalancer/Dockerfile

FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 4000
CMD ["node", "app.js"]

To create the Docker image, run the following command inside the directory where the Dockerfile exists: docker build -t server-app:1.0.0 .

After that, to make sure everything is working fine, check that the image was created by running the following command: docker images

Docker images

Inside loadBalancer create a new directory called gateway, and inside gateway create two files: nginx.conf for the NGINX configuration and Dockerfile for building the NGINX image.

File structure for the project

in loadBalancer/gateway/nginx.conf

upstream loadbalance {
    least_conn;
    server 172.17.0.1:4001;
    server 172.17.0.1:4002;
    server 172.17.0.1:4003;
}

server {
    listen 80;

    location / {
        proxy_pass http://loadbalance;
    }
}
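Here 172.17.0.1 is the default gateway address of Docker's bridge network on Linux, which lets the NGINX container reach ports published on the host. On Docker Desktop (macOS/Windows) that address typically does not work; a common alternative, shown as a sketch below, is the special hostname host.docker.internal:

```nginx
upstream loadbalance {
    least_conn;
    # On Docker Desktop, host.docker.internal resolves to the host machine.
    server host.docker.internal:4001;
    server host.docker.internal:4002;
    server host.docker.internal:4003;
}
```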

in loadBalancer/gateway/Dockerfile

FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/default.conf

After that, build the NGINX image by running the following command: docker build -t gateway:1.0.0 .

Running and Testing

We will create three containers as three instances of the HTTP server: the first running on port 4001, the second on 4002, and the third on 4003, by running the following commands:

docker container run -d -p 4001:4000 --name server-01 -e "server=one" server-app:1.0.0

docker container run -d -p 4002:4000 --name server-02 -e "server=two" server-app:1.0.0

docker container run -d -p 4003:4000 --name server-03 -e "server=three" server-app:1.0.0

Now we have three instances running. If you open the browser and visit http://localhost:4001, http://localhost:4002, and http://localhost:4003, you will see the following output:

three servers

Finally, we will run the NGINX container as a load balancer by running the following command: docker container run -p 4000:80 -d gateway:1.0.0

Now visit http://localhost:4000 and refresh the page several times; each request will be redirected to a different server. This is the implementation of load balancing.

NGINX Load balance methods

Round-robin

This method is used by default (there is no directive for enabling it). Requests are distributed across the servers with server weights taken into consideration.

In the NGINX configuration file, the setup for this method looks like this:

upstream my_http_servers {
    server 127.0.0.1:4000;      # httpServer1 listens to port 4000
    server 127.0.0.1:4001;      # httpServer2 listens to port 4001
    server 127.0.0.1:4002;      # httpServer3 listens to port 4002
    server 127.0.0.1:4003;      # httpServer4 listens to port 4003
}
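Conceptually, round-robin (with equal weights) just cycles through the server list. A minimal sketch in Node.js, with an illustrative server list:

```javascript
// Minimal round-robin selector sketch (equal weights assumed).
const servers = [
  "127.0.0.1:4000",
  "127.0.0.1:4001",
  "127.0.0.1:4002",
  "127.0.0.1:4003",
];

let next = 0;
function pickServer() {
  const server = servers[next];
  next = (next + 1) % servers.length; // wrap around to the first server
  return server;
}

// Eight requests walk through the list twice.
for (let i = 0; i < 8; i++) {
  console.log(pickServer());
}
```

NGINX additionally supports per-server weights (e.g. server 127.0.0.1:4000 weight=3;), which skew how often each server is picked.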

Hash

In this method, Nginx calculates a hash that is based on a combination of text and Nginx variables and assigns it to one of the servers. The incoming request matching the hash will go to that specific server only.

upstream my_http_servers {
    hash $scheme$request_uri;
    server 127.0.0.1:4000;      # httpServer1 listens to port 4000
    server 127.0.0.1:4001;      # httpServer2 listens to port 4001
    server 127.0.0.1:4002;      # httpServer3 listens to port 4002
    server 127.0.0.1:4003;      # httpServer4 listens to port 4003
}

IP hash

This method is the only available for the HTTP server. In this method, the hash is calculated based on the IP address of a client. This method makes sure the multiple requests coming from the same client go to the same server. This method is required for socket and session-based applications.

upstream my_http_servers {
    ip_hash;
    server 127.0.0.1:4000;      # httpServer1 listens to port 4000
    server 127.0.0.1:4001;      # httpServer2 listens to port 4001
    server 127.0.0.1:4002;      # httpServer3 listens to port 4002
    server 127.0.0.1:4003;      # httpServer4 listens to port 4003
}
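A sketch of the idea in Node.js. NGINX's ip_hash keys on the first three octets of the client's IPv4 address, so clients in the same /24 network land on the same server; the toy hash function below is illustrative, not NGINX's real one:

```javascript
const servers = [
  "127.0.0.1:4000",
  "127.0.0.1:4001",
  "127.0.0.1:4002",
  "127.0.0.1:4003",
];

// Key on the first three octets, so e.g. 10.0.5.7 and 10.0.5.200
// always map to the same server (toy hash for illustration).
function pickServer(clientIp) {
  const key = clientIp.split(".").slice(0, 3).join(".");
  let hash = 0;
  for (const ch of key) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return servers[hash % servers.length];
}

console.log(pickServer("10.0.5.7"));
console.log(pickServer("10.0.5.200")); // same server as above
```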

Least connections

A request is sent to the server with the least number of active connections.

upstream my_http_servers {
    least_conn;
    server 127.0.0.1:4000;      # httpServer1 listens to port 4000
    server 127.0.0.1:4001;      # httpServer2 listens to port 4001
    server 127.0.0.1:4002;      # httpServer3 listens to port 4002
    server 127.0.0.1:4003;      # httpServer4 listens to port 4003
}
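The selection logic can be sketched in Node.js by tracking active connections per server (the counters here are a simplification of NGINX's internal bookkeeping):

```javascript
// Track active connections per server and pick the least-loaded one.
const servers = [
  { addr: "127.0.0.1:4000", active: 0 },
  { addr: "127.0.0.1:4001", active: 0 },
  { addr: "127.0.0.1:4002", active: 0 },
  { addr: "127.0.0.1:4003", active: 0 },
];

function pickServer() {
  // Keep the server with the fewest active connections;
  // ties go to the earlier entry in the list.
  const target = servers.reduce((min, s) => (s.active < min.active ? s : min));
  target.active++; // a new request is now in flight on this server
  return target.addr;
}

function finish(addr) {
  servers.find((s) => s.addr === addr).active--; // request completed
}

console.log(pickServer()); // 127.0.0.1:4000 (all tied, first wins)
console.log(pickServer()); // 127.0.0.1:4001
```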

Least time

In this method (available in NGINX Plus), NGINX computes a value for each server from the current number of active connections and the weighted average response time of past requests, and sends the incoming request to the server with the lowest value.
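A rough sketch of the scoring idea in Node.js. Multiplying active connections by average response time is a simplified stand-in for NGINX's actual heuristic, and the per-server numbers below are made up for illustration:

```javascript
// Score each server by active connections x average response time;
// the lowest score wins (simplified stand-in for least_time).
const servers = [
  { addr: "127.0.0.1:4000", active: 2, avgMs: 120 }, // score 240
  { addr: "127.0.0.1:4001", active: 5, avgMs: 40 },  // score 200
  { addr: "127.0.0.1:4002", active: 1, avgMs: 300 }, // score 300
];

function pickServer() {
  const score = (s) => s.active * s.avgMs; // lower is better
  return servers.reduce((best, s) => (score(s) < score(best) ? s : best)).addr;
}

console.log(pickServer()); // 127.0.0.1:4001
```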

Which load balancing method to use?

This depends on two factors: the nature of your application and its traffic pattern.