Nginx as a Load Balancer – Some Details from Our Exploration

A complex web application will typically have a load balancer in front to handle incoming web requests. A picture of a typical web application deployment environment can be drawn as below –

[Figure: Load Balancer in front of the web/application servers]

We wrote about round-robin load balancing with nginx and node.js in our previous article.

In this article we will discuss some of the load balancing schemes available in nginx and how to configure them for web applications.

An nginx server can be used to distribute incoming web requests across different web/application servers.

The Load Balancing Schemes –

Round Robin – Nginx sends web requests to the different servers in the order they are defined in the nginx configuration file.

Example Configuration  –

http {

    upstream sampleapp {
        # round robin is the default scheme when nothing else is specified
        server <<dns entry or IP address (optionally with port)>>;
        server <<another dns entry or IP address (optionally with port)>>;
    }
    ...
    server {
        listen 80;
        ...
        location / {
            proxy_pass http://sampleapp;
        }
    }
}

In the above, only the DNS entries (or IP addresses) of the servers need to be listed in the upstream block, which is given a name – in our case sampleapp. The same name must then be used in the proxy_pass directive.
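For illustration, here is a minimal sketch of the same round-robin setup with the placeholders filled in; the backend addresses 192.168.1.10:8080 and app2.example.com:8080 are hypothetical and should be replaced with your own servers.

http {
    upstream sampleapp {
        # two hypothetical backends; round robin is the default scheme
        server 192.168.1.10:8080;
        server app2.example.com:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://sampleapp;
        }
    }
}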
 

Least Connections – The web request is sent to the server that has the fewest active connections, i.e. the least loaded server.

Example Configuration  –

http {

    upstream sampleapp {
        least_conn;    # pick the server with the fewest active connections
        server <<dns entry or IP address (optionally with port)>>;
        server <<another dns entry or IP address (optionally with port)>>;
    }
    ...
    server {
        listen 80;
        ...
        location / {
            proxy_pass http://sampleapp;
        }
    }
}

In the above, the only line added to the upstream block is least_conn; everything else is the same as in the previous example.

IP Hash – With the above two methods, subsequent web requests from the same client can be sent to different servers, which makes session handling complex; only DB-based session persistence and handling would work in that case. To overcome this, we can use the ip_hash scheme, where subsequent web requests from the same client are sent to the same server.

Example Configuration  –

http {

    upstream sampleapp {
        ip_hash;       # requests from the same client IP go to the same server
        server <<dns entry or IP address (optionally with port)>>;
        server <<another dns entry or IP address (optionally with port)>>;
    }
    ...
    server {
        listen 80;
        ...
        location / {
            proxy_pass http://sampleapp;
        }
    }
}

In the above, the only line added to the upstream block is ip_hash; everything else is the same as in the first example.

Weighted Load Balancing – We can configure nginx so that it sends more web requests to the more powerful servers and fewer requests to the servers with fewer resources. To do this, a weight is defined for the more powerful server.

Example Configuration  –

http {

    upstream sampleapp {
        server <<dns entry or IP address (optionally with port)>> weight=2;   # receives 2 out of every 3 requests
        server <<another dns entry or IP address (optionally with port)>>;
    }
    ...
    server {
        listen 80;
        ...
        location / {
            proxy_pass http://sampleapp;
        }
    }
}

In the above, weight=2 is set for the first server. This means that out of every 3 requests, 2 will go to the first server and 1 to the second server. The weight parameter can also be combined with the ip_hash scheme, as in the sketch below.
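Here is a minimal sketch of weight combined with ip_hash; the host names app1.example.com and app2.example.com are hypothetical placeholders, and this combination assumes a reasonably recent nginx version (check the nginx documentation for your version).

http {
    upstream sampleapp {
        ip_hash;                            # keep each client pinned to one server
        server app1.example.com weight=2;   # hypothetical host, receives roughly twice the share of clients
        server app2.example.com;            # hypothetical host
    }

    server {
        listen 80;
        location / {
            proxy_pass http://sampleapp;
        }
    }
}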

For more details about these load balancing settings, refer to the nginx documentation.
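As a pointer to what the documentation covers, below is a hedged sketch of a few optional per-server parameters (max_fails, fail_timeout, backup, down) that can be added in the upstream block; the addresses are hypothetical, and exact behaviour should be checked against the nginx documentation.

http {
    upstream sampleapp {
        server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;  # mark server as failed after 3 errors within 30s
        server 192.168.1.11:8080;
        server 192.168.1.12:8080 backup;   # used only when the other servers are unavailable
        # server 192.168.1.13:8080 down;   # permanently marked as unavailable
    }

    server {
        listen 80;
        location / {
            proxy_pass http://sampleapp;
        }
    }
}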

Reference: Load Balancing with Nginx

We will discuss nginx caching in our next article(s).

If you find this article helpful, you can connect with us on Google+ and Twitter for further updates.
