Nginx + Keepalived production environment setup


Introduction

Nginx's main uses include reverse proxying and load balancing. Whatever it is used for, Nginx itself also needs to be highly available to avoid becoming a single point of failure. High availability for Nginx can be achieved by pairing it with Keepalived. The main idea is to configure active and standby Nginx instances and monitor them with Keepalived: when the active Nginx goes down, the virtual IP (VIP) is automatically transferred to the standby Nginx, making the Nginx layer highly available.

Deployment architecture

[Figure: deployment architecture diagram]

Nginx deployment

In many companies the production machines have no Internet access, so here we install Nginx from source.

1) Download the source package from the nginx download page (here we use the latest stable version, nginx-1.20.1). The directory structure after extraction looks like this:

[Figure: directory structure of the extracted nginx-1.20.1 package]
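If a machine with Internet access is available, the package can be fetched and extracted roughly as follows (the URL is the standard nginx.org download location for this version):

wget http://nginx.org/download/nginx-1.20.1.tar.gz
tar -zxvf nginx-1.20.1.tar.gz
cd nginx-1.20.1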

2) Install gcc and the other build dependencies:

yum -y install gcc pcre-devel zlib-devel

3) Enter the installation package directory and execute the following command to configure the installation path:

./configure --prefix=/home/nginx

By default the installation path is /usr/local/nginx. Production machines usually have a data disk mounted separately, and the Nginx access logs grow over time, so to avoid affecting the disk that holds the operating system we set the installation path to a directory on that separately mounted disk.
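To double-check that the chosen prefix really sits on the separately mounted data disk (assuming /home is that mount point in this setup), you can inspect the mount with:

df -h /home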

4) The configure command generates a Makefile in the current directory. Continue with the following commands to compile and install:
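make              # compile nginx
make install      # install into the prefix configured above (/home/nginx)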

5) Because we configured the installation prefix, Nginx is installed under /home/nginx. Start it:

cd /home/nginx/sbin
./nginx
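To confirm that Nginx started correctly, you can check for the worker processes and send a test request to the default port (assuming the default listen port 80 has not been changed yet):

ps -ef | grep nginx
curl -I http://127.0.0.1/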

Install Nginx on both the master and the standby machine following the steps above.

Keepalived deployment

1) Install Keepalived with yum:

yum -y install keepalived

2) Modify the configuration file:

vi /etc/keepalived/keepalived.conf

Master keepalived.conf:

global_defs {
   router_id LVS_DEVEL
}

vrrp_script chk_nginx {
    script "/home/shell/check_nginx_pid.sh" #Nginx process detection script
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER   
    interface ens33  #NIC device
    virtual_router_id 51  #Virtual routing number, master and slave must be consistent
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.1.200 #Virtual ip settings
    }
}

Keepalived decides whether to move the virtual IP based on whether the Keepalived process itself is alive. If Nginx dies but the Keepalived process is still running, the VIP will not be transferred. The check script therefore tries to restart Nginx, and if the restart fails it stops the Keepalived process so that the VIP can move to the standby machine:

check_nginx_pid.sh

#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ]; then                        # no nginx process found, try to start it
      /home/nginx/sbin/nginx                 # (re)start nginx
      if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then   # nginx could not be started: stop keepalived so the VIP is transferred
              systemctl stop keepalived
      fi
fi

Give the check_nginx_pid.sh script execute permissions:

chmod 777 check_nginx_pid.sh
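Before relying on the script, it can be tested by hand: stop Nginx and run the script once; it should bring Nginx back, and only stop Keepalived if the start fails (paths as configured above):

/home/nginx/sbin/nginx -s stop       # stop nginx manually
/home/shell/check_nginx_pid.sh       # the script should start nginx again
ps -C nginx --no-header              # nginx processes should be listed again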

Backup keepalived.conf:

global_defs {
   router_id LVS_DEVEL
}

vrrp_script chk_nginx {
    script "/home/shell/check_nginx_pid.sh" #Nginx process detection script
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP   
    interface ens33  #NIC device
    virtual_router_id 51 #Virtual routing number, master and slave must be consistent
    priority 90 #The priority here is less than MASTER
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.1.200 #Virtual ip settings
    }
}

3) After Keepalived is configured on both the master and the backup, start it with:

systemctl start keepalived

4) To check whether the startup succeeded and whether the current machine is MASTER or BACKUP, look at the log:

tail -f /var/log/messages

5) Use the following command to check whether the virtual IP has been configured successfully:

ip addr
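On the MASTER the VIP should be bound to the configured NIC, while on the BACKUP it should be absent. A quick way to check only the VIP (interface name and address as configured above):

ip addr show ens33 | grep 192.168.1.200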

Master-slave switching test

Stop Keepalived on the MASTER and check whether the virtual IP moves to the BACKUP host. If the transfer succeeds, subsequent requests are handled by Nginx on the standby machine, i.e. Nginx high availability has been achieved.

Execute on the MASTER host:

systemctl stop keepalived

On the standby machine, check whether the virtual IP has been transferred over:

ip addr

View the log during the switching process:

tail -f /var/log/messages

If the VIP transfers successfully, the environment is set up correctly.
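To fail back afterwards, start Keepalived on the original MASTER again; with the configuration above (priority 100 vs. 90 and no nopreempt option), it will preempt and take the VIP back:

systemctl start keepalived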

Other notes

The weight parameter in keepalived.conf

Attentive readers may have noticed the weight parameter in vrrp_script. What does it do?

The official documentation (Keepalived for Linux) explains it as follows:

# adjust priority by this weight, (default: 0)
# For description of reverse, see track_script.
# 'weight 0 reverse' will cause the vrrp instance to be down when the
# script is up, and vice versa.
weight <INTEGER> [reverse]

The first part means that this parameter adjusts the priority, and priority is what the MASTER election is based on, so weight influences which node Keepalived elects as MASTER:

# for electing MASTER, highest priority wins.
# to be MASTER, make this 50 more than on other machines.
priority 100

The remaining lines mean that with the default weight 0, a failing check script takes the VRRP instance down and the switchover completes; adding reverse inverts this, taking the instance down when the script succeeds instead.

How exactly does weight adjust the priority? The official description continues:

vrrp tracking scripts that will cause vrrp instances to go down if
they exit a non-zero exit status, or if a weight is specified will add
or subtract the weight to/from the priority of that vrrp instance.

In other words, Keepalived adds or subtracts the configured weight to/from the instance priority depending on the result of the check script, and the adjusted priority then affects the MASTER election.

In actual testing, when weight > 0, the priority is increased by the weight when the check script succeeds and decreased by the weight when it fails.

When weight < 0, the priority stays unchanged when the check succeeds and is reduced by the weight when the check fails.

Also in testing, the priority does not keep changing: repeated successes or failures do not keep increasing or decreasing the priority indefinitely. The adjustment appears to be applied only once per state change, which is presumably an internal optimization of Keepalived.
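As a concrete illustration of how weight alone can drive a failover without stopping Keepalived, consider a hypothetical variant of the vrrp_script above with a negative weight (the value is illustrative, not part of the setup in this article):

vrrp_script chk_nginx {
    script "/home/shell/check_nginx_pid.sh"
    interval 2
    weight -20   # subtract 20 from the priority while the check is failing
}

With MASTER priority 100 and BACKUP priority 90, a failed check drops the master's effective priority to 80, which is below 90, so the backup wins the election and takes over the VIP; once the check succeeds again the master's priority returns to 100 and it takes the VIP back.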

Troubleshooting when master-slave switchover fails

  1. Confirm that the firewall and SELinux are disabled (or that VRRP traffic is allowed through the firewall)
  2. The detailed logs of the switchover can be viewed in the /var/log/messages file

A production environment configuration for Nginx reverse proxy and load balancing

nginx.conf:

user  root;
worker_processes  8; # worker processes, usually set to the number of CPU cores

events {
    worker_connections  1024; # max connections per worker process, including client connections and connections to upstream servers
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65; # keep-alive timeout for client connections

    #gzip  on;

    upstream api_server {
        server 192.168.1.106:8848;
        server 192.168.1.107:8848;
        server 192.168.1.108:8848;
        keepalive 15;  # max idle keep-alive connections to the upstream servers cached per worker process
    }

    server {
        listen       8848;
        server_name  192.168.1.200; # set to the virtual IP address, because clients access Nginx through the VIP

        location / {
           proxy_pass http://api_server;
           proxy_http_version 1.1; # use HTTP/1.1 so keep-alive connections to the backend are possible, avoiding excessive TIME_WAIT sockets
           proxy_set_header Connection ""; # clear the Connection header so upstream keep-alive connections can be reused
        }
    }
}

The above is the production configuration for nginx-1.20.1; many parameters are not covered here and can simply be left at their defaults.
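After editing nginx.conf, the configuration can be checked and reloaded without restarting (using the install path from above):

/home/nginx/sbin/nginx -t          # test the configuration syntax
/home/nginx/sbin/nginx -s reload   # reload nginx with the new configuration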
