Linux High Availability (HA): LVS + Keepalived as a Highly Available Front-End Load Balancer

Abstract

Keepalived is a high-availability solution for LVS built on the VRRP protocol, and is used to eliminate the single point of failure at the director. For one LVS service, two servers run Keepalived: a master (MASTER) and a backup (BACKUP), which together present a single virtual IP (VIP) to the outside. The master periodically sends VRRP advertisements to the backup; when the backup stops receiving them, meaning the master is down, it takes over the VIP and continues to provide service, which guarantees high availability. Keepalived is a complete implementation of VRRP.

Four servers are prepared:
node1 172.16.6.100 ZhongH100.wxjr.com.cn
node2 172.16.6.101 ZhongH101.wxjr.com.cn
master 172.16.6.102 ZhongH102.wxjr.com.cn
slave 172.16.6.103 ZhongH103.wxjr.com.cn
vip 172.16.7.200
All systems run CentOS 6.6 x86_64.
Note:

The shell prompt shows which host each command runs on: [root@ZhongH100 ~], [root@ZhongH101 ~], [root@ZhongH102 ~] or [root@ZhongH103 ~].
A prompt of [root@ZhongH ~] means the command must be run on both node1 and node2.


I. Basic environment configuration
Set up mutual hostname resolution and SSH trust between node1 and node2, and configure time synchronization on all four machines.

1. node1

[root@ZhongH100 ~]# echo "172.16.6.101 ZhongH101.wxjr.com.cn" >> /etc/hosts
[root@ZhongH100 ~]# ssh-keygen  -t rsa -f ~/.ssh/id_rsa  -P ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
91:8f:26:e0:d6:8b:5d:1f:08:5d:2b:cc:be:a8:77:69 root@ZhongH100.wxjr.com.cn
The key's randomart image is:
+--[ RSA 2048]----+
|          .      |
|       + o .     |
|    . . B .      |
|   . o o *       |
|    o o S o      |
|   . o * o .     |
|    . + ...      |
|     .. E        |
|    .. o         |
+-----------------+
[root@ZhongH100 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@ZhongH101.wxjr.com.cn
The authenticity of host 'zhongh101.wxjr.com.cn (172.16.6.101)' can't be established.
RSA key fingerprint is e9:95:aa:7f:39:5b:52:a7:9b:5e:fe:98:19:82:14:e3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'zhongh101.wxjr.com.cn,172.16.6.101' (RSA) to the list of known hosts.
root@zhongh101.wxjr.com.cn's password:
Now try logging into the machine, with "ssh 'root@ZhongH101.wxjr.com.cn'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@ZhongH100 ~]# echo "*/5 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1" >> /var/spool/cron/root
[root@ZhongH100 ~]# 

2. node2

[root@ZhongH101 ~]# echo "172.16.6.100 ZhongH100.wxjr.com.cn" >> /etc/hosts
[root@ZhongH101 ~]# ssh-keygen  -t rsa -f ~/.ssh/id_rsa  -P ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
26:fa:25:43:f8:32:c8:66:9c:92:f6:f7:c4:cb:66:4d root@ZhongH101.wxjr.com.cn
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|     .           |
|    . o S        |
| + o +.o E       |
|o.B + +o+        |
|.+.  =o*..       |
|   .. =+         |
+-----------------+
[root@ZhongH101 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@ZhongH100.wxjr.com.cn
The authenticity of host 'zhongh100.wxjr.com.cn (172.16.6.100)' can't be established.
RSA key fingerprint is 90:26:f4:28:31:04:03:6c:9f:ec:e4:09:04:32:92:ee.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'zhongh100.wxjr.com.cn,172.16.6.100' (RSA) to the list of known hosts.
root@zhongh100.wxjr.com.cn's password:
Now try logging into the machine, with "ssh 'root@ZhongH100.wxjr.com.cn'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@ZhongH101 ~]# echo "*/5 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1" >> /var/spool/cron/root
[root@ZhongH101 ~]# 

3. master

[root@ZhongH102 ~]# echo "*/5 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1" >> /var/spool/cron/root

4. slave

[root@ZhongH103 ~]# echo "*/5 * * * * /usr/sbin/ntpdate pool.ntp.org >/dev/null 2>&1" >> /var/spool/cron/root

II. Install Nginx on node1 and node2

1. node1

[root@ZhongH100 ~]# cd /tmp && wget http://nginx.org/download/nginx-1.9.1.tar.gz
[root@ZhongH100 /tmp]# tar xf nginx-1.9.1.tar.gz
[root@ZhongH100 /tmp]# cd nginx-1.9.1
[root@ZhongH100 /tmp/nginx-1.9.1]# Username="www" && for i in `seq 1000 1500`;do [ -z "$(awk -F: '{print$3,$4}' /etc/passwd | grep "$i")" -a -z "$(awk -F: '{print$3}' /etc/group | grep "$i")" ] && UGID=$i && break;done && groupadd -g $UGID $Username && useradd -M -u $UGID -g $UGID -s /sbin/nologin $Username
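The one-liner above scans /etc/passwd and /etc/group for an ID between 1000 and 1500 that is used by neither a user nor a group, then creates the www user with it. Note that its bare grep "$i" matches substrings, so an existing ID such as 21000 would wrongly block 1000. A stricter exact-match sketch of the same idea (the find_free_id helper name is ours, not from the original):

```shell
# Find the first ID in 1000..1500 used by neither a UID nor a GID.
# grep -x (exact line match) avoids the substring false positives
# of the original one-liner's bare grep "$i".
find_free_id() {
  local i
  for i in $(seq 1000 1500); do
    if ! cut -d: -f3 /etc/passwd | grep -qx "$i" && \
       ! cut -d: -f3 /etc/group  | grep -qx "$i"; then
      echo "$i"
      return 0
    fi
  done
  return 1
}

# Usage (as root):
#   UGID=$(find_free_id) && groupadd -g "$UGID" www && \
#   useradd -M -u "$UGID" -g "$UGID" -s /sbin/nologin www
```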
[root@ZhongH100 /tmp/nginx-1.9.1]# yum install pcre-devel pcre -y
[root@ZhongH100 /tmp/nginx-1.9.1]# mkdir -p {/tmp/nginx,/var/run/nginx,/var/lock}
[root@ZhongH100 /tmp/nginx-1.9.1]# ./configure --prefix=/usr/local/nginx/ --user=www --group=www \
--error-log-path=/tmp/nginx/error.log --http-log-path=/tmp/nginx/access.log \
--pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock \
--with-pcre --with-http_ssl_module --with-http_flv_module \
--with-http_spdy_module --with-http_gzip_static_module \
--with-http_stub_status_module --http-client-body-temp-path=/usr/local/nginx/client/ \
--http-proxy-temp-path=/usr/local/nginx/proxy/ --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ \
--http-uwsgi-temp-path=/usr/local/nginx/uwsgi --http-scgi-temp-path=/usr/local/nginx/scgi
[root@ZhongH100 /tmp/nginx-1.9.1]# make -j $(awk '/processor/{i++}END{print i}' /proc/cpuinfo) && make install && echo $?
[root@ZhongH100 /tmp/nginx-1.9.1]# echo "export PATH=/usr/local/nginx/sbin:\$PATH" > /etc/profile.d/nginx1.9.1.sh
[root@ZhongH100 /tmp/nginx-1.9.1]# . /etc/profile.d/nginx1.9.1.sh
[root@ZhongH100 /tmp/nginx-1.9.1]# nginx -V
nginx version: nginx/1.9.1
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx/ --user=www --group=www --error-log-path=/tmp/nginx/error.log --http-log-path=/tmp/nginx/access.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --with-pcre --with-http_ssl_module --with-http_flv_module --with-http_spdy_module --with-http_gzip_static_module --with-http_stub_status_module --http-client-body-temp-path=/usr/local/nginx/client/ --http-proxy-temp-path=/usr/local/nginx/proxy/ --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ --http-uwsgi-temp-path=/usr/local/nginx/uwsgi --http-scgi-temp-path=/usr/local/nginx/scgi
[root@ZhongH100 /tmp/nginx-1.9.1]# wget http://www.dwhd.org/script/Nginx-init-CentOS -O /etc/rc.d/init.d/nginx
[root@ZhongH100 /tmp/nginx-1.9.1]# chmod +x /etc/rc.d/init.d/nginx
[root@ZhongH100 /tmp/nginx-1.9.1]# sed -i '/<h1>Welcome to nginx!<\/h1>/a <h1> This is node1!</h1>' /usr/local/nginx/html/index.html
[root@ZhongH100 /tmp/nginx-1.9.1]# /etc/init.d/nginx start
Starting nginx:                                            [  OK  ]
[root@ZhongH100 /tmp/nginx-1.9.1]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1871/master
tcp        0      0 127.0.0.1:6010              0.0.0.0:*                   LISTEN      1973/sshd
tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN      1466/rpcbind
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      6388/nginx
tcp        0      0 0.0.0.0:25552               0.0.0.0:*                   LISTEN      1591/rpc.statd
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1791/sshd
tcp        0      0 127.0.0.1:631               0.0.0.0:*                   LISTEN      1620/cupsd
tcp        0      0 ::1:25                      :::*                        LISTEN      1871/master
tcp        0      0 ::1:6010                    :::*                        LISTEN      1973/sshd
tcp        0      0 :::111                      :::*                        LISTEN      1466/rpcbind
tcp        0      0 :::38226                    :::*                        LISTEN      1591/rpc.statd
tcp        0      0 :::22                       :::*                        LISTEN      1791/sshd
tcp        0      0 ::1:631                     :::*                        LISTEN      1620/cupsd
[root@ZhongH100 /tmp/nginx-1.9.1]# scp /tmp/nginx-1.9.1.tar.gz ZhongH101.wxjr.com.cn:/tmp
nginx-1.9.1.tar.gz                                                                                  100%  835KB 835.4KB/s   00:00
[root@ZhongH100 /tmp/nginx-1.9.1]# scp /etc/rc.d/init.d/nginx ZhongH101.wxjr.com.cn:/etc/rc.d/init.d/
nginx                                                                                               100% 2611     2.6KB/s   00:00
[root@ZhongH100 /tmp/nginx-1.9.1]# chkconfig --add nginx && chkconfig nginx on && chkconfig --list nginx
nginx           0:off  1:off  2:on  3:on  4:on  5:on  6:off
[root@ZhongH100 /tmp/nginx-1.9.1]# 


2. node2

[root@ZhongH101 ~]# cd /tmp/
[root@ZhongH101 /tmp]# tar xf nginx-1.9.1.tar.gz
[root@ZhongH101 /tmp]# cd nginx-1.9.1
[root@ZhongH101 /tmp/nginx-1.9.1]# Username="www" && for i in `seq 1000 1500`;do [ -z "$(awk -F: '{print$3,$4}' /etc/passwd | grep "$i")" -a -z "$(awk -F: '{print$3}' /etc/group | grep "$i")" ] && UGID=$i && break;done && groupadd -g $UGID $Username && useradd -M -u $UGID -g $UGID -s /sbin/nologin $Username
[root@ZhongH101 /tmp/nginx-1.9.1]# yum install pcre-devel pcre -y
[root@ZhongH101 /tmp/nginx-1.9.1]# mkdir -p {/tmp/nginx,/var/run/nginx,/var/lock}
[root@ZhongH101 /tmp/nginx-1.9.1]# ./configure --prefix=/usr/local/nginx/ --user=www --group=www \
--error-log-path=/tmp/nginx/error.log --http-log-path=/tmp/nginx/access.log \
--pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock \
--with-pcre --with-http_ssl_module --with-http_flv_module \
--with-http_spdy_module --with-http_gzip_static_module \
--with-http_stub_status_module --http-client-body-temp-path=/usr/local/nginx/client/ \
--http-proxy-temp-path=/usr/local/nginx/proxy/ --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ \
--http-uwsgi-temp-path=/usr/local/nginx/uwsgi --http-scgi-temp-path=/usr/local/nginx/scgi
[root@ZhongH101 /tmp/nginx-1.9.1]# make -j $(awk '/processor/{i++}END{print i}' /proc/cpuinfo) && make install && echo $?
[root@ZhongH101 /tmp/nginx-1.9.1]# echo "export PATH=/usr/local/nginx/sbin:\$PATH" > /etc/profile.d/nginx1.9.1.sh
[root@ZhongH101 /tmp/nginx-1.9.1]# . /etc/profile.d/nginx1.9.1.sh
[root@ZhongH101 /tmp/nginx-1.9.1]# nginx -V
nginx version: nginx/1.9.1
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx/ --user=www --group=www --error-log-path=/tmp/nginx/error.log --http-log-path=/tmp/nginx/access.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --with-pcre --with-http_ssl_module --with-http_flv_module --with-http_spdy_module --with-http_gzip_static_module --with-http_stub_status_module --http-client-body-temp-path=/usr/local/nginx/client/ --http-proxy-temp-path=/usr/local/nginx/proxy/ --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ --http-uwsgi-temp-path=/usr/local/nginx/uwsgi --http-scgi-temp-path=/usr/local/nginx/scgi
[root@ZhongH101 /tmp/nginx-1.9.1]# sed -i '/<h1>Welcome to nginx!<\/h1>/a <h1> This is node2!</h1>' /usr/local/nginx/html/index.html
[root@ZhongH101 /tmp/nginx-1.9.1]# /etc/init.d/nginx start
Starting nginx:                                            [  OK  ]
[root@ZhongH101 /tmp/nginx-1.9.1]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1874/master
tcp        0      0 127.0.0.1:6010              0.0.0.0:*                   LISTEN      2649/sshd
tcp        0      0 0.0.0.0:7147                0.0.0.0:*                   LISTEN      1594/rpc.statd
tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN      1469/rpcbind
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      6245/nginx
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1796/sshd
tcp        0      0 127.0.0.1:631               0.0.0.0:*                   LISTEN      1623/cupsd
tcp        0      0 ::1:25                      :::*                        LISTEN      1874/master
tcp        0      0 ::1:6010                    :::*                        LISTEN      2649/sshd
tcp        0      0 :::13614                    :::*                        LISTEN      1594/rpc.statd
tcp        0      0 :::111                      :::*                        LISTEN      1469/rpcbind
tcp        0      0 :::22                       :::*                        LISTEN      1796/sshd
tcp        0      0 ::1:631                     :::*                        LISTEN      1623/cupsd
[root@ZhongH101 /tmp/nginx-1.9.1]# chkconfig --add nginx && chkconfig nginx on && chkconfig --list nginx
nginx           0:off  1:off  2:on  3:on  4:on  5:on  6:off
[root@ZhongH101 /tmp/nginx-1.9.1]# 



III. Configure node1 and node2

1. Create the configuration script

[root@ZhongH100 /tmp/nginx-1.9.1]# mkdir -pv ~/src
mkdir: created directory "/root/src"
[root@ZhongH100 /tmp/nginx-1.9.1]# cd ~/src
[root@ZhongH100 ~/src]# wget -q http://www.dwhd.org/script/realserver.sh
[root@ZhongH100 ~/src]# sed -ri 's/^(VIP=).*/\1172.16.7.200/' ~/src/realserver.sh
[root@ZhongH100 ~/src]# chmod +x ~/src/realserver.sh
[root@ZhongH100 ~/src]# ~/src/realserver.sh start
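realserver.sh is downloaded from the URL above and not reproduced here. Judging from the state it leaves behind (verified in the next step: the VIP on lo:0, a host route to it, arp_ignore=1 and arp_announce=2), a minimal equivalent for an LVS-DR realserver looks roughly like this sketch (the realserver function name is ours). In DR mode the VIP lives on the loopback so the realserver accepts packets addressed to it, while the ARP sysctls stop it from answering ARP for the VIP, so that only the director owns the VIP on the wire:

```shell
# Minimal LVS-DR realserver setup, an assumed equivalent of realserver.sh.
realserver() {
  local VIP=172.16.7.200
  case "$1" in
  start)
    # Bind the VIP to lo:0 with a /32 mask and a host route
    ifconfig lo:0 "$VIP" netmask 255.255.255.255 broadcast "$VIP" up
    route add -host "$VIP" dev lo:0
    # Do not answer / advertise ARP for addresses on lo (the VIP)
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
  stop)
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
  *)
    echo "Usage: realserver {start|stop}"
    return 2
    ;;
  esac
}
```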

2. Verify the configuration

[root@ZhongH100 ~/src]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:E6:29:99
          inet addr:172.16.6.100  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fee6:2999/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:104496 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6818 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:22470315 (21.4 MiB)  TX bytes:1597671 (1.5 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo:0      Link encap:Local Loopback
          inet addr:172.16.7.200  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1

[root@ZhongH100 ~/src]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 172.16.7.200/32 brd 172.16.7.200 scope global lo:0
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e6:29:99 brd ff:ff:ff:ff:ff:ff
    inet 172.16.6.100/16 brd 172.16.255.255 scope global eth0
    inet6 fe80::20c:29ff:fee6:2999/64 scope link
       valid_lft forever preferred_lft forever
3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 1a:d4:c7:50:00:e8 brd ff:ff:ff:ff:ff:ff
[root@ZhongH100 ~/src]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.7.200    0.0.0.0         255.255.255.255 UH    0      0        0 lo
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
0.0.0.0         172.16.0.1      0.0.0.0         UG    0      0        0 eth0
[root@ZhongH100 ~/src]# cat /proc/sys/net/ipv4/conf/lo/arp_ignore
1
[root@ZhongH100 ~/src]# cat /proc/sys/net/ipv4/conf/lo/arp_announce
2
[root@ZhongH100 ~/src]# cat /proc/sys/net/ipv4/conf/all/arp_ignore
1
[root@ZhongH100 ~/src]# cat /proc/sys/net/ipv4/conf/all/arp_announce
2
[root@ZhongH100 ~/src]#
[root@ZhongH100 ~/src]# ssh root@172.16.6.101 "mkdir ~/src"
[root@ZhongH100 ~/src]# scp ~/src/realserver.sh 172.16.6.101:~/src
realserver.sh                                                                                       100% 1862     1.8KB/s   00:00
[root@ZhongH100 ~/src]# 


3. Configure node2

[root@ZhongH101 /tmp/nginx-1.9.1]# cd ~/src/
[root@ZhongH101 ~/src]# ~/src/realserver.sh start
[root@ZhongH101 ~/src]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:14:0F:EA
          inet addr:172.16.6.101  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fe14:fea/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:104661 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5484 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:22481169 (21.4 MiB)  TX bytes:513481 (501.4 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

lo:0      Link encap:Local Loopback
          inet addr:172.16.7.200  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1

[root@ZhongH101 ~/src]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 172.16.7.200/32 brd 172.16.7.200 scope global lo:0
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:14:0f:ea brd ff:ff:ff:ff:ff:ff
    inet 172.16.6.101/16 brd 172.16.255.255 scope global eth0
    inet6 fe80::20c:29ff:fe14:fea/64 scope link
       valid_lft forever preferred_lft forever
3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 2e:f4:c7:f5:57:ad brd ff:ff:ff:ff:ff:ff
[root@ZhongH101 ~/src]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.16.7.200    0.0.0.0         255.255.255.255 UH    0      0        0 lo
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
0.0.0.0         172.16.0.1      0.0.0.0         UG    0      0        0 eth0
[root@ZhongH101 ~/src]# 



IV. Configure master and slave

1. Set up hostname resolution and SSH trust between master and slave

[root@ZhongH102 ~]$ ssh-keygen  -t rsa -f ~/.ssh/id_rsa  -P ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
11:92:7c:db:a0:a2:a9:ef:e3:39:b0:e4:25:1c:a8:89 root@ZhongH102.wxjr.com.cn
The key's randomart image is:
+--[ RSA 2048]----+
|     ....        |
|      o.o.       |
|.      o.+       |
|..  . . ...      |
|+..o .  S        |
|E+o.             |
|o+o              |
|o.o.             |
| +=o             |
+-----------------+
[root@ZhongH102 ~]$ echo "172.16.6.103 ZhongH103.wxjr.com.cn" >> /etc/hosts
[root@ZhongH102 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@ZhongH103.wxjr.com.cn
The authenticity of host 'zhongh103.wxjr.com.cn (172.16.6.103)' can't be established.
RSA key fingerprint is 6c:15:34:71:be:a7:c8:cb:8c:15:9a:94:ec:92:7b:ee.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'zhongh103.wxjr.com.cn,172.16.6.103' (RSA) to the list of known hosts.
root@zhongh103.wxjr.com.cn's password:
Now try logging into the machine, with "ssh 'root@ZhongH103.wxjr.com.cn'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@ZhongH102 ~]$ 
[root@ZhongH103 ~]$ ssh-keygen  -t rsa -f ~/.ssh/id_rsa  -P ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
11:92:7c:db:a0:a2:a9:ef:e3:39:b0:e4:25:1c:a8:89 root@ZhongH103.wxjr.com.cn
The key's randomart image is:
+--[ RSA 2048]----+
|     ....        |
|      o.o.       |
|.      o.+       |
|..  . . ...      |
|+..o .  S        |
|E+o.             |
|o+o              |
|o.o.             |
| +=o             |
+-----------------+
[root@ZhongH103 ~]$ echo "172.16.6.102 ZhongH102.wxjr.com.cn" >> /etc/hosts
[root@ZhongH103 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@ZhongH102.wxjr.com.cn
The authenticity of host 'zhongh102.wxjr.com.cn (172.16.6.102)' can't be established.
RSA key fingerprint is 6c:15:34:71:be:a7:c8:cb:8c:15:9a:94:ec:92:7b:ee.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'zhongh102.wxjr.com.cn,172.16.6.102' (RSA) to the list of known hosts.
root@zhongh102.wxjr.com.cn's password:
Now try logging into the machine, with "ssh 'root@ZhongH102.wxjr.com.cn'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@ZhongH103 ~]$ 

2. Install Keepalived and ipvsadm on the master

[root@ZhongH102 ~]# yum -y install openssl openssl-devel popt-devel ipvsadm
[root@ZhongH102 ~]# cd /tmp/ && wget http://keepalived.org/software/keepalived-1.2.16.tar.gz
[root@ZhongH102 /tmp]# tar xf keepalived-1.2.16.tar.gz
[root@ZhongH102 /tmp]# cd keepalived-1.2.16
[root@ZhongH102 /tmp/keepalived-1.2.16]# ./configure --prefix=/usr/local/keepalived
[root@ZhongH102 /tmp/keepalived-1.2.16]# make && make install && echo $?
[root@ZhongH102 /tmp/keepalived-1.2.16]# mkdir -pv /etc/keepalived
mkdir: created directory "/etc/keepalived"
[root@ZhongH102 /tmp/keepalived-1.2.16]# cp -a /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@ZhongH102 /tmp/keepalived-1.2.16]# cp -a /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@ZhongH102 /tmp/keepalived-1.2.16]# cp -a /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@ZhongH102 /tmp/keepalived-1.2.16]# cp -a /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@ZhongH102 /tmp/keepalived-1.2.16]# scp /tmp/keepalived-1.2.16.tar.gz 172.16.6.103:/tmp
keepalived-1.2.16.tar.gz                                                                            100%  339KB 338.8KB/s   00:00
[root@ZhongH102 /tmp/keepalived-1.2.16]# 

3. Configure Keepalived on the master

[root@ZhongH102 /tmp/keepalived-1.2.16]# cat /etc/keepalived/keepalived.conf # the changes made are commented below
! Configuration File for keepalived

global_defs {
	notification_email {
		admin@dwhd.org # administrator's email address
	}
	notification_email_from Keepalived # sender address
	smtp_server 127.0.0.1 # mail (SMTP) server
	smtp_connect_timeout 30
	router_id LVS_DEVEL
}

vrrp_instance VI_1 {
	state MASTER # role of this node
	interface eth0
	virtual_router_id 51
	priority 101 # priority; the higher value wins the VRRP election
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass YjVhZGE1NjNlYzQ4
	}
	virtual_ipaddress {
		172.16.7.200 # the virtual IP (VIP)
	}
}

virtual_server 172.16.7.200 80 { # realserver pool for the VIP
	delay_loop 6
	lb_algo rr
	lb_kind DR
	nat_mask 255.255.255.0
	#persistence_timeout 50
	protocol TCP

	real_server 172.16.6.100 80 {
		weight 1
		HTTP_GET { # health check
			url {
				path /
				status_code 200
				#digest ff20ad2481f97b1754ef3e12ecd3a9cc
			}
			connect_timeout 2
			nb_get_retry 3
			delay_before_retry 1
		}
	}

	real_server 172.16.6.101 80 {
		weight 1
		HTTP_GET {
			url {
				path /
				status_code 200
			}
			connect_timeout 2
			nb_get_retry 3
			delay_before_retry 1
		}
	}
}
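The HTTP_GET blocks above make keepalived poll each realserver's / and expect HTTP status 200 (with a 2-second connect timeout, 3 retries 1 second apart) before keeping it in the pool. The same probe can be run by hand to pre-check the setup; this is a sketch, not part of the original article, and the rs_state/check_rs helper names are ours (curl must be installed):

```shell
# Manually mimic keepalived's HTTP_GET health check: fetch / from a
# realserver and compare the HTTP status code against the expected 200.
rs_state() {
  # map an HTTP status code to the pool decision keepalived would make
  [ "$1" = "200" ] && echo "UP" || echo "DOWN"
}

check_rs() {
  local rs="$1" code
  # curl -w prints 000 for %{http_code} when the connection itself fails
  code=$(curl -s -o /dev/null --connect-timeout 2 -w '%{http_code}' "http://${rs}/") || true
  echo "${rs}: $(rs_state "$code") (HTTP ${code:-000})"
}

check_rs 172.16.6.100
check_rs 172.16.6.101
```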

4. Install Keepalived on the slave

[root@ZhongH103 ~]# cd /tmp/
[root@ZhongH103 /tmp]# tar xf keepalived-1.2.16.tar.gz
[root@ZhongH103 /tmp]# cd keepalived-1.2.16
[root@ZhongH103 /tmp/keepalived-1.2.16]# yum -y install openssl openssl-devel popt-devel ipvsadm
[root@ZhongH103 /tmp/keepalived-1.2.16]# ./configure --prefix=/usr/local/keepalived
[root@ZhongH103 /tmp/keepalived-1.2.16]# make && make install && echo $?
[root@ZhongH103 /tmp/keepalived-1.2.16]# mkdir -pv /etc/keepalived
mkdir: created directory "/etc/keepalived"
[root@ZhongH103 /tmp/keepalived-1.2.16]# cp -a /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@ZhongH103 /tmp/keepalived-1.2.16]# cp -a /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@ZhongH103 /tmp/keepalived-1.2.16]# cp -a /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@ZhongH103 /tmp/keepalived-1.2.16]# scp 172.16.6.102:/etc/keepalived/keepalived.conf /etc/keepalived/
keepalived.conf                                                                                     100%  934     0.9KB/s   00:00
[root@ZhongH103 /tmp/keepalived-1.2.16]# sed -i 's/priority 101/priority 100/' /etc/keepalived/keepalived.conf # lower the priority
[root@ZhongH103 /tmp/keepalived-1.2.16]# sed -i 's/state MASTER/state BACKUP/' /etc/keepalived/keepalived.conf # change the role to BACKUP
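After these two sed edits, the slave's copy of keepalived.conf is identical to the master's except for the role and priority in the vrrp_instance block:

```
vrrp_instance VI_1 {
	state BACKUP    # MASTER on 172.16.6.102
	interface eth0
	virtual_router_id 51
	priority 100    # the master keeps 101; the higher priority holds the VIP
	...
}
```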

5. Start Keepalived on master and slave

[root@ZhongH103 /tmp/keepalived-1.2.16]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@ZhongH103 /tmp/keepalived-1.2.16]# ssh root@172.16.6.102 "/etc/init.d/keepalived start"
Starting keepalived: [  OK  ]
[root@ZhongH103 /tmp/keepalived-1.2.16]# 

6. Check the LVS status

[root@ZhongH103 /tmp/keepalived-1.2.16]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.7.200:80 rr
  -> 172.16.6.100:80              Route   1      0          0
  -> 172.16.6.101:80              Route   1      0          0
[root@ZhongH103 /tmp/keepalived-1.2.16]# ssh root@172.16.6.102 "ipvsadm -L -n"
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.7.200:80 rr
  -> 172.16.6.100:80              Route   1      0          0
  -> 172.16.6.101:80              Route   1      0          0
[root@ZhongH103 /tmp/keepalived-1.2.16]# 

7. Test the web service
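With both nodes healthy, repeated requests to the VIP from a client outside the cluster should alternate between the two pages; the grep pattern below matches the marker lines we added to each node's index.html (this check is our addition, not from the original screenshots):

```shell
# Hit the VIP a few times; rr scheduling should alternate node1/node2.
# --connect-timeout keeps this from hanging if the VIP is unreachable.
for i in 1 2 3 4; do
  curl -s --connect-timeout 2 http://172.16.7.200/ \
    | grep -o 'This is node[12]!' || echo "no response from VIP"
done
```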

8. Simulate a failure

[root@ZhongH101 ~/src]# service nginx stop
Stopping nginx:                                            [  OK  ]
[root@ZhongH101 ~/src]# 
[root@ZhongH103 /tmp/keepalived-1.2.16]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.7.200:80 rr
  -> 172.16.6.100:80              Route   1      0          0
[root@ZhongH103 /tmp/keepalived-1.2.16]# 

The notification mails arrived as well.

Restart nginx on node2

[root@ZhongH101 ~/src]# service nginx start
Starting nginx:                                            [  OK  ]
[root@ZhongH101 ~/src]# 


Check the LVS status again

[root@ZhongH103 /tmp/keepalived-1.2.16]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.7.200:80 rr
  -> 172.16.6.100:80              Route   1      0          0
  -> 172.16.6.101:80              Route   1      0          0  

Stop Keepalived on the master

[root@ZhongH102 /tmp/ipvsadm-1.26]# service keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@ZhongH102 /tmp/ipvsadm-1.26]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@ZhongH102 /tmp/ipvsadm-1.26]# 

Check the slave's status

[root@ZhongH103 /tmp/keepalived-1.2.16]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8c:86:18 brd ff:ff:ff:ff:ff:ff
    inet 172.16.6.103/16 brd 172.16.255.255 scope global eth0
    inet 172.16.7.200/32 scope global eth0
    inet6 fe80::20c:29ff:fe8c:8618/64 scope link
       valid_lft forever preferred_lft forever
3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 1a:6b:45:f5:17:da brd ff:ff:ff:ff:ff:ff
[root@ZhongH103 /tmp/keepalived-1.2.16]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.7.200:80 rr
  -> 172.16.6.100:80              Route   1      0          0
  -> 172.16.6.101:80              Route   1      0          0
[root@ZhongH103 /tmp/keepalived-1.2.16]# 


Check the web service again
Note: as the demonstration above shows, the LVS director (our front-end load balancer) is now highly available, the backend realservers are health-checked, and the administrator receives mail when a realserver goes down. A few problems remain unsolved:
When all realservers are down, what should happen? Do users simply get an error page, or should a maintenance page be served?
How do we switch Keepalived into a maintenance mode?
How do we send a warning mail to a designated administrator when Keepalived itself fails?


V. What to do when all realservers are down
Problem: if every realserver in the cluster goes down, clients visiting the VIP will see an error page, which is unfriendly. We should instead serve a maintenance page telling users the site is under maintenance and when it will be back. There are two solutions. One is to keep a spare realserver that serves the maintenance page when all others are down, which wastes a machine. The other is to serve the maintenance page from the load balancer itself, which is more dependable and more common. We take the second approach below.

1. Install nginx on master and slave

[root@ZhongH102 /tmp/ipvsadm-1.26]# cd /tmp && wget http://nginx.org/download/nginx-1.9.1.tar.gz
[root@ZhongH102 /tmp]# tar xf nginx-1.9.1.tar.gz
[root@ZhongH102 /tmp]# cd nginx-1.9.1
[root@ZhongH102 /tmp/nginx-1.9.1]# Username="www" && for i in `seq 1000 1500`;do [ -z "$(awk -F: '{print$3,$4}' /etc/passwd | grep "$i")" -a -z "$(awk -F: '{print$3}' /etc/group | grep "$i")" ] && UGID=$i && break;done && groupadd -g $UGID $Username && useradd -M -u $UGID -g $UGID -s /sbin/nologin $Username
[root@ZhongH102 /tmp/nginx-1.9.1]# yum install pcre-devel pcre -y
[root@ZhongH102 /tmp/nginx-1.9.1]# mkdir -p {/tmp/nginx,/var/run/nginx,/var/lock}
[root@ZhongH102 /tmp/nginx-1.9.1]# ./configure --prefix=/usr/local/nginx/ --user=www --group=www \
--error-log-path=/tmp/nginx/error.log --http-log-path=/tmp/nginx/access.log \
--pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock \
--with-pcre --with-http_ssl_module --with-http_flv_module \
--with-http_spdy_module --with-http_gzip_static_module \
--with-http_stub_status_module --http-client-body-temp-path=/usr/local/nginx/client/ \
--http-proxy-temp-path=/usr/local/nginx/proxy/ --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ \
--http-uwsgi-temp-path=/usr/local/nginx/uwsgi --http-scgi-temp-path=/usr/local/nginx/scgi
[root@ZhongH102 /tmp/nginx-1.9.1]# make -j $(awk '/processor/{i++}END{print i}' /proc/cpuinfo) && make install && echo $?
[root@ZhongH102 /tmp/nginx-1.9.1]# echo "export PATH=/usr/local/nginx/sbin:\$PATH" > /etc/profile.d/nginx1.9.1.sh
[root@ZhongH102 /tmp/nginx-1.9.1]# . /etc/profile.d/nginx1.9.1.sh
[root@ZhongH102 /tmp/nginx-1.9.1]# nginx -V
nginx version: nginx/1.9.1
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx/ --user=www --group=www --error-log-path=/tmp/nginx/error.log --http-log-path=/tmp/nginx/access.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --with-pcre --with-http_ssl_module --with-http_flv_module --with-http_spdy_module --with-http_gzip_static_module --with-http_stub_status_module --http-client-body-temp-path=/usr/local/nginx/client/ --http-proxy-temp-path=/usr/local/nginx/proxy/ --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ --http-uwsgi-temp-path=/usr/local/nginx/uwsgi --http-scgi-temp-path=/usr/local/nginx/scgi
[root@ZhongH102 /tmp/nginx-1.9.1]# wget http://www.dwhd.org/script/Nginx-init-CentOS -O /etc/rc.d/init.d/nginx
[root@ZhongH102 /tmp/nginx-1.9.1]# chmod +x /etc/rc.d/init.d/nginx
[root@ZhongH102 /tmp/nginx-1.9.1]# mv /usr/local/nginx/html/index.html /usr/local/nginx/html/index.html_backup
[root@ZhongH102 /tmp/nginx-1.9.1]# echo '<html><body><h1>If you see this page,Website is currently under maintenance, please come back later!</h1></body></html>' > /usr/local/nginx/html/index.html
[root@ZhongH102 /tmp/nginx-1.9.1]# scp /tmp/nginx-1.9.1.tar.gz 172.16.6.103:/tmp
nginx-1.9.1.tar.gz                                                                                  100%  835KB 835.4KB/s   00:00
[root@ZhongH102 /tmp/nginx-1.9.1]# scp /etc/init.d/nginx 172.16.6.103:/etc/init.d/
nginx                                                                                               100% 2611     2.6KB/s   00:00
[root@ZhongH102 /tmp/nginx-1.9.1]# /etc/init.d/nginx start
Starting nginx:                                           [OK]
[root@ZhongH102 /tmp/nginx-1.9.1]# 
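A side note on the UID-hunting one-liner at the top of this transcript: it greps /etc/passwd and /etc/group for the candidate number as a substring, so a free ID can be skipped whenever it is a substring of a taken one (e.g. 100 matching 1000). A stricter sketch using exact getent lookups (assumption: getent is available, as it is on CentOS 6):

```shell
# Find the first UID/GID in 1000-1500 that is free in both passwd and group,
# using exact-match getent lookups instead of substring grep.
for i in $(seq 1000 1500); do
    if ! getent passwd "$i" >/dev/null && ! getent group "$i" >/dev/null; then
        echo "first free UID/GID: $i"
        break
    fi
done
```

The value found this way would then feed `groupadd -g` and `useradd -u` exactly as in the one-liner above.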

2. Edit the Keepalived configuration file on the master

[root@ZhongH102 /tmp/nginx-1.9.1]# sed -i '$ i \\tsorry_server 127.0.0.1 80' /etc/keepalived/keepalived.conf
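The `$ i` address-and-command pair tells sed to insert a line before the last line of the file, which in this config is the closing brace of the virtual_server block, and the `\\t` escape yields a leading tab. A stand-in demonstration on a throwaway file (the real target is /etc/keepalived/keepalived.conf):

```shell
# Demonstrate the '$ i' (insert-before-last-line) sed edit on a minimal
# stand-in config file rather than the real keepalived.conf.
conf=$(mktemp)
printf 'virtual_server 172.16.7.200 80 {\n\tprotocol TCP\n}\n' > "$conf"
sed -i '$ i \\tsorry_server 127.0.0.1 80' "$conf"
tail -n 2 "$conf"    # the new directive now sits just above the closing brace
rm -f "$conf"
```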

3. slave

[root@ZhongH103 /tmp/keepalived-1.2.16]# cd /tmp/
[root@ZhongH103 /tmp]# tar xf nginx-1.9.1.tar.gz
[root@ZhongH103 /tmp]# cd nginx-1.9.1
[root@ZhongH103 /tmp/nginx-1.9.1]# Username="www" && for i in `seq 1000 1500`;do [ -z "$(awk -F: '{print$3,$4}' /etc/passwd | grep "$i")" -a -z "$(awk -F: '{print$3}' /etc/group | grep "$i")" ] && UGID=$i && break;done && groupadd -g $UGID $Username && useradd -M -u $UGID -g $UGID -s /sbin/nologin $Username
[root@ZhongH103 /tmp/nginx-1.9.1]# yum install pcre-devel pcre -y
[root@ZhongH103 /tmp/nginx-1.9.1]# mkdir -pv {/tmp/nginx,/var/run/nginx,/var/lock}
mkdir: created directory "/tmp/nginx"
mkdir: created directory "/var/run/nginx"
[root@ZhongH103 /tmp/nginx-1.9.1]# ./configure --prefix=/usr/local/nginx/ --user=www --group=www \
--error-log-path=/tmp/nginx/error.log --http-log-path=/tmp/nginx/access.log \
--pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock \
--with-pcre --with-http_ssl_module --with-http_flv_module \
--with-http_spdy_module --with-http_gzip_static_module \
--with-http_stub_status_module --http-client-body-temp-path=/usr/local/nginx/client/ \
--http-proxy-temp-path=/usr/local/nginx/proxy/ --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ \
--http-uwsgi-temp-path=/usr/local/nginx/uwsgi --http-scgi-temp-path=/usr/local/nginx/scgi
[root@ZhongH103 /tmp/nginx-1.9.1]# make -j $(awk '/processor/{i++}END{print i}' /proc/cpuinfo) && make install && echo $?
[root@ZhongH103 /tmp/nginx-1.9.1]# echo "export PATH=/usr/local/nginx/sbin:\$PATH" > /etc/profile.d/nginx1.9.1.sh
[root@ZhongH103 /tmp/nginx-1.9.1]# . /etc/profile.d/nginx1.9.1.sh
[root@ZhongH103 /tmp/nginx-1.9.1]# nginx -V
nginx version: nginx/1.9.1
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx/ --user=www --group=www --error-log-path=/tmp/nginx/error.log --http-log-path=/tmp/nginx/access.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --with-pcre --with-http_ssl_module --with-http_flv_module --with-http_spdy_module --with-http_gzip_static_module --with-http_stub_status_module --http-client-body-temp-path=/usr/local/nginx/client/ --http-proxy-temp-path=/usr/local/nginx/proxy/ --http-fastcgi-temp-path=/usr/local/nginx/fcgi/ --http-uwsgi-temp-path=/usr/local/nginx/uwsgi --http-scgi-temp-path=/usr/local/nginx/scgi
[root@ZhongH103 /tmp/nginx-1.9.1]# mv /usr/local/nginx/html/index.html /usr/local/nginx/html/index.html_backup
[root@ZhongH103 /tmp/nginx-1.9.1]# echo '<html><body><h1>If you see this page,Website is currently under maintenance, please come back later!</h1></body></html>' > /usr/local/nginx/html/index.html
[root@ZhongH103 /tmp/nginx-1.9.1]# /etc/init.d/nginx start
Starting nginx:                                           [OK]
[root@ZhongH103 /tmp/nginx-1.9.1]# ss -tnlp | grep nginx
LISTEN     0      511                       *:80                       *:*      users:(("nginx",7829,6),("nginx",7830,6))
[root@ZhongH103 /tmp/nginx-1.9.1]# 

4. Edit the Keepalived configuration file on the slave

[root@ZhongH103 /tmp/nginx-1.9.1]# sed -i '$ i \\tsorry_server 127.0.0.1 80' /etc/keepalived/keepalived.conf


5. Stop Nginx on node1 and node2, and restart Keepalived on the master and slave

[root@ZhongH100 ~/src]# service nginx stop
Stopping nginx:                                               [OK]
[root@ZhongH100 ~/src]# ssh root@172.16.6.101 "service nginx stop"
Stopping nginx: [OK]
[root@ZhongH100 ~/src]# 
[root@ZhongH102 /tmp/nginx-1.9.1]# service keepalived restart
Stopping keepalived:                                          [FAILED]
Starting keepalived:                                          [OK]
[root@ZhongH102 /tmp/nginx-1.9.1]# ssh root@172.16.6.103 "service keepalived restart"
Stopping keepalived: [OK]
Starting keepalived: [OK]
[root@ZhongH102 /tmp/nginx-1.9.1]# 

6. Check the LVS rules

[root@ZhongH102 /tmp/nginx-1.9.1]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.7.200:80 rr
  -> 127.0.0.1:80                 Local   1      0          0
[root@ZhongH102 /tmp/nginx-1.9.1]# ssh root@172.16.6.103 "ipvsadm -L -n"
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.7.200:80 rr
  -> 127.0.0.1:80                 Local   1      0          0
[root@ZhongH102 /tmp/nginx-1.9.1]# 

7. Test the web service
(screenshot of the browser test omitted)


6. How do we switch Keepalived over for maintenance?
Question: when testing master/backup failover we usually stop keepalived or bring down a network interface. Is there a way to perform maintenance without stopping keepalived or the interface? There is: newer keepalived versions support scripted checks via vrrp_script; see man keepalived.conf for the details. The following demonstrates a concrete implementation.

1. Define the check script

vrrp_script chk_schedown {
   script "[ -e /etc/keepalived/down ] && exit 1 || exit 0"
   interval 1 # check interval in seconds
   weight -5 # priority adjustment on failure
   fall 2 # failures before marking down
   rise 1 # successes before marking up
}

2. Invoke the script

track_script {
   chk_schedown # run the chk_schedown check
}
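Putting the two fragments together: keepalived runs the check every second, and after two consecutive failures (fall 2) subtracts 5 from the instance priority, so the master drops from 101 to 96, below the backup's 100, and the backup wins the next election. A minimal sketch of the flag-file mechanics, emulated against a temp path (keepalived itself checks /etc/keepalived/down, and the real check uses exit codes rather than shell returns):

```shell
# Emulate the chk_schedown check against a temporary flag file.
flag="$(mktemp -d)/down"
chk_schedown() { [ -e "$flag" ] && return 1 || return 0; }

chk_schedown && echo "check OK: master keeps priority 101"
touch "$flag"                 # enter maintenance mode
chk_schedown || echo "check failed: weight -5 => effective priority 96"
rm -f "$flag"                 # leave maintenance mode
chk_schedown && echo "check OK again: priority restored"
```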

3. Update the Keepalived configuration files
1) master

[root@ZhongH102 /tmp/nginx-1.9.1]# cat /etc/keepalived/keepalived.conf  # adapt as needed
! Configuration File for keepalived

global_defs {
	notification_email {
		admin@dwhd.org
	}
	notification_email_from Keepalived
	smtp_server 127.0.0.1
	smtp_connect_timeout 30
	router_id LVS_DEVEL
}

vrrp_script chk_schedown { # define the VRRP check script
	script "[ -e /etc/keepalived/down ] && exit 1 || exit 0" # if the down file exists, enter maintenance mode
	interval 1 # check interval in seconds
	weight -5 # priority adjustment on failure
	fall 2 # failures before marking down
	rise 1 # successes before marking up
}

vrrp_instance VI_1 {
	state MASTER
	interface eth0
	virtual_router_id 51
	priority 101
	advert_int 1
	authentication {
		auth_type PASS
		auth_pass YjVhZGE1NjNlYzQ4
	}
	virtual_ipaddress {
		172.16.7.200
	}
	track_script { # run the check script
		chk_schedown
	}
}

virtual_server 172.16.7.200 80 {
	delay_loop 6
	lb_algo rr
	lb_kind DR
	nat_mask 255.255.255.0
	#persistence_timeout 50
	protocol TCP

	real_server 172.16.6.100 80 {
		weight 1
		HTTP_GET {
			url {
				path /
				status_code 200
				#digest ff20ad2481f97b1754ef3e12ecd3a9cc
			}
			connect_timeout 2
			nb_get_retry 3
			delay_before_retry 1
		}
	}

	real_server 172.16.6.101 80 {
		weight 1
		HTTP_GET {
			url {
				path /
				status_code 200
			}
			connect_timeout 2
			nb_get_retry 3
			delay_before_retry 1
		}
	}
	sorry_server 127.0.0.1 80
}
[root@ZhongH102 /tmp/nginx-1.9.1]#

2) slave

[root@ZhongH103 /tmp/nginx-1.9.1]# scp 172.16.6.102:/etc/keepalived/keepalived.conf /etc/keepalived/
keepalived.conf                                                                                     100% 1295     1.3KB/s   00:00
[root@ZhongH103 /tmp/nginx-1.9.1]# sed -i 's/priority 101/priority 100/' /etc/keepalived/keepalived.conf # lower the priority
[root@ZhongH103 /tmp/nginx-1.9.1]# sed -i 's/state MASTER/state BACKUP/' /etc/keepalived/keepalived.conf # change the role to BACKUP
[root@ZhongH103 /tmp/nginx-1.9.1]# 
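A quick way to see the combined effect of the two substitutions above, run on a throwaway stand-in rather than the real keepalived.conf:

```shell
# Show the combined effect of the two sed substitutions on a minimal stand-in.
conf=$(mktemp)
printf 'state MASTER\npriority 101\n' > "$conf"
sed -i 's/priority 101/priority 100/; s/state MASTER/state BACKUP/' "$conf"
cat "$conf"    # prints: state BACKUP / priority 100
rm -f "$conf"
```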

4. Test

1) Restart Keepalived and create the down file

[root@ZhongH102 /etc/keepalived]# cd /etc/keepalived/
[root@ZhongH102 /etc/keepalived]# /etc/init.d/keepalived restart
Stopping keepalived:                                          [OK]
Starting keepalived:                                          [OK]
[root@ZhongH102 /etc/keepalived]# touch /etc/keepalived/down

2) Check the IP addresses and the log

[root@ZhongH102 /etc/keepalived]# ip add show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:fa:1a:f1 brd ff:ff:ff:ff:ff:ff
    inet 172.16.6.102/16 brd 172.16.255.255 scope global eth0
    inet6 fe80::20c:29ff:fefa:1af1/64 scope link
       valid_lft forever preferred_lft forever
3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 32:94:a4:20:1d:5f brd ff:ff:ff:ff:ff:ff
[root@ZhongH102 ~]# tail -f /var/log/messages
May 29 21:04:28 ZhongH102 Keepalived_healthcheckers[9005]: Removing alive servers from the pool for VS [172.16.7.200]:80
May 29 21:04:28 ZhongH102 Keepalived_healthcheckers[9005]: Remote SMTP server [127.0.0.1]:25 connected.
May 29 21:04:29 ZhongH102 Keepalived_healthcheckers[9005]: SMTP alert successfully sent.
May 29 21:04:30 ZhongH102 Keepalived_vrrp[9006]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.7.200
May 29 21:12:47 ZhongH102 dhclient[1325]: DHCPREQUEST on eth0 to 172.16.0.1 port 67 (xid=0x5302bf8)
May 29 21:12:47 ZhongH102 dhclient[1325]: DHCPACK from 172.16.0.1 (xid=0x5302bf8)
May 29 21:12:48 ZhongH102 dhclient[1325]: bound to 172.16.6.102 -- renewal in 708 seconds.
May 29 21:24:36 ZhongH102 dhclient[1325]: DHCPREQUEST on eth0 to 172.16.0.1 port 67 (xid=0x5302bf8)
May 29 21:24:36 ZhongH102 dhclient[1325]: DHCPACK from 172.16.0.1 (xid=0x5302bf8)
May 29 21:24:37 ZhongH102 dhclient[1325]: bound to 172.16.6.102 -- renewal in 830 seconds.
May 29 21:32:37 ZhongH102 Keepalived[9004]: Stopping Keepalived v1.2.16 (05/29,2015)
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9006]: VRRP_Instance(VI_1) sending 0 priority
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9006]: VRRP_Instance(VI_1) removing protocol VIPs.
May 29 21:32:37 ZhongH102 Keepalived_healthcheckers[9005]: Netlink reflector reports IP 172.16.7.200 removed
May 29 21:32:37 ZhongH102 kernel: IPVS: __ip_vs_del_service: enter
May 29 21:32:37 ZhongH102 Keepalived[9454]: Starting Keepalived v1.2.16 (05/29,2015)
May 29 21:32:37 ZhongH102 Keepalived[9455]: Starting Healthcheck child process, pid=9456
May 29 21:32:37 ZhongH102 Keepalived[9455]: Starting VRRP child process, pid=9457
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9457]: Netlink reflector reports IP 172.16.6.102 added
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9457]: Netlink reflector reports IP fe80::20c:29ff:fefa:1af1 added
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9457]: Registering Kernel netlink reflector
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9457]: Registering Kernel netlink command channel
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9457]: Registering gratuitous ARP shared channel
May 29 21:32:37 ZhongH102 Keepalived_healthcheckers[9456]: Netlink reflector reports IP 172.16.6.102 added
May 29 21:32:37 ZhongH102 Keepalived_healthcheckers[9456]: Netlink reflector reports IP fe80::20c:29ff:fefa:1af1 added
May 29 21:32:37 ZhongH102 Keepalived_healthcheckers[9456]: Registering Kernel netlink reflector
May 29 21:32:37 ZhongH102 Keepalived_healthcheckers[9456]: Registering Kernel netlink command channel
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9457]: Opening file '/etc/keepalived/keepalived.conf'.
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9457]: Truncating auth_pass to 8 characters
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9457]: Configuration is using : 65655 Bytes
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9457]: Using LinkWatch kernel netlink reflector...
May 29 21:32:37 ZhongH102 Keepalived_vrrp[9457]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
May 29 21:32:38 ZhongH102 Keepalived_healthcheckers[9456]: Opening file '/etc/keepalived/keepalived.conf'.
May 29 21:32:38 ZhongH102 Keepalived_healthcheckers[9456]: Configuration is using : 17035 Bytes
May 29 21:32:38 ZhongH102 Keepalived_healthcheckers[9456]: Using LinkWatch kernel netlink reflector...
May 29 21:32:38 ZhongH102 Keepalived_healthcheckers[9456]: Activating healthchecker for service [172.16.6.100]:80
May 29 21:32:38 ZhongH102 Keepalived_healthcheckers[9456]: Activating healthchecker for service [172.16.6.101]:80
May 29 21:32:38 ZhongH102 Keepalived_vrrp[9457]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 29 21:32:38 ZhongH102 Keepalived_vrrp[9457]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
May 29 21:32:39 ZhongH102 Keepalived_vrrp[9457]: VRRP_Instance(VI_1) Entering MASTER STATE
May 29 21:32:39 ZhongH102 Keepalived_vrrp[9457]: VRRP_Instance(VI_1) setting protocol VIPs.
May 29 21:32:39 ZhongH102 Keepalived_vrrp[9457]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 172.16.7.200
May 29 21:32:39 ZhongH102 Keepalived_healthcheckers[9456]: Netlink reflector reports IP 172.16.7.200 added
May 29 21:32:39 ZhongH102 Keepalived_vrrp[9457]: VRRP_Instance(VI_1) Received higher prio advert
May 29 21:32:39 ZhongH102 Keepalived_vrrp[9457]: VRRP_Instance(VI_1) Entering BACKUP STATE
May 29 21:32:39 ZhongH102 Keepalived_vrrp[9457]: VRRP_Instance(VI_1) removing protocol VIPs.
May 29 21:32:39 ZhongH102 Keepalived_healthcheckers[9456]: Netlink reflector reports IP 172.16.7.200 removed
May 29 21:32:41 ZhongH102 Keepalived_healthcheckers[9456]: Error connecting server [172.16.6.101]:80.
May 29 21:32:41 ZhongH102 Keepalived_healthcheckers[9456]: Removing service [172.16.6.101]:80 from VS [172.16.7.200]:80
May 29 21:32:41 ZhongH102 Keepalived_healthcheckers[9456]: Remote SMTP server [127.0.0.1]:25 connected.
May 29 21:32:41 ZhongH102 Keepalived_healthcheckers[9456]: SMTP alert successfully sent.
May 29 21:32:44 ZhongH102 Keepalived_healthcheckers[9456]: Error connecting server [172.16.6.100]:80.
May 29 21:32:44 ZhongH102 Keepalived_healthcheckers[9456]: Removing service [172.16.6.100]:80 from VS [172.16.7.200]:80
May 29 21:32:44 ZhongH102 Keepalived_healthcheckers[9456]: Lost quorum 1-0=1 > 0 for VS [172.16.7.200]:80
May 29 21:32:44 ZhongH102 Keepalived_healthcheckers[9456]: Adding sorry server [127.0.0.1]:80 to VS [172.16.7.200]:80
May 29 21:32:44 ZhongH102 Keepalived_healthcheckers[9456]: Removing alive servers from the pool for VS [172.16.7.200]:80
May 29 21:32:44 ZhongH102 Keepalived_healthcheckers[9456]: Remote SMTP server [127.0.0.1]:25 connected.
May 29 21:32:44 ZhongH102 Keepalived_healthcheckers[9456]: SMTP alert successfully sent.

3) slave

[root@ZhongH103 /tmp/nginx-1.9.1]# ip addr show # the VIP has failed over to this node
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8c:86:18 brd ff:ff:ff:ff:ff:ff
    inet 172.16.6.103/16 brd 172.16.255.255 scope global eth0
    inet 172.16.7.200/32 scope global eth0
    inet6 fe80::20c:29ff:fe8c:8618/64 scope link
       valid_lft forever preferred_lft forever
3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 1a:6b:45:f5:17:da brd ff:ff:ff:ff:ff:ff
[root@ZhongH103 /tmp/nginx-1.9.1]# 

That wraps up the maintenance-mode switchover driven by a custom check script. One last problem remains: sending an alert email on Keepalived master/backup transitions.


7. How to send an alert email to a designated administrator when Keepalived fails (or on a master/backup switchover)?

1. An extended Keepalived notification script
The script below accepts the following options:
-s, --service SERVICE,...: service script name(s); on a state change the service is started, restarted, or stopped automatically;
-a, --address VIP: the VIP of the relevant virtual router;
-m, --mode {mm|mb}: the virtual-router model, mm for master/master and mb for master/backup, describing how the VIP is operated for a given service;
-n, --notify {master|backup|fault}: the notification type, i.e. the role the VRRP instance is switching to;
-h, --help: print usage help.
An example script can be downloaded as follows:

[root@ZhongH102 /etc/keepalived]# wget http://www.dwhd.org/script/Keepalived_notify.sh -O /etc/keepalived/notify.sh
#!/bin/bash
#########################################################################
# File Name: Keepalived_notify.sh
# Author: LookBack
# Email: admin#05hd.com
# Version:
# Created Time: Fri, 29 May 2015 21:46:35
#########################################################################

# description: An example of notify script
# Usage: notify.sh -m|--mode {mm|mb} -s|--service SERVICE1,... -a|--address VIP  -n|--notify {master|backup|fault} -h|--help
contact='1521076067@163.com'
helpflag=0
serviceflag=0
modeflag=0
addressflag=0
notifyflag=0
Usage() {
  echo "Usage: notify.sh [-m|--mode {mm|mb}] [-s|--service SERVICE1,...] <-a|--address VIP>  <-n|--notify {master|backup|fault}>"
  echo "Usage: notify.sh -h|--help"
}
ParseOptions() {
	local I=1;
	if [ $# -gt 0 ]; then
		while [ $I -le $# ]; do
			case $1 in
			-s|--service)
				[ $# -lt 2 ] && return 3
				serviceflag=1
				services=(`echo $2|awk -F"," '{for(i=1;i<=NF;i++) print $i}'`)
				shift 2 ;;
			-h|--help)
				helpflag=1
				shift
				return 0
				;;
			-a|--address)
				[ $# -lt 2 ] && return 3
				addressflag=1
				vip=$2
				shift 2
				;;
			-m|--mode)
				[ $# -lt 2 ] && return 3
				mode=$2
				shift 2
				;;
			-n|--notify)
				[ $# -lt 2 ] && return 3
				notifyflag=1
				notify=$2
				shift 2
				;;
			*)
				echo "Wrong options..."
				Usage
				return 7
				;;
			esac
		done
		return 0
	fi
}
#workspace=$(dirname $0)
RestartService() {
	if [ ${#@} -gt 0 ]; then
		for I in $@; do
			if [ -x /etc/rc.d/init.d/$I ]; then
				/etc/rc.d/init.d/$I restart
			else
				echo "$I is not a valid service..."
			fi
		done
	fi
}
StopService() {
	if [ ${#@} -gt 0 ]; then
		for I in $@; do
			if [ -x /etc/rc.d/init.d/$I ]; then
				/etc/rc.d/init.d/$I stop
			else
				echo "$I is not a valid service..."
			fi
		done
	fi
}
Notify() {
	mailsubject="`hostname` to be $1: $vip floating"
	mailbody="`date '+%F %H:%M:%S'`, vrrp transition, `hostname` changed to be $1."
	echo $mailbody | mail -s "$mailsubject" $contact
}
# Main Function
ParseOptions $@
[ $? -ne 0 ] && Usage && exit 5
[ $helpflag -eq 1 ] && Usage && exit 0
if [ $addressflag -ne 1 -o $notifyflag -ne 1 ]; then
	Usage
	exit 2
fi
mode=${mode:-mb}
case $notify in
'master')
	if [ $serviceflag -eq 1 ]; then
		RestartService ${services[*]}
	fi
	Notify master
	;;
'backup')
	if [ $serviceflag -eq 1 ]; then
		if [ "$mode" == 'mb' ]; then
			StopService ${services[*]}
		else
			RestartService ${services[*]}
		fi
	fi
	Notify backup
	;;
'fault')
	Notify fault
	;;
*)
	Usage
	exit 4
	;;
esac

2. In keepalived.conf, the script is invoked as follows:
notify_master "/etc/keepalived/notify.sh -n master -a 192.168.18.200"
notify_backup "/etc/keepalived/notify.sh -n backup -a 192.168.18.200"
notify_fault "/etc/keepalived/notify.sh -n fault -a 192.168.18.200"
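For reference, Notify() in the script above builds its mail subject and body as shown below; this sketch reproduces the strings with echo so nothing is actually sent (hostname and timestamp are machine-dependent):

```shell
# Rebuild the message Notify() would pass to mail(1), without sending it.
vip="172.16.7.200"
state="master"
mailsubject="$(hostname) to be $state: $vip floating"
mailbody="$(date '+%F %H:%M:%S'), vrrp transition, $(hostname) changed to be $state."
echo "Subject: $mailsubject"
echo "$mailbody"
```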

3. Update the configuration files

1) master

[root@ZhongH102 /etc/keepalived]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
	notification_email {
			admin@dwhd.org
	}
	notification_email_from Keepalived
	smtp_server 127.0.0.1
	smtp_connect_timeout 30
	router_id LVS_DEVEL
}

vrrp_script chk_schedown { # define the VRRP check script
	script "[ -e /etc/keepalived/down ] && exit 1 || exit 0" # if the down file exists, enter maintenance mode
	interval 1 # check interval in seconds
	weight -5 # priority adjustment on failure
	fall 2 # failures before marking down
	rise 1 # successes before marking up
}

vrrp_instance VI_1 {
	state MASTER
	interface eth0
	virtual_router_id 51
	priority 101
	advert_int 1
	authentication {
			auth_type PASS
			auth_pass YjVhZGE1NjNlYzQ4
	}
	virtual_ipaddress {
			172.16.7.200
	}
	track_script { # run the check script
			chk_schedown
	}
	# add the following three lines
	notify_master "/etc/keepalived/notify.sh -n master -a 172.16.7.200"
	notify_backup "/etc/keepalived/notify.sh -n backup -a 172.16.7.200"
	notify_fault "/etc/keepalived/notify.sh -n fault -a 172.16.7.200"
}

virtual_server 172.16.7.200 80 {
	delay_loop 6
	lb_algo rr
	lb_kind DR
	nat_mask 255.255.255.0
	#persistence_timeout 50
	protocol TCP

	real_server 172.16.6.100 80 {
		weight 1
		HTTP_GET {
			url {
				path /
				status_code 200
				#digest ff20ad2481f97b1754ef3e12ecd3a9cc
			}
			connect_timeout 2
			nb_get_retry 3
			delay_before_retry 1
		}
	}

	real_server 172.16.6.101 80 {
		weight 1
		HTTP_GET {
			url {
				path /
				status_code 200
			}
			connect_timeout 2
			nb_get_retry 3
			delay_before_retry 1
		}
	}
	sorry_server 127.0.0.1 80
}
[root@ZhongH102 /etc/keepalived]#

2) slave

[root@ZhongH103 /tmp/nginx-1.9.1]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
        notification_email {
                admin@dwhd.org
        }
        notification_email_from Keepalived
        smtp_server 127.0.0.1
        smtp_connect_timeout 30
        router_id LVS_DEVEL
}

vrrp_script chk_schedown { # define the VRRP check script
        script "[ -e /etc/keepalived/down ] && exit 1 || exit 0" # if the down file exists, enter maintenance mode
        interval 1 # check interval in seconds
        weight -5 # priority adjustment on failure
        fall 2 # failures before marking down
        rise 1 # successes before marking up
}

vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
                auth_type PASS
                auth_pass YjVhZGE1NjNlYzQ4
        }
        virtual_ipaddress {
                172.16.7.200
        }
        track_script { # run the check script
                chk_schedown
        }
        # add the following three lines
        notify_master "/etc/keepalived/notify.sh -n master -a 172.16.7.200"
        notify_backup "/etc/keepalived/notify.sh -n backup -a 172.16.7.200"
        notify_fault "/etc/keepalived/notify.sh -n fault -a 172.16.7.200"
}

virtual_server 172.16.7.200 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        nat_mask 255.255.255.0
        #persistence_timeout 50
        protocol TCP

        real_server 172.16.6.100 80 {
                weight 1
                HTTP_GET {
                        url {
                                path /
                                status_code 200
                                #digest ff20ad2481f97b1754ef3e12ecd3a9cc
                        }
                        connect_timeout 2
                        nb_get_retry 3
                        delay_before_retry 1
                }
        }

        real_server 172.16.6.101 80 {
                weight 1
                HTTP_GET {
                        url {
                                path /
                                status_code 200
                        }
                        connect_timeout 2
                        nb_get_retry 3
                        delay_before_retry 1
                }
        }
        sorry_server 127.0.0.1 80
}
[root@ZhongH103 /tmp/nginx-1.9.1]# 

3) Test the script

[root@ZhongH103 /etc/keepalived]# cd /etc/keepalived/
[root@ZhongH103 /etc/keepalived]# ./notify.sh
Usage: notify.sh [-m|--mode {mm|mb}] [-s|--service SERVICE1,...] <-a|--address VIP>  <-n|--notify {master|backup|fault}>
Usage: notify.sh -h|--help
[root@ZhongH103 /etc/keepalived]# ./notify.sh --help
Usage: notify.sh [-m|--mode {mm|mb}] [-s|--service SERVICE1,...] <-a|--address VIP>  <-n|--notify {master|backup|fault}>
Usage: notify.sh -h|--help
[root@ZhongH103 /etc/keepalived]# ./notify.sh -m mb -a 1.1.1.1 -n master
[root@ZhongH103 /etc/keepalived]# 

4) Check the mail
(screenshot of the alert email omitted)

5) Simulate a failure

[root@ZhongH103 /etc/keepalived]# /etc/init.d/keepalived restart # restart Keepalived on the slave
Stopping keepalived:                                          [OK]
Starting keepalived:                                          [OK]
[root@ZhongH103 /etc/keepalived]# ssh root@172.16.6.102 "/etc/init.d/keepalived restart" # restart Keepalived on the master
Stopping keepalived: [OK]
Starting keepalived: [OK]
[root@ZhongH103 /etc/keepalived]# 
[root@ZhongH103 /etc/keepalived]# ip addr   # check the IPs
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8c:86:18 brd ff:ff:ff:ff:ff:ff
    inet 172.16.6.103/16 brd 172.16.255.255 scope global eth0
    inet 172.16.7.200/32 scope global eth0
    inet6 fe80::20c:29ff:fe8c:8618/64 scope link
       valid_lft forever preferred_lft forever
3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 1a:6b:45:f5:17:da brd ff:ff:ff:ff:ff:ff
[root@ZhongH103 /etc/keepalived]# touch /etc/keepalived/down # enter maintenance mode
[root@ZhongH103 /etc/keepalived]# ls -l /etc/keepalived/down
-rw-r--r-- 1 root root 0 May 29 22:02 /etc/keepalived/down
[root@ZhongH103 /etc/keepalived]# ip addr show # check the IPs again
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8c:86:18 brd ff:ff:ff:ff:ff:ff
    inet 172.16.6.103/16 brd 172.16.255.255 scope global eth0
    inet6 fe80::20c:29ff:fe8c:8618/64 scope link
       valid_lft forever preferred_lft forever
3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 1a:6b:45:f5:17:da brd ff:ff:ff:ff:ff:ff
[root@ZhongH103 /etc/keepalived]# 

Switch to the master and check:

[root@ZhongH102 /etc/keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:fa:1a:f1 brd ff:ff:ff:ff:ff:ff
    inet 172.16.6.102/16 brd 172.16.255.255 scope global eth0
    inet 172.16.7.200/32 scope global eth0
    inet6 fe80::20c:29ff:fefa:1af1/64 scope link
       valid_lft forever preferred_lft forever
3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 32:94:a4:20:1d:5f brd ff:ff:ff:ff:ff:ff
[root@ZhongH102 /etc/keepalived]# 

6) Check the mail
(screenshots of the alert emails omitted)
Note: an alert email is sent on every master/backup transition. That concludes the full demonstration.
