The previous post covered LVS. As mentioned there, LVS works at layer 4 (TCP) and has no health checking of its own: it hands incoming requests to the backends without knowing whether they are up. If a backend goes down, LVS keeps sending requests to it, which is unacceptable for a highly available service. keepalived was born for LVS: it health-checks the backend real servers, and it also health-checks the LVS directors themselves (master and backup nodes), failing over automatically when something breaks. keepalived is not limited to LVS, though; it can health-check any service and make it highly available.
keepalived runs on a Linux host as a daemon, implementing VRRP and performing the health checks described in its configuration file.
To recap, VRRP is an election protocol: it dynamically assigns responsibility for a virtual router to one of the VRRP routers on a LAN. The VRRP router that controls the virtual router's IP addresses is called the master, and it forwards packets sent to those virtual IPs. When the master becomes unavailable, the election process provides dynamic failover, which allows the virtual router's IP address to serve as the default first-hop router for end hosts.
keepalived elects master and backup over multicast or unicast (configurable). It works in one of two modes, preempt or non-preempt (controlled by the nopreempt option).
Preempt mode: while the master is healthy, the VIP stays on the master and the backup does not serve traffic. As soon as the master's priority falls below the backup's, the backup seizes the VIP and takes over, and the roles effectively swap. In other words, in preempt mode the configured master/backup labels do not matter; only priority does.
Non-preempt mode: once keepalived is up, the node holding the VIP keeps it regardless of priority. The backup only takes over when the active node stops serving, and the VIP does not move back when the original node recovers.
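A minimal sketch of a non-preempt instance (keepalived only honors nopreempt when the instance starts in BACKUP state, so both nodes are configured as BACKUP and differ only in priority; the router id, address, and priorities here are placeholders, not values from the setup below):

vrrp_instance VI_1 {
    state BACKUP          # nopreempt requires BACKUP state on both nodes
    nopreempt             # a recovering higher-priority node will not seize the VIP back
    interface eth0
    virtual_router_id 51
    priority 100          # the peer uses a lower value, e.g. 90
    advert_int 1
    virtual_ipaddress {
        192.168.0.100
    }
}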
For keepalived to work properly, make sure that:
1. Each host has a proper hostname and every node can resolve the others by hostname.
2. The clocks on all nodes are kept in sync.
3. iptables and SELinux are disabled (or configured to allow VRRP), so they cannot interfere; sample commands follow this list.
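On a CentOS 6 style system (which these examples use), the time and firewall items boil down to commands like these; the NTP server is only an example:

# sync the clock once, then keep it in sync with ntpd/chronyd
ntpdate pool.ntp.org
# stop iptables now and disable it at boot
service iptables stop && chkconfig iptables off
# switch SELinux to permissive now; set SELINUX=disabled in /etc/selinux/config to persist
setenforce 0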
The keepalived configuration file is /etc/keepalived/keepalived.conf.
It is divided into three parts:
the global section: global configuration (global_defs)
the VRRP section: vrrpd configuration (vrrp_instance, vrrp_script)
the LVS section: lvs configuration (virtual_server, real_server)
First, keepalived + nginx high availability with one master and one backup.
In normal operation only the master serves traffic; when the master goes down, the backup takes its place.
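For reference, the two distinguishable test pages used below can be produced with something like this (the document root is an assumption; adjust to your nginx layout):

# on 10.0.0.50
echo "this is web from IP 50" > /usr/share/nginx/html/index.html
# on 10.0.0.51
echo "this is web from IP 51" > /usr/share/nginx/html/index.html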
The keepalived configuration file on 10.0.0.50:
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalive@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_mcast_group4 224.0.0.18
}

vrrp_script chk_down {        # used to take this node out of service by hand
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 199
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass c09060332be9bb06
    }
    virtual_ipaddress {
        172.20.0.100
    }
    track_interface {
        eth0
    }
    track_script {
        chk_down
    }
}
The keepalived configuration file on 10.0.0.51:
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalive@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_mcast_group4 224.0.0.18
}

vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 199
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass c09060332be9bb06
    }
    virtual_ipaddress {
        172.20.0.100
    }
    track_interface {
        eth0
    }
    track_script {
        chk_down
    }
}
After restarting the service, the result looks like this:
[root@lvs1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:0c:57 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.50/24 brd 10.0.0.255 scope global eth0
    inet 172.20.0.100/32 scope global eth0        # 10.0.0.50 now holds the VIP
    inet6 fe80::20c:29ff:fec6:c57/64 scope link
       valid_lft forever preferred_lft forever

### curl the VIP from a remote host
[root@centos6-tes3 ~]# curl 172.20.0.100
this is web from IP 50
[root@centos6-tes3 ~]# curl 172.20.0.100
this is web from IP 50        # the VIP is reachable
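Before failing the master over, it is worth spelling out how the chk_down trick works: if /etc/keepalived/down exists, the script exits 1, the weight -2 penalty lowers this node's priority from 100 to 98, which is below the backup's 99, so the backup preempts the VIP. Deleting the file restores the priority and, since we run in preempt mode, the VIP moves back:

# on the master: trigger a failover (priority 100 - 2 = 98 < 99)
touch /etc/keepalived/down
# undo it: priority returns to 100 and the master reclaims the VIP
rm -f /etc/keepalived/down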
Take the keepalived service on 10.0.0.50 out of service:
[root@lvs1 keepalived]# touch down
[root@lvs1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:0c:57 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.50/24 brd 10.0.0.255 scope global eth0
    inet6 fe80::20c:29ff:fec6:c57/64 scope link
       valid_lft forever preferred_lft forever

# the VIP is gone from 10.0.0.50; curl from the remote host again
[root@centos6-tes3 ~]# curl 172.20.0.100
this is web from IP 51
[root@centos6-tes3 ~]# curl 172.20.0.100
this is web from IP 51        # traffic has moved to 10.0.0.51
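If a VIP ever refuses to move, watching the VRRP advertisements on the multicast group configured in global_defs shows immediately which node is claiming master (VRRP is IP protocol 112); run this on either node:

tcpdump -nn -i eth0 host 224.0.0.18 and ip proto 112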
keepalived + nginx dual-master model
The keepalived configuration file on 10.0.0.50:
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalive@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_mcast_group4 224.0.0.18
}

vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 199
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass c09060332be9bb06
    }
    virtual_ipaddress {
        172.20.0.100
    }
    track_interface {
        eth0
    }
    track_script {
        chk_down
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 200
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass c09060332be9bb06
    }
    virtual_ipaddress {
        172.20.0.101
    }
    track_interface {
        eth0
    }
    track_script {
        chk_down
    }
}
The keepalived configuration file on 10.0.0.51:
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalive@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_mcast_group4 224.0.0.18
}

vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 199
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass c09060332be9bb06
    }
    virtual_ipaddress {
        172.20.0.100
    }
    track_interface {
        eth0
    }
    track_script {
        chk_down
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass c09060332be9bb06
    }
    virtual_ipaddress {
        172.20.0.101
    }
    track_interface {
        eth0
    }
    track_script {
        chk_down
    }
}
Test results:
[root@lvs1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:0c:57 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.50/24 brd 10.0.0.255 scope global eth0
    inet 172.20.0.100/32 scope global eth0
    inet6 fe80::20c:29ff:fec6:c57/64 scope link
       valid_lft forever preferred_lft forever

[root@lvs2 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:97:67:2a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.51/24 brd 10.0.0.255 scope global eth0
    inet 172.20.0.101/32 scope global eth0
    inet6 fe80::20c:29ff:fe97:672a/64 scope link
       valid_lft forever preferred_lft forever

curl 172.20.0.100 and 172.20.0.101 from a remote host:
[root@centos6-tes3 ~]# curl 172.20.0.100
this is web from IP 50
[root@centos6-tes3 ~]# curl 172.20.0.101
this is web from IP 51
Take keepalived on 10.0.0.50 out of service and check whether its VIP fails over:
[root@lvs1 keepalived]# touch down
[root@lvs1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:0c:57 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.50/24 brd 10.0.0.255 scope global eth0
    inet6 fe80::20c:29ff:fec6:c57/64 scope link
       valid_lft forever preferred_lft forever

[root@lvs2 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:97:67:2a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.51/24 brd 10.0.0.255 scope global eth0
    inet 172.20.0.101/32 scope global eth0
    inet 172.20.0.100/32 scope global eth0
    inet6 fe80::20c:29ff:fe97:672a/64 scope link
       valid_lft forever preferred_lft forever

curl 172.20.0.100 and 172.20.0.101 from the remote host:
[root@centos6-tes3 ~]# curl 172.20.0.100
this is web from IP 51
[root@centos6-tes3 ~]# curl 172.20.0.101
this is web from IP 51
Remove the down file on 10.0.0.50 and check whether 172.20.0.100 moves back:
[root@lvs1 keepalived]# rm down
rm: remove regular empty file `down'? y
[root@lvs1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:0c:57 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.50/24 brd 10.0.0.255 scope global eth0
    inet 172.20.0.100/32 scope global eth0
    inet6 fe80::20c:29ff:fec6:c57/64 scope link
       valid_lft forever preferred_lft forever

curl 172.20.0.100 from the remote host:
[root@centos6-tes3 ~]# curl 172.20.0.100
this is web from IP 50

The experiment above gives us a dual-master setup with load sharing.
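One caveat: a dual-master pair only spreads load if clients actually use both VIPs. The usual approach is DNS round robin, publishing both VIPs as A records for the same name. A hypothetical BIND zone fragment (not part of the original setup):

; hypothetical: two A records so clients alternate between the VIPs
www    IN    A    172.20.0.100
www    IN    A    172.20.0.101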
keepalived + LVS dual-master load balancing
Install LVS (ipvsadm) on the keepalived nodes, and configure the VIPs on the backend web servers.
Network configuration on web1 and web2 (the real servers, 10.0.0.91 and 10.0.0.92):
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
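These kernel settings stop the real servers from answering ARP for the VIPs, so that only the director receives client traffic; this is what makes the DR model work. The echo commands do not survive a reboot, so to persist them you can append the equivalent keys to /etc/sysctl.conf (a sketch, assuming the interface is eth0 as above):

cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
EOF
sysctl -p    # reload so the file and the running values agree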
Configure the VIPs as IP aliases on lo:
ifconfig lo:0 172.20.0.100/32 broadcast 172.20.0.100 up
ifconfig lo:1 172.20.0.101/32 broadcast 172.20.0.101 up
Add host routes for the VIPs via lo:
route add -host 172.20.0.100 dev lo:0
route add -host 172.20.0.101 dev lo:1
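A quick sanity check on each real server before testing through the directors; the expected results follow from the commands above:

ip addr show dev lo        # should list 172.20.0.100/32 and 172.20.0.101/32
route -n | grep 172.20     # both host routes should point at lo
curl -s http://127.0.0.1/  # the test page the HTTP_GET health checks will fetch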
The keepalived configuration on 10.0.0.50:
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalive@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_mcast_group4 224.0.0.18
}

vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 199
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass c09060332be9bb06
    }
    virtual_ipaddress {
        172.20.0.100/16 dev eth0 label eth0:0
    }
    track_interface {
        eth0
    }
    track_script {
        chk_down
    }
}

virtual_server 172.20.0.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
#    persistence_timeout 50
    protocol TCP
    real_server 10.0.0.91 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.92 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 200
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass c09060332be9bb06
    }
    virtual_ipaddress {
        172.20.0.101/16 dev eth0 label eth0:1
    }
    track_interface {
        eth0
    }
    track_script {
        chk_down
    }
}

virtual_server 172.20.0.101 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
#    persistence_timeout 50
    protocol TCP
    real_server 10.0.0.91 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.92 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
The keepalived configuration file on 10.0.0.51:
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalive@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_mcast_group4 224.0.0.18
}

vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 199
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass c09060332be9bb06
    }
    virtual_ipaddress {
        172.20.0.100/16 dev eth0 label eth0:0
    }
    track_interface {
        eth0
    }
    track_script {
        chk_down
    }
}

virtual_server 172.20.0.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
#    persistence_timeout 50
    protocol TCP
    real_server 10.0.0.91 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.92 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass c09060332be9bb06
    }
    virtual_ipaddress {
        172.20.0.101/16 dev eth0 label eth0:1
    }
    track_interface {
        eth0
    }
    track_script {
        chk_down
    }
}

virtual_server 172.20.0.101 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
#    persistence_timeout 50
    protocol TCP
    real_server 10.0.0.91 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.0.0.92 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
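Each virtual_server block is simply keepalived programming ipvs on the active director. For the 172.20.0.100 service, the generated rules are roughly equivalent to these manual ipvsadm commands (illustrative only; keepalived creates and removes them itself):

ipvsadm -A -t 172.20.0.100:80 -s rr                       # virtual service, round robin
ipvsadm -a -t 172.20.0.100:80 -r 10.0.0.91:80 -g -w 1     # -g = direct routing (DR)
ipvsadm -a -t 172.20.0.100:80 -r 10.0.0.92:80 -g -w 1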
Stop keepalived, clear the ipvs rules, restart the network, then start the keepalived service again:
service keepalived stop
ipvsadm -C
service network restart
service keepalived start
Check the addresses and ipvs rules on 10.0.0.50 and 10.0.0.51:
[root@lvs1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:0c:57 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.50/24 brd 10.0.0.255 scope global eth0
    inet 172.20.0.100/16 scope global eth0:0
    inet6 fe80::20c:29ff:fec6:c57/64 scope link
       valid_lft forever preferred_lft forever
[root@lvs1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.20.0.100:80 rr
  -> 10.0.0.91:80                 Route   1      0          0
  -> 10.0.0.92:80                 Route   1      0          0
TCP  172.20.0.101:80 rr
  -> 10.0.0.91:80                 Route   1      0          0
  -> 10.0.0.92:80                 Route   1      0          0

[root@lvs2 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:97:67:2a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.51/24 brd 10.0.0.255 scope global eth0
    inet 172.20.0.101/16 scope global eth0:1
    inet6 fe80::20c:29ff:fe97:672a/64 scope link
       valid_lft forever preferred_lft forever
[root@lvs2 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.20.0.100:80 rr
  -> 10.0.0.91:80                 Route   1      0          0
  -> 10.0.0.92:80                 Route   1      0          0
TCP  172.20.0.101:80 rr
  -> 10.0.0.91:80                 Route   1      0          0
  -> 10.0.0.92:80                 Route   1      0          0
Each LVS node now holds one VIP; curl each VIP from a remote host to check that requests are load-balanced:
[root@centos6-tes3 ~]# curl 172.20.0.100
this is 92 page
[root@centos6-tes3 ~]# curl 172.20.0.100
this is 91 page
[root@centos6-tes3 ~]# curl 172.20.0.100
this is 92 page
[root@centos6-tes3 ~]# curl 172.20.0.101
this is 92 page
[root@centos6-tes3 ~]# curl 172.20.0.101
this is 91 page
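The HTTP_GET blocks in the configs are doing real work here: if a real server stops answering with a 200, keepalived removes it from the ipvs table once the retries are exhausted and re-adds it when it recovers. An easy way to watch that (assuming the web servers run nginx under the service command):

# on 10.0.0.91: stop the web service
service nginx stop
# on either director: 10.0.0.91 disappears from the table within a few seconds
ipvsadm -L -n
# start it again and it is re-added after the next successful check
service nginx start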
Finally, take 10.0.0.50 down and check that both VIPs move to 10.0.0.51 and that both still balance load:
[root@lvs1 keepalived]# touch down        # take keepalived on 10.0.0.50 out of service
[root@lvs1 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:0c:57 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.50/24 brd 10.0.0.255 scope global eth0
    inet6 fe80::20c:29ff:fec6:c57/64 scope link
       valid_lft forever preferred_lft forever
[root@lvs1 keepalived]# ipvsadm -L -n        # note that the ipvs rules are not cleared
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.20.0.100:80 rr
  -> 10.0.0.91:80                 Route   1      0          0
  -> 10.0.0.92:80                 Route   1      0          0
TCP  172.20.0.101:80 rr
  -> 10.0.0.91:80                 Route   1      0          0
  -> 10.0.0.92:80                 Route   1      0          0

[root@lvs2 keepalived]# ip addr        # both VIPs have moved to 10.0.0.51
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:97:67:2a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.51/24 brd 10.0.0.255 scope global eth0
    inet 172.20.0.101/16 scope global eth0:1
    inet 172.20.0.100/16 scope global secondary eth0:0
    inet6 fe80::20c:29ff:fe97:672a/64 scope link
       valid_lft forever preferred_lft forever
[root@lvs2 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.20.0.100:80 rr
  -> 10.0.0.91:80                 Route   1      0          0
  -> 10.0.0.92:80                 Route   1      0          0
TCP  172.20.0.101:80 rr
  -> 10.0.0.91:80                 Route   1      0          0
  -> 10.0.0.92:80                 Route   1      0          0

curl both VIPs from the remote host to confirm they work and still balance:
[root@centos6-tes3 ~]# curl 172.20.0.100
this is 92 page
[root@centos6-tes3 ~]# curl 172.20.0.100
this is 91 page
[root@centos6-tes3 ~]# curl 172.20.0.101
this is 92 page
[root@centos6-tes3 ~]# curl 172.20.0.101
this is 91 page
Everything works as expected: with two VIPs we get a dual-master, mutual-backup pair that provides load balancing and redundancy at the same time.
Original post (in Chinese): http://iznowow.blog.51cto.com/6232028/1703825