LVS load-balances (LB) the back-end web servers; Keepalived provides high availability (HA) for LVS itself. This note walks through the active/standby (MASTER/BACKUP) model.
HOSTNAME  IP              SYSTEM
DR1       192.168.10.234  CentOS 7.5
DR2       192.168.10.235  CentOS 7.5
RS1       192.168.10.236  CentOS 7.5
RS2       192.168.10.237  CentOS 7.5
VIP       192.168.10.239
1. Install ipvsadm and keepalived (on both DR1 and DR2)
yum install ipvsadm keepalived -y
2. Edit the configuration file /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS ## need not match the hostname, nor the BACKUP node's router_id
}
vrrp_instance VI_1 {
state MASTER ## MASTER here on DR1; set to BACKUP on DR2
interface eth0 ## must match the actual NIC name
virtual_router_id 51 ## virtual router ID (0-255); must be identical on MASTER and BACKUP in the same VRRP instance
priority 150 ## the MASTER's priority must be higher than the BACKUP's
advert_int 3 ## advertisement interval in seconds; must match on MASTER and BACKUP
authentication { ## authentication settings
auth_type PASS ## PASS (the default) or AH
auth_pass 1111 ## default 1111; longer strings are allowed, but only the first 8 characters are used
}
virtual_ipaddress {
192.168.10.239 ## virtual IP; multiple VIPs are allowed, one per line
}
}
virtual_server 192.168.10.239 80 { ## must be the VIP defined above
delay_loop 3 ## health-check interval in seconds
lb_algo rr ## scheduling algorithm; rr (round-robin) is used here so the test output below is easy to read
lb_kind DR ## Direct Routing mode is used for this test
# persistence_timeout 1 ## persistent-connection timeout; left commented out, otherwise a single test client would always be scheduled to the same Real Server
protocol TCP
real_server 192.168.10.236 80 {
weight 1
TCP_CHECK {
connect_timeout 10 ## response timeout in seconds
nb_get_retry 3 ## number of retries after a timeout
delay_before_retry 3 ## delay between retries in seconds
connect_port 80
}
}
real_server 192.168.10.237 80 {
weight 1
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
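A side note on the rr algorithm configured above: round-robin simply hands successive connections to the real servers in strict rotation, which is what produces the alternating curl output later in this note. A toy shell sketch (not LVS itself; addresses taken from the table above):

```shell
# Toy illustration of round-robin scheduling: requests 0,1,2,3 alternate
# between the two real servers, just like the IPVS rr scheduler does.
rs1=192.168.10.236
rs2=192.168.10.237
for req in 0 1 2 3; do
  if [ $(( req % 2 )) -eq 0 ]; then target=$rs1; else target=$rs2; fi
  echo "request $req -> $target"
done
```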
On DR2 only two lines need to change:
state MASTER >> state BACKUP
priority 150 >> priority 140
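The two lines can be patched with sed. Demonstrated below on a scratch copy so it can be tried safely; on the real DR2, run the same sed against /etc/keepalived/keepalived.conf (the file path /tmp/ka-dr2-demo.conf is just for the demo):

```shell
# Create a scratch file holding the two lines that differ on DR2
printf 'state MASTER\npriority 150\n' > /tmp/ka-dr2-demo.conf
# Flip MASTER->BACKUP and lower the priority in one pass
sed -i -e 's/state MASTER/state BACKUP/' \
       -e 's/priority 150/priority 140/' /tmp/ka-dr2-demo.conf
cat /tmp/ka-dr2-demo.conf
# state BACKUP
# priority 140
```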
3. Configure with the script dr.sh below; give it execute permission and run it on both DR1 and DR2. (Note that keepalived's vrrp_instance/virtual_server configuration above already assigns the VIP and programs these IPVS rules; the script is the equivalent manual setup, kept here for reference.)
#!/bin/bash
# Enable packet forwarding on the director
echo 1 > /proc/sys/net/ipv4/ip_forward
# Testing showed this is not strictly required when the director and the real servers share a subnet; enable it when traffic must be forwarded to another subnet or an external network
ipv=/sbin/ipvsadm
vip=192.168.10.239
rs1=192.168.10.236
rs2=192.168.10.237
ifconfig eth0:0 $vip broadcast $vip netmask 255.255.255.255 up
# Note the 255.255.255.255 (/32) netmask: it makes the VIP a host-only "network" with no other hosts in it
route add -host $vip dev eth0:0
# Add a host route for the VIP
$ipv -C
$ipv -A -t $vip:80 -s rr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1
# -C flushes the table, -A adds the virtual service, -a adds a real server; -g selects DR (Direct Routing) mode, -w sets the weight
4. Start the keepalived service, then check with ip a that the VIP took effect (an address 192.168.10.239 should appear)
[root@CentOS7 ~]# systemctl start keepalived
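Two quick checks (run as root on DR1) that keepalived did its job; NIC name and addresses match the lab above:

```shell
# The VIP should now be bound on eth0 as a secondary address
ip addr show eth0 | grep 192.168.10.239
# The IPVS table should contain the virtual service and both real servers,
# built from the virtual_server block in keepalived.conf
ipvsadm -Ln
```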
1. Install the web service (on both RS1 and RS2)
[root@CentOS7 ~]# yum install nginx -y
Edit RS1's home page /usr/share/nginx/html/index.html (nginx's default index page):
THIS IS RS1,THE IP IS 192.168.10.236
Edit RS2's home page /usr/share/nginx/html/index.html (nginx's default index page):
THIS IS RS2,THE IP IS 192.168.10.237
2. For convenience, record the steps in a script rs.sh (run identically on RS1 and RS2)
#!/bin/bash
VIP=192.168.10.239
case "$1" in
start)
echo "start LVS of RealServer DR"
/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up # bind the VIP on loopback
/sbin/route add -host $VIP dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore # reply only to ARP requests for addresses on the receiving interface
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce # use the best local source address in ARP announcements, hiding the loopback VIP
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
;;
stop)
/sbin/ifconfig lo:0 down
echo "close LVS of RealServer DR"
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
exit 0
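The ARP sysctls set by rs.sh do not survive a reboot. To persist them, the equivalent lines can be added to /etc/sysctl.conf and applied with sysctl -p; the keys match the /proc paths written by the script:

```
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```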
Give the script execute permission and run it:
[root@CentOS7 ~]# chmod +x rs.sh
[root@CentOS7 ~]# ./rs.sh start // start: bring the VIP and ARP settings up; stop: tear them down
start LVS of RealServer DR
3. Start the web service
[root@CentOS7 ~]# systemctl start nginx
Test ①: by default, requests to the VIP 192.168.10.239 are handled by DR1 (the MASTER):
[root@dib101 ~]# while true ; do curl 192.168.10.239; sleep 1;done
THIS IS RS1,THE IP IS 192.168.10.236
THIS IS RS2,THE IP IS 192.168.10.237
THIS IS RS1,THE IP IS 192.168.10.236
THIS IS RS2,THE IP IS 192.168.10.237
THIS IS RS1,THE IP IS 192.168.10.236
THIS IS RS2,THE IP IS 192.168.10.237
THIS IS RS1,THE IP IS 192.168.10.236
THIS IS RS2,THE IP IS 192.168.10.237
THIS IS RS1,THE IP IS 192.168.10.236
THIS IS RS2,THE IP IS 192.168.10.237
THIS IS RS1,THE IP IS 192.168.10.236
THIS IS RS2,THE IP IS 192.168.10.237
THIS IS RS1,THE IP IS 192.168.10.236
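On DR1 you can confirm that the MASTER is doing the scheduling; the counters and the connection table should show entries alternating between the two real servers (run as root):

```shell
# Per-service and per-real-server packet/byte counters
ipvsadm -Ln --stats
# Current connection entries handled by this director
ipvsadm -Lnc
```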
Test ②:
Take the MASTER down: the VIP floats to the BACKUP, the VIP stays reachable, and the client keeps round-robining;
Bring the MASTER back up: the VIP floats back to the MASTER, the VIP stays reachable, and round-robin continues; // this demonstrates LVS high availability plus nginx load balancing
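A convenient way to drive test ② without touching cables is to stop and restart keepalived itself; each command assumes root on the node named in its comment:

```shell
systemctl stop keepalived                  # on DR1: VIP leaves eth0
ip addr show eth0 | grep 192.168.10.239    # on DR2: VIP has arrived
systemctl start keepalived                 # on DR1: higher priority wins, VIP floats back
```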
Test ③:
Manually stop nginx on one real server, then start it again: the TCP_CHECK health check should remove the failed server from the pool and add it back once it recovers.
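The pool change in test ③ can be watched from DR1 (root required); TCP_CHECK is what removes and restores the entry:

```shell
systemctl stop nginx    # on RS1
ipvsadm -Ln             # on DR1: after the check fails, 192.168.10.236 drops out of the pool
systemctl start nginx   # on RS1
ipvsadm -Ln             # on DR1: 192.168.10.236 is re-added
```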
Notes:
1. Turn off firewalld, SELinux, and similar features first.
2. These notes were compiled with reference to http://www.cnblogs.com/ding2016/p/7235195.html
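Note 1 can be carried out with the commands below; setenforce only lasts until reboot, and the sed line makes the change permanent (paths are standard on CentOS 7):

```shell
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```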
Original post: https://www.cnblogs.com/cdw0724/p/10867406.html