Tags: LVS configuration keepalived+lvs+nginx LVS in detail
Introduction to LVS:
An LVS cluster can be configured in one of three modes: DR, TUN, or NAT, and can load-balance services such as WWW, FTP, and mail. Below we build a load-balanced WWW service as an example, walking through an LVS cluster configuration based on DR mode.
Director Server: the core server of LVS. It acts much like a router: it holds the routing table maintained for LVS and uses it to dispatch user requests to the application servers in the server pool (the Real Servers). It also monitors the Real Servers: when a Real Server becomes unavailable it is removed from the LVS table, and when it recovers it is added back.
Real Server: one or more web, mail, FTP, DNS, or video servers, connected over a LAN or a WAN. In practice, the Director can also double as a Real Server.
The three LVS forwarding modes:
NAT: the director rewrites the destination address and port of each request to those of the selected Real Server and forwards it. When the Real Server returns data to the user, the reply passes through the director again, which rewrites the source address and port back to the virtual IP and port before sending the data on, completing the scheduling cycle.
Drawback: heavy load on the director, since all traffic flows through it in both directions.
TUN: IP tunneling. The director forwards requests to a Real Server through an IP tunnel, and the Real Server responds to the user directly, bypassing the director. The director and Real Servers may be on different networks. In TUN mode the director handles only the inbound request packets, which raises throughput.
Drawback: the overhead of IP tunneling.
DR: direct routing. The director rewrites the destination MAC address of the request frame and forwards it to a Real Server, which replies to the client directly, avoiding tunnel overhead. Of the three modes, this one performs best.
Drawback: the director and Real Servers must be on the same physical network segment.
LVS scheduling algorithms:
LVS dynamically selects which Real Server handles each request according to load. IPVS implements eight scheduling algorithms; four of them are described here:
rr (round-robin): distributes requests evenly across the Real Servers, without regard to load.
wrr (weighted round-robin): distributes requests to the Real Servers in proportion to the weights assigned to them (a toy model is sketched after this list).
lc (least-connection): dynamically directs new requests to the Real Server with the fewest established connections.
wlc (weighted least-connection): dynamically weights the Real Servers and assigns new connections so that each server's number of established connections stays roughly proportional to its weight.
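To make the weighted variant concrete, here is a minimal shell sketch of wrr. It is only an illustrative model: IPVS implements the real algorithm in the kernel and interleaves servers rather than grouping their slots. The addresses are the ones used in this example; the weights 3 and 1 are hypothetical.
#!/bin/bash
# Toy model of weighted round-robin: a server with weight W gets W slots
# per scheduling cycle. IPVS does this in-kernel; this only shows proportions.
SERVERS=("10.2.16.253" "10.2.16.254")
WEIGHTS=(3 1)   # hypothetical weights for illustration
schedule=()
for i in "${!SERVERS[@]}"; do
    for ((w = 0; w < WEIGHTS[i]; w++)); do
        schedule+=("${SERVERS[i]}")
    done
done
for ((r = 0; r < 8; r++)); do   # dispatch 8 requests over the weighted cycle
    echo "request $r -> ${schedule[r % ${#schedule[@]}]}"
done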
Environment:
This example uses three hosts: one Director Server (the scheduler) and two web Real Servers.
Director real IP: 10.2.16.250
VIP: 10.2.16.252
Real Server 1 real IP: 10.2.16.253
Real Server 2 real IP: 10.2.16.254
Note: this example uses LVS in DR mode with rr (round-robin) scheduling.
Installing and configuring LVS with keepalived
1. Install keepalived
[root@proxy ~]# tar -zxvf keepalived-1.2.13.tar.gz -C ./
[root@proxy ~]# cd keepalived-1.2.13
[root@proxy keepalived-1.2.13]# ./configure --sysconf=/etc/ --with-kernel-dir=/usr/src/kernels/2.6.32-358.el6.x86_64/
[root@proxy keepalived-1.2.13]# make && make install
[root@proxy keepalived-1.2.13]# ln /usr/local/sbin/keepalived /sbin/
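To confirm the build and the link, you can print the version; the version string below matches the build used in this example:
[root@proxy ~]# keepalived -v
Keepalived v1.2.13 (05/24,2014)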
2. Install LVS (ipvsadm)
[root@proxy ~]# yum -y install ipvsadm*
Enable IP forwarding:
[root@proxy ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@proxy ~]# sysctl -p
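You can verify that forwarding is now on:
[root@proxy ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1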
3. Configure keepalived and LVS on the director
[root@proxy ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0                  # the director's physical NIC
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {             # the LVS VIP
        10.2.16.252
    }
}
virtual_server 10.2.16.252 80 {     # the VIP and port the LVS serves externally
    delay_loop 6                    # health-check interval, in seconds
    lb_algo rr                      # scheduling algorithm; rr is round-robin
    lb_kind DR                      # LVS forwarding mode: NAT/TUN/DR
    nat_mask 255.255.255.0
    # persistence_timeout 50        # session persistence time in seconds; useful for session affinity with dynamic pages
    protocol TCP                    # forwarding protocol
    real_server 10.2.16.253 80 {    # a real server's IP and port
        weight 1                    # weight; higher values receive proportionally more requests
        TCP_CHECK {                 # real-server health check
            connect_timeout 3       # consider the check failed after 3 seconds without a response
            nb_get_retry 3          # number of retries
            delay_before_retry 3    # seconds to wait between retries
        }
    }
    real_server 10.2.16.254 80 {    # the second real server
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
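Note that TCP_CHECK only verifies that the port accepts connections. keepalived can also validate content with an HTTP_GET health check; a sketch of an equivalent block follows, where the path and expected status code are assumptions about this site:
real_server 10.2.16.253 80 {
    weight 1
    HTTP_GET {
        url {
            path /
            status_code 200
        }
        connect_timeout 3
        nb_get_retry 3
        delay_before_retry 3
    }
}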
4. Configure the Real Servers
Because DR mode is used, each Real Server replies to the client directly from the LVS VIP, so the VIP must be brought up on the Real Server's loopback interface (lo) for it to communicate with clients.
1. The following script brings the VIP up (and down) on each Real Server:
[root@web-1 ~]# cat /etc/init.d/lvsrs
#!/bin/bash
# description: bring the LVS VIP up/down on a Real Server (DR mode)
VIP=10.2.16.252
. /etc/rc.d/init.d/functions
case "$1" in
start)
    echo " Start LVS of Real Server "
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    # The next four lines suppress ARP replies and announcements for the VIP,
    # so the Real Server never advertises it on the network; only the director
    # answers ARP for the VIP, preventing address conflicts.
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    ;;
stop)
    /sbin/ifconfig lo:0 down
    echo "Stop LVS of Real Server"
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
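Note: for the service command to run this script it must be executable, so set the execute bit first:
[root@web-1 ~]# chmod +x /etc/init.d/lvsrs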
2. Start the script:
[root@web-1 ~]# service lvsrs start
Start LVS of Real Server
3. Check the VIP on the lo:0 alias interface:
[root@web-1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:A2:C4:9F
          inet addr:10.2.16.253  Bcast:10.2.16.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fea2:c49f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:365834 errors:0 dropped:0 overruns:0 frame:0
          TX packets:43393 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:33998241 (32.4 MiB)  TX bytes:4007256 (3.8 MiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1482 (1.4 KiB)  TX bytes:1482 (1.4 KiB)
lo:0      Link encap:Local Loopback
          inet addr:10.2.16.252  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
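The host route the script added for the VIP can be confirmed as well:
[root@web-1 ~]# route -n | grep 10.2.16.252
10.2.16.252     0.0.0.0         255.255.255.255 UH    0      0        0 lo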
4. Make sure nginx is up and listening:
[root@web-1 ~]# netstat -anptul
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1024/nginx
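A quick local request is also worth making; assuming the default page is served, the status line should report 200:
[root@web-1 ~]# curl -sI http://127.0.0.1/ | head -1
HTTP/1.1 200 OK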
5. Repeat the same four steps on Real Server 2.
6. Start keepalived on the director:
[root@proxy ~]# service keepalived start
Starting keepalived: [ OK ]
Check the keepalived startup log for errors:
[root@proxy ~]# tail -f /var/log/messages
May 24 10:06:57 proxy Keepalived[2767]: Starting Keepalived v1.2.13 (05/24,2014)
May 24 10:06:57 proxy Keepalived[2768]: Starting Healthcheck child process, pid=2770
May 24 10:06:57 proxy Keepalived[2768]: Starting VRRP child process, pid=2771
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Netlink reflector reports IP 10.2.16.250 added
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Netlink reflector reports IP 10.2.16.250 added
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Netlink reflector reports IP fe80::20c:29ff:fee6:ce1a added
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Registering Kernel netlink reflector
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Registering Kernel netlink command channel
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Netlink reflector reports IP fe80::20c:29ff:fee6:ce1a added
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Registering Kernel netlink reflector
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Registering Kernel netlink command channel
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Registering gratuitous ARP shared channel
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Opening file '/etc/keepalived/keepalived.conf'.
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Configuration is using : 63303 Bytes
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Using LinkWatch kernel netlink reflector...
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Opening file '/etc/keepalived/keepalived.conf'.
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Configuration is using : 14558 Bytes
May 24 10:06:57 proxy Keepalived_vrrp[2771]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Using LinkWatch kernel netlink reflector...
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Activating healthchecker for service [10.2.16.253]:80
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Activating healthchecker for service [10.2.16.254]:80
May 24 10:06:58 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 24 10:06:59 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Entering MASTER STATE
May 24 10:06:59 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) setting protocol VIPs.
May 24 10:06:59 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.2.16.252
May 24 10:06:59 proxy Keepalived_healthcheckers[2770]: Netlink reflector reports IP 10.2.16.252 added
May 24 10:07:04 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.2.16.252
Everything looks normal!
7. Inspect the IPVS virtual server table:
[root@proxy ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.2.16.252:80 rr
  -> 10.2.16.253:80               Route   1      0          0
  -> 10.2.16.254:80               Route   1      0          0
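While testing, per-service and per-real-server traffic counters can also be watched with the --stats flag:
[root@proxy ~]# ipvsadm -ln --stats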
8. Test: open http://10.2.16.252/ in a browser.
If the pages from both back-end servers appear in turn, the load balancer is working.
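The round-robin behavior can also be observed from a shell on a client machine outside the cluster, assuming each back end serves a page that identifies itself:
[root@client ~]# for i in $(seq 1 4); do curl -s http://10.2.16.252/; done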
9. Test what happens when one Real Server goes down
(1) Kill the nginx process on 10.2.16.254, then start it again.
(2) Watch the keepalived log:
[root@proxy ~]# tail -f /var/log/messages
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: TCP connection to [10.2.16.254]:80 failed !!!
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: Removing service [10.2.16.254]:80 from VS [10.2.16.252]:80
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: Remote SMTP server [127.0.0.1]:25 connected.
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: SMTP alert successfully sent.
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: TCP connection to [10.2.16.254]:80 success.
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: Adding service [10.2.16.254]:80 to VS [10.2.16.252]:80
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: Remote SMTP server [127.0.0.1]:25 connected.
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: SMTP alert successfully sent.
As the log shows, keepalived reacts to both the failure and the recovery very quickly!
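While nginx on 10.2.16.254 was stopped, the virtual server table should have shown only the surviving node, for example:
[root@proxy ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.2.16.252:80 rr
  -> 10.2.16.253:80               Route   1      0          0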
That completes the LVS configuration. Success!
This article is from the "Fate" blog; original source: http://czybl.blog.51cto.com/4283444/1536474