Tags: lvs
LVS-DR
============================================================================
Overview:
============================================================================
Design points for ipvs-dr:
★ In the DR model every host must carry the VIP, so the resulting address conflict has to be resolved; there are three ways:
(1) bind the VIP statically on the upstream gateway;
(2) use arptables on each RS;
(3) tune kernel parameters on each RS to restrict the level of ARP responses and announcements;
Restricting the response level: arp_ignore
0: the default; respond using any address configured on any local interface;
1: respond only when the target IP of the request is configured on the interface that received the request;
Restricting the announcement level: arp_announce
0: the default; announce all addresses of all local interfaces to the network on every interface;
1: try to avoid announcing to networks that are not directly connected;
2: always avoid announcing to networks that are not directly connected;
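The arp_ignore=1 / arp_announce=2 combination is exactly what a DR real server needs. The small helper below is my own sketch (the function name dr_arp_plan is not from the post): it prints the /proc writes instead of performing them, so they can be reviewed first and then executed as root on each RS:

```shell
#!/bin/bash
# Print (not apply) the arp_ignore/arp_announce writes a DR real server
# needs. dr_arp_plan is a hypothetical helper; run its output as root.
dr_arp_plan() {
    local dev
    for dev in all lo; do
        echo "echo 1 > /proc/sys/net/ipv4/conf/${dev}/arp_ignore"
        echo "echo 2 > /proc/sys/net/ipv4/conf/${dev}/arp_announce"
    done
}
dr_arp_plan
```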
1. Preparation
Topology
Every host must carry the VIP;
On each RS, tune the kernel parameters to restrict ARP response and announcement levels;
RIP and DIP are in the same IP network; the RIP's gateway must not point to the DIP;
The RSes and the Director must be on the same physical network;
2. Building the test environment
Prepare three virtual hosts, one as the Director and the other two as real servers (RS), making sure RIP and DIP are in the same IP network. Then adjust the kernel parameters on the two RSes and configure their VIP (an alias on the local lo interface) and a host route for it; finally configure the VIP on the Director (an alias on its single physical NIC).
1) First configure the alias (the VIP) on the Director's physical NIC. Normal communication still uses the DIP; the alias exists only to receive traffic addressed to the VIP.
2) Next, on both RSes, set the kernel parameters arp_ignore and arp_announce, then configure an alias on the local lo interface as the VIP. In practice there is more than one RS, so doing this by hand is tedious; the following script does it all in one step:
#!/bin/bash
#
vip=10.1.252.73
mask='255.255.255.255'

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    # configure an alias on lo carrying the VIP
    ifconfig lo:0 $vip netmask $mask broadcast $vip up
    route add -host $vip dev lo:0
    ;;
stop)
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
After running this script, the VIP on each RS is configured as follows (both RSes carry the same VIP):
3) Make sure the Director (DIP) and the RSes (RIPs) are in the same network; all the addresses here were obtained automatically in bridged mode.
Now ping 10.1.252.73 from the client host (CentOS 6): only the NIC on the Director responds, and the VIP configured on the two RSes stays silent. This guarantees that requests pass through the Director while responses do not; each RS sends them directly to the client, as shown below:
With that, the whole environment is in place!
3. Defining the cluster service with ipvsadm, and testing
# Add the cluster service; -t gives its address and port, -s selects the weighted round-robin (wrr) scheduler
[root@centos7 ~]# ipvsadm -A -t 10.1.252.73:80 -s wrr
# Add the RSes to the cluster service; -r names the RS, -g selects lvs-dr, -w sets the weight
[root@centos7 ~]# ipvsadm -a -t 10.1.252.73:80 -r 10.1.252.37 -g -w 1
[root@centos7 ~]# ipvsadm -a -t 10.1.252.73:80 -r 10.1.249.203 -g -w 2
# List the cluster service and rules just added
[root@centos7 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.252.73:80 wrr
  -> 10.1.249.203:80              Route   2      0          0
  -> 10.1.252.37:80               Route   1      0          0
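The wrr scheduling itself happens in the kernel, but the distribution it produces with weights 2 and 1 can be sketched in plain shell. This toy model is mine, not the kernel's actual algorithm (which interleaves by the gcd of the weights); it only shows the resulting ratio: out of every three requests, the weight-2 RS answers two.

```shell
#!/bin/bash
# Toy model of wrr with RS2 at weight 2 and RS1 at weight 1: each cycle
# of three slots goes RS2 RS2 RS1. Mimics the 2:1 ratio, not kernel code.
wrr_sim() {
    local n=$1 i out=""
    for (( i = 0; i < n; i++ )); do
        if (( i % 3 == 2 )); then
            out+="RS1 "   # one slot per cycle for the weight-1 server
        else
            out+="RS2 "   # two slots per cycle for the weight-2 server
        fi
    done
    echo "${out% }"
}
wrr_sim 6
```

The ten curl requests in the test that follows show the same 2:1 ratio, merely interleaved differently by the kernel.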
Test from the client (CentOS 6) by requesting the web service at the Director's address (the Director only fronts it); the real responses come from the RSes, scheduled by weighted round-robin:
[root@CentOS6 ~]# for i in {1..10}; do curl http://10.1.252.73; done
<h1>RS2</h1>
<h1>RS1</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS1</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS1</h1>
<h1>RS2</h1>
<h1>RS2</h1>
[root@centos7 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.252.73:80 wrr
  -> 10.1.249.203:80              Route   2      0          6
  -> 10.1.252.37:80               Route   1      0          4
Appendix:
Pre-configuration script for the RSes:
Configuration script for the VS:
★ Purpose (firewall marks, FWM):
Classify packets with a firewall mark, then define the cluster service on that mark; this lets several different applications be scheduled by one and the same cluster service.
★ Setting the mark (on the Director):
# iptables -t mangle -A PREROUTING -d $vip -p $proto --dport $port -j MARK --set-mark NUMBER
★ Defining the cluster service on the mark:
# ipvsadm -A -f NUMBER [options]
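Combining the two templates, a helper like the one below emits the matched pair of commands so nothing is applied unreviewed. The function name fwm_plan and the print-instead-of-execute approach are my own; the values are this lab's:

```shell
#!/bin/bash
# Emit, without executing, the iptables MARK rule and the FWM cluster
# service that consumes the mark. fwm_plan is a hypothetical helper name.
fwm_plan() {
    local vip=$1 mark=$2 ports=$3
    echo "iptables -t mangle -A PREROUTING -d ${vip} -p tcp -m multiport --dports ${ports} -j MARK --set-mark ${mark}"
    echo "ipvsadm -A -f ${mark} -s wrr"
}
fwm_plan 10.1.252.73 11 80,3306
```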
Experiment:
1. Now add a MySQL cluster service as well:
1) First, on both RSes, grant the same user remote login:
[root@centos7 ~]# mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.44-MariaDB MariaDB Server
Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
# Grant user test, connecting from any host in the 10.1.0.0/16 network, password testpass
MariaDB [(none)]> GRANT ALL ON *.* TO 'test'@'10.1.%.%' IDENTIFIED BY 'testpass';
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE mydb;    # create a database
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.01 sec)
MariaDB [(none)]> exit
Bye
The grant above is on RS2; RS1 is granted the same way (to tell the two apart, RS1 does not get the mydb database), so it is not shown here. After granting, test that remote login actually works; I have already verified it without problems.
2) Now add a MySQL cluster service to the lvs-dr setup of the previous example:
[root@centos7 ~]# ipvsadm -A -t 10.1.252.73:3306 -s rr
[root@centos7 ~]# ipvsadm -a -t 10.1.252.73:3306 -r 10.1.252.37 -g -w 1
[root@centos7 ~]# ipvsadm -a -t 10.1.252.73:3306 -r 10.1.249.203 -g -w 1
[root@centos7 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.252.73:80 wrr
  -> 10.1.249.203:80              Route   2      0          0
  -> 10.1.252.37:80               Route   1      0          0
TCP  10.1.252.73:3306 rr
  -> 10.1.249.203:3306            Route   1      0          0
  -> 10.1.252.37:3306             Route   1      0          0
3) Test by logging in remotely to the MySQL service at the Director's address (the Director only fronts it); the real responses come from MySQL on the backend RSes, rotating round-robin:
[root@CentOS6 ~]# mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |    # answered by RS2
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@CentOS6 ~]# mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |    # answered by RS1
| test               |
| vsftpd             |
+--------------------+
[root@CentOS6 ~]# mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |    # answered by RS2
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@CentOS6 ~]# mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |    # answered by RS1
| performance_schema |
| test               |
| vsftpd             |
+--------------------+
4) Now access the web service and the MySQL service at the same time: they are two independent services, each scheduled separately with its own round-robin rotation, as shown below:
[root@CentOS6 ~]# curl http://10.1.252.73
<h1>RS2</h1>
[root@CentOS6 ~]# mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |    # RS2
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@CentOS6 ~]# curl http://10.1.252.73
<h1>RS1</h1>
[root@CentOS6 ~]# mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |    # RS2
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@CentOS6 ~]# curl http://10.1.252.73
<h1>RS2</h1>
[root@CentOS6 ~]# mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |    # RS1
| test               |
| vsftpd             |
+--------------------+
2. Now classify packets with a firewall mark so that these two different applications are scheduled by a single cluster service.
1) First set the mark with the firewall, tagging both the port-80 and the port-3306 service with mark 11:
[root@centos7 ~]# iptables -t mangle -A PREROUTING -d 10.1.252.73 -p tcp -m multiport --dports 80,3306 -j MARK --set-mark 11
[root@centos7 ~]# iptables -t mangle -vnL
Chain PREROUTING (policy ACCEPT 228 packets, 21439 bytes)
 pkts bytes target prot opt in  out source     destination
    0     0 MARK   tcp  --  *   *   0.0.0.0/0  10.1.252.73  multiport dports 80,3306 MARK set 0xb
Chain INPUT (policy ACCEPT 219 packets, 20737 bytes)
 pkts bytes target prot opt in  out source     destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in  out source     destination
Chain OUTPUT (policy ACCEPT 136 packets, 12838 bytes)
 pkts bytes target prot opt in  out source     destination
Chain POSTROUTING (policy ACCEPT 136 packets, 12838 bytes)
 pkts bytes target prot opt in  out source     destination
2) Now define the cluster service on the firewall mark:
[root@centos7 ~]# ipvsadm -C    # flush the rules defined earlier
[root@centos7 ~]# ipvsadm -A -f 11 -s wrr
[root@centos7 ~]# ipvsadm -a -f 11 -r 10.1.252.37 -g -w 1
[root@centos7 ~]# ipvsadm -a -f 11 -r 10.1.249.203 -g -w 1
[root@centos7 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  11 wrr
  -> 10.1.249.203:0               Route   1      0          0
  -> 10.1.252.37:0                Route   1      0          0
3) Test: the two services are now treated as one class of requests and scheduled together, sharing a single rotation instead of each having its own;
[root@CentOS6 ~]# curl http://10.1.252.73
<h1>RS2</h1>
[root@CentOS6 ~]# mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
| vsftpd             |
+--------------------+
[root@CentOS6 ~]# curl http://10.1.252.73
<h1>RS2</h1>
[root@CentOS6 ~]# mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@CentOS6 ~]# curl http://10.1.252.73
<h1>RS1</h1>
★ Purpose:
Persistent connection template: whatever the scheduling algorithm, requests coming from the same client address are sent to the same RS for a set period of time.
★ Port affinity:
Per-port persistence: each cluster service is defined, and made persistent, separately;
Per-FWM persistence: a persistent cluster service defined on a firewall mark; applications on several ports are scheduled as one unit, which is port affinity proper;
Per-client persistence: a cluster service defined on port 0, so that all requests from a client, to any application, are scheduled to the backend host and can be bound by the persistent connection.
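The three granularities map onto ipvsadm like this (a sketch using this lab's VIP and an assumed 300-second timeout; these are not the commands from the experiment that follows):

```shell
# Per-port: persistence is scoped to this one service only.
ipvsadm -A -t 10.1.252.73:80 -s rr -p 300
# Per-FWM: everything sharing mark 11 persists together (port affinity).
ipvsadm -A -f 11 -s rr -p 300
# Per-client: port 0 catches every TCP port on the VIP.
ipvsadm -A -t 10.1.252.73:0 -s rr -p 300
```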
Experiment:
1. Before persistence is enabled, scheduling is plain round-robin:
[root@centos7 ~]# ipvsadm -A -t 10.1.252.73:80 -s rr
[root@centos7 ~]# ipvsadm -a -t 10.1.252.73:80 -r 10.1.252.37 -g -w 1
[root@centos7 ~]# ipvsadm -a -t 10.1.252.73:80 -r 10.1.249.203 -g -w 1
[root@CentOS6 ~]# for i in {1..10}; do curl http://10.1.252.73; done
<h1>RS2</h1>
<h1>RS1</h1>
<h1>RS2</h1>
<h1>RS1</h1>
<h1>RS2</h1>
<h1>RS1</h1>
<h1>RS2</h1>
<h1>RS1</h1>
<h1>RS2</h1>
<h1>RS1</h1>
[root@centos7 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.252.73:80 rr
  -> 10.1.249.203:80              Route   1      0          5
  -> 10.1.252.37:80               Route   1      0          5
2. Now set the persistence timeout to 60 seconds and test again: within that window, every request keeps going to the RS that was matched first;
[root@centos7 ~]# ipvsadm -E -t 10.1.252.73:80 -s rr -p 60
[root@centos7 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.252.73:80 rr persistent 60
  -> 10.1.249.203:80              Route   1      0          0
  -> 10.1.252.37:80               Route   1      0          0
[root@CentOS6 ~]# for i in {1..10}; do curl http://10.1.252.73; done
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
3. Now combine this with the FWM mechanism from before to get port affinity: both services are scheduled as one unit under a shared persistent binding (-p with no value uses the default timeout, shown here as 360 seconds)
[root@centos7 ~]# ipvsadm -C
[root@centos7 ~]# ipvsadm -A -f 11 -s rr -p
[root@centos7 ~]# ipvsadm -a -f 11 -r 10.1.252.37 -g -w 1
[root@centos7 ~]# ipvsadm -a -f 11 -r 10.1.249.203 -g -w 1
[root@centos7 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  11 rr persistent 360
  -> 10.1.249.203:0               Route   1      0          0
  -> 10.1.252.37:0                Route   1      0          0
Testing shows the two services share the persistent binding:
[root@CentOS6 ~]# for i in {1..10}; do curl http://10.1.252.73; done
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
[root@CentOS6 ~]# for i in {1..10}; do mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'; done
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
(the same table is printed for every iteration: all ten queries are answered by RS2)
4. Define the cluster service on port 0, so that all requests from a client, to any application, are scheduled to the backend host and bound by the persistent connection
[root@centos7 ~]# ipvsadm -C
[root@centos7 ~]# ipvsadm -A -t 10.1.252.73:0 -s rr -p
[root@centos7 ~]# ipvsadm -a -t 10.1.252.73:0 -r 10.1.252.37 -g -w 1
[root@centos7 ~]# ipvsadm -a -t 10.1.252.73:0 -r 10.1.249.203 -g -w 1
[root@centos7 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.252.73:0 rr persistent 360
  -> 10.1.249.203:0               Route   1      0          0
  -> 10.1.252.37:0                Route   1      0          0
Testing shows that whatever the service (HTTP, MySQL, even SSH), every request is scheduled to the same backend host, bound by the persistent connection:
[root@CentOS6 ~]# for i in {1..10}; do curl http://10.1.252.73; done
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
<h1>RS2</h1>
[root@CentOS6 ~]# for i in {1..10}; do mysql -h10.1.252.73 -utest -ptestpass -e 'show databases'; done
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |    # RS2
| mysql              |
| performance_schema |
| test               |
+--------------------+
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@CentOS6 ~]# ssh root@10.1.252.73
The authenticity of host '10.1.252.73 (10.1.252.73)' can't be established.
RSA key fingerprint is cf:7d:49:75:55:54:45:88:a3:dd:ff:f3:87:be:3f:06.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.1.252.73' (RSA) to the list of known hosts.
root@10.1.252.73's password:
Last login: Sun Oct 30 17:22:47 2016 from 10.1.250.25
[root@centos7 ~]# ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.249.203  netmask 255.255.0.0  broadcast 10.1.255.255
        inet6 fe80::20c:29ff:fe2b:b6e7  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:2b:b6:e7  txqueuelen 1000  (Ethernet)
        RX packets 42213  bytes 3831451 (3.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3329  bytes 382656 (373.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 456  bytes 44634 (43.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 456  bytes 44634 (43.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 10.1.252.73  netmask 255.255.255.255
        loop  txqueuelen 0  (Local Loopback)
★ Saving rules (recommended location: /etc/sysconfig/ipvsadm):
ipvsadm-save > /PATH/TO/IPVSADM_FILE
ipvsadm -S > /PATH/TO/IPVSADM_FILE
systemctl stop ipvsadm.service
★ Reloading:
ipvsadm-restore < /PATH/FROM/IPVSADM_FILE
ipvsadm -R < /PATH/FROM/IPVSADM_FILE
systemctl restart ipvsadm.service
Example:
[root@centos7 ~]# cat /usr/lib/systemd/system/ipvsadm.service
[Unit]
Description=Initialise the Linux Virtual Server
After=syslog.target network.target

[Service]
Type=oneshot
ExecStart=/bin/bash -c "exec /sbin/ipvsadm-restore < /etc/sysconfig/ipvsadm"
ExecStop=/bin/bash -c "exec /sbin/ipvsadm-save -n > /etc/sysconfig/ipvsadm"
ExecStop=/sbin/ipvsadm -C
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
[root@centos7 ~]# ipvsadm -S > /etc/sysconfig/ipvsadm
[root@centos7 ~]# cat /etc/sysconfig/ipvsadm
-A -t 10.1.252.73:0 -s rr -p 360
-a -t 10.1.252.73:0 -r 10.1.249.203:0 -g -w 1
-a -t 10.1.252.73:0 -r 10.1.252.37:0 -g -w 1
Original source: http://1992tao.blog.51cto.com/11606804/1867464