In the previous article we covered the theory behind LVS: where it came from, what it aims to do, the architecture of its three forwarding models, and the ten scheduling algorithms implemented in kernel space. Today we put that into practice and build the NAT model, one of the three, using a web cluster as the test case.
Our LVS-NAT environment consists of three Linux servers, configured as described below.
LVS-NAT is based on the NAT mechanism. When a user request reaches the director, the director rewrites the packet's destination address (the VIP) to the address of the selected real server, rewrites the destination port to the corresponding port on that real server, and forwards the packet to it. After processing the request, the real server returns the response to the director, which rewrites the source address and source port back to the VIP and its port before sending the data on to the user, completing the load-balancing cycle.
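Later, once the cluster below is up, you can watch this rewriting happen with a packet capture on the director (a minimal sketch, assuming tcpdump is installed and the interface names match the configuration that follows):
[root@Director ~]# tcpdump -nn -i eth1 port 80 # client leg: the destination is still the VIP (172.16.21.110)
[root@Director ~]# tcpdump -nn -i eth2 port 80 # real-server leg: the destination has been rewritten to the RIP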
The hosts run under VMware Workstation 11. The director's DIP and the RIPs of the two real servers (RS1, RS2) sit on a host-only network so that they share one LAN, while the director's external interface is bridged to the host machine, which makes testing easier later on.
One director:
Two real servers:
To build the LVS cluster we only need to install the ipvsadm tool on the director; here we install it from the RPM package that ships with Red Hat.
LVS is now part of the standard Linux kernel. Before kernel 2.4 you had to recompile the kernel to add the LVS modules, but since 2.4 every LVS module is built in, so the kernel needs no patches and the LVS features can be used directly.
Note:
[root@Director Packages]# uname -r # our kernel is 2.6.32-358.el6.x86_64, so no patch is needed; if your kernel is older than 2.4, patch it first
2.6.32-358.el6.x86_64
[root@Director ~]# modprobe -l | grep ipvs # the ten scheduling algorithms explained last time are all here (if these modules show up, your kernel supports IPVS)
kernel/net/netfilter/ipvs/ip_vs.ko
kernel/net/netfilter/ipvs/ip_vs_rr.ko
kernel/net/netfilter/ipvs/ip_vs_wrr.ko
kernel/net/netfilter/ipvs/ip_vs_lc.ko
kernel/net/netfilter/ipvs/ip_vs_wlc.ko
kernel/net/netfilter/ipvs/ip_vs_lblc.ko
kernel/net/netfilter/ipvs/ip_vs_lblcr.ko
kernel/net/netfilter/ipvs/ip_vs_dh.ko
kernel/net/netfilter/ipvs/ip_vs_sh.ko
kernel/net/netfilter/ipvs/ip_vs_sed.ko
kernel/net/netfilter/ipvs/ip_vs_nq.ko
kernel/net/netfilter/ipvs/ip_vs_ftp.ko
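These modules load on demand; if you want to confirm that the core module actually loads on your kernel, the following quick check works (optional, since ipvsadm loads ip_vs automatically):
[root@Director ~]# modprobe ip_vs # load the IPVS core module by hand
[root@Director ~]# lsmod | grep ip_vs # it should now appear in the loaded-module list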
We use the local installation DVD as the YUM repository.
[root@Director ~]# mount /dev/sr0 /media # mount the local DVD
mount: block device /dev/sr0 is write-protected, mounting read-only
All three servers need this repository, because the two real servers will later use it to install the web service that the cluster provides.
[root@Director ~]# vim /etc/yum.repos.d/rhel-source.repo
[localhost] # repository ID (arbitrary)
name=localhost # description (arbitrary)
baseurl=file:///media # repository URL, here the mounted DVD
enabled=1 # use this repo: 0 disabled, 1 enabled
gpgcheck=0 # GPG key check: 0 skip, 1 check
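Before continuing you can confirm the repository is usable; this quick sanity check uses only standard yum subcommands:
[root@Director ~]# yum clean all # drop any stale cached metadata
[root@Director ~]# yum repolist # the localhost repo should be listed with a non-zero package count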
That completes the preparation; on to the next step.
[root@Director Packages]# yum install -y ipvsadm-1.25-10.el6.x86_64.rpm
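An optional check that the install succeeded (running ipvsadm once also auto-loads the ip_vs kernel module):
[root@Director Packages]# rpm -q ipvsadm # confirm the package is installed
[root@Director Packages]# ipvsadm -L -n # an empty rule listing means the tool and the kernel module both work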
[root@Director ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1 # no GATEWAY here (in real production, point it at the public gateway address your ISP provides)
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.21.110
NETMASK=255.255.255.0
[root@Director ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2 # this internal NIC needs no GATEWAY since it shares a LAN with the real servers (adjust to your environment)
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.91.70
NETMASK=255.255.255.0
[root@Director ~]# service network restart
Shutting down interface eth1: [ OK ]
Shutting down interface eth2: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth1: [ OK ]
Bringing up interface eth2: [ OK ]
[root@Director ~]# ifconfig # verify the configuration took effect
eth1 Link encap:Ethernet HWaddr 00:0C:29:E5:9A:47
inet addr:172.16.21.110 Bcast:172.16.21.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fee5:9a47/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6126 errors:0 dropped:0 overruns:0 frame:0
TX packets:4370 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:524154 (511.8 KiB) TX bytes:753784 (736.1 KiB)
eth2 Link encap:Ethernet HWaddr 00:0C:29:E5:9A:51
inet addr:192.168.91.70 Bcast:192.168.91.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fee5:9a51/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:28 errors:0 dropped:0 overruns:0 frame:0
TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4146 (4.0 KiB) TX bytes:2160 (2.1 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:822 errors:0 dropped:0 overruns:0 frame:0
TX packets:822 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:93120 (90.9 KiB) TX bytes:93120 (90.9 KiB)
[root@Director ~]# cat /proc/sys/net/ipv4/ip_forward # check whether IP forwarding is enabled (1 on, 0 off)
0
[root@Director ~]# vim /etc/sysctl.conf # enable IP forwarding
net.ipv4.ip_forward = 1 # change the 0 to 1
[root@Director ~]# sysctl -p # reload the configuration
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
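Editing /etc/sysctl.conf makes the change persist across reboots. If you only need forwarding enabled right away, either of these equivalent, non-persistent forms also works:
[root@Director ~]# sysctl -w net.ipv4.ip_forward=1 # takes effect immediately, lost on reboot
[root@Director ~]# echo 1 > /proc/sys/net/ipv4/ip_forward # same effect, written straight to procfs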
[root@RS1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.91.80
NETMASK=255.255.255.0
GATEWAY=192.168.91.70
NM_CONTROLLED=yes
[root@RS1 ~]# service network restart
Shutting down interface eth1: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth1: [ OK ]
[root@RS1 ~]# ifconfig # verify the configuration took effect
eth1 Link encap:Ethernet HWaddr 00:0C:29:41:4A:CC
inet addr:192.168.91.80 Bcast:192.168.91.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe41:4acc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5212 errors:0 dropped:0 overruns:0 frame:0
TX packets:3586 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:435875 (425.6 KiB) TX bytes:252327 (246.4 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:129 errors:0 dropped:0 overruns:0 frame:0
TX packets:129 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:13806 (13.4 KiB) TX bytes:13806 (13.4 KiB)
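In the NAT model the real server's default gateway must be the director's DIP; otherwise responses bypass the director and their source address is never rewritten back to the VIP. A quick way to verify the route (the same check applies to RS2):
[root@RS1 ~]# route -n | grep ^0.0.0.0 # the default route should point at 192.168.91.70
[root@RS1 ~]# ping -c 2 192.168.91.70 # the director's DIP should answer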
[root@RS1 ~]# yum install -y httpd
[root@RS1 ~]# service httpd start # start httpd
Starting httpd: httpd: apr_sockaddr_info_get() failed for RS1
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
[ OK ]
[root@RS1 ~]# service httpd status # confirm httpd is running
httpd (pid 14114) is running...
[root@RS1 ~]# netstat -an | grep :80 # confirm the web service is listening on port 80
tcp 0 0 :::80 :::* LISTEN
[root@RS1 ~]# echo "RS1.xuxingzhuang.com" > /var/www/html/index.html # give the web service a test page
[root@RS1 ~]# curl http://localhost # confirm the local web service responds
RS1.xuxingzhuang.com
[root@RS1 ~]# iptables -F # flush firewall rules so they don't interfere with the test
[root@RS2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.91.90
NETMASK=255.255.255.0
GATEWAY=192.168.91.70
NM_CONTROLLED=yes
[root@RS2 ~]# service network restart
Shutting down interface eth1: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth1: [ OK ]
[root@RS2 ~]# ifconfig # verify the configuration took effect
eth1 Link encap:Ethernet HWaddr 00:0C:29:9A:31:FB
inet addr:192.168.91.90 Bcast:192.168.91.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe9a:31fb/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5749 errors:0 dropped:0 overruns:0 frame:0
TX packets:4646 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:457342 (446.6 KiB) TX bytes:1052880 (1.0 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:167 errors:0 dropped:0 overruns:0 frame:0
TX packets:167 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:17790 (17.3 KiB) TX bytes:17790 (17.3 KiB)
[root@RS2 ~]# yum install -y httpd
[root@RS2 ~]# service httpd start # start httpd
Starting httpd: httpd: apr_sockaddr_info_get() failed for RS2
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
[ OK ]
[root@RS2 ~]# service httpd status # confirm httpd is running
httpd (pid 2069) is running...
[root@RS2 ~]# netstat -an | grep :80 # confirm the web service is listening on port 80
tcp 0 0 :::80 :::* LISTEN
[root@RS2 ~]# echo "RS2.xuxingzhuang.com" > /var/www/html/index.html # give the web service a test page
[root@RS2 ~]# curl http://localhost # confirm the local web service responds
RS2.xuxingzhuang.com
[root@RS2 ~]# iptables -F # flush firewall rules so they don't interfere with the test
Sanity check:
[root@Director ~]# curl http://192.168.91.80 # the director can reach both backend web servers
RS1.xuxingzhuang.com
[root@Director ~]# curl http://192.168.91.90
RS2.xuxingzhuang.com
For this experiment we use the kernel's rr (round-robin) scheduling algorithm. If the LVS kernel scheduling algorithms are still unclear, see the previous article, which covers LVS cluster services in detail.
[root@Director ~]# ipvsadm -A -t 172.16.21.110:80 -s rr
[root@Director ~]# ipvsadm -a -t 172.16.21.110:80 -r 192.168.91.80 -m -w 2 # the -w weight has no effect under rr (round robin), but setting it now means the real servers won't need redefining when we switch algorithms later
[root@Director ~]# ipvsadm -a -t 172.16.21.110:80 -r 192.168.91.90 -m -w 1
[root@Director ~]# ipvsadm -L -n # list the cluster service
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.21.110:80 rr
-> 192.168.91.80:80 Masq 2 0 0
-> 192.168.91.90:80 Masq 1 0 0
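Note that ipvsadm rules live only in the kernel and disappear on reboot. On RHEL 6 they can be persisted with the init script bundled with the package (an optional step):
[root@Director ~]# service ipvsadm save # writes the current rules to /etc/sysconfig/ipvsadm for restore at boot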
Test from a host on the same network segment as the director's VIP; in a real deployment you would simply point that host's gateway at the public address your ISP provides.
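From such a test host, a short curl loop makes the round-robin behaviour easy to see (a sketch assuming a Linux client with curl installed; the "client" hostname is hypothetical):
[root@client ~]# for i in 1 2 3 4; do curl -s http://172.16.21.110; done # under rr the responses should alternate strictly between the RS1 and RS2 pages defined earlier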
[root@Director ~]# ipvsadm -E -t 172.16.21.110:80 -s wrr # -E edits the virtual service, changing the scheduler to wrr; the -w weights set earlier now take effect, so the real servers need no redefinition
[root@Director ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.21.110:80 wrr
-> 192.168.91.80:80 Masq 2 0 0
-> 192.168.91.90:80 Masq 1 0 0
With RS1 weighted 2 and RS2 weighted 1, the schedule now follows those weights: refreshing sends two requests to RS1 for every one to RS2.
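Rather than counting browser refreshes, you can also watch the split from the director with ipvsadm's statistics view:
[root@Director ~]# ipvsadm -L -n --stats # the Conns counters should grow roughly 2:1 between RS1 and RS2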
That gives us a working web cluster built on the LVS-NAT model. We tested two of the kernel scheduling algorithms here; the other eight are configured the same way, so the main thing is to understand how they differ. Next time we will walk through building an LVS cluster on the LVS-DR model.
Original article: http://blog.csdn.net/xuxingzhuang/article/details/51650071