Tags: keepalived nginx lvs
1. Introduction
keepalived is a piece of software written in C that provides an HA (high availability) layer on top of a Linux infrastructure. The HA part is implemented with the VRRP protocol, and it can provide high availability for load balancers built with LVS, Nginx, HAProxy and the like.
The diagram below shows keepalived's software architecture.
Its main core modules are the checkers (health-check framework), the VRRP stack, and the system interfaces such as the IPVS wrapper and the netlink reflector.
2. Prerequisites
Set up a time server
Provide a single clock source for the 192.168.23.0/24 network: every machine on this network runs the ntpd daemon and points at the time server 192.168.23.123. In other words, add one line to the server section of the ntpd configuration file and start ntpd.
# vim /etc/ntp.conf
server 192.168.23.123
# service ntpd start
# chkconfig ntpd on
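To confirm that a machine is actually synchronising against 192.168.23.123, a quick check (assuming the ntp client utilities are installed) is:
# ntpq -p
The time server should appear in the peer list, marked with an asterisk once it has been selected as the sync source.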
3. Highly Available Reverse Proxy
This setup uses LNNMP and makes the first N (the nginx reverse proxy tier) highly available. In the experiment the upstream servers are two nginx web servers; MySQL and PHP are left out. For the LNNMP stack itself, see the post 《Nginx之LNMP、LNNMP、LNNNMP架构实现及缓存技术》.
3.1 Layout
3.2 Installing keepalived
keepalived has been included in the base repository since CentOS 6.4, so it can be installed directly with yum.
# yum -y install keepalived
# rpm -ql keepalived
/etc/keepalived
/etc/keepalived/keepalived.conf
/etc/rc.d/init.d/keepalived
/etc/sysconfig/keepalived
/usr/sbin/keepalived
……
As shown, a default configuration file is already placed under /etc/keepalived; all that is left is to adapt it to the plan above.
3.3 Per-Server Configuration
3.3.1 WEB1 Configuration
nginx.conf
# vim /usr/local/nginx/conf/nginx.conf
user nginx;
worker_processes auto;
#pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log;
events {
use epoll;
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 5;
#gzip on;
server {
listen 80;
server_name WEB1;
add_header X-upS WEB1-$server_addr:$server_port;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root /web/nginx/static;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
index.html
# vim /web/nginx/static/index.html
<html>
<head>
<title>static</title>
</head>
<body>
<h1 align="center">OK! This is a static page of WEB1</h1>
</body>
</html>
3.3.2 WEB2 Configuration
nginx.conf
# vim /usr/local/nginx/conf/nginx.conf
user nginx;
worker_processes auto;
#pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log;
events {
use epoll;
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 5;
#gzip on;
server {
listen 8080;
server_name WEB2;
add_header X-upS WEB2-$server_addr:$server_port;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root /web/nginx/static;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
index.html
# vim /web/nginx/static/index.html
<html>
<head>
<title>static</title>
</head>
<body>
<h1 align="center">OK! This is a static page of WEB2</h1>
</body>
</html>
3.3.3 Proxy1 (keepalived MASTER)
nginx.conf
# vim /usr/local/nginx/conf/nginx.conf
user nginx;
worker_processes auto;
#pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log;
events {
use epoll;
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 5;
#gzip on;
upstream static {
server 192.168.23.80;
server 192.168.23.81:8080;
server 127.0.0.1:8080 backup;
}
server {
listen 80;
server_name localhost;
add_header X-Proxy Proxy100-$server_addr:$server_port;
#charset koi8-r;
#access_log logs/host.access.log main;
location ~ \.(gif|jpg|jpeg|png|bmp|swf)$ {
expires 30d;
}
location ~ \.(js|css)$ {
expires 1h;
}
location / {
proxy_pass http://static;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
server {
listen 8080;
server_name sorry;
add_header X-Sorry SorryServer-$server_addr:$server_port;
location / {
root html;
rewrite .* /maintenance.html break;
}
}
}
maintenance.html
# vim /usr/local/nginx/html/maintenance.html
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Sorry</title>
</head>
<body>
<h1 align="center">Down for maintenance</h1>
</body>
</html>
keepalived.conf
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id PROXY_HA1
}
vrrp_instance VI_PROXY_HA {
state MASTER
interface eth0
virtual_router_id 10
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 192pass
}
virtual_ipaddress {
172.16.23.80
}
}
3.3.4 Proxy2 (keepalived BACKUP)
nginx.conf
# vim /usr/local/nginx/conf/nginx.conf
user nginx;
worker_processes auto;
#pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log;
events {
use epoll;
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 5;
#gzip on;
upstream static {
server 192.168.23.80;
server 192.168.23.81:8080;
server 127.0.0.1:8080 backup;
}
server {
listen 80;
server_name localhost;
add_header X-Proxy Proxy101-$server_addr:$server_port;
#charset koi8-r;
#access_log logs/host.access.log main;
location ~ \.(gif|jpg|jpeg|png|bmp|swf)$ {
expires 30d;
}
location ~ \.(js|css)$ {
expires 1h;
}
location / {
proxy_pass http://static;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
server {
listen 8080;
server_name sorry;
add_header X-Sorry SorryServer-$server_addr:$server_port;
location / {
root html;
rewrite .* /maintenance.html break;
}
}
}
maintenance.html
# vim /usr/local/nginx/html/maintenance.html
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Sorry</title>
</head>
<body>
<h1 align="center">Down for maintenance</h1>
</body>
</html>
keepalived.conf
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id PROXY_HA2
}
vrrp_instance VI_PROXY_HA {
state BACKUP
interface eth0
virtual_router_id 10
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 192pass
}
virtual_ipaddress {
172.16.23.80
}
}
As the configuration above shows, the only floating resource in use is the VIP 172.16.23.80. The configuration is quite simple and is not explained in detail here; for the nginx side, see the post 《Nginx之LNMP、LNNMP、LNNNMP架构实现及缓存技术》.
To tell the two proxies apart, an extra response header was deliberately added to each proxy's nginx configuration, with a different value on each node, so that while verifying the HA behaviour it is easy to see which proxy server is actually handling traffic.
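A quick way to see which proxy and which upstream answered is to request the VIP from a client and look at the response headers (a minimal check, assuming curl is available on the client):
# curl -I http://172.16.23.80/
The X-Proxy header identifies the proxy node and X-upS identifies the upstream web server; repeating the request should show X-upS alternating between WEB1 and WEB2 under round-robin scheduling.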
3.4 Testing
(1) Start the backup node
First start the nginx proxy and the keepalived service on Proxy2, and the nginx web service on WEB1 and WEB2. Do not start the services on Proxy1 yet.
In the screenshot above, keepalived has been started on the backup node. Since it is the only node present, it becomes the master, configures the VIP and starts serving.
The two screenshots above show the custom response header X-Proxy: the proxy answers on 172.16.23.80 port 80 and round-robin schedules user requests to the upstream servers.
(2) Start the master node
Now start the nginx proxy and the keepalived service on Proxy1. Because Proxy1 has the higher priority, it becomes the MASTER in fact as well as in name: it takes the VIP back and configures it on its designated interface, while Proxy2 releases the VIP.
In the screenshot above the master node enters MASTER state and configures the VIP.
Meanwhile the backup node drops to BACKUP state and releases the IP resource.
The response headers show that the answering proxy really has changed: this nginx proxy was built with a modified version string, so the Server header reads MeXP/1.0.1, which proves that a different proxy is now serving. Because both proxies serve through the same VIP, the X-Proxy header looks much the same, though it can be configured to differ per node (here the Proxy100/Proxy101 prefix already does).
Round-robin scheduling to the upstream servers continues as before.
(3) Wrap-up
The tests above show that the VIP really does float between the nodes. There is still a problem, though: if nginx stops working, the VIP does not move, yet the node holding the VIP can no longer serve web traffic. So a check on TCP port 80 needs to be added: as long as the web service can return a page, or at least a 200 status code, the node is considered healthy; otherwise nginx should be restarted or the VIP should be moved away.
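As a sketch of that idea, the check could be expressed as a small script like the one below (the script name and path are only illustrative; the configuration that follows actually uses the simpler killall -0 process check):
# vim /etc/keepalived/chk_web.sh
#!/bin/bash
# Exit 0 (healthy) only if the local web service answers with HTTP 200.
code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1/)
[ "$code" = "200" ]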
The keepalived configuration is improved accordingly below.
3.5 Notifications on State Transitions
Keyword | Description | Value
notify_master | shell script to execute on transition to MASTER state | path
notify_backup | shell script to execute on transition to BACKUP state | path
notify_fault | shell script to execute on transition to FAULT state | path
As the table above shows, keepalived can invoke a script on every state change, so the logic for managing the nginx service can be placed there.
The goal: on the transition to MASTER, besides configuring the VIP, also start nginx; in every other state, remove the VIP and stop nginx.
Taking the backup node as an example, its configuration file is modified as follows:
! Configuration File for keepalived
global_defs {
router_id PROXY_HA2
}
vrrp_instance VI_PROXY_HA {
state BACKUP
interface eth0
virtual_router_id 10
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 192pass
}
virtual_ipaddress {
172.16.23.80
}
notify_master "/etc/rc.d/init.d/nginx start"
notify_backup "/etc/rc.d/init.d/nginx stop"
notify_fault "/etc/rc.d/init.d/nginx stop"
}
From now on starting nginx is keepalived's job, so on both proxy servers stop the nginx service and disable it at boot:
# service nginx stop
# chkconfig nginx off
Now the service follows the master node, and so does the VIP.
One problem remains, though: what if nginx on the proxy server dies on its own?
Monitor the nginx process on the master node; if the check fails, subtract a large value from the node's priority so that the backup node wins the election and is promoted to master. After the old master drops to BACKUP state it restarts nginx (via notify_backup); once nginx starts successfully its weight is restored, and at the next election it regains the master role. With the values below, a failed check lowers the master's effective priority from 100 to 100 - 5 = 95, which is below the backup's 99.
Master node
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id PROXY_HA1
}
vrrp_script chk_nginx {
script "killall -0 nginx"
interval 1
weight -5
fall 2
rise 1
}
vrrp_instance VI_PROXY_HA {
state MASTER
interface eth0
virtual_router_id 10
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 192pass
}
track_script {
chk_nginx
}
virtual_ipaddress {
172.16.23.80
}
notify_master "/etc/rc.d/init.d/nginx start"
notify_backup "/etc/rc.d/init.d/nginx restart"
notify_fault "/etc/rc.d/init.d/nginx stop"
}
Backup node
! Configuration File for keepalived
global_defs {
router_id PROXY_HA2
}
vrrp_script chk_nginx {
script "killall -0 nginx"
interval 1
weight -2
fall 2
rise 1
}
vrrp_instance VI_PROXY_HA {
state BACKUP
interface eth0
virtual_router_id 10
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 192pass
}
track_script {
chk_nginx
}
virtual_ipaddress {
172.16.23.80
}
notify_master "/etc/rc.d/init.d/nginx start"
notify_backup "/etc/rc.d/init.d/nginx restart"
notify_fault "/etc/rc.d/init.d/nginx stop"
}
Now look at the test result, observed from the BACKUP node.
In the screenshot above the BACKUP node is started first; it enters MASTER state, configures the VIP, and calls the /etc/rc.d/init.d/nginx script to start nginx.
Then the MASTER node is started; the BACKUP node falls back to BACKUP state, removes the VIP, and calls the /etc/rc.d/init.d/nginx script to stop nginx.
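At any point, which node currently holds the VIP and what the last transition was can be confirmed directly on a node (assuming the default CentOS 6 syslog target):
# ip addr show eth0
# tail -n 50 /var/log/messages | grep -i keepalived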
4. Highly Available LVS
4.1 Layout
Notes:
This setup uses VS/NAT.
The layout is very similar to the previous one, except that keepalived's built-in IPVS wrapper is used instead of an nginx proxy.
Only the configuration files on master and backup need to change. Note that if you build on top of the previous experiment, the nginx proxy is still listening on TCP port 80; to avoid confusion it is better to stop or remove nginx first.
Note: in this experiment ipvs also inspects traffic sent to port 80, but this is fundamentally different from the previous experiment, because ipvs intercepts and forwards the packets inside the kernel.
4.2 Gateway Setup
With the VS/NAT model the LVS director performs DNAT, so the internal hosts WEB1, WEB2 and the sorry server need a route back through the director. But the two keepalived nodes have different internal addresses, so the return route would effectively change with the master. How is that handled?
Configure one shared internal address, 192.168.23.105, on master and backup. Like the VIP, this resource follows the master node, and it becomes the gateway for the internal hosts that serve external clients; just add a gateway entry to their interface configuration.
On the internal hosts, configure the gateway as follows:
# vim /etc/sysconfig/network-scripts/ifcfg-eth0
GATEWAY=192.168.23.105
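After restarting networking on each internal host, the default route should point at the floating address 192.168.23.105; one quick check (assuming the standard network service scripts):
# service network restart
# ip route show | grep default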
4.3 Installing ipvsadm
ipvsadm is installed only to verify that keepalived really inserts the rules; strictly speaking it does not need to be installed.
# yum -y install ipvsadm
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
No rules exist yet.
4.4 WEB1
Provide a test page and compute its md5 digest.
# cd /web/nginx/static/
# vim test.html
Just test for WEB1
# md5sum test.html
60f16b089b6c69717ffdd41fc0896652 test.html
4.5 WEB2
Provide a test page and compute its md5 digest.
# cd /web/nginx/static/
# vim test.html
Just test for WEB2
# md5sum test.html
b75859919db38e0501fa02de82026078 test.html
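The md5sum of the file works here because the page is served byte-for-byte. keepalived also ships a genhash utility that fetches a URL and prints the digest keepalived will compare against; it can be used instead, run from either keepalived node once the web services are up:
# genhash -s 192.168.23.80 -p 80 -u /test.html
# genhash -s 192.168.23.81 -p 8080 -u /test.html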
4.6 Master Node Configuration
keepalived configuration; also enable packet forwarding on the host.
! Configuration File for keepalived
global_defs {
router_id PROXY_HA1
}
vrrp_sync_group VGM {
group {
VI_LVS_HA
VI_LOCAL_ADDR
}
}
vrrp_instance VI_LOCAL_ADDR {
state MASTER
interface eth1
lvs_sync_daemon_interface eth1
virtual_router_id 11
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 168pass
}
virtual_ipaddress {
192.168.23.105
}
}
vrrp_instance VI_LVS_HA {
state MASTER
interface eth0
lvs_sync_daemon_interface eth0
virtual_router_id 10
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 192pass
}
virtual_ipaddress {
172.16.23.80
}
}
virtual_server 172.16.23.80 80 {
delay_loop 6
lb_algo rr
lb_kind NAT
persistence_timeout 5
protocol TCP
sorry_server 192.168.23.121 8080
real_server 192.168.23.80 80 {
weight 1
HTTP_GET {
url {
path /test.html
digest 60f16b089b6c69717ffdd41fc0896652
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.23.81 8080 {
weight 1
HTTP_GET {
url {
path /test.html
digest b75859919db38e0501fa02de82026078
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
# sed -i 's/net.ipv4.ip_forward =.*/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
# sysctl -p
4.7 Backup Node Configuration
keepalived configuration; also enable packet forwarding on the host.
! Configuration File for keepalived
global_defs {
router_id PROXY_HA2
}
vrrp_sync_group VGM {
group {
VI_LVS_HA
VI_LOCAL_ADDR
}
}
vrrp_instance VI_LOCAL_ADDR {
state BACKUP
interface eth1
lvs_sync_daemon_interface eth1
virtual_router_id 11
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 168pass
}
virtual_ipaddress {
192.168.23.105
}
}
vrrp_instance VI_LVS_HA {
state BACKUP
interface eth0
lvs_sync_daemon_interface eth0
virtual_router_id 10
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 192pass
}
virtual_ipaddress {
172.16.23.80
}
}
virtual_server 172.16.23.80 80 {
delay_loop 6
lb_algo rr
lb_kind NAT
persistence_timeout 5
protocol TCP
sorry_server 192.168.23.121 8080
real_server 192.168.23.80 80 {
weight 1
HTTP_GET {
url {
path /test.html
digest 60f16b089b6c69717ffdd41fc0896652
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.23.81 8080 {
weight 1
HTTP_GET {
url {
path /test.html
digest b75859919db38e0501fa02de82026078
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
# sed -i 's/net.ipv4.ip_forward =.*/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
# sysctl -p
4.8 Sorry Server Configuration
nginx configuration (start nginx after configuring it). The content of maintenance.html was already given above.
user nginx;
worker_processes auto;
#pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log;
events {
use epoll;
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 5;
#gzip on;
server {
listen 8080;
server_name sorry;
add_header X-Sorry SorryServer-$server_addr:$server_port;
location / {
root html;
rewrite .* /maintenance.html break;
}
}
}
4.9 Testing
Start the backup node first, then the master node, and watch how the backup node changes.
Now test:
The response comes directly from a backend real server. The remaining tests are not repeated here.
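On whichever node is currently MASTER, ipvsadm should now show the virtual server 172.16.23.80:80 together with both real servers that keepalived injected, in contrast to the empty rule set seen in section 4.3:
# ipvsadm -L -n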
4.10 What Exactly Do the keepalived Nodes Do?
1) DNAT
The node in MASTER state holds the VIP and at the same time acts as the LVS director: it DNATs client packets and forwards them to the backend servers; the backend servers answer and send their responses back to the director (their gateway), which rewrites the source address to the VIP and sends the data on to the client.
2) VRRP heartbeats
Every keepalived node sends heartbeat packets to the VRRP multicast address.
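These heartbeats are ordinary VRRP advertisements sent to the multicast address 224.0.0.18 (IP protocol 112), so they can be watched on either node with a packet capture, for example:
# tcpdump -i eth0 -nn host 224.0.0.18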
3) Health checks
The director constantly probes the backend servers over the internal network; here it fetches the test page and compares its md5 digest. The advantage is that the check is answered by the web service itself, which is more reliable than pinging the server or merely watching the web server process: a host that answers ping, or a process that merely exists, does not prove that the web service can actually serve content.
4.11 sorry server
Rename the test page test.html on WEB1 and WEB2. The checks on both keepalived nodes now fail to find test.html, so the real servers are considered faulty and removed, and traffic can only be directed to the sorry server.
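The failure can be simulated simply by renaming the test page on both web servers and then watching the ipvs rules shrink on the MASTER node (restore the file afterwards to bring the real servers back); a sketch:
On WEB1 and WEB2:
# mv /web/nginx/static/test.html /web/nginx/static/test.html.bak
On the MASTER node:
# ipvsadm -L -n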
[ Packet capture analysis ]
The capture shows that requests for index.html are now answered by 192.168.23.121 on port 8080, which returns the maintenance page.
Requests for the test page test.html return a 404 error: test.html can no longer be found, so keepalived removes WEB1 and WEB2 from the ipvs scheduling and adds the sorry server instead.
This completes the highly available VS/NAT experiment based on keepalived.
4.12 Summary
In NAT mode the packets are DNATed and then forwarded to the real servers, so the real servers must have a gateway or route back through the director. But in an HA cluster the node that is currently master is not fixed, so it is not clear which internal address should be the gateway. The solution is to introduce one more resource, an internal gateway IP, that migrates along with the master node; the real servers simply point their gateway at that address. This ensures that, under normal conditions, return traffic goes back to the client along the path it came in on.
The HTTP_GET checks against the test pages are performed over the internal addresses, following the configured policy, and they run as long as keepalived is running, on master and backup nodes alike.
5. Going Further: a Dual-Master Model
Here a bigger difficulty appears: to make reply packets return along their original path, a single internal address resource, 192.168.23.105, was used.
In a dual-master (active/active) model there would have to be two such internal addresses, and then the upstream servers on the internal network could no longer configure a single return route. Their replies must go back the way the requests came in, yet the destination address of an upstream server's reply is the client IP, not the director, so the upstream server cannot tell which director the request came through or which director the reply should be sent back to; only the director the request entered through can, via connection tracking, rewrite the source address back to the VIP and forward the packet to the client.
How can this be solved?
(1) VS/FullNAT
Taobao implemented FullNAT and released it publicly.
According to the information the Taobao engineers have provided so far, using LVS FullNAT requires patching a specific kernel version, rebuilding the kernel, and building the ipvsadm and keepalived they provide.
That setup has not been fully worked out here yet, so a later post will cover a dual-master HA setup based on LVS FullNAT + keepalived + nginx (web).
(2) VS/DR
The DR model can also be used: client requests are distributed by DNS across the keepalived nodes (two master nodes here), each master schedules requests to the upstream servers, and the upstream servers reply directly via some router on the internal network (the gateway included), which forwards the data hop by hop back to the client.
Unlike the NAT model, DR does not require a floating internal VRRP address, nor does it require the replies to return to the client through the director.
References
http://www.keepalived.org/LVS-NAT-Keepalived-HOWTO.html
http://kb.linuxvirtualserver.org/wiki/IPVS_FULLNAT_and_SYNPROXY