# Environment:
#   OS: CentOS 6.5 64-bit, IP 192.168.179.103  (master)
#   OS: CentOS 6.5 64-bit, IP 192.168.179.102  (backup)

# Install the openssl packages first, otherwise ./configure fails with:
#   checking for openssl/ssl.h... no
#   configure: error:
yum install -y openssl openssl-devel

[root@oldboy tools]# ls -l /usr/src/
total 8
lrwxrwxrwx. 1 root root   39 May 20 17:59 2.6.32-431.el6.x86_64 -> /usr/src/kernels/2.6.32-431.el6.x86_64/
drwxr-xr-x. 2 root root 4096 Sep 23  2011 debug
drwxr-xr-x. 3 root root 4096 May 20 14:17 kernels
lrwxrwxrwx. 1 root root   39 May 20 17:59 linux -> /usr/src/kernels/2.6.32-431.el6.x86_64/

[root@oldboy tools]# tar xf keepalived-1.1.19.tar.gz && cd keepalived-1.1.19
[root@oldboy keepalived-1.1.19]# ./configure
Keepalived configuration
------------------------
Keepalived version       : 1.1.19
Compiler                 : gcc
Compiler flags           : -g -O2
Extra Lib                : -lpopt -lssl -lcrypto
Use IPVS Framework       : Yes        # IPVS framework enabled
IPVS sync daemon support : Yes        # IPVS sync daemon support
Use VRRP Framework       : Yes        # VRRP framework
Use Debug flags          : No

# A possible error at this stage: if configure stops with
#   configure: error: Popt libraries is required
# install the popt development package:
yum install popt-devel -y

[root@oldboy keepalived-1.1.19]# make
[root@oldboy keepalived-1.1.19]# make install
[root@oldboy keepalived-1.1.19]# /bin/cp /usr/local/etc/rc.d/init.d/keepalived /etc/init.d/
[root@oldboy keepalived-1.1.19]# /bin/cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
[root@oldboy keepalived-1.1.19]# mkdir /etc/keepalived -p
[root@oldboy keepalived-1.1.19]# /bin/cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
[root@oldboy keepalived-1.1.19]# /bin/cp /usr/local/sbin/keepalived /usr/sbin/
[root@oldboy keepalived-1.1.19]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@oldboy keepalived-1.1.19]# ps -ef | grep keep
root 35874     1  0 14:25 ?        00:00:00 keepalived -D
root 35876 35874  0 14:25 ?        00:00:00 keepalived -D
root 35877 35874  0 14:25 ?        00:00:00 keepalived -D
root 35879 29339  0 14:25 pts/1    00:00:00 grep keep
[root@oldboy keepalived-1.1.19]# /etc/init.d/keepalived stop   # stop keepalived once the start has been verified
Stopping keepalived:                                       [  OK  ]
chkconfig --add keepalived
chkconfig keepalived on

# Single-instance setup: the master server's configuration file
cat keepalived.conf
! Configuration File for keepalived

global_defs {                          # global definitions
    notification_email {
        493939840@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.179.2
    smtp_connect_timeout 30
    router_id LVS_7                    # keepalived router ID
}

vrrp_instance VI_1 {                   # VRRP instance VI_1
    state MASTER                       # master
    interface eth0                     # NIC
    virtual_router_id 55               # instance ID, must match on master and backup
    priority 150                       # priority
    advert_int 1                       # advertisement interval in seconds; drives failover detection
    authentication {                   # authentication; plain-text PASS is the recommended type, see man keepalived.conf
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {                # VIP address list
        192.168.179.110/24
    }
}

# The backup server's configuration file
! Configuration File for keepalived

global_defs {
    notification_email {
        493939840@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.179.2
    smtp_connect_timeout 30
    router_id LVS_2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.179.110/24
    }
}

Both machines need ipvsadm and keepalived installed. For ipvsadm, `make install` is enough; there is no need to configure it by hand, because keepalived manages the IPVS rules itself. Start the keepalived service on both machines.

[root@oldboy keepalived]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]

# Master keepalived
[root@oldboy keepalived]# ip add          # inspect addresses with `ip add`
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:a3:23 brd ff:ff:ff:ff:ff:ff
    inet 192.168.179.103/24 brd 192.168.179.255 scope global eth0
    inet 192.168.179.110/24 scope global secondary eth0          # the VIP
    inet6 fe80::20c:29ff:fea5:a323/64 scope link
       valid_lft forever preferred_lft forever
[root@oldboy keepalived]# ip add|grep 179.110
    inet 192.168.179.110/24 scope global secondary eth0          # the VIP

# Backup keepalived
[root@oldboy keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a3:b8:e2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.179.102/24 brd 192.168.179.255 scope global eth0
    inet6 fe80::20c:29ff:fea3:b8e2/64 scope link
       valid_lft forever preferred_lft forever
[root@oldboy keepalived]# ip add|grep 179.110
[root@oldboy keepalived]#

Now stop keepalived on the master:

[root@oldboy keepalived]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]

Check the backup's state: it has taken over the VIP, in roughly one second.

[root@oldboy keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a3:b8:e2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.179.102/24 brd 192.168.179.255 scope global eth0
    inet 192.168.179.110/24 scope global secondary eth0
    inet6 fe80::20c:29ff:fea3:b8e2/64 scope link
       valid_lft forever preferred_lft forever

With this, master/backup failover for the load balancer is in place.
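To see the failover mechanics at work, it can also help to watch the VRRP traffic itself. The commands below are a small sketch, not part of the original walkthrough: they assume the interface is eth0 and the VIP is 192.168.179.110. VRRP advertisements are multicast by the current MASTER to 224.0.0.18 using IP protocol 112.

# Run on either director (or any host on the same L2 segment):
# show the advertisements the current MASTER sends every advert_int seconds
tcpdump -i eth0 -nn 'ip proto 112'

# Quick check of which node currently holds the VIP
ip add | grep 192.168.179.110

While the master is up you should see it as the only sender; within about a second of stopping it, the backup starts advertising and binds the VIP.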
Multi-instance setup: two masters backing each other up, with bidirectional takeover and mutual monitoring, each master serving a different application.
Master keepalived configuration file

! Configuration File for keepalived
global_defs {
    notification_email {
        493939840@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.179.2
    smtp_connect_timeout 30
    router_id LVS_1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.179.110/24
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.179.111/24
    }
}

Backup keepalived configuration file

! Configuration File for keepalived
global_defs {
    notification_email {
        493939840@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.179.2
    smtp_connect_timeout 30
    router_id LVS_1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.179.110/24
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.179.111/24
    }
}

# Restart the service on both master and backup
[root@oldboy keepalived]# /etc/init.d/keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]

# Master node
[root@oldboy keepalived]# ip add|egrep "179.110|179.111"
    inet 192.168.179.110/24 scope global secondary eth0    # this node is MASTER for .110 and BACKUP for .111

# Backup node
[root@oldboy keepalived]# ip add|egrep "179.110|179.111"
    inet 192.168.179.111/24 scope global secondary eth0    # this node is MASTER for .111 and BACKUP for .110

[root@oldboy keepalived]# /etc/init.d/keepalived stop       # stop the service (here on the backup node)
Stopping keepalived:                                       [  OK  ]

# On the master node, both VIPs are now present; it has taken over:
[root@oldboy keepalived]# ip add|egrep "179.110|179.111"
    inet 192.168.179.110/24 scope global secondary eth0
    inet 192.168.179.111/24 scope global secondary eth0

# Start the service on the backup again
[root@oldboy keepalived]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]

# Check the master node again: .111 has been handed back, but recovery takes longer than takeover, roughly 5 to 10 seconds
[root@oldboy keepalived]# ip add|egrep "179.110|179.111"
    inet 192.168.179.110/24 scope global secondary eth0

# If you only have two servers and need high availability, install keepalived on both web servers and point the
# domain's DNS at the VIP. You also need real-time synchronization that pushes data to the backup node
# (e.g. user-uploaded images, forum posts).
# By default keepalived only monitors at the host level: the backup takes over only when the peer machine goes
# down, not when the web server process dies. You can write your own watchdog daemon that stops keepalived
# (and therefore triggers failover) whenever the application is unhealthy.
# Example watchdog on the master: use nmap to probe port 80; if the "open" line count is not 1, stop keepalived.
# (A curl-based variant is sketched after this section.)
#!/bin/sh
while true
do
    if [ `nmap 127.0.0.1 -p 80 | grep open | wc -l` -ne 1 ];then
        /etc/init.d/keepalived stop
    fi
    sleep 100
done
# Run it with `sh check_web.sh`; Ctrl+Z suspends it, `bg` lets it continue in the background, `fg` brings it
# back to the foreground, Ctrl+C stops the script.

# A second small script: try to restart httpd first, and only stop keepalived if httpd still fails to come up
#!/bin/bash
while true
do
    httpdpid=`ps -C httpd --no-heading | wc -l`
    if [ $httpdpid -eq 0 ];then
        /etc/init.d/httpd start
        sleep 5
        httpdpid=`ps -C httpd --no-heading | wc -l`
        if [ $httpdpid -eq 0 ];then
            /etc/init.d/keepalived stop
        fi
    fi
    sleep 5
done
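The first watchdog above relies on nmap, which is often not installed on a minimal system. As a hedged alternative (not from the original article), the same idea can be expressed with curl; the URL, timeout and sleep interval below are illustrative and the script name check_web_curl.sh is made up.

#!/bin/sh
# check_web_curl.sh - illustrative variant of the watchdog above.
# If the local web server stops answering on port 80, stop keepalived
# so the backup node takes over the VIP.
while true
do
    if ! curl -s -o /dev/null --max-time 3 http://127.0.0.1/; then
        /etc/init.d/keepalived stop
    fi
    sleep 10
done

It can be kept running in the background the same way, for example with `nohup sh check_web_curl.sh &`.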
By default keepalived logs to /var/log/messages. To send its messages to a dedicated file, adjust the KEEPALIVED_OPTIONS line in /etc/sysconfig/keepalived.
Why use a dedicated log file? Because /var/log/messages also collects many other services' logs, which makes it inconvenient to read just the keepalived entries.
[root@oldboy keepalived]# cat /etc/sysconfig/keepalived
# Options for keepalived. See `keepalived --help' output and keepalived(8) and
# keepalived.conf(5) man pages for a list of all options. Here are the most
# common ones :
#
# --vrrp               -P    Only run with VRRP subsystem.
# --check              -C    Only run with Health-checker subsystem.
# --dont-release-vrrp  -V    Dont remove VRRP VIPs & VROUTEs on daemon stop.
# --dont-release-ipvs  -I    Dont remove IPVS topology on daemon stop.
# --dump-conf          -d    Dump the configuration data.
# --log-detail         -D    Detailed log messages.
# --log-facility       -S 0-7 Set local syslog facility (default=LOG_DAEMON)
#
# KEEPALIVED_OPTIONS="-D"
KEEPALIVED_OPTIONS="-D -d -S 0"          # -S selects the local syslog facility; 0 means local0

[root@oldboy keepalived]# tail -1 /etc/rsyslog.conf
local0.*    /var/log/keepalivend.log     # local0.* matches every priority of the local0 facility

# Restart rsyslog, then restart keepalived (on CentOS 5.x the logging service is called syslog)
[root@oldboy keepalived]# /etc/init.d/rsyslog restart
Shutting down system logger:                               [  OK  ]
Starting system logger:                                    [  OK  ]
[root@oldboy keepalived]# /etc/init.d/keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]
[root@oldboy keepalived]# tail -5 /var/log/keepalivend.log
May 22 14:03:52 oldboy Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
May 22 14:03:52 oldboy Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
May 22 14:03:52 oldboy Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.179.110
May 22 14:03:52 oldboy Keepalived_healthcheckers: Netlink reflector reports IP 192.168.179.110 added
May 22 14:03:57 oldboy Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.179.110

############# Keepalived with LVS #############

1. The following is the backup node's configuration; settings that differ on the master are noted in the comments.

! Configuration File for keepalived

global_defs {
    notification_email {
        493939840@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.179.2
    smtp_connect_timeout 30
    router_id LVS_2                  # master: router_id LVS_1
}

vrrp_instance VI_1 {
    state BACKUP                     # master: MASTER
    interface eth0
    virtual_router_id 51
    priority 100                     # master: 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.179.110/24           # the VIP that serves traffic
    }
}

virtual_server 192.168.179.110 80 {  # define the virtual server
    delay_loop 6                     # health-check interval
    lb_algo rr                       # scheduling algorithm
    lb_kind DR                       # LVS forwarding mode
    nat_mask 255.255.255.0           # netmask
    persistence_timeout 50           # session persistence, 50 seconds
    protocol TCP                     # forwarding protocol
    # The lines above are roughly equivalent to: ipvsadm -A -t 192.168.179.110:80 -s rr -p 20

    sorry_server 127.0.0.1 80        # emergency fallback server

    real_server 192.168.179.134 80 { # back-end real server
        weight 1                     # node weight
        TCP_CHECK {                  # health-check method
            connect_timeout 8        # response timeout
            nb_get_retry 3           # number of retries
            delay_before_retry 3     # delay between retries
            connect_port 80          # check TCP port 80
        }
    }
    real_server 192.168.179.135 80 { # back-end real server
        weight 1                     # node weight
        TCP_CHECK {                  # health-check method
            connect_timeout 8        # response timeout
            nb_get_retry 3           # number of retries
            delay_before_retry 3     # delay between retries
            connect_port 80          # check TCP port 80
        }
    }
}

For a single-instance configuration, the only differences between master and backup are three settings: router_id, state and priority. For multi-instance configurations, the state and priority of each additional instance differ as well.

# The real_server blocks above are roughly equivalent to:
# ipvsadm -a -t 192.168.179.110:80 -r 192.168.179.134 -g -w 1
# ipvsadm -a -t 192.168.179.110:80 -r 192.168.179.135 -g -w 1

# In LVS DR mode the director does not need kernel IP forwarding; virtual_server is the VIP. The IPVS table can
# be managed with ipvsadm by hand or by keepalived, but keepalived does not call ipvsadm; it uses its own API.
# Both web servers must be configured with ARP suppression and have the VIP bound.
# The check here is a TCP check on port 80: if port 80 is unreachable the node is removed from the pool. An HTTP
# check is also possible, fetching a URL and checking the returned status code (a sketch appears after step 3 below).
# If both RS nodes fail, the sorry_server acts as the emergency fallback.

2. ARP suppression and VIP binding script on the two web servers

[root@data-1-1 tools]# cat ipvs
ipvs_rs  ipvsadm-1.26/  ipvsadm-1.26.tar.gz
[root@data-1-1 tools]# cat ipvs_rs
#!/bin/sh
VIP=(
192.168.179.110          # for multiple VIPs, just add more addresses here
)
. /etc/init.d/functions
case "$1" in
start)
        echo "start LVS of REALServer IP"
        for ((i=0; i<`echo ${#VIP[*]}`; i++))
        do
                interface="lo:`echo ${VIP[$i]}|awk -F . '{print $4}'`"
                /sbin/ifconfig $interface ${VIP[$i]} broadcast ${VIP[$i]} netmask 255.255.255.255 up
                route add -host ${VIP[$i]} dev $interface
        done
        echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
        ;;
stop)
        for ((i=0; i<`echo ${#VIP[*]}`; i++))
        do
                interface="lo:`echo ${VIP[$i]}|awk -F . '{print $4}'`"
                /sbin/ifconfig $interface ${VIP[$i]} broadcast ${VIP[$i]} netmask 255.255.255.255 down
                route del -host ${VIP[$i]} dev $interface
        done
        echo "STOP LVS of REALServer IP"
        # with multiple VIPs, do not reset these settings while other VIPs are still bound
        echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
esac

2.1 Make the script executable and run it at boot

[root@data-1-1 tools]# chmod 700 ipvs_rs
[root@data-1-1 tools]# ./ipvs_rs start
[root@data-1-1 tools]# tail -3 /etc/rc.local
tail: inotify cannot be used, reverting to polling
/application/apache/bin/apachectl start
/home/oldboy/tools/ipvs_rs start
/etc/init.d/keepalived start

3. Restart keepalived on both directors

[root@oldboy keepalived]# /etc/init.d/keepalived restart
Stopping keepalived:                                       [  OK  ]
Starting keepalived:                                       [  OK  ]

# Master node state
[root@oldboy keepalived-1]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:a3:23 brd ff:ff:ff:ff:ff:ff
    inet 192.168.179.103/24 brd 192.168.179.255 scope global eth0
    inet 192.168.179.110/24 scope global secondary eth0
    inet6 fe80::20c:29ff:fea5:a323/64 scope link
       valid_lft forever preferred_lft forever

# Backup node state
[root@oldboy keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a3:b8:e2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.179.102/24 brd 192.168.179.255 scope global eth0
    inet6 fe80::20c:29ff:fea3:b8e2/64 scope link
       valid_lft forever preferred_lft forever
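One note before the failover test: as mentioned in the comments after step 1, the TCP_CHECK blocks can be replaced by an HTTP check that fetches a URL and verifies the returned status code. The following is only a sketch of what such a block could look like; the path, status code and timeouts are illustrative, and the exact options should be checked against the keepalived.conf(5) man page of the version in use.

real_server 192.168.179.134 80 {
    weight 1
    HTTP_GET {                       # HTTP health check instead of TCP_CHECK
        url {
            path /index.html         # page to request on the real server
            status_code 200          # expected HTTP status code
        }
        connect_timeout 8            # response timeout
        nb_get_retry 3               # number of retries
        delay_before_retry 3         # delay between retries
        connect_port 80              # port to check
    }
}

An HTTP check confirms the application actually serves pages, not just that the TCP port is open.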
4. Simulate a failure on the master node

[root@oldboy keepalived]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@oldboy keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@oldboy keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:a3:23 brd ff:ff:ff:ff:ff:ff
    inet 192.168.179.103/24 brd 192.168.179.255 scope global eth0
    inet6 fe80::20c:29ff:fea5:a323/64 scope link
       valid_lft forever preferred_lft forever

# Backup node state: it has taken over the VIP
[root@oldboy keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a3:b8:e2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.179.102/24 brd 192.168.179.255 scope global eth0
    inet 192.168.179.110/24 scope global secondary eth0
    inet6 fe80::20c:29ff:fea3:b8e2/64 scope link
       valid_lft forever preferred_lft forever

5. Watch the IPVS table on the director, then open another shell window: stop the web service on a web node and it is removed from the table automatically; start the web service again and it is added back automatically.

[root@data-1-1 blog]# watch ipvsadm -L -n
Every 2.0s: ipvsadm -L -n                                  Sat Sep  3 12:22:33 2016

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.179.110:80 rr persistent 50
  -> 192.168.179.136:80           Local   1      0          0

[root@data-1-2 log]# /application/nginx/sbin/nginx -s reload      # bring the web service back
[root@data-1-1 blog]# watch ipvsadm -L -n                          # check again
Every 2.0s: ipvsadm -L -n                                  Sat Sep  3 12:22:33 2016

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.179.110:80 rr persistent 50
  -> 192.168.179.136:80           Local   1      0          0
  -> 192.168.179.107:80           Route   1      0          0

Note: while testing the steps above, the following error appeared and the commands below also failed at first. `df -h` showed the disk was full; after clearing some logs to free space, the same commands worked fine.
nginx: [error] invalid PID number "" in "/app/logs/nginx.pid"
[root@data-1-2 log]#
[root@data-1-2 log]# /application/nginx/sbin/nginx -c /application/nginx/conf/nginx.conf
[root@data-1-2 log]# /application/nginx/sbin/nginx -s reload

# Commonly used ipvsadm command combinations in production
ipvsadm -Ln --stats
ipvsadm -Lnc
ipvsadm -Ln --thresholds
ipvsadm -Ln --timeout

# nginx + keepalived highly available web cluster
keepalived + nginx   192.168.179.103   master, VIP .110
keepalived + nginx   192.168.179.136   backup, VIP .110
web                  192.168.179.134
web                  192.168.179.135

First set up the base environment. On the master, install keepalived as described above; the nginx installation steps are omitted here.

# Master keepalived configuration file (single instance, as above)
! Configuration File for keepalived

global_defs {
    notification_email {
        493939840@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.179.2
    smtp_connect_timeout 30
    router_id LVS_7
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 55
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.179.110/24
    }
}

# nginx configuration file
[root@oldboy conf]# cat nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream backend {
        server 192.168.179.134:80 weight=5;
        server 192.168.179.135:80 weight=5;
    }
    server {
        listen       80;
        server_name  www.etiantian.org;
        index index.html index.htm;
        location / {
            proxy_pass http://backend;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

# Backup node keepalived configuration file
! Configuration File for keepalived

global_defs {
    notification_email {
        49000448@qq.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.179.2
    smtp_connect_timeout 30
    router_id LVS_2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.179.110
    }
}

# Backup node nginx configuration file
[root@oldboy conf]# vim /application/nginx/conf/nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream backend {
        server 192.168.179.134:80 weight=5;
        server 192.168.179.135:80 weight=5;
    }
    server {
        listen       80;
        server_name  www.etiantian.org;
        index index.html index.htm;
        location / {
            proxy_pass http://backend;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

# Restart the services on each node
/etc/init.d/keepalived restart
/application/nginx/sbin/nginx -s reload

# Test from another machine on the LAN; the two back-end pages alternate round-robin
[root@oldboy conf]# curl 192.168.179.110
www.etiantian.org
[root@oldboy conf]# curl 192.168.179.110
http://www.etiantian.org
[root@oldboy conf]# curl 192.168.179.110
www.etiantian.org
[root@oldboy conf]# curl 192.168.179.110
http://www.etiantian.org

# Master, VIP .110
[root@oldboy conf]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@oldboy conf]#

# Backup node
[root@oldboy conf]# curl 192.168.179.110
www.etiantian.org
[root@oldboy conf]# curl 192.168.179.110
http://www.etiantian.org
[root@oldboy conf]# curl 192.168.179.110
www.etiantian.org
[root@oldboy conf]# curl 192.168.179.110
http://www.etiantian.org
[root@oldboy conf]# curl 192.168.179.110
www.etiantian.org
[root@oldboy conf]# curl 192.168.179.110
http://www.etiantian.org

# Observed during testing: when keepalived on the master is stopped, taking that node's nginx proxy out of
# service with it, traffic switches to the backup nginx automatically. But if only the nginx proxy stops while
# keepalived keeps running, there is no automatic failover to the backup (a vrrp_script-based fix is sketched below).
# If the nginx layer itself needs higher availability and scale, an LVS IP load-balancing layer is still needed in front of it.
# Production scenarios: for very high concurrency with simple layer-4 forwarding, use LVS + keepalived; for high
# concurrency that also needs URL-based forwarding, use LVS + nginx; for moderate concurrency, nginx or haproxy
# alone is enough. Large companies typically use all three combinations.
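The behaviour noted above (no failover when only the nginx proxy dies) can be handled either with an external watchdog like the earlier check_web.sh, or with keepalived's built-in vrrp_script / track_script mechanism. The following is an illustrative sketch of the latter for the master's configuration; the check command, interval and weight are assumptions, and the feature should be verified against the keepalived version in use before relying on it.

vrrp_script chk_nginx {
    script "killall -0 nginx"        # exits non-zero when no nginx process exists
    interval 2                       # run the check every 2 seconds
    weight -60                       # lower this node's priority by 60 while the check fails
}

vrrp_instance VI_1 {
    # ... existing MASTER settings as above ...
    track_script {
        chk_nginx
    }
}

With priority 150 on the master and 100 on the backup, a drop of 60 points pushes the master below the backup, so the VIP moves when nginx dies even though keepalived itself is still running; when nginx comes back, the priority is restored and the VIP returns.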
Original article: https://www.cnblogs.com/w787815/p/9535791.html