
Example: LVS + Keepalived for a Highly Available LVS Setup

Date: 2014-09-19 19:35:36

Tags: model


This walkthrough builds a high-availability cluster on the LVS-DR model:

Lab environment:
    vm1 LVS-DR1:
             eth0 172.16.3.2/16
             VIP: eth0:0 172.16.3.88
    vm2 LVS-DR2:
            eth0 172.16.3.3/16
    vm3 Server-web1
            RS1: eth0 172.16.3.1/16
            VIP: lo:0 172.16.3.88/32
    vm4 Server-web2
            RS2: eth0 172.16.3.10/16
            VIP: lo:0 172.16.3.88/32

  Test machine (the physical host): IP 172.16.3.100
1. vm3 Server-web1 configuration:
      # ifconfig eth0 172.16.3.1/16 up                             # RIP1
      # echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
      # echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore    (there is only one NIC here; the setting could also be applied to lo)
      # echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
      # echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
      # ifconfig lo:0 172.16.3.88 netmask 255.255.255.255 broadcast 172.16.3.88 up    # VIP
      # route add -host 172.16.3.88 dev lo:0
    web1's test page:
      # yum install nginx
      # echo "172.16.3.1" > /usr/share/nginx/html/index.html
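The steps above are the standard DR-mode real-server recipe: arp_ignore=1 keeps the RS from answering ARP queries for the VIP, arp_announce=2 keeps it from advertising the VIP as a source address, and the /32 on lo:0 lets the RS accept traffic addressed to the VIP. They can be bundled into a small helper script (a sketch for this lab's addresses, not part of the original; with no argument it only prints the commands, pass `apply` to run them as root):

```shell
#!/bin/sh
# DR-mode real-server setup (hypothetical helper for this lab).
VIP=172.16.3.88
DEV=lo:0

run() {
    mode=$1; shift
    if [ "$mode" = "apply" ]; then "$@"; else echo "$@"; fi
}

MODE=${1:-dryrun}
# 1 = do not answer ARP requests for IPs not on the receiving interface
run "$MODE" sh -c 'echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore'
run "$MODE" sh -c 'echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore'
# 2 = always pick the best local source address for ARP announcements (hides the VIP)
run "$MODE" sh -c 'echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce'
run "$MODE" sh -c 'echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce'
# bind the VIP to loopback with a host mask and route it out via lo:0
run "$MODE" ifconfig "$DEV" "$VIP" netmask 255.255.255.255 broadcast "$VIP" up
run "$MODE" route add -host "$VIP" dev "$DEV"
```

The same script, with RIP-independent content, works unchanged on vm4.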
2. vm4 Server-web2 configuration:
      # ifconfig eth0 172.16.3.10/16 up                            # RIP2
      # echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
      # echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore    (there is only one NIC here; the setting could also be applied to lo)
      # echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
      # echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
      # ifconfig lo:0 172.16.3.88 netmask 255.255.255.255 broadcast 172.16.3.88 up    # VIP
      # route add -host 172.16.3.88 dev lo:0
    web2's test page:
      # yum install nginx
      # echo "172.16.3.10" > /usr/share/nginx/html/index.html
3. vm1 LVS-DR1 configuration (vm1 acts as the MASTER):
      # yum install keepalived
      # vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from ning@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
#   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ning
    }
    virtual_ipaddress {
    172.16.3.88
    }
}

virtual_server 172.16.3.88 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
#    persistence_timeout 50
    protocol TCP
   sorry_server 127.0.0.1 80
    real_server 172.16.3.1 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 172.16.3.10 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
Because sorry_server is configured, nginx is installed on the DR node as well (it makes testing easier):
  # yum install nginx
  # echo "172.16.3.2" > /usr/share/nginx/html/index.html
4. vm2 LVS-DR2 configuration (vm2 acts as the BACKUP):
      # yum install keepalived
      # vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP               # the main difference: BACKUP, not MASTER
    interface eth0
    virtual_router_id 88
    priority 99                # lower priority than the MASTER (100)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ning
    }
    virtual_ipaddress {
    172.16.3.88
    }
}

virtual_server 172.16.3.88 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
#    persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1 80  # served when both web servers are down; it points at this host here, but another web server could be specified
    real_server 172.16.3.1 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 172.16.3.10 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
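The only differences from vm1's file are `state BACKUP` and the lower `priority`. VRRP election is essentially "highest priority wins" among nodes sharing a virtual_router_id; a sketch of the rule using this lab's priorities (not keepalived code, just an illustration):

```shell
# VRRP election rule (sketch): among live nodes advertising the same
# virtual_router_id, the one with the highest priority becomes MASTER.
elect_master() {
    # args: "name:priority" pairs for the live nodes; prints the winner
    best_name=""; best_prio=-1
    for pair in "$@"; do
        name=${pair%%:*}; prio=${pair##*:}
        if [ "$prio" -gt "$best_prio" ]; then
            best_name=$name; best_prio=$prio
        fi
    done
    echo "$best_name"
}

elect_master vm1:100 vm2:99    # vm1 holds the VIP while alive
elect_master vm2:99            # vm1 down: vm2 takes over
```

This is why stopping keepalived on vm1 moves 172.16.3.88 to vm2, and starting it again (with preemption, the default) moves the VIP back.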

Because sorry_server is configured, nginx is installed on this DR node as well (it makes testing easier):
  # yum install nginx
  # echo "172.16.3.3" > /usr/share/nginx/html/index.html
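The sorry_server only receives traffic once every real_server has failed its health check; a sketch of that selection logic (an illustration, not keepalived's actual code):

```shell
# Backend selection with a sorry_server (sketch): use the healthy real
# servers if any exist, otherwise fall back to the sorry_server.
pick_backends() {
    # args: sorry_server address, then the currently healthy real servers
    sorry=$1; shift
    if [ $# -gt 0 ]; then echo "$@"; else echo "$sorry"; fi
}

pick_backends 127.0.0.1:80 172.16.3.1:80 172.16.3.10:80   # both RSes healthy
pick_backends 127.0.0.1:80                                # all RSes down: sorry page
```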

5. Testing
http://172.16.3.88
  (1) Test whether keepalived failover works (not covered in detail here).
        Useful commands:
        # service keepalived stop|start
        # ip addr show
  (2) Test round-robin scheduling:
      simply browse to http://172.16.3.88 from the physical test machine
  (3) Test the sorry_server:
      stop nginx on all web servers (vm3 and vm4):
        # service nginx stop
      then browse to http://172.16.3.88 again
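With `lb_algo rr`, successive requests alternate between the two test pages, so repeated fetches of the VIP print 172.16.3.1 and 172.16.3.10 in turn (on the test machine, `for i in 1 2 3 4; do curl -s http://172.16.3.88; done` shows this). The scheduler's behaviour, sketched:

```shell
# Round-robin scheduling (sketch): request n goes to server n mod N.
rr_pick() {
    # args: 0-based request number, then the server list
    n=$1; shift
    i=$((n % $#)); shift "$i"
    echo "$1"
}

for n in 0 1 2 3; do rr_pick "$n" 172.16.3.1 172.16.3.10; done
```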
=====================================================================================
Dual-master example:
Building on the setup above, here are the configuration files for an LVS-DR dual-master model.
(The VIP1 address and the ARP-suppression kernel settings are already in place from above, so they are not repeated here.)
1. vm3: add VIP2
    # ifconfig lo:1 172.16.3.188 netmask 255.255.255.255 broadcast 172.16.3.188 up
    # route add -host 172.16.3.188 dev lo:1
2. vm4: add VIP2
    # ifconfig lo:1 172.16.3.188 netmask 255.255.255.255 broadcast 172.16.3.188 up
    # route add -host 172.16.3.188 dev lo:1
3. vm1 LVS-DR1 configuration:
    # cat keepalived.conf
    ! Configuration File for keepalived

    global_defs {
       notification_email {
        root@localhost
       }
       notification_email_from ning@qq.com
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
    }
    vrrp_instance VI_1 {          # vm1 is MASTER for this instance
        state MASTER
        interface eth0
        virtual_router_id 88
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass ning
        }
        virtual_ipaddress {
        172.16.3.88                            # VIP1
        }
    }
    vrrp_instance VI_2 {        # vm1 is BACKUP here; vm2 is MASTER for VIP2
        state BACKUP                         # note
        interface eth0
        virtual_router_id 90                 # note: a different VRID
        priority 99                          # note: lower priority
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass ning1
        }
        virtual_ipaddress {
        172.16.3.188                           # VIP2
        }
    }

    virtual_server 172.16.3.88 80 {             # the VIP1 web service
        delay_loop 6
        lb_algo rr
        lb_kind DR
        nat_mask 255.255.0.0
    #    persistence_timeout 50
        protocol TCP
       sorry_server 127.0.0.1 80
        real_server 172.16.3.1 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }

        real_server 172.16.3.10 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }

    virtual_server 172.16.3.188 80 {                    # the VIP2 web service
        delay_loop 6
        lb_algo rr
        lb_kind DR
        nat_mask 255.255.0.0
    #    persistence_timeout 50
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 172.16.3.1 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }

        real_server 172.16.3.10 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }

4. vm2 LVS-DR2 configuration:
    # cat keepalived.conf
    ! Configuration File for keepalived

    global_defs {
       notification_email {
        root@localhost
       }
       notification_email_from ning@qq.com
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
    }
    vrrp_instance VI_1 {
        state BACKUP                      # note: BACKUP for VIP1
        interface eth0
        virtual_router_id 88              # note: same VRID as vm1's VI_1
        priority 99                       # note: lower than vm1's 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass ning                # note: must match vm1's VI_1
        }
        virtual_ipaddress {
        172.16.3.88                       # VIP1
        }
    }
    vrrp_instance VI_2 {
        state MASTER                 # note: MASTER for VIP2
        interface eth0
        virtual_router_id 90         # note: matches vm1's VI_2
        priority 100                 # note: higher than vm1's 99
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass ning1          # note: must match vm1's VI_2
        }
        virtual_ipaddress {
        172.16.3.188                 # VIP2
        }
    }

    virtual_server 172.16.3.88 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        nat_mask 255.255.0.0
    #    persistence_timeout 50
        protocol TCP
       sorry_server 127.0.0.1 80
        real_server 172.16.3.1 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }

        real_server 172.16.3.10 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }

    virtual_server 172.16.3.188 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        nat_mask 255.255.0.0
    #    persistence_timeout 50
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 172.16.3.1 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }

        real_server 172.16.3.10 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 2
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }
5. Testing:
   (1) Does the dual-master model come up? (start keepalived on vm1 only for now)
      # service keepalived start
      # ip addr show
      3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:d7:f7:9c brd ff:ff:ff:ff:ff:ff
        inet 172.16.3.2/16 brd 172.16.255.255 scope global eth0
        inet 172.16.3.88/32 scope global eth0                -----------VIP1
        inet 172.16.3.188/32 scope global eth0                ------------VIP2
        inet6 fe80::20c:29ff:fed7:f79c/64 scope link
       valid_lft forever preferred_lft forever
    (2) Now start keepalived on vm2 as well
        # service keepalived start
        Check on vm1:
        # ip addr show
        3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:d7:f7:9c brd ff:ff:ff:ff:ff:ff
        inet 172.16.3.2/16 brd 172.16.255.255 scope global eth0
        inet 172.16.3.88/32 scope global eth0            ---------------VIP1
        inet6 fe80::20c:29ff:fed7:f79c/64 scope link
       valid_lft forever preferred_lft forever
       Check on vm2:
       # ip addr show
       2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:0b:35:6a brd ff:ff:ff:ff:ff:ff
        inet 172.16.3.3/16 brd 172.16.255.255 scope global eth0
        inet 172.16.3.188/32 scope global eth0          -----------------VIP2
        inet6 fe80::20c:29ff:fe0b:356a/64 scope link
       valid_lft forever preferred_lft forever
    (3) Test the pages
        From the test machine, both VIPs round-robin across the web servers:
http://172.16.3.88
http://172.16.3.188
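In the dual-master layout each node is MASTER for one VRRP instance and BACKUP for the other: in steady state vm1 answers on 172.16.3.88 and vm2 on 172.16.3.188, and if either node dies the survivor wins both elections and serves both VIPs. The same "highest priority wins" rule, applied per virtual_router_id (a sketch, not keepalived code):

```shell
# Per-VRID election in the dual-master setup (sketch).
owner_of() {
    # args: VRID, then "node:prio@vrid" tuples for the live nodes
    vrid=$1; shift
    best=""; best_prio=-1
    for t in "$@"; do
        case $t in *"@$vrid") ;; *) continue ;; esac
        node=${t%%:*}; rest=${t#*:}; prio=${rest%%@*}
        [ "$prio" -gt "$best_prio" ] && { best=$node; best_prio=$prio; }
    done
    echo "$best"
}

LIVE="vm1:100@88 vm1:99@90 vm2:99@88 vm2:100@90"
owner_of 88 $LIVE                   # vm1 owns VIP1
owner_of 90 $LIVE                   # vm2 owns VIP2
owner_of 88 vm2:99@88 vm2:100@90    # vm1 down: vm2 takes VIP1 too
```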

Original source: http://wodemeng.blog.51cto.com/1384120/1555244