Preface:
In the previous post, "lvs-dr模型负载均衡高可用Discuz", we built this with LVS. LVS is rather heavyweight, though, and a poor fit for small shops, so this time we implement the same thing with nginx. I hope readers find it useful.
In this experiment nginx replaces LVS as the front-end scheduler, keepalived provides dual-master high availability for nginx, and nginx reverse-proxies to the back end. The RS (real servers) use httpd + PHP to handle PHP page requests, with PHP integrated into httpd as a module. Session persistence and the separation of static and dynamic content are not covered here; they will be addressed in the next post, since cramming too much into one article would hurt its quality.
Planning:
Two nginx servers act as the front-end schedulers, with keepalived providing dual-master high availability for nginx. Behind them sit two RS servers, and at the back one shared-storage host provides MariaDB and NFS. Since scheduling is no longer done by LVS but by nginx reverse proxying, the two RS servers do not need the VIP configured on a loopback alias.
IP addresses of each host
Host VS1
eno16777736 10.0.0.201/8 (DIP)
gateway: 10.0.0.254
Host VS2
eno16777736 10.0.0.203/8(DIP)
gateway: 10.0.0.254
Host RS1
eno16777736 10.0.0.101/8 (RIP1)
gateway: 10.0.0.254
Host RS2
eno16777736 10.0.0.102/8 (RIP2)
gateway: 10.0.0.254
Host DB
eno16777736 10.0.0.202/8
gateway: 10.0.0.254
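Putting the plan and the addresses together, the topology looks roughly like this (the VIPs 10.0.0.111 and 10.0.0.112 come from the keepalived configuration later in this post):
                        clients
                           |
            VIP1 10.0.0.111    VIP2 10.0.0.112
                  |                  |
     Host VS1 (10.0.0.201)   Host VS2 (10.0.0.203)
       nginx + keepalived      nginx + keepalived
                  \                  /
                   +--- upstream ---+
                  /                  \
     Host RS1 (10.0.0.101)   Host RS2 (10.0.0.102)
          httpd + php             httpd + php
                  \                  /
              Host DB (10.0.0.202)
             mariadb + NFS (/nfshare)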
Time synchronization:
# ntpdate cn.pool.ntp.org
# hwclock --systohc
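To keep the clocks from drifting again, a periodic job can be added on every host; this is only a sketch, and the 30-minute interval is arbitrary:
# crontab -e
*/30 * * * * /usr/sbin/ntpdate cn.pool.ntp.org &> /dev/null && /usr/sbin/hwclock --systohc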
Install the software:
Host DB
Install the binary mariadb-5.5.46 package
For the detailed configuration, see: http://wscto.blog.51cto.com/11249394/1783131
Install NFS
# yum install -y nfs-utils
Host RS1
Install httpd and PHP (PHP as an Apache module, as described above), plus the MariaDB client and NFS utilities
# yum install -y httpd php php-mbstring php-mysql nfs-utils mariadb
Host RS2
Install httpd and PHP (PHP as an Apache module, as described above), plus the MariaDB client and NFS utilities
# yum install -y httpd php php-mbstring php-mysql nfs-utils mariadb
Host VS1
keepalived is installed for high availability, nginx handles the reverse proxying, and nfs-utils is needed for the shared storage
# yum install -y keepalived nfs-utils
# wget http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.8.1-1.el6.ngx.x86_64.rpm
# rpm -ivh nginx-1.8.1-1.el6.ngx.x86_64.rpm
Note: the nginx package shipped with CentOS 6 is quite old, which is why it is downloaded from nginx.org here; on CentOS 7 this is unnecessary.
Host VS2
keepalived is installed for high availability, nginx handles the reverse proxying, and nfs-utils is needed for the shared storage
# yum install -y keepalived nfs-utils
# wget http://nginx.org/packages/centos/6/x86_64/RPMS/nginx-1.8.1-1.el6.ngx.x86_64.rpm
# rpm -ivh nginx-1.8.1-1.el6.ngx.x86_64.rpm
Host DB
After the security initialization, create the discuz database and the discuz user, and grant it the privileges to operate the database remotely
# mysql_secure_installation
# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.46-MariaDB-log MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database discuz;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> create user 'discuz'@'localhost' identified by 'magedu';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on discuz.* to 'discuz'@'%' identified by 'magedu';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
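Optionally, confirm that the remote grant is in place before moving on; this extra verification step is not part of the original procedure, and it should simply list ALL PRIVILEGES on discuz.* for 'discuz'@'%':
MariaDB [(none)]> show grants for 'discuz'@'%';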
Configure NFS
# mkdir /nfshare/
# ls -dl /nfshare/
drwxr-xr-x 2 root root 6 May  9 17:01 /nfshare/
# echo "/nfshare/ 10.0.0.101(rw,no_root_squash,async) 10.0.0.102(rw,no_root_squash,async)" > /etc/exports
# systemctl start rpcbind      <-- on CentOS 6 use: /etc/init.d/rpcbind start
Starting rpcbind:                                          [  OK  ]
# systemctl start nfs          <-- on CentOS 6 use: /etc/init.d/nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
# systemctl enable rpcbind     <-- on CentOS 6: chkconfig rpcbind on
# systemctl enable nfs         <-- on CentOS 6: chkconfig nfs on
# chkconfig rpcbind --list
rpcbind        0:off  1:off  2:on  3:on  4:on  5:on  6:off
# chkconfig nfs --list
nfs            0:off  1:off  2:on  3:on  4:on  5:on  6:off
# showmount -e 127.0.0.1
Export list for 127.0.0.1:
/nfshare 10.0.0.102,10.0.0.101
If anything goes wrong in this step, see the previous post on LVS-DR load balancing for Discuz.
Unpack the Discuz package into the /nfshare/ directory
# mkdir /nfshare/discuz
# unzip /Discuz_X3.2_SC_UTF8.zip -d /nfshare/discuz/
# ls /nfshare/discuz/
readme  upload  utility
Host RS1
Test the connection to MariaDB on Host DB
# mysql -h 10.0.0.202 -u discuz -p
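If the login succeeds, a one-off query run from RS1 also confirms that the discuz database is reachable; the password is the one set on Host DB above:
# mysql -h 10.0.0.202 -u discuz -p -e 'show databases;'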
Mount the NFS share exported by Host DB, and change its owner and group to apache
# showmount -e 10.0.0.202
Export list for 10.0.0.202:
/nfshare 10.0.0.102,10.0.0.101
# mkdir /htdocs
# ls -ld /htdocs/
drwxr-xr-x 2 root root 6 May  9 17:05 /htdocs/
# mount -t nfs 10.0.0.202:/nfshare /htdocs
# ls /htdocs/
discuz
# chown -R apache:apache /htdocs/discuz/
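If the mount should survive a reboot, an /etc/fstab entry can be added as well; a minimal sketch:
# echo "10.0.0.202:/nfshare  /htdocs  nfs  defaults,_netdev  0 0" >> /etc/fstab
# mount -a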
Configure httpd, check that there are no syntax errors, and start httpd
# vim /etc/httpd/conf.d/vhosts.conf
<VirtualHost *:80>
    ServerName admin.ws.com
    DocumentRoot "/htdocs/discuz/upload/"
    <Directory "/htdocs/discuz/upload/">
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>
</VirtualHost>
# httpd -t
Syntax OK
# systemctl start httpd.service
# ss -tnl | grep 80
LISTEN     0      128        :::80                     :::*
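Before running the Discuz installer it is worth confirming that httpd is actually executing PHP. A throwaway test page is enough; the file name info.php is arbitrary, and the page should be removed afterwards:
# echo '<?php phpinfo(); ?>' > /htdocs/discuz/upload/info.php
# curl -s http://10.0.0.101/info.php | grep -o 'PHP Version [0-9.]*' | head -1
# rm -f /htdocs/discuz/upload/info.php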
From a client, visit http://10.0.0.101/install/ to run the Discuz installer
Note: be sure to complete the installation at this stage; waiting until after the reverse proxy is built is too late, as static content may then not be recognized correctly.
Host RS2
The configuration is identical to Host RS1 and is not repeated here
Host VS1
Mount the NFS share exported by Host DB onto Host VS1 (note that the export on Host DB has to allow the VS hosts as well; the showmount output below reflects an export widened to 10.0.0.0/8)
# showmount -e 10.0.0.202
Export list for 10.0.0.202:
/nfshare 10.0.0.0/8
# mkdir /htdocs
# mount -t nfs 10.0.0.202:/nfshare /htdocs
Use the upstream module to reverse-proxy to the back-end RS hosts, and configure a sorry server so that users get a friendly message when the back-end RS hosts are down
# vim /etc/nginx/conf.d/default.conf
upstream backend {
    server 10.0.0.101 weight=1 max_fails=3 fail_timeout=3;
    server 10.0.0.102 weight=1 max_fails=3 fail_timeout=3;
    server 10.0.0.111:8080 backup;    # backup marks spare servers used only when all RS are unavailable
    server 10.0.0.112:8080 backup;
}
server {
    listen       80;
    server_name  localhost;

    # When nginx forwards a PHP request to a back-end RS, the Host header becomes "backend".
    # The PHP code running on the RS then builds URLs such as http://backend/xxxx.gif for the
    # images, stylesheets and other static resources that fill the page, and those URLs
    # obviously cannot be fetched. So whenever the Host seen here is "backend", rewrite it
    # to 127.0.0.1 before proxying.
    set $my_host $http_host;
    if ($http_host = "backend") {
        set $my_host "127.0.0.1";
    }

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $my_host;
    }
}
server {
    listen       8080;
    server_name  localhost;
    charset      utf-8;
    root         /usr/share/nginx/html;    # html directory of the packaged nginx
    index        index.html index.htm;

    # The sorry server only serves the home page; requests for anything else are sent to
    # the home page as well.
    location ~ .* {
        error_page 404 /;
    }
}
Generate the sorry-server page, check the configuration syntax, and start nginx
# echo "服务维护中,请稍后访问." > /usr/share/nginx/html/index.html
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# nginx
# ss -tnl | grep 80
LISTEN     0      128        *:8080                    *:*
LISTEN     0      128        *:80                      *:*
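At this point the proxy path can be checked locally, before keepalived is involved; the curl options below just print the HTTP status code returned by the back end (port 80) and by the sorry server (port 8080). Enabling nginx at boot is also worth doing:
# curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1/
# curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080/
# systemctl enable nginx        <-- on CentOS 6: chkconfig nginx on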
Modify the keepalived configuration file. Unlike with LVS, there is no need to define virtual_server and real_server sections or a scheduling algorithm here; all of that work is done by nginx.
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost             # mail the local root user
    }
    notification_email_from kaadmin@twoyang.com
    smtp_server 127.0.0.1          # use this host as the SMTP server
    smtp_connect_timeout 30
    router_id 8a028eb8             # identifies this host; the hostname can be used
    vrrp_mcast_group4 224.0.71.18  # multicast address for the heartbeats; give each cluster its own group
}

# nginx process check: if the process is gone, lower this node's priority so that the peer
# becomes the higher-priority node and the VIP fails over to it.
vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 2
    weight -8
}

vrrp_instance VI_1 {
    state MASTER                   # Host VS1 is the MASTER of instance VI_1
    interface eno16777736
    virtual_router_id 71           # 0-255, distinguishes VRRP instances, so the two instances must differ
    priority 100                   # the MASTER gets the higher priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass uWjblY61         # simple password authentication, any 8-character string
    }
    virtual_ipaddress {
        10.0.0.111/8 brd 10.0.0.111 dev eno16777736 label eno16777736:0    # VIP1
    }
    # call the nginx check script here
    track_script {
        chk_nginx
    }
    # Disable preemption. With preemption, a higher-priority node that comes online immediately
    # takes over as MASTER, regardless of whether another node is currently serving users.
    #nopreempt
    # With preemption enabled, delay the takeover for a while.
    #preempt_delay 300
}

vrrp_instance VI_2 {
    state BACKUP                   # Host VS1 is the BACKUP of instance VI_2
    interface eno16777736
    virtual_router_id 171
    priority 95                    # the BACKUP gets the lower priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass uWjblY62
    }
    virtual_ipaddress {
        10.0.0.112/8 brd 10.0.0.112 dev eno16777736 label eno16777736:1    # VIP2
    }
    # call the nginx check script here
    track_script {
        chk_nginx
    }
}
Start the keepalived service and check the IP addresses and the logs. Host VS1 is the BACKUP of instance VI_2 with the lower priority, but since Host VS2, the MASTER of VI_2, has not started keepalived yet, Host VS1 also picks up VI_2's VIP 10.0.0.112.
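A minimal way to verify this on Host VS1 (the log location assumes the default rsyslog setup, where keepalived logs to /var/log/messages):
# systemctl start keepalived
# ip addr show eno16777736        <-- both 10.0.0.111 and 10.0.0.112 should be listed at this point
# grep -i vrrp /var/log/messages | tail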
Host VS2
Mount the NFS share exported by Host DB
# showmount -e 10.0.0.202
Export list for 10.0.0.202:
/nfshare 10.0.0.0/8
# mkdir /htdocs
# mount -t nfs 10.0.0.202:/nfshare /htdocs
Use the upstream module to reverse-proxy to the back-end RS hosts, and configure a sorry server so that users get a friendly message when the back-end RS hosts are down
# vim /etc/nginx/conf.d/default.conf
upstream backend {
    server 10.0.0.101 weight=1 max_fails=3 fail_timeout=3;
    server 10.0.0.102 weight=1 max_fails=3 fail_timeout=3;
    server 10.0.0.111:8080 backup;    # backup marks spare servers used only when all RS are unavailable
    server 10.0.0.112:8080 backup;
}
server {
    listen       80;
    server_name  localhost;

    # When nginx forwards a PHP request to a back-end RS, the Host header becomes "backend".
    # The PHP code running on the RS then builds URLs such as http://backend/xxxx.gif for the
    # images, stylesheets and other static resources that fill the page, and those URLs
    # obviously cannot be fetched. So whenever the Host seen here is "backend", rewrite it
    # to 127.0.0.1 before proxying.
    set $my_host $http_host;
    if ($http_host = "backend") {
        set $my_host "127.0.0.1";
    }

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $my_host;
    }
}
server {
    listen       8080;
    server_name  localhost;
    charset      utf-8;
    root         /usr/share/nginx/html;    # html directory of the packaged nginx
    index        index.html index.htm;

    # The sorry server only serves the home page; requests for anything else are sent to
    # the home page as well.
    location ~ .* {
        error_page 404 /;
    }
}
Generate the sorry-server page, check the configuration syntax, and start nginx
# echo "服务维护中,请稍后访问." > /usr/share/nginx/html/index.html
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# nginx
# ss -tnl | grep 80
LISTEN     0      128        *:8080                    *:*
LISTEN     0      128        *:80                      *:*
Modify the keepalived configuration file on Host VS2. It needs no major changes; only the MASTER/BACKUP roles and the priorities of the two instances are swapped relative to Host VS1.
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost             # mail the local root user
    }
    notification_email_from kaadmin@twoyang.com
    smtp_server 127.0.0.1          # use this host as the SMTP server
    smtp_connect_timeout 30
    router_id 8a028eb8             # identifies this host; use something unique per node, e.g. the hostname
    vrrp_mcast_group4 224.0.71.18  # multicast address for the heartbeats; give each cluster its own group
}

# nginx process check: if the process is gone, lower this node's priority so that the peer
# becomes the higher-priority node and the VIP fails over to it.
vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 2
    weight -8
}

vrrp_instance VI_1 {
    state BACKUP                   # Host VS2 is the BACKUP of instance VI_1
    interface eno16777736
    virtual_router_id 71           # 0-255, distinguishes VRRP instances, so the two instances must differ
    priority 98                    # the BACKUP gets the lower priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass uWjblY61         # simple password authentication, any 8-character string
    }
    virtual_ipaddress {
        10.0.0.111/8 brd 10.0.0.111 dev eno16777736 label eno16777736:0    # VIP1
    }
    # call the nginx check script here
    track_script {
        chk_nginx
    }
    # Disable preemption. With preemption, a higher-priority node that comes online immediately
    # takes over as MASTER, regardless of whether another node is currently serving users.
    #nopreempt
    # With preemption enabled, delay the takeover for a while.
    #preempt_delay 300
}

vrrp_instance VI_2 {
    state MASTER                   # Host VS2 is the MASTER of instance VI_2
    interface eno16777736
    virtual_router_id 171
    priority 100                   # the MASTER gets the higher priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass uWjblY62
    }
    virtual_ipaddress {
        10.0.0.112/8 brd 10.0.0.112 dev eno16777736 label eno16777736:1    # VIP2
    }
    # call the nginx check script here
    track_script {
        chk_nginx
    }
}
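A quick sanity check of the numbers in the two files: if nginx dies on Host VS1, chk_nginx fails and its effective VI_1 priority becomes 100 - 8 = 92, below Host VS2's 98, so VIP1 (10.0.0.111) moves to Host VS2; its VI_2 priority drops from 95 to 87, still below Host VS2's 100. The script weight therefore has to be larger than the priority gap between the two nodes (here 2 for VI_1 and 5 for VI_2), otherwise a failed check would never trigger a failover.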
Start the keepalived service and check the IP addresses and the logs. Host VS2 is the MASTER of instance VI_2 with the higher priority, so with preemption enabled it immediately takes over VI_2's VIP 10.0.0.112.
Testing: visit http://10.0.0.111/ and http://10.0.0.112/ in turn.
Stop the keepalived and nginx services on Host VS1 and Host VS2 alternately; user access is not affected in either case.
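To watch the failover from a client while stopping the services, a simple loop is enough; this is only a sketch, repeat it for 10.0.0.112:
# while true; do curl -s -o /dev/null -w "%{http_code}\n" http://10.0.0.111/; sleep 1; done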
Because nginx's health_check module for actively probing the health of back-end RS hosts has been commercialized, the only option left to us is Tengine, developed by the Taobao engineering team. So to keep the site working after a single RS goes down, and to serve the sorry-server page when all RS are down, we need Tengine; the next post will cover it in detail.
Load balancing and high availability for Discuz with keepalived and nginx
Original article: http://wscto.blog.51cto.com/11249394/1784291