Lab requirements: at least three virtual machines on the same 172.18/16 network segment. 172.18.10.10 and 172.18.10.11 serve as the upstream (real) servers, and 172.18.200.100 acts as the DR (director / reverse proxy).
The DR is configured with two NICs; the second NIC's address must not be inside the 172.18/16 segment.
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:23:f3:8d brd ff:ff:ff:ff:ff:ff
inet 172.18.200.100/16 brd 172.18.255.255 scope global eth1
inet6 fe80::20c:29ff:fe23:f38d/64 scope link
valid_lft forever preferred_lft forever
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:23:f3:97 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.100/24 brd 192.168.10.255 scope global eth2
inet6 fe80::20c:29ff:fe23:f397/64 scope link
valid_lft forever preferred_lft forever
1. Install php and httpd with yum on 10/11
[root@localhost ~]# yum install php httpd
2. Configure time synchronization on the DR: install the chrony package first, then edit its configuration
[root@localhost ~]# yum install -y chrony
[root@localhost ~]# vim /etc/chrony.conf
# Allow NTP client access from local network.
#allow 192.168/16
allow 172.18/16
# Serve time even if not synchronized to any NTP server.
local stratum 10
3. Start the chronyd service
[root@localhost ~]# service chronyd start
Starting chronyd: [ OK ]
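To confirm the server side is working, chronyc (chrony's standard control utility) can be queried; these checks are not part of the original transcript and their output varies by host:
[root@localhost ~]# chronyc tracking    # shows the local stratum and sync status
[root@localhost ~]# chronyc clients     # lists NTP clients once the VS hosts begin syncing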
4. Install the time-sync daemon on the clients so that they resynchronize periodically.
Install chrony on each of the two VS hosts (172.18.10.10/11):
[root@localhost ~]# yum install -y chrony
Edit the configuration file; both VS hosts use the same settings:
[root@localhost ~]# vim /etc/chrony.conf
#server 0.rhel.pool.ntp.org iburst
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst
server 172.18.200.100 iburst
Start the chronyd service, again identically on both VS hosts:
[root@localhost ~]# service chronyd start
Starting chronyd: [ OK ]
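To verify a client sees the new server, chronyc sources can be run on each VS (an added check, not in the original transcript); 172.18.200.100 should appear as the selected source:
[root@localhost ~]# chronyc sources -v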
5. Force an immediate sync against the server
[root@localhost ~]# ntpdate 172.18.200.100
10 May 02:13:57 ntpdate[3324]: step time server 172.18.200.100 offset -2587.052960 sec
[root@localhost ~]# date
Wed May 10 02:31:05 CST 2017
[root@localhost ~]# date
Wed May 10 02:31:08 CST 2017
6. Provide test pages on both VS hosts
Here a simple for loop quickly generates 20 test pages:
[root@localhost ~]# for i in {1..20}; do echo "Test Page $i on UpStream Server 1 (172.18.10.10)" > /var/www/html/test$i.html;done
[root@localhost html]# ls /var/www/html/
test10.html test12.html test14.html test16.html test18.html test1.html test2.html test4.html test6.html test8.html
test11.html test13.html test15.html test17.html test19.html test20.html test3.html test5.html test7.html test9.html
Do the same on the other host, changing the label in the page text to identify it; a sketch follows below.
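On 172.18.10.11 the loop is the same except for the server label in the page body (this matches the "UpStream Server 2" responses seen in the tests later):
[root@localhost ~]# for i in {1..20}; do echo "Test Page $i on UpStream Server 2 (172.18.10.11)" > /var/www/html/test$i.html; done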
7. Start the httpd service (on 10/11)
[root@localhost html]# service httpd start
8. Test from the DR with curl to verify the pages are reachable
[root@localhost ~]# curl http://172.18.10.10/test1.html
Test Page 1 on UpStream Server 1 (172.18.10.10)
[root@localhost ~]# curl http://172.18.10.11/test1.html
Test Page 1 on UpStream Server 2 (172.18.10.11)
9. Download and install nginx on the DR. Because the host runs CentOS 6.8, nginx releases newer than 1.10 cannot be installed here.
lftp 172.18.0.1:/pub/Sources/6.x86_64/nginx> ls
-rw-r--r-- 1 500 500 714233 Jul 25 2013 nginx-1.0.15-5.el6.src.rpm
-rwxr--r-- 1 500 500 319456 Apr 24 2014 nginx-1.4.7-1.el6.ngx.x86_64.rpm
-rw-r--r-- 1 0 0 344416 Sep 16 2014 nginx-1.6.2-1.el6.ngx.x86_64.rpm
lftp 172.18.0.1:/pub/Sources/6.x86_64/nginx> mget nginx-1.6.2-1.el6.ngx.x86_64.rpm
344416 bytes transferred
lftp 172.18.0.1:/pub/Sources/6.x86_64/nginx> bye
Install nginx:
[root@localhost ~]# yum install nginx-1.6.2-1.el6.ngx.x86_64.rpm
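Optionally, confirm the installed version; nginx -v prints the version string, which for the package above should be:
[root@localhost ~]# nginx -v
nginx version: nginx/1.6.2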
10. Configure nginx purely as a reverse proxy
[root@localhost ~]# cd /etc/nginx/conf.d/
[root@localhost conf.d]# ls
default.conf example_ssl.conf
[root@localhost conf.d]# cp default.conf default.conf.bak
[root@localhost conf.d]# ls
default.conf default.conf.bak example_ssl.conf
Since the upstream group must be defined first, start by editing the nginx.conf file:
[root@localhost nginx]# ls
conf.d fastcgi_params koi-utf koi-win mime.types nginx.conf scgi_params uwsgi_params win-utf
[root@localhost nginx]# vim nginx.conf
Define the upstream group in the http context, as follows:
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    upstream websrvs {
        server 172.18.10.10:80;
        server 172.18.10.11:80;
    }

    include /etc/nginx/conf.d/*.conf;
}
Then edit the default.conf file under conf.d:
[root@localhost conf.d]# vim default.conf
Add the proxy_pass reverse-proxy directive in the location context:
server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/log/host.access.log  main;

    location / {
        proxy_pass http://websrvs;
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}
With proxy_pass in place, the root and index directives in this location are effectively unused; every request is handed to the websrvs group. Check the configuration syntax with nginx -t.
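As an optional refinement not in the original configuration, the proxy can pass the client's real address and the original Host header to the backends using standard ngx_http_proxy_module directives, so the httpd access logs record real client IPs:
location / {
    proxy_pass http://websrvs;
    proxy_set_header Host $host;                                   # preserve the requested host name
    proxy_set_header X-Real-IP $remote_addr;                       # real client address
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # append the proxy chain
}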
11. Start the nginx service
[root@localhost conf.d]# nginx
Check whether port 80 is listening:
[root@localhost conf.d]# ss -tnl
State      Recv-Q Send-Q        Local Address:Port          Peer Address:Port
LISTEN     0      128                       *:80                       *:*
LISTEN     0      128                      :::22                      :::*
LISTEN     0      128                       *:22                       *:*
LISTEN     0      100                     ::1:25                      :::*
LISTEN     0      100               127.0.0.1:25                       *:*
12. From a client, use curl to request the page 10 times in a row and observe the result
[root@localhost ~]# for ((i=1;i<=10;i++));do curl http://172.18.200.100:80/test1.html; done
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Requests alternate evenly between the two servers: load balancing is working (this is nginx's default round-robin).
13. To give the backends different load weights, configure as follows
[root@localhost conf.d]# cd ..
[root@localhost nginx]# vim nginx.conf
#gzip on;
upstream websrvs {
    server 172.18.10.10:80 weight=2;
    server 172.18.10.11:80 weight=3;   # combined weight 2+3: the first server counts as 2 virtual servers, the second as 3
}
Add the weight parameters inside the upstream block.
Save, exit, and reload the service:
[root@localhost nginx]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@localhost nginx]# nginx -s reload
14. The client accesses the page 10 more times; with a 2:3 weight ratio, four responses come from server 1 and six from server 2:
[root@localhost ~]# for ((i=1;i<=10;i++));do curl http://172.18.200.100:80/test1.html; done
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
15. Stop httpd on one of the VS hosts (here 172.18.10.11) and test again
[root@localhost ~]# service httpd stop
Stopping httpd: [ OK ]
[root@localhost ~]# for ((i=1;i<=10;i++));do curl http://172.18.200.100:80/test1.html; done
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Start httpd on that VS again and retest:
[root@localhost ~]# service httpd start
[root@localhost ~]# for ((i=1;i<=10;i++));do curl http://172.18.200.100:80/test1.html; done
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
16. Define the maximum failure count (max_fails) and the failure timeout window (fail_timeout)
[root@localhost nginx]# vim nginx.conf
upstream websrvs {
    server 172.18.10.10:80 weight=2 max_fails=2 fail_timeout=2;
    server 172.18.10.11:80 weight=3 backup;   # backup: standby, receives traffic only when all primary servers are down
}
Save, exit, and reload the service.
Test from the client: because server 2 is now a backup, all requests go to server 1:
[root@localhost ~]# for ((i=1;i<=10;i++));do curl http://172.18.200.100:80/test1.html; done
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
After removing the backup flag and reloading, the load is shared again:
[root@localhost ~]# for ((i=1;i<=10;i++));do curl http://172.18.200.100:80/test1.html; done
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
This shows that nginx's default scheduling algorithm is weighted round-robin (wrr).
17. Implement ip_hash (source-IP hash) binding: whichever server a client reaches first, all subsequent requests from that client go to the same server.
vim nginx.conf
upstream websrvs {
    ip_hash;
    server 172.18.10.10:80 weight=2 max_fails=2 fail_timeout=2;
    server 172.18.10.11:80 weight=3;
}
[root@localhost ~]# for ((i=1;i<=10;i++));do curl http://172.18.200.100:80/test1.html; done
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 1 (172.18.10.10)
18. Implement least_conn (weighted least connections), comparable to LVS's wlc: the server with the fewest active connections is chosen, and weight is taken into account whenever the weights differ. With short-lived curl requests no connections linger, so the result below looks like weighted round-robin:
upstream websrvs {
    least_conn;
    server 172.18.10.10:80 weight=2 max_fails=2 fail_timeout=2;
    server 172.18.10.11:80 weight=3;
}
[root@localhost ~]# for ((i=1;i<=10;i++));do curl http://172.18.200.100:80/test1.html; done
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
Test Page 1 on UpStream Server 1 (172.18.10.10)
Test Page 1 on UpStream Server 2 (172.18.10.11)
19. Generic hash: hash key, where the key can be any combination of text and nginx variables (a URI, an address, and so on); whatever follows hash becomes the binding key. Note that the hash directive appeared in nginx 1.7.2, so the 1.6.2 package installed earlier would need to be upgraded for this step.
upstream websrvs {
    hash $request;
    server 172.18.10.10:80 weight=2 max_fails=2 fail_timeout=2;
    server 172.18.10.11:80 weight=3;
}
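Other keys work the same way. For example (these are standard nginx variables, shown here as illustrative alternatives rather than part of the original setup):
hash $request_uri;        # bind each requested URI to one server
hash $remote_addr;        # bind by client address, similar in effect to ip_hash
hash $cookie_jsessionid;  # bind by a session cookie value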
With $request as the key, whichever server first serves a given resource continues to serve it afterwards. Adding the consistent parameter switches to consistent hashing:
upstream websrvs {
    hash $request consistent;   # consistent (ketama) hashing: minimizes remapping when servers are added or removed
    server 172.18.10.10:80 weight=2 max_fails=2 fail_timeout=2;
    server 172.18.10.11:80 weight=3;
}
20. keepalive connections
Under high concurrency, opening a new upstream connection for every request puts pressure on the proxy's ephemeral ports. To avoid this, keepalive (long-lived) connections can be used on the backend side.
nginx's design is not one process per request: a single worker process (there are typically a few, e.g. 4) responds to many requests, so the bulk of traffic is handled by the workers. Consider one worker. If it maintains a pool of long-lived connections to the upstream servers, say 32, it can send requests over those existing connections instead of building new ones, skipping the TCP three-way handshake and four-way teardown each time. A keepalive connection still carries only one request at a time, but the set of connections stays in place, so the number of ports the worker occupies on the nginx side remains constant: a large volume of requests flows through those 32 connections without each request consuming a fresh port, which economizes ports considerably.
The keepalive directive's connections parameter sets how many idle keepalive connections are preserved. Holding idle connections that nobody uses is itself a burden on the backend servers, so when the limit is exceeded the least recently used connections are closed rather than kept forever, while the retained ones can be reused immediately for later requests without re-establishing anything. Raising this number somewhat can therefore improve the proxy's performance to a degree.
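A minimal sketch of such a configuration, assuming the same websrvs group (keepalive is a standard ngx_http_upstream_module directive; per the nginx documentation, proxying over keepalive connections also requires HTTP/1.1 and a cleared Connection header; the value 32 follows the example in the text):
upstream websrvs {
    server 172.18.10.10:80 weight=2;
    server 172.18.10.11:80 weight=3;
    keepalive 32;                        # each worker keeps up to 32 idle connections to the group
}

server {
    listen 80;
    location / {
        proxy_pass http://websrvs;
        proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # strip "Connection: close" from the proxied request
    }
}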
Original source: http://12876758.blog.51cto.com/12866758/1926014