We normally use Nginx's upstream blocks for layer-7 load balancing over HTTP/HTTPS ports. Older Nginx versions could not proxy raw TCP, so layer-4 load balancing over TCP/UDP ports was usually handled by LVS or HAProxy. For the difference between layer-4 and layer-7 load balancing, see: http://www.cnblogs.com/kevingrace/p/6137881.html. Starting with version 1.9.0, however, Nginx added the stream module, which implements forwarding, proxying, and load balancing for layer-4 protocols. Given Nginx's success in layer-7 load balancing and as a web server, and its solid architecture, the stream module has a bright future.
The stream module is not compiled into Nginx by default; it has to be added by hand when building from source. Without further ado, here is a real-world case of using stream for layer-4 load balancing:
An Nginx + Keepalived (master/slave) layer-7 LB environment was already in place, and the original build did not include the stream module, so it had to be added afterwards. Since this LB was already serving production traffic, the module was added via a smooth (in-place) upgrade, which has little impact on live services. The steps:
1) Perform the smooth upgrade on the LB slave first, then switch the VIP over to the slave, and finally perform the same upgrade on the master.
2) The smooth upgrade simply means re-running configure with --with-stream added, followed by make.
3) Important: after make, do NOT run make install, or it will overwrite the existing installation!
Note: because this environment had upgraded its OpenSSL version, re-running configure with --with-stream raised an error; for details see: http://www.cnblogs.com/kevingrace/p/8058535.html
Before doing anything, back up the existing Nginx installation directory so you can roll back if the operation fails!
[root@external-lb01 ~]# cp -r /data/nginx /mnt/nginx.bak

The original build command was:
[root@external-lb01 vhosts]# cd /data/software/nginx-1.12.2
[root@external-lb01 nginx-1.12.2]# ./configure --prefix=/data/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre

Now rebuild with stream added; the commands are:
[root@external-lb01 vhosts]# cd /data/software/nginx-1.12.2
[root@external-lb01 nginx-1.12.2]# ./configure --prefix=/data/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream
[root@external-lb01 nginx-1.12.2]# make
[root@external-lb01 nginx-1.12.2]# cp -f /data/software/nginx-1.12.2/objs/nginx /data/nginx/sbin/nginx
[root@external-lb01 nginx-1.12.2]# pkill -9 nginx
[root@external-lb01 nginx-1.12.2]# /data/nginx/sbin/nginx

Checking shows that nginx now has the stream module compiled in:
[root@external-lb01 nginx-1.12.2]# /data/nginx/sbin/nginx -V
nginx version: nginx/1.12.2
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)
built with OpenSSL 1.1.0g 2 Nov 2017
TLS SNI support enabled
configure arguments: --prefix=/data/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-openssl=/usr/local/ssl
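The same check can be scripted by grepping the configure arguments that `nginx -V` prints; a minimal sketch, assuming the binary lives at /data/nginx/sbin/nginx (the NGINX_BIN path is an assumption, adjust it for your install):

```shell
# Check whether an nginx build includes the stream module by
# inspecting the configure-arguments line from `nginx -V`.

has_stream() {
    # $1 is the full `nginx -V` output (nginx prints it on stderr)
    printf '%s\n' "$1" | grep -q -- '--with-stream'
}

NGINX_BIN=${NGINX_BIN:-/data/nginx/sbin/nginx}   # assumed path
if [ -x "$NGINX_BIN" ]; then
    v_out=$("$NGINX_BIN" -V 2>&1)
    if has_stream "$v_out"; then
        echo "stream module: present"
    else
        echo "stream module: missing"
    fi
fi
```

Running it against the freshly copied binary should report the module as present; against the backup in /mnt/nginx.bak it should report it missing.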
[root@external-lb01 ~]# cat /data/nginx/conf/nginx.conf
user  www;
worker_processes  8;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
#pid        logs/nginx.pid;

events {
    worker_connections  65535;
}

stream {
    upstream kk5 {
        server 10.0.58.22:5100;
        server 10.0.58.23:5100;
    }
    upstream kk5http {
        server 10.0.58.22:8000;
        server 10.0.58.23:8000;
    }
    upstream kk5https {
        server 10.0.58.22:8443;
        server 10.0.58.23:8443;
    }
    server {
        listen 5100;
        proxy_connect_timeout 1s;
        proxy_pass kk5;
    }
    server {
        listen 8000;
        proxy_connect_timeout 1s;
        proxy_pass kk5http;
    }
    server {
        listen 8443;
        proxy_connect_timeout 1s;
        proxy_pass kk5https;
    }
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    charset       utf-8;

    ######
    ## set access log format
    ######
    log_format main '$remote_addr $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '$http_user_agent $http_x_forwarded_for $request_time $upstream_response_time $upstream_addr $upstream_status';

    #######
    ## http setting
    #######
    sendfile          on;
    tcp_nopush        on;
    tcp_nodelay       on;
    keepalive_timeout 65;
    proxy_cache_path  /var/www/cache levels=1:2 keys_zone=mycache:20m max_size=2048m inactive=60m;
    proxy_temp_path   /var/www/cache/tmp;

    fastcgi_connect_timeout      3000;
    fastcgi_send_timeout         3000;
    fastcgi_read_timeout         3000;
    fastcgi_buffer_size          256k;
    fastcgi_buffers              8 256k;
    fastcgi_busy_buffers_size    256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors     on;

    #client_header_timeout 600s;
    client_body_timeout     600s;
    #client_max_body_size   50m;
    client_max_body_size    100m;
    client_body_buffer_size 256k;

    gzip              on;
    gzip_min_length   1k;
    gzip_buffers      4 16k;
    gzip_http_version 1.1;
    gzip_comp_level   9;
    gzip_types        text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php;
    gzip_vary         on;

    ## includes vhosts
    include vhosts/*.conf;
}

[root@external-lb01 ~]# cd /data/nginx/conf/vhosts/
[root@external-lb01 vhosts]# ls
-rw-r-xr-- 1 root root 889 Dec 26 15:18 bpm.kevin.com.conf
-rw-r-xr-- 1 root root 724 Dec 26 14:38 mobi.kevin.com.conf

[root@external-lb01 vhosts]# cat bpm.kevin.com.conf
upstream os-8080 {
    ip_hash;
    server 10.0.58.20:8080 max_fails=3 fail_timeout=15s;
    server 10.0.58.21:8080 max_fails=3 fail_timeout=15s;
}

server {
    listen      80;
    server_name bpm.kevin.com;

    access_log /data/nginx/logs/bpm.kevin.com-access.log main;
    error_log  /data/nginx/logs/bpm.kevin.com-error.log;

    location / {
        proxy_pass http://os-8080;
        proxy_set_header Host $host;
        proxy_redirect http://os-8080/ http://bpm.kevin.com/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

[root@external-lb01 vhosts]# cat mobi.kevin.com.conf
upstream mobi_cluster {
    server 10.0.54.20:8080;
}

server {
    listen      80;
    server_name mobi.kevin.com;

    access_log /data/nginx/logs/mobi.kevin.com-access.log main;
    error_log  /data/nginx/logs/mobi.kevin.com-error.log;

    location / {
        proxy_pass http://mobi_cluster;
        proxy_set_header Host $host;
        proxy_redirect http://mobi_cluster/ http://mobi.kevin.com/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

Shut down the firewall, or else open each of the ports used in the configuration above one by one!
[root@external-lb01 vhosts]# /data/nginx/sbin/nginx -s reload
After reloading nginx, the HTTP ports 80, 8080, 8000, and 8443 were all up (as shown by lsof), while the TCP/UDP port 5100 was not, which is normal here.
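To confirm which of the configured ports are actually listening after the reload, the output of `ss -lnt` (or lsof) can be filtered; a small sketch, assuming `ss` from iproute2 is available and that the expected port list matches this setup:

```shell
# Report each expected LB port as up/down based on `ss -lnt` output.
expected_ports="80 5100 8000 8443"   # ports from the config above

listening_ports() {
    # $1 is `ss -lnt` output; print the local port of each LISTEN line
    printf '%s\n' "$1" | awk '$1 == "LISTEN" {n = split($4, a, ":"); print a[n]}'
}

check_ports() {
    # $1 is `ss -lnt` output
    for p in $expected_ports; do
        if listening_ports "$1" | grep -qx "$p"; then
            echo "port $p: up"
        else
            echo "port $p: down"
        fi
    done
}
```

Typical usage on the LB host would be `check_ports "$(ss -lnt)"`.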
1) proxy_pass URL;
Context: location, if in location, limit_except.
Note: when the path after proxy_pass carries no URI, the location's URI is passed through to the backend unchanged:
server {
    ...
    server_name HOSTNAME;
    location /uri/ {
        proxy_pass http://host[:port];
    }
    ...
}
http://HOSTNAME/uri --> http://host/uri
When the path after proxy_pass does include a URI, the location's URI is replaced by the proxy_pass URI:
server {
    ...
    server_name HOSTNAME;
    location /uri/ {
        proxy_pass http://host/new_uri/;
    }
    ...
}
http://HOSTNAME/uri/ --> http://host/new_uri/
If the location's URI is defined with a regular expression, proxy_pass must not carry a URI; the URI from the client request is appended as-is to the proxied address:
server {
    ...
    server_name HOSTNAME;
    location ~|~* /uri/ {
        proxy_pass http://host;
    }
    ...
}
http://HOSTNAME/uri/ --> http://host/uri/

2) proxy_set_header field value;
Sets the value of a request-header field sent to the backend. Context: http, server, location.
proxy_set_header X-Real-IP $remote_addr;
  $remote_addr holds the IP of the directly connected peer, which may itself be another proxy server.
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  $proxy_add_x_forwarded_for appends the client's source address to any existing X-Forwarded-For value.
On an Apache httpd backend, also adjust /etc/httpd/conf/httpd.conf:
LogFormat "%{X-Real-IP}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
With this, the backend logs the real requester of each resource instead of only the front-end proxy's IP address.

3) proxy_cache_path
Defines a cache usable by the proxy module. Context: http.
proxy_cache_path path [levels=levels] [use_temp_path=on|off] keys_zone=name:size [inactive=time] [max_size=size] [manager_files=number] [manager_sleep=time] [manager_threshold=time] [loader_files=number] [loader_sleep=time] [loader_threshold=time] [purger=on|off] [purger_files=number] [purger_sleep=time] [purger_threshold=time];
Example:
proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2:1 keys_zone=gmtest:20M max_size=1G;

4) proxy_cache zone | off;
Selects the cache zone to use, or disables the caching mechanism. Context: http, server, location.
proxy_cache gmtest;

5) proxy_cache_key string;
The content used as the cache "key".
Default: proxy_cache_key $scheme$proxy_host$request_uri;
It is advisable to build the key from the request method and URL.

6) proxy_cache_valid [code ...] time;
Sets the caching time for responses with specific status codes.
Define the cache path in http{...}:
proxy_cache_path /var/cache/nginx/proxy_cache levels=1:1:1 keys_zone=gmtest:20m max_size=1g;
then enable it in whichever context needs caching, e.g. server{...} or a location:
proxy_cache gmtest;
proxy_cache_key $request_uri;
proxy_cache_valid 200 302 301 1h;
proxy_cache_valid any 1m;

7) proxy_cache_use_stale error | timeout | invalid_header | updating | http_500 | http_502 | http_503 | http_504 | http_403 | http_404 | off ...;
Determines in which cases a stale cached response can be used when an error occurs during communication with the proxied server, i.e. for which backend failures the client is answered from the cache.

8) proxy_cache_methods GET | HEAD | POST ...;
If the client request method is listed in this directive then the response will be cached. "GET" and "HEAD" methods are always added to the list, though it is recommended to specify them explicitly.

9) proxy_hide_header field;
By default, nginx does not pass the header fields "Date", "Server", "X-Pad", and "X-Accel-..." from the response of a proxied server to a client. The proxy_hide_header directive sets additional fields that will not be passed.

10) proxy_connect_timeout time;
Defines a timeout for establishing a connection with a proxied server. It should be noted that this timeout cannot usually exceed 75 seconds. Default: 60s.

11) Buffer-related directives:
a) proxy_buffer_size size;
Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header. By default, the buffer size is equal to one memory page: 4k|8k.
b) proxy_buffering on | off;
Enables or disables buffering of responses from the proxied server. Default: on.
c) proxy_buffers number size;
Sets the number and size of the buffers used for reading a response from the proxied server, for a single connection. By default, the buffer size is equal to one memory page: 8 4k|8k.
d) proxy_busy_buffers_size size;
When buffering of responses from the proxied server is enabled, limits the total size of buffers that can be busy sending a response to the client while the response is not yet fully read. Default: 8k|16k.
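Putting the caching directives above together, a minimal end-to-end setup might look like the following sketch (the zone name gmtest follows the earlier examples; the backend address and paths are illustrative, not from this deployment):

```nginx
http {
    # one cache zone, defined once at http level
    proxy_cache_path /var/cache/nginx/proxy_cache levels=1:1:1
                     keys_zone=gmtest:20m max_size=1g inactive=60m;

    server {
        listen 80;
        location / {
            proxy_pass http://10.0.0.10:8080;      # illustrative backend
            proxy_cache gmtest;                    # enable the zone here
            proxy_cache_key $request_uri;
            proxy_cache_valid 200 301 302 1h;      # cache "good" answers longer
            proxy_cache_valid any 1m;
            # answer from stale cache if the backend errors out or times out
            proxy_cache_use_stale error timeout http_502 http_503;
        }
    }
}
```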
The ngx_http_headers_module module allows adding the "Expires" and "Cache-Control" header fields, and arbitrary fields, to a response header. In other words, it adds custom headers to, or modifies existing headers of, the responses the proxy returns to clients.
1) add_header name value [always];
Adds a custom header, e.g.:
add_header X-Via $server_addr;     (the address of the proxy server the response passed through)
add_header X-Accel $server_name;
2) expires [modified] time;
   expires epoch | max | off;
Sets the value of the Expires or Cache-Control header, which allows the caching lifetime defined by the upstream server to be overridden.
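As a sketch, the two headers-module directives can be combined to tag responses with the proxy's identity and force a 24-hour client-side cache for static assets (the location pattern and backend address are illustrative assumptions):

```nginx
location ~* \.(css|js|png)$ {
    proxy_pass http://10.0.0.10:8080;  # illustrative backend
    add_header X-Via   $server_addr;   # which proxy served this response
    add_header X-Accel $server_name;
    expires 24h;                       # rewrites Expires / Cache-Control
}
```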
The ngx_http_upstream_module module is used to define groups of servers that can be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives.
1) upstream name { ... }
Defines a backend server group and introduces a new context. Context: http.
upstream httpdsrvs {
    server ...
    server ...
    ...
}
2) server address [parameters];
A server member inside the upstream context, together with its parameters. Context: upstream.
address formats:
    unix:/PATH/TO/SOME_SOCK_FILE
    IP[:PORT]
    HOSTNAME[:PORT]
parameters:
    weight=number      weight, default 1; the default algorithm is wrr
    max_fails=number   maximum number of failed attempts; beyond this count the server is marked unavailable
    fail_timeout=time  how long a server stays marked as unavailable
    max_conns          maximum number of concurrent connections to this server
    backup             mark the server as a backup, used only when all other servers are unavailable
    down               mark the server as unavailable
A typical rollout: first mark the server down in the Nginx front end, take the backend out of service, deploy the new web application, bring the server back up, and then remove the down mark from the configuration.
3) least_conn;
Least-connections scheduling; when servers have different weights it becomes wlc (weighted least-connections). It works best when backends hold long-lived connections, e.g. MySQL.
4) ip_hash;
Source-address hash scheduling.
5) hash key [consistent];
Schedules requests based on a hash table over the given key, which can be literal text, a variable, or a combination of both.
Purpose: classify requests so that requests of the same class are sent to the same upstream server.
If the consistent parameter is specified the ketama consistent hashing method will be used instead.
Examples:
    hash $request_uri consistent;
    hash $remote_addr;
    hash $cookie_name;    (requests from the same browser go to the same upstream server)
6) keepalive connections;
The number of idle keep-alive connections preserved per worker process.
Other second-generation Nginx distributions: Tengine, OpenResty.
Since version 1.9, Nginx can reverse-proxy TCP/UDP protocols via the stream module, working at the transport layer.
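The server parameters above can be combined in one illustrative group (addresses, weights, and the keepalive count here are made-up values, not from this deployment):

```nginx
upstream httpdsrvs {
    server 10.0.0.11:8080 weight=2 max_fails=3 fail_timeout=15s;
    server 10.0.0.12:8080 weight=1 max_fails=3 fail_timeout=15s;
    server 10.0.0.13:8080 backup;   # used only when the others are down
    server 10.0.0.14:8080 down;     # taken out for maintenance
    keepalive 32;                   # idle keep-alive connections per worker
}
```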
The stream module reverse-proxies TCP- or UDP-based service connections, i.e. it acts as a reverse proxy or scheduler at the transport layer.
1) stream { ... }
Defines stream-related services. Context: main.
stream {
    upstream sshsrvs {
        server 192.168.22.2:22;
        server 192.168.22.3:22;
        least_conn;
    }
    server {
        listen 10.1.0.6:22022;
        proxy_pass sshsrvs;
    }
}
The upstream block inside stream is used the same way as described above.
2) listen address:port [ssl] [udp] [proxy_protocol] [backlog=number] [bind] [ipv6only=on|off] [reuseport] [so_keepalive=on|off|[keepidle]:[keepintvl]:[keepcnt]];
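Since listen accepts a udp flag, the same pattern covers UDP services as well; a hypothetical DNS-forwarding sketch (the resolver addresses are made up for the example):

```nginx
stream {
    upstream dnssrvs {              # illustrative backend resolvers
        server 192.168.22.2:53;
        server 192.168.22.3:53;
    }
    server {
        listen 53 udp;              # udp flag on listen
        proxy_responses 1;          # expect one response datagram per request
        proxy_timeout 20s;
        proxy_pass dnssrvs;
    }
}
```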
Nginx layer-4 (TCP/UDP port) load balancing with the stream module: configuration notes
Original article: https://www.cnblogs.com/charon2/p/10790483.html