Nginx, an invention from Russia, is widely used for server-side network handling thanks to its stability, efficiency, concise configuration, and excellent performance.
Docs written in English are the best material for mastering a specific technology.
(ref: https://docs.nginx.com/nginx/)
NGINX has one master process and one or more worker processes. If caching is enabled, the cache loader and cache manager processes also run at startup.
The main purpose of the master process is to read and evaluate configuration files, as well as maintain the worker processes.
The worker processes do the actual processing of requests. NGINX relies on OS-dependent mechanisms to efficiently distribute requests among worker processes. The number of worker processes is defined by the worker_processes directive in the nginx.conf configuration file and can either be set to a fixed number or configured to adjust automatically to the number of available CPU cores.
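As an illustration, a minimal sketch of the relevant line in nginx.conf (the values shown are illustrative, not a recommendation):
worker_processes auto;    # spawn one worker per available CPU core
# worker_processes 4;     # or pin the number of workers explicitly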
To reload your configuration, you can stop or restart NGINX, or send signals to the master process. A signal can be sent by running the nginx command (invoking the NGINX executable) with the -s argument.
The kill utility can also be used to send a signal directly to the master process. The process ID of the master process is written, by default, to the nginx.pid file, which is located in the /usr/local/nginx/logs or /var/run directory.
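For example, the commonly used invocations look like the following; the pid-file path below assumes the default /var/run location and may differ depending on how NGINX was built or packaged:
nginx -s reload                          # re-read the configuration and start new workers gracefully
nginx -s quit                            # graceful shutdown
kill -s HUP $(cat /var/run/nginx.pid)    # equivalent reload, signalling the master process directly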
NGINX and NGINX Plus are similar to other services in that they use a text-based configuration file written in a particular format.
To make the configuration easier to maintain, we recommend that you split it into a set of feature-specific files stored in the /etc/nginx/conf.d directory and use the include directive in the main nginx.conf file to reference the contents of the feature-specific files.
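A minimal sketch of this layout, assuming hypothetical feature-specific files such as gzip.conf and proxy.conf under /etc/nginx/conf.d:
# /etc/nginx/nginx.conf
user  nginx;
worker_processes auto;

events {
    # connection processing settings
}

http {
    # pull in all feature-specific files, e.g. gzip.conf, proxy.conf
    include /etc/nginx/conf.d/*.conf;
}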
A few top-level directives, referred to as contexts, group together the directives that apply to different traffic types:
events – General connection processing
http – HTTP traffic
mail – Mail traffic
stream – TCP and UDP traffic
In each of the traffic-handling contexts, you include one or more server blocks to define virtual servers that control the processing of requests. The directives you can include within a server context vary depending on the traffic type.
The following configuration illustrates the use of contexts.
user nobody; # a directive in the 'main' context

events {
    # configuration of connection processing
}

http {
    # Configuration specific to HTTP and affecting all virtual servers
    server {
        # configuration of HTTP virtual server 1
        location /one {
            # configuration for processing URIs starting with '/one'
        }
        location /two {
            # configuration for processing URIs starting with '/two'
        }
    }
    server {
        # configuration of HTTP virtual server 2
    }
}

stream {
    # Configuration specific to TCP/UDP and affecting all virtual servers
    server {
        # configuration of TCP virtual server 1
    }
}
In general, a child context – one contained within another context (its parent) – inherits the settings of directives included at the parent level.
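As an illustration of this inheritance (the paths here are made up), a root directive set in the http context applies to every server and location below it unless a child overrides it:
http {
    root /var/www/html;            # set once in the parent context

    server {
        location /images/ {
            # no root here: /var/www/html is inherited from the http context
        }
        location /downloads/ {
            root /var/www/files;   # child context overrides the inherited value
        }
    }
}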
Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations.
To start using NGINX Plus or NGINX to load balance HTTP traffic to a group of servers, first you need to define the group with the upstream directive. The directive is placed in the http context.
Servers in the group are configured using the server directive (not to be confused with the server block that defines a virtual server running on NGINX). For example, the following configuration defines a group named backend that consists of three server configurations (which may resolve to more than three actual servers):
http {
    upstream backend {
        server backend1.example.com weight=5;
        server backend2.example.com;
        server 192.0.0.1 backup;
    }
}
To pass requests to a server group, the name of the group is specified in the proxy_pass directive (or the fastcgi_pass, memcached_pass, scgi_pass, or uwsgi_pass directive for those protocols). In the next example, a virtual server running on NGINX passes all requests to the backend upstream group defined in the previous example:
server {
    location / {
        proxy_pass http://backend;
    }
}
The following example combines the two snippets above and shows how to proxy HTTP requests to the backend server group. The group consists of three servers, two of them running instances of the same application while the third is a backup server. Because no load-balancing algorithm is specified in the upstream block, NGINX uses the default algorithm, Round Robin:
http {
    upstream backend {
        server backend1.example.com;
        server backend2.example.com;
        server 192.0.0.1 backup;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}
1. Round Robin – Requests are distributed evenly across the servers, with server weights taken into consideration. This method is used by default (there is no directive for enabling it):
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
}
2. Least Connections – A request is sent to the server with the least number of active connections, again with server weights taken into consideration:
upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
}
Other methods are IP Hash, Generic Hash, and Least Time (NGINX Plus only).
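For reference, sketches of how the first two are enabled; the upstream group names here are arbitrary, and Least Time is configured with the least_time directive in NGINX Plus:
upstream backend_iphash {
    ip_hash;                          # IP Hash: requests from the same client address go to the same server
    server backend1.example.com;
    server backend2.example.com;
}

upstream backend_hash {
    hash $request_uri consistent;     # Generic Hash keyed on the request URI, with consistent (ketama) hashing
    server backend1.example.com;
    server backend2.example.com;
}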
By default, NGINX distributes requests among the servers in the group according to their weights using the Round Robin method. The weight parameter to the server directive sets the weight of a server; the default is 1:
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com;
    server 192.0.0.1 backup;
}
In the example, backend1.example.com has weight 5; the other two servers have the default weight (1), but the one with IP address 192.0.0.1 is marked as a backup server and does not receive requests unless both of the other servers are unavailable. With this configuration of weights, out of every six requests, five are sent to backend1.example.com and one to backend2.example.com.
The server slow-start feature prevents a recently recovered server from being overwhelmed by connections, which may time out and cause the server to be marked as failed again.
In NGINX Plus, slow-start allows an upstream server to gradually recover its weight from zero to its nominal value after it has been recovered or became available. This can be done with the slow_start parameter to the server directive:
upstream backend {
    server backend1.example.com slow_start=30s;
    server backend2.example.com;
    server 192.0.0.1 backup;
}
The time value (here, 30 seconds) sets the time during which NGINX Plus ramps up the number of connections to the server to the full value.
Note that if there is only a single server in a group, the max_fails, fail_timeout, and slow_start parameters to the server directive are ignored and the server is never considered unavailable.
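For completeness, a sketch of how max_fails and fail_timeout are typically set in a group with more than one server (the values are illustrative):
upstream backend {
    # after 3 failed attempts within 30s, mark the server unavailable for 30s
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com;
    server 192.0.0.1 backup;
}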
Session persistence means that NGINX Plus identifies user sessions and routes all requests in a given session to the same upstream server.
Sticky cookie – NGINX Plus adds a session cookie to the first response from the upstream group and identifies the server that sent the response. The client's next request contains the cookie value and NGINX Plus routes the request to the upstream server that responded to the first request:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    sticky cookie srv_id expires=1h domain=.example.com path=/;
}
In the example, the srv_id parameter sets the name of the cookie. The optional expires parameter sets the time for the browser to keep the cookie (here, 1 hour). The optional domain parameter defines the domain for which the cookie is set, and the optional path parameter defines the path for which the cookie is set. This is the simplest session persistence method.
Other session persistence methods are sticky route and sticky learn (cookie learn).
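As a sketch of the sticky learn method (NGINX Plus only), assuming the application itself sets a session cookie named EXAMPLECOOKIE:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    sticky learn
        create=$upstream_cookie_examplecookie   # learn the session from the Set-Cookie header of the response
        lookup=$cookie_examplecookie            # look up the session from the cookie in later requests
        zone=client_sessions:1m;                # shared memory zone that stores the learned sessions
}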
With NGINX Plus, it is possible to limit the number of connections to an upstream server by specifying the maximum number with the max_conns parameter.
If the max_conns limit has been reached, the request is placed in a queue for further processing, provided that the queue directive is also included to set the maximum number of requests that can be simultaneously in the queue:
upstream backend {
    server backend1.example.com max_conns=3;
    server backend2.example.com;
    queue 100 timeout=70;
}
If the queue is filled up with requests or the upstream server cannot be selected during the timeout specified by the optional timeout parameter, the client receives an error.
Note that the max_conns limit is ignored if there are idle keepalive connections opened in other worker processes. As a result, the total number of connections to the server might exceed the max_conns value in a configuration where the memory is shared with multiple worker processes.
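Relatedly, the run-time state of an upstream group (including connection counts) can be kept in a shared memory zone with the zone directive, so that all worker processes work from the same counters; the zone name and size below are illustrative:
upstream backend {
    zone backend_zone 64k;                    # share the group's state across all worker processes
    server backend1.example.com max_conns=3;
    server backend2.example.com;
    queue 100 timeout=70;
}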
Original post: https://www.cnblogs.com/geeklove01/p/9191126.html