In Internet companies, log analysis is a particularly important module. By analyzing the logs you can derive metrics about a site and see how it is being accessed. Logs are also very useful for statistics and troubleshooting.
The nginx log-related configuration directives are access_log, log_format, open_log_file_cache, log_not_found, log_subrequest, rewrite_log and error_log.
Log-related configuration directives
1. access_log directive
1.1 Syntax
Syntax:  access_log path [format [buffer=size] [gzip[=level]] [flush=time] [if=condition]];
         access_log off;
Default: access_log logs/access.log combined;
Context: http, server, location, if in location, limit_except
gzip sets the compression level; buffer sets the size of the in-memory buffer; flush sets the longest time a record may stay in the buffer before it is written out. To disable logging: access_log off;
1.2 Example
access_log /path/to/log.gz combined gzip flush=5m;
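As a further sketch (the map name $loggable and the log path are illustrative, not from the original), the if= parameter can be combined with a map block to skip logging of successful requests:

# map must appear in the http context
map $status $loggable {
    ~^[23]  0;
    default 1;
}

access_log /path/to/access.log combined if=$loggable;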
2. log_format directive
2.1 Syntax
Syntax:  log_format name string ...;
Default: log_format combined "...";
Context: http
name is the name of the format and string defines it. log_format comes with a predefined combined format that needs no configuration; it is equivalent to Apache's combined log format.
2.2 Example
log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
$bytes_sent
the number of bytes sent to a client
$connection
connection serial number
$connection_requests
the current number of requests made through a connection (1.1.18)
$msec
time in seconds with a milliseconds resolution at the time of the log write
$pipe
“p” if request was pipelined, “.” otherwise
$request_length
request length (including request line, header, and request body)
$request_time
request processing time in seconds with a milliseconds resolution; time elapsed between the first bytes were read from the client and the log write after the last bytes were sent to the client
$status
response status
$time_iso8601
local time in the ISO 8601 standard format
$time_local
local time in the Common Log Format
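To make these variables concrete, here is a sketch of a custom format (the format name timing and the log path are my own, not from the original) that records size and timing information:

log_format timing '$remote_addr [$time_iso8601] "$request" $status '
                  '$bytes_sent $request_length $request_time $pipe';

access_log logs/timing.access.log timing;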
3. open_log_file_cache directive
3.1 Syntax
Syntax:  open_log_file_cache max=N [inactive=time] [min_uses=N] [valid=time];
         open_log_file_cache off;
Default: open_log_file_cache off;
Context: http, server, location
By default each log record opens the file, writes the entry and closes the file again. open_log_file_cache keeps a cache of log file descriptors (off by default). The parameters are:
max: maximum number of file descriptors in the cache; when the cache is full, the least recently used descriptors are closed (LRU).
inactive: time after which an unused descriptor is closed; default 10s.
min_uses: minimum number of writes within the inactive period before a descriptor stays cached; default 1.
valid: how often to check that the file still exists under the same name; default 60s.
off: disables the cache.
3.2 Example
open_log_file_cache max=1000 inactive=20s valid=1m min_uses=2;
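The cache matters most when the log path contains variables, since every write would otherwise open and close the file. A minimal sketch (the host-based log path and server_name are illustrative assumptions):

http {
    open_log_file_cache max=1000 inactive=20s valid=1m min_uses=2;

    server {
        listen       80;
        server_name  server1.domain.com;
        # one log file per requested host; each write looks the descriptor up in the cache
        access_log   logs/$host.access.log combined;

        location / {
            root html;
        }
    }
}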
4. log_not_found directive
4.1 Syntax
Syntax:  log_not_found on | off;
Default: log_not_found on;
Context: http, server, location
Controls whether "file not found" errors are recorded in error_log. Enabled by default.
5. log_subrequest directive
5.1 Syntax
Syntax:  log_subrequest on | off;
Default: log_subrequest off;
Context: http, server, location
Controls whether subrequests are recorded in access_log. Disabled by default.
6. rewrite_log directive
6.1 Syntax
Syntax:  rewrite_log on | off;
Default: rewrite_log off;
Context: http, server, location, if
Provided by the ngx_http_rewrite_module module and used to log rewrite processing; enabling it is recommended when debugging rewrite rules (see the Nginx rewrite rules guide). When enabled, rewrite processing is written to the error log at the notice level.
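A sketch of how it might be wired up (the server_name, paths and rewrite rule are illustrative):

server {
    listen       80;
    server_name  server1.domain.com;

    # rewrite_log entries go to the error log at the notice level
    error_log  logs/rewrite.error.log notice;
    rewrite_log on;

    location /old/ {
        rewrite ^/old/(.*)$ /new/$1 last;
    }
    location /new/ {
        root /u01/up1;
    }
}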
7. error_log directive
7.1 Syntax
Syntax:  error_log file | stderr | syslog:server=address[,parameter=value]
         [debug | info | notice | warn | error | crit | alert | emerg];
Default: error_log logs/error.log error;
Context: main, http, mail, stream, server, location
Configures the error log.
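Two hedged examples (the paths and the syslog address are assumptions; as far as I know, syslog output requires nginx 1.7.1 or later):

# per-server error log at a more verbose level
error_log logs/server1.error.log warn;

# additionally ship errors to a remote syslog server
error_log syslog:server=192.168.163.200,tag=nginx error;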
The nginx configuration file is as follows:
http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    server {
        # listening IP and port
        listen       192.168.163.146:80;
        server_name  server1.domain.com;
        access_log   logs/server1.access.log  main;

        location / {
            index  index.html index.htm;
            # document root
            root   /u01/up1;
        }
    }
}

cat /usr/local/nginx-1.7.9/logs/server1.access.log
192.168.163.1 - - [29/Aug/2016:14:32:00 +0800] "GET / HTTP/1.1" 200 134 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:48.0) Gecko/20100101 Firefox/48.0" "-"
192.168.163.1 - - [29/Aug/2016:14:32:01 +0800] "GET / HTTP/1.1" 200 134 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:48.0) Gecko/20100101 Firefox/48.0" "-"
192.168.163.1 - - [29/Aug/2016:14:32:01 +0800] "GET / HTTP/1.1" 200 134 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:48.0) Gecko/20100101 Firefox/48.0" "-"
...
192.168.163.1 - - [29/Aug/2016:14:32:03 +0800] "GET / HTTP/1.1" 200 134 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:48.0) Gecko/20100101 Firefox/48.0" "-"
192.168.163.1 - - [29/Aug/2016:14:32:04 +0800] "GET / HTTP/1.1" 200 134 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:48.0) Gecko/20100101 Firefox/48.0" "-"
192.168.163.1 - - [29/Aug/2016:14:32:41 +0800] "GET /index.html HTTP/1.1" 200 134 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:48.0) Gecko/20100101 Firefox/48.0" "-"
192.168.163.1 - - [29/Aug/2016:14:32:48 +0800] "GET /index2html HTTP/1.1" 404 168 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:48.0) Gecko/20100101 Firefox/48.0" "-"
192.168.163.1 - - [29/Aug/2016:14:32:51 +0800] "GET /index2.html HTTP/1.1" 404 168 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:48.0) Gecko/20100101 Firefox/48.0" "-"
Overall, log configuration is quite simple. Next, let's set up log rotation.
Why rotate logs?
Every programmer who has had to chase a bug in production knows that an oversized log file is inconvenient both to maintain and to read. So logs should be split, and splitting by day is the most common approach. This is much like log4j's DailyRollingFileAppender [org.apache.log4j.DailyRollingFileAppender].
The daily rotation logic is simple and consists of three steps:
Back up the current log by moving it to a dated file
Signal nginx (USR1) to reopen its log files
Schedule the script with cron (a sample crontab entry is shown after the script below)
#!/bin/bash
log_path="/usr/local/nginx-1.7.9/logs/"
pid_path="/usr/local/nginx-1.7.9/conf/nginx.pid"
# create a year/month directory for yesterday's date
mkdir -p ${log_path}$(date -d "yesterday" +"%Y")/$(date -d "yesterday" +"%m")/
# move the current access log into the dated directory
mv ${log_path}server1.access.log ${log_path}$(date -d "yesterday" +%Y)/$(date -d "yesterday" +%m)/server1.access_$(date -d "yesterday" +%Y%m%d).log
# tell nginx to reopen its log files
kill -USR1 `cat ${pid_path}`
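To complete step 3, something like the following crontab entry would run the script at midnight every day (the script path /usr/local/nginx-1.7.9/logs/cut_nginx_log.sh is my assumption; use wherever you saved it):

# crontab -e
0 0 * * * /bin/bash /usr/local/nginx-1.7.9/logs/cut_nginx_log.sh >/dev/null 2>&1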
I don't write shell scripts on Linux very often; I mostly just dabble. The script above was written by following examples from the web and from books, and it did not go smoothly.
1. Because I edited the file locally in Sublime and uploaded it with WinSCP, the script ended up with Windows line endings and failed with:
/bin/bash^M: bad interpreter
2. At one point I mistakenly wrote $(date -d "yesterday" +"%Y") as ${date -d "yesterday" +"%Y"}.
3. I also mistyped kill -USR1 as "kill -USER1".
Nginx control signals
TERM, INT
fast shutdown
QUIT
graceful shutdown
HUP
reload the configuration and start new worker processes with the new configuration
USR1
reopen log files
USR2
upgrade the executable on the fly
WINCH
gracefully shut down worker processes
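For reference, signals can be sent either with kill or through the nginx binary's -s switch (the install prefix below matches the paths used earlier and is otherwise an assumption):

# reopen log files, exactly what the rotation script does
kill -USR1 $(cat /usr/local/nginx-1.7.9/conf/nginx.pid)

# equivalent shortcuts offered by the binary itself
/usr/local/nginx-1.7.9/sbin/nginx -s reopen   # same as USR1
/usr/local/nginx-1.7.9/sbin/nginx -s reload   # same as HUP
/usr/local/nginx-1.7.9/sbin/nginx -s quit     # same as QUIT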
( ) starts a new subshell; { } does not.
(( )) is commonly used for arithmetic comparisons; [[ ]] for string comparisons.
$( ) expands to the output of the command inside the parentheses.
$(( )) expands to the value of the arithmetic expression inside.
${ } is parameter substitution and expansion.
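A small self-contained bash sketch of the points above (the variable names are illustrative):

#!/bin/bash
# $( )   - command substitution: capture a command's output
yesterday=$(date -d "yesterday" +%Y%m%d)

# $(( )) - arithmetic expansion: the value of the expression
keep_days=$((7 * 4))

# ${ }   - parameter expansion, here with a default if LOG_DIR is unset
log_dir=${LOG_DIR:-/usr/local/nginx-1.7.9/logs}

# ( )    - runs in a subshell: the cd does not affect the parent shell
(cd "$log_dir" && ls)

# (( )) for arithmetic tests, [[ ]] for string tests
if (( keep_days > 7 )) && [[ -n "$yesterday" ]]; then
    echo "keeping $keep_days days of logs under $log_dir"
fi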
This article originally appeared on the "简单" blog: http://dba10g.blog.51cto.com/764602/1843959