filebeat + nginx: url cannot be fuzzy-searched when building charts
$request_time and $upstream_response_time are wrapped in quotes here because, when the value is unavailable, nginx logs a literal "-"; an unquoted "-" would produce invalid JSON and make Logstash throw errors, so the values are logged as strings and converted back to float on the Logstash side (see the sample line after the log_format below).
nginx.conf
log_format json '{"time": "$time_iso8601", '
                '"remote_addr": "$remote_addr", '
                '"transactionId": "$http_TraceId", '
                '"referer": "$http_referer", '
                '"website": "$http_host", '
                '"uri": "$uri", '
                '"status": $status, '
                '"bytes": $body_bytes_sent, '
                '"agent": "$http_user_agent", '
                '"x_forwarded": "$http_x_forwarded_for", '
                '"up_addr": "$upstream_addr", '
                '"up_host": "$upstream_http_host", '
                '"upstreamresponsetime": "$upstream_response_time", '
                '"responsetime": "$request_time"'
                '}';
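A hypothetical line produced by this format for a request that never reaches an upstream would look roughly like the following; note that upstreamresponsetime comes out as the string "-", which is exactly why the field has to be quoted:

{"time": "2019-11-28T10:15:32+08:00", "remote_addr": "10.0.0.1", "transactionId": "-", "referer": "-", "website": "example.com", "uri": "/index.html", "status": 200, "bytes": 512, "agent": "curl/7.29.0", "x_forwarded": "-", "up_addr": "-", "up_host": "-", "upstreamresponsetime": "-", "responsetime": "0.001"}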
Logstash configuration
[root@prod-logstash02-E logstash]# cat conf.d/nginx-output-kafka.conf
input {
  kafka {
    bootstrap_servers => "172.27.27.220:9092,172.27.27.221:9092,172.27.27.222:9092"
    topics            => ["nginxlogs"]
    codec             => "json"            # the kafka messages are JSON (filebeat output)
    type              => "nginx-access"
  }
}
filter {
  if [type] == "nginx-access" {
    json {
      source => "message"                  # parse the nginx JSON line carried in the message field
    }
  }
  # nginx quoted these fields so that "-" would not break the JSON; convert them back to float here
  mutate {
    convert => ["responsetime", "float"]
    convert => ["upstreamresponsetime", "float"]
  }
  geoip {
    source => "remote_addr"
    #database => "/opt/logstash-6.5.4/Geoip/GeoLite2-City_20191022/GeoLite2-City.mmdb"
  }
}
output {
  if [fields][type] =~ "nginxlogs" {
    elasticsearch {
      hosts              => [ "172.27.27.220:9200", "172.27.27.221:9200", "172.27.27.222:9200" ]
      index              => "logstash-nginxaccess-%{+YYYY.MM.dd}"
      template_overwrite => true
      user               => "xxx"
      password           => "xxxx"
      #document_type => "%{[type]}"
    }
  }
  #stdout { codec => rubydebug }
}
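Before debugging on the Logstash side, it can help to confirm that filebeat is actually delivering the JSON lines to the nginxlogs topic, for example with the standard Kafka console consumer (a sketch; the script path depends on where Kafka is installed):

/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 172.27.27.220:9092 \
  --topic nginxlogs \
  --max-messages 5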
Checking the index mapping shows that uri is of type keyword, so it cannot be fuzzy-searched (partial match) when building visualizations; the mapping can be inspected as shown below.
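The field mapping can be checked against the indices created by the output above (credentials and hosts as in the config; the index name pattern is the one defined there):

curl -u xxx:xxxx 'http://172.27.27.220:9200/logstash-nginxaccess-*/_mapping/field/uri?pretty'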
nginx.conf
log_format json '{"time": "$time_iso8601", '
                '"remote_addr": "$remote_addr", '
                '"transactionId": "$http_TraceId", '
                '"referer": "$http_referer", '
                '"website": "$http_host", '
                '"request": "$request", '
                '"status": $status, '
                '"bytes": $body_bytes_sent, '
                '"agent": "$http_user_agent", '
                '"x_forwarded": "$http_x_forwarded_for", '
                '"up_addr": "$upstream_addr", '
                '"up_host": "$upstream_http_host", '
                '"upstreamresponsetime": "$upstream_response_time", '
                '"responsetime": "$request_time"'
                '}';
Logstash configuration
[root@prod-logstash02-E logstash]# cat conf.d/nginx-output-kafka.conf
input {
  kafka {
    bootstrap_servers => "172.27.27.220:9092,172.27.27.221:9092,172.27.27.222:9092"
    topics            => ["nginxlogs"]
    codec             => "json"
    type              => "nginx-access"
  }
}
filter {
  if [type] == "nginx-access" {
    json {
      source => "message"
    }
  }
  mutate {
    convert => ["responsetime", "float"]
    convert => ["upstreamresponsetime", "float"]
  }
  # copy $request into a scratch field and split it into method / url / params / httpversion
  mutate {
    add_field => { "request1" => "%{request}" }
  }
  grok {
    match        => { "request1" => "(?:%{WORD:requestmethod} %{URIPATH:url}(?:%{URIPARAM:params})?(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" }
    remove_field => ["request1"]
  }
  geoip {
    source => "remote_addr"
    #database => "/opt/logstash-6.5.4/Geoip/GeoLite2-City_20191022/GeoLite2-City.mmdb"
  }
}
output {
  if [fields][type] =~ "nginxlogs" {
    elasticsearch {
      hosts              => [ "172.27.27.220:9200", "172.27.27.221:9200", "172.27.27.222:9200" ]
      index              => "logstash-nginxaccess-%{+YYYY.MM.dd}"
      template_overwrite => true
      user               => "xxx"
      password           => "xxxx"
      #document_type => "%{[type]}"
    }
  }
  #stdout { codec => rubydebug }
}
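As a quick sanity check on the grok pattern, a hypothetical request value of "GET /api/v1/users?id=3 HTTP/1.1" would be broken up roughly as follows:

requestmethod => "GET"
url           => "/api/v1/users"
params        => "?id=3"
httpversion   => "1.1"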
With this configuration the new url field is mapped as text, and only then does fuzzy (partial) search work, as in the query example below.
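For instance, in the Kibana search bar a query such as the following (the path segment is hypothetical) now matches a document whose url is /api/v1/users, because the text field is tokenized; against the keyword-typed uri the same query would only match the exact full value:

url: users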
Original post: https://www.cnblogs.com/jcici/p/11955912.html