To route different types of logs into different indices and classify them, the default configuration files must be changed. ELK iterates quickly, so older documents found online often only apply to earlier versions.
Filebeat's document_type option was deprecated in version 6; use the configuration shown in this article instead.
deploy@Product1:~$ cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/www/bigbear_server/shared/log/ms.log
  fields:
    log_topics: server
- type: log
  enabled: true
  paths:
    - /var/www/bigbear_websocket/shared/log/ms.log
  fields:
    log_topics: socket
- type: log
  enabled: true
  paths:
    - /var/www/bigbear_admin/shared/log/ms.log
  fields:
    log_topics: admin
output.logstash:
  hosts: ["74.xxx.xx.xx:5044"]

deploy@Product3:~$ cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/www/bigbear_server/shared/log/ms.log
  fields:
    log_topics: server
- type: log
  enabled: true
  paths:
    - /var/www/bigbear_sidekiq/shared/log/ms.log
  fields:
    log_topics: sidekiq
- type: log
  enabled: true
  paths:                                     # locations of the log files
    - /application/nginx/logs/access810*.log
    - /application/nginx/logs/access.log
  fields:
    log_topics: nginx                        # tag the log type (Logstash creates a separate index per type)
output.logstash:
  hosts: ["74.xx.xx.xx:5044"]
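To illustrate what the `fields` section does: every line Filebeat harvests from a configured path is shipped as an event that carries the prospector's custom `fields.log_topics` value, which Logstash can later use for routing. A minimal Python sketch of that event shape (simplified for illustration; the real Beats event carries more metadata, and `make_event` is a hypothetical helper, not a Filebeat API):

```python
# Simplified sketch of the event a Filebeat prospector emits: each line read
# from a configured path is wrapped in an event that also carries the custom
# "fields" block defined for that prospector (here: log_topics).
def make_event(source_path, line, log_topics):
    return {
        "source": source_path,                  # file the line came from
        "message": line,                        # raw log line
        "fields": {"log_topics": log_topics},   # custom field used for routing
    }

event = make_event("/var/www/bigbear_server/shared/log/ms.log",
                   "GET /api/users 200", "server")
print(event["fields"]["log_topics"])  # -> server
```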
deploy@Product4:~$ cat /application/logstash-6.2.4/config/02-beats-input.conf
input {
  beats {
    # host => "74.207.240.124"
    codec => plain { charset => "UTF-8" }
    port => 5044
    # ssl => true
    # ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    # ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
output {
  elasticsearch {
    codec => plain { charset => "UTF-8" }
    hosts => "http://localhost:9200"
    user => "elastic"
    password => "OFBqKJ8XjSWlX8AWL0xs"
    manage_template => false
    index => "%{[fields][log_topics]}--%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
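The key line is `index => "%{[fields][log_topics]}--%{+YYYY.MM.dd}"`: Logstash substitutes each event's `log_topics` value and the event's date into the index name, so events with different topics land in different daily indices. A rough Python sketch of that substitution (an illustration of the pattern's behavior, not Logstash's actual sprintf implementation):

```python
from datetime import date

# Sketch of how the sprintf-style index pattern resolves per event:
# %{[fields][log_topics]} is replaced by the event's topic, and
# %{+YYYY.MM.dd} by the event's date.
def resolve_index(event, day):
    topic = event["fields"]["log_topics"]
    return f"{topic}--{day.strftime('%Y.%m.%d')}"

event = {"fields": {"log_topics": "nginx"}}
print(resolve_index(event, date(2018, 6, 1)))  # -> nginx--2018.06.01
```

Events tagged `server`, `socket`, `admin`, `sidekiq`, and `nginx` therefore each get their own daily index.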
Restart Kibana and Filebeat.
You can see that a new index is generated every day according to the defined fields.
Original article: http://blog.51cto.com/dellinger/2127985