一. Environment Preparation
Role | Server IP |
logstash agent | 10.1.11.31 |
logstash agent | 10.1.11.35 |
logstash agent | 10.1.11.36 |
logstash central | 10.1.11.13 |
elasticsearch | 10.1.11.13 |
redis | 10.1.11.13 |
kibana | 10.1.11.13 |
The architecture diagram is not reproduced here; the overall flow is as follows:
1) The logstash agent on each remote node collects local logs and pushes them onto a list queue in the remote redis (a way to peek at this queue is sketched after this list)
2) redis acts as middleware for log collection: it buffers log data from the remote nodes, smoothing bursts and improving concurrency
3) The central logstash reads data from redis and from local log files, and ships it to elasticsearch for storage and indexing
4) kibana reads data from elasticsearch and presents it to users through a web GUI
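As an illustration of step 1, the agent pushes each event onto the redis list as a JSON string (the default codec of logstash's redis output), so you can peek at the queue with redis-cli. A minimal sketch, assuming the key tomcat_api and port 6377 configured later in this article; the sample event shown is invented for illustration:

# Peek at the oldest queued event on the redis host (key and port come from the configs below)
redis-cli -h 10.1.11.13 -p 6377 llen tomcat_api       # queue depth
redis-cli -h 10.1.11.13 -p 6377 lrange tomcat_api 0 0 # oldest entry, roughly:
# {"message":"...","@timestamp":"...","type":"tomcat_api","host":"10.1.11.31"}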
二. Installation
Installing ELK is simple: download the binary packages and extract them, and they are ready to use. The required packages are:
elasticsearch-1.7.1.tar.gz
kibana-4.1.1-linux-x64.tar.gz
logstash-1.5.3.tar.gz
1) Start redis (10.1.11.13)
Download the redis source from the official site, compile and install it, then apply the following configuration and start it:
# Tune kernel parameters:
echo 1 > /proc/sys/vm/overcommit_memory
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 524288 > /proc/sys/net/core/somaxconn

# Edit the redis configuration file as follows:
[root@PreRelease logstash]# cat /etc/redis-logstash.conf
daemonize yes
pidfile /data/redis-logstash/run/redis.pid
port 6377
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile "/data/redis-logstash/log/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-logstash/db
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
maxmemory 32212254720
maxmemory-policy allkeys-lru

# Start redis
/usr/local/bin/redis-server /etc/redis-logstash.conf
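Before moving on, it is worth confirming that redis answers on the non-default port; a minimal sketch using the standard redis-cli commands:

# Connectivity check against the port configured above
redis-cli -p 6377 ping                          # expect PONG
redis-cli -p 6377 config get maxmemory-policy   # expect allkeys-lru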
2) Install the logstash agent (10.1.11.31/35/36)
Extract logstash-1.5.3.tar.gz to /usr/local:
cd /usr/local
ln -s logstash-1.5.3 logstash
# Create /etc/logstash to store the agent-side rule files
mkdir /etc/logstash
# Agent-side rules for collecting tomcat logs
vim /etc/logstash/tomcat_api.conf

# Log input source
input {
  file {
    type => "tomcat_api"             # log category name
    path => "/data/logs/bd_api/api"  # log file path
    start_position => "beginning"    # start collecting from the beginning of the file
  }
}
# Filter rules
filter {
  if [type] == "tomcat_api" {
    # multiline merges several lines into one event: a java exception spans many
    # lines, but it should be treated as a single log record
    multiline {
      patterns_dir => "/usr/local/logstash/patterns"  # directory of pattern files holding the regexes used to match log fields
      pattern => "^%{TIMESTAMP_ISO8601}"              # pattern to match against
      negate => true      # true merges every line that does NOT match the pattern (default false)
      what => "previous"  # non-matching lines are appended to the preceding matching line, forming one record
    }
    # grok parses the log line into fields
    grok {
      patterns_dir => "/usr/local/logstash/patterns"
      match => { "message" => "%{LOG4JLOG}" }
      # Define the LOG4JLOG pattern in /usr/local/logstash/patterns as:
      # LOG4JLOG %{TIMESTAMP_ISO8601:datetime}\s+(?<thread>\S+)\s+(?<line>\d+)\s+(?<level>\S+)\s+(?<class>\S+)\s+-\s+(?<msg>.*)
    }
    # mutate can replace field contents
    mutate {
      replace => [ "host", "10.1.11.31"]
    }
  }
}
# Log output
output {
  # enable while debugging rules
  #stdout { codec => "rubydebug" }
  # push events onto the remote redis list
  redis {
    host => "10.1.11.13"
    port => 6377
    data_type => "list"
    key => "tomcat_api"
  }
}
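To make the grok stage concrete, here is a hypothetical log4j line (invented for illustration) and the fields that the %{LOG4JLOG} pattern defined above would extract from it; the commented-out stdout { codec => "rubydebug" } output prints exactly this kind of breakdown while debugging:

# Hypothetical input line:
2015-09-01 10:15:23,456 main 42 ERROR com.example.ApiController - request failed: java.lang.NullPointerException

# Fields extracted by %{LOG4JLOG}:
datetime => "2015-09-01 10:15:23,456"
thread   => "main"
line     => "42"
level    => "ERROR"
class    => "com.example.ApiController"
msg      => "request failed: java.lang.NullPointerException"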
3) Install the central logstash (10.1.11.13)
Extract logstash-1.5.3.tar.gz to /usr/local:
cd /usr/local
ln -s logstash-1.5.3 logstash
# Create /etc/logstash to store the rule files for the central end and the local agent
mkdir /etc/logstash
# Two rule files are created here:
/etc/logstash/
├── central.conf     # logstash rules for the central end
└── tomcat_uat.conf  # logstash rules for the local agent

vim central.conf
input {
  ## product
  # pull logs of category tomcat_api from redis
  redis {
    host => "127.0.0.1"
    port => 6377
    type => "redis-input"
    data_type => "list"
    key => "tomcat_api"
  }
  # pull logs of category tomcat_editor from redis
  redis {
    host => "127.0.0.1"
    port => 6377
    type => "redis-input"
    data_type => "list"
    key => "tomcat_editor"
  }
}
output {
  #stdout { codec => "rubydebug" }
  # send logs to elasticsearch for storage and indexing
  elasticsearch {
    flush_size => 50000
    idle_flush_time => 10
    cluster => "logstash-1113"
    host => ["127.0.0.1:9300"]
    workers => 2
  }
}

#-----------------------------------------------------------------
vim tomcat_uat.conf
input {
  file {
    type => "tomcat_api_ab"
    path => "/data/logs/bd_api/errors/api_error"
    start_position => "beginning"
  }
  file {
    path => "/data/logs/bd_admin/admin"
    type => "tomcat_9083"
    start_position => "beginning"
  }
}
filter {
  if [type] in ["tomcat_api_ab","tomcat_9083"] {
    multiline {
      patterns_dir => "/usr/local/logstash/patterns"
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
    grok {
      patterns_dir => "/usr/local/logstash/patterns"
      match => { "message" => "%{LOG4JLOG}" }
    }
    mutate {
      replace => [ "host", "10.1.11.13"]
    }
  }
}
output {
  #stdout { codec => "rubydebug" }
  elasticsearch {
    flush_size => 50000
    idle_flush_time => 10
    cluster => "logstash-1113"
    host => ["127.0.0.1:9300"]
    workers => 2
  }
}
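Since the central end loads two rule files, a syntax check before starting saves restarts; logstash 1.5's agent command accepts a --configtest flag, e.g.:

# Validate each rule file without starting the pipeline
/usr/local/logstash/bin/logstash agent --configtest -f /etc/logstash/central.conf
/usr/local/logstash/bin/logstash agent --configtest -f /etc/logstash/tomcat_uat.conf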
4) Install elasticsearch
# Extract elasticsearch-1.7.1.tar.gz to /usr/local
tar xf elasticsearch-1.7.1.tar.gz -C /usr/local
cd /usr/local
ln -s elasticsearch-1.7.1 elasticsearch

[root@PreRelease config]# egrep -v '^#|^$' elasticsearch.yml
# cluster name
cluster.name: logstash-1113
# data/index directory
path.data: /data/logstash/els/data
# temporary work directory
path.work: /data/logstash/els/work
# log directory
path.logs: /data/logstash/els/logs
# fixes the "unable to connect to elasticsearch" error seen when opening kibana (previously hit with kibana 3)
http.cors.enabled: true

# Adjust the JVM heap size
vim /usr/local/elasticsearch/bin/elasticsearch.in.sh
if [ "x$ES_MIN_MEM" = "x" ]; then
    ES_MIN_MEM=4g
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
    ES_MAX_MEM=16g
fi
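Once elasticsearch is running (started in section 三), its standard REST API can confirm the cluster came up under the expected name; a quick sketch:

curl 'http://127.0.0.1:9200/_cluster/health?pretty'   # expect cluster_name logstash-1113, status green or yellow
curl 'http://127.0.0.1:9200/_cat/indices?v'           # logstash-YYYY.MM.DD indices appear once events arrive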
5) Install kibana
# Extract kibana-4.1.1-linux-x64.tar.gz to /usr/local
tar xf kibana-4.1.1-linux-x64.tar.gz -C /usr/local
cd /usr/local
ln -s kibana-4.1.1-linux-x64 kibana
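Kibana 4 needs no changes for this single-host layout: the defaults in its config/kibana.yml already point at the local elasticsearch. The relevant shipped defaults are roughly:

# /usr/local/kibana/config/kibana.yml (shown for reference)
port: 5601
elasticsearch_url: "http://localhost:9200"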
三. Starting ELK
Start the central end:
### Starting logstash ###
/usr/local/elasticsearch/bin/elasticsearch -d || /bin/true
nohup /usr/local/logstash/bin/logstash agent -f /etc/logstash/central.conf -l /data/logstash/log/logstash-central.log &>/data/logstash/log/logstash-central.out || /bin/true &
sleep 3
nohup /usr/local/logstash/bin/logstash agent -f /etc/logstash/tomcat_uat.conf -l /data/logstash/log/logstash-uat.log &>/data/logstash/log/logstash-uat.out || /bin/true &
sleep 1
nohup /usr/local/kibana/bin/kibana &>/dev/null || /bin/true &
Start the agent end:
### starting logstash api-agent ###
/usr/bin/nohup /usr/local/logstash/bin/logstash agent -f /etc/logstash/tomcat_api.conf -l /data/logstash/log/logstash-api.log &> /dev/null || /bin/true &
Copy the commands above into /etc/rc.local to start everything automatically at boot.
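After a reboot, the pipeline can be verified end to end; a sketch, assuming the defaults above (logstash writes daily logstash-YYYY.MM.DD indices):

# agent -> redis: the queue should drain toward 0 while the central logstash consumes it
redis-cli -h 10.1.11.13 -p 6377 llen tomcat_api
# central -> elasticsearch: daily indices should be present and growing
curl 'http://127.0.0.1:9200/_cat/indices?v'
# elasticsearch -> kibana: browse http://10.1.11.13:5601 and create an index pattern for logstash-*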