app-server(filebeat) -> kafka -> logstash -> elasticsearch -> kibana
Base system environment
# cat /etc/redhat-release
CentOS release 6.5 (Final)
# uname -r
2.6.32-431.el6.x86_64

192.168.162.51 logstash01
192.168.162.53 logstash02
192.168.162.55 logstash03
192.168.162.56 logstash04
192.168.162.57 logstash05
192.168.162.58 elasticsearch01
192.168.162.61 elasticsearch02
192.168.162.62 elasticsearch03
192.168.162.63 elasticsearch04
192.168.162.64 elasticsearch05
192.168.162.66 kibana
192.168.128.144 kafka01
192.168.128.145 kafka02
192.168.128.146 kafka03
192.168.138.75 filebeat,weblogic
Packages downloaded from the Elastic site, all at version 6.0.0-beta2:
elasticsearch-6.0.0-beta2.rpm
filebeat-6.0.0-beta2-x86_64.rpm
grafana-4.4.3-1.x86_64.rpm
heartbeat-6.0.0-beta2-x86_64.rpm
influxdb-1.3.5.x86_64.rpm
jdk-8u144-linux-x64.rpm
kafka_2.12-0.11.0.0.tgz
kibana-6.0.0-beta2-x86_64.rpm
logstash-6.0.0-beta2.rpm
Install filebeat on the application server
# yum localinstall filebeat-6.0.0-beta2-x86_64.rpm -y
After installation, the directory layout of the filebeat RPM:
# ls /usr/share/filebeat/
bin  kibana  module  NOTICE  README.md  scripts
The configuration file is
/etc/filebeat/filebeat.yml
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /data1/logs/apphfpay_8086_domain/apphfpay.yiguanjinrong.yg.*
  multiline.pattern: '^(19|20)\d\d-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01]) [012][0-9]:[0-6][0-9]:[0-6][0-9]'
  multiline.negate: true
  multiline.match: after

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#----------------------------- Kafka output ---------------------------------
output.kafka:
  hosts: ['192.168.128.144:9092','192.168.128.145:9092','192.168.128.146:9092']
  topic: 'credit'
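Before starting the service, filebeat can check its own configuration and its connectivity to the kafka output. A quick sketch, assuming the `test` subcommand is available in this 6.x build (it is documented for filebeat 6.x):
# filebeat test config -c /etc/filebeat/filebeat.yml
# filebeat test output -c /etc/filebeat/filebeat.yml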
# /etc/init.d/filebeat start
Log file
/var/log/filebeat/filebeat
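Optionally, make filebeat start at boot. On CentOS 6 the RPM ships the SysV init script used above, so chkconfig can manage it (a minimal sketch):
# chkconfig --add filebeat
# chkconfig filebeat on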
Set the hostname on each of the three kafka servers and record it in /etc/hosts
# host=kafka01 && hostname $host && echo "192.168.128.144" $host >> /etc/hosts
# host=kafka02 && hostname $host && echo "192.168.128.145" $host >> /etc/hosts
# host=kafka03 && hostname $host && echo "192.168.128.146" $host >> /etc/hosts
Install Java
# yum localinstall jdk-8u144-linux-x64.rpm -y
Extract the kafka tarball and move the extracted directory to /usr/local/kafka
# tar fzx kafka_2.12-0.11.0.0.tgz
# mv kafka_2.12-0.11.0.0 /usr/local/kafka
# pwd
/usr/local/kafka/config
# ls
connect-console-sink.properties    connect-log4j.properties       server.properties
connect-console-source.properties  connect-standalone.properties  tools-log4j.properties
connect-distributed.properties     consumer.properties            zookeeper.properties
connect-file-sink.properties       log4j.properties
connect-file-source.properties     producer.properties
Modify the configuration files
# grep -Ev "^$|^#" server.properties
broker.id=1
delete.topic.enable=true
listeners=PLAINTEXT://192.168.128.144:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data1/kafka-logs
num.partitions=12
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=zk01.yiguanjinrong.yg:2181,zk02.yiguanjinrong.yg:2181,zk03.yiguanjinrong.yg:2181
zookeeper.connection.timeout.ms=6000

# grep -Ev "^$|^#" consumer.properties
zookeeper.connect=zk01.yiguanjinrong.yg:2181,zk02.yiguanjinrong.yg:2181,zk03.yiguanjinrong.yg:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group

# grep -Ev "^$|^#" producer.properties
bootstrap.servers=192.168.128.144:9092,192.168.128.145:9092,192.168.128.146:9092
compression.type=none
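Note that broker.id and listeners are the only values above that must differ per broker: each node needs a unique id and its own listener address. A quick sketch of the per-node changes, assuming the kafka01 config is copied to the other two hosts (the sed commands are illustrative, not from the original post). On kafka02:
# sed -i 's/broker.id=1/broker.id=2/; s/192.168.128.144:9092/192.168.128.145:9092/' /usr/local/kafka/config/server.properties
On kafka03:
# sed -i 's/broker.id=1/broker.id=3/; s/192.168.128.144:9092/192.168.128.146:9092/' /usr/local/kafka/config/server.properties
It is also safest to pre-create the data directory referenced by log.dirs on every broker:
# mkdir -p /data1/kafka-logs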
First, verify that the configuration is sound
Start zookeeper in the foreground and check for errors
# /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties
Start kafka in the foreground and check for errors
# /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
If neither reports errors, run both in the background, starting zookeeper first and then kafka (a startup script can of course be written for this):
# nohup /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties &
# nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &
Check that both are up; the default listening ports are 2181 (zookeeper) and 9092 (kafka)
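A quick way to confirm the listeners, using netstat as shipped with CentOS 6:
# netstat -lntp | grep -E ':2181|:9092'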
Create a topic
# bin/kafka-topics.sh --create --zookeeper zk01.yiguanjinrong.yg:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
List the created topics
# bin/kafka-topics.sh --list --zookeeper zk01.yiguanjinrong.yg:2181
test
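The partition and replica layout of a topic can also be inspected with the same tool:
# bin/kafka-topics.sh --describe --zookeeper zk01.yiguanjinrong.yg:2181 --topic test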
Simulate a client producing messages
# bin/kafka-console-producer.sh --broker-list 192.168.128.144:9092 --topic test
Then type some lines of input, pressing Enter after each one.
Simulate a client consuming messages (if the messages come through, the kafka deployment is working)
# bin/kafka-console-consumer.sh --bootstrap-server 192.168.128.144:9092 --topic test --from-beginning
Delete the topic
# bin/kafka-topics.sh --delete --zookeeper zk01.yiguanjinrong.yg:2181 --topic test
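Once filebeat is shipping logs, the same console consumer can confirm end to end that events are landing in the credit topic configured earlier (--max-messages caps the output; the flag is standard for kafka-console-consumer.sh):
# bin/kafka-console-consumer.sh --bootstrap-server 192.168.128.144:9092 --topic credit --from-beginning --max-messages 5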
# yum localinstall jdk-8u144-linux-x64.rpm -y
# yum localinstall logstash-6.0.0-beta2.rpm -y
The logstash install directory and the configuration directory (which contains no config file by default) are, respectively:
# /usr/share/logstash/    (note: installation does not add its bin directory to PATH)
# /etc/logstash/conf.d/
# cat /etc/logstash/conf.d/logstash.conf
input {
    kafka {
        bootstrap_servers => "192.168.128.144:9092,192.168.128.145:9092,192.168.128.146:9092"
        topics => ["credit"]
        group_id => "test-consumer-group"
        codec => "plain"
        consumer_threads => 1
        decorate_events => true
    }
}

output {
    elasticsearch {
        hosts => ["192.168.162.58:9200","192.168.162.61:9200","192.168.162.62:9200","192.168.162.63:9200","192.168.162.64:9200"]
        index => "logs-%{+YYYY.MM.dd}"
        workers => 1
    }
}
Check that the configuration file is valid
# /usr/share/logstash/bin/logstash -t --path.settings /etc/logstash/ --verbose
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
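Before wiring in elasticsearch, the kafka input can be smoke-tested on its own by printing events to stdout. A throwaway pipeline passed inline with -e (adjust the broker address as needed; this one-off command is a sketch, not from the original post):
# /usr/share/logstash/bin/logstash -e 'input { kafka { bootstrap_servers => "192.168.128.144:9092" topics => ["credit"] } } output { stdout { codec => rubydebug } }'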
logstash does not ship with an init script by default, but it provides a tool to generate one
View the tool's usage help
# bin/system-install --help
Usage: system-install [OPTIONSFILE] [STARTUPTYPE] [VERSION]

NOTE: These arguments are ordered, and co-dependent

OPTIONSFILE: Full path to a startup.options file
OPTIONSFILE is required if STARTUPTYPE is specified, but otherwise looks first in
/usr/share/logstash/config/startup.options and then /etc/logstash/startup.options
Last match wins

STARTUPTYPE: e.g. sysv, upstart, systemd, etc.
OPTIONSFILE is required to specify a STARTUPTYPE.

VERSION: The specified version of STARTUPTYPE to use. The default is usually
preferred here, so it can safely be omitted. Both OPTIONSFILE & STARTUPTYPE are
required to specify a VERSION.

# /usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv

The generated script is /etc/init.d/logstash. Take care to adjust its log paths; it is recommended to keep logs under /var/log/logstash:
# mkdir -p /var/log/logstash && chown logstash.logstash -R /var/log/logstash
Below is the part of /etc/init.d/logstash that needs to be modified
start() {
  # Ensure the log directory is setup correctly.
  if [ ! -d "/var/log/logstash" ]; then
    mkdir "/var/log/logstash"
    chown "$user":"$group" -R "/var/log/logstash"
    chmod 755 "/var/log/logstash"
  fi

  # Setup any environmental stuff beforehand
  ulimit -n ${limit_open_files}

  # Run the program!
  nice -n "$nice" chroot --userspec "$user":"$group" "$chroot" sh -c "
    ulimit -n ${limit_open_files}
    cd \"$chdir\"
    exec \"$program\" $args
  " >> /var/log/logstash/logstash-stdout.log 2>> /var/log/logstash/logstash-stderr.log &

  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile

  emit "$name started"
  return 0
}
# /etc/init.d/logstash start
# yum localinstall jdk-8u144-linux-x64.rpm -y
# yum localinstall elasticsearch-6.0.0-beta2.rpm -y
Install path
# /usr/share/elasticsearch/
Configuration file
# /etc/elasticsearch/elasticsearch.yml
# cat elasticsearch.yml | grep -Ev "^$|^#"
cluster.name: elasticsearch
node.name: es01                     # change the node name on each of the other nodes
path.data: /data1/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.system_call_filter: false
network.host: 192.168.162.58        # change the address on each of the other nodes
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.162.58", "192.168.162.61", "192.168.162.62", "192.168.162.63", "192.168.162.64"]
discovery.zen.minimum_master_nodes: 3
node.master: true
node.data: true
transport.tcp.compress: true
# mkdir -p /var/log/elasticsearch && chown elasticsearch.elasticsearch -R /var/log/elasticsearch
# /etc/init.d/elasticsearch start
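After all five nodes are started, the cluster health API should report a five-node green cluster (any node's address works for the query):
# curl 'http://192.168.162.58:9200/_cluster/health?pretty'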
# yum localinstall kibana-6.0.0-beta2-x86_64.rpm -y
# cat /etc/kibana/kibana.yml | grep -Ev "^$|^#"
server.port: 5601
server.host: "192.168.162.66"
elasticsearch.url: "http://192.168.162.58:9200"   # elasticsearch cluster address; any es node will do
kibana.index: ".kibana"
pid.file: /var/run/kibana/kibana.pid
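Before starting, it is worth confirming from the kibana host that the configured elasticsearch endpoint is reachable (assuming no firewall rule blocks port 9200):
# curl http://192.168.162.58:9200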
Modify the kibana init script. First create the pid directory referenced in kibana.yml
# mkdir -p /var/run/kibana
# chown kibana.kibana -R /var/run/kibana
Then change the start() section of the init script as follows
start() {
  # Ensure the log directory is setup correctly.
  [ ! -d "/var/log/kibana/" ] && mkdir "/var/log/kibana/"
  chown "$user":"$group" "/var/log/kibana/"
  chmod 755 "/var/log/kibana/"

  # Setup any environmental stuff beforehand

  # Run the program!
  chroot --userspec "$user":"$group" "$chroot" sh -c "
    cd \"$chdir\"
    exec \"$program\" $args
  " >> /var/log/kibana/kibana.stdout 2>> /var/log/kibana/kibana.stderr &

  # Generate the pidfile from here. If we instead made the forked process
  # generate it there will be a race condition between the pidfile writing
  # and a process possibly asking for status.
  echo $! > $pidfile

  emit "$name started"
  return 0
}

Start it:
# /etc/init.d/kibana start
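Once started, kibana's own status endpoint should answer on port 5601, and the UI can then be opened at http://192.168.162.66:5601 in a browser:
# curl -s http://192.168.162.66:5601/api/status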
This article comes from the "bamboo" blog; please retain this attribution: http://wuyebamboo.blog.51cto.com/3344855/1963786