Deploying Kafka on a single Linux machine (filebeat + ELK)


 


 

Prerequisites:

Kafka download page: http://kafka.apache.org/downloads.html

Download kafka_2.12-2.10.0.0.tgz from there (Kafka and ZooKeeper both ship in the same package).

 

I. Install and configure the JDK (just download the JDK and set the environment variables)

JAVA_HOME=/opt/jdk1.8.0_131

CLASSPATH=.:$JAVA_HOME/lib/tools.jar

PATH=$JAVA_HOME/bin:$PATH

export JAVA_HOME CLASSPATH PATH
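These variables typically go into /etc/profile (or the user's ~/.bash_profile); after adding them, reload the file so the current shell picks them up. A minimal sketch, assuming /etc/profile is used:

# vi /etc/profile       # append the four lines above
# source /etc/profile
# echo $JAVA_HOME
/opt/jdk1.8.0_131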

 

$ java -version

java version "1.8.0_131"

Java(TM) SE Runtime Environment (build 1.8.0_131-b11)

Java HotSpot(TM) Server VM (build 25.131-b11, mixed mode)

Alternatively, set the JDK for Kafka directly in bin/kafka-run-class.sh:

vi bin/kafka-run-class.sh

JAVA_HOME=/opt/jdk1.8.0_131

II. Install Kafka

1. Install glibc

# yum -y install glibc.i686

2. Extract kafka_2.12-2.10.0.0.tgz
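The archive can be unpacked with tar, for example:

$ tar -xzf kafka_2.12-2.10.0.0.tgz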

Configure ZooKeeper first:

$cd  kafka_2.12-2.10.0.0

$vi config/zookeeper.properties

dataDir=/data/soft/kafka/data

dataLogDir=/data/soft/kafka/log

clientPort=2181

maxClientCnxns=100

tickTime=2000

initLimit=10

 

After configuring, start ZooKeeper directly (in the foreground):

$bin/zookeeper-server-start.sh config/zookeeper.properties

 

If there are no errors, it can be started in the background instead:

$nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
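Alternatively, the start scripts that ship with Kafka accept a -daemon flag, which detaches the process and writes console output under the installation's logs/ directory instead of nohup.out:

$ bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

The same flag works for bin/kafka-server-start.sh below.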

Then configure Kafka:

$ vi config/server.properties

broker.id=0

listeners=PLAINTEXT://0.0.0.0:9092

advertised.listeners=PLAINTEXT://server20.srv:9092

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/data/log/kafka

num.partitions=2

num.recovery.threads.per.data.dir=1

log.retention.check.interval.ms=300000

zookeeper.connect=localhost:2181

zookeeper.connection.timeout.ms=6000

 

Start Kafka:

$ bin/kafka-server-start.sh config/server.properties

 

If there are no errors, it can be started in the background instead:

$nohup bin/kafka-server-start.sh config/server.properties &

Check that both services are up: by default ZooKeeper listens on port 2181 and Kafka on port 9092.
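For example, the listening ports and processes can be checked like this (netstat needs the net-tools package; ss -lntp works the same way):

$ netstat -lntp | egrep '2181|9092'
$ ps -ef | grep -v grep | egrep 'zookeeper|kafka'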

3. Test Kafka

(1) Create a topic

$bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

 

(2) List the topics

$ bin/kafka-topics.sh --list --zookeeper localhost:2181

test
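The partition and replica layout of a topic can also be inspected with --describe:

$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test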

 

(3) Produce a test message (simulating a client sending messages)

$bin/kafka-console-producer.sh --broker-list 192.168.53.20:9092 --topic test

> ..hello world..          # type the message and press Enter

 

(4) Consume the test message (simulating a client receiving messages)

$bin/kafka-console-consumer.sh --bootstrap-server 192.168.53.20:9092 --topic test --from-beginning

 

..hello world..                   # if the message is received, the Kafka deployment is working

 

(5) Delete the topic

$bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test

 

If all of the above works, the single-node Kafka installation is complete.

III. Configure Filebeat

Add the following to filebeat.yml and comment out the existing Logstash output section.

#------------------- Kafka output ---------------------

output.kafka:

  hosts: ["server20.srv:9092"]

  topic: 'kafka_logstash'
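For context, a minimal filebeat.yml combining a log input with this Kafka output might look like the following sketch (assuming Filebeat 6.3 or later, where inputs are declared under filebeat.inputs; the log path is a placeholder, not from the original setup):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages        # placeholder path; point this at the logs you want to ship

output.kafka:
  hosts: ["server20.srv:9092"]
  topic: 'kafka_logstash'
  required_acks: 1             # wait for the partition leader to acknowledge each batch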

 

IV. Configure Logstash

Add the following to logstash.conf and comment out the original input { beats { ... } } block.

input {
  kafka {
    codec => "json"
    bootstrap_servers => "server20.srv:9092"
    topics => ["kafka_logstash"]
    group_id => "kafka-consumer-group"
    decorate_events => true
    auto_offset_reset => "latest"
  }
}
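The existing logstash.conf is assumed to already contain filter and output sections; if not, a minimal output to Elasticsearch could look like this sketch (the Elasticsearch address and index name are illustrative assumptions, not part of the original setup):

output {
  elasticsearch {
    hosts => ["localhost:9200"]                   # illustrative; point at your Elasticsearch node
    index => "kafka-logstash-%{+YYYY.MM.dd}"      # illustrative daily index name
  }
  stdout { codec => rubydebug }                   # optional: echo events to the console for debugging
}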

On the Logstash server, make sure the Kafka hostname resolves:

$ cat /etc/hosts

122.9.10.106    server20.srv    8bet-kafka

V. Kafka configuration files for reference

$ cat config/server.properties | egrep -v '^$|#'

broker.id=0

listeners=PLAINTEXT://0.0.0.0:9092

advertised.listeners=PLAINTEXT://server20.srv:9092

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/data/log/kafka

num.partitions=2

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=1

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=localhost:2181

zookeeper.connection.timeout.ms=6000

group.initial.rebalance.delay.ms=0

 

$ cat config/zookeeper.properties | egrep -v '^$|#'

dataDir=/data/soft/kafka/data

dataLogDir=/data/soft/kafka/zookeeper_log

clientPort=2181

maxClientCnxns=100

tickTime=2000

initLimit=10

$ cat config/producer.properties | egrep -v '^$|#'

bootstrap.servers=localhost:9092

compression.type=none

$ cat config/consumer.properties | egrep -v '^$|#'

bootstrap.servers=localhost:9092

group.id=kafka-consumer-group

VI. After everything is configured, test message consumption; if messages are received normally, the setup is successful.

$ bin/kafka-console-consumer.sh --bootstrap-server server20.srv:9092 --topic test --from-beginning
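To also verify the Filebeat-to-Kafka leg, the same console consumer can be pointed at the topic Filebeat writes to (assuming Filebeat is already running and shipping logs):

$ bin/kafka-console-consumer.sh --bootstrap-server server20.srv:9092 --topic kafka_logstash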

 

 


Original article: https://www.cnblogs.com/immense/p/11402640.html
