Kafka Installation and Deployment
Kafka depends on ZooKeeper. Kafka ships with a bundled ZooKeeper, but in most cases a separately managed ZooKeeper cluster is the better choice.
1、 Install ZooKeeper
1) Create the directories
mkdir -p /data/kafka/zookeeper/{log,data}
2) Download the ZooKeeper release
cd /data/kafka && wget https://downloads.apache.org/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
3) Extract
tar xf zookeeper-3.4.14.tar.gz && cd zookeeper-3.4.14/conf
4) Edit the configuration file
mv zoo_sample.cfg zoo.cfg
Configuration file contents:
cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/kafka/zookeeper/data
dataLogDir=/data/kafka/zookeeper/log
clientPort=2181
server.1=172.17.42.116:2888:3888
server.2=172.17.42.118:2888:3888
server.3=172.17.51.173:2888:3888
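The server.N lines above require port 2888 (quorum traffic) and 3888 (leader election) to be reachable between the nodes, plus 2181 for clients. If firewalld is running on the machines, a sketch for opening them (assumes the default zone; skip if the firewall is disabled):

```shell
# Assumption: firewalld is in use on each node.
firewall-cmd --permanent --add-port=2181/tcp   # client connections
firewall-cmd --permanent --add-port=2888/tcp   # follower-to-leader quorum traffic
firewall-cmd --permanent --add-port=3888/tcp   # leader election
firewall-cmd --reload
```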
5) Write the node ID
echo 1 > /data/kafka/zookeeper/data/myid
6) Configure ZooKeeper log rotation
Edit conf/log4j.properties:
zookeeper.root.logger=INFO,ROLLINGFILE
zookeeper.log.dir=/data/kafka/zookeeper/logs
log4j.appender.ROLLINGFILE=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.DatePattern='.'yyyy-MM-dd
7) Edit bin/zkEnv.sh
if [ "x${ZOO_LOG_DIR}" = "x" ]
then
    ZOO_LOG_DIR="/data/kafka/zookeeper/logs"    # set the ZooKeeper log directory
fi
if [ "x${ZOO_LOG4J_PROP}" = "x" ]
then
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"    # change CONSOLE to ROLLINGFILE
fi
8) Change the ZooKeeper log file name
vim bin/zkServer.sh
_ZOO_DAEMON_OUT="$ZOO_LOG_DIR/zookeeper.log"    # change zookeeper.out to zookeeper.log
ZOOMAIN="-Dzookeeper.4lw.commands.whitelist=* ${ZOOMAIN}"    # add around line 78 to whitelist the four-letter-word commands
9) Copy to the other two machines and change their myid
rsync -avz /data/kafka/zookeeper /data/kafka/zookeeper-3.4.14 172.17.42.118:/data/kafka
rsync -avz /data/kafka/zookeeper /data/kafka/zookeeper-3.4.14 172.17.51.173:/data/kafka
10) Set myid on node 2
echo 2 > /data/kafka/zookeeper/data/myid
11) Set myid on node 3
echo 3 > /data/kafka/zookeeper/data/myid
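Steps 9–11 can be compressed into one loop run from an admin host. The ssh line below is a sketch and assumes passwordless SSH to each node; it is commented out so the loop only prints the assignments:

```shell
# Sketch: print (and optionally push) each node's myid assignment.
id=1
for host in 172.17.42.116 172.17.42.118 172.17.51.173; do
  echo "node $host -> myid $id"
  # ssh "$host" "echo $id > /data/kafka/zookeeper/data/myid"   # assumption: SSH access
  id=$((id + 1))
done
```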
12) Start, stop, and restart ZooKeeper
/data/kafka/zookeeper-3.4.14/bin/zkServer.sh start
/data/kafka/zookeeper-3.4.14/bin/zkServer.sh stop
/data/kafka/zookeeper-3.4.14/bin/zkServer.sh restart
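Once all three nodes are started, the four-letter-word commands whitelisted in step 8 give a quick health check (a sketch; requires nc, and the IPs are the cluster nodes from zoo.cfg):

```shell
# "ruok" should answer "imok"; "srvr" reports whether the node
# is currently the leader or a follower.
for host in 172.17.42.116 172.17.42.118 172.17.51.173; do
  echo "--- $host ---"
  echo ruok | nc "$host" 2181
  echo srvr | nc "$host" 2181 | grep Mode
done
```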
2、 Kafka Deployment
1) Download Kafka
wget https://archive.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz
2) Edit the configuration file
cd /data/kafka/kafka_2.12-2.3.0/config
vim server.properties
broker.id=1    # must be unique on every broker
delete.topic.enable=true
listeners=PLAINTEXT://192.168.233.167:9092    # use this machine's IP; a hostname only works if it resolves
advertised.listeners=PLAINTEXT://192.168.233.167:9092
host.name=192.168.233.167
advertised.host.name=192.168.233.167
num.network.threads=3
num.io.threads=9
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/kafka_2.12-2.3.0/logs    # Kafka data/log directory
num.partitions=9
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.233.167:2181,192.168.233.168:2181,192.168.233.169:2181    # ZooKeeper cluster addresses
zookeeper.connection.timeout.ms=12000
auto.create.topics.enable=false
unclean.leader.election.enable=false
3) Configure node 2
broker.id=2    # must be unique on every broker
listeners=PLAINTEXT://192.168.233.168:9092    # use this machine's IP; a hostname only works if it resolves
advertised.listeners=PLAINTEXT://192.168.233.168:9092
host.name=192.168.233.168
advertised.host.name=192.168.233.168
4) Configure node 3
broker.id=3    # must be unique on every broker
listeners=PLAINTEXT://192.168.233.169:9092    # use this machine's IP; a hostname only works if it resolves
advertised.listeners=PLAINTEXT://192.168.233.169:9092
host.name=192.168.233.169
advertised.host.name=192.168.233.169
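The guide sets per-node values but does not show copying the Kafka directory to the other two brokers. Mirroring the ZooKeeper rsync step, a sketch (assumes nodes 2 and 3 use the same /data/kafka layout, at the node 2/3 addresses used above):

```shell
rsync -avz /data/kafka/kafka_2.12-2.3.0 192.168.233.168:/data/kafka
rsync -avz /data/kafka/kafka_2.12-2.3.0 192.168.233.169:/data/kafka
# Then edit config/server.properties on each node as shown in steps 3) and 4).
```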
5) Start Kafka
/data/kafka/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /data/kafka/kafka_2.12-2.3.0/config/server.properties
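After starting all three brokers, a quick smoke test with the bundled CLI tools (a sketch; Kafka 2.3 still accepts --zookeeper for topic management, and since auto.create.topics.enable=false was set above, the topic must be created explicitly):

```shell
cd /data/kafka/kafka_2.12-2.3.0
# Create a replicated test topic:
bin/kafka-topics.sh --create --zookeeper 192.168.233.167:2181 \
  --replication-factor 3 --partitions 3 --topic test
# Confirm it exists and inspect partition leaders:
bin/kafka-topics.sh --describe --zookeeper 192.168.233.167:2181 --topic test
# Produce one message, then read it back:
echo "hello" | bin/kafka-console-producer.sh --broker-list 192.168.233.167:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server 192.168.233.167:9092 \
  --topic test --from-beginning --max-messages 1
```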
6) Kafka management tool
The Kafka management tool (kafka-manager) is deployed with Docker.
i. Install Docker
yum install docker-ce
ii. Pull the image
docker pull dockerkafka/kafka-manager
iii. Start kafka-manager
docker run -d -p 9000:9000 -e ZK_HOSTS=172.17.42.116:2181,172.17.42.118:2181,172.17.51.173:2181 --name kafka-manager dockerkafka/kafka-manager
Original post: https://www.cnblogs.com/sunshinea121/p/13253419.html