
Building a Distributed Compilation Environment with Kafka and ZooKeeper



1: Install the JDK on every machine

Download the JDK, ZooKeeper, and Kafka on every machine:

Links: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

       http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.13/

       http://mirrors.shu.edu.cn/apache/kafka/2.0.0/kafka_2.11-2.0.0.tgz

2: Deploy the Java environment and set environment variables

sudo tar -zxvf jdk-8u191-linux-x64.tar.gz -C /opt/

echo "export JAVA_HOME=\"/opt/jdk1.8.0_191\"
export PATH=\"\$PATH:\$JAVA_HOME/bin\"
export CLASSPATH=\".:\$JAVA_HOME/lib:\$JAVA_HOME/jre/lib\"
" >> ~/.bashrc

source ~/.bashrc

java -version

3: Deploy the ZooKeeper environment

sudo tar -zxvf zookeeper-3.4.13.tar.gz -C /opt/

#Set environment variables
export PATH="$PATH:$JAVA_HOME/bin:/opt/zookeeper-3.4.13/bin:/opt/kafka_2.11-2.0.0/bin"

#zoo.cfg configuration, identical on all three servers
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
dataLogDir=/var/log/zookeeper

server.1=109.123.100.126:2888:3888
server.2=109.123.100.137:2888:3888
server.3=109.123.100.139:2888:3888

 

#Create the data and log directories and set permissions
sudo mkdir -p /var/log/zookeeper && sudo mkdir -p /data/zookeeper

sudo chmod -R a+w /var/log/zookeeper && sudo chmod -R a+w /data/zookeeper

 

#server 126
echo "1" > /data/zookeeper/myid

#server 137
echo "2" > /data/zookeeper/myid

#server 139
echo "3" > /data/zookeeper/myid
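
The original post does not show the step that actually starts the ensemble; a minimal sketch, assuming the configuration above has been saved as /opt/zookeeper-3.4.13/conf/zoo.cfg on every server:

#Start ZooKeeper on every server (zkServer.sh reads conf/zoo.cfg by default)
zkServer.sh start

#Check each node's role: one should report "leader", the other two "follower"
zkServer.sh status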

 

4: Deploy the Kafka environment

sudo tar -zxvf kafka_2.11-2.0.0.tgz -C /opt

#Set environment variables (same PATH as in step 3)
export PATH="$PATH:$JAVA_HOME/bin:/opt/zookeeper-3.4.13/bin:/opt/kafka_2.11-2.0.0/bin"
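
Before starting the brokers, config/server.properties on each machine also needs per-broker settings. The original post does not list them; the snippet below is a minimal sketch, where /data/kafka is an assumed data directory (not from the original) and the broker ids mirror the ZooKeeper myid values:

#Edit /opt/kafka_2.11-2.0.0/config/server.properties on each broker
#broker.id must be unique: e.g. 1 on .126, 2 on .137, 3 on .139
broker.id=1
#listeners: advertise this machine's own IP so the other hosts can reach it
listeners=PLAINTEXT://109.123.100.126:9092
#log.dirs: where Kafka keeps its data (assumed path, create it beforehand)
log.dirs=/data/kafka
#zookeeper.connect: the ensemble deployed in step 3
zookeeper.connect=109.123.100.126:2181,109.123.100.137:2181,109.123.100.139:2181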

 

#Start the Kafka cluster by running the following command on every machine
kafka-server-start.sh -daemon /opt/kafka_2.11-2.0.0/config/server.properties

 

#Verify the result with jps
scm@scm-100-137:~$ jps
5090 Jps
3650 QuorumPeerMain
5007 Kafka
#Seeing all three processes means the deployment succeeded
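
As an extra check (not in the original post), the broker registrations can be inspected directly in ZooKeeper; Kafka registers each broker under the /brokers/ids znode:

#Optional check: list the broker ids registered in ZooKeeper
zkCli.sh -server 109.123.100.126:2181
#inside the CLI; with the broker.id values assumed above, this should print [1, 2, 3]
ls /brokers/ids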

5: Create a topic and simulate multiple producers with one consumer, and multiple consumers with one producer

Create a topic named tizen-unified; every message in this topic carries tizen-unified build information.

kafka-topics.sh --create --replication-factor 2 --zookeeper 109.123.100.126:2181,109.123.100.137:2181,109.123.100.139:2181 --partitions 1 --topic tizen-unified
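
A quick way to verify the topic (not shown in the original post) is --describe, which prints the partition leader and replica assignment:

#Sanity check: expect 1 partition with 2 replicas
kafka-topics.sh --describe --zookeeper 109.123.100.126:2181 --topic tizen-unified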

Start a producer:

kafka-console-producer.sh --broker-list 109.123.100.126:9092 --topic tizen-unified

scm@scm-100-144:~$ kafka-console-producer.sh --broker-list 109.123.100.126:9092 --topic tizen-unified
>building package1
>building package2
>building package3
>building package4
>building package5

Start two consumers (both belonging to the same group). Since the topic has only one partition, only one consumer in the group is assigned to it at a time; the split output below most likely reflects a rebalance that handed the partition to the second consumer after the first two messages:

kafka-console-consumer.sh --bootstrap-server 109.123.100.126:9092 --topic tizen-unified --from-beginning --group tizen-worker

scm@scm-100-137:~$ kafka-console-consumer.sh --bootstrap-server 109.123.100.126:9092 --topic tizen-unified --from-beginning --group tizen-worker
building package3
building package4
building package5

scm@scm-100-126:~$ kafka-console-consumer.sh --bootstrap-server 109.123.100.126:9092 --topic tizen-unified --from-beginning --group tizen-worker
building package1
building package2
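
In an actual distributed compilation setup, each build machine would run a consumer in the tizen-worker group and build whichever package name it receives. A minimal sketch, where build_package.sh is a hypothetical build script (not part of the original post):

#Hypothetical worker loop: each received message is a package name to build
kafka-console-consumer.sh --bootstrap-server 109.123.100.126:9092 \
    --topic tizen-unified --group tizen-worker | while read pkg; do
    ./build_package.sh "$pkg"    #build_package.sh is an assumed placeholder
done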

 

Original post: https://www.cnblogs.com/Spider-spiders/p/9998040.html
