Original content; please credit when reposting. Thanks.
Starting a Hadoop cluster with a shell script
1: Clarify the startup order
1) Start the ZooKeeper cluster (on centos4-02, centos6-02, and centos7-02 respectively)
app/zookeeper-3.4.5/bin/zkServer.sh start     # start the process
app/zookeeper-3.4.5/bin/zkServer.sh status    # check its status
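Instead of repeating the zkServer.sh call on each host by hand, the three starts can be driven from one loop over a host-list file. A minimal dry-run sketch, assuming a zoo.list file with one hostname per line and passwordless ssh (the echo would become a real ssh call once keys are in place):

```shell
# Dry-run sketch: print the ssh command that would start ZooKeeper on
# each host in a list file (one hostname per line). Using plain ssh
# instead of expect is an assumption (requires key-based login).
start_zk_all() {
    while read host; do
        echo "ssh $host app/zookeeper-3.4.5/bin/zkServer.sh start"
    done < "$1"
}

printf 'centos4-02\ncentos6-02\ncentos7-02\n' > /tmp/zoo.list
start_zk_all /tmp/zoo.list
```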
2) Start the JournalNodes (on centos4-02, centos6-02, and centos7-02 respectively)
app/hadoop-2.7.2/sbin/hadoop-daemon.sh start journalnode
3) Format HDFS (on centos4-01)
hdfs namenode -format
Since this cluster has already been started before, formatting is not needed here; formatting is only done on first setup.
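That "format only on first setup" rule can be made explicit with a guard: `hdfs namenode -format` lays down a current/ directory under dfs.namenode.name.dir, so its presence is a reasonable already-formatted check. A sketch (the directory path is a hypothetical stand-in, not this post's actual config, and the commands are only echoed):

```shell
# Guard sketch: skip formatting when the NameNode dir already holds a
# current/ directory. The path is a placeholder for the value of
# dfs.namenode.name.dir in hdfs-site.xml; echo keeps this a dry run.
maybe_format() {
    if [ -d "$1/current" ]; then
        echo "already formatted, skipping"
    else
        echo "hdfs namenode -format"
    fi
}

maybe_format /tmp/nn-demo       # prints the format command
mkdir -p /tmp/nn-demo/current
maybe_format /tmp/nn-demo       # now reports it would skip
```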
4) Start HDFS (on centos4-01)
app/hadoop-2.7.2/sbin/start-dfs.sh
5) Start YARN (ResourceManagers on centos7-01 and centos8-01; run the script only on centos7-01)
app/hadoop-2.7.2/sbin/start-yarn.sh
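Note that in Hadoop 2.x, start-yarn.sh only starts a ResourceManager on the node where it runs (plus the NodeManagers listed in the slaves file); with an HA pair on centos7-01/centos8-01, the standby ResourceManager on centos8-01 would have to be brought up separately. A dry-run sketch of that extra step:

```shell
# Dry-run: the command that would start the standby ResourceManager on
# centos8-01. yarn-daemon.sh lives in the same sbin directory used
# throughout this post.
standby_rm_cmd() {
    echo "ssh centos8-01 app/hadoop-2.7.2/sbin/yarn-daemon.sh start resourcemanager"
}
standby_rm_cmd
```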
6) All other nodes are DataNodes
app/hadoop-2.7.2/sbin/hadoop-daemon.sh start datanode
The script is as follows:
# (tail of the preceding loop: start ZooKeeper on each host in zoo.list)
    expect eof
EOF
done < zoo.list

echo "-------------------------start-namenode----------------------------"
app/hadoop-2.7.2/sbin/hadoop-daemon.sh start namenode
ssh centos6-01 app/hadoop-2.7.2/sbin/hadoop-daemon.sh start namenode

echo "------------------------start-YARN----------------------------"
ssh centos7-01 app/hadoop-2.7.2/sbin/start-yarn.sh

echo "------------------------start-datanode----------------------------"
while read line
do
    ip=$line
    echo "$ip"
    /usr/bin/expect <<EOF
spawn ssh -p22 centos@$ip
# wait for the shell prompt before sending the command
expect "*#"
send "app/hadoop-2.7.2/sbin/hadoop-daemon.sh start datanode\r"
expect "*#"
send "exit\r"
expect eof
EOF
done < data.list
echo "-------------------------OVER!----------------------------"
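The expect wrapper in the DataNode loop is only needed while ssh still prompts for a password; with key-based login configured, each iteration collapses to a single ssh call. A dry-run sketch (hostnames here are illustrative, fed from stdin instead of data.list):

```shell
# Dry-run: print the plain-ssh equivalent of the expect block, one
# command per host read from stdin (data.list normally supplies these;
# the hostnames below are hypothetical examples).
start_dn_cmds() {
    while read ip; do
        echo "ssh -p22 centos@$ip app/hadoop-2.7.2/sbin/hadoop-daemon.sh start datanode"
    done
}

printf 'centos5-02\ncentos8-02\n' | start_dn_cmds
```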
Original post: http://www.cnblogs.com/wendu/p/6036755.html