
Hadoop distributed deployment

Published: 2018-10-30 18:46:07


Environment:
CentOS7.5
192.168.11.205 test2
192.168.11.206 test3
192.168.11.207 test4-8g

Set up hosts (on all three nodes)

# vim /etc/hosts
192.168.11.205  test2
192.168.11.206  test3
192.168.11.207  test4-8g

Install the JDK (required on all three nodes)

https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html (official JDK download page)

#  tar xvf jdk-8u151-linux-x64.tar.gz  -C /usr/local/
#  mv /usr/local/jdk1.8.0_151/ /usr/local/jdk
# vim /etc/profile            # append at the end of the file
export JAVA_HOME=/usr/local/jdk
export PATH=$PATH:$JAVA_HOME/bin
# source /etc/profile
# java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
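A provisioning script can fail fast on the wrong JDK by parsing that banner. A sketch — the sample string is the output shown above; on a real node it would come from `java -version 2>&1 | head -n1`:

```shell
# Extract the major version from a `java -version`-style banner.
# On a real node: ver_line=$(java -version 2>&1 | head -n1)
ver_line='java version "1.8.0_151"'
major=$(printf '%s\n' "$ver_line" | sed -E 's/.*"1\.([0-9]+)\..*/\1/')
echo "JDK major version: $major"
[ "$major" -eq 8 ] || echo "warning: Hadoop 2.8 expects JDK 8"
```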

Passwordless SSH (from test2 to the other nodes)

[root@test2 ~]# ssh-keygen
[root@test2 ~]# ssh-copy-id test3
[root@test2 ~]# ssh-copy-id test4-8g

Download Hadoop

https://hadoop.apache.org/releases.html (official releases page)

[root@test2 ~]# tar xvf hadoop-2.8.5.tar.gz  

Environment variables (required on all three nodes)

# vim .bash_profile     # append at the end of the file
export HADOOP_HOME=/root/hadoop-2.8.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
[root@test2 ~]#  source .bash_profile
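Only the bin and sbin directories need to be on PATH (they hold the `hdfs`, `hadoop`, and `start-*.sh` commands). A quick check that the entries actually landed, using the install prefix from this guide:

```shell
# Add Hadoop's executable directories to PATH and verify the result.
export HADOOP_HOME=/root/hadoop-2.8.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH ok" ;;
  *)                      echo "PATH is missing $HADOOP_HOME/bin" ;;
esac
```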

Configure Hadoop

[root@test2 ~]# cd hadoop-2.8.5/etc/hadoop/
[root@test2 hadoop]# vim hadoop-env.sh
export JAVA_HOME=/usr/local/jdk
[root@test2 hadoop]# vim core-site.xml
 <configuration>
 <property>
 <name>fs.defaultFS</name>
 <value>hdfs://test2:9000</value>
 </property>
 <property>
 <name>hadoop.tmp.dir</name>
 <value>/home/hadoop/tmp</value>
 </property>
 </configuration>
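fs.defaultFS is the address every client and DataNode uses to reach the NameNode. On a configured node, `hdfs getconf -confKey fs.defaultFS` prints the effective value; a plain-shell sketch of the same check against a sample file:

```shell
# Write a sample core-site.xml and extract fs.defaultFS from it.
cat > /tmp/core-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://test2:9000</value>
  </property>
</configuration>
EOF
# Print the value following the fs.defaultFS property name.
grep -A1 '<name>fs.defaultFS</name>' /tmp/core-site-demo.xml |
  sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
```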
[root@test2 hadoop]# vim hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
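dfs.replication=2 stores every block twice, so usable HDFS capacity is roughly raw DataNode disk capacity divided by the replication factor. A back-of-the-envelope sketch (the figures are illustrative, not from this cluster):

```shell
# Rough capacity estimate under 2-way replication (made-up figures).
RAW_GB=300      # e.g. 3 nodes x 100 GB of DataNode disk
REPL=2          # dfs.replication from hdfs-site.xml
echo "usable: $(( RAW_GB / REPL )) GB"
```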
[root@test2 hadoop]# cp mapred-site.xml.template mapred-site.xml     # Hadoop ignores the .template file
[root@test2 hadoop]# vim mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.task.io.sort.factor</name>
<value>100</value>
</property>
<property>
<name>mapreduce.reduce.shuffle.parallelcopies</name>
<value>50</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>0.0.0.0:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>0.0.0.0:19888</value>
</property>
</configuration>
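mapreduce.task.io.sort.factor caps how many spill files one merge pass combines. With factor 100, a map task that produced 1000 spills needs about ten first-round merges before the final merge — a quick sketch of that arithmetic:

```shell
# First-round merge count = ceiling(spills / io.sort.factor).
SPILLS=1000
FACTOR=100      # mapreduce.task.io.sort.factor from mapred-site.xml
echo "first-round merges: $(( (SPILLS + FACTOR - 1) / FACTOR ))"
```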
[root@test2 hadoop]# vim yarn-site.xml 
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>test2:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>test2:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>test2:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>test2:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>test2:8088</value>
</property>
<property>
<name>yarn.application.classpath</name>
<value>/root/hadoop-2.8.5/etc/hadoop:/root/hadoop-2.8.5/share/hadoop/common/lib/*:/root/hadoop-2.8.5/share/hadoop/common/*:/root/hadoop-2.8.5/share/hadoop/hdfs:/root/hadoop-2.8.5/share/hadoop/hdfs/lib/*:/root/hadoop-2.8.5/share/hadoop/hdfs/*:/root/hadoop-2.8.5/share/hadoop/mapreduce/*:/root/hadoop-2.8.5/share/hadoop/yarn:/root/hadoop-2.8.5/share/hadoop/yarn/lib/*:/root/hadoop-2.8.5/share/hadoop/yarn/*</value>
</property>
</configuration>

Before copying, also list every worker host in the slaves file in the same directory — start-all.sh launches DataNode/NodeManager daemons only on the hosts named there:

[root@test2 hadoop]# vim slaves
test2
test3
test4-8g

Copy to the other nodes

[root@test2 ~]# scp -r hadoop-2.8.5 test3:/root
[root@test2 ~]# scp -r hadoop-2.8.5 test4-8g:/root
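The long yarn.application.classpath value configured above is easy to mistype (the original even mixed `hadoop2.8.5` and `hadoop-2.8.5`). Building it from a single prefix variable avoids that class of error; on a configured node, `hadoop classpath` prints the effective value:

```shell
# Assemble the YARN application classpath from one install prefix.
H=/root/hadoop-2.8.5
CP="$H/etc/hadoop"
for d in common/lib common hdfs/lib hdfs mapreduce yarn/lib yarn; do
  CP="$CP:$H/share/hadoop/$d/*"
done
echo "$CP"
```

This is a simplified sketch; the exact set of entries should match whatever the install actually contains.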

Format the filesystem (on the NameNode, test2, only)

[root@test2 ~]# hdfs namenode -format       # formats a new distributed filesystem; do not run on DataNodes, and re-running it later wipes HDFS metadata

Start the daemons (on test2 only; start-all.sh reaches the workers over SSH)

[root@test2 ~]# start-all.sh        # starts the HDFS and YARN daemons

Check the daemons (on all three nodes; the output below is from test2 — workers show only DataNode and NodeManager)

# jps
24627 Jps
22692 ResourceManager
22968 NodeManager
22250 NameNode
22378 DataNode
22539 SecondaryNameNode
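A small sketch that checks the expected daemons against jps output. The heredoc replays the sample above; on a real node, replace it with `out=$(jps)`:

```shell
# Check that all expected Hadoop daemons appear in jps output.
out=$(cat <<'EOF'
22692 ResourceManager
22968 NodeManager
22250 NameNode
22378 DataNode
22539 SecondaryNameNode
EOF
)
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if printf '%s\n' "$out" | grep -q " $d\$"; then
    echo "$d up"
  else
    echo "$d MISSING"
  fi
done
```

The anchored pattern (space before the name, end-of-line after) keeps NameNode from matching the SecondaryNameNode line.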

Verify

http://192.168.11.205:50070       # NameNode web UI



Original article: http://blog.51cto.com/13767724/2310841
