
Hadoop Installation



Install the JDK
Install Hadoop
Configure environment variables
Configure core-site.xml
Configure hdfs-site.xml
Configure mapred-site.xml
Configure yarn-site.xml
Configure slaves

Install the JDK

  1. cd /usr/local/src

  2. wget http://download.oracle.com/otn-pub/java/jdk/8u73-b02/jdk-8u73-linux-x64.tar.gz?AuthParam=1458008151_64a44ef61864b914ee2cb5adb5a1ffb4

  3. tar -zxf jdk-8u73-linux-x64.tar.gz

  4. mv jdk1.8.0_73/ /usr/local/

  5. vim /etc/profile.d/java.sh

Add the following:

 JAVA_HOME=/usr/local/jdk1.8.0_73
 JAVA_BIN=$JAVA_HOME/bin
 JRE_HOME=$JAVA_HOME/jre
 PATH=$PATH:$JAVA_BIN:$JRE_HOME/bin
 CLASSPATH=$JRE_HOME/lib:$JAVA_HOME/lib:$JRE_HOME/lib/charsets.jar
 export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH
source /etc/profile.d/java.sh
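
A quick sanity check that the JDK is now on the PATH (the version string shown is for the 8u73 build installed above):

 java -version    # should print: java version "1.8.0_73"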

Set the hostname on each of the three servers (edit /etc/hosts and /etc/sysconfig/network):

IP            HOSTNAME  Roles
172.16.1.212  h1        NameNode, HMaster, SecondaryNameNode, ResourceManager, zookeeper
172.16.1.213  h2        DataNode, HRegionServer, NodeManager, ResourceManager, zookeeper
172.16.1.214  h3        DataNode, HRegionServer, NodeManager, ResourceManager, zookeeper
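
Going by the table above, /etc/hosts on each server would contain entries like the following, with HOSTNAME in /etc/sysconfig/network set to h1, h2, or h3 accordingly:

 172.16.1.212 h1
 172.16.1.213 h2
 172.16.1.214 h3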
Generate an SSH key pair on each of h1, h2, and h3 with `ssh-keygen -t rsa`, then push the public keys from h2 and h3 to h1, merge them, and distribute the combined authorized_keys:
  1. [root@h2 ~]# scp /root/.ssh/id_rsa.pub root@h1:~/h2pub

  2. [root@h3 ~]# scp /root/.ssh/id_rsa.pub root@h1:~/h3pub

  3. [root@h1 ~]# cat ~/.ssh/id_rsa.pub ~/h2pub ~/h3pub > ~/.ssh/authorized_keys

  4. [root@h1 ~]# scp ~/.ssh/authorized_keys root@h2:~/.ssh/authorized_keys

  5. [root@h1 ~]# scp ~/.ssh/authorized_keys root@h3:~/.ssh/authorized_keys
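
With authorized_keys in place on all three hosts, passwordless SSH can be verified from h1; each command should print the remote hostname without prompting for a password:

 [root@h1 ~]# ssh h2 hostname
 [root@h1 ~]# ssh h3 hostname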

Install Hadoop


  1. mkdir /home/hadoop

  2. cd !$    # !$ expands to the last argument of the previous command, i.e. /home/hadoop

  3. wget http://apache.opencas.org/hadoop/common/hadoop-2.6.3/hadoop-2.6.3.tar.gz

  4. tar -zxvf hadoop-2.6.3.tar.gz

  5. mv hadoop-2.6.3 hadoop

Configure environment variables

vim ~/.bashrc
and add:


 #Hadoop Environment Variables
 export HADOOP_HOME=/home/hadoop/hadoop
 export HADOOP_INSTALL=$HADOOP_HOME
 export HADOOP_MAPRED_HOME=$HADOOP_HOME
 export HADOOP_COMMON_HOME=$HADOOP_HOME
 export HADOOP_HDFS_HOME=$HADOOP_HOME
 export YARN_HOME=$HADOOP_HOME
 export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
 export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

source ~/.bashrc
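
A quick check that the variables took effect:

 hadoop version    # should report Hadoop 2.6.3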

Configure core-site.xml


vim /home/hadoop/hadoop/etc/hadoop/core-site.xml


<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://h1:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>io.native.lib.available</name>
    <value>true</value>
  </property>
</configuration>
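
Since hadoop.tmp.dir points at file:/home/hadoop/hadoop/tmp, it does no harm to create that directory up front on each node (Hadoop usually creates it on demand, but doing so explicitly avoids permission surprises):

 mkdir -p /home/hadoop/hadoop/tmp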

Configure hdfs-site.xml


vim /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml


<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>h1:9011</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hadoop/dfs/data</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
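
Likewise, the name and data directories referenced above can be created ahead of time on the nodes that use them:

 mkdir -p /home/hadoop/hadoop/dfs/name    # on h1 (NameNode)
 mkdir -p /home/hadoop/hadoop/dfs/data    # on h2 and h3 (DataNodes)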

Configure mapred-site.xml


In /home/hadoop/hadoop/etc/hadoop:

  1. mv mapred-site.xml.template mapred-site.xml

  2. vim mapred-site.xml


<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>h1:9001</value>
  </property>
</configuration>
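
Note that mapred.job.tracker is the old MRv1 JobTracker property, which Hadoop 2.x ignores when jobs run on YARN. A common addition on 2.x (beyond what the original post sets) is to route MapReduce through YARN explicitly:

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>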

Configure yarn-site.xml


vim /home/hadoop/hadoop/etc/hadoop/yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>h1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>h1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>h1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>h1:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>h1:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>h1:8088</value>
  </property>
</configuration>
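
This yarn-site.xml only sets the ResourceManager endpoints. For MapReduce shuffle to work on YARN, the NodeManagers normally also need the auxiliary shuffle service enabled, e.g. (an addition beyond the original post):

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>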

Configure slaves

Add the DataNode hostnames to it:


  1. [root@h1 hadoop]# vim slaves

  2. h2

  3. h3

Copy core-site.xml to h2 and h3:


  1. [root@h1 hadoop]# scp core-site.xml root@h2:/home/hadoop/hadoop/etc/hadoop

  2. [root@h1 hadoop]# scp core-site.xml root@h3:/home/hadoop/hadoop/etc/hadoop
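
Only core-site.xml is copied above, but hdfs-site.xml, mapred-site.xml, yarn-site.xml, and slaves were edited as well; syncing the whole configuration directory (assuming the Hadoop tree already exists on h2 and h3) is the safer option:

 [root@h1 hadoop]# scp /home/hadoop/hadoop/etc/hadoop/* root@h2:/home/hadoop/hadoop/etc/hadoop/
 [root@h1 hadoop]# scp /home/hadoop/hadoop/etc/hadoop/* root@h3:/home/hadoop/hadoop/etc/hadoop/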

Format the NameNode and start Hadoop


  1. [root@h1 hadoop]# /home/hadoop/hadoop/bin/hdfs namenode -format

  2. [root@h1 hadoop]# sbin/start-all.sh
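
Note that start-all.sh is deprecated in Hadoop 2.x and simply delegates to the two scripts below, which can be run explicitly instead:

 [root@h1 hadoop]# sbin/start-dfs.sh
 [root@h1 hadoop]# sbin/start-yarn.sh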

Run jps to check which daemons started.
Browse to 172.16.1.212:50070 to view the Hadoop (HDFS) status page.
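
Going by the role table at the top, jps should show roughly the following (the HMaster, HRegionServer, and zookeeper roles belong to an HBase/ZooKeeper setup not installed in this post):

 h1: NameNode, SecondaryNameNode, ResourceManager, Jps
 h2: DataNode, NodeManager, Jps
 h3: DataNode, NodeManager, Jps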

This article is from the "CGL的博客" blog; please keep this attribution: http://chengongliang.blog.51cto.com/10693153/1761633
