PS:
/etc/hosts:
127.0.1.1 ubuntu-hadoop01
192.168.56.102 ubuntu-hadoop02
192.168.56.103 ubuntu-hadoop03
/etc/hostname:
ubuntu-hadoop01
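Not in the original post, but after editing these files it is worth confirming that each node picks up its new hostname and resolves its peers; a quick check from the master could be:
# apply the new hostname without a reboot (or simply reboot)
sudo hostname ubuntu-hadoop01
# confirm the slave names resolve via /etc/hosts
ping -c 1 ubuntu-hadoop02
ping -c 1 ubuntu-hadoop03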
3. Install and run Hadoop
conf/hadoop-env.sh:
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45
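A quick sanity check (my addition, not from the original) that the configured JAVA_HOME points at a working JDK:
# the JDK under JAVA_HOME should report its version
/usr/lib/jvm/jdk1.6.0_45/bin/java -version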
conf/core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ubuntu-hadoop01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/grid/hadoop/tmp</value>
  </property>
</configuration>
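hadoop.tmp.dir has to exist and be writable by the Hadoop user on every node; assuming that user is grid, as the path suggests, create it with:
# create the working directory referenced by hadoop.tmp.dir
mkdir -p /home/grid/hadoop/tmp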
conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- replication factor -->
    <value>1</value>
  </property>
</configuration>
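With dfs.replication set to 1, each HDFS block is kept on a single DataNode. Once the cluster is up, the effective replication can be checked with fsck (a standard Hadoop 1.x command, shown here as an extra illustration):
# report files, blocks and replication status for the whole file system
bin/hadoop fsck / -files -blocks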
conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>ubuntu-hadoop01:9001</value>
  </property>
</configuration>
conf/masters:
ubuntu-hadoop01
conf/slaves:
ubuntu-hadoop02
ubuntu-hadoop03
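The same configuration must also be present on ubuntu-hadoop02 and ubuntu-hadoop03. Assuming Hadoop lives under /home/grid/hadoop on every node (a guess based on the hadoop.tmp.dir path, not stated in the original), the conf directory can be copied over with:
# push the edited configuration files to both slaves
scp -r conf/ grid@ubuntu-hadoop02:/home/grid/hadoop/
scp -r conf/ grid@ubuntu-hadoop03:/home/grid/hadoop/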
Format the Hadoop file system, HDFS:
Run the command: bin/hadoop namenode -format
Start Hadoop:
Run the command to start all daemons: bin/start-all.sh
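Besides the web pages below, jps (shipped with the JDK) shows whether the expected daemons came up; the process names listed here are what a healthy Hadoop 1.x cluster normally runs, not output quoted from the original post:
# on the master, expect NameNode, SecondaryNameNode and JobTracker
jps
# run jps on each slave as well and expect DataNode and TaskTracker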
Verify that the Hadoop installation succeeded:
http://localhost:50030 (MapReduce web UI)
http://localhost:50070 (HDFS web UI)
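As a final smoke test (not part of the original walkthrough), one of the example jobs bundled with Hadoop 1.x can be run; the exact jar name depends on the release, so adjust the wildcard as needed:
# estimate pi with 2 map tasks and 10 samples per map
bin/hadoop jar hadoop-examples-*.jar pi 2 10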
Original article: http://guanhz.blog.51cto.com/5516778/1600893