Hadoop 2.4.1 Fully Distributed Cluster Setup
1. The configuration steps are as follows:
2. Host environment setup
Create five virtual machines and install the operating system on each (the commands below assume Ubuntu), then perform the following operations on every node.
a) Configure a static IP address
sudo gedit /etc/network/interfaces
auto eth0
iface eth0 inet static
address 192.168.182.132
netmask 255.255.255.0
gateway 192.168.182.1
b) Configure the DNS server
sudo gedit /etc/resolv.conf
nameserver 192.168.182.1
c) Restart the networking service
sudo /etc/init.d/networking restart
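To confirm the interface picked up the static address (a quick check, not part of the original post):
ifconfig eth0    # should show 192.168.182.132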
d) Set the hostname and hosts file
Set each machine's hostname (master, slave1, slave2, slave3, slave4 respectively):
sudo gedit /etc/hostname
Then add every cluster node to /etc/hosts on each machine:
sudo gedit /etc/hosts
192.168.182.132 master
192.168.182.134 slave1
192.168.182.135 slave2
192.168.182.136 slave3
192.168.182.137 slave4
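Name resolution can then be verified from any node (a quick check, not in the original post):
ping -c 3 slave1    # should resolve to 192.168.182.134 and get replies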
3. Add a hadoop user
sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
sudo gedit /etc/sudoers
This opens the /etc/sudoers file; to give the hadoop user the same privileges as root, add the following line below root ALL=(ALL:ALL) ALL:
hadoop ALL=(ALL:ALL) ALL
(Editing sudoers via visudo is safer, since it syntax-checks the file before saving.)
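A quick way to confirm the new account works (not part of the original post):
su - hadoop
sudo whoami    # should print: root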
4. Configure passwordless SSH login from master to the slaves
Install SSH on every node:
sudo apt-get install ssh openssh-server
Generate a key pair on master and create authorized_keys:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cd .ssh/
cat id_dsa.pub >> authorized_keys
On slave1, fetch master's public key and append it to the local authorized_keys (run inside ~/.ssh):
scp hadoop@master:~/.ssh/id_dsa.pub ./master_dsa.pub
cat master_dsa.pub >> authorized_keys
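Repeat the same two commands on slave2, slave3, and slave4. Equivalently, ssh-copy-id can push the key to every slave from master in one loop (a shortcut, not the method the original post uses):

for host in slave1 slave2 slave3 slave4; do
  ssh-copy-id -i ~/.ssh/id_dsa.pub hadoop@$host
done

After this, ssh slave1 from master should log in without a password prompt.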
5. Install Hadoop and the JDK, and configure environment variables
The Hadoop package version is 2.4.1 and the JDK version is 1.7.0_65; download each from its official site.
Unpack hadoop and the jdk to /home/hadoop/hadoop-2.4.1 and /home/hadoop/jdk1.7.0_65 respectively, then configure the environment variables as follows:
sudo gedit /etc/profile
HADOOP_HOME=/home/hadoop/hadoop-2.4.1
JAVA_HOME=/home/hadoop/jdk1.7.0_65
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$HADOOP_HOME/lib:$CLASSPATH
export HADOOP_HOME
export JAVA_HOME
export PATH
export CLASSPATH
source /etc/profile
Note: configuring the environment variables should be done as the last step, and each node must be configured separately.
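A quick check that the variables took effect (not in the original post):
java -version     # should report 1.7.0_65
hadoop version    # should report Hadoop 2.4.1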
6. Configure Hadoop
The following files live under /home/hadoop/hadoop-2.4.1/etc/hadoop.
core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-2.4.1/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
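It does no harm to create hadoop.tmp.dir up front so that permission problems surface early (Hadoop can usually create it itself; this step is not in the original post):
mkdir -p /home/hadoop/hadoop-2.4.1/tmp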
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoop-2.4.1/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hadoop-2.4.1/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <final>true</final>
  </property>
</configuration>
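The same applies to the NameNode and DataNode directories referenced above:
mkdir -p /home/hadoop/hadoop-2.4.1/name /home/hadoop/hadoop-2.4.1/data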
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.182.132:9001</value>
  </property>
</configuration>
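Two details the original post leaves implicit for Hadoop 2.x: the distribution ships only mapred-site.xml.template, and the slaves file tells the start scripts where to launch the DataNode/NodeManager daemons. It is also common to hard-code JAVA_HOME in hadoop-env.sh, because the value from /etc/profile may not reach daemons started over SSH:

cd /home/hadoop/hadoop-2.4.1/etc/hadoop
cp mapred-site.xml.template mapred-site.xml    # then add the property above
gedit slaves         # list slave1 slave2 slave3 slave4, one per line
gedit hadoop-env.sh  # set: export JAVA_HOME=/home/hadoop/jdk1.7.0_65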
Note: steps 5 and 6 are carried out on the master node only; once master is configured, copy the /home/hadoop/ folder to each slave, for example:
scp -r ./hadoop slave1:/home
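Writing directly into /home normally requires root on the slave; a variant that copies into the hadoop user's home on every slave instead (an adjusted sketch, not the original command):

for host in slave1 slave2 slave3 slave4; do
  scp -r /home/hadoop/hadoop-2.4.1 /home/hadoop/jdk1.7.0_65 hadoop@$host:/home/hadoop/
done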
7. Start Hadoop
Format the NameNode by running the following on the master node:
hadoop namenode -format
Then change into /home/hadoop/hadoop-2.4.1/sbin on master and run:
./start-all.sh
To stop the Hadoop services:
./stop-all.sh
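In Hadoop 2.x, start-all.sh is deprecated and merely calls the two scripts below, so starting HDFS and YARN separately is the documented approach:

./start-dfs.sh
./start-yarn.sh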
Check the running daemons with jps. On master:
hadoop@master:/home/hadoop/hadoop-2.4.1/sbin$ jps
21211 Jps
7421 SecondaryNameNode
7154 NameNode
7968 ResourceManager
On a slave:
hadoop@slave1:/home/hadoop/hadoop-2.4.1/sbin$ jps
3612 NodeManager
3723 Jps
3367 DataNode
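A final check that all DataNodes registered with the NameNode (not in the original post):
hdfs dfsadmin -report    # should list the slave DataNodes as live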
The YARN ResourceManager web UI is available at http://master:8088/.
Original post: http://www.cnblogs.com/demo111/p/4246726.html