1. Notes
System Version: Red Hat Enterprise Linux Server release 6.5 (Santiago)
Hadoop Version: 2.6.0
Passwordless SSH is required from namenode1 to every node and from namenode2 to every node. (Important)
ssh-keygen -t rsa
ssh-copy-id namenode1
ssh-copy-id namenode2
ssh-copy-id datanode1
ssh-copy-id datanode2
ssh-copy-id datanode3
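To confirm that key-based login works, a quick check such as the loop below can be run as the hadoop user on both namenode1 and namenode2 (a minimal sketch, not part of the original procedure); each ssh call should print the remote hostname without asking for a password.
for host in namenode1 namenode2 datanode1 datanode2 datanode3; do
    ssh "$host" hostname   # must return immediately, with no password prompt
done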
2. Cluster Plan
IP               Hostname   NameNode  JournalNode  DataNode
192.168.199.126  namenode1  Y         Y            N
192.168.199.127  namenode2  Y         Y            N
192.168.199.128  datanode1  N         Y            Y
192.168.199.129  datanode2  N         N            Y
192.168.199.125  datanode3  N         N            Y
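The configuration below refers to the nodes by hostname, so every machine must be able to resolve these names. Assuming there is no internal DNS, one option is identical entries in /etc/hosts on all five machines:
192.168.199.126  namenode1
192.168.199.127  namenode2
192.168.199.128  datanode1
192.168.199.129  datanode2
192.168.199.125  datanode3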
3. Configuration Files
core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://cluster1</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
=================================================
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>cluster1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.cluster1</name>
    <value>namenode1,namenode2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster1.namenode1</name>
    <value>namenode1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster1.namenode2</name>
    <value>namenode2:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster1.namenode1</name>
    <value>namenode1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster1.namenode2</name>
    <value>namenode2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://namenode1:8485;namenode2:8485;datanode1:8485/cluster1</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.cluster1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/tmp/journal</value>
  </property>
</configuration>
=================================================
yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>namenode1</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
=================================================
slaves
datanode1
datanode2
datanode3
=================================================
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
=================================================
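These files must be identical on every node. One way to distribute them from namenode1 is an scp loop like the sketch below (the installation path /home/hadoop/hadoop-2.6.0 is an assumption; substitute your own Hadoop directory):
for host in namenode2 datanode1 datanode2 datanode3; do
    # push the edited configuration files to each of the other nodes (path assumed)
    scp /home/hadoop/hadoop-2.6.0/etc/hadoop/{core-site.xml,hdfs-site.xml,yarn-site.xml,mapred-site.xml,slaves} \
        "$host":/home/hadoop/hadoop-2.6.0/etc/hadoop/
done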
4. Startup (note: the commands below must be executed in order!)
4.1 Start the JournalNode cluster
Run the following command on namenode1, namenode2, and datanode1:
./sbin/hadoop-daemon.sh start journalnode
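To verify that the daemon came up on each of the three hosts (a quick check, not part of the original post), jps should show a JournalNode process:
jps | grep JournalNode   # one JournalNode process expected on each of the three hosts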
4.2 Format namenode1
On namenode1, run:
hdfs namenode -format
4.3 Start namenode1
On namenode1, run:
./sbin/hadoop-daemon.sh start namenode
4.4 Bootstrap namenode2 (copy the formatted metadata from namenode1)
On namenode2, run:
hdfs namenode -bootstrapStandby
4.5 Start namenode2
On namenode2, run:
./sbin/hadoop-daemon.sh start namenode
Now open http://namenode1:50070 and http://namenode2:50070 in a browser. If both pages load, the NameNodes have started successfully; at this stage both NameNodes are in the standby state.
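Instead of the web UI, the state can also be queried on the command line with the HDFS HA admin tool (an extra check, not in the original post):
hdfs haadmin -getServiceState namenode1   # expected: standby at this point
hdfs haadmin -getServiceState namenode2   # expected: standby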
4.6 Transition a NameNode to active
On either namenode, run:
hdfs haadmin -transitionToActive namenode1
Visit http://namenode1:50070 and http://namenode2:50070 again; namenode1 has now become active while namenode2 remains standby.
4.7 Start the DataNodes
On namenode1, run:
./sbin/hadoop-daemons.sh start datanode
This starts the three DataNodes; verify them once the command finishes, for example with the check below.
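A simple check (a sketch that relies on the passwordless SSH set up in section 1) is to look for a DataNode process on each slave node:
for host in datanode1 datanode2 datanode3; do
    ssh "$host" 'jps | grep DataNode'   # each host should report one DataNode process
done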
At this point, the HA cluster has started successfully.
4.8 Failover test
On either namenode, run:
hdfs haadmin -failover -forceactive namenode1 namenode2
The command reports whether the failover succeeded; afterwards, check the states of namenode1 and namenode2 in the web UI.
4.9 Start YARN (personal preference: it is only needed for analysis jobs, so it is started last)
On namenode1, run:
./sbin/start-yarn.sh
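To confirm that the ResourceManager is up and the three NodeManagers have registered (a quick check, not part of the original post):
yarn node -list   # datanode1, datanode2 and datanode3 should be listed as RUNNING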
4.10 Check the status of the cluster nodes
hdfs dfsadmin -report
Original article: http://thundermeng.blog.51cto.com/9414441/1686811