Hadoop2 NameNode HA Configuration


Hadoop 2 officially provides two NameNode HA implementations, based on QJM and NFS respectively. This article uses QJM-based HDFS HA as the example.


Test Environment

OS version: CentOS release 6.4 (Final)

Hadoop version: Apache Hadoop 2.5.1

Hive version: Hive 0.13.1

 

IP List

IP              Hostname  NameNode   DataNode   RM   NodeManager   JournalNode
192.168.20.54   had1      Y                                        Y
192.168.20.62   had4      Y
192.168.20.55   had2                 Y               Y             Y
192.168.20.56   had3                 Y          Y    Y             Y

We add the new host had4 as the Standby NameNode.


Configure hosts and SSH

Add the host entry for had4 on all NameNode and DataNode nodes:

192.168.20.62   had4

 

Set up passwordless SSH between the newly added had4 host and the DataNode nodes; the two NameNode nodes must also have passwordless SSH to each other. A minimal sketch is shown below.
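A minimal sketch of the key exchange, assuming the cluster runs as the hadoop user and ssh-copy-id is available (the user name and host list are illustrative):

# On had4, as the hadoop user: generate a key pair if one does not already exist
$ ssh-keygen -t rsa
# Copy the public key to the DataNodes and the other NameNode
$ for host in had1 had2 had3; do ssh-copy-id hadoop@$host; done
# On had1, also copy its key to had4 so the two NameNodes can reach each other
$ ssh-copy-id hadoop@had4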


Configure core-site.xml and hdfs-site.xml

The nameservice ID uniquely identifies an HDFS instance, while the NameNode ID identifies the individual NameNode nodes within that instance. Several HA-related configuration properties are suffixed with the nameservice ID and NameNode ID.

 

In core-site.xml, change the value of fs.defaultFS; mycluster is the nameservice ID that uniquely identifies the HDFS instance:

<property>
 <name>fs.defaultFS</name>
 <value>hdfs://mycluster</value>
 <final>true</final>
 <description>The name of the default file system.  A URI whose
 scheme and authority determine the FileSystem implementation.  The
 uri's scheme determines the config property (fs.SCHEME.impl) naming
 the FileSystem implementation class.  The uri's authority is used to
 determine the host, port, etc. for a filesystem.</description>
</property>


Properties to add in hdfs-site.xml, where mycluster is the nameservice ID, and nn1 and nn2 correspond to had1 and had4 respectively:

<property>
 <name>dfs.nameservices</name>
 <value>mycluster</value>
</property>
<property>
 <name>dfs.ha.namenodes.mycluster</name>
 <value>nn1,nn2</value>
</property>
<property>
 <name>dfs.namenode.rpc-address.mycluster.nn1</name>
 <value>had1:8020</value>
</property>
<property>
 <name>dfs.namenode.rpc-address.mycluster.nn2</name>
 <value>had4:8020</value>
</property>
<property>
 <name>dfs.namenode.shared.edits.dir</name>
 <value>qjournal://had1:8485;had2:8485;had3:8485/mycluster</value>
</property>
<property>
 <name>dfs.client.failover.proxy.provider.mycluster</name>
 <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
 <name>dfs.ha.fencing.methods</name>
 <value>sshfence</value>
</property>
<property>
 <name>dfs.ha.fencing.ssh.private-key-files</name>
 <value>/home/hadoop/.ssh/id_rsa</value>
</property>

The dfs.journalnode.edits.dir property sets the directory where the JournalNodes store their local state; see the snippet below.
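For example (the directory path here is illustrative; use a local path that exists on every JournalNode host):

<property>
 <name>dfs.journalnode.edits.dir</name>
 <value>/data1/dfs/journal</value>
</property>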

 

After configuration, distribute the files to all NameNode and DataNode nodes.
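One way to distribute the two files from had1, assuming the configuration directory is /opt/hadoop/etc/hadoop on every node (host list and path are illustrative):

# Copy the updated configuration to the other NameNode and the DataNodes
$ for host in had2 had3 had4; do
    scp /opt/hadoop/etc/hadoop/core-site.xml /opt/hadoop/etc/hadoop/hdfs-site.xml \
        $host:/opt/hadoop/etc/hadoop/
  done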


HA Startup Steps

1. Stop the current NameNode (on had1):

  $ hadoop-daemon.sh stop namenode

 

2. Start the JournalNodes on had1, had2, and had3:

  $ hadoop-daemon.sh start journalnode

 

3. Copy the NameNode metadata from had1 to the corresponding location on had4, here /data1/dfs/name:

  $ scp -r /data1/dfs/name/current/ had4:/data1/dfs/name/

 

4. Bootstrap and start the NameNode on had4. Because the metadata was already copied over in step 3, answer N when -bootstrapStandby asks whether to re-format the storage directory:

$ hdfs namenode -bootstrapStandby
   14/11/07 15:00:49 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = had4/192.168.20.62
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 2.5.1
STARTUP_MSG:   classpath = /opt/hadoop/etc/hadoop:...
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 2e18d179e4a8065b6a9f29cf2de9451891265cce; compiled by 'jenkins' on 2014-09-05T23:11Z
STARTUP_MSG:   java = 1.7.0_67
************************************************************/
14/11/07 15:00:49 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/11/07 15:00:49 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
=====================================================
About to bootstrap Standby ID nn2 from:
          Nameservice ID: mycluster
       Other Namenode ID: nn1
 Other NN's HTTP address: http://had1:50070
 Other NN's IPC  address: had1/192.168.20.54:8020
            Namespace ID: 1793979241
           Block pool ID: BP-2107976076-192.168.20.54-1414983659496
              Cluster ID: CID-3e6c6bdd-5f77-4c8c-9d9d-bff27415501d
          Layout version: -57
=====================================================
Re-format filesystem in Storage Directory /data1/dfs/name ? (Y or N) N
Format aborted in Storage Directory /data1/dfs/name
14/11/07 15:00:54 INFO util.ExitUtil: Exiting with status 5
14/11/07 15:00:54 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at had4/192.168.20.62
************************************************************/
 
 
$ hadoop-daemon.sh start namenode


5. Initialize the shared edits on the JournalNodes and start the NameNode (run on had1):

$ hdfs namenode -initializeSharedEdits
$ hadoop-daemon.sh start namenode

 

After startup, both NameNodes are in standby state; use the following command to transition had4 (nn2) to active:

$ hdfs haadmin -transitionToActive nn2


You can also check a NameNode's state with the following command:

$ hdfs haadmin -getServiceState nn2
 active
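
Likewise, nn1 should now report standby (illustrative output, assuming the transition above succeeded):

$ hdfs haadmin -getServiceState nn1
 standby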

 

View the help for hdfs haadmin:

$ hdfs haadmin -help
Usage: DFSHAAdmin [-ns <nameserviceId>]
   [-transitionToActive <serviceId> [--forceactive]]
   [-transitionToStandby <serviceId>]
   [-failover [--forcefence] [--forceactive] <serviceId> <serviceId>]
   [-getServiceState <serviceId>]
   [-checkHealth <serviceId>]
   [-help <command>]
 
Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|jobtracker:port>    specify a job tracker
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

In addition, automatic failover for HDFS can be implemented by integrating with ZooKeeper; a sketch follows.
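A minimal sketch of the extra configuration and commands for automatic failover, which is not part of the setup above; it assumes a ZooKeeper ensemble is already running on had1, had2, and had3 (hosts and port are illustrative):

In hdfs-site.xml:

<property>
 <name>dfs.ha.automatic-failover.enabled</name>
 <value>true</value>
</property>

In core-site.xml:

<property>
 <name>ha.zookeeper.quorum</name>
 <value>had1:2181,had2:2181,had3:2181</value>
</property>

Then initialize the failover state in ZooKeeper and start a ZKFailoverController on each NameNode host:

$ hdfs zkfc -formatZK
$ hadoop-daemon.sh start zkfc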



This article is from the blog "与IT一起的日子"; please retain this attribution: http://raugher.blog.51cto.com/3472678/1574249
