Part 1: Setting up manual HA failover
Implementation steps:
1. Edit hdfs-site.xml
<configuration>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>hadoop01:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>hadoop02:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>hadoop01:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>hadoop02:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/mycluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
</configuration>
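Note: the sshfence method above only works if each NameNode host can SSH to the other without a password, using the private key configured in dfs.ha.fencing.ssh.private-key-files. A minimal sketch of setting that up (run as the hadoop user; assumes no key pair exists yet; repeat the same steps from hadoop02):
[hadoop@hadoop01 ~]$ ssh-keygen -t rsa
[hadoop@hadoop01 ~]$ ssh-copy-id hadoop01
[hadoop@hadoop01 ~]$ ssh-copy-id hadoop02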
2. Edit core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
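Note: fs.defaultFS now points at the logical nameservice rather than a single host; the failover proxy provider configured above resolves hdfs://mycluster to whichever NameNode is currently active. Once the cluster is up, clients can address it by the logical name, for example:
[hadoop@hadoop01 hadoop-2.7.2]$ bin/hdfs dfs -ls hdfs://mycluster/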
3. Configure yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop02</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
</configuration>
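Note: yarn.resourcemanager.hostname is hadoop02, so the ResourceManager should be started on that node. After HDFS is up, a sketch of starting YARN (start-yarn.sh launches the ResourceManager on the local machine and the NodeManagers on the slave nodes):
[hadoop@hadoop02 hadoop-2.7.2]$ sbin/start-yarn.sh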
4. Configure mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop02:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop02:19888</value>
</property>
</configuration>
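Note: the JobHistory server configured above runs on hadoop02 and is started separately; a sketch:
[hadoop@hadoop02 hadoop-2.7.2]$ sbin/mr-jobhistory-daemon.sh start historyserver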
5. Once the configuration is complete, distribute the configuration files to the other nodes. Before distributing, make sure the /etc/hosts file on every node maps hostnames to IP addresses (a sample mapping follows the scp commands).
[root@hadoop01 hadoop-2.7.2]# scp -r etc/hadoop/ hadoop02:/opt/app/hadoop-2.7.2/etc/
[root@hadoop01 hadoop-2.7.2]# scp -r etc/hadoop/ hadoop03:/opt/app/hadoop-2.7.2/etc/
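For reference, the /etc/hosts mapping mentioned above should look something like this on every node (the IP addresses here are placeholders for illustration; substitute your own):
192.168.1.101 hadoop01
192.168.1.102 hadoop02
192.168.1.103 hadoop03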
Notes: 1): Change the owner of the Hadoop installation directory to the hadoop user before starting the daemons as that user (if you start everything as root, you can simply start directly).
2): Before starting, kill any leftover Hadoop processes and clear out old temporary files: empty /tmp and the data/tmp directory under the Hadoop installation (a sketch follows below).
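A sketch of the ownership change and cleanup described above (run as root; the paths assume the installation layout used in the scp commands, and the /tmp pattern assumes Hadoop's default per-user temp directories):
[root@hadoop01 ~]# chown -R hadoop:hadoop /opt/app/hadoop-2.7.2
[root@hadoop01 ~]# rm -rf /opt/app/hadoop-2.7.2/data/tmp/*
[root@hadoop01 ~]# rm -rf /tmp/hadoop-*
Repeat the same cleanup on hadoop02 and hadoop03.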
6. Startup order
1): Start ZooKeeper first
[hadoop@hadoop01 zookeeper-3.4.10]$ bin/zkServer.sh start
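Run this on all three nodes, and check that a leader has been elected before moving on:
[hadoop@hadoop01 zookeeper-3.4.10]$ bin/zkServer.sh status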
2): Then start the JournalNodes
[hadoop@hadoop03 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start journalnode
Note: this must be started on all nodes.
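You can confirm each JournalNode is up with jps, which should list a JournalNode process on every node:
[hadoop@hadoop01 hadoop-2.7.2]$ jps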
3): Format the NameNode
[hadoop@hadoop01 hadoop-2.7.2]$ bin/hdfs namenode -format
4): Start the freshly formatted NameNode first
[hadoop@hadoop01 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
5): On the second node, sync the metadata from the first NameNode
[hadoop@hadoop02 hadoop-2.7.2]$ bin/hdfs namenode -bootstrapStandby
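After the standby has synced, the NameNode on hadoop02 and the DataNodes also need to be running before switching state; a sketch (hadoop-daemons.sh starts a DataNode on every node listed in the slaves file):
[hadoop@hadoop02 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
[hadoop@hadoop01 hadoop-2.7.2]$ sbin/hadoop-daemons.sh start datanode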
6): Manually transition nn1 to active
[hadoop@hadoop01 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToActive nn1
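You can verify the switch with haadmin; nn1 should report active and nn2 standby:
[hadoop@hadoop01 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn1
[hadoop@hadoop01 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn2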
Part 2: Setting up automatic HA
1. Configure hdfs-site.xml
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
Note: this enables automatic failover.
2. Configure core-site.xml
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
Note: distribute the above configuration to the other nodes as well.
3. Initialize the HA state in ZooKeeper
[hadoop@hadoop01 hadoop-2.7.2]$ bin/hdfs zkfc -formatZK
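This creates a znode for the cluster in ZooKeeper; you can confirm it with the ZooKeeper client (the path below assumes the mycluster nameservice configured earlier):
[hadoop@hadoop01 zookeeper-3.4.10]$ bin/zkCli.sh -server hadoop01:2181
[zk: hadoop01:2181(CONNECTED) 0] ls /hadoop-ha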
4. Start HDFS
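With automatic failover enabled, start-dfs.sh brings up the NameNodes, DataNodes, JournalNodes, and the ZKFC daemons in one step:
[hadoop@hadoop01 hadoop-2.7.2]$ sbin/start-dfs.sh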
Original article: https://www.cnblogs.com/yjmxx1314/p/11157420.html