Tags: hadoop ha
I. Preparation for installing Hadoop HA

1. Cluster layout
ZooKeeper ensemble: master, slave1, slave2
Hadoop roles:
    master  NameNode1  ResourceManager1  JournalNode1
    slave1  NameNode2  ResourceManager2  JournalNode2
    slave2  DataNode1
    slave3  DataNode2

2. Set the hostname (run the matching command on each node)
hostnamectl set-hostname master    (slave1, slave2, slave3 on the other nodes)

3. Edit /etc/hosts (the same file on every node)
192.168.197.139 master
(add matching entries for slave1, slave2, and slave3 with their own IPs)

4. Set up passwordless ssh
ssh-keygen        # press Enter at every prompt
ssh-copy-id master    (repeat for slave1, slave2, slave3)

5. Install the JDK
Upload the JDK, ZooKeeper, and Hadoop tarballs to /usr/local with Xftp, then:
tar xzvf jdk*
mv jdk* java
Edit /etc/profile:
export JAVA_HOME=/usr/local/java
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
Reload the environment:
source /etc/profile

6. Copy the JDK to the other nodes
scp -r /usr/local/java slave1:/usr/local    (repeat for slave2, slave3)

7. Synchronize the clocks
yum install -y ntp
ntpdate 210.72.145.44

II. Install ZooKeeper

1. Unpack ZooKeeper
tar xzvf zookeeper*
mv zookeeper* zookeeper

2. Edit the configuration
cd /usr/local/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg and set:
dataDir=/usr/local/zookeeper/data
then append one line per ensemble member:
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
Create the data directory and the node's id file:
mkdir -p /usr/local/zookeeper/data
echo 1 > /usr/local/zookeeper/data/myid    # use 2 on slave1 and 3 on slave2

3. Disable the firewall (all nodes)
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
vi /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled

4. Edit /etc/profile
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin
Reload the environment:
source /etc/profile

5. Copy the configured ZooKeeper to the other nodes, then fix each node's myid file
scp -r /usr/local/zookeeper slave1:/usr/local
scp /etc/profile slave1:/etc/profile
(repeat for slave2; remember each node needs its own myid value.)

III. Install the Hadoop cluster

1. Unpack Hadoop
tar xzvf hadoop*
mv hadoop* hadoop

2. Edit the environment variables in /etc/profile
export HADOOP_HOME=/usr/local/hadoop
#export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib:$HADOOP_PREFIX/lib/native"
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop/lib/native
export HADOOP_OPTS="-Djava.library.path=/usr/local/hadoop/lib"
#export HADOOP_ROOT_LOGGER=DEBUG,console
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

3. Edit the configuration files
cd /usr/local/hadoop/etc/hadoop
(1) Edit hadoop-env.sh
vi hadoop-env.sh
export JAVA_HOME=/usr/local/java
(2) Edit core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
  </property>
</configuration>
(3) Edit hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>master:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>slave1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>slave1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master:8485;slave1:8485/ns1</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/local/hadoop/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>
(4) Edit mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
(5) Edit yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>slave1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
(6) Edit slaves
slave2
slave3

4. Copy the configured Hadoop to the other nodes
scp -r /usr/local/hadoop slave1:/usr/local    (repeat for slave2, slave3)

5. Copy /etc/profile to the other nodes and reload it on each
scp /etc/profile slave1:/etc/profile    (repeat for slave2, slave3)
source /etc/profile

IV. Start the ZooKeeper ensemble (on master, slave1, and slave2)
zkServer.sh start      # start ZooKeeper
zkServer.sh status     # check status
zkServer.sh stop       # stop ZooKeeper
zkServer.sh restart    # restart ZooKeeper

V. Start the JournalNodes on master and slave1
hadoop-daemon.sh start journalnode
(Note: a production QJM setup normally uses an odd number of JournalNodes, three or more; with only two, a quorum needs both alive.)

VI. Format HDFS (on master)
1. hdfs namenode -format
2. Format the failover state in ZooKeeper:
hdfs zkfc -formatZK
3. Before the standby NameNode's first start, slave1 typically also needs the formatted metadata: run hdfs namenode -bootstrapStandby on slave1 (with the master NameNode running), or scp the NameNode metadata directory from master.

VII. Start the Hadoop cluster from master
start-all.sh
(Note: on Hadoop 2.x, start-all.sh does not start the standby ResourceManager; start it on slave1 with yarn-daemon.sh start resourcemanager.)
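Once everything is up, each node should show a predictable set of JVM processes in `jps`. As a quick checklist, here is a small shell sketch that prints the expected process list per host. The role-to-host mapping comes from the layout in section I of this guide, not from anything Hadoop enforces; adjust it if your layout differs.

```shell
#!/bin/sh
# Print the JVM processes `jps` should show on each node once the
# HA cluster is fully started, based on the role plan in section I.
expected_procs() {
  case "$1" in
    # Both HA heads: NameNode + ZKFC + JournalNode + ResourceManager,
    # plus a ZooKeeper peer.
    master|slave1)
      echo "NameNode DFSZKFailoverController JournalNode ResourceManager QuorumPeerMain" ;;
    # Worker with a ZooKeeper peer.
    slave2)
      echo "DataNode NodeManager QuorumPeerMain" ;;
    # Plain worker.
    slave3)
      echo "DataNode NodeManager" ;;
  esac
}

for host in master slave1 slave2 slave3; do
  printf '%s: %s\n' "$host" "$(expected_procs "$host")"
done
```

Compare this against the real `jps` output on each host: a missing DFSZKFailoverController usually means `hdfs zkfc -formatZK` was skipped, and a missing ResourceManager on slave1 means the standby ResourceManager was never started manually.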
This post comes from the "12927965" blog; please contact the author before republishing.
Original article: http://12937965.blog.51cto.com/12927965/1983805