
Tachyon Basics 07: Tachyon Cluster Fault Tolerance, Part 1

Published: 2014-10-15 15:15:11

Tags: tachyon, hadoop, spark

I. Install the ZooKeeper cluster

1. Download and extract ZooKeeper

[root@node1 soft]# tar -zxf zookeeper-3.4.5.tar.gz -C /usr/local/
[root@node1 soft]# ln -s /usr/local/zookeeper-3.4.5/ /usr/local/zookeeper
[root@node1 soft]#

2. Create the ZooKeeper data directory

[root@node1 zookeeper]# pwd
/usr/local/zookeeper
[root@node1 zookeeper]# mkdir data
[root@node1 zookeeper]#

3. Create the ZooKeeper log directory

[root@node1 zookeeper]# pwd
/usr/local/zookeeper
[root@node1 zookeeper]# mkdir logs
[root@node1 zookeeper]#

4. Configure the ZooKeeper environment variables

[root@node1 zookeeper]# cat /etc/profile.d/zookeeper.sh
export ZOO_HOME=/usr/local/zookeeper
export ZOO_LOG_DIR=$ZOO_HOME/logs
export PATH=$ZOO_HOME/bin:$PATH
[root@node1 zookeeper]# . /etc/profile.d/zookeeper.sh
[root@node1 zookeeper]#

Note: without `export`, ZOO_HOME and ZOO_LOG_DIR would not be visible to the ZooKeeper scripts started from a child shell.

5. Configure zoo.cfg

[root@node1 conf]# pwd
/usr/local/zookeeper/conf
[root@node1 conf]# cp zoo_sample.cfg zoo.cfg
[root@node1 conf]# cat zoo.cfg |grep -v ^$ |grep -v ^#
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/data
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
[root@node1 conf]#
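The three timing settings above are expressed in ticks rather than milliseconds, which is easy to misread. A quick sketch of what they mean in wall-clock time, using the zoo.cfg values shown:

```shell
# zoo.cfg values from above; tickTime is ZooKeeper's base time unit in
# milliseconds, and the two limits are multiples of it.
tickTime=2000
initLimit=10    # ticks a follower may take to connect to and sync with the leader
syncLimit=5     # ticks a follower may lag behind the leader before being dropped
init_ms=$((initLimit * tickTime))
sync_ms=$((syncLimit * tickTime))
echo "initial sync window: ${init_ms} ms, max follower lag: ${sync_ms} ms"
```

So with the defaults above, a follower gets 20 seconds to join the ensemble and may fall at most 10 seconds behind the leader.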

6. Copy ZooKeeper to the other nodes

[root@node1 ~]# scp -r /usr/local/zookeeper-3.4.5/ node2:/usr/local/
[root@node1 ~]# scp -r /usr/local/zookeeper-3.4.5/ node3:/usr/local/
[root@node1 ~]# ssh node2 ln -s /usr/local/zookeeper-3.4.5/ /usr/local/zookeeper
[root@node1 ~]# ssh node3 ln -s /usr/local/zookeeper-3.4.5/ /usr/local/zookeeper
[root@node1 ~]# scp /etc/profile.d/zookeeper.sh node2:/etc/profile.d/
zookeeper.sh                                    100%   82    0.1KB/s   00:00
[root@node1 ~]# scp /etc/profile.d/zookeeper.sh node3:/etc/profile.d/
zookeeper.sh                                    100%   82    0.1KB/s   00:00
[root@node1 ~]#

7. Configure the node IDs

[root@node1 ~]# echo 1 >/usr/local/zookeeper/data/myid
[root@node1 ~]# ssh node2 "echo 2 >/usr/local/zookeeper/data/myid"
[root@node1 ~]# ssh node3 "echo 3 >/usr/local/zookeeper/data/myid"
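Each node's myid file must contain the number N from its own server.N line in zoo.cfg, or the ensemble will not form. As a sketch (a hypothetical helper, not a ZooKeeper tool), the host-to-myid mapping can be derived from the config itself:

```shell
# server.N lines copied from the zoo.cfg above.
cfg='server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888'

map=""
for line in $cfg; do        # each server.N entry is one whitespace-free word
  key=${line%%=*}           # e.g. server.1
  val=${line#*=}            # e.g. node1:2888:3888
  id=${key#server.}         # e.g. 1
  host=${val%%:*}           # e.g. node1
  map="${map}${host}=${id} "
done
echo "$map"                 # each host paired with the myid it must store
```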

8. Start and stop ZooKeeper

[root@node1 ~]# zkServer.sh stop
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ...
STOPPED
[root@node1 ~]# zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node1 ~]#

9. Check the ZooKeeper processes

[root@node1 ~]# jps
5001 Jps
4821 QuorumPeerMain
[root@node1 ~]# ssh node2 jps
3907 Jps
3816 QuorumPeerMain
[root@node1 ~]# ssh node3 jps
3957 Jps
3875 QuorumPeerMain
[root@node1 ~]#

10. Connect to the ZooKeeper servers with zkCli.sh

[root@node1 ~]# zkCli.sh
Connecting to localhost:2181
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1]

11. Check the ZooKeeper roles

[root@node1 ~]# zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@node1 ~]# ssh node2 zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@node1 ~]# ssh node3 zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@node1 ~]#
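A healthy ensemble elects exactly one leader; every other node follows. A quick sketch that tallies the roles from saved status output (the sample text condenses the session above; on a live cluster you would collect it with `ssh $n zkServer.sh status` per node):

```shell
# One "host:role" line per node, condensed from the zkServer.sh status
# output shown above.
status='node1:follower
node2:leader
node3:follower'

leaders=$(echo "$status" | grep -c ':leader')
followers=$(echo "$status" | grep -c ':follower')
echo "leaders=$leaders followers=$followers"   # healthy: exactly one leader
```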

II. Install the Hadoop cluster

1. Download and extract Hadoop

[root@node1 soft]# tar -zxf hadoop-2.2.0.x86_64.tar.gz -C /usr/local/
[root@node1 soft]# ln -s /usr/local/hadoop-2.2.0/ /usr/local/hadoop

2. Configure the Hadoop environment variables

[root@node1 soft]# cat /etc/profile.d/hadoop.sh
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
[root@node1 soft]# . /etc/profile.d/hadoop.sh

3. Edit hadoop-env.sh

[root@node1 hadoop]# pwd
/usr/local/hadoop/etc/hadoop
[root@node1 hadoop]# cat hadoop-env.sh |grep JAVA_HOME|grep -v ^#
export JAVA_HOME=/usr/local/default
[root@node1 hadoop]#

4. Edit core-site.xml

[hadoop@node1 hadoop]$ pwd
/usr/local/hadoop/etc/hadoop
[hadoop@node1 hadoop]$ cat core-site.xml |grep -v ^#|grep -v ^$
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
   <!-- Set the HDFS nameservice to ns1 -->
   <property>
      <name>fs.defaultFS</name>
      <value>hdfs://ns1</value>
   </property>
   <!-- Hadoop temporary directory -->
   <property>
      <name>hadoop.tmp.dir</name>
      <value>/hadoop/tmp</value>
   </property>
   <!-- ZooKeeper quorum addresses -->
   <property>
     <name>ha.zookeeper.quorum</name>
      <value>node1:2181,node2:2181,node3:2181</value>
   </property>
</configuration>
[hadoop@node1 hadoop]$

5. Edit hdfs-site.xml

[hadoop@node1 hadoop]$ pwd
/usr/local/hadoop/etc/hadoop
[hadoop@node1 hadoop]$ cat hdfs-site.xml |grep -v ^#|grep -v ^$
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
   <!-- The HDFS nameservice, ns1; must match core-site.xml -->
   <property>
      <name>dfs.nameservices</name>
      <value>ns1</value>
   </property>
   <!-- ns1 has two NameNodes: nn1 and nn2 -->
   <property>
      <name>dfs.ha.namenodes.ns1</name>
      <value>nn1,nn2</value>
   </property>
   <!-- RPC address of nn1 -->
   <property>
     <name>dfs.namenode.rpc-address.ns1.nn1</name>
      <value>node1:9000</value>
   </property>
   <!-- HTTP address of nn1 -->
   <property>
      <name>dfs.namenode.http-address.ns1.nn1</name>
      <value>node1:50070</value>
   </property>
   <!-- RPC address of nn2 -->
   <property>
     <name>dfs.namenode.rpc-address.ns1.nn2</name>
      <value>node2:9000</value>
   </property>
   <!-- HTTP address of nn2 -->
   <property>
     <name>dfs.namenode.http-address.ns1.nn2</name>
      <value>node2:50070</value>
   </property>
   <!-- Where the NameNode shared edit log is stored on the JournalNodes -->
   <property>
     <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://node1:8485;node2:8485;node3:8485/ns1</value>
   </property>
   <!-- Where each JournalNode stores its data on local disk -->
   <property>
     <name>dfs.journalnode.edits.dir</name>
     <value>/hadoop/journal</value>
   </property>
   <!-- Enable automatic NameNode failover -->
   <property>
     <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
   </property>
   <!-- Failover proxy provider used by HDFS clients -->
   <property>
     <name>dfs.client.failover.proxy.provider.ns1</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
   </property>
   <!-- Fencing method -->
   <property>
     <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
   </property>
   <!-- sshfence requires passwordless ssh -->
   <property>
     <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/root/.ssh/id_rsa</value>
   </property>
</configuration>
[hadoop@node1 hadoop]$
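The dfs.namenode.shared.edits.dir value has a fixed shape: the qjournal:// scheme, a semicolon-separated JournalNode host:port list, then the journal id (here the nameservice name). A sketch (hypothetical helper, not a Hadoop tool) that assembles it from a host list:

```shell
# Build the shared-edits URI from the JournalNode hosts used in this tutorial.
nodes="node1 node2 node3"
port=8485                          # default JournalNode RPC port
hosts=""
for n in $nodes; do
  hosts="${hosts}${n}:${port};"
done
uri="qjournal://${hosts%;}/ns1"    # strip the trailing ';', append the journal id
echo "$uri"
```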

6. Edit slaves

[hadoop@node1 hadoop]$ pwd
/usr/local/hadoop/etc/hadoop
[root@node1 hadoop]# cat slaves
node1
node2
node3
[root@node1 hadoop]#

7. Edit yarn-site.xml

[hadoop@node1 hadoop]$ pwd
/usr/local/hadoop/etc/hadoop
[hadoop@node1 hadoop]$ cat yarn-site.xml |grep -v ^$|grep -v ^#
<?xml version="1.0"?>
<configuration>
   <!-- ResourceManager address -->
   <property>
     <name>yarn.resourcemanager.hostname</name>
      <value>node1</value>
   </property>
   <!-- Auxiliary service loaded by the NodeManagers: the MapReduce shuffle server -->
   <property>
     <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>
</configuration>
[hadoop@node1 hadoop]$

8. Edit mapred-site.xml

[root@node1 hadoop]# cp mapred-site.xml.template mapred-site.xml
[hadoop@node1 hadoop]$ pwd
/usr/local/hadoop/etc/hadoop
[hadoop@node1 hadoop]$cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
   <!-- Run the MapReduce framework on YARN -->
   <property>
     <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
</configuration>
[hadoop@node1 hadoop]$

9. Copy the configured Hadoop to the other nodes

[root@node1 ~]# scp -r /usr/local/hadoop-2.2.0/ node2:/usr/local/
[root@node1 ~]# scp -r /usr/local/hadoop-2.2.0/ node3:/usr/local/
[root@node1 ~]# ssh node2 ln -s /usr/local/hadoop-2.2.0/ /usr/local/hadoop
[root@node1 ~]# ssh node3 ln -s /usr/local/hadoop-2.2.0/ /usr/local/hadoop
[root@node1 ~]# scp /etc/profile.d/hadoop.sh node2:/etc/profile.d/
hadoop.sh                                     100%   58    0.1KB/s   00:00    
[root@node1 ~]# scp /etc/profile.d/hadoop.sh node3:/etc/profile.d/
hadoop.sh                                     100%   58    0.1KB/s   00:00    
[root@node1 ~]#

10. Start the ZooKeeper cluster (on all ZK nodes)

[root@node1 ~]# zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node1 ~]# ssh node2 zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node1 ~]# ssh node3 zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node1 ~]#

11. Start the JournalNodes

[root@node1 hadoop]# sbin/hadoop-daemons.sh start journalnode

12. Format HDFS

[root@node1 hadoop]# hadoop namenode -format
[root@node1 hadoop]# scp -r /hadoop/tmp/ node2:/hadoop/

Copying /hadoop/tmp to node2 seeds the standby NameNode (nn2) with the freshly formatted metadata.

13. Format the failover state in ZooKeeper (zkfc)

[root@node1 hadoop]# hdfs zkfc -formatZK

14. Start HDFS

[root@node1 sbin]# ./start-dfs.sh

15. Start YARN

[root@node1 sbin]# ./start-yarn.sh
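With HDFS and YARN up, node1 should be running the full daemon set for this HA layout. A sketch that checks a saved jps listing (the sample output is illustrative; process ids will differ on your cluster):

```shell
# Sample jps listing for node1 after start-dfs.sh and start-yarn.sh.
jps_out='4821 QuorumPeerMain
5234 JournalNode
5390 NameNode
5488 DataNode
5630 DFSZKFailoverController
5811 ResourceManager
5903 NodeManager'

missing=0
for d in NameNode DataNode JournalNode DFSZKFailoverController ResourceManager NodeManager; do
  if echo "$jps_out" | grep -q " $d$"; then
    echo "$d: running"
  else
    echo "$d: MISSING"
    missing=$((missing + 1))
  fi
done
echo "missing daemons: $missing"
```

node2 should show a similar set with the standby NameNode, and node3 only the worker daemons (DataNode, JournalNode, NodeManager) plus QuorumPeerMain.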

III. Change Tachyon's underlying filesystem to HDFS

1. Edit the tachyon-env.sh configuration file

#export TACHYON_UNDERFS_ADDRESS=$TACHYON_HOME/underfs
export TACHYON_UNDERFS_ADDRESS=hdfs://node1:9000/
-Dtachyon.master.journal.folder=hdfs://node1:9000/tmp/journal/

(The -Dtachyon.master.journal.folder option goes on the TACHYON_JAVA_OPTS line of tachyon-env.sh; keeping the master journal on HDFS lets a restarted master recover its metadata.)

2. Install Maven

[root@node1 soft]# tar -zxf apache-maven-3.2.3-bin.tar.gz -C /usr/local/
[root@node1 soft]# ln -s /usr/local/apache-maven-3.2.3/ /usr/local/maven
[root@node1 soft]# cat /etc/profile.d/maven.sh
export MAVEN_HOME=/usr/local/maven
export PATH=$MAVEN_HOME/bin:$PATH
[root@node1 soft]# . /etc/profile.d/maven.sh
[root@node1 soft]#

3. Rebuild Tachyon against Hadoop 2.2.0

[root@node1 tachyon]# cat pom.xml |grep hadoop.version|head -n 1
   <hadoop.version>2.2.0</hadoop.version>
[root@node1 tachyon]# mvn -Dhadoop.version=2.2.0 clean package

4. Copy Tachyon to the other nodes

[root@node1 ~]# scp -r /usr/local/tachyon-0.5.0/ node2:/usr/local/
[root@node1 ~]# scp -r /usr/local/tachyon-0.5.0/ node3:/usr/local/

5. Format Tachyon

[root@node1 tachyon]# tachyon format

6. Start Tachyon

[root@node1 tachyon]# tachyon-start.sh all Mount

7. Test Tachyon

[root@node1 tachyon]# tachyon tfs copyFromLocal /boot/initrd-2.6.32-358.el6.x86_64kdump.img /initrd
[root@node1 tachyon]# tachyon tfs ls /
4252.11 KB  10-08-2014 16:48:45:715  In Memory  /initrd
[root@node1 tachyon]#



This article comes from the "tachyon" blog; please keep this attribution: http://ucloud.blog.51cto.com/3869454/1564186
