
hadoop redis mongodb

Posted: 2016-04-28 18:28:59

Tags: hadoop redis mongodb

1. Environment

OS            CentOS 7.0, 64-bit

namenode01    192.168.0.220

namenode02    192.168.0.221

datanode01    192.168.0.222

datanode02    192.168.0.223

datanode03    192.168.0.224

2. Configure the base environment

Add local hosts-file resolution entries on all of the machines:

[root@namenode01 ~]# tail -5 /etc/hosts
192.168.0.220	namenode01
192.168.0.221	namenode02
192.168.0.222	datanode01
192.168.0.223	datanode02
192.168.0.224	datanode03
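The same five entries have to land in /etc/hosts on every machine. As a small aside (a sketch, not part of the original procedure), the block can be generated once from a single list so it is appended identically everywhere:

```shell
# Generate the five hosts entries from one name=ip list; the output can
# then be appended to /etc/hosts on each node.
hosts="namenode01=192.168.0.220 namenode02=192.168.0.221 datanode01=192.168.0.222 datanode02=192.168.0.223 datanode03=192.168.0.224"
for h in $hosts; do
    name=${h%%=*}      # part before '='
    ip=${h#*=}         # part after '='
    printf '%s\t%s\n' "$ip" "$name"
done
```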

Create a hadoop user on all five machines and set its password to hadoop; only namenode01 is shown as an example.

[root@namenode01 ~]# useradd hadoop
[root@namenode01 ~]# passwd hadoop
Changing password for user hadoop.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.

Configure passwordless SSH logins between the hadoop users on all five machines.

# On namenode01
[root@namenode01 ~]# su - hadoop
[hadoop@namenode01 ~]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
1c:7e:89:9d:14:9a:10:fc:69:1e:11:3d:6d:18:a5:01 hadoop@namenode01
The key's randomart image is:
+--[ RSA 2048]----+
|     .o.E++=.    |
|      ...o++o    |
|       .+ooo     |
|       o== o     |
|       oS.=      |
|        ..       |
|                 |
|                 |
|                 |
+-----------------+
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

# Verify
[hadoop@namenode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode01 ~]$ ssh datanode03 hostname
datanode03
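The same four ssh-copy-id commands and five checks repeat verbatim on every node, so the whole dance can be collapsed into one loop. A sketch, shown as a dry run (echo prints the commands; drop the echoes to execute for real):

```shell
NODES="namenode01 namenode02 datanode01 datanode02 datanode03"
# Push the public key to every node...
for n in $NODES; do
    echo ssh-copy-id -i ~/.ssh/id_rsa.pub "$n"
done
# ...then verify each login resolves to the expected hostname.
for n in $NODES; do
    echo ssh "$n" hostname
done
```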

# On namenode02
[root@namenode02 ~]# su - hadoop
[hadoop@namenode02 ~]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
a9:f5:0d:cb:c9:88:7b:71:f5:71:d8:a9:23:c6:85:6a hadoop@namenode02
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|            .  o.|
|         . ...o.o|
|        S +....o |
|       +.E.O o.  |
|      o ooB o .  |
|       ..        |
|      ..         |
+-----------------+

[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

# Verify
[hadoop@namenode02 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode02 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode02 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode02 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode02 ~]$ ssh datanode03 hostname
datanode03

# On datanode01
[root@datanode01 ~]# su - hadoop
[hadoop@datanode01 ~]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
48:72:20:69:64:e7:81:b7:03:64:41:5e:fa:88:db:5e hadoop@datanode01
The key's randomart image is:
+--[ RSA 2048]----+
| +O+=            |
| +=*.o           |
| .ooo.o          |
| . oo+ .         |
|. . ... S        |
| o               |
|. . E            |
| . .             |
|  .              |
+-----------------+

[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

# Verify
[hadoop@datanode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode01 ~]$ ssh datanode03 hostname
datanode03

# On datanode02
[hadoop@datanode02 ~]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
32:aa:88:fa:ce:ec:51:6f:de:f4:06:c9:4e:9c:10:31 hadoop@datanode02
The key's randomart image is:
+--[ RSA 2048]----+
|      E.         |
|      ..         |
|       .         |
|      .          |
|    . o+So       |
|   . o oB        |
|  . . oo..       |
|.+ o o o...      |
|=+B   . ...      |
+-----------------+

[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

# Verify
[hadoop@datanode02 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode02 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode02 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode02 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode02 ~]$ ssh datanode03 hostname
datanode03

# On datanode03
[root@datanode03 ~]# su - hadoop
[hadoop@datanode03 ~]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
f3:f3:3c:85:61:c6:e4:82:58:10:1f:d8:bf:71:89:b4 hadoop@datanode03
The key's randomart image is:
+--[ RSA 2048]----+
|      o=.        |
|      ..o.. .    |
|       o.+ * .   |
|      . . E O    |
|        S  B o   |
|         o. . .  |
|          o  .   |
|           +.    |
|            o.   |
+-----------------+

[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

# Verify
[hadoop@datanode03 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode03 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode03 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode03 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode03 ~]$ ssh datanode03 hostname
datanode03

3. Install the JDK

[root@namenode01 ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u74-b02/jdk-8u74-linux-x64.tar.gz?AuthParam=1461828883_648d68bc6c7b0dfd253a6332a5871e06
[root@namenode01 ~]# tar xf jdk-8u74-linux-x64.tar.gz -C /usr/local/

# Create the environment-variable profile
[root@namenode01 ~]# cat /etc/profile.d/java.sh
JAVA_HOME=/usr/local/jdk1.8.0_74
JAVA_BIN=/usr/local/jdk1.8.0_74/bin
JRE_HOME=/usr/local/jdk1.8.0_74/jre
PATH=$PATH:/usr/local/jdk1.8.0_74/bin:/usr/local/jdk1.8.0_74/jre/bin
CLASSPATH=/usr/local/jdk1.8.0_74/jre/lib:/usr/local/jdk1.8.0_74/lib:/usr/local/jdk1.8.0_74/jre/lib/charsets.jar
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH

# Load the environment variables
[root@namenode01 ~]# source /etc/profile.d/java.sh
[root@namenode01 ~]# which java
/usr/local/jdk1.8.0_74/bin/java

# Test the result
[root@namenode01 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)

# Copy the profile and the unpacked JDK to the other four machines
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode03:/usr/local/
[root@namenode01 ~]# scp /etc/profile.d/java.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode03:/etc/profile.d/

# Test the result, using namenode02 as an example
[root@namenode02 ~]# source /etc/profile.d/java.sh 
[root@namenode02 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)

4. Install Hadoop

# Download Hadoop
[root@namenode01 ~]# wget http://apache.fayea.com/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
[root@namenode01 ~]# tar xf hadoop-2.5.2.tar.gz -C /usr/local/
[root@namenode01 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode01 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’

# Add the Hadoop environment-variable profile
[root@namenode01 ~]# cat /etc/profile.d/hadoop.sh
HADOOP_HOME=/usr/local/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH

# Switch to the hadoop user and check that the JDK works
[root@namenode01 ~]# su - hadoop
Last login: Thu Apr 28 15:17:16 CST 2016 from datanode01 on pts/1
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)

# Now edit the Hadoop configuration files
# First the environment file, hadoop-env.sh
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_74        # set JAVA_HOME to the JDK installed above

# Edit core-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/temp</value>
        </property>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://mycluster</value>
        </property>
        <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>
</configuration>

# Edit hdfs-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>/data/hdfs/dfs/name</value>    <!-- NameNode metadata directory -->
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>/data/hdfs/data</value>        <!-- DataNode data directory -->
	</property>
	<property>
		<name>dfs.permissions</name>
		<value>false</value>
	</property>
	<property>
		<name>dfs.nameservices</name>
		<value>mycluster</value>        <!-- must match fs.defaultFS in core-site.xml -->
	</property>
	<property>
		<name>dfs.ha.namenodes.mycluster</name>
		<value>namenode01,namenode02</value>        <!-- the two NameNodes -->
	</property>
	<property>
		<name>dfs.namenode.rpc-address.mycluster.namenode01</name>
		<value>namenode01:8020</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.mycluster.namenode02</name>
		<value>namenode02:8020</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.mycluster.namenode01</name>
		<value>namenode01:50070</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.mycluster.namenode02</name>
		<value>namenode02:50070</value>
	</property>
	<property>
		<!-- JournalNodes the NameNodes write edits to; list every JournalNode -->
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485;datanode02:8485;datanode03:8485/mycluster</value>
	</property>
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/data/hdfs/journal</value>    <!-- JournalNode directory -->
	</property>
	<property>
		<name>dfs.client.failover.proxy.provider.mycluster</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>        <!-- fencing method -->
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/hadoop/.ssh/id_rsa</value>    <!-- key used for inter-host fencing -->
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.connect-timeout</name>
		<value>6000</value>
	</property>
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>false</value>    <!-- automatic failover disabled; ZooKeeper will handle it later -->
	</property>
	<property>
		<name>dfs.replication</name>
		<value>3</value>        <!-- replication factor (default 3) -->
	</property>
	<property>
		<name>dfs.webhdfs.enabled</name>
		<value>true</value>
	</property>
</configuration>
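Since sshfence logs in with the private key configured above, that key must exist and stay private (mode 600) on every node. A quick permission-check sketch (a temp file stands in for /home/hadoop/.ssh/id_rsa so the snippet can be tried anywhere):

```shell
# Check that the fencing key has owner-only permissions (600).
key=$(mktemp)          # stand-in for /home/hadoop/.ssh/id_rsa
chmod 600 "$key"
perm=$(stat -c '%a' "$key")
[ "$perm" = "600" ] && echo "fencing key permissions OK ($perm)"
rm -f "$key"
```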

# Edit yarn-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml 
<configuration>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.resourcemanager.address</name>
		<value>namenode01:8032</value>
	</property>
	<property>
		<name>yarn.resourcemanager.scheduler.address</name>
		<value>namenode01:8030</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address</name>
		<value>namenode01:8031</value>
	</property>
	<property>
		<name>yarn.resourcemanager.admin.address</name>
		<value>namenode01:8033</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.address</name>
		<value>namenode01:8088</value>
	</property>
	<property>
		<name>yarn.nodemanager.resource.memory-mb</name>
		<value>15360</value>
	</property>
</configuration>

# Edit mapred-site.xml
[hadoop@namenode01 ~]$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<property>
		<name>mapreduce.jobtracker.http.address</name>
		<value>namenode01:50030</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>namenode01:10020</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>namenode01:19888</value>
	</property>
</configuration>
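Worth noting: Hadoop silently ignores property names it does not recognize, so a typo in any of the *-site.xml files above fails without an error message. A small sketch to list every `<name>` for eyeballing; the inline sample stands in for a real config file (on the cluster, point the pipeline at /usr/local/hadoop/etc/hadoop/*-site.xml instead):

```shell
# Extract property names from a Hadoop *-site.xml so misspellings stand out.
sample='<configuration>
        <property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
        <property><name>io.file.buffer.size</name><value>131072</value></property>
</configuration>'
printf '%s\n' "$sample" | grep -o '<name>[^<]*</name>' | sed 's/<[^>]*>//g'
```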

# Edit the slaves file
[hadoop@namenode01 ~]$ cat /usr/local/hadoop/etc/hadoop/slaves 
datanode01
datanode02
datanode03
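The slaves file doubles as a ready-made node list for ad-hoc loops over the workers. A sketch (a temp stand-in file is used so it can be tried anywhere; on the cluster, read /usr/local/hadoop/etc/hadoop/slaves instead, and drop the echo to execute):

```shell
# Run a command against every worker listed in the slaves file.
slaves=$(mktemp)       # stand-in for /usr/local/hadoop/etc/hadoop/slaves
printf 'datanode01\ndatanode02\ndatanode03\n' > "$slaves"
while read -r node; do
    [ -n "$node" ] && echo ssh "$node" uptime    # dry run
done < "$slaves"
rm -f "$slaves"
```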

# Back on namenode01 as root, create the data directory
[root@namenode01 ~]# mkdir -p /data/hdfs
[root@namenode01 ~]# chown hadoop.hadoop /data/hdfs/

# Copy the hadoop profile to the other four machines
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode02:/etc/profile.d/  
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode03:/etc/profile.d/

# Copy the Hadoop installation to the other four machines
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode03:/usr/local/
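The four scp commands, plus the chown and symlink that must follow on each target, are another candidate for a single loop. A dry-run sketch (echo prints the commands; drop the echoes to execute):

```shell
# Distribute the Hadoop tree and fix ownership/symlink on each target node.
for n in namenode02 datanode01 datanode02 datanode03; do
    echo "scp -r /usr/local/hadoop-2.5.2/ $n:/usr/local/"
    echo "ssh $n 'chown -R hadoop.hadoop /usr/local/hadoop-2.5.2 && ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop'"
done
```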

# Fix ownership and create the symlink; namenode02 shown as an example
[root@namenode02 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode02 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’
[root@namenode02 ~]# ll /usr/local |grep hadoop
lrwxrwxrwx  1 root   root     24 Apr 28 17:19 hadoop -> /usr/local/hadoop-2.5.2/
drwxr-xr-x  9 hadoop hadoop  139 Apr 28 17:16 hadoop-2.5.2

# Create the data directory
[root@namenode02 ~]# mkdir -p /data/hdfs
[root@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/

# Check the JDK and Hadoop environment
[root@namenode02 ~]# su - hadoop
Last login: Thu Apr 28 15:12:24 CST 2016 on pts/0
[hadoop@namenode02 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
[hadoop@namenode02 ~]$ which hadoop
/usr/local/hadoop/bin/hadoop

5. Start Hadoop

# Run hadoop-daemon.sh start journalnode on all five servers, as the hadoop user
# Only namenode01's output is shown
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-hadoop-journalnode-namenode01.out
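Once the daemon has been started on every node, each should show a JournalNode process in jps. A sketch to drive the check from namenode01 over the passwordless ssh set up earlier (dry run: echo prints the commands; drop it to execute):

```shell
# Confirm a JournalNode process is running on every node.
for n in namenode01 namenode02 datanode01 datanode02 datanode03; do
    echo ssh "$n" jps
done
```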

# On namenode01, format the NameNode
[hadoop@namenode01 ~]$ hadoop namenode -format


This article comes from the "ly36843运维" blog; please retain this source when reposting: http://ly36843.blog.51cto.com/3120113/1768665
