Hadoop 2.6.0 Distributed Deployment Reference Manual
A Word version of this reference manual can be downloaded from: http://download.csdn.net/detail/u012875880/8285323
In this example, the operating system is CentOS 7.0, the JDK is Oracle HotSpot 1.7, the Hadoop version is Apache Hadoop 2.6.0, and the operating user is hadoop.
The cluster nodes are listed below:
| Hostname | IP Address | Role |
| --- | --- | --- |
| ResourceManager | 172.15.0.2 | ResourceManager & MR JobHistory Server |
| NameNode | 172.15.0.3 | NameNode |
| SecondaryNameNode | 172.15.0.4 | SecondaryNameNode |
| DataNode01 | 172.15.0.5 | DataNode & NodeManager |
| DataNode02 | 172.15.0.6 | DataNode & NodeManager |
| DataNode03 | 172.15.0.7 | DataNode & NodeManager |
| DataNode04 | 172.15.0.8 | DataNode & NodeManager |
| DataNode05 | 172.15.0.9 | DataNode & NodeManager |
Note: in the table above, "&" joins multiple roles; for example, the host "ResourceManager" carries two roles, ResourceManager and MR JobHistory Server.
useradd hadoop
The user "hadoop" is the user that installs and operates the Hadoop cluster.
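A password is normally set for the new account as well (run as root; this step is implied but not shown in the original):
passwd hadoop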
CentOS 7 ships with OpenJDK 1.7; in this example it is replaced with Oracle HotSpot 1.7, installed by unpacking the binary package into /opt/.
① Check the JDK RPM packages currently installed:
rpm -qa | grep jdk
java-1.7.0-openjdk-1.7.0.51-2.4.5.5.el7.x86_64
java-1.7.0-openjdk-headless-1.7.0.51-2.4.5.5.el7.x86_64
② Remove the bundled JDK:
rpm -e --nodeps java-1.7.0-openjdk-1.7.0.51-2.4.5.5.el7.x86_64
rpm -e --nodeps java-1.7.0-openjdk-headless-1.7.0.51-2.4.5.5.el7.x86_64
③ Install the specified JDK
Change to the directory containing the installation package and unpack it, as sketched below.
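A minimal sketch of the unpack step; the tarball name jdk-7u71-linux-x64.tar.gz and the unpacked directory jdk1.7.0_71 are assumptions that depend on the exact release downloaded:
cd /path/to/packages                   # directory holding the downloaded tarball
tar -xzf jdk-7u71-linux-x64.tar.gz -C /opt/
mv /opt/jdk1.7.0_71 /opt/jdk1.7        # rename to match the JAVA_HOME set below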
④ Configure environment variables
Edit ~/.bashrc or /etc/profile and append the following:
#JAVA
export JAVA_HOME=/opt/jdk1.7
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=$JAVA_HOME/lib
export CLASSPATH=$CLASSPATH:$JAVA_HOME/jre/lib
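Reload the profile and verify the installation (a quick check, not a step from the original text):
source ~/.bashrc
java -version    # should report the Oracle HotSpot 1.7 JVM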
① Passwordless SSH login must be set up among the eight hosts listed in the table above.
② In the hadoop user's home directory, generate a key pair with the command ssh-keygen -t rsa.
③ Create the public-key authentication file authorized_keys and write the contents of the id_rsa.pub file in the generated ~/.ssh directory into it:
cat id_rsa.pub > authorized_keys
④ Set the permissions of the ~/.ssh directory and the authorized_keys file:
chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys
⑤ Repeat the steps above on every node, and copy each node's ~/.ssh/id_rsa.pub public key to every other host (a scripted sketch follows the note below).
All of the above can also be accomplished with a single command line:
rm -rf ~/.ssh; ssh-keygen -t rsa; chmod 700 ~/.ssh; cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys
Note: on CentOS 6, passwordless login can also be set up with DSA keys (ssh-keygen -t dsa); on CentOS 7 only the RSA method works here, since with DSA passwordless login succeeds only to the local host, not to the other machines.
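Step ⑤ can be scripted with ssh-copy-id; a hedged sketch, run as the hadoop user on each node (each invocation prompts for the remote password until the keys are in place):
for host in ResourceManager NameNode SecondaryNameNode DataNode01 DataNode02 DataNode03 DataNode04 DataNode05; do
    ssh-copy-id hadoop@$host    # appends the local id_rsa.pub to the remote ~/.ssh/authorized_keys
done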
Edit the /etc/hosts file on every node and append the following:
172.15.0.2 ResourceManager
172.15.0.3 NameNode
172.15.0.4 SecondaryNameNode
172.15.0.5 DataNode01
172.15.0.6 DataNode02
172.15.0.7 DataNode03
172.15.0.8 DataNode04
172.15.0.9 DataNode05
172.15.0.5 NodeManager01
172.15.0.6 NodeManager02
172.15.0.7 NodeManager03
172.15.0.8 NodeManager04
172.15.0.9 NodeManager05
The following operations are common to all nodes, i.e., identical on every node.
Repeat the following steps on each node:
① Copy the Hadoop installation package (hadoop-2.6.0.tar) to /opt and unpack it:
tar -xvf hadoop-2.6.0.tar
The unpacked hadoop-2.6.0 directory (/opt/hadoop-2.6.0) is the Hadoop installation root.
② Change the owner of the installation directory hadoop-2.6.0 to the hadoop user:
chown -R hadoop:hadoop /opt/hadoop-2.6.0
③ Add environment variables:
#hadoop
export HADOOP_HOME=/opt/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
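Reload the profile and confirm that Hadoop is on the PATH (a quick check):
source ~/.bashrc
hadoop version    # should print Hadoop 2.6.0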
Unpack the configuration files below and distribute them into the $HADOOP_HOME/etc/hadoop directory on every node; if prompted about overwriting existing files, confirm. A scripted distribution sketch follows the note below.
Hadoop configuration files: http://download.csdn.net/detail/u012875880/8285261
Note: for the configuration parameter settings of each node, see "Appendix 1" or "Appendix 2" later in this manual.
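One way to script the distribution, run from the ResourceManager node; a hedged sketch that assumes the edited files already sit in $HADOOP_HOME/etc/hadoop on the current node and that each node's slaves file is adjusted afterwards:
for host in NameNode SecondaryNameNode DataNode01 DataNode02 DataNode03 DataNode04 DataNode05; do
    scp $HADOOP_HOME/etc/hadoop/*.xml $HADOOP_HOME/etc/hadoop/hadoop-env.sh hadoop@$host:$HADOOP_HOME/etc/hadoop/
done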
After installation, log in to the NameNode and run hdfs namenode -format to format the cluster's HDFS file system (formatting acts on the NameNode's metadata directory, so it must be run there).
Note: if the HDFS file system has been formatted before, then prior to reformatting, empty the dfs.namenode.name.dir directory on the NameNode and the dfs.datanode.data.dir directory on every DataNode (in this example both live under /home/hadoop/hadoopdata).
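A sketch of that cleanup, using the directories from this example:
# on the NameNode
rm -rf /home/hadoop/hadoopdata/hdfs/namenode/*
# on each DataNode
rm -rf /home/hadoop/hadoopdata/hdfs/datanode/*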
Log in to the following hosts and run the corresponding commands:
① Log in to ResourceManager and run start-yarn.sh to start YARN, the cluster resource management system.
② Log in to NameNode and run start-dfs.sh to start the cluster HDFS file system.
③ Run jps on each node (SecondaryNameNode, DataNode01 through DataNode05, and so on) to verify that the expected Java processes are running:
Process on the ResourceManager node: ResourceManager
Process on the NameNode node: NameNode
Process on the SecondaryNameNode node: SecondaryNameNode
Processes on each DataNode node: DataNode & NodeManager
If all of the above checks pass, the Hadoop cluster has started successfully.
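Note that start-yarn.sh does not start the MR JobHistory Server; if it is to run on the ResourceManager node as planned in the table at the top of this manual, start it there separately:
mr-jobhistory-daemon.sh start historyserver
A few further health checks (the web UI ports below are the Hadoop 2.6 defaults):
hdfs dfsadmin -report    # should list five live DataNodes
# NameNode web UI:        http://NameNode:50070
# ResourceManager web UI: http://ResourceManager:8088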
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://NameNode:9000</value>
    <description>NameNode URI</description>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>SecondaryNameNode:50090</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>Execution framework set to Hadoop YARN.</description>
  </property>
</configuration>
yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Shuffle service that needs to be set for Map Reduce applications.</description>
  </property>
</configuration>
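After editing, a quick well-formedness check with xmllint (part of libxml2; an extra sanity check, not a step from the original text) would catch mistakes such as an unclosed property tag:
xmllint --noout $HADOOP_HOME/etc/hadoop/core-site.xml $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HADOOP_HOME/etc/hadoop/mapred-site.xml $HADOOP_HOME/etc/hadoop/yarn-site.xml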
In hadoop-env.sh, JAVA_HOME must point to the current Java installation directory:
export JAVA_HOME=/opt/jdk1.7
The master nodes of the cluster (NameNode, ResourceManager) must each be configured with the slave nodes they own:
The slaves file on the NameNode contains:
DataNode01
DataNode02
DataNode03
DataNode04
DataNode05
The slaves file on the ResourceManager contains:
NodeManager01
NodeManager02
NodeManager03
NodeManager04
NodeManager05
Note: the parameters below whose values reference this cluster's hostnames and paths are required and must be configured explicitly; all other parameters are shown with their default values.
<configuration>
<!--Configurations for NameNode (SecondaryNameNode), DataNode, and NodeManager:-->
<property>
<name>fs.defaultFS</name>
<value>hdfs://NameNode:9000</value>
<description>NameNode URI</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
<description>Size of read/write buffer used in SequenceFiles. The default value is 131072.</description>
</property>
</configuration>
The property fs.defaultFS specifies the NameNode address, in the form hdfs://hostname(or IP):port.
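With fs.defaultFS set, HDFS paths without an explicit scheme resolve against this NameNode, so the following two commands are equivalent:
hdfs dfs -ls /
hdfs dfs -ls hdfs://NameNode:9000/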
<configuration>
<!--Configurations for NameNode:-->
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>SecondaryNameNode:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>268435456</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property>
<!--Configurations for DataNode:-->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>
<configuration>
<!--Configurations for MapReduce Applications:-->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>Execution framework set to Hadoop YARN.</description>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>1024</value>
<description>Larger resource limit for maps.</description>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx1024M</value>
<description>Larger heap-size for child jvms of maps.</description>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>1024</value>
<description>Larger resource limit for reduces.</description>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx2560M</value>
<description>Larger heap-size for child jvms of reduces.</description>
</property>
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>512</value>
<description>Higher memory-limit while sorting data for efficiency.</description>
</property>
<property>
<name>mapreduce.task.io.sort.factor</name>
<value>10</value>
<description>More streams merged at once while sorting files.</description>
</property>
<property>
<name>mapreduce.reduce.shuffle.parallelcopies</name>
<value>5</value>
<description>Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.</description>
</property>
<!--Configurations for MapReduce JobHistory Server:-->
<property>
<name>mapreduce.jobhistory.address</name>
<value>ResourceManager:10020</value>
<description>MapReduce JobHistory Server host:port. The default port is 10020.</description>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>ResourceManager:19888</value>
<description>MapReduce JobHistory Server Web UI host:port. The default port is 19888.</description>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/mr-history/tmp</value>
<description>Directory where history files are written by MapReduce jobs. The default is "/mr-history/tmp".</description>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/mr-history/done</value>
<description>Directory where history files are managed by the MR JobHistory Server. The default value is "/mr-history/done".</description>
</property>
</configuration>
<configuration>
<!--Configurations for ResourceManager and NodeManager:-->
<property>
<name>yarn.acl.enable</name>
<value>false</value>
<description>Enable ACLs? Defaults to false. Valid values are "true" and "false".</description>
</property>
<property>
<name>yarn.admin.acl</name>
<value>*</value>
<description>ACL to set admins on the cluster. ACLs are of the form comma-separated-users space comma-separated-groups. Defaults to the special value of *, which means anyone. The special value of just a space means no one has access.</description>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>false</value>
<description>Configuration to enable or disable log aggregation</description>
</property>
<!--Configurations for ResourceManager:-->
<property>
<name>yarn.resourcemanager.address</name>
<value>ResourceManager:8032</value>
<description>ResourceManager host:port for clients to submit jobs. NOTE: if set, this overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>ResourceManager:8030</value>
<description>ResourceManager host:port for ApplicationMasters to talk to the Scheduler to obtain resources. NOTE: if set, this overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>ResourceManager:8031</value>
<description>ResourceManager host:port for NodeManagers. NOTE: if set, this overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>ResourceManager:8033</value>
<description>ResourceManager host:port for administrative commands. NOTE: if set, this overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>ResourceManager:8088</value>
<description>ResourceManager web-ui host:port. NOTE: if set, this overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ResourceManager</value>
<description>ResourceManager host</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
<description>ResourceManager Scheduler class: CapacityScheduler (recommended), FairScheduler (also recommended), or FifoScheduler. The default value is "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler".</description>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
<description>Minimum limit of memory to allocate to each container request at the ResourceManager. NOTE: in MB.</description>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8192</value>
<description>Maximum limit of memory to allocate to each container request at the ResourceManager. NOTE: in MB.</description>
</property>
<!--Configurations for History Server:-->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>-1</value>
<description>How long to keep aggregation logs before deleting them. -1 disables. Be careful, set this too small and you will spam the name node.</description>
</property>
<property>
<name>yarn.log-aggregation.retain-check-interval-seconds</name>
<value>-1</value>
<description>Time between checks for aggregated log retention. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful, set this too small and you will spam the name node.</description>
</property>
<!--Configurations for NodeManager:-->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>8192</value>
<description>Resource i.e. available physical memory, in MB, for given NodeManager.
The default value is 8192.
NOTES:Defines total available resources on the NodeManager to be made available to running containers
</description>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
<description>Maximum ratio by which virtual memory usage of tasks may exceed physical memory.
The default value is 2.1
NOTES:The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio.
</description>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>${hadoop.tmp.dir}/nm-local-dir</value>
<description>Comma-separated list of paths on the local filesystem where intermediate data is written.
The default value is "${hadoop.tmp.dir}/nm-local-dir"
NOTES:Multiple paths help spread disk i/o.
</description>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>${yarn.log.dir}/userlogs</value>
<description>Comma-separated list of paths on the local filesystem where logs are written
The default value is "${yarn.log.dir}/userlogs"
NOTES:Multiple paths help spread disk i/o.
</description>
</property>
<property>
<name>yarn.nodemanager.log.retain-seconds</name>
<value>10800</value>
<description>Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled.
The default value is "10800"
</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/logs</value>
<description>HDFS directory where the application logs are moved on application completion. Need to set appropriate permissions. Only applicable if log-aggregation is enabled.
The default value is "/logs" or "/tmp/logs"
</description>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir-suffix</name>
<value>logs</value>
<description>Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam} Only applicable if log-aggregation is enabled.</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
<description>Shuffle service that needs to be set for Map Reduce applications.</description>
</property>
</configuration>
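Once these files are distributed, effective values can be spot-checked from any node with the stock HDFS CLI:
hdfs getconf -confKey fs.defaultFS    # expect hdfs://NameNode:9000
hdfs getconf -confKey dfs.blocksize   # expect 268435456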
In hadoop-env.sh, JAVA_HOME must point to the current Java installation directory:
export JAVA_HOME=/opt/jdk1.7
The master nodes of the cluster (NameNode, ResourceManager) must each be configured with the slave nodes they own:
The slaves file on the NameNode contains:
DataNode01
DataNode02
DataNode03
DataNode04
DataNode05
The slaves file on the ResourceManager contains:
NodeManager01
NodeManager02
NodeManager03
NodeManager04
NodeManager05
This section deals with important parameters to be specified in the given configuration files:
o Configurations for core-site.xml:

| Parameter | Value | Notes |
| --- | --- | --- |
| fs.defaultFS | NameNode URI | hdfs://host:port/ |
| io.file.buffer.size | 131072 | Size of read/write buffer used in SequenceFiles. |
o Configurations for NameNode:
| Parameter | Value | Notes |
| --- | --- | --- |
| dfs.namenode.name.dir | Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently. | If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. |
| dfs.namenode.hosts / dfs.namenode.hosts.exclude | List of permitted/excluded DataNodes. | If necessary, use these files to control the list of allowable DataNodes. |
| dfs.blocksize | 268435456 | HDFS blocksize of 256MB for large file-systems. |
| dfs.namenode.handler.count | 100 | More NameNode server threads to handle RPCs from a large number of DataNodes. |
o Configurations for DataNode:
| Parameter | Value | Notes |
| --- | --- | --- |
| dfs.datanode.data.dir | Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks. | If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. |
o Configurations for ResourceManager and NodeManager:
| Parameter | Value | Notes |
| --- | --- | --- |
| yarn.acl.enable | true / false | Enable ACLs? Defaults to false. |
| yarn.admin.acl | Admin ACL | ACL to set admins on the cluster. ACLs are of the form comma-separated-users space comma-separated-groups. Defaults to the special value of *, which means anyone. The special value of just a space means no one has access. |
| yarn.log-aggregation-enable | false | Configuration to enable or disable log aggregation. |
o Configurations for ResourceManager:
| Parameter | Value | Notes |
| --- | --- | --- |
| yarn.resourcemanager.address | ResourceManager host:port for clients to submit jobs. | host:port |
| yarn.resourcemanager.scheduler.address | ResourceManager host:port for ApplicationMasters to talk to the Scheduler to obtain resources. | host:port |
| yarn.resourcemanager.resource-tracker.address | ResourceManager host:port for NodeManagers. | host:port |
| yarn.resourcemanager.admin.address | ResourceManager host:port for administrative commands. | host:port |
| yarn.resourcemanager.webapp.address | ResourceManager web-ui host:port. | host:port |
| yarn.resourcemanager.hostname | ResourceManager host. | host |
| yarn.resourcemanager.scheduler.class | ResourceManager Scheduler class. | CapacityScheduler (recommended), FairScheduler (also recommended), or FifoScheduler |
| yarn.scheduler.minimum-allocation-mb | Minimum limit of memory to allocate to each container request at the ResourceManager. | In MBs |
| yarn.scheduler.maximum-allocation-mb | Maximum limit of memory to allocate to each container request at the ResourceManager. | In MBs |
| yarn.resourcemanager.nodes.include-path / yarn.resourcemanager.nodes.exclude-path | List of permitted/excluded NodeManagers. | If necessary, use these files to control the list of allowable NodeManagers. |
o Configurations for NodeManager:
| Parameter | Value | Notes |
| --- | --- | --- |
| yarn.nodemanager.resource.memory-mb | Resource, i.e. available physical memory, in MB, for a given NodeManager. | Defines the total available resources on the NodeManager to be made available to running containers. |
| yarn.nodemanager.vmem-pmem-ratio | Maximum ratio by which virtual memory usage of tasks may exceed physical memory. | The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio. |
| yarn.nodemanager.local-dirs | Comma-separated list of paths on the local filesystem where intermediate data is written. | Multiple paths help spread disk i/o. |
| yarn.nodemanager.log-dirs | Comma-separated list of paths on the local filesystem where logs are written. | Multiple paths help spread disk i/o. |
| yarn.nodemanager.log.retain-seconds | 10800 | Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled. |
| yarn.nodemanager.remote-app-log-dir | /logs | HDFS directory where the application logs are moved on application completion. Need to set appropriate permissions. Only applicable if log-aggregation is enabled. |
| yarn.nodemanager.remote-app-log-dir-suffix | logs | Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam}. Only applicable if log-aggregation is enabled. |
| yarn.nodemanager.aux-services | mapreduce_shuffle | Shuffle service that needs to be set for Map Reduce applications. |
o Configurations for History Server (Needs to be moved elsewhere):
| Parameter | Value | Notes |
| --- | --- | --- |
| yarn.log-aggregation.retain-seconds | -1 | How long to keep aggregation logs before deleting them. -1 disables. Be careful: set this too small and you will spam the name node. |
| yarn.log-aggregation.retain-check-interval-seconds | -1 | Time between checks for aggregated log retention. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful: set this too small and you will spam the name node. |
o Configurations for MapReduce Applications:
| Parameter | Value | Notes |
| --- | --- | --- |
| mapreduce.framework.name | yarn | Execution framework set to Hadoop YARN. |
| mapreduce.map.memory.mb | 1536 | Larger resource limit for maps. |
| mapreduce.map.java.opts | -Xmx1024M | Larger heap-size for child jvms of maps. |
| mapreduce.reduce.memory.mb | 3072 | Larger resource limit for reduces. |
| mapreduce.reduce.java.opts | -Xmx2560M | Larger heap-size for child jvms of reduces. |
| mapreduce.task.io.sort.mb | 512 | Higher memory-limit while sorting data for efficiency. |
| mapreduce.task.io.sort.factor | 100 | More streams merged at once while sorting files. |
| mapreduce.reduce.shuffle.parallelcopies | 50 | Higher number of parallel copies run by reduces to fetch outputs from a very large number of maps. |
o Configurations for MapReduce JobHistory Server:
| Parameter | Value | Notes |
| --- | --- | --- |
| mapreduce.jobhistory.address | MapReduce JobHistory Server host:port | Default port is 10020. |
| mapreduce.jobhistory.webapp.address | MapReduce JobHistory Server Web UI host:port | Default port is 19888. |
| mapreduce.jobhistory.intermediate-done-dir | /mr-history/tmp | Directory where history files are written by MapReduce jobs. |
| mapreduce.jobhistory.done-dir | /mr-history/done | Directory where history files are managed by the MR JobHistory Server. |
Original article: http://blog.csdn.net/zhu_xun/article/details/42077311