Output similar to the following indicates the installation succeeded.
First, configure mutual access between the machines:
1. Configure automatic SSH login (on the master machine):
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
When it finishes, two files are generated in ~/.ssh/ (under the home directory): id_dsa and id_dsa.pub.
Then append id_dsa.pub to the authorized keys (at this point there is no authorized_keys file yet):
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
After that, you can log in to the local machine without a password:
$ ssh localhost
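If `ssh localhost` still prompts for a password, the usual cause is file permissions: sshd silently ignores public keys when `~/.ssh` or `authorized_keys` is group- or world-writable. A minimal sketch of tightening them (same paths as generated above):

```shell
# sshd refuses public-key auth when ~/.ssh or authorized_keys is too open,
# and falls back to asking for a password.
SSH_DIR="$HOME/.ssh"
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"                  # directory: owner-only
chmod 600 "$SSH_DIR/authorized_keys"  # key file: owner read/write only
```

The same permissions must hold on every slave node's home directory as well.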
2. Append the master's id_dsa.pub to the authorized_keys on the slave machines (using slave1 as an example):
# Copy the master's id_dsa.pub file (run this on the master machine)
$ scp id_dsa.pub redmap@192.168.1.2:/home/redmap/
Note: ssh-keygen only needs to be run on the master node. Once the directory structure exists on the other nodes, copy the keys just generated on the master to the same directory on the slaves via scp.
In practice we copied id_dsa.pub to the other slave nodes by hand rather than with scp; copying it manually is simplest, since the file permissions then stay consistent.
Log in to 192.168.1.2, change to the home directory, and run:
$ cat id_dsa.pub >> .ssh/authorized_keys
After this, the master can SSH to slave1 without entering a password.
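With more than one slave, the scp-and-append steps above repeat per node. A hypothetical sketch that writes the per-slave commands into a script you can review before running (the slave list and the `redmap` user are assumptions taken from the example above):

```shell
# Emit the key-distribution commands for each slave into push_keys.sh
# instead of typing them by hand. Edit SLAVES/REMOTE_USER for your cluster.
SLAVES="192.168.1.2 192.168.1.3"
REMOTE_USER=redmap
: > push_keys.sh
for ip in $SLAVES; do
  echo "scp ~/.ssh/id_dsa.pub ${REMOTE_USER}@${ip}:/home/${REMOTE_USER}/" >> push_keys.sh
  echo "ssh ${REMOTE_USER}@${ip} 'cat ~/id_dsa.pub >> ~/.ssh/authorized_keys'" >> push_keys.sh
done
cat push_keys.sh
```

Each generated line mirrors one manual step from the text: the scp pushes the key, the ssh appends it on the remote side.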
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
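These exports only last for the current shell session. A sketch of making them persistent by appending them to ~/.bashrc (this assumes HADOOP_HOME is already exported earlier in that file):

```shell
# Append the Hadoop environment variables to ~/.bashrc so every new shell
# picks them up; HADOOP_HOME itself is assumed to be set earlier in the file.
cat >> ~/.bashrc <<'EOF'
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
EOF
```

Run `source ~/.bashrc` afterwards (or open a new shell) so the current session sees the variables.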
<property>
<name>io.native.lib.available</name>
<value>true</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://host:9000</value><!-- this machine's IP address or hostname; choose your own port -->
<description>The name of the default file system. Either the literal string "local" or a host:port for NDFS.</description>
<final>true</final>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/tmp</value>
</property>
</configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/hadoop23/dfs/name</value><!-- local directory for the namenode's metadata; choose your own -->
<description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy.</description>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/hadoop23/dfs/data</value><!-- local directory for the datanode's blocks; choose your own -->
<description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
<final>true</final>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
file:/usr/hadoop23/dfs/name and file:/usr/hadoop23/dfs/data are directories on the local machine used to store the name table and block data; these paths must be written as full URIs.
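These directories must exist and be writable by the Hadoop user before the namenode is formatted. A sketch that creates them, with the root pulled out into a variable (the `$HOME` default here is only for illustration; on a real node set it to /usr/hadoop23/dfs as in the config above):

```shell
# Create the name/data directories referenced by hdfs-site.xml above.
# DFS_ROOT is an assumption: set it to /usr/hadoop23/dfs on a real cluster.
DFS_ROOT="${DFS_ROOT:-$HOME/hadoop23/dfs}"
mkdir -p "$DFS_ROOT/name" "$DFS_ROOT/data"
chmod 755 "$DFS_ROOT" "$DFS_ROOT/name" "$DFS_ROOT/data"
```

The same directories are needed on every node that runs the corresponding daemon (name on the namenode, data on each datanode).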
<property>
<name>yarn.resourcemanager.address</name>
<value>host:port</value><!-- this machine's IP address or hostname; choose your own port -->
<description>The host is the hostname of the ResourceManager and the port is the port on which the clients can talk to the Resource Manager.</description>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>host:port</value><!-- this machine's IP address or hostname; choose your own port -->
<description>host is the hostname of the resourcemanager and port is the port on which the Applications in the cluster talk to the Resource Manager.</description>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>host:port</value><!-- this machine's IP address or hostname; choose your own port -->
<description>host is the hostname of the resource manager and port is the port on which the NodeManagers contact the Resource Manager.</description>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>host:8033</value><!-- this machine's IP address or hostname; choose your own port -->
<description>host is the hostname of the resource manager and port is the port on which administrative commands reach the Resource Manager.</description>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>host:8088</value><!-- this machine's IP address or hostname; choose your own port -->
<description>host is the hostname of the resource manager and port is the port on which the Resource Manager web UI is served.</description>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value><!-- Hadoop 2.2+ requires the underscore form; older releases used mapreduce.shuffle -->
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
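After editing the three *-site.xml files, it is easy to leave a typo in a property name or value that the daemons only complain about at startup. A sketch of pulling a property value back out with sed to eyeball it after editing (the file written here is a stand-in for your real core-site.xml; point the sed line at the actual file instead):

```shell
# Write a minimal stand-in config, then extract fs.default.name with sed
# (assumes the <value> sits on the line right after the <name>, as above).
cat > core-site-check.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://host:9000</value>
  </property>
</configuration>
EOF
sed -n '/<name>fs.default.name<\/name>/{n;s:.*<value>\(.*\)</value>.*:\1:p}' core-site-check.xml
# → hdfs://host:9000
```

If the printed value is empty or mangled, re-check the XML before starting the daemons.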
(2) On a multi-node distributed cluster, jps shows something like:
8451 SecondaryNameNode
8721 NodeManager
8592 ResourceManager
9384 Jps
8152 NameNode
8282 DataNode
Original article: http://blog.csdn.net/jyl1798/article/details/42521927