Create a hadoop directory under /usr and grant ownership to the hadoop user (on master):
[hadoop@master usr]$ sudo mkdir hadoop
[hadoop@master usr]$ ls -al
total 156
drwxr-xr-x. 2 root root 4096 Jul 31 00:17 hadoop
[hadoop@master usr]$ sudo chown -R hadoop:hadoop hadoop
[hadoop@master usr]$ ls -al
total 156
drwxr-xr-x. 2 hadoop hadoop 4096 Jul 31 00:17 hadoop
Unpack Hadoop under /usr/hadoop, then delete the tarball once extraction finishes:
[hadoop@master hadoop]$ tar -zxvf hadoop-1.2.1.tar.gz
[hadoop@master hadoop]$ ls
hadoop-1.2.1  hadoop-1.2.1.tar.gz
[hadoop@master hadoop]$ rm hadoop-1.2.1.tar.gz
[hadoop@master hadoop]$ ls
hadoop-1.2.1
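If the tarball is not already on the machine, it can be fetched from the Apache release archive first (my addition, not part of the original notes; the URL points at the archived 1.2.1 release):

[hadoop@master hadoop]$ wget http://archive.apache.org/dist/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz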
Add the Hadoop environment variables to /etc/profile:
#hadoop environment
export HADOOP_HOME=/usr/hadoop/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin

"/etc/profile" 85L, 2070C written
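To pick up the new variables without logging out again, reload the profile and confirm the paths resolve (a quick sanity check of my own, not from the original notes):

[hadoop@master ~]$ source /etc/profile
[hadoop@master ~]$ echo $HADOOP_HOME
/usr/hadoop/hadoop-1.2.1
[hadoop@master ~]$ which hadoop
/usr/hadoop/hadoop-1.2.1/bin/hadoop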
Create a tmp directory under the Hadoop install directory. Its path must be set as hadoop.tmp.dir in core-site.xml below: the default location lives under /tmp, which is wiped on reboot, after which the NameNode has to be re-formatted.
[hadoop@master hadoop-1.2.1]$ mkdir tmp
[hadoop@master hadoop-1.2.1]$ ls -al
total 8404
drwxrwxr-x. 2 hadoop hadoop 4096 Jul 31 00:44 tmp
Configure the Java variable in hadoop-env.sh, pointing it at the JDK directory:
[hadoop@master conf]$ pwd
/usr/hadoop/hadoop-1.2.1/conf
[hadoop@master conf]$ vim hadoop-env.sh

#JAVA environment setting
export JAVA_HOME=/usr/java/jdk1.7.0_65
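It is worth confirming that the JDK really lives at that path before saving (my own check, assuming the JDK installed in the earlier notes sits at /usr/java/jdk1.7.0_65):

[hadoop@master conf]$ ls /usr/java/jdk1.7.0_65/bin/java
/usr/java/jdk1.7.0_65/bin/java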
The Hadoop configuration files live in /usr/hadoop/hadoop-1.2.1/conf.
The settings are split into three parts (core, HDFS, and Map/Reduce), held in core-site.xml, hdfs-site.xml, and mapred-site.xml respectively.
core-site.xml and hdfs-site.xml configure the HDFS side; core-site.xml and mapred-site.xml configure the MapReduce side.
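All three files share the same layout: each setting is a name/value property inside a <configuration> element (a generic skeleton with placeholder names, not one of the actual files):

<configuration>
  <property>
    <name>some.property.name</name>
    <value>some-value</value>
    <description>An optional human-readable note.</description>
  </property>
</configuration>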
Configure core-site.xml: set the HDFS access URI and port, and the temporary file directory:
[hadoop@master conf]$ vim core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- file system properties -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.15.5.200:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/hadoop-1.2.1/tmp</value>
    <description>temporary directories.</description>
  </property>
</configuration>
Configure hdfs-site.xml: override dfs.replication, the number of block replicas. The default is 3; with fewer than 3 slaves the system reports errors:
[hadoop@master conf]$ vim hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Configure mapred-site.xml: point the JobTracker at the master (note that mapred.job.tracker takes a bare host:port, with no hdfs:// scheme):
[hadoop@master conf]$ vim mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>10.15.5.200:9001</value>
  </property>
</configuration>
Configure masters and slaves (in Hadoop 1.x the masters file lists where the SecondaryNameNode runs, and slaves lists the DataNode/TaskTracker hosts). The slaves file only matters on the master; the other nodes can do without it:
[hadoop@master conf]$ vi masters
10.15.5.200
[hadoop@master conf]$ vi slaves
10.15.5.201
10.15.5.202
--------------- Install Hadoop on the slaves and configure the environment ---------------
Follow the same pattern as on master. One point to note: after creating /usr/hadoop as root and granting the hadoop user ownership, switch back to the hadoop user for everything that follows, to avoid unnecessary permission problems. The copy step sketched below can replace most of the manual repetition.
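Rather than re-running every step by hand on each slave, the already-configured install can be pushed from master (a sketch of my own; it assumes the passwordless SSH set up in the earlier notes, and that /usr/hadoop already exists on each slave with hadoop-user ownership):

# run on master as the hadoop user; repeat for 10.15.5.202
[hadoop@master ~]$ scp -r /usr/hadoop/hadoop-1.2.1 hadoop@10.15.5.201:/usr/hadoop/
# /etc/profile is owned by root, so add the same HADOOP_HOME lines on each slave separately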
Hadoop 1.2.1 Installation Notes 3: Hadoop Configuration
Original post: http://ciscolang.blog.51cto.com/8976580/1538841