I. System environment: Ubuntu running as a VMware guest, with JDK 1.7.0_67 and Hadoop 2.5.0 (the versions used throughout the steps below).
II. Download the JDK and Hadoop, and upload them to the Ubuntu system
For how to exchange files between the Linux guest in VMware and the Windows host, see: http://blog.chinaunix.net/uid-27717694-id-3834143.html
III. Set up the hadoop user:
sudo addgroup hadoop                  # create the hadoop group
sudo adduser -ingroup hadoop hadoop   # create a hadoop user in the hadoop group
sudo gedit /etc/sudoers               # grant the hadoop user sudo privileges
Below the line that sets root's privileges, add:
hadoop ALL=(ALL:ALL) ALL
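To confirm the entry took effect, the hadoop user's sudo rules can be listed directly (a minimal check; using visudo instead of gedit is also safer, since it validates the syntax of /etc/sudoers before saving):

# list the sudo rules that apply to the hadoop user;
# the new (ALL:ALL) ALL rule should appear in the output
sudo -l -U hadoop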
IV. Install SSH and configure passwordless login
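If the SSH server is not yet present, it has to be installed first (an assumed prerequisite; the original text starts at key generation):

sudo apt-get install openssh-server   # the standard sshd package on Ubuntu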
ssh-keygen -t rsa -P ""                           # generate an RSA key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize the new key for local login
chmod 644 ~/.ssh/authorized_keys
sudo gedit /etc/ssh/sshd_config
Uncomment the line: AuthorizedKeysFile %h/.ssh/authorized_keys
3. ssh localhost — it should now log in without prompting for a password.
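If the login still prompts for a password, restarting the SSH service and testing non-interactively helps narrow it down (a small sketch; ssh is the default service name on Ubuntu):

sudo service ssh restart                       # reload sshd_config
ssh -o BatchMode=yes localhost 'echo ssh OK'   # fails instead of prompting if key auth is broken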
V. Install the JDK
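Assuming the uploaded JDK tarball is named jdk-7u67-linux-x64.tar.gz (the file name is an assumption; only the target path comes from the profile entries below), unpack it to the expected location:

sudo mkdir -p /usr/local/java
# produces /usr/local/java/jdk1.7.0_67, matching JAVA_HOME below
sudo tar -xzf jdk-7u67-linux-x64.tar.gz -C /usr/local/java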
Here we take the global approach: edit /etc/profile, which holds environment variables shared by all users.
sudo gedit /etc/profile
Append at the end:
export JAVA_HOME=/usr/local/java/jdk1.7.0_67
export JRE_HOME=/usr/local/java/jdk1.7.0_67/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
Then run source /etc/profile (or log out and back in) so the changes take effect in the current shell.
7. Verify the installation
java -version
On success it prints something like:
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
VI. Install Hadoop
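Assuming the uploaded Hadoop tarball is named hadoop-2.5.0.tar.gz (the file name is an assumption; the target directory comes from HADOOP_INSTALL below), unpack it into the hadoop user's home directory:

tar -xzf hadoop-2.5.0.tar.gz -C /home/hadoop   # yields /home/hadoop/hadoop-2.5.0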
Configure: sudo gedit /etc/profile
Add:
#HADOOP VARIABLES START
export HADOOP_INSTALL=/home/hadoop/hadoop-2.5.0
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END
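After saving, reload the profile and check that the hadoop command resolves (hadoop version is part of the stock CLI):

source /etc/profile
hadoop version   # should report Hadoop 2.5.0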
4. Configure core-site.xml, which holds configuration that Hadoop reads at startup
sudo gedit /home/hadoop/hadoop-2.5.0/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
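A quick way to confirm Hadoop picks the value up once the environment is in place (hdfs getconf ships with the Hadoop 2.x CLI; fs.default.name is the deprecated spelling of fs.defaultFS, so a deprecation warning is expected):

hdfs getconf -confKey fs.default.name   # should print hdfs://localhost:9000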
5. Configure yarn-site.xml, which holds the configuration YARN (and thus MapReduce jobs) reads at startup
sudo gedit /home/hadoop/hadoop-2.5.0/etc/hadoop/yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
6. Create and configure mapred-site.xml
cd /home/hadoop/hadoop-2.5.0/etc/hadoop
cp mapred-site.xml.template mapred-site.xml
Then add to mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
7. Configure hdfs-site.xml
sudo gedit /home/hadoop/hadoop-2.5.0/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/hadoop-2.5.0/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/hadoop-2.5.0/hdfs/data</value>
    </property>
</configuration>
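The name and data directories should exist and be writable by the hadoop user before formatting (a preparatory step the original leaves implicit):

mkdir -p /home/hadoop/hadoop-2.5.0/hdfs/name /home/hadoop/hadoop-2.5.0/hdfs/data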
8. Format HDFS from the command line: hdfs namenode -format
9. Start Hadoop
start-dfs.sh: starts the NameNode, DataNode, and SecondaryNameNode
start-yarn.sh: starts the NodeManager and ResourceManager
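To confirm the daemons are up, jps (bundled with the JDK) should list all five, and the bundled examples jar gives a quick end-to-end smoke test (the jar path follows the standard 2.5.0 tarball layout):

jps   # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
# estimate pi with 2 map tasks of 5 samples each
hadoop jar $HADOOP_INSTALL/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar pi 2 5

The NameNode web UI is served at http://localhost:50070 and the ResourceManager UI at http://localhost:8088 (Hadoop 2.x defaults).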
Original post: http://www.cnblogs.com/luonet/p/3970217.html