1. Install JDK 1.8: download the archive and extract it to /usr/lib/jdk.
vim /etc/profile
# configure the JDK paths
export JAVA_HOME=/usr/lib/jdk
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
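Reload the profile and confirm the JDK is on the PATH (a quick sanity check; the exact version string depends on the JDK build you extracted):
source /etc/profile
java -version    # should print a 1.8.x version string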
2. SSH is already installed; now set up passwordless login.
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Test whether the setup works:
ssh localhost
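If ssh localhost still prompts for a password, the usual cause is file permissions on the key material; with default OpenSSH settings, tightening them fixes it:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys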
3. Download Hadoop 2.6.0 and extract it to /home/super/software/hadoop.
Set the environment variables:
sudo gedit ~/.bashrc
Add:
export JAVA_HOME=/usr/lib/jdk
export HADOOP_HOME=/home/super/software/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
Apply the changes:
source ~/.bashrc
[Note: if start-all.sh / stop-all.sh later reports "command not found", either (1) cd into the sbin directory and run the script from there, or (2) run source ~/.bashrc again.]
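To confirm the variables took effect, run a quick check (hadoop should now resolve from $HADOOP_HOME/bin):
hadoop version    # should report Hadoop 2.6.0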
4. Edit the configuration files under hadoop/etc/hadoop.
Edit hadoop-env.sh:
export JAVA_HOME=/usr/lib/jdk
Edit core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
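Note that fs.default.name is the deprecated Hadoop 1.x key; in Hadoop 2.x the preferred key is fs.defaultFS, though the old one still works and maps to it. Either way, the effective value can be checked with:
hdfs getconf -confKey fs.defaultFS    # expect hdfs://localhost:9000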
Edit yarn-site.xml:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
First copy mapred-site.xml.template to mapred-site.xml, then edit mapred-site.xml:
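The copy itself, assuming the shell is already in hadoop/etc/hadoop:
cp mapred-site.xml.template mapred-site.xml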
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
Edit hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/super/software/hadoop/hadoop_data/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/super/software/hadoop/hadoop_data/hdfs/datanode</value>
</property>
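An aside: with only one DataNode, HDFS can never actually place three replicas of a block, so pseudo-distributed guides commonly set dfs.replication to 1 to avoid permanent under-replication warnings; keep 3 only if more nodes will join later:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>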
Create the namenode and datanode storage directories:
sudo mkdir -p /home/super/software/hadoop/hadoop_data/hdfs/namenode
sudo mkdir -p /home/super/software/hadoop/hadoop_data/hdfs/datanode
Format the namenode:
hadoop namenode -format
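hadoop namenode -format still works in Hadoop 2.x but prints a deprecation warning; the current equivalent is:
hdfs namenode -format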
Start Hadoop:
start-all.sh
Run jps to check the result.
Six processes should have appeared, but only four did: DataNode and NameNode had not started. The expected six are:
ResourceManager
Jps
DataNode
SecondaryNameNode
NameNode
NodeManager
The DataNode startup log under /home/super/software/hadoop/logs showed the error: all directories in dfs.data.dir are invalid.
This was a directory permission problem: the storage directories were created with sudo, so they were owned by root and the DataNode could not use them. Fix the ownership:
sudo chown -R super:super /home/super/software/hadoop
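A quick check that the ownership change took (super:super is this machine's user and group):
ls -ld /home/super/software/hadoop/hadoop_data/hdfs/datanode    # owner should now be super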
The NameNode startup log under /home/super/software/hadoop/logs showed the error: NameNode is not formatted.
The cause was a clusterID change from having formatted too many times earlier. Stop Hadoop, format once more, and restart:
stop-all.sh
hadoop namenode -format
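Be aware that reformatting erases everything stored in HDFS. If the underlying problem is only a clusterID mismatch between NameNode and DataNode, an alternative sketch (using the data directory configured above) is to clear the DataNode storage so it re-registers against the freshly formatted NameNode:
rm -rf /home/super/software/hadoop/hadoop_data/hdfs/datanode/*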
After restarting, the problem was solved!
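To verify the cluster is healthy after the restart (ports are Hadoop 2.6 defaults):
start-all.sh
jps                      # all six processes should now appear
hdfs dfsadmin -report    # should report one live datanode
# NameNode web UI: http://localhost:50070; ResourceManager UI: http://localhost:8088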
Ubuntu 16.04: Problems Encountered Installing and Starting Hadoop 2.6.0 in Pseudo-Distributed Mode
Original post: https://www.cnblogs.com/sindy-zhang/p/10547893.html