service iptables status
service iptables stop
Many odd, hard-to-diagnose problems turn out to be caused by SELinux, so disable it as well.
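A minimal sketch of turning SELinux off, assuming the standard CentOS/RHEL layout with the mode set in /etc/selinux/config:

```shell
# Report the current SELinux mode: Enforcing, Permissive, or Disabled
getenforce

# Switch to permissive mode for the running system (lost on reboot)
setenforce 0

# Persist across reboots by rewriting the mode in the config file
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```

A reboot is needed for the config-file change to take full effect.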
useradd hadoop -d /home/hadoop
echo hadoop | passwd hadoop --stdin
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
scp id_rsa.pub xxx@ip:~/.ssh/file
cat id_rsa.pub >> authorized_keys
Make sure the following are enabled in /etc/ssh/sshd_config:
RSAAuthentication yes
PubkeyAuthentication yes
If `ssh hostnamexx` still prompts for a password, check the log in /var/log/secure for the specific error. It is usually a directory-permission problem;
the key files generally need permission 600: chmod 600 .ssh/xxx
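sshd is strict about permissions along the whole key path, so a quick sketch of the usual fix (file names assume the defaults used above):

```shell
# The .ssh directory itself must not be group/world-accessible
chmod 700 ~/.ssh

# The private key and authorized_keys must be readable by the owner only
chmod 600 ~/.ssh/id_rsa
chmod 600 ~/.ssh/authorized_keys
```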
3.1 core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hmaster:9000</value>
        <final>true</final>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>

3.2 hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

3.3 mapred-site.xml

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hmaster:8021</value>
    </property>
    <property>
        <name>mapred.local.dir</name>
        <value>/tmp/hadoop/mapred/local</value>
    </property>
    <property>
        <name>mapred.system.dir</name>
        <value>/tmp/hadoop/mapred/system</value>
    </property>
    <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>2</value>
    </property>
    <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>2</value>
    </property>
    <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx200m</value>
    </property>
    <property>
        <name>mapred.jobhistory.address</name>
        <value>hmaster:10020</value>
    </property>
    <property>
        <name>mapred.jobhistory.webapp.address</name>
        <value>hmaster:19888</value>
    </property>
</configuration>

3.4 yarn-site.xml

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hmaster:8032</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce.shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.webapp.address</name>
        <value>hmaster:8088</value>
    </property>
</configuration>
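One caveat about yarn-site.xml: on Hadoop 2.2 and later the shuffle handler was renamed, so if the NodeManager refuses to start, the value likely needs to be mapreduce_shuffle rather than mapreduce.shuffle (this depends on your Hadoop version). A quick way to inspect what is currently configured:

```shell
# Print the aux-services property and the value on the following line
# (path assumes a standard Hadoop 2.x install under $HADOOP_HOME)
grep -A1 'yarn.nodemanager.aux-services' "$HADOOP_HOME/etc/hadoop/yarn-site.xml"
```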
Note
The first two lines of /etc/hosts on the master node must be commented out:
#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 oracle-11g
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Otherwise the NameNode service will bind to 127.0.0.1, and remote nodes will have their connections refused.
PATH=$PATH:$HOME/bin:$HOME/sbin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
JAVA_HOME=/usr/local/src/jdk1.8
export HADOOP_HOME=/home/hadoop/hadoop
export JAVA_HOME=/usr/local/src/jdk1.8
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
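After adding these lines to ~/.bash_profile, reload it so the variables take effect in the current shell; a sketch, with the value matching the profile above:

```shell
# Re-read the profile in the current shell session
source ~/.bash_profile

# Confirm the variables took effect
echo "$HADOOP_HOME"        # expected: /home/hadoop/hadoop
```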
In the slaves file, write one slave IP per line, e.g.
[hadoop@hadoop1 hadoop]$ cat slaves
192.168.43.199
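The whole file can be (re)written in one step with a here-document; the path assumes a Hadoop 2.x layout where slaves lives under $HADOOP_HOME/etc/hadoop:

```shell
# One worker IP (or resolvable hostname) per line
cat > "$HADOOP_HOME/etc/hadoop/slaves" <<'EOF'
192.168.43.199
EOF
```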
hadoop namenode -format
Seeing "Exiting with status 0" in the output means the format succeeded.
On the master and slaves, check the running processes with jps.
If namenode, secondary namenode, and datanode are all present, the cluster is up.
master:50070 is the NameNode web UI
master:19888 is the JobHistory web UI
Original post: https://www.cnblogs.com/peeyee/p/11965214.html