Started the cluster with start-all.sh. What a pain: I could not find the error.
I tried everything: re-formatted the NameNode, checked that the cluster IDs matched across nodes. Nothing worked, and the logs showed no errors either.
Following the official docs, I started the daemons manually, one by one; same problem.
On the master node, start the NameNode in the foreground with hdfs namenode, so it prints detailed logs to the console.
On each slave node, start the DataNode the same way with hdfs datanode.
And there the error finally was:
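Once both daemons are up, one quick way to check whether the DataNode actually registered (assuming the hdfs CLI is on the PATH) is the cluster report:

```shell
# Lists the DataNodes as the NameNode sees them; a DataNode whose
# registration was denied will not appear under "Live datanodes".
hdfs dfsadmin -report
```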
15/07/02 03:32:51 INFO datanode.DataNode: Block pool BP-89742471-127.0.1.1-1435821846469 (Datanode Uuid null) service to /172.16.231.176:8020 beginning handshake with NN
15/07/02 03:32:52 ERROR datanode.DataNode: Initialization failed for Block pool BP-89742471-127.0.1.1-1435821846469 (Datanode Uuid null) service to /172.16.231.176:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=172.16.231.175, hostname=172.16.231.175): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=c165abfd-1c06-4259-8588-c805abd72fca, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-c13bccb4-70ca-43f7-94f7-1b66bbaf64dd;nsid=1827594974;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:863)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4485)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1271)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:95)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28539)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
The problem is right here: hostname cannot be resolved (ip=172.16.231.175, hostname=172.16.231.175). The NameNode reverse-resolves the IP of every DataNode that tries to register, and denies registration when that lookup fails.
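You can reproduce the NameNode's check by hand with getent, which resolves through /etc/hosts and DNS the same way the system resolver does (172.16.231.175 is the failing IP from the log above):

```shell
# Reverse-resolve an IP the way the NameNode's registration check does.
# 127.0.0.1 is mapped to localhost in /etc/hosts on virtually every box:
getent hosts 127.0.0.1

# The failing DataNode IP from the log: if this prints nothing and exits
# non-zero, the NameNode cannot resolve it and will deny registration.
getent hosts 172.16.231.175 || echo "unresolvable: add it to /etc/hosts"
```

Run this on the NameNode host; it is the NameNode's view of name resolution that matters, not the DataNode's.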
The fix: in the slaves file, replace each IP with a hostname, e.g.
datanode1
datanode2
and map those hostnames in /etc/hosts:
172.16.231.173 datanode1
172.16.231.174 datanode2
Start the cluster again: solved.
The takeaway: the slaves file cannot contain IPs; it only works with hostnames that the NameNode can resolve.
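If you really must run an IP-only setup (no DNS and no /etc/hosts entries), Hadoop 2.x also has an escape hatch: the hdfs-site.xml property below turns off the NameNode's reverse-DNS registration check. Fixing name resolution, as above, is the cleaner solution.

```xml
<!-- hdfs-site.xml on the NameNode: skip reverse-DNS validation of
     registering DataNodes (default is true). -->
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```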
hadoop datanode starts normally, but the master does not recognize it (no DataNode shown on the port-50030 web UI)
Original post: http://www.cnblogs.com/zihunqingxin/p/4619279.html