
Spark

Posted: 2017-08-23 20:48:21

Tags: run, ctp, spark cluster, backend, star, net, seconds, solution, comment out

 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(1,Command exited with code 1)] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:359)
at 

After the Spark cluster was deployed, start-all.sh ran successfully, but launching the shell failed with the timeout error shown above.

Netty is Spark's RPC communication framework, so a communication timeout is what produces this error.

Solutions:
1. IPv6 may be a cause: try commenting out the ::1 entry first (this did not help).
2. Increase the timeout (this works). Any of the following is sufficient:
   SparkConf:           conf.set("spark.rpc.askTimeout", "600s")
   spark-defaults.conf: spark.rpc.askTimeout 600s
   spark-submit:        --conf spark.rpc.askTimeout=600s
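As a minimal sketch, the SparkConf route looks like this in a driver program (the app name is illustrative; spark.network.timeout is an optional related setting that raises several network timeouts at once):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("timeout-demo")                // hypothetical app name
  .set("spark.rpc.askTimeout", "600s")       // raise the RPC ask timeout from the 120s default
  .set("spark.network.timeout", "600s")      // optional: default for several network-related timeouts
val sc = new SparkContext(conf)
```

Note that these settings must be in place before the SparkContext is created; changing the conf afterwards has no effect on an already-running context.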


Original post: http://www.cnblogs.com/Khaleesi-yu/p/7419940.html
