When a Spark job runs on a YARN-managed cluster, it fails with the error below no matter how little or how much data it reads, even though other jobs run fine on the same cluster.
16/11/14 00:13:44 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1478851289360_0032_01_000005 on host: gs-server-v-407. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1478851289360_0032_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:578)
        at org.apache.hadoop.util.Shell.run(Shell.java:481)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:763)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Faced with this exception, it is tempting to blame the YARN configuration, but the error occurred no matter how large num-executors and executor-memory were set (see the submission sketch after the code below). To narrow down which part was failing, the job's logic was commented out, leaving only the statement that creates the SparkContext; the error still occurred, which pointed to the SparkConf settings. The SparkContext initialization looked like this:
/**
 * Initialize the SparkContext
 * @param isLocal whether to run in local mode
 * @return the configured SparkContext
 */
def initSparkContext(isLocal: Boolean): SparkContext = {
  val conf = new SparkConf().setAppName("redir_parser")
    .set("spark.executor.extraJavaOptions", "-XX:-OmitStackTraceInFastThrow;-XX:-UseGCOverheadLimit")
  return new SparkContext(conf)
}
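For reference, the job was submitted to YARN roughly as follows (a hypothetical sketch: the main class, jar name, and resource values are placeholders, not taken from the original post):

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.RedirParser \
  --num-executors 4 \
  --executor-memory 4g \
  redir-parser.jar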
By adjusting and verifying the SparkConf parameters one by one, it turned out that spark.executor.extraJavaOptions was the culprit: with both options set as above, the job throws the exception every time, while removing either one of them lets it run normally.
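A likely explanation is that spark.executor.extraJavaOptions is a space-separated string of JVM flags that gets spliced into the shell command YARN uses to launch the executor container; a semicolon there terminates the java command early and leaves the second flag behind as a bogus shell command, which is consistent with the ExitCodeException exitCode=1 thrown from Shell.runCommand above. A minimal corrected sketch (only the separator changes from ";" to a space):

def initSparkContext(isLocal: Boolean): SparkContext = {
  val conf = new SparkConf().setAppName("redir_parser")
    // Separate multiple JVM options with spaces, not ";"
    .set("spark.executor.extraJavaOptions",
      "-XX:-OmitStackTraceInFastThrow -XX:-UseGCOverheadLimit")
  new SparkContext(conf)
}

With the space separator, both options should be able to coexist.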
Original post: http://www.cnblogs.com/lovegmail/p/6060492.html