Traceback (most recent call last):
  File "/home/hdp-credit/yinzhichao/analysis_data/src/imei_mate_mobile.py", line 93, in <module>
    main()
  File "/home/hdp-credit/yinzhichao/analysis_data/src/imei_mate_mobile.py", line 89, in main
    compressionCodecClass = "org.apache.hadoop.io.compress.GzipCodec")
  File "/usr/bin/hadoop/software/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1504, in saveAsTextFile
  File "/usr/bin/hadoop/software/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/usr/bin/hadoop/software/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o128.saveAsTextFile.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 91029 in stage 0.0 failed 4 times, most recent failure: Lost task 91029.1 in stage 0.0 (TID 91142, 10.160.113.180): ExecutorLostFailure (executor 15 exited caused by one of the running tasks) Reason: Container marked as failed: container_e09_1520438427024_306261_01_000016 on host: 10.160.113.180. Exit status: 137. Diagnostics: Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1475)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1463)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1462)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1462)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:843)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:843)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:843)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1684)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1643)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1632)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:664)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1844)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1933)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1213)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1156)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1156)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:320)
	at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1156)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1060)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1026)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1026)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:320)
	at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1026)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply$mcV$sp(PairRDDFunctions.scala:1007)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply(PairRDDFunctions.scala:1007)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply(PairRDDFunctions.scala:1007)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:320)
	at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1006)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:964)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply(PairRDDFunctions.scala:962)
	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply(PairRDDFunctions.scala:962)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:320)
	at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:962)
	at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply$mcV$sp(RDD.scala:1469)
	at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply(RDD.scala:1457)
	at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply(RDD.scala:1457)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:320)
	at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1457)
	at org.apache.spark.api.java.JavaRDDLike$class.saveAsTextFile(JavaRDDLike.scala:515)
	at org.apache.spark.api.java.AbstractJavaRDDLike.saveAsTextFile(JavaRDDLike.scala:46)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
	at py4j.Gateway.invoke(Gateway.java:259)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:209)
	at java.lang.Thread.run(Thread.java:722)
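
For context, a minimal sketch of what the failing call around line 89 of imei_mate_mobile.py presumably looks like: only the saveAsTextFile call and the GzipCodec codec class are confirmed by the traceback above; the input/output paths, variable names, and the transformation below are hypothetical placeholders.

    from pyspark import SparkContext

    sc = SparkContext(appName="imei_mate_mobile")

    # Hypothetical reconstruction of the write that fails above: only the
    # saveAsTextFile call and the Gzip codec class appear in the traceback;
    # the paths and the map() step are placeholders.
    rdd = sc.textFile("hdfs:///path/to/input")
    result = rdd.map(lambda line: line)

    result.saveAsTextFile(
        "hdfs:///path/to/output",
        compressionCodecClass="org.apache.hadoop.io.compress.GzipCodec")

Note that the task dies before the write completes: exit code 137 corresponds to the process being terminated with SIGKILL (128 + 9), which matches the "Killed by external signal" line in the container diagnostics.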