I can run my Spark job successfully in YARN client mode, but when I try to run the same job in YARN cluster mode, I get the following error:

2016-05-23 20:10:55 task-result-getter-2 [WARN ] TaskSetManager - Lost task 0.0 in stage 2.0 (TID 70, impetus-IL0123C):
java.lang.NoClassDefFoundError: Could not initialize class com.xyz.spark.receiver.helper.RMQChannelHelper
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.twitter.chill.Instantiators$$anonfun$normalJava$1.apply(KryoBase.scala:160)
    at com.twitter.chill.Instantiators$$anon$1.newInstance(KryoBase.scala:123)
    at com.esotericsoftware.kryo.Kryo.newInstance(Kryo.java:1065)
    at com.esotericsoftware.kryo.serializers.FieldSerializer.create(FieldSerializer.java:228)
    at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:217)
    at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
    at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:338)
    at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:293)
    at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
    at com.twitter.chill.WrappedArraySerializer.read(WrappedArraySerializer.scala:36)
    at com.twitter.chill.WrappedArraySerializer.read(WrappedArraySerializer.scala:23)
    at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
    at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:192)
    at org.apache.spark.rdd.ParallelCollectionPartition$$anonfun$readObject$1$$anonfun$apply$mcV$sp$2.apply(ParallelCollectionRDD.scala:80)
    at org.apache.spark.rdd.ParallelCollectionPartition$$anonfun$readObject$1$$anonfun$apply$mcV$sp$2.apply(ParallelCollectionRDD.scala:80)
    at org.apache.spark.util.Utils$.deserializeViaNestedStream(Utils.scala:142)
    at org.apache.spark.rdd.ParallelCollectionPartition$$anonfun$readObject$1.apply$mcV$sp(ParallelCollectionRDD.scala:80)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1160)
    at org.apache.spark.rdd.ParallelCollectionPartition.readObject(ParallelCollectionRDD.scala:70)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:72)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:98)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
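
For context, I submit the job roughly like this in both modes (a sketch; the main class name below is a placeholder, and the jar is the one staged as __app__.jar in the launch context further down):

    # yarn-client mode (works):
    spark-submit --master yarn-client \
        --class com.xyz.spark.Main \
        rkrYarnCluster.jar

    # yarn-cluster mode (fails with the error above):
    spark-submit --master yarn-cluster \
        --class com.xyz.spark.Main \
        rkrYarnCluster.jar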

The executor classpath is as follows:

===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>/home/impadmin/opt/sax/lib/spark-sax-pipeline.jar<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://xyz-IL0123C:8042/node/containerlogs/container_1464014361746_0001_01_000002/impadmin/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1464014361746_0001
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 186307673,263107419
    SPARK_USER -> impadmin
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PUBLIC
    SPARK_YARN_MODE -> true
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1464014404022,1464014399716
    SPARK_HOME -> /home/impadmin/opt/spark/spark-1.5.0
    SPARK_LOG_URL_STDOUT -> http://xyz-IL0123C:8042/node/containerlogs/container_1464014361746_0001_01_000002/impadmin/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://172.26.49.204:54310/user/impadmin/.sparkStaging/application_1464014361746_0001/spark-assembly-1.5.0-hadoop2.7.1.jar#__spark__.jar,hdfs://172.26.49.204:54310/sax-pipelines/rkrYarnCluster.jar#__app__.jar

  command:
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=60071' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@172.26.49.204:60071/user/CoarseGrainedScheduler --executor-id 1 --hostname xyz-IL0123C --cores 1 --app-id application_1464014361746_0001 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
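
Note that the executor CLASSPATH above includes the node-local path /home/impadmin/opt/sax/lib/spark-sax-pipeline.jar, which I assume is the jar that contains com.xyz.spark.receiver.helper.RMQChannelHelper. Do I need to ship that jar explicitly in cluster mode instead, along these lines (a sketch, assuming the standard --jars mechanism; the class name is again a placeholder)?

    spark-submit --master yarn-cluster \
        --jars /home/impadmin/opt/sax/lib/spark-sax-pipeline.jar \
        --class com.xyz.spark.Main \
        rkrYarnCluster.jar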

Thanks for your help.

Regards,
Rakesh Kumar Rakshit