I have been trying to get the spark-deep-learning library on my EMR cluster to read images in parallel with Python 2.7. I have been searching for a solution for a while now and have not been able to find one. I tried setting various configuration options for the SparkSession in the conf, and I get the following error when trying to create the SparkSession object:
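For reference, my SparkSession setup looked roughly like this (the exact config keys and the jar path shown here are just examples of the kind of thing I was trying, not a known-working combination):

```python
from pyspark.sql import SparkSession

# Hypothetical settings -- the jar path and config keys below are
# illustrative of what I tried, not a confirmed-working setup.
spark = (SparkSession.builder
         .appName("sparkdl-image-loading")
         .config("spark.jars",
                 "/home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar")
         .config("spark.executor.extraClassPath",
                 "/home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar")
         .getOrCreate())  # this is the call that fails with the YARN error below
```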

ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
   at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
   at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
   at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
   at py4j.Gateway.invoke(Gateway.java:238)
   at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
   at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
   at py4j.GatewayConnection.run(GatewayConnection.java:214)
   at java.lang.Thread.run(Thread.java:748)

The above is what happens when using a Jupyter notebook. I also tried submitting the .py file with spark-submit, adding the jar I need as the value of --jars, --driver-class-path, and --conf spark.executor.extraClassPath, as described in this link. Here is the command I submitted, along with the resulting import error:

bin/spark-submit --jars /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
--driver-class-path /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
--conf spark.executor.extraClassPath=/home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
/home/hadoop/RunningCode6.py
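I have also seen it suggested that the package can be supplied via --packages instead of --jars, which (as I understand it) should pull in the Python side of the package as well as the jar. This is a sketch of that variant, assuming the cluster nodes can reach the spark-packages repository; I am not certain it resolves the problem:

```shell
# Variant using --packages: Spark resolves the coordinate and (as far as
# I understand) ships the bundled Python module to the executors too.
bin/spark-submit \
  --packages databricks:spark-deep-learning:0.2.0-spark2.1-s_2.11 \
  /home/hadoop/RunningCode6.py
```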

Traceback (most recent call last):
  File "/home/hadoop/RunningCode6.py", line 74, in <module>
  from sparkdl import KerasImageFileTransformer
ImportError: No module named sparkdl

The library works fine in standalone mode, but I keep getting one of the errors above whenever I use cluster mode.

I really hope someone can help me fix this, because I have been staring at it for weeks now and I need to get it working.

Thanks!