I am trying to run the Spring Boot YARN sample from Windows.

A single-node Hadoop 2.7.1 cluster is running on my VM.
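The client-side Spring YARN configuration (application.yml) essentially follows the one from the sample; the app name, jar names and application directory below are placeholders for my actual values, only the addresses point at the VM:

spring:
    hadoop:
        fsUri: hdfs://192.168.0.106:9000
        resourceManagerHost: 192.168.0.106
    yarn:
        appName: yarn-boot-sample
        applicationDir: /app/yarn-boot-sample/
        client:
            files:
                - "file:target/yarn-boot-sample-container-0.0.1.jar"
                - "file:target/yarn-boot-sample-appmaster-0.0.1.jar"
            launchcontext:
                archiveFile: yarn-boot-sample-appmaster-0.0.1.jar
        appmaster:
            containerCount: 1
            launchcontext:
                archiveFile: yarn-boot-sample-container-0.0.1.jar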

When I try to run the application from Windows with java -jar ..., Spring YARN successfully deploys all the jars - I can browse and see them in the Hadoop FS. Watching the application in the cluster UI (host:8088/cluster), I can see that it is submitted and a container starts running, after which the application fails with the following exception in the logs:

Application application_1496328851344_0001 failed 2 times due to AM Container for appattempt_1496328851344_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://host:8088/cluster/app/application_1496328851344_0001 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1496328851344_0001_02_000001
Exit code: 1
Exception message: /bin/bash: line 0: fg: (null): no job control
Stack trace: ExitCodeException exitCode=1: /bin/bash: line 0: fg: (null): no job control
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.

However, when I launch the application from the VM itself, everything works fine.

Here are my Hadoop configuration files:

core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.0.106:9000</value>
    </property>
</configuration>

hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop-2.7.1/data/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop-2.7.1/data/datanode</value>
    </property>
    <property>
        <name>dfs.client.use.datanode.hostname</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>

mapred-site.xml:

<configuration>
    <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
    </property>
</configuration>

yarn-site.xml:

<configuration>
    <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
    </property>
    <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>5</value>
    </property>
</configuration>

UPD: Setting the mapreduce.app-submission.cross-platform property to true did not help.
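For reference, this is a sketch of how I applied that property on the Windows client, assuming Spring for Apache Hadoop's spring.hadoop.config map is honored by the Boot client; the rest of application.yml is unchanged:

spring:
    hadoop:
        config:
            mapreduce.app-submission.cross-platform: true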