Currently I have a working code implementation that submits applications to YARN using spark.deploy.yarn.Client. Gathering all the arguments this Client needs is complicated, but the submission of the application itself is simple:

// args and sparkConf are assembled by the (lengthy) code that precedes this
ClientArguments cArgs = new ClientArguments(args.toArray(new String[0]));
client = new Client(cArgs, sparkConf);
// returns the YARN ApplicationId of the submitted app
applicationID = client.submitApplication();

Most of the code before that point is spent accumulating sparkConf and args. Now I would like to drop the Client and work over REST instead. YARN supplies a complete REST API, including application submission: according to the YARN documentation, it is a matter of a simple JSON/XML POST:

POST http://<rm http address:port>/ws/v1/cluster/apps
Accept: application/json
Content-Type: application/json
{
  "application-id":"application_1404203615263_0001",
  "application-name":"test",
  "am-container-spec":
{
  "local-resources":
  {
    "entry":
    [
      {
        "key":"AppMaster.jar",
        "value":
        {
          "resource":"hdfs://hdfs-namenode:9000/user/testuser/DistributedShell/demo-app/AppMaster.jar",
          "type":"FILE",
          "visibility":"APPLICATION",
          "size": 43004,
          "timestamp": 1405452071209
        }
      }
    ]
  },
  "commands":
  {
    "command":"{{JAVA_HOME}}/bin/java -Xmx10m org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster --container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 1><LOG_DIR>/AppMaster.stdout 2><LOG_DIR>/AppMaster.stderr"
  },
  "environment":
  {
    "entry":
    [
      {
        "key": "DISTRIBUTEDSHELLSCRIPTTIMESTAMP",
        "value": "1405459400754"
      },
      {
        "key": "CLASSPATH",
        "value": "{{CLASSPATH}}<CPS>./*<CPS>{{HADOOP_CONF_DIR}}<CPS>{{HADOOP_COMMON_HOME}}/share/hadoop/common/*<CPS>{{HADOOP_COMMON_HOME}}/share/hadoop/common/lib/*<CPS>{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/*<CPS>{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/lib/*<CPS>{{HADOOP_YARN_HOME}}/share/hadoop/yarn/*<CPS>{{HADOOP_YARN_HOME}}/share/hadoop/yarn/lib/*<CPS>./log4j.properties"
      },
      {
        "key": "DISTRIBUTEDSHELLSCRIPTLEN",
        "value": "6"
      },
      {
        "key": "DISTRIBUTEDSHELLSCRIPTLOCATION",
        "value": "hdfs://hdfs-namenode:9000/user/testuser/demo-app/shellCommands"
      }
    ]
  }
},
"unmanaged-AM":false,
"max-app-attempts":2,
"resource":
{
  "memory":1024,
  "vCores":1
},
"application-type":"YARN",
"keep-containers-across-application-attempts":false,
"log-aggregation-context":
{
  "log-include-pattern":"file1",
  "log-exclude-pattern":"file2",
  "rolled-log-include-pattern":"file3",
  "rolled-log-exclude-pattern":"file4",
  "log-aggregation-policy-class-name":"org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AllContainerLogAggregationPolicy",
  "log-aggregation-policy-parameters":""
},
"attempt-failures-validity-interval":3600000,
"reservation-id":"reservation_1454114874_1",
"am-black-listing-requests":
{
  "am-black-listing-enabled":true,
  "disable-failure-threshold":0.01
}
}
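One detail the example glosses over: the application-id field cannot be chosen freely. The ResourceManager hands one out first, via a separate call, POST http://&lt;rm http address:port&gt;/ws/v1/cluster/apps/new-application, whose JSON response contains the id (and the maximum resource capability). A minimal sketch of that first step; the host/port is illustrative and the string-based extraction is a stand-in for real JSON parsing:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class NewApplicationId {

    // Naive extraction of the "application-id" value from the
    // new-application response; a real client would use a JSON parser.
    static String extractApplicationId(String json) {
        String key = "\"application-id\"";
        int k = json.indexOf(key);
        if (k < 0) return null;
        int start = json.indexOf('"', k + key.length()); // opening quote of the value
        int end = json.indexOf('"', start + 1);          // closing quote of the value
        return json.substring(start + 1, end);
    }

    // Ask the ResourceManager for a fresh application id
    // (endpoint per the YARN RM REST API; rmAddress is host:port).
    static String requestNewApplicationId(String rmAddress) throws IOException {
        URL url = new URL("http://" + rmAddress + "/ws/v1/cluster/apps/new-application");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Accept", "application/json");
        try (InputStream in = conn.getInputStream()) {
            String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
            return extractApplicationId(body);
        }
    }

    public static void main(String[] args) {
        // Response shape as documented for the new-application endpoint:
        String sample = "{\"application-id\":\"application_1404203615263_0001\","
                + "\"maximum-resource-capability\":{\"memory\":8192,\"vCores\":32}}";
        System.out.println(extractApplicationId(sample));
    }
}
```

The id returned here is what goes into the "application-id" field of the submission body above.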

I have tried to translate my arguments into this JSON body for the POST request, but it does not seem feasible. Does anyone know whether I can reverse-engineer, from a running application I have already submitted, the JSON payload that would be sent over REST? Or what mapping I could use to take the Client arguments and place them into the JSON?
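For reference, the rough correspondence I would expect, without claiming this is the full mapping: the AM memory/cores settings go to resource.memory / resource.vCores, the app name to application-name, and the AM launch command to am-container-spec.commands.command, where Spark's application master class is org.apache.spark.deploy.yarn.ApplicationMaster. A minimal, hypothetical sketch that assembles such a body (string-built for brevity; field values like the HDFS jar path and main class are placeholders, and the local-resources and environment entries that Client normally stages, which are exactly the hard part, are omitted):

```java
public class SparkRestSubmissionBody {

    // Build a minimal body for POST /ws/v1/cluster/apps. appId comes from the
    // new-application endpoint; the command line mirrors the kind of AM launch
    // command spark.deploy.yarn.Client constructs (sketch, not the exact string).
    static String buildSubmissionJson(String appId, String appName,
                                      int amMemoryMb, int amVcores,
                                      String appJarHdfsPath, String mainClass) {
        String command = "{{JAVA_HOME}}/bin/java -Xmx" + amMemoryMb + "m"
                + " org.apache.spark.deploy.yarn.ApplicationMaster"
                + " --class " + mainClass
                + " --jar " + appJarHdfsPath
                + " 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr";
        return "{"
                + "\"application-id\":\"" + appId + "\","
                + "\"application-name\":\"" + appName + "\","
                + "\"application-type\":\"SPARK\","
                + "\"am-container-spec\":{"
                +   "\"commands\":{\"command\":\"" + command + "\"}"
                + "},"
                + "\"resource\":{\"memory\":" + amMemoryMb + ",\"vCores\":" + amVcores + "}"
                + "}";
    }

    public static void main(String[] args) {
        System.out.println(buildSubmissionJson(
                "application_1404203615263_0001", "test",
                1024, 1,
                "hdfs://hdfs-namenode:9000/user/testuser/app.jar",
                "com.example.MyApp"));
    }
}
```

A body like this would still need the local-resources entries (the uploaded Spark and app jars, with size/timestamp from HDFS) and the environment entries (CLASSPATH and the Spark-specific variables) filled in to match what Client stages, which is the part I have not managed to reconstruct.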