
HBase connection failure in a MapReduce job run from an Oozie workflow


I am running my MapReduce job as a Java action from an Oozie workflow. When I run the job directly on my Hadoop cluster it succeeds, but when I run the same jar from the Oozie workflow it throws an exception.

Here is my workflow.xml:

<workflow-app name="HBaseToFileDriver" xmlns="uri:oozie:workflow:0.1">
    <start to="mapReduceAction"/>
    <action name="mapReduceAction">
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${outputDir}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.mapper.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.reducer.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>oozie.libpath</name>
                    <value>${appPath}/lib</value>
                </property>
                <property>
                    <name>mapreduce.job.queuename</name>
                    <value>root.fricadev</value>
                </property>
            </configuration>
            <main-class>com.thomsonretuers.hbase.HBaseToFileDriver</main-class>
            <arg>fricadev:FinancialLineItem</arg>
            <capture-output/>
        </java>
        <ok to="end"/>
        <error to="killJob"/>
    </action>
    <kill name="killJob">
        <message>"Killed job due to error: ${wf:errorMessage(wf:lastErrorNode())}"</message>
    </kill>
    <end name="end"/>
</workflow-app>

When I look at the logs in YARN, I see the exception below. Even though the job is shown as successful, the output file is not generated.

1 Answer


    Take a look at the Oozie Java Action documentation:

    IMPORTANT: In order for a Java action to succeed on a secure cluster, it must propagate the Hadoop delegation token like in the following code snippet (this is benign on non-secure clusters):
    
    // propagate delegation related props from launcher job to MR job
    if (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) {
        jobConf.set("mapreduce.job.credentials.binary", System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
    }
    

    You must read HADOOP_TOKEN_FILE_LOCATION from the system environment variables and set it as the property mapreduce.job.credentials.binary.

    HADOOP_TOKEN_FILE_LOCATION is set by Oozie at runtime.
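    In practice this means adding the token-propagation check in the driver's main method, after creating the Configuration and before submitting the job. A minimal sketch of how the driver could look (the job name and the elided mapper/output setup are illustrative assumptions, not taken from the original code):

    ```java
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.mapreduce.Job;

    public class HBaseToFileDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();

            // Propagate the delegation tokens from the Oozie launcher job
            // to the MR job; without this, HBase/HDFS access can fail on a
            // Kerberos-secured cluster. On non-secure clusters the env var
            // is unset and this is a no-op.
            String tokenFile = System.getenv("HADOOP_TOKEN_FILE_LOCATION");
            if (tokenFile != null) {
                conf.set("mapreduce.job.credentials.binary", tokenFile);
            }

            Job job = Job.getInstance(conf, "HBaseToFile");
            job.setJarByClass(HBaseToFileDriver.class);
            // ... set mapper, scan, input/output formats, and paths as before;
            // args[0] is the table name passed from the workflow.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }
    ```

    The important detail is that Job.getInstance copies the Configuration at creation time, so the credentials property must be set on conf before the Job is constructed.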
