Edit: IDE is IntelliJ IDEA
OS: Mac OS X Lion
Hadoop: 1.2.1
Edit: This works if the file path exists at the local filesystem location. So the question becomes: how do I use HDFS when running from the IDE?
Running from inside the IDE (IntelliJ IDEA) I get an exception, see below:
In the program arguments I specified 'input output'.
'input' does of course exist in HDFS, and it contains the data file.
But the code is trying to access the directory at the local project filesystem location instead of HDFS.
HDFS command:
James-MacBook-Pro:conf james$ hadoop fs -ls input
Found 1 items
-rw-r--r-- 1 james supergroup 15 2013-11-01 07:31 /user/james/input/simple.txt
Java source code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new WordCount(), args);
        System.exit(res);
    }

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: hadoop jar mrjob-1.0-SNAPSHOT-job.jar"
                    + " [generic options] <in> <out>");
            System.out.println();
            ToolRunner.printGenericCommandUsage(System.err);
            return 1;
        }

        Job job = new Job(getConf(), "WordCount");
        job.setJarByClass(getClass());
        job.setMapperClass(TokenizingMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }
}
Configuration:
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
Arguments in the IDE:
input output
Exception:
Nov 03, 2013 9:46:00 AM org.apache.hadoop.security.UserGroupInformation doAs
SEVERE: PriviledgedActionException as:james cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/james/work/projects/hadoop/mrjob/input
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/james/work/projects/hadoop/mrjob/input
What have I done wrong?
1 Answer
Running from local Eclipse, I assume the Hadoop configuration files for your cluster (core-site.xml) are not on the classpath, and are being shadowed by the defaults bundled inside the hadoop jars.
You can work around this by manually setting the job configuration property "fs.default.name" in code before submitting the job:
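A minimal sketch of what that looks like inside run(), assuming the NameNode address from the core-site.xml shown above:

```java
Configuration conf = getConf();
// Force the default FileSystem to HDFS rather than the local file:// scheme.
// Host and port are taken from the core-site.xml shown above.
conf.set("fs.default.name", "hdfs://localhost:9000");
Job job = new Job(conf, "WordCount");
```

With this set, new Path("input") resolves to hdfs://localhost:9000/user/james/input rather than the local project directory.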
You probably also want to configure the jobtracker, so that you don't fall back to the local job runner:
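For example, matching the mapred-site.xml shown above:

```java
// Submit to the cluster's JobTracker instead of running with the LocalJobRunner.
conf.set("mapred.job.tracker", "localhost:9001");
```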
Note that the hostname, port, and even the property names may differ for your environment or deployment.
Alternatively, you can add your hadoop conf folder to the classpath (and make sure it has a higher priority than the hadoop jars).
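The same effect can also be had programmatically with Configuration.addResource; a sketch, where the conf directory path is hypothetical and depends on where Hadoop is installed:

```java
Configuration conf = getConf();
// Load the cluster configuration files explicitly. Adjust the path to
// wherever your Hadoop conf directory actually lives (this one is a guess).
conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/usr/local/hadoop/conf/mapred-site.xml"));
```

Resources added later override earlier ones, so these files take precedence over the defaults bundled in the hadoop jars.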