
How to resolve "expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable" in a MapReduce job


I am trying to write a job that analyzes some information from a YouTube dataset. I believe I have set the map output key correctly in the driver class, but I still get the error above. I am posting the code and the exception here.

Mapper

public class YouTubeDataMapper extends Mapper<LongWritable,Text,Text,IntWritable>{

private static final IntWritable one = new IntWritable(1); 
private Text category = new Text(); 
public void mapper(LongWritable key,Text value,Context context) throws IOException, InterruptedException{
    String str[] = value.toString().split("\t");
    category.set(str[3]);
    context.write(category, one);
}

}

Reducer class

public class YouTubeDataReducer extends Reducer<Text,IntWritable,Text,IntWritable>{

public void reduce(Text key,Iterable<IntWritable> values,Context context) throws IOException, InterruptedException{
    int sum=0;
    for(IntWritable count:values){
        sum+=count.get();
    }
    context.write(key, new IntWritable(sum));
}

}

Driver class

public class YouTubeDataDriver {

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    @SuppressWarnings("deprecation")
    Job job = new Job(conf, "categories");
    job.setJarByClass(YouTubeDataDriver.class);

    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    // job.setNumReduceTasks(0);
    job.setOutputKeyClass(Text.class);// Here i have set the output keys
    job.setOutputValueClass(IntWritable.class);

    job.setMapperClass(YouTubeDataMapper.class);
    job.setReducerClass(YouTubeDataReducer.class);

    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    Path out = new Path(args[1]);
    out.getFileSystem(conf).delete(out);
    job.waitForCompletion(true);

}

}

The exception I get:

java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1069)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:712)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:124)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

I have already set the output key class in the driver:

job.setOutputKeyClass(Text.class);// Here i have set the output keys
job.setOutputValueClass(IntWritable.class);

But why do I still get the error? Please help, I am new to MapReduce.

2 Answers

  • 0

    Rename the mapper() method to map() (see the official docs).

    What is happening is that no data is actually being processed by your mapper. The framework never enters the mapper() method (because it is looking for a method named map()), so the map phase just passes records through unchanged, which means the map output key is still a LongWritable.
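
    For clarity, here is a minimal sketch of the corrected mapper method (same fields and body as in the question, only the method renamed); the @Override annotation makes the compiler fail the build if the signature does not actually override Mapper.map():

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String str[] = value.toString().split("\t");
        category.set(str[3]);
        // the emitted key is now Text, matching job.setMapOutputKeyClass(Text.class)
        context.write(category, one);
    }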

    As an aside,

    String str[] = value.toString().split("\t");
    category.set(str[3]);
    

    is quite dangerous. It assumes that every line of input contains at least three \t characters. When you process large amounts of data there will almost always be some records that do not match that assumption, and you do not want your entire job to die when that happens. Consider doing something like the following:

    String valueStr = value.toString();
    if (valueStr != null) {
        String str[] = valueStr.split("\t");
        // only emit when the line has at least four tab-separated fields
        if (str != null && str.length > 3) {
            category.set(str[3]);
            context.write(category, one);
        }
    }
    
  • 2

    The code below (with LongWritable replaced by Object) works for me -

    import java.io.IOException;
    import java.util.StringTokenizer;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
    
    public class YouTubeDataDriver {
    
        public static class YouTubeDataMapper
                extends Mapper<Object, Text, Text, IntWritable>{
    
            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();
    
            public void map(Object key, Text value, Context context
            ) throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }
    
        public static class YouTubeDataReducer
                extends Reducer<Text,IntWritable,Text,IntWritable> {
            private IntWritable result = new IntWritable();
    
            public void reduce(Text key, Iterable<IntWritable> values,
                               Context context
            ) throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }
    
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
    
            @SuppressWarnings("deprecation")
            Job job = new Job(conf, "categories");
            job.setJarByClass(YouTubeDataDriver.class);
    
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);
            // job.setNumReduceTasks(0);
            job.setOutputKeyClass(Text.class);// Here i have set the output keys
            job.setOutputValueClass(IntWritable.class);
    
            job.setMapperClass(YouTubeDataMapper.class);
            job.setReducerClass(YouTubeDataReducer.class);
    
            job.setInputFormatClass(TextInputFormat.class);
            job.setOutputFormatClass(TextOutputFormat.class);
    
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            Path out = new Path(args[1]);
            out.getFileSystem(conf).delete(out);
            job.waitForCompletion(true);
    
        }
    
    }
    
