In MapReduce, when a map task produces a large amount of output, network bandwidth becomes the bottleneck. How can we shrink the data handed to the reducers without changing the final result? One approach is the Combiner, often described as a "local reduce": the reducers' input is the Combiner's output. Below, using the patent data set from "Hadoop in Action" as an example, we count the number of patents per country. The code is as follows (the line that enables the Combiner is commented out):
package net.csdn.blog.ipolaris.hadoopdemo;

import java.io.IOException;

import net.csdn.blog.ipolaris.util.ArgsTool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class Demo1 extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Demo1(), args));
    }

    public static class DemoMap extends Mapper<LongWritable, Text, Text, IntWritable> {

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            String[] splitdata = line.split(",");
            String country = splitdata[4];
            // Skip the header row; emit (country, 1) for every patent record.
            if (country.trim().equals("\"COUNTRY\"")) {
                return;
            } else {
                context.write(new Text(country), new IntWritable(1));
            }
        }
    }

    public static class DemoReduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts for one country.
            int sum = 0;
            for (IntWritable num : values) {
                sum += num.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();

        Job job = new Job(conf, "demo1");
        String inputPath = ArgsTool.getArg(args, "input");
        String outputPath = ArgsTool.getArg(args, "output");

        FileInputFormat.addInputPath(job, new Path(inputPath));
        FileOutputFormat.setOutputPath(job, new Path(outputPath));

        job.setJarByClass(Demo1.class);
        job.setMapperClass(DemoMap.class);
        // job.setCombinerClass(DemoReduce.class); // uncomment to enable the Combiner
        job.setReducerClass(DemoReduce.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }
}
As the code shows, the value list the reducer receives for each key is a long run of 1s. Once the input text gets large, those 1s alone take up a great deal of bandwidth; if we aggregate the map output before it is handed to the reducers, far less data has to cross the network, easing the pressure on it. This is where the Combiner comes in handy: it merges the map output locally, summing the 1s that share the same key, and that partial result then becomes the reducer's input. A Combiner is defined as a Reducer. In most cases the Combiner and the reducer apply the same logic, so the reduce class we already defined can be passed directly to job.setCombinerClass(); you can also write a separate Combiner, distinct from the reducer, by extending Reducer in essentially the same way a reducer is written, as in the sketch below.
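For illustration, here is a minimal sketch of such a standalone Combiner for this job, assuming we simply reuse the summing logic of DemoReduce (the class name DemoCombiner is mine, not part of the original code):

public static class DemoCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the 1s for each key locally, so each map task ships one partial
        // count per country instead of a long run of 1s.
        int sum = 0;
        for (IntWritable num : values) {
            sum += num.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

It is wired in from run() with job.setCombinerClass(DemoCombiner.class), which here is equivalent to job.setCombinerClass(DemoReduce.class). Because Hadoop may run the Combiner zero, one, or several times, the operation must be associative and commutative; summing counts satisfies both.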
Let's first look at the result of a run before the Combiner is added. Reduce input records is 2,923,922 (one record, the header row, was dropped in the map), while Map input records is 2,923,923; that is, every map input record turns into one reduce input record, which means a large volume of values has to travel over the network. The final statistics are as follows (only an excerpt is shown):
Now let's look at the job run with the Combiner enabled.
Reduce input records is only 565: the bulk of the map output was already merged in the Combiner, since each map task now emits at most one partial count per country it sees. The final statistics match the run above, so I won't paste the screenshot again.