public class WCReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable v : values) sum += v.get();   // sum all counts the mappers emitted for this word
        context.write(key, new LongWritable(sum));      // output (word, total count)
    }
}
3. Implement the main method in the WordCount class
 * 2. Define a class that extends org.apache.hadoop.mapreduce.Mapper;
 *    override the map method to implement the concrete business logic and emit the new key/value pairs (a minimal sketch of such a mapper follows this list).
 * 3. Define a class that extends org.apache.hadoop.mapreduce.Reducer;
 *    override the reduce method to aggregate the values for each key (the WCReducer above).
 * 4. Wire the custom mapper and reducer together through a Job object, as the main method below does.
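
The WCMapper class referenced in step 2 and in the driver below is never shown in these notes, so here is a minimal sketch; splitting each line on whitespace is an assumption, not taken from the original:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch of WCMapper (assumed tokenization: split on whitespace).
public class WCMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private static final LongWritable ONE = new LongWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        for (String w : value.toString().split("\\s+")) {   // break the line into tokens
            if (w.isEmpty()) continue;                      // split can yield a leading empty token
            word.set(w);
            context.write(word, ONE);                       // emit (word, 1) for the reducer to sum
        }
    }
}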
 
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
      public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration());
            job.setJarByClass(WordCount.class);              // locate the jar that holds this job's classes

            // mapper: class, map-output key/value types, input path in HDFS
            job.setMapperClass(WCMapper.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(LongWritable.class);
            FileInputFormat.setInputPaths(job, new Path("/words.txt"));

            // reducer: class, final output key/value types, output path (must not exist yet)
            job.setReducerClass(WCReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);
            FileOutputFormat.setOutputPath(job, new Path("/wcount619"));

            job.waitForCompletion(true);                     // submit the job and wait for it to finish
      }
}
 
4. Package the job as wc.jar, upload it to the Linux host, and run it under Hadoop
     hadoop jar /root/wc.jar
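
If wc.jar's manifest does not declare a main class, hadoop jar needs the driver class name as an extra argument (WordCount below assumes the driver is in the default package). The result can then be read back from HDFS; part-r-00000 is Hadoop's default name for the single reducer's output file:

     hadoop jar /root/wc.jar WordCount
     hadoop fs -cat /wcount619/part-r-00000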