Prerequisites:
(1) Hadoop 2.7.2 is deployed on Linux; dfs and yarn start successfully, and jps shows all the expected processes.
(2) Maven is installed.
Goal: on Windows, run wordcount directly via Run as Java Application, without first packaging it into a jar and running that from a Linux terminal.
1. Start dfs and yarn
Terminal: ${HADOOP_HOME}/sbin/start-dfs.sh
${HADOOP_HOME}/sbin/start-yarn.sh
jps on the namenode then shows:
4852 NameNode
5364 ResourceManager
5141 SecondaryNameNode
10335 Jps
jps on a datanode should show the DataNode and NodeManager processes (plus Jps itself).
2. Basic Eclipse configuration
(1) Download the hadoop-eclipse-plugin-2.7.2.jar plugin and put it in the plugins directory under the Eclipse installation directory. Start Eclipse and check whether the Hadoop plugin took effect: click Window -> Preferences, as shown in Figure 1.
(2) Set the directory marked 2 in the figure to the unpacked hadoop-2.7.2 distribution, then click OK.
3. Create a Maven project in Eclipse
(1) The project-creation steps are omitted.
(2) Add the dependencies; pom.xml needs the following:
hadoop-common
hadoop-hdfs
hadoop-mapreduce-client-core
hadoop-mapreduce-client-common
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-mapreduce-client-core -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.7.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-mapreduce-client-common -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-common</artifactId>
    <version>2.7.2</version>
</dependency>
(3) Eclipse may stall for a while at this point; if "Building workspace" is especially slow, see my recent post on that problem. Just wait until Maven finishes downloading and installing the dependencies.
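To confirm the Hadoop artifacts actually resolved onto the classpath, a minimal check is to print the client version; VersionInfo ships with hadoop-common, and the class name CheckHadoopVersion is mine:

package cn.edu.nupt.hadoop.mr.wordcount;

import org.apache.hadoop.util.VersionInfo;

// Prints the Hadoop version found on the classpath; with the pom
// above it should print 2.7.2.
public class CheckHadoopVersion {
    public static void main(String[] args) {
        System.out.println("Hadoop version: " + VersionInfo.getVersion());
    }
}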
4. Write the wordcount MapReduce code
The code needs no lengthy commentary; the simple layout (a single package, cn.edu.nupt.hadoop.mr.wordcount, with three classes) and contents are as follows:
WCMapper class:
package cn.edu.nupt.hadoop.mr.wordcount;

import java.io.IOException;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Of the four type parameters, the first two are the mapper's input types.
// Map and reduce exchange data as key-value pairs. By default the framework
// passes the mapper the starting offset of each line of the input text as
// the key, with that line's content as the value.
// The JDK's own serialization of long, String, etc. attaches a lot of extra
// metadata, which is redundant on the network, so Hadoop ships its own
// serialization types (LongWritable, Text, ...).
public class WCMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    // The MapReduce framework calls this method once for every line it reads.
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // The business logic goes here; the framework has already passed in
        // the data to process: key is the line's starting offset, value is
        // the line's text.
        String line = value.toString();
        String[] words = StringUtils.split(line, " ");
        for (String word : words) {
            context.write(new Text(word), new LongWritable(1));
        }
    }
}
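A note on StringUtils.split: unlike String.split, the commons-lang version treats runs of separators as one and never yields empty tokens, so the mapper needs no blank-word filtering. A quick stand-alone check (the class name is mine):

import org.apache.commons.lang.StringUtils;

public class SplitCheck {
    public static void main(String[] args) {
        // "hello  world " -> [hello][world]: adjacent spaces collapse and
        // no empty strings are produced.
        String[] words = StringUtils.split("hello  world ", " ");
        for (String w : words) {
            System.out.println("[" + w + "]");
        }
    }
}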
WCReducer class:
package cn.edu.nupt.hadoop.mr.wordcount;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WCReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    // After the map phase completes, the framework caches all the kv pairs,
    // groups them by key, and passes reduce one group at a time:
    // <key, {value1, value2, ..., valuen}>, e.g. <hello, {1,1,1,1,1,...}>
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long count = 0;
        for (LongWritable value : values) {
            count += value.get();
        }
        context.write(key, new LongWritable(count));
    }
}
WCRunner class:
package cn.edu.nupt.hadoop.mr.wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * <p>WCRunner.java
 * Description:<br/>
 * (1) Describes a job:<br/>
 * (2) e.g. which class the job uses as its map logic and which as its reduce logic;
 * (3) it can also specify the path of the data the job is to process,
 * (4) and the path the job's output is written to.
 * <p>
 * Company: cstor
 *
 * @author zhuxy
 * 2016-08-04 21:58
 */
public class WCRunner {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job wcjob = Job.getInstance(conf);

        // Tell the framework which jar holds the classes this job uses.
        wcjob.setJarByClass(WCRunner.class);

        // The mapper and reducer classes this job uses.
        wcjob.setMapperClass(WCMapper.class);
        wcjob.setReducerClass(WCReducer.class);

        // The kv types of the reduce output.
        wcjob.setOutputKeyClass(Text.class);
        wcjob.setOutputValueClass(LongWritable.class);

        // The kv types of the map output.
        wcjob.setMapOutputKeyClass(Text.class);
        wcjob.setMapOutputValueClass(LongWritable.class);

//        FileInputFormat.setInputPaths(wcjob, new Path("hdfs://master:9000/wc/input/testHdfs.txt"));
//        FileOutputFormat.setOutputPath(wcjob, new Path("hdfs://master:9000/wc/output7/"));

        FileInputFormat.setInputPaths(wcjob, new Path("file:///E:/input/testwc.txt"));
        FileOutputFormat.setOutputPath(wcjob, new Path("file:///E:/output3/"));

        wcjob.waitForCompletion(true);
    }
}
That completes the code.
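One optional tweak the post doesn't make: because this reduce is a plain associative, commutative sum, the reducer class can double as a map-side combiner to shrink shuffle traffic. A one-line addition to the job setup in WCRunner:

// Optional: pre-aggregate counts on the map side; valid here because
// summing LongWritables is associative and commutative.
wcjob.setCombinerClass(WCReducer.class);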
5. On the CentOS machine, create a local file named testHdfs.txt (a test file of mine from earlier; neither the name nor the content matters, as long as you use them consistently), with the following content:
hello java
hello Hadoop
hello world
Once created, upload the file to the /wc/input directory in HDFS:
hadoop fs -put ./testHdfs.txt /wc/input
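The same upload can be done from Java via the HDFS FileSystem API; a sketch, where the namenode URI mirrors the hdfs://master:9000 used in the commented WCRunner lines, and the local path and the "root" user are assumptions:

package cn.edu.nupt.hadoop.mr.wordcount;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Copies a local file into HDFS, equivalent to hadoop fs -put.
public class UploadToHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(new URI("hdfs://master:9000"), conf, "root");
        fs.copyFromLocalFile(new Path("/home/hadoop/testHdfs.txt"),
                new Path("/wc/input/testHdfs.txt"));
        fs.close();
    }
}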
6. Right-click the WCRunner class and choose Run As -> Java Application; the following error appears:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:483)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:815)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:798)
……
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at cn.edu.nupt.hadoop.mr.wordcount.WCRunner.main(WCRunner.java:55)
Solution: see: eclipse Run on Hadoop java.lang.NullPointerException
Fix: put winutils.exe in Hadoop's bin directory, set the HADOOP_HOME environment variable, and copy hadoop.dll into C:\Windows\System32.
Note: it is best to also add HADOOP_HOME/bin to PATH; that makes local mode runnable, i.e. the file:/// input/output variant in WCRunner above.
Download link for both files: hadoop.dll and winutils.exe for hadoop 2.7.2 on win10
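If changing the Windows environment variables is inconvenient, Hadoop 2.7.x's Shell class also reads the hadoop.home.dir JVM system property before falling back to the HADOOP_HOME environment variable. A minimal sketch; the path is an assumption and must point at the directory whose bin contains winutils.exe:

// Set before any Configuration/Job is created, e.g. first line of main():
System.setProperty("hadoop.home.dir", "E:\\hadoop-2.7.2");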
7. Run Run As -> Java Application again; this time the problem is:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
The output directory is created, but it contains no _SUCCESS marker and no part* result files; that is, /wc/output3 is empty.
Solution: click Window -> Perspective -> Open Perspective -> Other -> MapReduce; the Eclipse window then looks like this:
A MapReduce Locations view also appears at the bottom:
Right-click in the yellow Map/Reduce Locations view, choose New Hadoop location..., and fill in the dialog as follows.
Click Finish when done. Run Run As -> Java Application again, and the expected result appears, as shown:
Seeing this essentially means the run succeeded. Note, however, that the MapReduce counters and other progress information were not printed to the console; only the three log4j lines appeared. Step 8 fixes that.
8. Getting the job's progress information printed to the Console.
Reference: the problem of the console printing no progress information when running MapReduce programs in Eclipse
This usually happens because log4j, the logging module these classes use, has been given no configuration. Create a file named "log4j.properties" in the project's src directory (src is on the build path, so the file lands on the runtime classpath) and fill in the following:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n
9. At this point all of the problems are solved.
(1) Console output:
2016-08-05 00:56:45,209 INFO [org.apache.hadoop.conf.Configuration.deprecation] - session.id is deprecated. Instead, use dfs.metrics.session-id
2016-08-05 00:56:45,211 INFO [org.apache.hadoop.metrics.jvm.JvmMetrics] - Initializing JVM Metrics with processName=JobTracker, sessionId=
2016-08-05 00:56:45,856 WARN [org.apache.hadoop.mapreduce.JobResourceUploader] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-08-05 00:56:45,918 WARN [org.apache.hadoop.mapreduce.JobResourceUploader] - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-08-05 00:56:45,976 INFO [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] - Total input paths to process : 1
(2) The output in /wc/outputn/part*:
Hadoop 1
hello 3
java 1
world 1
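For completeness, the result file can also be read from Java instead of the shell; a sketch, assuming a single reducer (hence the default file name part-r-00000), the hdfs://master:9000 namenode from the commented WCRunner lines, and a "root" user:

package cn.edu.nupt.hadoop.mr.wordcount;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Streams the job's result file to stdout.
public class PrintWordCounts {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new URI("hdfs://master:9000"),
                new Configuration(), "root");
        FSDataInputStream in = fs.open(new Path("/wc/output3/part-r-00000"));
        IOUtils.copyBytes(in, System.out, 4096, true); // also closes the stream
        fs.close();
    }
}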
With that, everything works.
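One loose end from the console log above: the JobResourceUploader warning suggests implementing the Tool interface and launching through ToolRunner. A sketch of that variant, reusing WCMapper and WCReducer and taking the input and output paths from the command line (the class name WCToolRunner is mine):

package cn.edu.nupt.hadoop.mr.wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Same job setup as WCRunner, but wrapped in a Tool so that generic
// options (-D, -fs, -jt, ...) are parsed and the warning goes away.
public class WCToolRunner extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Job wcjob = Job.getInstance(getConf());
        wcjob.setJarByClass(WCToolRunner.class);
        wcjob.setMapperClass(WCMapper.class);
        wcjob.setReducerClass(WCReducer.class);
        wcjob.setOutputKeyClass(Text.class);
        wcjob.setOutputValueClass(LongWritable.class);
        wcjob.setMapOutputKeyClass(Text.class);
        wcjob.setMapOutputValueClass(LongWritable.class);
        FileInputFormat.setInputPaths(wcjob, new Path(args[0]));
        FileOutputFormat.setOutputPath(wcjob, new Path(args[1]));
        return wcjob.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic options before handing the
        // remaining args (input path, output path) to run().
        System.exit(ToolRunner.run(new Configuration(), new WCToolRunner(), args));
    }
}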
win10 + eclipse + hadoop 2.7.2 + maven: running wordcount directly via Run as Java Application
Original post: http://www.cnblogs.com/xiangyangzhu/p/5739063.html