Add the following to hdfs-site.xml:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
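(Note: dfs.permissions is the Hadoop 1.x name for this switch; on Hadoop 2.x and later the property is dfs.permissions.enabled, although the old name is still accepted as a deprecated alias. If you are unsure which release you are on, setting both does no harm.)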
Then, in the virtual machine, run hadoop dfsadmin -safemode leave
To be on the safe side, also run hadoop fs -chmod 777 /
In short, the point is to switch off Hadoop's permission checking entirely (you don't need it while learning, but never do this on a production cluster). Finally, restart Hadoop, go back to Eclipse, and retry the file deletion from before; it should now succeed.
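The text says to restart Hadoop but does not show the commands. As a minimal sketch, assuming an older 1.x-style single-node install where stop-all.sh and start-all.sh live under $HADOOP_HOME/bin (on Hadoop 2.x the scripts sit under sbin instead), the whole sequence might look like this:

cd $HADOOP_HOME
bin/stop-all.sh                      # stop the HDFS and MapReduce daemons
# edit conf/hdfs-site.xml as shown above (etc/hadoop/hdfs-site.xml on 2.x)
bin/start-all.sh                     # start everything again
bin/hadoop dfsadmin -safemode leave  # force the NameNode out of safe mode
bin/hadoop fs -chmod 777 /           # open up permissions on the HDFS root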
Create a WordCount class in the Eclipse project with the following code:

package com.wimang.test;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Mapper: splits each input line into tokens and emits (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sums the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    @SuppressWarnings("deprecation")
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
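To run the job straight from Eclipse, the input and output paths go in as program arguments (Run > Run Configurations... > Arguments tab). Purely as an illustration — the host, port, and paths below are placeholders and must match your own fs.defaultFS setting and an existing HDFS input directory — the two arguments might look like:

hdfs://192.168.1.100:9000/user/hadoop/input hdfs://192.168.1.100:9000/user/hadoop/output1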
If the output looks like the figure below, the job ran successfully.
Alternatively, you can check directly on the virtual machine that runs Hadoop. In a terminal, use the following command to see whether the output directory was created:
bin/hadoop fs -ls
Use the following command to view the contents of the generated files:
bin/hadoop fs -cat output1/*
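Each output line is a word, a tab, and its count (the default TextOutputFormat separator). Purely as an illustration — the actual words and numbers depend on your input files — the content looks like:

hadoop	2
hello	3
world	1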
If the problem above appears, just download and install the Visual C++ 2013 redistributable package; the download address is: http://www.microsoft.com/en-us/download/confirmation.aspx?id=40784
This means you have no write permission; see the fix mentioned earlier in the section on installing the Eclipse Hadoop plugin. If you followed the steps in this article, the problem should not occur.
At this point, you can use Eclipse on Windows to remotely develop and debug Hadoop running on Ubuntu.
Note: this article is for my own use and may be updated at any time.
Original article: http://blog.csdn.net/osaymissyou0/article/details/51981728