Tags: mahout, mapreduce, hadoop, machine learning, data mining
Reading guide: the data file randomData.csv was generated by an R program using a random normal-distribution function. For the single-machine, in-memory version of this experiment, see the article 用Maven构建Mahout项目 (Building a Mahout Project with Maven). Original data file (only a portion of the data is shown here):
~ vi datafile/randomData.csv

-0.883033363823402 -3.31967192630249
-2.39312626419456 3.34726861118871
2.66976353341256 1.85144276077058
-1.09922906899594 -6.06261735207489
-4.36361936997216 1.90509905380532
-0.00351835125495037 -0.610105996559153
-2.9962958796338 -3.60959839525735
-3.27529418132066 0.0230099799641799
2.17665594420569 6.77290756817957
-2.47862038335637 2.53431833167278
5.53654901906814 2.65089785582474
5.66257474538338 6.86783609641077
-0.558946883114376 1.22332819416237
5.11728525486132 3.74663871584768
1.91240516693351 2.95874731384062
-2.49747101306535 2.05006504756875
3.98781883213459 1.00780938946366
5.47470532716682 5.35084411045171
Note: Mahout's k-means uses a space (" ") as its default field separator, so I converted the comma-separated data file into a space-separated one.
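For reference, data of this shape can also be generated without R. The following is a minimal Java sketch under stated assumptions: the class name, cluster means, and spreads below are mine, not the author's R script. It writes 1000 space-separated 2-D points drawn from shifted normal distributions, matching the format Mahout's InputDriver expects. The complete Hadoop driver used in this article follows after it.

import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.util.Random;

// Hypothetical stand-in for the author's R script: writes 1000
// space-separated 2-D points drawn from three shifted normal distributions.
public class GenRandomData {
    public static void main(String[] args) throws FileNotFoundException {
        Random rnd = new Random();
        // assumed means of the three point clouds (not the author's values)
        double[][] means = { {1.0, -0.5}, {-3.0, -1.0}, {0.0, 2.5} };
        PrintWriter out = new PrintWriter("datafile/randomData.csv");
        for (int i = 0; i < 1000; i++) {
            double[] m = means[i % means.length];
            double x = m[0] + rnd.nextGaussian() * 1.5;
            double y = m[1] + rnd.nextGaussian() * 1.5;
            out.println(x + " " + y);   // space-separated, as InputDriver expects
        }
        out.close();
    }
}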
package org.conan.mymahout.cluster08;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.mahout.clustering.conversion.InputDriver;
import org.apache.mahout.clustering.kmeans.KMeansDriver;
import org.apache.mahout.clustering.kmeans.RandomSeedGenerator;
import org.apache.mahout.common.distance.DistanceMeasure;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;
import org.apache.mahout.utils.clustering.ClusterDumper;
import org.conan.mymahout.hdfs.HdfsDAO;
import org.conan.mymahout.recommendation.ItemCFHadoop;

public class KmeansHadoop {
    private static final String HDFS = "hdfs://192.168.1.210:9000";

    public static void main(String[] args) throws Exception {
        String localFile = "datafile/randomData.csv";
        String inPath = HDFS + "/user/hdfs/mix_data";
        String seqFile = inPath + "/seqfile";
        String seeds = inPath + "/seeds";
        String outPath = inPath + "/result/";
        String clusteredPoints = outPath + "/clusteredPoints";

        JobConf conf = config();

        // a. Reset the HDFS working directory and upload the data file
        HdfsDAO hdfs = new HdfsDAO(HDFS, conf);
        hdfs.rmr(inPath);
        hdfs.mkdirs(inPath);
        hdfs.copyFile(localFile, inPath);
        hdfs.ls(inPath);

        // b.1) Convert the raw text into Mahout sequence files of VectorWritable
        InputDriver.runJob(new Path(inPath), new Path(seqFile),
                "org.apache.mahout.math.RandomAccessSparseVector");

        // b.2) Randomly pick k points as the initial cluster centers
        int k = 3;
        Path seqFilePath = new Path(seqFile);
        Path clustersSeeds = new Path(seeds);
        DistanceMeasure measure = new EuclideanDistanceMeasure();
        clustersSeeds = RandomSeedGenerator.buildRandom(conf, seqFilePath, clustersSeeds, k, measure);

        // b.3) Run the iterative k-means MapReduce jobs
        // (convergenceDelta = 0.01, maxIterations = 10, runClustering = true)
        KMeansDriver.run(conf, seqFilePath, clustersSeeds, new Path(outPath), measure,
                0.01, 10, true, 0.01, false);

        // c. Dump the final clusters and the clustered points to the console
        Path outGlobPath = new Path(outPath, "clusters-*-final");
        Path clusteredPointsPath = new Path(clusteredPoints);
        System.out.printf("Dumping out clusters from clusters: %s and clusteredPoints: %s\n",
                outGlobPath, clusteredPointsPath);
        ClusterDumper clusterDumper = new ClusterDumper(outGlobPath, clusteredPointsPath);
        clusterDumper.printClusters(null);
    }

    // JobConf setup reused from the ItemCFHadoop recommender example
    public static JobConf config() {
        JobConf conf = new JobConf(ItemCFHadoop.class);
        conf.setJobName("ItemCFHadoop");
        conf.addResource("classpath:/hadoop/core-site.xml");
        conf.addResource("classpath:/hadoop/hdfs-site.xml");
        conf.addResource("classpath:/hadoop/mapred-site.xml");
        return conf;
    }
}
Console output:
Delete: hdfs://192.168.1.210:9000/user/hdfs/mix_data
Create: hdfs://192.168.1.210:9000/user/hdfs/mix_data
copy from: datafile/randomData.csv to hdfs://192.168.1.210:9000/user/hdfs/mix_data
ls: hdfs://192.168.1.210:9000/user/hdfs/mix_data
==========================================================
name: hdfs://192.168.1.210:9000/user/hdfs/mix_data/randomData.csv, folder: false, size: 36655
==========================================================
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2013-10-14 15:39:31 org.apache.hadoop.util.NativeCodeLoader WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-10-14 15:39:31 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles WARNING: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2013-10-14 15:39:31 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus INFO: Total input paths to process : 1
2013-10-14 15:39:31 org.apache.hadoop.io.compress.snappy.LoadSnappy WARNING: Snappy native library not loaded
2013-10-14 15:39:31 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: Running job: job_local_0001
2013-10-14 15:39:31 org.apache.hadoop.mapred.Task initialize INFO: Using ResourceCalculatorPlugin : null
2013-10-14 15:39:31 org.apache.hadoop.mapred.Task done INFO: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
2013-10-14 15:39:31 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:31 org.apache.hadoop.mapred.Task commit INFO: Task attempt_local_0001_m_000000_0 is allowed to commit now
2013-10-14 15:39:31 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask INFO: Saved output of task 'attempt_local_0001_m_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/seqfile
2013-10-14 15:39:31 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:31 org.apache.hadoop.mapred.Task sendDone INFO: Task 'attempt_local_0001_m_000000_0' done.
2013-10-14 15:39:32 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: map 100% reduce 0%
2013-10-14 15:39:32 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: Job complete: job_local_0001
......
2013-10-14 15:39:41 org.apache.hadoop.mapred.Merger$MergeQueue merge INFO: Down to the last merge-pass, with 1 segments left of total size: 677 bytes
2013-10-14 15:39:41 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:41 org.apache.hadoop.mapred.Task done INFO: Task:attempt_local_0009_r_000000_0 is done. And is in the process of commiting
2013-10-14 15:39:41 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:41 org.apache.hadoop.mapred.Task commit INFO: Task attempt_local_0009_r_000000_0 is allowed to commit now
2013-10-14 15:39:41 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask INFO: Saved output of task 'attempt_local_0009_r_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-8
2013-10-14 15:39:41 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO: reduce > reduce
2013-10-14 15:39:41 org.apache.hadoop.mapred.Task sendDone INFO: Task 'attempt_local_0009_r_000000_0' done.
2013-10-14 15:39:42 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: map 100% reduce 100%
2013-10-14 15:39:42 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: Job complete: job_local_0009
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Counters: 19
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: File Output Format Counters
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Bytes Written=695
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: FileSystemCounters
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: FILE_BYTES_READ=27256775
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: HDFS_BYTES_READ=673669
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: FILE_BYTES_WRITTEN=28569192
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: HDFS_BYTES_WRITTEN=152767
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: File Input Format Counters
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Bytes Read=31390
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Map-Reduce Framework
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Map output materialized bytes=681
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Map input records=1000
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Reduce shuffle bytes=0
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Spilled Records=6
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Map output bytes=666
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Total committed heap usage (bytes)=1772093440
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: SPLIT_RAW_BYTES=130
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Combine input records=0
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Reduce input records=3
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Reduce input groups=3
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Combine output records=0
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Reduce output records=3
2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log INFO: Map output records=3
2013-10-14 15:39:42 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles WARNING: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2013-10-14 15:39:42 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus INFO: Total input paths to process : 1
2013-10-14 15:39:42 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: Running job: job_local_0010
2013-10-14 15:39:42 org.apache.hadoop.mapred.Task initialize INFO: Using ResourceCalculatorPlugin : null
2013-10-14 15:39:42 org.apache.hadoop.mapred.MapTask$MapOutputBuffer INFO: io.sort.mb = 100
2013-10-14 15:39:42 org.apache.hadoop.mapred.MapTask$MapOutputBuffer INFO: data buffer = 79691776/99614720
2013-10-14 15:39:42 org.apache.hadoop.mapred.MapTask$MapOutputBuffer INFO: record buffer = 262144/327680
2013-10-14 15:39:42 org.apache.hadoop.mapred.MapTask$MapOutputBuffer flush INFO: Starting flush of map output
2013-10-14 15:39:42 org.apache.hadoop.mapred.MapTask$MapOutputBuffer sortAndSpill INFO: Finished spill 0
2013-10-14 15:39:42 org.apache.hadoop.mapred.Task done INFO: Task:attempt_local_0010_m_000000_0 is done. And is in the process of commiting
2013-10-14 15:39:42 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:42 org.apache.hadoop.mapred.Task sendDone INFO: Task 'attempt_local_0010_m_000000_0' done.
2013-10-14 15:39:42 org.apache.hadoop.mapred.Task initialize INFO: Using ResourceCalculatorPlugin : null
2013-10-14 15:39:42 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:42 org.apache.hadoop.mapred.Merger$MergeQueue merge INFO: Merging 1 sorted segments
2013-10-14 15:39:42 org.apache.hadoop.mapred.Merger$MergeQueue merge INFO: Down to the last merge-pass, with 1 segments left of total size: 677 bytes
2013-10-14 15:39:42 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:42 org.apache.hadoop.mapred.Task done INFO: Task:attempt_local_0010_r_000000_0 is done. And is in the process of commiting
2013-10-14 15:39:42 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:42 org.apache.hadoop.mapred.Task commit INFO: Task attempt_local_0010_r_000000_0 is allowed to commit now
2013-10-14 15:39:42 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask INFO: Saved output of task 'attempt_local_0010_r_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-9
2013-10-14 15:39:42 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO: reduce > reduce
2013-10-14 15:39:42 org.apache.hadoop.mapred.Task sendDone INFO: Task 'attempt_local_0010_r_000000_0' done.
2013-10-14 15:39:43 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: map 100% reduce 100%
2013-10-14 15:39:43 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: Job complete: job_local_0010
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Counters: 19
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: File Output Format Counters
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Bytes Written=695
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: FileSystemCounters
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: FILE_BYTES_READ=30544993
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: HDFS_BYTES_READ=741007
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: FILE_BYTES_WRITTEN=32013760
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: HDFS_BYTES_WRITTEN=154545
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: File Input Format Counters
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Bytes Read=31390
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Map-Reduce Framework
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Map output materialized bytes=681
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Map input records=1000
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Reduce shuffle bytes=0
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Spilled Records=6
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Map output bytes=666
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Total committed heap usage (bytes)=1966735360
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: SPLIT_RAW_BYTES=130
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Combine input records=0
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Reduce input records=3
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Reduce input groups=3
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Combine output records=0
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Reduce output records=3
2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log INFO: Map output records=3
2013-10-14 15:39:43 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles WARNING: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2013-10-14 15:39:43 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus INFO: Total input paths to process : 1
2013-10-14 15:39:43 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: Running job: job_local_0011
2013-10-14 15:39:43 org.apache.hadoop.mapred.Task initialize INFO: Using ResourceCalculatorPlugin : null
2013-10-14 15:39:43 org.apache.hadoop.mapred.MapTask$MapOutputBuffer INFO: io.sort.mb = 100
2013-10-14 15:39:43 org.apache.hadoop.mapred.MapTask$MapOutputBuffer INFO: data buffer = 79691776/99614720
2013-10-14 15:39:43 org.apache.hadoop.mapred.MapTask$MapOutputBuffer INFO: record buffer = 262144/327680
2013-10-14 15:39:43 org.apache.hadoop.mapred.MapTask$MapOutputBuffer flush INFO: Starting flush of map output
2013-10-14 15:39:43 org.apache.hadoop.mapred.MapTask$MapOutputBuffer sortAndSpill INFO: Finished spill 0
2013-10-14 15:39:43 org.apache.hadoop.mapred.Task done INFO: Task:attempt_local_0011_m_000000_0 is done. And is in the process of commiting
2013-10-14 15:39:43 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:43 org.apache.hadoop.mapred.Task sendDone INFO: Task 'attempt_local_0011_m_000000_0' done.
2013-10-14 15:39:43 org.apache.hadoop.mapred.Task initialize INFO: Using ResourceCalculatorPlugin : null
2013-10-14 15:39:43 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:43 org.apache.hadoop.mapred.Merger$MergeQueue merge INFO: Merging 1 sorted segments
2013-10-14 15:39:43 org.apache.hadoop.mapred.Merger$MergeQueue merge INFO: Down to the last merge-pass, with 1 segments left of total size: 677 bytes
2013-10-14 15:39:43 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:43 org.apache.hadoop.mapred.Task done INFO: Task:attempt_local_0011_r_000000_0 is done. And is in the process of commiting
2013-10-14 15:39:43 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:43 org.apache.hadoop.mapred.Task commit INFO: Task attempt_local_0011_r_000000_0 is allowed to commit now
2013-10-14 15:39:43 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask INFO: Saved output of task 'attempt_local_0011_r_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-10
2013-10-14 15:39:43 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO: reduce > reduce
2013-10-14 15:39:43 org.apache.hadoop.mapred.Task sendDone INFO: Task 'attempt_local_0011_r_000000_0' done.
2013-10-14 15:39:44 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: map 100% reduce 100%
2013-10-14 15:39:44 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: Job complete: job_local_0011
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Counters: 19
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: File Output Format Counters
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Bytes Written=695
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: FileSystemCounters
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: FILE_BYTES_READ=33833211
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: HDFS_BYTES_READ=808345
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: FILE_BYTES_WRITTEN=35458320
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: HDFS_BYTES_WRITTEN=156323
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: File Input Format Counters
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Bytes Read=31390
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Map-Reduce Framework
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Map output materialized bytes=681
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Map input records=1000
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Reduce shuffle bytes=0
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Spilled Records=6
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Map output bytes=666
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Total committed heap usage (bytes)=2166095872
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: SPLIT_RAW_BYTES=130
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Combine input records=0
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Reduce input records=3
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Reduce input groups=3
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Combine output records=0
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Reduce output records=3
2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log INFO: Map output records=3
2013-10-14 15:39:44 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles WARNING: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2013-10-14 15:39:44 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus INFO: Total input paths to process : 1
2013-10-14 15:39:44 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: Running job: job_local_0012
2013-10-14 15:39:44 org.apache.hadoop.mapred.Task initialize INFO: Using ResourceCalculatorPlugin : null
2013-10-14 15:39:44 org.apache.hadoop.mapred.Task done INFO: Task:attempt_local_0012_m_000000_0 is done. And is in the process of commiting
2013-10-14 15:39:44 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:44 org.apache.hadoop.mapred.Task commit INFO: Task attempt_local_0012_m_000000_0 is allowed to commit now
2013-10-14 15:39:44 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask INFO: Saved output of task 'attempt_local_0012_m_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusteredPoints
2013-10-14 15:39:44 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate INFO:
2013-10-14 15:39:44 org.apache.hadoop.mapred.Task sendDone INFO: Task 'attempt_local_0012_m_000000_0' done.
2013-10-14 15:39:45 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: map 100% reduce 0%
2013-10-14 15:39:45 org.apache.hadoop.mapred.JobClient monitorAndPrintJob INFO: Job complete: job_local_0012
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: Counters: 11
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: File Output Format Counters
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: Bytes Written=41520
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: File Input Format Counters
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: Bytes Read=31390
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: FileSystemCounters
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: FILE_BYTES_READ=18560374
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: HDFS_BYTES_READ=437203
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: FILE_BYTES_WRITTEN=19450325
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: HDFS_BYTES_WRITTEN=120417
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: Map-Reduce Framework
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: Map input records=1000
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: Spilled Records=0
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: Total committed heap usage (bytes)=1083047936
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: SPLIT_RAW_BYTES=130
2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log INFO: Map output records=1000
Dumping out clusters from clusters: hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-*-final and clusteredPoints: hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusteredPoints
CL-552{n=443 c=[1.631, -0.412] r=[1.563, 1.407]}
Weight : [props - optional]: Point:
1.0: [-2.393, 3.347]
1.0: [-4.364, 1.905]
1.0: [-3.275, 0.023]
1.0: [-2.479, 2.534]
1.0: [-0.559, 1.223]
...
CL-847{n=77 c=[-2.953, -0.971] r=[1.767, 2.189]}
Weight : [props - optional]: Point:
1.0: [-0.883, -3.320]
1.0: [-1.099, -6.063]
1.0: [-0.004, -0.610]
1.0: [-2.996, -3.610]
1.0: [3.988, 1.008]
...
CL-823{n=480 c=[0.219, 2.600] r=[1.479, 1.385]}
Weight : [props - optional]: Point:
1.0: [2.670, 1.851]
1.0: [2.177, 6.773]
1.0: [5.537, 2.651]
1.0: [5.663, 6.868]
1.0: [5.117, 3.747]
1.0: [1.912, 2.959]
...
a. Create the HDFS data and working directories, and upload the data file.
Delete: hdfs://192.168.1.210:9000/user/hdfs/mix_data
Create: hdfs://192.168.1.210:9000/user/hdfs/mix_data
copy from: datafile/randomData.csv to hdfs://192.168.1.210:9000/user/hdfs/mix_data
ls: hdfs://192.168.1.210:9000/user/hdfs/mix_data
==========================================================
name: hdfs://192.168.1.210:9000/user/hdfs/mix_data/randomData.csv, folder: false, size: 36655
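HdfsDAO is the author's helper class from the companion article. As a minimal sketch of the raw Hadoop FileSystem calls it presumably wraps (my assumption; the class name HdfsPrep is mine):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the FileSystem operations behind rmr/mkdirs/copyFile/ls.
public class HdfsPrep {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.1.210:9000"), conf);
        Path in = new Path("/user/hdfs/mix_data");
        fs.delete(in, true);                                            // rmr(inPath)
        fs.mkdirs(in);                                                  // mkdirs(inPath)
        fs.copyFromLocalFile(new Path("datafile/randomData.csv"), in);  // copyFile(...)
        for (FileStatus s : fs.listStatus(in)) {                        // ls(inPath)
            System.out.println("name: " + s.getPath() + ", folder: "
                    + s.isDir() + ", size: " + s.getLen());
        }
        fs.close();
    }
}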
b. Run the algorithm
The algorithm runs in 3 steps. 1) Convert the raw data in randomData.csv into Mahout sequence files of VectorWritable. Program source:
InputDriver.runJob(new Path(inPath), new Path(seqFile), "org.apache.mahout.math.RandomAccessSparseVector");

Log output: see job_local_0001 in the full console output above, whose task output is saved to hdfs://192.168.1.210:9000/user/hdfs/mix_data/seqfile.
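To sanity-check the conversion, the sequence file can be read back with the SequenceFile API. A small sketch, assuming the records are <Text, VectorWritable> pairs (which is what InputDriver emits; the class name SeqFilePeek is mine):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.mahout.math.VectorWritable;

// Dump the first few vectors of the converted sequence file.
public class SeqFilePeek {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.1.210:9000"), conf);
        Path path = new Path("/user/hdfs/mix_data/seqfile/part-m-00000");
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        Text key = new Text();
        VectorWritable value = new VectorWritable();
        int shown = 0;
        while (reader.next(key, value) && shown++ < 5) {
            System.out.println(key + "\t" + value.get().asFormatString());
        }
        reader.close();
    }
}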
2) Randomly pick 3 points as the initial k-means cluster centers. Program source:
int k = 3;
Path seqFilePath = new Path(seqFile);
Path clustersSeeds = new Path(seeds);
DistanceMeasure measure = new EuclideanDistanceMeasure();
clustersSeeds = RandomSeedGenerator.buildRandom(conf, seqFilePath, clustersSeeds, k, measure);
Log output:
Job complete: job_local_0002
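RandomSeedGenerator.buildRandom scans the input once and keeps k vectors chosen uniformly at random as the initial clusters. The selection idea amounts to reservoir sampling; a minimal generic sketch (my code, not Mahout's implementation):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Reservoir sampling: pick k items uniformly from a stream of unknown length.
public class Reservoir {
    public static <T> List<T> sample(Iterable<T> stream, int k, Random rnd) {
        List<T> picked = new ArrayList<T>(k);
        int seen = 0;
        for (T item : stream) {
            seen++;
            if (picked.size() < k) {
                picked.add(item);             // fill the reservoir first
            } else {
                int j = rnd.nextInt(seen);    // uniform in [0, seen)
                if (j < k) {
                    picked.set(j, item);      // keep item with probability k/seen
                }
            }
        }
        return picked;
    }
}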
3) Run the MapReduce computation for the configured number of iterations.
Program source:
KMeansDriver.run(conf, seqFilePath, clustersSeeds, new Path(outPath), measure, 0.01, 10, true, 0.01, false);
Log output:
Job complete: job_local_0003
Job complete: job_local_0004
Job complete: job_local_0005
Job complete: job_local_0006
Job complete: job_local_0007
Job complete: job_local_0008
Job complete: job_local_0009
Job complete: job_local_0010
Job complete: job_local_0011
Job complete: job_local_0012
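Each k-means iteration runs as its own MapReduce job (the log above shows job_local_0009 writing clusters-8, job_local_0010 writing clusters-9, and job_local_0011 writing clusters-10), and the last job, job_local_0012, classifies every point against the final centers into clusteredPoints. Conceptually, the map phase assigns each point to its nearest center and the reduce phase recomputes the centers; the driver stops when no center moves more than convergenceDelta (0.01 here) or after maxIterations (10). A minimal in-memory sketch of one iteration (not Mahout's implementation):

// One k-means (Lloyd) iteration over 2-D points: assign each point to its
// nearest center (the map phase), then average each cluster (the reduce phase).
public class KMeansStep {
    static double[][] iterate(double[][] points, double[][] centers) {
        int k = centers.length;
        double[][] sums = new double[k][2];
        int[] counts = new int[k];
        for (double[] p : points) {
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int c = 0; c < k; c++) {
                double dx = p[0] - centers[c][0];
                double dy = p[1] - centers[c][1];
                double d = dx * dx + dy * dy;   // squared Euclidean distance
                if (d < bestDist) { bestDist = d; best = c; }
            }
            sums[best][0] += p[0];
            sums[best][1] += p[1];
            counts[best]++;
        }
        double[][] next = new double[k][2];
        for (int c = 0; c < k; c++) {
            // empty clusters keep their old center
            next[c][0] = counts[c] > 0 ? sums[c][0] / counts[c] : centers[c][0];
            next[c][1] = counts[c] > 0 ? sums[c][1] / counts[c] : centers[c][1];
        }
        return next;
    }
}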
c. Print the clustering result
Dumping out clusters from clusters: hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-*-final and clusteredPoints: hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusteredPoints
CL-552{n=443 c=[1.631, -0.412] r=[1.563, 1.407]}
CL-847{n=77 c=[-2.953, -0.971] r=[1.767, 2.189]}
CL-823{n=480 c=[0.219, 2.600] r=[1.479, 1.385]}
Result: 3 centers were found (in ClusterDumper's notation, n is the number of points in the cluster, c its center, and r its radius):

Cluster 1: 443 points, center [1.631, -0.412]
Cluster 2: 77 points, center [-2.953, -0.971]
Cluster 3: 480 points, center [0.219, 2.600]
These are the files and directories left on HDFS after the run:

# root directory
~ hadoop fs -ls /user/hdfs/mix_data
Found 4 items
-rw-r--r-- 3 Administrator supergroup 36655 2013-10-04 15:31 /user/hdfs/mix_data/randomData.csv
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/seeds
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/seqfile

# output directory
~ hadoop fs -ls /user/hdfs/mix_data/result
Found 13 items
-rw-r--r-- 3 Administrator supergroup 194 2013-10-04 15:31 /user/hdfs/mix_data/result/_policy
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusteredPoints
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-0
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-1
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-10-final
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-2
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-3
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-4
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-5
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-6
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-7
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-8
drwxr-xr-x - Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-9

# directory of the randomly generated center seeds
~ hadoop fs -ls /user/hdfs/mix_data/seeds
Found 1 items
-rw-r--r-- 3 Administrator supergroup 599 2013-10-04 15:31 /user/hdfs/mix_data/seeds/part-randomSeed

# directory of the input converted to Mahout's sequence-file format
~ hadoop fs -ls /user/hdfs/mix_data/seqfile
Found 2 items
-rw-r--r-- 3 Administrator supergroup 0 2013-10-04 15:31 /user/hdfs/mix_data/seqfile/_SUCCESS
-rw-r--r-- 3 Administrator supergroup 31390 2013-10-04 15:31 /user/hdfs/mix_data/seqfile/part-m-00000
Save the clustered points into separate cluster*.csv files, then plot them with R (a sketch of one way to produce those CSV files follows the R code below).
c1 <- read.csv(file="cluster1.csv", sep=",", header=FALSE)
c2 <- read.csv(file="cluster2.csv", sep=",", header=FALSE)
c3 <- read.csv(file="cluster3.csv", sep=",", header=FALSE)
y <- rbind(c1, c2, c3)
cols <- c(rep(1, nrow(c1)), rep(2, nrow(c2)), rep(3, nrow(c3)))
plot(y, col=c("black","blue","green")[cols])   # original points, one color per cluster
center <- matrix(c(1.631, -0.412, -2.953, -0.971, 0.219, 2.600), ncol=2, byrow=TRUE)
points(center, col="violetred", pch=19)        # overlay the 3 k-means centers
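The article does not show how the cluster*.csv files were produced. A hedged sketch of one way to do it, reading result/clusteredPoints and writing one CSV per cluster; it assumes <IntWritable, WeightedPropertyVectorWritable> records, as Mahout 0.8's classification step writes (older versions use WeightedVectorWritable instead), and the class name ExportClusteredPoints is mine:

import java.io.PrintWriter;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.mahout.clustering.classify.WeightedPropertyVectorWritable;
import org.apache.mahout.math.Vector;

// Hypothetical exporter: split clusteredPoints into per-cluster CSV files.
public class ExportClusteredPoints {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.1.210:9000"), conf);
        Path path = new Path("/user/hdfs/mix_data/result/clusteredPoints/part-m-00000");
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        IntWritable clusterId = new IntWritable();
        WeightedPropertyVectorWritable point = new WeightedPropertyVectorWritable();
        Map<Integer, PrintWriter> writers = new HashMap<Integer, PrintWriter>();
        while (reader.next(clusterId, point)) {
            PrintWriter w = writers.get(clusterId.get());
            if (w == null) {
                // name files by order of first appearance: cluster1.csv, cluster2.csv, ...
                w = new PrintWriter("cluster" + (writers.size() + 1) + ".csv");
                writers.put(clusterId.get(), w);
            }
            Vector v = point.getVector();
            w.println(v.get(0) + "," + v.get(1));
        }
        reader.close();
        for (PrintWriter w : writers.values()) {
            w.close();
        }
    }
}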
In the figure above, the hollow black, blue, and green points are the original data, and the 3 solid violet points are the centers produced by Mahout's k-means.
The clusters and centers differ somewhat from those computed with the in-memory R implementation of k-means in the article 用Maven构建Mahout项目 (Building a Mahout Project with Maven). In short: k-means results depend on the distance measure, the convergence threshold, the initial centers, and the number of iterations; change any of these and the clustering changes.
Original article: http://blog.csdn.net/u013361361/article/details/40558411