scala> val f1 = sc.textFile("/tmp/dataTest/followers.txt")
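The post never shows the contents of followers.txt. Judging from the collect output below, it plausibly holds number pairs separated by '-', one per line; the following is an assumption reconstructed from that output, not taken from the original post:

2-1
4-1
1-2
6-3
7-3
7-6
6-7
3-7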
scala> f1.flatMap(x => x.split("-")).map((_, 1)).collect  // split each line on '-', use each number as a key, and pair it with the value 1
res10: Array[(String, Int)] = Array((2,1), (1,1), (4,1), (1,1), (1,1), (2,1), (6,1), (3,1), (7,1), (3,1), (7,1), (6,1), (6,1), (7,1), (3,1), (7,1))
scala> f1.flatMap(x => x.split("-")).map((_, 1)).reduceByKey(_ + _).collect  // sum the 1s per key to count each number's occurrences
res12: Array[(String, Int)] = Array((4,1), (7,4), (6,3), (2,2), (3,3), (1,3))
sortByKey
scala> val resText = f1.flatMap(x => x.split("-")).map((_, 1)).reduceByKey(_ + _).map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1))  // swap to (count, key), sort descending by count, then swap back
resText: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[39] at map at <console>:26
The swap step can also be written with pattern matching: map { case (k, v) => (v, k) }.sortByKey(false)
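The swap-sort-swap dance can be avoided entirely with RDD.sortBy, which sorts by an arbitrary key function. A minimal equivalent sketch, reusing the f1 from above (resText2 is a made-up name):

val resText2 = f1.flatMap(_.split("-"))
  .map((_, 1))
  .reduceByKey(_ + _)
  .sortBy(_._2, ascending = false)  // sort directly by the count, descending

This keeps the pairs in (number, count) form throughout, so no second map is needed before saving.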
scala> resText.saveAsTextFile("/tmp/out/res")
[root@node4 node4]# hdfs dfs -cat /tmp/out/res/part-00000
(7,4)
(6,3)
(3,3)
(1,3)
(2,2)
(4,1)
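For reference, here is the whole pipeline as a self-contained application instead of shell input. This is a sketch under the assumptions above; the object name FollowerCount is made up, and the paths are those used in the post:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical standalone version of the shell session above.
object FollowerCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("FollowerCount"))
    val counts = sc.textFile("/tmp/dataTest/followers.txt")
      .flatMap(_.split("-"))           // one element per number
      .map((_, 1))                     // (number, 1)
      .reduceByKey(_ + _)              // (number, occurrences)
      .sortBy(_._2, ascending = false) // most frequent first
    counts.saveAsTextFile("/tmp/out/res")
    sc.stop()
  }
}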
Original post: http://www.cnblogs.com/zhangXingSheng/p/6512583.html