aggregateByKey is similar to aggregate in that both perform two rounds of aggregation. The difference is that aggregate only operates at the partition level, while aggregateByKey further breaks each partition down by key (see the sketch after the signatures below).
def aggregateByKey[U: ClassTag](zeroValue: U, partitioner: Partitioner)(seqOp: (U, V) => U, combOp: (U, U) => U): RDD[(K, U)]
def aggregateByKey[U: ClassTag](zeroValue: U, numPartitions: Int)(seqOp: (U, V) => U, combOp: (U, U) => U): RDD[(K, U)]
def aggregateByKey[U: ClassTag](zeroValue: U)(seqOp: (U, V) => U, combOp: (U, U) => U): RDD[(K, U)]
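To make the contrast concrete, here is a minimal sketch (assuming a running spark-shell, so sc already exists, and using the same sample data as the example below): aggregate collapses the whole RDD into a single value, whereas aggregateByKey returns one aggregated value per key.

// Sketch only: sc is the SparkContext provided by spark-shell.
val pairs = sc.parallelize(List((1, 3), (1, 2), (1, 4), (2, 3), (2, 4)), 2)

// aggregate ignores the keys and folds every element of the RDD down to one value:
// seqOp folds elements within a partition, combOp merges the partition results.
val total = pairs.aggregate(0)((acc, kv) => acc + kv._2, _ + _)   // 3+2+4+3+4 = 16

// aggregateByKey keeps the key and folds only that key's values,
// first inside each partition, then across partitions, yielding an RDD[(K, U)].
val perKey = pairs.aggregateByKey(0)(_ + _, _ + _).collect()      // Array((2,7), (1,9)); ordering may vary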
// The data is split into two partitions
// Partition 1: (1,3),(1,2)
// Partition 2: (1,4),(2,3),(2,4)
scala> var data = sc.parallelize(List((1,3),(1,2),(1,4),(2,3),(2,4)),2)
data: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[7] at parallelize at <console>:24

// Aggregate by key within each partition
scala> def InnerCom(a:Int, b:Int) : Int ={
     |   println("InnerCom: " + a + " :" + b)
     |   math.max(a,b)
     | }
InnerCom: (a: Int, b: Int)Int

// Aggregate across partitions
scala> def PartitionCom(a:Int, b:Int) : Int ={
     |   println("PartitionCom: " + a + " :" + b)
     |   a + b
     | }
PartitionCom: (a: Int, b: Int)Int

// Partition 1 contains a single key (1) with two elements;
// the within-partition step yields (1,3)
// Partition 2 contains two keys, 1 and 2;
// the within-partition step yields (1,4) and (2,4)
// The cross-partition step then produces (1,7) and (2,4)
scala> data.aggregateByKey(2)(InnerCom, PartitionCom).collect
InnerCom: 2 :3
InnerCom: 3 :2
InnerCom: 2 :4
InnerCom: 2 :3
InnerCom: 3 :4
PartitionCom: 3 :4
res: Array[(Int, Int)] = Array((2,4), (1,7))
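Once the tracing output is no longer needed, the same call can be written with anonymous functions. A minimal sketch against the data RDD defined above:

// Same aggregation without the println tracing.
// The zero value 2 seeds the accumulator once per key per partition;
// it is only fed to the within-partition seqOp, never to the cross-partition combOp.
val result = data.aggregateByKey(2)(math.max(_, _), _ + _).collect()
// result should match the traced run above: Array((2,4), (1,7)) (ordering may vary)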
Original post: http://www.cnblogs.com/gaohuajie/p/7495210.html