For how an Executor runs operators, see the earlier post: Big Data: Spark Core (4) Using the LogQuery example to explain how the Executor computes RDD operators. When the Executor performs the reduce-side computation, it produces temporary shuffle data holding the intermediate results and saves it to disk, to be consumed later by the final action; this stage is what runs inside ShuffleMapTask.
As mentioned in the earlier post, which shuffle writer is used is decided by the ShuffleHandle. This post focuses on the core algorithm behind the most common writer, SortShuffleWriter: the ExternalSorter.
The earlier post also described how SortShuffleWriter calls ExternalSorter.insertAll to insert and combine the incoming records; inside ExternalSorter, a PartitionedAppendOnlyMap is used as the in-memory store for the data.
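To make the insert-and-combine flow concrete, here is a minimal, self-contained sketch of what the combine path of insertAll does when map-side aggregation is enabled. The map below is an ordinary mutable.Map standing in for PartitionedAppendOnlyMap, and the names (createCombiner, mergeValue, getPartition) only mirror the roles they play in Spark; the point is the (partition, key) keying and the create-or-merge update pattern, not the exact implementation.

import scala.collection.mutable

object InsertAllSketch {
  type K = String
  type V = Int
  type C = Int  // combiner type

  val numPartitions = 4
  def getPartition(key: K): Int = math.abs(key.hashCode % numPartitions)

  // createCombiner / mergeValue play the role of the Aggregator's functions.
  val createCombiner: V => C = v => v
  val mergeValue: (C, V) => C = _ + _

  // Ordinary mutable.Map standing in for PartitionedAppendOnlyMap: keyed by (partition, key).
  val map = mutable.Map.empty[(Int, K), C]

  def insertAll(records: Iterator[(K, V)]): Unit = {
    records.foreach { case (k, v) =>
      val pk = (getPartition(k), k)
      // changeValue semantics: create a combiner on first sight, merge into it afterwards.
      map.update(pk, map.get(pk) match {
        case Some(c) => mergeValue(c, v)
        case None    => createCombiner(v)
      })
      // In Spark, a maybeSpill-style check runs here to spill the map when memory runs low.
    }
  }

  def main(args: Array[String]): Unit = {
    insertAll(Iterator("a" -> 1, "b" -> 2, "a" -> 3))
    println(map)  // the two "a" records have been combined to 4; "b" stays 2
  }
}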
Let's first look at the structure of PartitionedAppendOnlyMap.
Although it is called a Map, its structure is quite different from a typical hash map: keys with the same hash value are not chained in linked lists. When an inserted key has the same hashcode as an existing but different key, the probe offset i keeps being added until a free slot in the array is found.
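A minimal sketch of this open-addressing idea, assuming a flat array that stores each key and its value in adjacent slots (data(2*pos) and data(2*pos+1)); the probing step and the fixed capacity are simplifications for illustration, not a copy of AppendOnlyMap:

class FlatHashMap(capacity: Int = 64) {
  require((capacity & (capacity - 1)) == 0, "capacity must be a power of two")
  private val mask = capacity - 1
  private val data = new Array[AnyRef](2 * capacity)

  def changeValue(key: AnyRef, update: (Boolean, AnyRef) => AnyRef): AnyRef = {
    var pos = key.hashCode & mask
    var i = 1
    while (true) {
      val curKey = data(2 * pos)
      if (curKey == null) {
        // Empty slot: insert the new key and the value produced for "no previous value".
        val newValue = update(false, null)
        data(2 * pos) = key
        data(2 * pos + 1) = newValue
        return newValue
      } else if (curKey == key) {
        // Same key already present: update its value in place.
        val newValue = update(true, data(2 * pos + 1))
        data(2 * pos + 1) = newValue
        return newValue
      } else {
        // Hash collision with a different key: keep probing by adding i.
        pos = (pos + i) & mask
        i += 1
      }
    }
    null // unreachable
  }
}

The real AppendOnlyMap additionally doubles its backing array and rehashes once a load factor is exceeded, which is exactly what makes its memory footprint grow in jumps and motivates the memory checks discussed below.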
A few points to note here:
The Partitioner interface, and the getPartition implementation used by HashPartitioner:
abstract class Partitioner extends Serializable {
  def numPartitions: Int
  def getPartition(key: Any): Int
}
def getPartition(key: Any): Int = key match {
  case null => 0
  case _ => Utils.nonNegativeMod(key.hashCode, numPartitions)
}
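A quick REPL-style illustration of how getPartition maps keys to partitions; the nonNegativeMod helper is reconstructed here from its behaviour (a plain % keeps the sign of the dividend, so negative hash codes have to be shifted back into range):

// Reconstructed helper: map any Int, including negative hash codes, into [0, mod).
def nonNegativeMod(x: Int, mod: Int): Int = {
  val rawMod = x % mod
  rawMod + (if (rawMod < 0) mod else 0)
}

val numPartitions = 8
println(nonNegativeMod("spark".hashCode, numPartitions))  // always a partition id in [0, 8)
println(nonNegativeMod(-15, numPartitions))               // 1, whereas -15 % 8 would be -7
// A null key always goes to partition 0, as the match in getPartition above shows.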
When merging at this scale, the amount of data to combine is huge, so doing the whole merge inside an in-memory AppendOnlyMap is clearly not enough; in that case the data already combined in memory has to be flushed (spilled) to disk to avoid the risk of an OOM.
The thresholds that control spilling to disk
if (elementsRead % 32 == 0 && currentMemory >= myMemoryThreshold) {
  // Claim up to double our current memory from the shuffle memory pool
  val amountToRequest = 2 * currentMemory - myMemoryThreshold
  val granted = acquireMemory(amountToRequest)
  myMemoryThreshold += granted
  // If we were granted too little memory to grow further (either tryToAcquire returned 0,
  // or we already had more memory than myMemoryThreshold), spill the current collection
  shouldSpill = currentMemory >= myMemoryThreshold
}

Every 32 elements added, the current memory situation is checked: currentMemory is the estimated memory currently used by the map, and myMemoryThreshold is the amount of memory it is allowed to use, whose initial value is controlled by the parameter:
spark.shuffle.spill.initialMemoryThreshold. Why try to claim up to twice the current memory? Because AppendOnlyMap doubles its array on every resize, so right after a resize the map may need roughly twice the memory it is using now.
A second parameter, spark.shuffle.spill.numElementsForceSpillThreshold, forces a spill once the number of inserted records exceeds this value, regardless of how much memory has been granted.
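Putting the two thresholds together, the spill decision looks roughly like the sketch below; this is a condensed, self-contained rewrite of the check shown above (acquireMemory is stubbed out and the names follow that snippet), not Spark's exact code. The numbers in the comment are just a worked example of the doubling.

// Condensed sketch of the spill decision combining both thresholds.
class SpillDecisionSketch(initialThreshold: Long, forceSpillThreshold: Long) {
  private var myMemoryThreshold: Long = initialThreshold

  // Stub: pretend the memory pool always grants the full request.
  private def acquireMemory(bytes: Long): Long = bytes

  def maybeSpill(elementsRead: Long, currentMemory: Long): Boolean = {
    var shouldSpill = false
    if (elementsRead % 32 == 0 && currentMemory >= myMemoryThreshold) {
      // Worked example: currentMemory = 40 MB, myMemoryThreshold = 30 MB
      //   => request 2 * 40 - 30 = 50 MB; if fully granted the threshold becomes 80 MB,
      //   i.e. 2 * currentMemory, enough to survive the next doubling of the map's array.
      val amountToRequest = 2 * currentMemory - myMemoryThreshold
      val granted = acquireMemory(amountToRequest)
      myMemoryThreshold += granted
      shouldSpill = currentMemory >= myMemoryThreshold
    }
    // Force a spill once too many records have been inserted, regardless of memory.
    shouldSpill || elementsRead > forceSpillThreshold
  }
}

Once data has been spilled, the spilled files and the remaining in-memory data eventually have to be read back and merged; ExternalSorter does this with a merge sort over sorted iterators.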
mergeSort performs a k-way merge over these already-sorted iterators, using a priority queue keyed by each iterator's current head:

private def mergeSort(iterators: Seq[Iterator[Product2[K, C]]], comparator: Comparator[K])
    : Iterator[Product2[K, C]] = {
  val bufferedIters = iterators.filter(_.hasNext).map(_.buffered)
  type Iter = BufferedIterator[Product2[K, C]]
  val heap = new mutable.PriorityQueue[Iter]()(new Ordering[Iter] {
    // Use the reverse of comparator.compare because PriorityQueue dequeues the max
    override def compare(x: Iter, y: Iter): Int = -comparator.compare(x.head._1, y.head._1)
  })
  heap.enqueue(bufferedIters: _*)  // Will contain only the iterators with hasNext = true
  new Iterator[Product2[K, C]] {
    override def hasNext: Boolean = !heap.isEmpty
    override def next(): Product2[K, C] = {
      if (!hasNext) {
        throw new NoSuchElementException
      }
      val firstBuf = heap.dequeue()
      val firstPair = firstBuf.next()
      if (firstBuf.hasNext) {
        heap.enqueue(firstBuf)
      }
      firstPair
    }
  }
}
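The same pattern in a standalone form (independent of Spark's types), with a small usage example; this is a generic k-way merge for illustration, not Spark code:

import scala.collection.mutable

object KWayMergeDemo {
  // Merge already-sorted iterators into one sorted iterator, the pattern used by mergeSort.
  def kWayMerge[A](iterators: Seq[Iterator[A]])(implicit ord: Ordering[A]): Iterator[A] = {
    type Iter = BufferedIterator[A]
    // PriorityQueue dequeues the max, so reverse the ordering to always pop the smallest head.
    val heap = new mutable.PriorityQueue[Iter]()(Ordering.by[Iter, A](_.head)(ord.reverse))
    heap.enqueue(iterators.filter(_.hasNext).map(_.buffered): _*)
    new Iterator[A] {
      override def hasNext: Boolean = heap.nonEmpty
      override def next(): A = {
        val it = heap.dequeue()
        val elem = it.next()
        if (it.hasNext) heap.enqueue(it)  // put it back, keyed by its new head
        elem
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val merged = kWayMerge(Seq(Iterator(1, 4, 7), Iterator(2, 5), Iterator(3, 6, 8)))
    println(merged.toList)  // List(1, 2, 3, 4, 5, 6, 7, 8)
  }
}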
When the merge also has to aggregate and the comparator only orders keys by hash (there is no total ordering on the keys themselves), mergeWithAggregation has to collect every pair in the current hash group and match real keys inside it; this is the fragment that does so:

if (!hasNext) {
  throw new NoSuchElementException
}
keys.clear()
combiners.clear()
val firstPair = sorted.next()
keys += firstPair._1
combiners += firstPair._2
val key = firstPair._1
// Pull in every following pair that the comparator considers equal to `key`
// (i.e. the whole hash group).
while (sorted.hasNext && comparator.compare(sorted.head._1, key) == 0) {
  val pair = sorted.next()
  var i = 0
  var foundKey = false
  // Scan the distinct keys seen so far: merge if this is a true duplicate,
  // otherwise record it as a new key in the group.
  while (i < keys.size && !foundKey) {
    if (keys(i) == pair._1) {
      combiners(i) = mergeCombiners(combiners(i), pair._2)
      foundKey = true
    }
    i += 1
  }
  if (!foundKey) {
    keys += pair._1
    combiners += pair._2
  }
}
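Why the inner scan is necessary: two different keys can share a hash code, so "equal under the hash comparator" does not mean equal keys. A tiny REPL-style illustration using a well-known JVM string collision (the grouping below only mimics the idea, it is not Spark code):

// "Aa" and "BB" hash to the same value on the JVM, so a hash-only comparator
// treats them as equal even though they are different keys.
println("Aa".hashCode == "BB".hashCode)  // true (both are 2112)

val pairs = Seq("Aa" -> 1, "BB" -> 2, "Aa" -> 3)
// Group by hash (what the comparator sees), then aggregate per real key inside each group,
// mirroring the keys/combiners scan above.
val combined = pairs.groupBy(_._1.hashCode).values.flatMap { group =>
  group.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).sum }
}
println(combined.toMap)  // Map(Aa -> 4, BB -> 2): colliding keys are kept separate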
Big Data: Spark Shuffle (1) ShuffleWrite: How the Executor merges the shuffle results and writes them to the data file
Original article (in Chinese): http://blog.csdn.net/raintungli/article/details/70807376