In the previous four posts, we analyzed the first phase of the overall job submission and execution flow, stage splitting and submission, which itself breaks down into three sub-phases:
1. the job scheduling model and result feedback;
2. stage splitting;
3. stage submission, i.e. generating the corresponding TaskSets.
Stage splitting and submission is mainly the work of DAGScheduler, which handles the logical scheduling of a job: it decomposes the DAG, splits the whole job into stages according to whether adjacent RDDs are connected by a shuffle dependency, and turns each stage into a collection of tasks, a TaskSet.
The second phase we turn to now, task scheduling and execution, is the physical scheduling of a job in Spark. It in turn consists of two main parts:
1. task scheduling;
2. task execution.
Let's start with task scheduling. Recall that at the end of the first phase, after a stage is submitted it is converted into a collection of tasks, a TaskSet, and taskScheduler.submitTasks() is then called immediately to submit those tasks. The TaskScheduler's main responsibility is precisely this physical scheduling phase, task scheduling. TaskScheduler is a Scala trait, which you can loosely think of as a Java interface; at present it has only one implementation, TaskSchedulerImpl.
TaskScheduler is the low-level task scheduler; each TaskScheduler schedules tasks for a single SparkContext. These schedulers receive the sets of tasks that DAGScheduler submits to them for each stage, and are responsible for sending those tasks to the cluster, running them, retrying them if they fail, and mitigating stragglers (similar in spirit to MapReduce's speculative execution; let's leave that question open for now). They return events to the DAGScheduler. Its source is as follows:
    private[spark] trait TaskScheduler {

      private val appId = "spark-application-" + System.currentTimeMillis

      def rootPool: Pool

      def schedulingMode: SchedulingMode

      def start(): Unit

      // Invoked after the system has successfully initialized (typically in SparkContext).
      def postStartHook() { }

      // Disconnect from the cluster.
      def stop(): Unit

      // Submit a sequence of tasks to run.
      def submitTasks(taskSet: TaskSet): Unit

      // Cancel a stage.
      def cancelTasks(stageId: Int, interruptThread: Boolean)

      // Set the DAG scheduler for upcalls; guaranteed to be set before submitTasks is called.
      def setDAGScheduler(dagScheduler: DAGScheduler): Unit

      // Get the default level of parallelism to use in the cluster, as a hint for sizing jobs.
      def defaultParallelism(): Int

      // Update metrics for in-progress tasks and let the master know that the BlockManager
      // is still alive. Return true if the driver knows about the given block manager.
      def executorHeartbeatReceived(execId: String, taskMetrics: Array[(Long, TaskMetrics)],
          blockManagerId: BlockManagerId): Boolean

      // Get an application ID associated with the job.
      def applicationId(): String = appId

      // Process a lost executor.
      def executorLost(executorId: String, reason: ExecutorLossReason): Unit

      // Get an application's attempt ID associated with the job.
      def applicationAttemptId(): Option[String]

    }
From the source we can see that TaskScheduler provides the start() and stop() methods needed when it is created and torn down, submitTasks() and cancelTasks() for submitting and cancelling tasks, and executorHeartbeatReceived(), which periodically receives executor heartbeats, updates the metrics of running tasks, and lets the master know the BlockManager is still alive.
Good. With the source in hand, let's go through it step by step.
First, at the end of DAGScheduler's submitMissingTasks() method, after each stage has produced its set of tasks, TaskScheduler's submitTasks() method is called to submit them:
    taskScheduler.submitTasks(new TaskSet(
      tasks.toArray, stage.id, stage.latestInfo.attemptId, jobId, properties))
    // Record the submission time of the stage.
    stage.latestInfo.submissionTime = Some(clock.getTimeMillis())
So let's look at TaskScheduler's submitTasks() method, in its implementation class TaskSchedulerImpl:
    override def submitTasks(taskSet: TaskSet) {
      // Get the tasks from the TaskSet.
      val tasks = taskSet.tasks
      logInfo("Adding task set " + taskSet.id + " with " + tasks.length + " tasks")

      this.synchronized {
        // 1. Create a TaskSetManager for this TaskSet.
        val manager = createTaskSetManager(taskSet, maxTaskFailures)

        // 2. Get the stage id from the TaskSet.
        val stage = taskSet.stageId

        // 3. Record the mapping stageId -> (stageAttemptId -> TaskSetManager).
        val stageTaskSets =
          taskSetsByStageIdAndAttempt.getOrElseUpdate(stage, new HashMap[Int, TaskSetManager])
        stageTaskSets(taskSet.stageAttemptId) = manager

        // 4. Make sure there is no other active (non-zombie) TaskSet for the same stage.
        val conflictingTaskSet = stageTaskSets.exists { case (_, ts) =>
          ts.taskSet != taskSet && !ts.isZombie
        }
        if (conflictingTaskSet) {
          throw new IllegalStateException(s"more than one active taskSet for stage $stage:" +
            s" ${stageTaskSets.toSeq.map{_._2.taskSet.id}.mkString(",")}")
        }

        // 5. Add the TaskSetManager to the schedulableBuilder, i.e. into the scheduling pool.
        schedulableBuilder.addTaskSetManager(manager, manager.taskSet.properties)

        // Warn periodically if the application has not been offered any resources yet.
        if (!isLocal && !hasReceivedTask) {
          starvationTimer.scheduleAtFixedRate(new TimerTask() {
            override def run() {
              if (!hasLaunchedTask) {
                logWarning("Initial job has not accepted any resources; " +
                  "check your cluster UI to ensure that workers are registered " +
                  "and have sufficient resources")
              } else {
                this.cancel()
              }
            }
          }, STARVATION_TIMEOUT_MS, STARVATION_TIMEOUT_MS)
        }

        hasReceivedTask = true
      }

      // 6. Ask the backend to offer resources for the newly added tasks.
      backend.reviveOffers()
    }
The method first obtains the tasks from the TaskSet passed in;
then, inside the synchronized block, it mainly does the following:
1. creates a TaskSetManager (what a TaskSetManager is for, we analyze below);
2. obtains the stageId from the taskSet;
3. updates the taskSetsByStageIdAndAttempt structure, storing the mapping stageId -> [taskSet.stageAttemptId -> TaskSetManager], where the TaskSetManager is the one created above. How does taskSet.stageAttemptId get its value? To keep the narrative in order, let's leave that as a small open question for now;
4. checks whether a conflicting taskSet exists and, if so, throws an IllegalStateException;
5. adds the TaskSetManager to the schedulableBuilder;
6. finally calls SchedulerBackend's reviveOffers().
Now let's work through these steps one by one. First, what is a TaskSetManager? As the name suggests, it is the manager of a TaskSet: it schedules the tasks of a single TaskSet within TaskSchedulerImpl. The class tracks each task, retries tasks if they fail (up to a limited number of times), and handles locality-aware scheduling for the TaskSet via delay scheduling. Its main interface is the resourceOffer() method, which asks the TaskSet whether it wants to run a task on a given node, and the TaskSet is also notified when the state of one of its tasks changes (for example, when it finishes). The sketch below illustrates the idea behind resourceOffer() in a much simplified form.
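The following is a minimal, self-contained sketch of the delay-scheduling idea just described, not Spark's actual TaskSetManager: all names here (ToyTask, ToyTaskSetManager, localityWaitMs) are hypothetical, and the real resourceOffer() additionally tracks locality levels, attempts and failures.

    import scala.collection.mutable.ArrayBuffer

    object DelaySchedulingSketch {

      case class ToyTask(id: Int, preferredHosts: Set[String])

      class ToyTaskSetManager(tasks: Seq[ToyTask], localityWaitMs: Long = 3000L) {
        private val pending = ArrayBuffer(tasks: _*)
        private var lastLocalLaunch = System.currentTimeMillis()

        /** Answer an offer of one free core on `host`: return the task to launch there, if any. */
        def resourceOffer(host: String): Option[ToyTask] = {
          // 1. Prefer a pending task whose preferred locations include the offered host.
          val localIdx = pending.indexWhere(_.preferredHosts.contains(host))
          if (localIdx >= 0) {
            lastLocalLaunch = System.currentTimeMillis()
            return Some(pending.remove(localIdx))
          }
          // 2. Otherwise accept a non-local task only after waiting long enough (delay scheduling).
          val waited = System.currentTimeMillis() - lastLocalLaunch
          if (waited >= localityWaitMs && pending.nonEmpty) Some(pending.remove(0)) else None
        }
      }

      def main(args: Array[String]): Unit = {
        val mgr = new ToyTaskSetManager(Seq(ToyTask(0, Set("host-a")), ToyTask(1, Set("host-b"))))
        println(mgr.resourceOffer("host-b"))  // Some(ToyTask(1,...)): locality match, launch now
        println(mgr.resourceOffer("host-c"))  // None: no local task and the locality wait has not expired
      }
    }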
Now back to the taskSet's stageAttemptId. When DAGScheduler's submitMissingTasks() calls TaskScheduler's submitTasks() and constructs the TaskSet, the value assigned to the TaskSet's stageAttemptId field is stage.latestInfo.attemptId. A Stage's latestInfo is defined as:
    /** Returns the StageInfo for the most recent attempt of this stage. */
    def latestInfo: StageInfo = _latestInfo
So its value comes from _latestInfo. And what about _latestInfo? Its code is:
    private var _latestInfo: StageInfo = StageInfo.fromStage(this, nextAttemptId)
We won't dwell on the initialization details; it is enough to know that StageInfo has a member attemptId, and this member is the taskSet's stageAttemptId we were talking about. The value of attemptId in StageInfo is in turn determined by the Stage's nextAttemptId, defined as:
    /** The ID to use for the next new attempt of this stage. */
    private var nextAttemptId: Int = 0
And how does its value change? The answer is in Stage's makeNewStageAttempt() method:
    /** Creates a new attempt for this stage by creating a new StageInfo with a new attempt ID. */
    def makeNewStageAttempt(
        numPartitionsToCompute: Int,
        taskLocalityPreferences: Seq[Seq[TaskLocation]] = Seq.empty): Unit = {
      // Snapshot a new StageInfo that carries the current nextAttemptId.
      _latestInfo = StageInfo.fromStage(
        this, nextAttemptId, Some(numPartitionsToCompute), taskLocalityPreferences)
      // Bump the counter so the next attempt gets a fresh ID.
      nextAttemptId += 1
    }
And when is makeNewStageAttempt() called? Remember step 6 of submitMissingTasks(), the method that actually submits the stage, at the end of the previous post 《Spark源码分析之Stage提交》: it marks a new stage attempt and posts a SparkListenerStageSubmitted event:
    // Mark a new attempt of the stage before posting the stage-submitted event.
    stage.makeNewStageAttempt(partitionsToCompute.size, taskIdToLocations.values.toSeq)
    listenerBus.post(SparkListenerStageSubmitted(stage.latestInfo, properties))
In other words, every time a stage is submitted this method is called: it creates a new _latestInfo object and increments nextAttemptId, so the TaskSet of each submission carries its own attempt ID. The tiny sketch below shows the pattern in isolation.
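A tiny, self-contained sketch of this attempt-ID pattern, using hypothetical names (ToyStage, ToyStageInfo) rather than Spark's actual Stage/StageInfo:

    object StageAttemptSketch {
      case class ToyStageInfo(stageId: Int, attemptId: Int)

      class ToyStage(val id: Int) {
        private var nextAttemptId: Int = 0
        private var _latestInfo: ToyStageInfo = ToyStageInfo(id, nextAttemptId)

        def latestInfo: ToyStageInfo = _latestInfo

        def makeNewStageAttempt(): Unit = {
          _latestInfo = ToyStageInfo(id, nextAttemptId) // snapshot the current attempt ID
          nextAttemptId += 1                            // the next (re)submission gets a fresh ID
        }
      }

      def main(args: Array[String]): Unit = {
        val stage = new ToyStage(3)
        stage.makeNewStageAttempt(); println(stage.latestInfo) // ToyStageInfo(3,0): first submission
        stage.makeNewStageAttempt(); println(stage.latestInfo) // ToyStageInfo(3,1): a resubmission
      }
    }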
Good, back to the main thread. Step 5 adds the TaskSetManager to schedulableBuilder, which raises two questions:
1. what is schedulableBuilder?
2. why add the TaskSetManager to it?
Let's first look at how schedulableBuilder is defined and initialized. Its definition is:
    var schedulableBuilder: SchedulableBuilder = null
It is initialized in TaskSchedulerImpl's initialize() method:
    def initialize(backend: SchedulerBackend) {
      // Keep a reference to the SchedulerBackend.
      this.backend = backend

      // Create the root pool of the scheduling tree.
      rootPool = new Pool("", schedulingMode, 0, 0)

      // Choose the SchedulableBuilder according to the scheduling mode: FIFO or FAIR.
      schedulableBuilder = {
        schedulingMode match {
          case SchedulingMode.FIFO =>
            new FIFOSchedulableBuilder(rootPool)
          case SchedulingMode.FAIR =>
            new FairSchedulableBuilder(rootPool, conf)
        }
      }
      // Build the pools of the scheduling tree.
      schedulableBuilder.buildPools()
    }
This method also initializes TaskSchedulerImpl's backend field, of type SchedulerBackend; that object is used in the final step, and we will come back to it shortly.
Back to schedulableBuilder: from the code we can see that it is the scheduling builder, and it comes in two kinds, FIFO and FAIR. The meaning of and difference between the two we will analyze another time. Here is the source of SchedulableBuilder:
    /**
     * An interface to build Schedulable tree
     * buildPools: build the tree nodes(pools)
     * addTaskSetManager: build the leaf nodes(TaskSetManagers)
     */
    private[spark] trait SchedulableBuilder {
      def rootPool: Pool

      def buildPools()

      def addTaskSetManager(manager: Schedulable, properties: Properties)
    }
From the English comment above we can see that SchedulableBuilder is an interface for building a scheduling tree. It exposes a member rootPool of type Pool and two main methods:
1. buildPools(): builds the tree nodes (the scheduling pools);
2. addTaskSetManager(): builds the leaf nodes (the TaskSetManagers).
Let's take FIFOSchedulableBuilder as a simple example. Its buildPools() is an empty method, so there is nothing to say about it; what we mainly care about is its addTaskSetManager() method:
    override def addTaskSetManager(manager: Schedulable, properties: Properties) {
      rootPool.addSchedulable(manager)
    }
As you can see, it simply calls Pool's addSchedulable() method. Following it further:
    override def addSchedulable(schedulable: Schedulable) {
      require(schedulable != null)
      // Add the schedulable (here a TaskSetManager) to the pool's queue.
      schedulableQueue.add(schedulable)
      // Index it by name.
      schedulableNameToSchedulable.put(schedulable.name, schedulable)
      // Set this pool as its parent in the scheduling tree.
      schedulable.parent = this
    }
And if you look at the source of TaskSetManager, it implements the Schedulable trait (a trait being roughly Java's interface), which means a TaskSetManager is itself something that can be scheduled; that answers question 2 above. The sketch below shows, in simplified form, what putting TaskSetManagers into the root pool buys us.
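The following simplified sketch does not use Spark's private Schedulable/Pool API; ToySchedulable, ToyPool and ToyTaskSetManager are hypothetical names. It only illustrates why the pool matters: the pool is the scheduling tree, and the scheduler later walks it in some order (plain FIFO here) to decide which TaskSet is offered resources first.

    import scala.collection.mutable.ArrayBuffer

    object SchedulingTreeSketch {

      trait ToySchedulable {
        def name: String
        def priority: Int   // e.g. job id: lower means submitted earlier
        def stageId: Int
      }

      class ToyTaskSetManager(val name: String, val priority: Int, val stageId: Int)
        extends ToySchedulable

      class ToyPool {
        private val queue = ArrayBuffer.empty[ToySchedulable]
        def addSchedulable(s: ToySchedulable): Unit = queue += s
        // FIFO ordering: earlier jobs first, ties broken by stage id.
        def sortedQueue: Seq[ToySchedulable] = queue.sortBy(s => (s.priority, s.stageId))
      }

      def main(args: Array[String]): Unit = {
        val rootPool = new ToyPool
        rootPool.addSchedulable(new ToyTaskSetManager("TaskSet_1.0", priority = 1, stageId = 1))
        rootPool.addSchedulable(new ToyTaskSetManager("TaskSet_0.0", priority = 0, stageId = 0))
        // Prints the FIFO order: TaskSet_0.0 (earlier job) before TaskSet_1.0.
        println(rootPool.sortedQueue.map(_.name))
      }
    }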
Good, on to the final step, the call to SchedulerBackend's reviveOffers(). More questions, of course, they never stop:
1. what is SchedulerBackend?
2. how is SchedulerBackend initialized?
3. what does SchedulerBackend's reviveOffers() actually do?
Studying with questions in mind is always a good thing; it gives us a concrete, if temporary, goal. Let's take them one by one.
SchedulerBackend is a pluggable component in Spark. Pluggable means it has several implementations, which we will outline later. As the name suggests, it is the backend service, or implementation, behind the scheduler: once physical machines, or workers, are ready, it offers the resources on them and launches tasks onto those machines or workers. The minimal sketch below captures that idea.
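A minimal, hedged sketch of what a backend contributes conceptually. The names (ToySchedulerBackend, ToyLocalBackend, Offer) are hypothetical and Spark's real SchedulerBackend is richer; the point is only the shape: start/stop a resource provider, and on reviveOffers() turn free resources into offers for the scheduler.

    object BackendSketch {
      case class Offer(executorId: String, host: String, freeCores: Int)

      trait ToySchedulerBackend {
        def start(): Unit
        def stop(): Unit
        def reviveOffers(): Unit   // ask the backend to re-offer its free resources
      }

      class ToyLocalBackend(totalCores: Int, onOffers: Seq[Offer] => Unit)
        extends ToySchedulerBackend {
        override def start(): Unit = println(s"backend started with $totalCores cores")
        override def stop(): Unit = println("backend stopped")
        override def reviveOffers(): Unit =
          onOffers(Seq(Offer("driver", "localhost", totalCores)))  // a single local "executor"
      }

      def main(args: Array[String]): Unit = {
        val backend = new ToyLocalBackend(4, offers => println(s"scheduler sees offers: $offers"))
        backend.start()
        backend.reviveOffers()
        backend.stop()
      }
    }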
As already hinted above, when TaskSchedulerImpl's initialize() method initializes schedulableBuilder it also stores the SchedulerBackend, and that backend is passed in from outside. So we have to trace back to where TaskSchedulerImpl is instantiated. During the initialization of the Spark application environment, its context, SparkContext, contains the following code:
    // Create the scheduler and its backend for the given master URL.
    val (sched, ts) = SparkContext.createTaskScheduler(this, master)
    _schedulerBackend = sched
    _taskScheduler = ts
createTaskScheduler() creates a TaskScheduler from the given master URL. Roughly, the code is:
    private def createTaskScheduler(
        sc: SparkContext,
        master: String): (SchedulerBackend, TaskScheduler) = {
      import SparkMasterRegex._

      // When running locally, don't try to re-execute tasks on failure.
      val MAX_LOCAL_TASK_FAILURES = 1

      master match {
        case "local" =>
          val scheduler = new TaskSchedulerImpl(sc, MAX_LOCAL_TASK_FAILURES, isLocal = true)
          val backend = new LocalBackend(sc.getConf, scheduler, 1)
          scheduler.initialize(backend)
          (backend, scheduler)

        case LOCAL_N_REGEX(threads) =>
          def localCpuCount: Int = Runtime.getRuntime.availableProcessors()
          // local[*] uses the number of cores on the machine; local[N] uses exactly N threads.
          val threadCount = if (threads == "*") localCpuCount else threads.toInt
          if (threadCount <= 0) {
            throw new SparkException(s"Asked to run locally with $threadCount threads")
          }
          val scheduler = new TaskSchedulerImpl(sc, MAX_LOCAL_TASK_FAILURES, isLocal = true)
          val backend = new LocalBackend(sc.getConf, scheduler, threadCount)
          scheduler.initialize(backend)
          (backend, scheduler)

        case LOCAL_N_FAILURES_REGEX(threads, maxFailures) =>
          def localCpuCount: Int = Runtime.getRuntime.availableProcessors()
          // local[*, M] or local[N, M]: N (or all) threads with M task failures allowed.
          val threadCount = if (threads == "*") localCpuCount else threads.toInt
          val scheduler = new TaskSchedulerImpl(sc, maxFailures.toInt, isLocal = true)
          val backend = new LocalBackend(sc.getConf, scheduler, threadCount)
          scheduler.initialize(backend)
          (backend, scheduler)

        // Standalone mode: spark://host:port
        case SPARK_REGEX(sparkUrl) =>
          val scheduler = new TaskSchedulerImpl(sc)
          val masterUrls = sparkUrl.split(",").map("spark://" + _)
          val backend = new SparkDeploySchedulerBackend(scheduler, sc, masterUrls)
          scheduler.initialize(backend)
          (backend, scheduler)

        case LOCAL_CLUSTER_REGEX(numSlaves, coresPerSlave, memoryPerSlave) =>
          // Make sure the memory requested per executor fits within memoryPerSlave.
          val memoryPerSlaveInt = memoryPerSlave.toInt
          if (sc.executorMemory > memoryPerSlaveInt) {
            throw new SparkException(
              "Asked to launch cluster with %d MB RAM / worker but requested %d MB/worker".format(
                memoryPerSlaveInt, sc.executorMemory))
          }

          val scheduler = new TaskSchedulerImpl(sc)
          val localCluster = new LocalSparkCluster(
            numSlaves.toInt, coresPerSlave.toInt, memoryPerSlaveInt, sc.conf)
          val masterUrls = localCluster.start()
          val backend = new SparkDeploySchedulerBackend(scheduler, sc, masterUrls)
          scheduler.initialize(backend)
          backend.shutdownCallback = (backend: SparkDeploySchedulerBackend) => {
            localCluster.stop()
          }
          (backend, scheduler)

        case "yarn-standalone" | "yarn-cluster" =>
          if (master == "yarn-standalone") {
            logWarning(
              "\"yarn-standalone\" is deprecated as of Spark 1.0. Use \"yarn-cluster\" instead.")
          }
          val scheduler = try {
            val clazz = Utils.classForName("org.apache.spark.scheduler.cluster.YarnClusterScheduler")
            val cons = clazz.getConstructor(classOf[SparkContext])
            cons.newInstance(sc).asInstanceOf[TaskSchedulerImpl]
          } catch {
            case e: Exception => {
              throw new SparkException("YARN mode not available ?", e)
            }
          }
          val backend = try {
            val clazz =
              Utils.classForName("org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend")
            val cons = clazz.getConstructor(classOf[TaskSchedulerImpl], classOf[SparkContext])
            cons.newInstance(scheduler, sc).asInstanceOf[CoarseGrainedSchedulerBackend]
          } catch {
            case e: Exception => {
              throw new SparkException("YARN mode not available ?", e)
            }
          }
          scheduler.initialize(backend)
          (backend, scheduler)

        case "yarn-client" =>
          val scheduler = try {
            val clazz = Utils.classForName("org.apache.spark.scheduler.cluster.YarnScheduler")
            val cons = clazz.getConstructor(classOf[SparkContext])
            cons.newInstance(sc).asInstanceOf[TaskSchedulerImpl]
          } catch {
            case e: Exception => {
              throw new SparkException("YARN mode not available ?", e)
            }
          }
          val backend = try {
            val clazz =
              Utils.classForName("org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend")
            val cons = clazz.getConstructor(classOf[TaskSchedulerImpl], classOf[SparkContext])
            cons.newInstance(scheduler, sc).asInstanceOf[CoarseGrainedSchedulerBackend]
          } catch {
            case e: Exception => {
              throw new SparkException("YARN mode not available ?", e)
            }
          }
          scheduler.initialize(backend)
          (backend, scheduler)

        case MESOS_REGEX(mesosUrl) =>
          MesosNativeLibrary.load()
          val scheduler = new TaskSchedulerImpl(sc)
          val coarseGrained = sc.conf.getBoolean("spark.mesos.coarse", defaultValue = true)
          val backend = if (coarseGrained) {
            new CoarseMesosSchedulerBackend(scheduler, sc, mesosUrl, sc.env.securityManager)
          } else {
            new MesosSchedulerBackend(scheduler, sc, mesosUrl)
          }
          scheduler.initialize(backend)
          (backend, scheduler)

        case SIMR_REGEX(simrUrl) =>
          val scheduler = new TaskSchedulerImpl(sc)
          val backend = new SimrSchedulerBackend(scheduler, sc, simrUrl)
          scheduler.initialize(backend)
          (backend, scheduler)

        case zkUrl if zkUrl.startsWith("zk://") =>
          logWarning("Master URL for a multi-master Mesos cluster managed by ZooKeeper should be " +
            "in the form mesos://zk://host:port. Current Master URL will stop working in Spark 2.0.")
          createTaskScheduler(sc, "mesos://" + zkUrl)

        case _ =>
          throw new SparkException("Could not parse Master URL: '" + master + "'")
      }
    }
As you can see, which TaskScheduler and SchedulerBackend are created depends on the Spark deployment mode. Let's use the common Standalone mode as an example; the key code is:
    case SPARK_REGEX(sparkUrl) =>
      // Create a TaskSchedulerImpl.
      val scheduler = new TaskSchedulerImpl(sc)
      // Build the master URLs from the spark:// address (a comma-separated list is allowed).
      val masterUrls = sparkUrl.split(",").map("spark://" + _)
      // Create a SparkDeploySchedulerBackend.
      val backend = new SparkDeploySchedulerBackend(scheduler, sc, masterUrls)
      // Initialize the scheduler with the backend: this fixes schedulableBuilder and backend.
      scheduler.initialize(backend)
      (backend, scheduler)
In Standalone mode, the TaskScheduler implementation is TaskSchedulerImpl and the SchedulerBackend implementation is SparkDeploySchedulerBackend; right after the TaskScheduler is created, its initialize() method is called, which fixes both the SchedulableBuilder and the SchedulerBackend. From the application side, this choice is driven entirely by the master URL the job is started with; see the small example below.
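A usage-side illustration, using the standard SparkConf/SparkContext API: the master address below (spark://master-host:7077) is made up, and the program of course needs a reachable standalone master to actually run.

    import org.apache.spark.{SparkConf, SparkContext}

    object MasterUrlExample {
      def main(args: Array[String]): Unit = {
        // "spark://..." matches SPARK_REGEX above, so the Standalone pair
        // (TaskSchedulerImpl, SparkDeploySchedulerBackend) gets created;
        // "local[4]" would match LOCAL_N_REGEX and use LocalBackend instead.
        val conf = new SparkConf()
          .setAppName("scheduler-demo")
          .setMaster("spark://master-host:7077")  // hypothetical standalone master address
        val sc = new SparkContext(conf)
        // ... run jobs ...
        sc.stop()
      }
    }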
At this point the first two questions, what it is and how it is initialized, are answered, so on to the last one: what does SchedulerBackend's reviveOffers() actually do? Again in Standalone mode: SparkDeploySchedulerBackend does not define this method itself, so we have to look to its parent class CoarseGrainedSchedulerBackend, and sure enough, that is where reviveOffers() lives. The code is very simple:
    override def reviveOffers() {
      // Simply send a ReviveOffers message to the driver endpoint.
      driverEndpoint.send(ReviveOffers)
    }
So what on earth is driverEndpoint? It is the RPC reference to the driver-side endpoint, of type RpcEndpointRef. It is assigned in CoarseGrainedSchedulerBackend's start() method:
    driverEndpoint = rpcEnv.setupEndpoint(ENDPOINT_NAME, createDriverEndpoint(properties))
RpcEnv itself is just an abstract class with two implementations: the Akka-based AkkaRpcEnv and the Netty-based NettyRpcEnv, with Netty as the default. This can be seen from the following RpcEnv code:
    private def getRpcEnvFactory(conf: SparkConf): RpcEnvFactory = {
      // The two available factories: Akka-based and Netty-based.
      val rpcEnvNames = Map(
        "akka" -> "org.apache.spark.rpc.akka.AkkaRpcEnvFactory",
        "netty" -> "org.apache.spark.rpc.netty.NettyRpcEnvFactory")
      // "spark.rpc" selects the implementation; the default is "netty".
      val rpcEnvName = conf.get("spark.rpc", "netty")
      val rpcEnvFactoryClassName = rpcEnvNames.getOrElse(rpcEnvName.toLowerCase, rpcEnvName)
      Utils.classForName(rpcEnvFactoryClassName).newInstance().asInstanceOf[RpcEnvFactory]
    }
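Usage note: as the code above shows, the RPC implementation in this Spark version is selected by the "spark.rpc" configuration key, with "netty" as the default. Setting it explicitly would look like the snippet below (standard SparkConf API; the wrapping object name is ours):

    import org.apache.spark.SparkConf

    object RpcConfigExample {
      // Select the RPC backend explicitly; "netty" is already the default here,
      // while "akka" would pick AkkaRpcEnvFactory instead.
      val conf = new SparkConf().set("spark.rpc", "netty")
    }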
Now let's look at the Netty implementation in outline, in NettyRpcEnv's setupEndpoint() method:
    override def setupEndpoint(name: String, endpoint: RpcEndpoint): RpcEndpointRef = {
      // Register the endpoint with the dispatcher and get back a reference to it.
      dispatcher.registerRpcEndpoint(name, endpoint)
    }
It registers the endpoint through the dispatcher. The name here is "CoarseGrainedScheduler", and the RpcEndpoint is the DriverEndpoint object created by CoarseGrainedSchedulerBackend's createDriverEndpoint() method:
    protected def createDriverEndpoint(properties: Seq[(String, String)]): DriverEndpoint = {
      // Create the driver-side endpoint that will handle scheduling messages.
      new DriverEndpoint(rpcEnv, properties)
    }
So what kind of class is DriverEndpoint? It turns out to extend ThreadSafeRpcEndpoint, which in turn extends RpcEndpoint. Here, all we need to know is that an RpcEndpoint is an endpoint for message passing between processes, defining the functions that are triggered by messages. Its life cycle is constructor -> onStart -> receive* -> onStop: onStart is called once at startup, receive is called for each arriving message, and onStop is called at shutdown.
Why use RpcEndpoint at all? Simple: task scheduling and execution happen on a distributed cluster, so inter-process communication is needed. The self-contained sketch below mirrors the life cycle just described.
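This sketch uses toy types (ToyEndpoint, ToyDriverEndpoint, a small Message hierarchy), not Spark's private RpcEndpoint API; it only mirrors the constructor -> onStart -> receive* -> onStop flow, with receive dispatching on message type the way DriverEndpoint dispatches on ReviveOffers.

    object EndpointLifecycleSketch {
      sealed trait Message
      case object ReviveOffers extends Message
      case class StatusUpdate(taskId: Long, state: String) extends Message

      trait ToyEndpoint {
        def onStart(): Unit = {}
        def receive: PartialFunction[Message, Unit]
        def onStop(): Unit = {}
      }

      class ToyDriverEndpoint extends ToyEndpoint {
        override def onStart(): Unit = println("driver endpoint started")
        override def receive: PartialFunction[Message, Unit] = {
          case ReviveOffers            => println("make offers and launch tasks")
          case StatusUpdate(id, state) => println(s"task $id is now $state")
        }
        override def onStop(): Unit = println("driver endpoint stopped")
      }

      def main(args: Array[String]): Unit = {
        val endpoint = new ToyDriverEndpoint
        endpoint.onStart()                        // life cycle: onStart
        endpoint.receive(ReviveOffers)            // then messages are handled one by one
        endpoint.receive(StatusUpdate(7L, "FINISHED"))
        endpoint.onStop()                         // life cycle: onStop
      }
    }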
Continuing: how, then, does driverEndpoint get its value? We look at Dispatcher's registerRpcEndpoint() method, since it is ultimately the one that returns the RpcEndpointRef used to assign driverEndpoint:
    def registerRpcEndpoint(name: String, endpoint: RpcEndpoint): NettyRpcEndpointRef = {
      // Build the endpoint's address from the NettyRpcEnv address and its name.
      val addr = RpcEndpointAddress(nettyEnv.address, name)
      // Create the reference that callers will use to talk to this endpoint.
      val endpointRef = new NettyRpcEndpointRef(nettyEnv.conf, addr, nettyEnv)

      synchronized {
        if (stopped) {
          throw new IllegalStateException("RpcEnv has been stopped")
        }
        // Register the endpoint; a given name may only be registered once.
        if (endpoints.putIfAbsent(name, new EndpointData(name, endpoint, endpointRef)) != null) {
          throw new IllegalArgumentException(s"There is already an RpcEndpoint called $name")
        }
        // Record the endpoint -> ref mapping and queue it so the dispatcher starts serving it.
        val data = endpoints.get(name)
        endpointRefs.put(data.endpoint, data.ref)
        receivers.offer(data)
      }
      endpointRef
    }
The returned RpcEndpointRef is of type NettyRpcEndpointRef. An RpcEndpointRef is a reference to a remote RpcEndpoint: through it you can send messages to that endpoint, synchronously or asynchronously, and it maps to an address. So we have registered an RpcEndpoint on the remote side (another machine or process), namely DriverEndpoint, while the local side (the current machine or process) holds a reference to it, a NettyRpcEndpointRef, through which it can send messages to the remote side. What message gets sent? Going back to reviveOffers() in CoarseGrainedSchedulerBackend, it is the ReviveOffers message. That is only the sending side; the actual handling happens at the remote RpcEndpoint, DriverEndpoint. As noted above, an RpcEndpoint's service flow is onStart() --> receive() --> onStop(), so whenever a message arrives, DriverEndpoint handles it in receive(). The key code is:
    case ReviveOffers =>
      makeOffers()
Following its makeOffers() method:
    // Make fake resource offers on all executors.
    private def makeOffers() {
      // Filter out executors that are being killed, keeping only alive ones.
      val activeExecutors = executorDataMap.filterKeys(executorIsAlive)
      // Build a WorkerOffer for each alive executor: its id, host and number of free cores.
      val workOffers = activeExecutors.map { case (id, executorData) =>
        new WorkerOffer(id, executorData.executorHost, executorData.freeCores)
      }.toSeq
      // Hand the offers to TaskSchedulerImpl.resourceOffers() and launch the tasks it assigns.
      launchTasks(scheduler.resourceOffers(workOffers))
    }
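To see the shape of the exchange that makeOffers() triggers, here is a simplified, self-contained sketch with hypothetical names; Spark's real TaskSchedulerImpl.resourceOffers() additionally handles locality levels, failure bookkeeping and shuffling of offers, none of which appears here.

    object OfferRoundSketch {
      case class WorkerOffer(executorId: String, host: String, cores: Int)
      case class TaskDescription(taskId: Long, executorId: String)

      // Stand-in for resourceOffers(): greedily place one pending task per free core.
      def resourceOffers(offers: Seq[WorkerOffer], pendingTaskIds: Seq[Long]): Seq[TaskDescription] = {
        val slots = offers.flatMap(o => Seq.fill(o.cores)(o.executorId))
        pendingTaskIds.zip(slots).map { case (taskId, execId) => TaskDescription(taskId, execId) }
      }

      def launchTasks(tasks: Seq[TaskDescription]): Unit =
        tasks.foreach(t => println(s"launch task ${t.taskId} on executor ${t.executorId}"))

      def main(args: Array[String]): Unit = {
        val offers = Seq(WorkerOffer("exec-1", "host-a", 2), WorkerOffer("exec-2", "host-b", 1))
        launchTasks(resourceOffers(offers, pendingTaskIds = Seq(0L, 1L, 2L, 3L)))
        // Only 3 cores are free, so task 3 stays pending until the next reviveOffers round.
      }
    }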
All right, let's leave a loose end here and pick up the analysis tomorrow.
Original blog post: http://blog.csdn.net/lipeng_bigdata/article/details/50687992
Source: http://www.cnblogs.com/jirimutu01/p/5274458.html