
Spark on K8S Source Code Analysis

time: 2019-12-19

This article walks through how a Spark job is submitted to Kubernetes, based on the spark-3.0.0-preview source code.
The Spark on K8s code lives under the resource-managers/kubernetes directory of the Spark source tree. The kubernetes directory is made up of the following parts:

```
$ tree kubernetes -L 1
kubernetes
├── core
├── docker
└── integration-tests
```

The code under kubernetes/core/src/main is laid out as follows:

```
$ tree core/src/main/scala -L 7
core/src/main/scala
└── org
    └── apache
        └── spark
            ├── deploy
            │   └── k8s
            │       ├── Config.scala
            │       ├── Constants.scala
            │       ├── features
            │       │   ├── BasicDriverFeatureStep.scala
            │       │   ├── BasicExecutorFeatureStep.scala
            │       │   ├── DriverCommandFeatureStep.scala
            │       │   ├── DriverKubernetesCredentialsFeatureStep.scala
            │       │   ├── DriverServiceFeatureStep.scala
            │       │   ├── EnvSecretsFeatureStep.scala
            │       │   ├── ExecutorKubernetesCredentialsFeatureStep.scala
            │       │   ├── HadoopConfDriverFeatureStep.scala
            │       │   ├── KerberosConfDriverFeatureStep.scala
            │       │   ├── KubernetesFeatureConfigStep.scala
            │       │   ├── LocalDirsFeatureStep.scala
            │       │   ├── MountSecretsFeatureStep.scala
            │       │   ├── MountVolumesFeatureStep.scala
            │       │   └── PodTemplateConfigMapStep.scala
            │       ├── KubernetesConf.scala
            │       ├── KubernetesDriverSpec.scala
            │       ├── KubernetesUtils.scala
            │       ├── KubernetesVolumeSpec.scala
            │       ├── KubernetesVolumeUtils.scala
            │       ├── SparkKubernetesClientFactory.scala
            │       ├── SparkPod.scala
            │       └── submit
            │           ├── K8sSubmitOps.scala
            │           ├── KubernetesClientApplication.scala
            │           ├── KubernetesDriverBuilder.scala
            │           ├── LoggingPodStatusWatcher.scala
            │           └── MainAppResource.scala
            └── scheduler
                └── cluster
                    └── k8s
                        ├── ExecutorPodsAllocator.scala
                        ├── ExecutorPodsLifecycleManager.scala
                        ├── ExecutorPodsPollingSnapshotSource.scala
                        ├── ExecutorPodsSnapshot.scala
                        ├── ExecutorPodsSnapshotsStoreImpl.scala
                        ├── ExecutorPodsSnapshotsStore.scala
                        ├── ExecutorPodStates.scala
                        ├── ExecutorPodsWatchSnapshotSource.scala
                        ├── KubernetesClusterManager.scala
                        ├── KubernetesClusterSchedulerBackend.scala
                        └── KubernetesExecutorBuilder.scala

10 directories, 39 files
```
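A quick note on the features directory listed above: each of those files implements one step that contributes a piece of the driver or executor pod, and the builders fold an initial pod through a sequence of such steps. Paraphrased from the source, with signatures from memory (details may differ slightly):

```scala
// Paraphrased sketch of the shared interface implemented by every class under features/.
private[spark] trait KubernetesFeatureConfigStep {
  // Mutate the pod spec and its container for this feature.
  def configurePod(pod: SparkPod): SparkPod
  // Extra Spark conf entries this step wants the driver to carry.
  def getAdditionalPodSystemProperties(): Map[String, String] = Map.empty
  // Extra Kubernetes resources (ConfigMaps, Secrets, ...) created alongside the pod.
  def getAdditionalKubernetesResources(): Seq[HasMetadata] = Seq.empty
}
```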

The docker directory holds the Dockerfiles used to build the Spark container images:

```
$ tree kubernetes/docker/src/main -L 5
kubernetes/docker/src/main
└── dockerfiles
    └── spark
        ├── bindings
        │   ├── python
        │   │   └── Dockerfile
        │   └── R
        │       └── Dockerfile
        ├── Dockerfile
        └── entrypoint.sh

5 directories, 4 files
```

1. Spark Submit

Let's start by submitting the SparkPi example:

```bash
$ ./bin/spark-submit \
    --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
    --deploy-mode cluster \
    --name spark-pi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.executor.instances=5 \
    --conf spark.kubernetes.container.image=<spark-image> \
    local:///path/to/examples.jar
```

bin/spark-submit

```bash
if [ -z "${SPARK_HOME}" ]; then
  source "$(dirname "$0")"/find-spark-home
fi

# disable randomized hash for string in Python 3.3+
export PYTHONHASHSEED=0
# Source note: forward every spark-submit argument to spark-class, with
# org.apache.spark.deploy.SparkSubmit as the class to run.
exec "${SPARK_HOME}"/bin/spark-class org.apache.spark.deploy.SparkSubmit "$@"
```

bin/spark-class

The heart of this script is the build_command() function near the end. Using the environment plus the arguments forwarded from spark-submit, it assembles a java command of the form

```
java -Xmx128m *** -cp *** com.demo.Main ***.jar
```

and the main class that ends up being launched is org.apache.spark.deploy.SparkSubmit:

```bash
#!/usr/bin/env bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

if [ -z "${SPARK_HOME}" ]; then
  source "$(dirname "$0")"/find-spark-home
fi

. "${SPARK_HOME}"/bin/load-spark-env.sh

# Find the java binary
if [ -n "${JAVA_HOME}" ]; then
  RUNNER="${JAVA_HOME}/bin/java"
else
  if [ "$(command -v java)" ]; then
    RUNNER="java"
  else
    echo "JAVA_HOME is not set" >&2
    exit 1
  fi
fi

# Find Spark jars.
if [ -d "${SPARK_HOME}/jars" ]; then
  SPARK_JARS_DIR="${SPARK_HOME}/jars"
else
  SPARK_JARS_DIR="${SPARK_HOME}/assembly/target/scala-$SPARK_SCALA_VERSION/jars"
fi

if [ ! -d "$SPARK_JARS_DIR" ] && [ -z "$SPARK_TESTING$SPARK_SQL_TESTING" ]; then
  echo "Failed to find Spark jars directory ($SPARK_JARS_DIR)." 1>&2
  echo "You need to build Spark with the target \"package\" before running this program." 1>&2
  exit 1
else
  LAUNCH_CLASSPATH="$SPARK_JARS_DIR/*"
fi

# Add the launcher build dir to the classpath if requested.
if [ -n "$SPARK_PREPEND_CLASSES" ]; then
  LAUNCH_CLASSPATH="${SPARK_HOME}/launcher/target/scala-$SPARK_SCALA_VERSION/classes:$LAUNCH_CLASSPATH"
fi

# For tests
if [[ -n "$SPARK_TESTING" ]]; then
  unset YARN_CONF_DIR
  unset HADOOP_CONF_DIR
fi

# The launcher library will print arguments separated by a NULL character, to allow arguments with
# characters that would be otherwise interpreted by the shell. Read that in a while loop, populating
# an array that will be used to exec the final command.
#
# The exit code of the launcher is appended to the output, so the parent shell removes it from the
# command array and checks the value to see if the launcher succeeded.
build_command() {
  "$RUNNER" -Xmx128m $SPARK_LAUNCHER_OPTS -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@"
  printf "%d\0" $?
}

# Turn off posix mode since it does not allow process substitution
set +o posix
CMD=()
DELIM=$'\n'
CMD_START_FLAG="false"
while IFS= read -d "$DELIM" -r ARG; do
  if [ "$CMD_START_FLAG" == "true" ]; then
    CMD+=("$ARG")
  else
    if [ "$ARG" == $'\0' ]; then
      # After NULL character is consumed, change the delimiter and consume command string.
      DELIM=''
      CMD_START_FLAG="true"
    elif [ "$ARG" != "" ]; then
      echo "$ARG"
    fi
  fi
done < <(build_command "$@")

COUNT=${#CMD[@]}
LAST=$((COUNT - 1))
LAUNCHER_EXIT_CODE=${CMD[$LAST]}

# Certain JVM failures result in errors being printed to stdout (instead of stderr), which causes
# the code that parses the output of the launcher to get confused. In those cases, check if the
# exit code is an integer, and if it's not, handle it as a special error case.
if ! [[ $LAUNCHER_EXIT_CODE =~ ^[0-9]+$ ]]; then
  echo "${CMD[@]}" | head -n-1 1>&2
  exit 1
fi

if [ $LAUNCHER_EXIT_CODE != 0 ]; then
  exit $LAUNCHER_EXIT_CODE
fi

CMD=("${CMD[@]:0:$LAST}")
exec "${CMD[@]}"
```

SparkSubmit

The java command assembled above launches org.apache.spark.deploy.SparkSubmit and enters the main method of object SparkSubmit, which:

  • constructs a SparkSubmit object
  • calls SparkSubmit.doSubmit
```scala
override def main(args: Array[String]): Unit = {
  val submit = new SparkSubmit() {
    self =>

    override protected def parseArguments(args: Array[String]): SparkSubmitArguments = {
      new SparkSubmitArguments(args) {
        override protected def logInfo(msg: => String): Unit = self.logInfo(msg)

        override protected def logWarning(msg: => String): Unit = self.logWarning(msg)

        override protected def logError(msg: => String): Unit = self.logError(msg)
      }
    }

    override protected def logInfo(msg: => String): Unit = printMessage(msg)

    override protected def logWarning(msg: => String): Unit = printMessage(s"Warning: $msg")

    override protected def logError(msg: => String): Unit = printMessage(s"Error: $msg")

    override def doSubmit(args: Array[String]): Unit = {
      try {
        super.doSubmit(args)
      } catch {
        case e: SparkUserAppException =>
          exitFn(e.exitCode)
      }
    }

  }

  submit.doSubmit(args)
}
```

Now step into SparkSubmit.doSubmit, which does two things:

  • parseArguments(args) builds a SparkSubmitArguments object
  • appArgs.action is matched; for a plain submission (no --kill or --status flag) the action is SUBMIT, so submit() is called
```scala
def doSubmit(args: Array[String]): Unit = {
  // Initialize logging if it hasn't been done yet. Keep track of whether logging needs to
  // be reset before the application starts.
  val uninitLog = initializeLogIfNecessary(true, silent = true)
  // Source note: build a SparkSubmitArguments object from the arguments passed to spark-submit.
  val appArgs = parseArguments(args)
  if (appArgs.verbose) {
    logInfo(appArgs.toString)
  }
  appArgs.action match {
    case SparkSubmitAction.SUBMIT => submit(appArgs, uninitLog)
    case SparkSubmitAction.KILL => kill(appArgs)
    case SparkSubmitAction.REQUEST_STATUS => requestStatus(appArgs)
    case SparkSubmitAction.PRINT_VERSION => printVersion()
  }
}
```

Next we enter submit():

  • It first checks whether this is a standalone cluster submission; since our cluster runs in K8s native mode, the final else branch runs and we go straight into doRunMain().
  • Because our spark-submit did not pass --proxy-user, doRunMain() takes its own else branch and simply calls runMain().
```scala
/**
 * Submit the application using the provided parameters, ensuring to first wrap
 * in a doAs when --proxy-user is specified.
 */
@tailrec
private def submit(args: SparkSubmitArguments, uninitLog: Boolean): Unit = {

  def doRunMain(): Unit = {
    if (args.proxyUser != null) {
      val proxyUser = UserGroupInformation.createProxyUser(args.proxyUser,
        UserGroupInformation.getCurrentUser())
      try {
        proxyUser.doAs(new PrivilegedExceptionAction[Unit]() {
          override def run(): Unit = {
            runMain(args, uninitLog)
          }
        })
      } catch {
        case e: Exception =>
          // Hadoop's AuthorizationException suppresses the exception's stack trace, which
          // makes the message printed to the output by the JVM not very helpful. Instead,
          // detect exceptions with empty stack traces here, and treat them differently.
          if (e.getStackTrace().length == 0) {
            error(s"ERROR: ${e.getClass().getName()}: ${e.getMessage()}")
          } else {
            throw e
          }
      }
    } else {
      runMain(args, uninitLog)
    }
  }

  // In standalone cluster mode, there are two submission gateways:
  //   (1) The traditional RPC gateway using o.a.s.deploy.Client as a wrapper
  //   (2) The new REST-based gateway introduced in Spark 1.3
  // The latter is the default behavior as of Spark 1.3, but Spark submit will fail over
  // to use the legacy gateway if the master endpoint turns out to be not a REST server.
  if (args.isStandaloneCluster && args.useRest) {
    try {
      logInfo("Running Spark using the REST application submission protocol.")
      doRunMain()
    } catch {
      // Fail over to use the legacy submission gateway
      case e: SubmitRestConnectionException =>
        logWarning(s"Master endpoint ${args.master} was not a REST server. " +
          "Falling back to legacy submission gateway instead.")
        args.useRest = false
        submit(args, false)
    }
  // In all other modes, just run the main class as prepared
  } else {
    doRunMain()
  }
}
```

runMain() has two key steps:

  1. Initialize childArgs, childClasspath, sparkConf and childMainClass.
  2. Instantiate childMainClass.

Everything prefixed "child" here refers to the client main class that submits the job for the chosen resource manager and deploy mode.

```scala
/**
 * Run the main method of the child class using the submit arguments.
 *
 * This runs in two steps. First, we prepare the launch environment by setting up
 * the appropriate classpath, system properties, and application arguments for
 * running the child main class based on the cluster manager and the deploy mode.
 * Second, we use this launch environment to invoke the main method of the child
 * main class.
 *
 * Note that this main class will not be the one provided by the user if we're
 * running cluster deploy mode or python applications.
 */
private def runMain(args: SparkSubmitArguments, uninitLog: Boolean): Unit = {
  val (childArgs, childClasspath, sparkConf, childMainClass) = prepareSubmitEnvironment(args)
  // Let the main class re-initialize the logging system once it starts.
  if (uninitLog) {
    Logging.uninitialize()
  }

  if (args.verbose) {
    logInfo(s"Main class:\n$childMainClass")
    logInfo(s"Arguments:\n${childArgs.mkString("\n")}")
    // sysProps may contain sensitive information, so redact before printing
    logInfo(s"Spark config:\n${Utils.redact(sparkConf.getAll.toMap).mkString("\n")}")
    logInfo(s"Classpath elements:\n${childClasspath.mkString("\n")}")
    logInfo("\n")
  }
  val loader = getSubmitClassLoader(sparkConf)
  for (jar <- childClasspath) {
    addJarToClasspath(jar, loader)
  }

  var mainClass: Class[_] = null

  try {
    mainClass = Utils.classForName(childMainClass)
  } catch {
    case e: ClassNotFoundException =>
      logError(s"Failed to load class $childMainClass.")
      if (childMainClass.contains("thriftserver")) {
        logInfo(s"Failed to load main class $childMainClass.")
        logInfo("You need to build Spark with -Phive and -Phive-thriftserver.")
      }
      throw new SparkUserAppException(CLASS_NOT_FOUND_EXIT_STATUS)
    case e: NoClassDefFoundError =>
      logError(s"Failed to load $childMainClass: ${e.getMessage()}")
      if (e.getMessage.contains("org/apache/hadoop/hive")) {
        logInfo(s"Failed to load hive class.")
        logInfo("You need to build Spark with -Phive and -Phive-thriftserver.")
      }
      throw new SparkUserAppException(CLASS_NOT_FOUND_EXIT_STATUS)
  }

  val app: SparkApplication = if (classOf[SparkApplication].isAssignableFrom(mainClass)) {
    mainClass.getConstructor().newInstance().asInstanceOf[SparkApplication]
  } else {
    new JavaMainApplication(mainClass)
  }
```

Now let's walk through the two steps of runMain() in detail.

Step 1: prepare the application's launch environment

```scala
val (childArgs, childClasspath, sparkConf, childMainClass) = prepareSubmitEnvironment(args)
```

prepareSubmitEnvironment() has the following key parts:

1. Determine the cluster manager and deploy mode

```scala
// Set the cluster manager
val clusterManager: Int = args.master match {
  case "yarn" => YARN
  case m if m.startsWith("spark") => STANDALONE
  case m if m.startsWith("mesos") => MESOS
  case m if m.startsWith("k8s") => KUBERNETES
  case m if m.startsWith("local") => LOCAL
  case _ =>
    error("Master must either be yarn or start with spark, mesos, k8s, or local")
    -1
}

// Set the deploy mode; default is client mode
var deployMode: Int = args.deployMode match {
  case "client" | null => CLIENT
  case "cluster" => CLUSTER
  case _ =>
    error("Deploy mode must be either client or cluster")
    -1
}
```
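As a quick sanity check of which branch our submission takes: the master URL from the example at the top starts with "k8s" and the deploy mode is "cluster". A tiny illustration (strings stand in for Spark's internal Int bit-flag constants, just to show which case fires):

```scala
// Illustration only: classify the example master URL the same way the match above does.
val master = "k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>"
val clusterManager = master match {
  case "yarn"                     => "YARN"
  case m if m.startsWith("spark") => "STANDALONE"
  case m if m.startsWith("mesos") => "MESOS"
  case m if m.startsWith("k8s")   => "KUBERNETES"
  case m if m.startsWith("local") => "LOCAL"
  case _                          => "UNKNOWN"
}
// clusterManager == "KUBERNETES"; --deploy-mode cluster then selects CLUSTER.
```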

Having confirmed that the spark-submit arguments select Kubernetes in cluster mode, we move on to:

2. Assemble the application's classpath, files, sparkConf and childMainClass.

The K8s-related pieces of prepareSubmitEnvironment() are the following code blocks.

Normalize the K8s master URL

```scala
if (clusterManager == KUBERNETES) {
  args.master = Utils.checkAndGetK8sMasterUrl(args.master)
  // Make sure KUBERNETES is included in our build if we're trying to use it
  if (!Utils.classIsLoadable(KUBERNETES_CLUSTER_SUBMIT_CLASS) && !Utils.isTesting) {
    error(
      "Could not load KUBERNETES classes. " +
      "This copy of Spark may not have been compiled with KUBERNETES support.")
  }
}
```
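Utils.checkAndGetK8sMasterUrl normalizes the --master value: it requires the k8s:// prefix and, when the API server address carries no scheme, defaults it to https. A simplified sketch of that behaviour (the helper name is mine, not Spark's, and the real method performs more validation):

```scala
// Simplified sketch, not the Spark source: normalize a k8s:// master URL.
def checkAndGetK8sMasterUrlSketch(rawMasterURL: String): String = {
  require(rawMasterURL.startsWith("k8s://"),
    s"Kubernetes master URL must start with k8s://, got: $rawMasterURL")
  val remainder = rawMasterURL.substring("k8s://".length)
  // No scheme given for the API server? Assume https.
  val resolved = if (remainder.contains("://")) remainder else s"https://$remainder"
  s"k8s://$resolved"
}

// checkAndGetK8sMasterUrlSketch("k8s://example.com:6443") == "k8s://https://example.com:6443"
```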

Build the flag variables used to tell the cluster modes apart

```scala
val isYarnCluster = clusterManager == YARN && deployMode == CLUSTER
val isMesosCluster = clusterManager == MESOS && deployMode == CLUSTER
val isStandAloneCluster = clusterManager == STANDALONE && deployMode == CLUSTER
val isKubernetesCluster = clusterManager == KUBERNETES && deployMode == CLUSTER
val isKubernetesClient = clusterManager == KUBERNETES && deployMode == CLIENT
val isKubernetesClusterModeDriver = isKubernetesClient &&
  sparkConf.getBoolean("spark.kubernetes.submitInDriver", false)
```

Since we are submitting in Kubernetes cluster mode, isKubernetesCluster and isKubernetesClusterModeDriver steer us into the K8s-specific flow for resolving jar dependencies and downloading remote files; Yarn and Mesos go through different paths. (For our submission isKubernetesCluster is true, while isKubernetesClusterModeDriver only becomes true later, when the driver pod itself re-runs spark-submit in client mode with spark.kubernetes.submitInDriver=true.)

Add dependency jars to the classpath

```scala
if (!isMesosCluster && !isStandAloneCluster) {
  // Resolve maven dependencies if there are any and add classpath to jars. Add them to py-files
  // too for packages that include Python code
  val resolvedMavenCoordinates = DependencyUtils.resolveMavenDependencies(
    args.packagesExclusions, args.packages, args.repositories, args.ivyRepoPath,
    args.ivySettingsPath)

  if (!StringUtils.isBlank(resolvedMavenCoordinates)) {
    // In K8s client mode, when in the driver, add resolved jars early as we might need
    // them at the submit time for artifact downloading.
    // For example we might use the dependencies for downloading
    // files from a Hadoop Compatible fs eg. S3. In this case the user might pass:
    // --packages com.amazonaws:aws-java-sdk:1.7.4:org.apache.hadoop:hadoop-aws:2.7.6
    if (isKubernetesClusterModeDriver) {
      val loader = getSubmitClassLoader(sparkConf)
      for (jar <- resolvedMavenCoordinates.split(",")) {
        addJarToClasspath(jar, loader)
      }
    } else if (isKubernetesCluster) {
      // We need this in K8s cluster mode so that we can upload local deps
      // via the k8s application, like in cluster mode driver
      childClasspath ++= resolvedMavenCoordinates.split(",")
    } else {
      args.jars = mergeFileLists(args.jars, resolvedMavenCoordinates)
      if (args.isPython || isInternal(args.primaryResource)) {
        args.pyFiles = mergeFileLists(args.pyFiles, resolvedMavenCoordinates)
      }
    }
  }
```

Download remote files that the application depends on

```scala
// In client mode, download remote files.
var localPrimaryResource: String = null
var localJars: String = null
var localPyFiles: String = null
if (deployMode == CLIENT) {
  localPrimaryResource = Option(args.primaryResource).map {
    downloadFile(_, targetDir, sparkConf, hadoopConf, secMgr)
  }.orNull
  localJars = Option(args.jars).map {
    downloadFileList(_, targetDir, sparkConf, hadoopConf, secMgr)
  }.orNull
  localPyFiles = Option(args.pyFiles).map {
    downloadFileList(_, targetDir, sparkConf, hadoopConf, secMgr)
  }.orNull

  if (isKubernetesClusterModeDriver) {
    // Replace with the downloaded local jar path to avoid propagating hadoop compatible uris.
    // Executors will get the jars from the Spark file server.
    // Explicitly download the related files here
    args.jars = renameResourcesToLocalFS(args.jars, localJars)
    val localFiles = Option(args.files).map {
      downloadFileList(_, targetDir, sparkConf, hadoopConf, secMgr)
    }.orNull
    args.files = renameResourcesToLocalFS(args.files, localFiles)
  }
}
```

Initialize sparkConf

```scala
// A list of rules to map each argument to system properties or command-line options in
// each deploy mode; we iterate through these below
val options = List[OptionAssigner](

  // All cluster managers
  OptionAssigner(args.master, ALL_CLUSTER_MGRS, ALL_DEPLOY_MODES, confKey = "spark.master"),
  OptionAssigner(args.deployMode, ALL_CLUSTER_MGRS, ALL_DEPLOY_MODES,
    confKey = SUBMIT_DEPLOY_MODE.key),
  OptionAssigner(args.name, ALL_CLUSTER_MGRS, ALL_DEPLOY_MODES, confKey = "spark.app.name"),
  OptionAssigner(args.ivyRepoPath, ALL_CLUSTER_MGRS, CLIENT, confKey = "spark.jars.ivy"),
  OptionAssigner(args.driverMemory, ALL_CLUSTER_MGRS, CLIENT,
    confKey = DRIVER_MEMORY.key),
  OptionAssigner(args.driverExtraClassPath, ALL_CLUSTER_MGRS, ALL_DEPLOY_MODES,
    confKey = DRIVER_CLASS_PATH.key),
  OptionAssigner(args.driverExtraJavaOptions, ALL_CLUSTER_MGRS, ALL_DEPLOY_MODES,
    confKey = DRIVER_JAVA_OPTIONS.key),
  OptionAssigner(args.driverExtraLibraryPath, ALL_CLUSTER_MGRS, ALL_DEPLOY_MODES,
    confKey = DRIVER_LIBRARY_PATH.key),
  OptionAssigner(args.principal, ALL_CLUSTER_MGRS, ALL_DEPLOY_MODES,
    confKey = PRINCIPAL.key),
  OptionAssigner(args.keytab, ALL_CLUSTER_MGRS, ALL_DEPLOY_MODES,
    confKey = KEYTAB.key),
  OptionAssigner(args.pyFiles, ALL_CLUSTER_MGRS, CLUSTER, confKey = SUBMIT_PYTHON_FILES.key),

  // Propagate attributes for dependency resolution at the driver side
  OptionAssigner(args.packages, STANDALONE | MESOS | KUBERNETES,
    CLUSTER, confKey = "spark.jars.packages"),
  OptionAssigner(args.repositories, STANDALONE | MESOS | KUBERNETES,
    CLUSTER, confKey = "spark.jars.repositories"),
  OptionAssigner(args.ivyRepoPath, STANDALONE | MESOS | KUBERNETES,
    CLUSTER, confKey = "spark.jars.ivy"),
  OptionAssigner(args.packagesExclusions, STANDALONE | MESOS | KUBERNETES,
    CLUSTER, confKey = "spark.jars.excludes"),

  // Yarn only
  OptionAssigner(args.queue, YARN, ALL_DEPLOY_MODES, confKey = "spark.yarn.queue"),
  OptionAssigner(args.pyFiles, YARN, ALL_DEPLOY_MODES, confKey = "spark.yarn.dist.pyFiles",
    mergeFn = Some(mergeFileLists(_, _))),
  OptionAssigner(args.jars, YARN, ALL_DEPLOY_MODES, confKey = "spark.yarn.dist.jars",
    mergeFn = Some(mergeFileLists(_, _))),
  OptionAssigner(args.files, YARN, ALL_DEPLOY_MODES, confKey = "spark.yarn.dist.files",
    mergeFn = Some(mergeFileLists(_, _))),
  OptionAssigner(args.archives, YARN, ALL_DEPLOY_MODES, confKey = "spark.yarn.dist.archives",
    mergeFn = Some(mergeFileLists(_, _))),

  // Other options
  OptionAssigner(args.numExecutors, YARN | KUBERNETES, ALL_DEPLOY_MODES,
    confKey = EXECUTOR_INSTANCES.key),
  OptionAssigner(args.executorCores, STANDALONE | YARN | KUBERNETES, ALL_DEPLOY_MODES,
    confKey = EXECUTOR_CORES.key),
  OptionAssigner(args.executorMemory, STANDALONE | MESOS | YARN | KUBERNETES, ALL_DEPLOY_MODES,
    confKey = EXECUTOR_MEMORY.key),
  OptionAssigner(args.totalExecutorCores, STANDALONE | MESOS | KUBERNETES, ALL_DEPLOY_MODES,
    confKey = CORES_MAX.key),
  OptionAssigner(args.files, LOCAL | STANDALONE | MESOS | KUBERNETES, ALL_DEPLOY_MODES,
    confKey = FILES.key),
  OptionAssigner(args.jars, LOCAL, CLIENT, confKey = JARS.key),
  OptionAssigner(args.jars, STANDALONE | MESOS | KUBERNETES, ALL_DEPLOY_MODES,
    confKey = JARS.key),
  OptionAssigner(args.driverMemory, STANDALONE | MESOS | YARN | KUBERNETES, CLUSTER,
    confKey = DRIVER_MEMORY.key),
  OptionAssigner(args.driverCores, STANDALONE | MESOS | YARN | KUBERNETES, CLUSTER,
    confKey = DRIVER_CORES.key),
  OptionAssigner(args.supervise.toString, STANDALONE | MESOS, CLUSTER,
    confKey = DRIVER_SUPERVISE.key),
  OptionAssigner(args.ivyRepoPath, STANDALONE, CLUSTER, confKey = "spark.jars.ivy"),

  // An internal option used only for spark-shell to add user jars to repl's classloader,
  // previously it uses "spark.jars" or "spark.yarn.dist.jars" which now may be pointed to
  // remote jars, so adding a new option to only specify local jars for spark-shell internally.
  OptionAssigner(localJars, ALL_CLUSTER_MGRS, CLIENT, confKey = "spark.repl.local.jars")
)
// --------------------
// ... some code omitted ...
// --------------------
// Map all arguments to command-line options or system properties for our chosen mode
for (opt <- options) {
  if (opt.value != null &&
      (deployMode & opt.deployMode) != 0 &&
      (clusterManager & opt.clusterManager) != 0) {
    if (opt.clOption != null) { childArgs += (opt.clOption, opt.value) }
    if (opt.confKey != null) {
      if (opt.mergeFn.isDefined && sparkConf.contains(opt.confKey)) {
        sparkConf.set(opt.confKey, opt.mergeFn.get.apply(sparkConf.get(opt.confKey), opt.value))
      } else {
        sparkConf.set(opt.confKey, opt.value)
      }
    }
  }
}
```
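To make the mapping concrete, here is roughly what the loop leaves in sparkConf for the SparkPi submission at the top of this article (a sketch of the net effect, not Spark source; the key strings come from the OptionAssigner list above):

```scala
import org.apache.spark.SparkConf

// Net effect of the OptionAssigner loop for our example submission (sketch only).
val confAfterLoop = new SparkConf(loadDefaults = false)
  .set("spark.master", "k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>")
  .set("spark.submit.deployMode", "cluster")   // SUBMIT_DEPLOY_MODE.key
  .set("spark.app.name", "spark-pi")
  .set("spark.executor.instances", "5")        // EXECUTOR_INSTANCES.key, from --conf
```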

Initialize childMainClass and pass the Spark application's main class (or the R/Python entry file) through childArgs

In client mode, childMainClass is simply the Spark application's main class:

```scala
// In client mode, launch the application main class directly
// In addition, add the main application jar and any added jars (if any) to the classpath
if (deployMode == CLIENT) {
  childMainClass = args.mainClass
  if (localPrimaryResource != null && isUserJar(localPrimaryResource)) {
    childClasspath += localPrimaryResource
  }
  if (localJars != null) { childClasspath ++= localJars.split(",") }
}
```

In cluster mode, childMainClass is the Kubernetes client main class, and it is that class which goes on to invoke the Spark application's main class.

```scala
if (isKubernetesCluster) {
  // Source note: KUBERNETES_CLUSTER_SUBMIT_CLASS is
  // org.apache.spark.deploy.k8s.submit.KubernetesClientApplication, the class that ends up
  // invoking the Spark application's main class.
  childMainClass = KUBERNETES_CLUSTER_SUBMIT_CLASS
  if (args.primaryResource != SparkLauncher.NO_RESOURCE) {
    if (args.isPython) {
      childArgs ++= Array("--primary-py-file", args.primaryResource)
      childArgs ++= Array("--main-class", "org.apache.spark.deploy.PythonRunner")
    } else if (args.isR) {
      childArgs ++= Array("--primary-r-file", args.primaryResource)
      childArgs ++= Array("--main-class", "org.apache.spark.deploy.RRunner")
    } else {
      childArgs ++= Array("--primary-java-resource", args.primaryResource)
      childArgs ++= Array("--main-class", args.mainClass)
    }
  } else {
    childArgs ++= Array("--main-class", args.mainClass)
  }
  if (args.childArgs != null) {
    args.childArgs.foreach { arg =>
      childArgs += ("--arg", arg)
    }
  }
}
```
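For the SparkPi submission at the top of this article the primary resource is a plain jar, so the branch above produces the following values (illustration only, assuming no extra application arguments were passed):

```scala
// Resulting values for our example submission (illustration only).
val childMainClass = "org.apache.spark.deploy.k8s.submit.KubernetesClientApplication"
val childArgs = Seq(
  "--primary-java-resource", "local:///path/to/examples.jar",
  "--main-class", "org.apache.spark.examples.SparkPi")
```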

At this point childArgs, childClasspath, sparkConf and childMainClass are fully initialized and returned.

Step 2: run the Spark application

Once step 1 is done, runMain() executes the main entry point of childMainClass. In cluster mode this is the client class that submits the job to Yarn, Kubernetes or Mesos, and that client in turn invokes the Spark application's main class that we wrote; here it is the SparkPi example:

```scala
package org.apache.spark.examples

import scala.math.random

import org.apache.spark.sql.SparkSession

/** Computes an approximation to pi */
object SparkPi {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .appName("Spark Pi")
      .getOrCreate()
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = spark.sparkContext.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x*x + y*y <= 1) 1 else 0
    }.reduce(_ + _)
    println(s"Pi is roughly ${4.0 * count / (n - 1)}")
    spark.stop()
  }
}
```

So, after spark-class execs the java command, the call chain is SparkSubmit.main -> KubernetesClientApplication.start -> SparkPi.main.
With that chain in mind, look at the final fragment of runMain(): it ends by calling start(childArgs, sparkConf) on the SparkApplication interface.

```scala
val app: SparkApplication = if (classOf[SparkApplication].isAssignableFrom(mainClass)) {
  // Source note: Yarn and Kubernetes cluster mode take this branch, because the
  // Yarn / Kubernetes client application classes extend SparkApplication.
  mainClass.getConstructor().newInstance().asInstanceOf[SparkApplication]
} else {
  // Source note: client mode takes this branch and directly invokes the main method
  // of the application's main class.
  new JavaMainApplication(mainClass)
}

@tailrec
def findCause(t: Throwable): Throwable = t match {
  case e: UndeclaredThrowableException =>
    if (e.getCause() != null) findCause(e.getCause()) else e
  case e: InvocationTargetException =>
    if (e.getCause() != null) findCause(e.getCause()) else e
  case e: Throwable =>
    e
}

try {
  app.start(childArgs.toArray, sparkConf)
} catch {
  case t: Throwable =>
    throw findCause(t)
}
```
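Before following the cluster-mode path, it is worth seeing what the client-mode branch does: JavaMainApplication just copies the SparkConf into system properties and reflectively invokes the static main method. A paraphrased sketch (from memory, not the verbatim source; the Sketch suffix is mine):

```scala
import java.lang.reflect.Modifier
import org.apache.spark.SparkConf

// Paraphrased sketch of JavaMainApplication: client mode reflectively calls main().
class JavaMainApplicationSketch(klass: Class[_]) {
  def start(args: Array[String], conf: SparkConf): Unit = {
    val mainMethod = klass.getMethod("main", classOf[Array[String]])
    if (!Modifier.isStatic(mainMethod.getModifiers)) {
      throw new IllegalStateException("The main method in the given main class must be static")
    }
    // Propagate the assembled Spark conf to the user application via system properties.
    conf.getAll.foreach { case (k, v) => sys.props(k) = v }
    mainMethod.invoke(null, args)
  }
}
```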

Now let's step into KubernetesClientApplication.start():

```scala
/**
 * Main class and entry point of application submission in KUBERNETES mode.
 */
private[spark] class KubernetesClientApplication extends SparkApplication {

  override def start(args: Array[String], conf: SparkConf): Unit = {
    val parsedArguments = ClientArguments.fromCommandLineArgs(args)
    run(parsedArguments, conf)
  }

  private def run(clientArguments: ClientArguments, sparkConf: SparkConf): Unit = {
    val appName = sparkConf.getOption("spark.app.name").getOrElse("spark")
    // For constructing the app ID, we can't use the Spark application name, as the app ID is going
    // to be added as a label to group resources belonging to the same application. Label values are
    // considerably restrictive, e.g. must be no longer than 63 characters in length. So we generate
    // a unique app ID (captured by spark.app.id) in the format below.
    val kubernetesAppId = s"spark-${UUID.randomUUID().toString.replaceAll("-", "")}"
    val waitForAppCompletion = sparkConf.get(WAIT_FOR_APP_COMPLETION)
    val kubernetesConf = KubernetesConf.createDriverConf(
      sparkConf,
      kubernetesAppId,
      clientArguments.mainAppResource,
      clientArguments.mainClass,
      clientArguments.driverArgs)
    // The master URL has been checked for validity already in SparkSubmit.
    // We just need to get rid of the "k8s://" prefix here.
    val master = KubernetesUtils.parseMasterUrl(sparkConf.get("spark.master"))
    val loggingInterval = if (waitForAppCompletion) Some(sparkConf.get(REPORT_INTERVAL)) else None

    val watcher = new LoggingPodStatusWatcherImpl(kubernetesAppId, loggingInterval)

    Utils.tryWithResource(SparkKubernetesClientFactory.createKubernetesClient(
      master,
      Some(kubernetesConf.namespace),
      KUBERNETES_AUTH_SUBMISSION_CONF_PREFIX,
      SparkKubernetesClientFactory.ClientType.Submission,
      sparkConf,
      None,
      None)) { kubernetesClient =>
      val client = new Client(
        kubernetesConf,
        new KubernetesDriverBuilder(),
        kubernetesClient,
        waitForAppCompletion,
        watcher)
      client.run()
    }
  }
}
```
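As a concrete illustration of the app ID generated above (a hypothetical value; the UUID of course differs on every run):

```scala
import java.util.UUID

// Same expression as in run() above: "spark-" plus a UUID with its dashes stripped.
val kubernetesAppId = s"spark-${UUID.randomUUID().toString.replaceAll("-", "")}"
// e.g. "spark-81b9c8e0a5a54f3f8f4f6d3c9a2b7e1d": short and label-safe (well under the
// 63-character limit for Kubernetes label values), used to label every resource of the app.
```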

Here the Client connects to the Kubernetes API server and starts building the driver pod, submitting the Spark-on-K8s job.
Next we will look at how the driver pod in turn builds the executor pods and distributes the work.
To be continued...

Original article: https://www.cnblogs.com/lanrish/p/12081165.html
