This article covers three ways to deploy a Spark cluster: no HA, Spark Standalone HA (filesystem-based recovery), and ZooKeeper-based HA.

First, configure the worker list and environment on the master node:

[yyl@node1 program]$ cd spark-1.5.0-bin-2.5.2/conf/
[yyl@node1 conf]$ cp slaves.template slaves
[yyl@node1 conf]$ vim slaves
node2.zhch
node3.zhch
[yyl@node1 conf]$ cp spark-env.sh.template spark-env.sh
[yyl@node1 conf]$ vim spark-env.sh
export JAVA_HOME=/usr/lib/java/jdk1.7.0_80
export SPARK_MASTER_IP=node1.zhch
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
For Standalone HA with filesystem-based recovery, add the following to spark-env.sh:

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM -Dspark.deploy.recoveryDirectory=/home/yyl/program/spark-1.5.0-bin-2.5.2/recovery"
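In FILESYSTEM mode there is still only one master, but it persists registered applications and workers under spark.deploy.recoveryDirectory, so a restarted master can recover that state. The directory should exist on the master before it starts; a sketch, assuming the install path used above:

[yyl@node1 spark-1.5.0-bin-2.5.2]$ mkdir -p recovery
[yyl@node1 spark-1.5.0-bin-2.5.2]$ sbin/stop-master.sh
[yyl@node1 spark-1.5.0-bin-2.5.2]$ sbin/start-master.sh

After the restart, the master reloads application and worker registrations from the recovery directory instead of losing them.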
For ZooKeeper-based HA, use this setting instead:

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node1.zhch:2181,node2.zhch:2181,node3.zhch:2181 -Dspark.deploy.zookeeper.dir=/spark"
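With ZOOKEEPER recovery mode, several masters can run at once: ZooKeeper elects one as active and the others stand by, taking over if the active master dies. Workers and applications list every master in the URL so they can fail over automatically. A sketch, assuming node2.zhch is configured as a second (standby) master — that host's role here is a hypothetical example, not stated in the original:

[yyl@node2 spark-1.5.0-bin-2.5.2]$ sbin/start-master.sh
[yyl@node1 spark-1.5.0-bin-2.5.2]$ bin/spark-shell --master spark://node1.zhch:7077,node2.zhch:7077

If the active master is killed, ZooKeeper promotes a standby and running applications reconnect to it without resubmission.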
Original article: http://my.oschina.net/zc741520/blog/506108