master: 192.168.11.2
s1: 192.168.11.3
s2: 192.168.11.4
Three nodes in total.
Step 1: Configuration (identical on all three machines): http://hadoop.apache.org/docs/r2.7.4/hadoop-project-dist/hadoop-common/ClusterSetup.html
1> etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop-2.7.4/hadoop-${user.name}</value>
  </property>
</configuration>
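A quick sanity check (not in the original post) that each node picked up the setting, run from the Hadoop directory:

# Should print hdfs://master:9000 on every node
bin/hdfs getconf -confKey fs.defaultFS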
2> etc/hadoop/slaves
master
s1
s2
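The start scripts ssh into every host listed in slaves, so passwordless SSH from master to all three names must already work. A minimal check, assuming keys have been distributed:

# Each command should return the remote hostname without prompting for a password
for h in master s1 s2; do ssh "$h" hostname; done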
3> vi /etc/profile.d/env.sh
Add:
export HADOOP_CONF_DIR=/home/spark/bd/hadoop-2.7.4/etc/hadoop
export JAVA_HOME=/opt/jdk
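The post jumps straight to jps, but HDFS and YARN have to be formatted and started first. A sketch, run on master and assuming Hadoop is unpacked at /home/spark/bd/hadoop-2.7.4 (the path HADOOP_CONF_DIR points into):

source /etc/profile.d/env.sh
cd /home/spark/bd/hadoop-2.7.4
# Format the NameNode once, before the very first start (wipes existing HDFS metadata)
bin/hdfs namenode -format
# Start the NameNode/SecondaryNameNode plus a DataNode on every host in etc/hadoop/slaves
sbin/start-dfs.sh
# Start the ResourceManager and NodeManagers; required for spark-submit --master yarn
sbin/start-yarn.sh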
Run jps to check that the daemons started.
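Since master is also listed in slaves, it runs worker daemons alongside the master ones; jps there would typically show something like:

NameNode
SecondaryNameNode
ResourceManager
DataNode
NodeManager
Jps

while s1 and s2 show only DataNode, NodeManager, and Jps.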
If anything is missing, check the logs.
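Daemon logs land under the Hadoop logs/ directory by default, one file per daemon per host (HDFS daemons follow the hadoop-<user>-<daemon>-<host>.log pattern; YARN daemons use a yarn- prefix):

# Example: inspect the NameNode log on master
tail -n 50 /home/spark/bd/hadoop-2.7.4/logs/hadoop-*-namenode-master.log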
Running Spark on YARN: http://spark.apache.org/docs/latest/running-on-yarn.html
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 4g \
  --executor-cores 2 \
  examples/jars/spark-examples*.jar 10
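In cluster mode the driver runs inside a YARN container, so the "Pi is roughly ..." line never reaches the submitting console; it ends up in the driver's container log. One way to retrieve it, with the application ID copied from the submit output (the ID below is a placeholder):

# Requires YARN log aggregation; otherwise look under the NodeManager's local log dirs
yarn logs -applicationId application_1500000000000_0001 | grep "Pi is roughly"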
Original article: http://www.cnblogs.com/anjunact/p/7663610.html