
Spark Deployment and Configuration

Date: 2016-01-11


Prerequisite: Hadoop is already installed.

============================ Set Up Spark =============================
Configuration

spark-env.sh:
HADOOP_CONF_DIR=/opt/data02/hadoop-2.6.0-cdh5.4.0/etc/hadoop
JAVA_HOME=/opt/modules/jdk1.7.0_67
SCALA_HOME=/opt/modules/scala-2.10.4
#######################################################
SPARK_MASTER_IP=hadoop-spark.dragon.org
SPARK_MASTER_PORT=7077
SPARK_MASTER_WEBUI_PORT=8080
SPARK_WORKER_CORES=1
SPARK_WORKER_MEMORY=1000m
SPARK_WORKER_PORT=7078
SPARK_WORKER_WEBUI_PORT=8081
SPARK_WORKER_INSTANCES=1

slaves:
hadoop-spark.dragon.org

spark-defaults.conf:
spark.master spark://hadoop-spark.dragon.org:7077
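Entries in spark-defaults.conf are whitespace-separated key/value pairs; comment lines starting with # and blank lines are ignored. A minimal Scala sketch of how such a line splits into a property name and value (the parser here is illustrative, not Spark's own loader):

```scala
// Sketch: spark-defaults.conf lines are "key<whitespace>value" pairs.
object DefaultsConf {
  def parse(lines: Seq[String]): Map[String, String] =
    lines.map(_.trim)
      .filter(l => l.nonEmpty && !l.startsWith("#"))   // drop blanks and comments
      .map { l =>
        val Array(k, v) = l.split("\\s+", 2)           // split on the first run of whitespace
        (k, v)
      }
      .toMap

  def main(args: Array[String]): Unit = {
    val conf = parse(Seq(
      "# cluster master",
      "spark.master spark://hadoop-spark.dragon.org:7077"
    ))
    println(conf("spark.master"))   // spark://hadoop-spark.dragon.org:7077
  }
}
```

Values set here become defaults for every spark-shell and spark-submit run, so with spark.master pointing at the standalone master above, the shell below connects to the cluster without any extra flags.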
Start Spark

Start the master:
sbin/start-master.sh

Start the slaves:
sbin/start-slaves.sh

Web UI:
http://hadoop-spark.dragon.org:8080

============================ Test Spark=============================

scala> val rdd = sc.textFile("hdfs://hadoop-spark.dragon.org:8020/user/hadoop/data/wc.input")

scala> rdd.cache()   // only marks the RDD for caching; it is materialized on the first action

scala> val wordcount = rdd.flatMap(_.split(" ")).map(x => (x, 1)).reduceByKey(_ + _)

scala> wordcount.take(10)   // returns the first 10 (word, count) pairs

scala> val wordsort = wordcount.map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1))   // swap to (count, word), sort descending, swap back

scala> wordsort.take(10)
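The same word-count pipeline can be checked without a cluster by mirroring the transformations on a plain Scala collection. This is a sketch with made-up input; reduceByKey is stood in for by groupBy plus a sum, and the map/sortByKey/map trick becomes a single sortBy on the count:

```scala
object WordCountSketch {
  // Local mirror of the RDD pipeline:
  // flatMap(_.split(" ")) -> map(word -> 1) -> reduceByKey(_ + _),
  // then sort by count descending.
  def wordCount(lines: Seq[String]): Seq[(String, Int)] =
    lines.flatMap(_.split(" "))
      .map(x => (x, 1))
      .groupBy(_._1)                                   // stand-in for reduceByKey
      .map { case (w, ps) => (w, ps.map(_._2).sum) }   // sum the 1s per word
      .toSeq
      .sortBy(-_._2)                                   // descending by count

  def main(args: Array[String]): Unit = {
    val lines = Seq("hadoop spark", "spark spark hadoop", "hdfs")
    wordCount(lines).take(10).foreach(println)
  }
}
```

The shapes match the shell session step for step, which makes it easy to verify the logic before pointing the job at real HDFS data.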


Source: http://www.cnblogs.com/ilinuxer/p/5119904.html
