Versions used:
JDK 1.7+
Hadoop 2.6.0 (pseudo-distributed)
Scala 2.10.5
Spark 1.4.0
The concrete setup steps follow.
Install JDK 1.7+
[Download] http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Set the environment variables (append to /etc/profile; Oracle JDK is preferable to OpenJDK):
export JAVA_HOME=/usr/java/java-1.7.0_71
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Reload the environment variables and check the Java version:
$ source /etc/profile
$ java -version
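As an optional extra check, the compiler should report the same version as the runtime:
$ javac -version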
Download and install Scala 2.10.5
[Download] http://www.scala-lang.org/download/2.10.5.html
Download the corresponding tarball, then extract and move it:
$ tar -zxf scala-2.10.5.tgz
$ sudo mv scala-2.10.5 /usr/local
Configure the environment variables (the path must match the version just installed, 2.10.5):
export SCALA_HOME=/usr/local/scala-2.10.5
export PATH=$SCALA_HOME/bin:$PATH
Reload the environment variables:
source /etc/profile
Verify the Scala installation:
$ scala -version
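Beyond the version check, a one-line smoke test of the Scala runner (optional):
$ scala -e 'println("hello")'
It should print hello and exit.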
[Tested] Installing Hadoop by compiling it manually (only needed when Hadoop itself must be modified)
[Reference for installing Hadoop] http://qindongliang.iteye.com/blog/2222145
First install the build dependencies (package names are space-separated):
sudo yum install -y autoconf automake libtool git gcc gcc-c++ make cmake openssl-devel ncurses-devel bzip2-devel
Install Maven 3.0+
[Download] http://archive.apache.org/dist/maven/maven-3/3.0.5/binaries/
tar -xvf apache-maven-3.0.5-bin.tar.gz
sudo mv apache-maven-3.0.5 /usr/local/
Configure the environment variables:
export MAVEN_HOME=/usr/local/apache-maven-3.0.5
export PATH=${PATH}:${MAVEN_HOME}/bin
Apply the changes and verify:
source /etc/profile
mvn -v
Install Ant 1.8+
[Download] http://archive.apache.org/dist/ant/binaries/
Environment variables:
export ANT_HOME=/usr/local/apache-ant-1.8.4
export PATH=$ANT_HOME/bin:$PATH
Test:
ant -version
Install protobuf 2.5.0:
tar xvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/usr/local/protobuf
make
make install
Environment variables:
export PATH=$PATH:/usr/local/protobuf/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/protobuf/lib
Test:
protoc --version
If it prints libprotoc 2.5.0, the installation succeeded.
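As a further sanity check (optional), protoc should compile a trivial proto2 file; a minimal sketch, file and message names here are arbitrary:
cat > /tmp/ping.proto <<'EOF'
package demo;
message Ping { required int32 id = 1; }
EOF
protoc --proto_path=/tmp --java_out=/tmp /tmp/ping.proto && echo "protoc OK"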
Install snappy. After downloading and extracting the snappy source, build it:
./configure --prefix=/usr/local/snappy    # pick an install directory
make
make install
Next, build hadoop-snappy:
git clone https://github.com/electrum/hadoop-snappy.git
cd hadoop-snappy
mvn package -Dsnappy.prefix=/usr/local/snappy    # must match the snappy install prefix chosen above
Under hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT-tar/hadoop-snappy-0.0.1-SNAPSHOT/lib there is a hadoop-snappy-0.0.1-SNAPSHOT.jar; after Hadoop has been compiled it needs to be copied into $HADOOP_HOME/lib (see the sketch after the build command below).
Now download and unpack the Hadoop (CDH) source:
wget http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.4.1-src.tar.gz
tar -zxvf hadoop-2.6.0-cdh5.4.1-src.tar.gz
cd hadoop-2.6.0-cdh5.4.1
mvn clean package -DskipTests -Pdist,native -Dtar -Dsnappy.lib=<path to the native libraries built by hadoop-snappy> -Dbundle.snappy
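Once the build finishes, copy the hadoop-snappy jar into the Hadoop lib directory as noted above; a minimal sketch, assuming $HADOOP_HOME points at your Hadoop installation:
cp hadoop-snappy/target/hadoop-snappy-0.0.1-SNAPSHOT-tar/hadoop-snappy-0.0.1-SNAPSHOT/lib/hadoop-snappy-0.0.1-SNAPSHOT.jar $HADOOP_HOME/lib/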
[Final choice] Installing Hadoop from a prebuilt release (no compilation needed when Hadoop does not have to be modified)
Download a prebuilt Hadoop 2.6.0 package, extract it into /usr/local/, and rename the directory to /usr/local/hadoop (this path is assumed from here on). The configuration files live in /usr/local/hadoop/etc/hadoop/ (many XML files); for pseudo-distributed operation only two of them need editing: core-site.xml and hdfs-site.xml. In each, replace the empty <configuration></configuration> block with the configuration below. For core-site.xml:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
For hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
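Hadoop creates these directories itself (the name directory when the NameNode is formatted, the data directory when the DataNode starts), but pre-creating them can rule out permission problems; an optional one-liner:
mkdir -p /usr/local/hadoop/tmp/dfs/name /usr/local/hadoop/tmp/dfs/data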
[Note] Unless stated otherwise, all relative paths below start from the Hadoop installation directory, /usr/local/hadoop/.
Format the NameNode:
bin/hdfs namenode -format
On success the output contains a "successfully formatted" message, and near the end (about the fifth line from the bottom) "Exitting with status 0"; "Exitting with status 1" indicates an error. If it fails, try again with sudo:
sudo bin/hdfs namenode -format
Then start HDFS:
sbin/start-dfs.sh
You may see the warning
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
This can be ignored; it does not affect normal use. Run jps to check that the NameNode, DataNode, and SecondaryNameNode processes are running, then open http://localhost:50070 (localhost or the server's IP) in a browser.
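Optionally, a quick read/write check against the running HDFS (the target path is just an example):
bin/hdfs dfs -mkdir -p /user/$(whoami)
bin/hdfs dfs -put etc/hadoop/core-site.xml /user/$(whoami)/
bin/hdfs dfs -ls /user/$(whoami)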
If you instead hit the error "Error: JAVA_HOME is not set and could not be found.", set JAVA_HOME explicitly in hadoop/etc/hadoop/hadoop-env.sh: find the line export JAVA_HOME=${JAVA_HOME} and replace it with the JAVA_HOME path set earlier (e.g. export JAVA_HOME=/usr/java/java-1.7.0_71), then retry. To stop HDFS:
sbin/stop-dfs.sh
Install Spark. Download the prebuilt package, extract it, and move it to /usr/local/spark:
wget http://archive.apache.org/dist/spark/spark-1.4.0/spark-1.4.0-bin-hadoop2.6.tgz
tar -zxf spark-1.4.0-bin-hadoop2.6.tgz
sudo mv spark-1.4.0-bin-hadoop2.6 /usr/local/spark
Environment variables:
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
Configure spark-env.sh:
cd $SPARK_HOME/conf
cp spark-env.sh.template spark-env.sh
vim spark-env.sh
Append the following (adjust paths and resource sizes to your machine):
export JAVA_HOME=/usr/java/latest
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SCALA_HOME=/usr/local/scala-2.10.5
export SPARK_HOME=/usr/local/spark
export SPARK_MASTER_IP=127.0.0.1
export SPARK_MASTER_PORT=7077            # workers and jobs connect to spark://127.0.0.1:7077
export SPARK_MASTER_WEBUI_PORT=8099      # master web UI, checked below
export SPARK_WORKER_CORES=3              # cores each worker may use
export SPARK_WORKER_INSTANCES=1          # worker processes per node
export SPARK_WORKER_MEMORY=10G           # memory each worker may allocate; lower this on a small machine
export SPARK_WORKER_WEBUI_PORT=8081
export SPARK_EXECUTOR_CORES=1
export SPARK_EXECUTOR_MEMORY=1G
#export SPARK_CLASSPATH=/opt/hadoop-lzo/current/hadoop-lzo.jar
#export SPARK_CLASSPATH=$SPARK_CLASSPATH:$CLASSPATH
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$HADOOP_HOME/lib/native   # so Spark finds Hadoop's native libraries
Configure the slaves file (one worker hostname per line; for a single machine, just localhost):
cp slaves.template slaves
vim slaves
localhost
Start the master:
cd $SPARK_HOME/sbin/
./start-master.sh
Start the worker(s):
cd $SPARK_HOME/sbin/
./start-slaves.sh    # note the plural: slaves, not slave
Then check the master web UI at http://localhost:8099 (localhost or the server's IP).
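With the master and a worker running, you can optionally verify the cluster with the bundled SparkPi example; the examples jar name below assumes the spark-1.4.0-bin-hadoop2.6 package (check $SPARK_HOME/lib for the exact file name):
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://127.0.0.1:7077 \
  $SPARK_HOME/lib/spark-examples-1.4.0-hadoop2.6.0.jar 10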
To stop Spark:
cd $SPARK_HOME/sbin/
./stop-master.sh
./stop-slaves.sh
Spark 1.4.0 standalone deployment (Hadoop 2.6.0 pseudo-distributed) [Tested]
Original article: http://blog.csdn.net/hust_sheng/article/details/48008617