
Install Hadoop


Installing Java

Hadoop runs on both Unix and Windows operating systems, and requires Java to be
installed. For a production installation, you should select a combination of operating
system, Java, and Hadoop that has been certified by the vendor of the Hadoop distribution
you are using. There is also a page on the Hadoop wiki that lists combinations
that community members have run with success.
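
A quick sanity check is to ask the installed Java for its version (output varies by JDK vendor), and to make sure JAVA_HOME points at the installation, since Hadoop's scripts rely on that variable; the path below is only an illustrative example:

% java -version
% export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64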

Creating Linux User Accounts

It’s good practice to create dedicated Unix user accounts to separate the Hadoop processes
from each other, and from other services running on the same machine. The
HDFS, MapReduce, and YARN services are usually run as separate users, named hdfs,
mapred, and yarn, respectively. They all belong to the same hadoop group.
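
For example, a sketch of the account setup on a Linux system using the standard groupadd/useradd tools (flags and defaults vary between distributions):

% sudo groupadd hadoop
% sudo useradd -g hadoop hdfs
% sudo useradd -g hadoop yarn
% sudo useradd -g hadoop mapred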

 

Installing Hadoop
Download Hadoop from the Apache Hadoop releases page, and unpack the contents of
the distribution in a sensible location, such as /usr/local (/opt is another standard choice;
note that Hadoop should not be installed in a user’s home directory, as that may be an
NFS-mounted directory):
% cd /usr/local
% sudo tar xzf hadoop-x.y.z.tar.gz
You also need to change the owner of the Hadoop files to be the hadoop user and group:
% sudo chown -R hadoop:hadoop hadoop-x.y.z
It’s convenient to put the Hadoop binaries on the shell path too:
% export HADOOP_HOME=/usr/local/hadoop-x.y.z
% export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
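
As a quick check that the unpacked binaries now resolve on the path, print the version:

% hadoop version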

 

Configuring SSH

Generate an RSA key pair for the account that will run the Hadoop daemons:

% ssh-keygen -t rsa -f ~/.ssh/id_rsa

Then append the public key to the list of authorized keys on each machine in the cluster (if home directories are not shared over NFS, copy the key to every worker, for example with ssh-copy-id):

% cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Test that you can SSH from the master to a worker machine by making sure ssh-agent
is running, and then run ssh-add to store your passphrase. You should be able to SSH
to a worker without entering the passphrase again.
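
A sketch of that test, assuming a worker reachable as worker1 (a hypothetical hostname):

% eval $(ssh-agent)
% ssh-add
% ssh worker1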

 

Formatting the HDFS Filesystem

Before it can be used, a brand-new HDFS installation needs to be formatted. This creates an empty filesystem by initializing the storage directories and the namenode's persistent data structures:

% hdfs namenode -format
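
If you created the separate accounts described earlier, one way to run the format as the hdfs user (assuming sudo is set up) is:

% sudo -u hdfs hdfs namenode -format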

Starting and Stopping the Daemons

To start the HDFS daemons, run the following as the hdfs user:

% start-dfs.sh

As the yarn user, start the YARN daemons:

% start-yarn.sh

Finally, as the mapred user, start the MapReduce job history server:

% mr-jobhistory-daemon.sh start historyserver

The start-dfs.sh script works out which machines run a namenode by consulting the Hadoop configuration; you can print the same list with:

% hdfs getconf -namenodes
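
The distribution ships matching stop scripts; stopping in the reverse of the start order is the usual convention:

% mr-jobhistory-daemon.sh stop historyserver
% stop-yarn.sh
% stop-dfs.sh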

 

Creating User Directories
Once you have a Hadoop cluster up and running, you need to give users access to it.
This involves creating a home directory for each user and setting ownership permissions
on it:
% hadoop fs -mkdir /user/username
% hadoop fs -chown username:username /user/username
This is a good time to set space limits on the directory. The following sets a 1 TB limit
on the given user directory:
% hdfs dfsadmin -setSpaceQuota 1t /user/username
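
To confirm the quota is in place, one way is a quota-aware count of the directory (the -q option reports the configured quotas and what remains):

% hadoop fs -count -q /user/username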

Hadoop Configuration
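
The steps above cover installation only; the daemons themselves are configured through a handful of XML files in Hadoop's etc/hadoop directory, chiefly core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml. As a minimal sketch, assuming a single machine stands in for the whole cluster (a pseudodistributed setup), core-site.xml only needs to point clients at the namenode:

<?xml version="1.0"?>
<!-- etc/hadoop/core-site.xml: minimal pseudodistributed sketch.
     localhost is an assumption; use the namenode's hostname in a real cluster. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>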

 


Original source: http://www.cnblogs.com/baxk/p/5401805.html
