Configure a static IP: edit the NIC config file under /etc/sysconfig/network-scripts/ (e.g. ifcfg-ens33; the file name depends on the interface):
BOOTPROTO="static"
IPADDR=192.168.1.111
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
NM_CONTROLLED=no
ONBOOT=yes
Edit /etc/hosts and add the host mapping:
192.168.1.111 bigdata111
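A quick sanity check on the hosts entry can be scripted; a sketch that validates the line format against a throwaway copy (on the real machine, run getent hosts bigdata111 instead):

```shell
# Sketch: verify the /etc/hosts line maps the IP to the hostname
# (demo file stands in for /etc/hosts)
echo '192.168.1.111 bigdata111' > /tmp/hosts.demo
grep -Eq '^192\.168\.1\.111[[:space:]]+bigdata111' /tmp/hosts.demo && echo OK
```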
Turn off the firewall:
a) Check status: systemctl status firewalld
b) Stop it: systemctl stop firewalld
c) Disable it at boot: systemctl disable firewalld
Disable SELinux:
a) Check SELinux status: getenforce
b) Disable: edit /etc/selinux/config and set SELINUX=disabled
c) Reboot for the change to take effect: reboot
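The edit in step b) can also be scripted with sed; a sketch against a throwaway copy of the file (on the real machine the target is /etc/selinux/config):

```shell
# Sketch: switch SELINUX to disabled with sed (demo file stands in for /etc/selinux/config)
f=/tmp/selinux-config.demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$f"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$f"
grep '^SELINUX=' "$f"   # prints: SELINUX=disabled
```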
Turn off iptables (if present):
a) Check status: systemctl status iptables
b) Disable it at boot: chkconfig iptables off
A) Test SSH: ssh bigdata111 (the OpenSSH server is installed by default)
B) Generate a key pair with ssh-keygen and authorize the public key:
cd ~/.ssh/                          # if the directory does not exist, run ssh localhost once first
ssh-keygen -t rsa                   # press Enter at every prompt
cat id_rsa.pub >> authorized_keys   # authorize the key
chmod 600 ./authorized_keys         # fix the file permissions
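The chmod 600 step matters: with the default StrictModes setting, sshd refuses keys from an authorized_keys file that is group- or world-readable. A minimal demo in a temp directory (paths and the key line are illustrative placeholders):

```shell
# Demo: authorized_keys must be mode 600 for passwordless login to work
dir=/tmp/ssh_perm_demo
mkdir -p "$dir"
echo 'ssh-rsa AAAA... root@bigdata111' > "$dir/authorized_keys"   # placeholder key line
chmod 600 "$dir/authorized_keys"
stat -c '%a' "$dir/authorized_keys"   # prints: 600
```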
A) Install the JDK: tar -zxvf jdk-8u241-linux-x64.tar.gz -C /root/test/
B) Configure the Java environment variables (append to ~/.bash_profile):
JAVA_HOME=/root/test/jdk1.8.0_241
export JAVA_HOME
PATH=$JAVA_HOME/bin:$PATH
export PATH
Reload so the changes take effect: source ~/.bash_profile
A) Unpack Hadoop: tar -zxvf hadoop-3.1.2.tar.gz -C /root/test/
B) Configure the Hadoop environment variables (append to ~/.bash_profile):
HADOOP_HOME=/root/test/hadoop-3.1.2
export HADOOP_HOME
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export PATH
A) Configure the users the Hadoop daemons run as (required when starting Hadoop as root; append to ~/.bash_profile):
export HDFS_DATANODE_USER=root
export HDFS_DATANODE_SECURE_USER=root
export HDFS_NAMENODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
b) Reload so the changes take effect immediately: source ~/.bash_profile
B) Configure hadoop-env.sh (under /root/test/hadoop-3.1.2/etc/hadoop/):
Uncomment line 54 and set JAVA_HOME:
export JAVA_HOME=/root/test/jdk1.8.0_241
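The edit above can be scripted with sed instead of changing line 54 by hand; a sketch against a stand-in file (the real file is /root/test/hadoop-3.1.2/etc/hadoop/hadoop-env.sh):

```shell
# Sketch: uncomment and set JAVA_HOME in hadoop-env.sh (demo on a stand-in file)
f=/tmp/hadoop-env.sh.demo
echo '# export JAVA_HOME=' > "$f"
sed -i 's|^# export JAVA_HOME=.*|export JAVA_HOME=/root/test/jdk1.8.0_241|' "$f"
grep '^export JAVA_HOME' "$f"   # prints: export JAVA_HOME=/root/test/jdk1.8.0_241
```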
A) Prepare the test data:
a) Create the input directory: mkdir -p /root/temp/input
b) Create the input file: vi /root/temp/input/data.txt with the following contents:
I love Beijing
I love China
Beijing is the capital of China
B) Run a MapReduce job (local mode):
a) cd /root/test/hadoop-3.1.2/share/hadoop/mapreduce/
b) hadoop jar hadoop-mapreduce-examples-3.1.2.jar (run with no arguments, this lists the many example programs the jar provides)
c) Run the wordcount example:
hadoop jar hadoop-mapreduce-examples-3.1.2.jar wordcount /root/temp/input/data.txt /root/temp/output/wc
d) Check the result: cat /root/temp/output/wc/part-r-00000
Beijing 2
China 2
I 2
capital 1
is 1
love 2
of 1
the 1
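The wordcount result above can be reproduced locally with plain coreutils as a sanity check (a sketch; the /tmp path is illustrative):

```shell
# Reproduce WordCount's result for the sample data with coreutils:
# split into words, sort (C locale matches the job's output order), count duplicates
printf 'I love Beijing\nI love China\nBeijing is the capital of China\n' > /tmp/data.txt
tr -s ' ' '\n' < /tmp/data.txt | LC_ALL=C sort | uniq -c | awk '{print $2"\t"$1}'
# prints the same eight word/count lines as the MapReduce job
```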
Pseudo-distributed mode is configured through four files under /root/test/hadoop-3.1.2/etc/hadoop/.
A) Edit hdfs-site.xml and add the following properties:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
B) Edit core-site.xml and add:
<property>
<name>fs.defaultFS</name>
<value>hdfs://bigdata111:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/root/test/hadoop-3.1.2/tmp</value>
</property>
C) Edit mapred-site.xml and add:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
D) Edit yarn-site.xml and add:
<property>
<name>yarn.resourcemanager.hostname</name>
<value>bigdata111</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
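Note that each of these files must keep its <property> entries inside a single <configuration> root element; for example, a complete yarn-site.xml for this setup would look like:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>bigdata111</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```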
Format the NameNode and start all daemons:
hdfs namenode -format
start-all.sh
Check with jps; the output should show these processes:
3172 ResourceManager
3668 Jps
2917 SecondaryNameNode
3307 NodeManager
2542 NameNode
2686 DataNode
Open the NameNode web UI in a browser: http://192.168.1.111:9870
Installation and configuration complete!
Big Data 01: Deploying Hadoop 3.1.2 on CentOS 7 in local mode and pseudo-distributed mode
Original post: https://www.cnblogs.com/weyanxy/p/12832100.html