This post uses the K-Master server as the example to walk through the basic environment setup: user configuration, sudo privileges, network configuration, disabling the firewall, and installing the JDK. Follow the same steps to complete the basic environment configuration of the KVMSlave1~KVMSlave3 servers.
Hardware environment: 4 CentOS 6.5 servers (one Master node, three Slave nodes)
Software environment: Java 1.7.0_45, hadoop-1.2.1
The file layouts of Hadoop 1.x and Hadoop 2.x are completely different, and installation tutorials for versions beyond Hadoop 1.x are hard to find online. I chose hadoop-1.1.2.tar.gz, and went with JDK 8 and CentOS 7 instead.
1) Add a user
[hadoop@K-Master hadoop]$ adduser hadoop #create the hadoop user
[hadoop@K-Master hadoop]$ passwd hadoop #set the password for the hadoop user
2) Create a group
[hadoop@K-Master hadoop]$ groupadd hadoop #create the hadoop group
3) Add the existing user to the group
[hadoop@K-Master hadoop]$ usermod -G hadoop hadoop
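To double-check that the user and group were set up correctly, the group membership can be listed with id (a quick sanity check; the numeric uid/gid values will vary by machine):
[hadoop@K-Master hadoop]$ id hadoop #the groups field should include hadoop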
1) Create a new group named admin
[hadoop@K-Master hadoop]# groupadd admin
2) Add the existing user to the admin group
[hadoop@K-Master hadoop]# usermod -G admin,hadoop hadoop
3) Grant write permission on /etc/sudoers
[hadoop@K-Master hadoop]# chmod u+w /etc/sudoers
4) Edit /etc/sudoers
[hadoop@K-Master hadoop]# vi /etc/sudoers
By default there is only one entry:
root    ALL=(ALL)       ALL
Add another entry below it:
%admin  ALL=(ALL)       ALL
With this entry the admin group gains sudo privileges, and the hadoop user, which belongs to the admin group, gets sudo privileges as well.
5) Remove the write permission after editing
[hadoop@K-Master hadoop]$ chmod u-w /etc/sudoers
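A simple way to confirm that the hadoop user really has sudo rights through the admin group is to run a harmless command via sudo (an optional sanity check):
[hadoop@K-Master hadoop]$ sudo whoami #should print root if the %admin entry works
[hadoop@K-Master hadoop]$ sudo -l #lists the commands the user may run via sudo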
1) Configure the IP address
This step can actually be skipped and the default address used as-is; for example, the default internal IP of my Baidu Cloud instance is 192.168.0.4.
The detailed configuration is as follows:
[hadoop@K-Master hadoop]$ su hadoop #switch to the hadoop user
[hadoop@K-Master hadoop]$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
HWADDR=06:8D:30:00:00:27
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.100.147
PREFIX=24
GATEWAY=192.168.100.1
DNS1=192.168.100.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=eth0
UUID=660a57a1-5edf-4cdd-b456-e7e1059aef11
ONBOOT=yes
LAST_CONNECT=1411901185
2) Restart the network service to apply the settings
[hadoop@K-Master hadoop]$ sudo service network restart
Shutting down interface eth0:  Device state: 3 (disconnected)      [  OK  ]
Shutting down loopback interface:                                  [  OK  ]
Bringing up loopback interface:                                    [  OK  ]
Bringing up interface eth0:  Active connection state: activated
Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/1
                                                                   [  OK  ]
3) Test the IP configuration
Check the IP address with the ifconfig command. The output below shows that the eth0 interface has the IP address 192.168.100.147, matching the address configured above, so the IP configuration succeeded.
[hadoop@K-Master ~]$ ifconfig
eth0      Link encap:Ethernet  HWaddr 06:8D:30:00:00:27
          inet addr:192.168.100.147  Bcast:192.168.100.255  Mask:255.255.255.0
          inet6 addr: fe80::48d:30ff:fe00:27/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:59099169 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30049168 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:12477388443 (11.6 GiB)  TX bytes:8811418526 (8.2 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2266013 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2266013 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:666482169 (635.6 MiB)  TX bytes:666482169 (635.6 MiB)
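Beyond checking the address itself, basic connectivity can be verified by pinging the gateway configured in ifcfg-eth0 (an optional extra check; substitute your own gateway address if it differs):
[hadoop@K-Master ~]$ ping -c 3 192.168.100.1 #0% packet loss indicates the link and gateway are reachable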
4) Change the hostname
On my machine the entry is: 192.168.0.4 instance-3lm099to instance-3lm099to.novalocal
[hadoop@K-Master hadoop]$ sudo vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=Master

[hadoop@K-Master hadoop]$ sudo vi /etc/hosts
127.0.0.1        localhost.localdomain
::1              hdirect30 hdirect30
192.168.100.201  K-Master
5) Reboot the host to make the hostname take effect
[hadoop@K-Master hadoop]$ sudo reboot
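After the reboot, the new hostname can be confirmed directly (a quick check; the output should match the HOSTNAME value written to /etc/sysconfig/network):
[hadoop@K-Master hadoop]$ hostname #prints the current hostname
[hadoop@K-Master hadoop]$ hostname -f #prints the fully qualified name resolved via /etc/hosts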
Before starting the cluster, turn off the firewall on every machine in it; otherwise the DataNodes will start up and then shut themselves down automatically.
1) Check the firewall status
[hadoop@K-Master ~]$ sudo service iptables status
iptables: Firewall is not running.
2) Stop the firewall
[hadoop@K-Master hadoop]$ sudo service iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
3) Disable the firewall permanently
[hadoop@K-Master hadoop]$ sudo chkconfig iptables off
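To confirm the service will stay disabled across reboots, the runlevel settings can be listed with chkconfig (an optional check; every runlevel should read off):
[hadoop@K-Master hadoop]$ sudo chkconfig --list iptables #expect 0:off 1:off 2:off 3:off 4:off 5:off 6:off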
4) Disable SELinux
[hadoop@K-Master hadoop]$ sudo vi /etc/selinux/config
SELINUX=disabled
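Editing /etc/selinux/config only takes effect after a reboot; to switch SELinux off for the current session as well, setenforce can be used (an optional step):
[hadoop@K-Master hadoop]$ sudo setenforce 0 #switch to permissive mode immediately
[hadoop@K-Master hadoop]$ getenforce #should now report Permissive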
1) Copy and install the JDK package
[hadoop@K-Master ~]$ scp hadoop@192.168.0.201:/home/hadoop/jdk-7u65-linux-x64.rpm .
[hadoop@K-Master ~]$ sudo rpm -ivh jdk-7u65-linux-x64.rpm
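If the copy and install succeed, the JDK should show up in the RPM database and under /usr/java (a quick check, assuming the default install path of the Oracle JDK rpm):
[hadoop@K-Master ~]$ rpm -qa | grep -i jdk #the installed jdk package should be listed
[hadoop@K-Master ~]$ ls /usr/java/ #should contain jdk1.7.0_65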
2) Edit the "/etc/profile" file and append the Java "JAVA_HOME", "CLASSPATH", and "PATH" settings at the end.
[hadoop@K-Master ~]$ sudo vim /etc/profile
#JAVA
export JAVA_HOME=/usr/java/jdk1.7.0_65
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
#HADOOP
export HADOOP_HOME=/usr/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_HOME_WARN_SUPPRESS=1
3) Make the configuration take effect
[hadoop@K-Master ~]$ source /etc/profile
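Once the profile has been sourced, the environment can be verified by checking the Java version and the exported variables (a simple check; the hadoop command itself only works after the Hadoop package has been unpacked to $HADOOP_HOME):
[hadoop@K-Master ~]$ java -version #should report java version "1.7.0_65"
[hadoop@K-Master ~]$ echo $JAVA_HOME $HADOOP_HOME #should print the two paths set above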
[Hadoop Basic Tutorial] 1. Hadoop Server Basic Environment Setup (repost)
Original article: https://www.cnblogs.com/shamo89/p/9276872.html