Configure the host name mapping

[hadoop@slave03 ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
10.15.5.200 master.hadoop
10.15.5.201 slave01.hadoop
10.15.5.202 slave02.hadoop
10.15.5.203 slave03.hadoop
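A quick sanity check can confirm every cluster host appears exactly once in the mapping. This is a hypothetical helper, not part of the original procedure; the mapping is inlined here for demonstration, while on a real node you would read /etc/hosts itself.

```shell
# Hypothetical check: count each hostname in the mapping.
# On a real node, replace the inline string with: hosts_file=$(cat /etc/hosts)
hosts_file='10.15.5.200 master.hadoop
10.15.5.201 slave01.hadoop
10.15.5.202 slave02.hadoop
10.15.5.203 slave03.hadoop'
for h in master.hadoop slave01.hadoop slave02.hadoop slave03.hadoop; do
  n=$(printf '%s\n' "$hosts_file" | grep -c " ${h}\$")
  echo "$h: $n entry"
done
```

On the live hosts, `getent hosts master.hadoop` is another way to confirm the name resolves through /etc/hosts.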
As shown above there are four hosts, and /etc/hosts has been edited on each one. /etc/ssh/sshd_config has also been edited on every host to enable public-key authentication:

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile      .ssh/authorized_keys
On every host, an RSA key pair with an empty passphrase has been generated as the hadoop user with ssh-keygen -t rsa, and all of the public keys have been appended to authorized_keys successfully, as shown below:
[hadoop@slave03 ~]$ cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsZbjTNyyW4ZPZ2cCbUk/m3F5LGTsLKzv/TzuG0LJZ9E/UN2dV4j7dhHbN1olbsvFp4oyGGKhFxn036PljJRfEWhdY0dDagWklH3oijRYr+UzBTMf8sIR9FxKD540rhC0NCEP+KaNZXAhndB7BTZTIkytMlfCboOcbRjt1XU+3Yhi7Dlp6Z6cu2oSgYo4iYpX1anHUMRuMLl5/mQ1INeoRMerBTzGfIYAIQF5Du2bE9HRSClyUal2O6QkKhZGHRa0+3jjFY/vER0R+ruLJZLVfQQkJffQJEz1qk4e/V2GEg5ScjAPyojKg+R/mpWNAwivbsJ8pe4YWnUxBNQFphB/7w== hadoop@master.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwk3DPdzRD41+saaLwnqLkMnLVf5m5br2hb6lCNDF+3vEiezIKpoxp2lzqOp7kUOMrubcJRVcdXJWgrzbpbmUzkqF1YyaXqZ1OaIR+by20glTnYemkTE3ljknVgrQXXyVUxYDw8TGx3sJ/sw5EgrUEM3xB98Yz9j9JUL7lkGzahjYO02JcaKU83YfqUjCt9QIhnLiIo7BPs/NVK6UP6Srn0fCvsVXS2XqkHzjPhIjiQAiyDINMz3ZCHxnFsOQm9I//R61YYrHLfNPQKOnBMRV4/Q2FPGXa5wgx022i3VaUiXOSGlTV2ugoC22Q1n422q9P3kWevhL4nDGizn5rEOCEQ== hadoop@slave01.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3FI6rK31HE5MGpqQS/60w8lHruIHSomjdWhmiN+FXKoTyZzqHwxyRS7j1Y8mplIpoUzAOlfPWbgm2AQtojFbLJfOtN4vvBkniw6OEcvjojyOQdyHsEr907L9K9GRq0VtFkjo/YxvYQf823I+14sOdZ8fCZrue0NmcJ0RMifLHrmMGeJ+LLN5fyWYygvDQIenF/DWNNKNgp15v/eVVaRokJ00sSooeikJlv7DL7EWK8UBc0M6Fum7Y2L4WXVmjavAoG5jxN2tJXQvwgbzFaJ1OWFzZzwsOlI+khVvb13ARGnSd0Lp1PDQ0V64JyhVj70TllSgvrhz2vkMFbe82TJM/w== hadoop@slave02.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA20RPILfJJpN7QsiiGfGaKmcibuwDexslHob9CmzJUFJFxm8sMXtvOMuuipknCl4Aya5+U3AjO5xPh5ZcDrCMSgUrgerwi9ofF/2Od9efOCd4JMKV0V/nsAHdUtCxCeBGOyaPdZ9rEnEqKtu3dF8fW2zOl3UGJ3GrzgTeAvG1Rlet/+jL4RJ/ob+CzVwX3pZQ5YGrwUsPbQ70Sn4aZSM56VTjx9QaqWlXxQDiHgyI+FN2OCsfqB9kiDHSu3DZ2Jjyil+7kOeCUuoCOXZqjG3VUGURYVtguEIlRuHEOQV9hWc7zFVADt5AKpi0c4qi3Bo1P3gYVJ3/wYBqK8YrrHT8Jw== hadoop@slave03.hadoop
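The key setup described above can be sketched as follows. This is a hypothetical demonstration: a scratch directory stands in for the hadoop user's home, so it can be run without touching a real account. On the actual hosts you would run ssh-keygen as hadoop and append each host's id_rsa.pub to ~/.ssh/authorized_keys on every other host.

```shell
# Demo of the per-host key setup; demo_home stands in for ~hadoop.
demo_home=$(mktemp -d)
mkdir -p "$demo_home/.ssh"
# -N "" gives the empty passphrase used in this deployment
ssh-keygen -t rsa -N "" -f "$demo_home/.ssh/id_rsa" -q
cat "$demo_home/.ssh/id_rsa.pub" >> "$demo_home/.ssh/authorized_keys"
# sshd rejects keys in group/world-accessible files, so tighten permissions
chmod 700 "$demo_home/.ssh"
chmod 600 "$demo_home/.ssh/authorized_keys"
```

For the cross-host copies, `ssh-copy-id hadoop@slave01.hadoop` (run once per host pair) is a convenient alternative to appending the keys by hand.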
Install Java on every host (if Java is already present, uninstall it first). The download link from the Oracle site:
http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.tar.gz
Download the file to a local FTP server first, because a direct wget is rejected by Oracle.
[hadoop@slave03 usr]$ cd java
[hadoop@slave03 java]$ pwd
/usr/java
ftp> bin
200 Type set to I.
ftp> get jdk-7u67-linux-x64.tar.gz
local: jdk-7u67-linux-x64.tar.gz remote: jdk-7u67-linux-x64.tar.gz
227 Entering Passive Mode (10,15,15,63,19,48).
150 Opening data connection for jdk-7u67-linux-x64.tar.gz.
Extract the archive into the /usr/java directory:
[hadoop@slave03 java]$ sudo tar -zxvf jdk-7u67-linux-x64.tar.gz
Edit the system environment variables to point at the Java installation, apply the configuration with source, and verify the Java version:
[hadoop@slave03 jdk1.7.0_67]$ sudo vim /etc/profile
# JAVA environment
export JAVA_HOME=/usr/java/jdk1.7.0_67
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
[hadoop@slave03 jdk1.7.0_67]$ source /etc/profile
[hadoop@slave03 jdk1.7.0_67]$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
Distribute the installation file to the other hosts, then install Java, configure the variables, and verify on each host following the steps above.
[hadoop@slave03 java]$ scp jdk-7u67-linux-x64.tar.gz hadoop@master.hadoop:~/
[hadoop@slave03 java]$ scp jdk-7u67-linux-x64.tar.gz hadoop@slave01.hadoop:~/
[hadoop@slave03 java]$ scp jdk-7u67-linux-x64.tar.gz hadoop@slave02.hadoop:~/
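The per-host scp commands can also be wrapped in a loop. This sketch is a dry run (my addition, not from the original transcript): it only echoes the commands, using the hostnames from /etc/hosts above; removing the `echo` would perform the copies for real.

```shell
# Dry run: print the scp command for each remaining host.
# Remove "echo" to actually copy the archive.
for host in master.hadoop slave01.hadoop slave02.hadoop; do
  echo scp jdk-7u67-linux-x64.tar.gz "hadoop@${host}:~/"
done
```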
Configure the NTP service. Cluster clocks must agree, or synchronization problems will follow, so NTP is needed to keep the cluster running correctly. Point the master at an external NTP server and have the other hosts sync from the master; alternatively, a dedicated NTP server on the corporate network also works. One usable server is 210.72.145.44 (the National Time Service Center).
[hadoop@master ~]$ sudo ntpdate -u 65.55.56.206
21 Aug 23:30:04 ntpdate[2355]: step time server 65.55.56.206 offset 455.994813 sec
The hardware clock differs from the system time, so write the system time to the hardware clock (after a sync, this does not happen automatically).
[hadoop@master ~]$ sudo hwclock -r
Thu 21 Aug 2014 11:25:07 PM CST  -0.736104 seconds
[hadoop@master ~]$ sudo date
Thu Aug 21 23:33:01 CST 2014
[hadoop@master ~]$ sudo hwclock -w
[hadoop@master ~]$ sudo hwclock -r
Thu 21 Aug 2014 11:34:36 PM CST  -0.053046 seconds
Configure the NTP server parameters. The defaults can be left alone; only the two lines below need to be modified or added. Verification follows:
[hadoop@master ~]$ sudo vim /etc/ntp.conf
restrict default nomodify notrap
server 65.55.56.206 prefer
[hadoop@master ~]$ sudo service ntpd start
Starting ntpd:                                             [  OK  ]
[hadoop@master ~]$ sudo chkconfig ntpd on
[hadoop@master ~]$ sudo ntpstat
synchronised to NTP server (202.112.31.197) at stratum 3
   time correct to within 1053 ms
   polling server every 64 s
As shown below, configure every datanode as an NTP client, sync it, then write the time to the hardware clock:
[hadoop@slave01 ~]$ sudo vim /etc/ntp.conf
server 10.15.5.200
[hadoop@slave03 ~]$ sudo ntpdate -u 10.15.5.200
22 Aug 00:00:03 ntpdate[2404]: step time server 10.15.5.200 offset 4964.547679 sec
[hadoop@slave03 ~]$ sudo hwclock -r
Thu 21 Aug 2014 10:37:35 PM CST  -0.447982 seconds
[hadoop@slave03 ~]$ sudo hwclock -w
[hadoop@slave03 ~]$ sudo hwclock -r
Fri 22 Aug 2014 12:00:30 AM CST  -0.248008 seconds
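If ntpd is not left running on the slaves, one alternative (an assumption on my part, not part of the original setup) is a root cron entry that resyncs from the master periodically and writes the result to the hardware clock:

```shell
# Hypothetical root crontab entry (crontab -e as root): every 30 minutes,
# sync from the master (10.15.5.200) and, on success, save to the RTC.
*/30 * * * * /usr/sbin/ntpdate -u 10.15.5.200 && /sbin/hwclock -w
```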
Install MySQL
[hadoop@master ~]$ sudo yum install -y mysql-server mysql mysql-devel
Dependencies Resolved
================================================================================
 Package             Arch       Version             Repository     Size
================================================================================
Installing:
 mysql               x86_64     5.1.73-3.el6_5      updates       894 k
 mysql-server        x86_64     5.1.73-3.el6_5      updates       8.6 M
Installing for dependencies:
 perl-DBD-MySQL      x86_64     4.013-3.el6         base          134 k
Updating for dependencies:
 mysql-libs          x86_64     5.1.73-3.el6_5      updates       1.2 M

Transaction Summary
================================================================================
Install       3 Package(s)
Upgrade       1 Package(s)
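The transcript stops at dependency resolution. A typical follow-up on CentOS 6 (assumed here; these steps are not shown in the original) is to start the daemon, enable it at boot, and harden the default installation:

```shell
# Assumed post-install steps on CentOS 6, mirroring the service/chkconfig
# pattern used for ntpd above; mysql_secure_installation is interactive.
sudo service mysqld start
sudo chkconfig mysqld on
sudo /usr/bin/mysql_secure_installation
```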
This post is from the "The old artisan" blog; please do not repost.
Hadoop deployment in practice: offline installation of CDH5.1 (to be completed)
Original article: http://ciscolang.blog.51cto.com/8976580/1543344