1. Environment
OS: CentOS 6.4 x86_64, minimal install
mfs-master 192.168.3.33
mfs-slave 192.168.3.34
vip 192.168.3.35
mfs-chun 192.168.3.36
mfs-client 192.168.3.37
2. Base configuration
# Stop iptables and add local name resolution in /etc/hosts
[root@mfs-master ~]# service iptables stop
[root@mfs-master ~]# vim /etc/hosts
[root@mfs-master ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.3.33 mfs-master
192.168.3.34 mfs-slave
192.168.3.36 mfs-chun
192.168.3.37 mfs-client
# Install the EPEL and ELRepo yum repositories
[root@mfs-master ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Retrieving http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.sh3hsI: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
[root@mfs-master ~]# sed -i 's@#b@b@g' /etc/yum.repos.d/epel.repo
[root@mfs-master ~]# sed -i 's@mirrorlist@#mirrorlist@g' /etc/yum.repos.d/epel.repo
[root@mfs-master ~]# rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
# Sync time over NTP from cron
[root@mfs-master ~]# echo "*/10 * * * * /usr/sbin/ntpdate asia.pool.ntp.org &>/dev/null" >/var/spool/cron/root
# Set up ssh key trust (only mfs-master and mfs-slave need mutual trust)
# On mfs-master:
[root@mfs-master ~]# ssh-keygen
[root@mfs-master ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@mfs-slave
# On mfs-slave:
[root@mfs-slave ~]# ssh-keygen
[root@mfs-slave ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@mfs-master
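Since every later step addresses the nodes by name, it is worth confirming that all of them resolve before going on. A minimal sketch of such a check; here a sample of the hosts file is inlined so the loop can be shown, but on a real node you would read /etc/hosts directly:

```shell
# Verify that every planned cluster node has a hosts entry.
# hosts_sample stands in for the real /etc/hosts content.
hosts_sample='192.168.3.33 mfs-master
192.168.3.34 mfs-slave
192.168.3.36 mfs-chun
192.168.3.37 mfs-client'

for node in mfs-master mfs-slave mfs-chun mfs-client; do
    if echo "$hosts_sample" | grep -qw "$node"; then
        echo "$node: ok"
    else
        echo "$node: MISSING" >&2
    fi
done
```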
3. Installing and configuring heartbeat
(1). Run the same installation on both mfs-master and mfs-slave
[root@mfs-master ~]# yum install heartbeat -y
(2). Configure ha.cf
[root@mfs-master ~]# egrep -v "^$|^#" /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local1
keepalive 2
deadtime 30
warntime 10
initdead 120
mcast eth0 225.0.10.1 694 1 0
auto_failback on
node mfs-master
node mfs-slave
crm no
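The timing values above must be consistent with each other: warntime should be below deadtime, and initdead (the grace period at boot) should be at least twice deadtime. A small sketch that checks this relationship, with the values from the config above inlined:

```shell
# Sanity-check heartbeat timing parameters (values copied from ha.cf above).
keepalive=2 deadtime=30 warntime=10 initdead=120

if [ "$warntime" -lt "$deadtime" ] && [ $((deadtime * 2)) -le "$initdead" ]; then
    echo "timing parameters consistent"
else
    echo "timing parameters inconsistent" >&2
fi
```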
(3). Configure authkeys
[root@mfs-master ~]# dd if=/dev/random bs=512 count=1 |openssl md5
0+1 records in
0+1 records out
21 bytes (21 B) copied, 5.0391e-05 s, 417 kB/s
(stdin)= c55529482f1c76dd8967ba41f5441ae1
[root@mfs-master ~]# grep -v ^# /etc/ha.d/authkeys
auth 1
1 md5 c55529482f1c76dd8967ba41f5441ae1
[root@mfs-master ~]# chmod 600 /etc/ha.d/authkeys
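The manual steps above (generate a random digest, paste it into authkeys, tighten the permissions) can be scripted in one pass. A sketch using /dev/urandom and md5sum instead of openssl; it writes to a sample file name so it does not touch /etc/ha.d:

```shell
# Generate an md5 shared key and write an authkeys file with correct permissions.
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
printf 'auth 1\n1 md5 %s\n' "$key" > authkeys.sample
chmod 600 authkeys.sample
cat authkeys.sample
```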
(4). Configure haresources
[root@mfs-master ~]# grep -v ^# /etc/ha.d/haresources
mfs-master IPaddr::192.168.3.35/24/eth0
# For now only the IP resource is configured, to ease debugging
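Once DRBD and the MFS master are in place, this resource line typically grows to chain the disk, the filesystem, and the service behind the VIP. A sketch of what that eventual line could look like, assuming the DRBD resource `mfsdata` and mount point `/usr/local/mfs` configured later in this post, and a hypothetical LSB init script named `mfsmaster` (that script name is an assumption, not something this post configures):

```
mfs-master IPaddr::192.168.3.35/24/eth0 drbddisk::mfsdata Filesystem::/dev/drbd1::/usr/local/mfs::ext4 mfsmaster
```

heartbeat starts these resources left to right and stops them right to left, so the VIP comes up first and the service is stopped first on failover.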
(5). Start heartbeat
# Note: in production, disable start on boot; after a server reboot, start the service manually
[root@mfs-master ~]# chkconfig heartbeat off
# Start the service
[root@mfs-master ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
# Copy the configuration files to mfs-slave
[root@mfs-master ha.d]# scp authkeys ha.cf haresources mfs-slave:/etc/ha.d
# Start the service on mfs-slave
[root@mfs-slave ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
# Check the result: the VIP is up on the master
[root@mfs-master ha.d]# ip a|grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
# Check the standby node: it holds no VIP
[root@mfs-master ha.d]# ssh mfs-slave ip a|grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
(6). Test heartbeat
Normal state
# IP addresses on mfs-master
[root@mfs-master ha.d]# ip a |grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
# IP addresses on mfs-slave
[root@mfs-slave ~]# ip a |grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
State after a simulated primary-node failure
# Stop the heartbeat service on the primary node mfs-master
[root@mfs-master ha.d]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
# Once heartbeat stops on the primary, the VIP resource moves to the standby
[root@mfs-master ha.d]# ip a|grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
# Check the resources on the standby node mfs-slave
[root@mfs-slave ~]# ip a |grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
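The "who holds the VIP" check above can be automated by grepping `ip a` output for the VIP. A sketch; a captured sample of the standby's output is inlined so the logic can be demonstrated, but on a live node you would pipe `ip a` in directly:

```shell
# Report whether the VIP is held by this node, given ip(8) output.
vip="192.168.3.35"
ip_sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0'

if echo "$ip_sample" | grep -q "inet $vip/"; then
    echo "VIP $vip is active on this node"
else
    echo "VIP $vip not present"
fi
```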
Restore heartbeat on the primary node
[root@mfs-master ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO:  Resource is stopped
Done.
# After heartbeat comes back on the primary, it takes the resources over again
[root@mfs-master ~]# ip a |grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.33/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.35/24 brd 192.168.3.255 scope global secondary eth0
[root@mfs-slave ~]# ip a |grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.3.34/24 brd 192.168.3.255 scope global eth0
4. Installing and deploying DRBD
(1). Partition the disk (identical steps on mfs-master and mfs-slave)
[root@mfs-master ~]# fdisk /dev/sdb
# Note: /dev/sdb is split into two partitions, /dev/sdb1 (15G) and /dev/sdb2
[root@mfs-master ~]# partprobe /dev/sdb
# Format the data partition
[root@mfs-master ~]# mkfs.ext4 /dev/sdb1
# Note: /dev/sdb2 will hold the DRBD metadata and must not be formatted
[root@mfs-master ~]# tune2fs -c -1 /dev/sdb1
# Note: set the maximum mount count to -1 to disable the forced fsck on remount
(2). Install DRBD
Because the system is CentOS 6.4, we also need the kernel development packages, and their version must match `uname -r`; the packages were extracted from the installation media (process omitted). The installation is identical on mfs-master and mfs-slave, so only mfs-master is shown.
# Install the kernel development packages
[root@mfs-master ~]# rpm -ivh kernel-devel-2.6.32-358.el6.x86_64.rpm kernel-headers-2.6.32-358.el6.x86_64.rpm
[root@mfs-master ~]# yum install drbd84 kmod-drbd84 -y
(3). Configure DRBD
a. Edit the global configuration file
[root@mfs-master ~]# egrep -v "^$|^#|^[[:space:]]+#" /etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    protocol C;
    handlers {
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;
        no-disk-flushes;
        no-md-flushes;
        resync-rate 200M;   # drbd 8.4 option name; "rate" was the 8.3 syncer syntax
    }
    net {
        sndbuf-size 512k;
        max-buffers 8000;
        unplug-watermark 1024;
        max-epoch-size 8000;
        cram-hmac-alg "sha1";
        shared-secret "weyee2014";
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
}
b. Add the resource definition
[root@mfs-master ~]# cat /etc/drbd.d/mfsdata.res
resource mfsdata {
    on mfs-master {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.3.33:7789;
        meta-disk /dev/sdb2 [0];
    }
    on mfs-slave {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.3.34:7789;
        meta-disk /dev/sdb2 [0];
    }
}
c. Copy the configuration files to mfs-slave, load the drbd module, and initialize the metadata
[root@mfs-master drbd.d]# scp global_common.conf mfsdata.res mfs-slave:/etc/drbd.d/
[root@mfs-master ~]# depmod
[root@mfs-master ~]# modprobe drbd
# In this VM the drbd module only loaded after a reboot; the cause is unclear
[root@mfs-master ~]# lsmod |grep drbd
drbd                  365931  2
libcrc32c               1246  1 drbd
# Initialize the metadata on mfs-master
[root@mfs-master ~]# drbdadm create-md mfsdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
# Load the module and initialize the metadata on mfs-slave
[root@mfs-slave ~]# depmod
[root@mfs-slave ~]# modprobe drbd
[root@mfs-slave ~]# lsmod |grep drbd
drbd                  365931  0
libcrc32c               1246  1 drbd
[root@mfs-slave ~]# drbdadm create-md mfsdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
d. Start drbd on mfs-master and mfs-slave
# On mfs-master
[root@mfs-master ~]# /etc/init.d/drbd start
# On mfs-slave
[root@mfs-slave ~]# /etc/init.d/drbd start
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  Connected Secondary/Secondary Inconsistent/Inconsistent
# Promote mfs-master to primary
[root@mfs-master ~]# drbdadm -- --overwrite-data-of-peer primary mfsdata
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  SyncSource Primary/Secondary UpToDate/Inconsistent
        [>....................] sync'ed:  1.4% (15160/15364)M
# Mount the DRBD device on /data and write a test file, mfs-master.txt
[root@mfs-master ~]# mount /dev/drbd1 /data
[root@mfs-master ~]# touch /data/mfs-master.txt
[root@mfs-master ~]# ls /data/
lost+found  mfs-master.txt
# UpToDate/UpToDate means the primary and secondary are fully synchronized
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  Connected Primary/Secondary UpToDate/UpToDate /data ext4 15G 38M 14G 1%
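Rather than eyeballing `drbd-overview` during the initial sync, the state can be checked mechanically by matching the disk states in its output. A sketch; a captured status line is inlined here, while on a live node you would substitute `$(drbd-overview)`:

```shell
# Decide whether the initial sync has finished from a drbd-overview line.
status='1:mfsdata/0  Connected Primary/Secondary UpToDate/UpToDate /data ext4 15G 38M 14G 1%'

case "$status" in
    *UpToDate/UpToDate*) echo "sync complete" ;;
    *)                   echo "still syncing" ;;
esac
```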
e. Test DRBD
Normal state
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  Connected Primary/Secondary UpToDate/UpToDate /data ext4 15G 38M 14G 1%
# Note: mfs-master is the primary and mfs-slave the secondary
State after a simulated failure
[root@mfs-master ~]# umount /data
# Demote mfs-master to secondary
[root@mfs-master ~]# drbdadm secondary mfsdata
[root@mfs-master ~]# drbd-overview
  1:mfsdata/0  Connected Secondary/Secondary UpToDate/UpToDate
# Promote mfs-slave to primary
[root@mfs-slave ~]# drbdadm primary mfsdata
[root@mfs-slave ~]# drbd-overview
  1:mfsdata/0  Connected Primary/Secondary UpToDate/UpToDate
[root@mfs-slave ~]# mount /dev/drbd1 /mnt
# Check the test file; the result is correct
[root@mfs-slave ~]# ls /mnt
lost+found  mfs-master.txt
# Note: after the DRBD primary fails, promoting the standby to primary restores
# service, and the data is consistent
# Finally, restore DRBD to its original state
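The last comment says the original state is restored but does not show the commands. A sketch of that failback, mirroring the promotion steps above in reverse (the prompts indicate which node each command runs on):

```
# On mfs-slave: release the device and demote
umount /mnt
drbdadm secondary mfsdata

# On mfs-master: promote and remount
drbdadm primary mfsdata
mount /dev/drbd1 /data
```

As with the failover test, the filesystem must be unmounted before the node can be demoted; DRBD refuses to demote a device that is still in use.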
5. Installing MFS on mfs-master
Because MFS will rely on DRBD for high availability, the DRBD device must be mounted on the /usr/local/mfs directory.
[root@mfs-master ~]# mkdir /usr/local/mfs
[root@mfs-master ~]# mount /dev/drbd1 /usr/local/mfs/
[root@mfs-master ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        18G  1.4G   16G   9% /
tmpfs           242M     0  242M   0% /dev/shm
/dev/sda1       190M   48M  132M  27% /boot
/dev/drbd1       15G   38M   14G   1% /usr/local/mfs
MFS installation steps on mfs-master
# Install MFS
[root@mfs-master ~]# yum install zlib-devel -y
[root@mfs-master ~]# groupadd -g 1000 mfs
[root@mfs-master ~]# useradd -u 1000 -g mfs -s /sbin/nologin mfs
[root@mfs-master ~]# wget http://moosefs.org/tl_files/mfscode/mfs-1.6.27-5.tar.gz
[root@mfs-master ~]# tar xf mfs-1.6.27-5.tar.gz
[root@mfs-master ~]# cd mfs-1.6.27
[root@mfs-master mfs-1.6.27]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
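The post stops at `./configure`. For orientation only, a sketch of the steps that typically follow for an MFS 1.6.x master built with this prefix; the exact config and data paths depend on the build and should be checked against the installed tree rather than taken from here:

```
make && make install
# Activate the sample configs shipped with the tarball
cd /usr/local/mfs/etc
cp mfsmaster.cfg.dist mfsmaster.cfg
cp mfsexports.cfg.dist mfsexports.cfg
# Seed the empty metadata file, then start the master
cd /usr/local/mfs/var/mfs
cp metadata.mfs.empty metadata.mfs
/usr/local/mfs/sbin/mfsmaster start
```

Because everything lives under the DRBD-backed mount, both the binaries and the metadata follow the device on failover.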
This article comes from the "ly36843运维" blog; please retain this attribution: http://ly36843.blog.51cto.com/3120113/1676308