1. Two CentOS 7.0 servers, built as KVM virtual machines.
2. The two nodes resolve each other via the hosts file.
3. iptables and SELinux disabled; SSH enabled.
4. Configure the yum repository; install pacemaker, corosync, and drbd.
5. Configure DRBD, using protocol C replication, and verify the result.
6. Configure corosync; manage the VIP, Filesystem, and DRBD block device through crm.
7. Verify the result: does automatic failover occur on failure?
1. drbdAA: eth0 9.111.222.59, eth1 192.168.100.59
   drbdBB: eth0 9.111.222.60, eth1 192.168.100.60
2. On both nodes:
systemctl stop firewalld
systemctl disable firewalld
vim /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
reboot for the change to take effect
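The SELinux edit can also be scripted with sed instead of vim. A minimal sketch, run here against a scratch copy of the file (on a real node you would target /etc/selinux/config as root):

```shell
# Demonstration on a temp copy; substitute /etc/selinux/config on a real node.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"   # stand-in for the real file
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
state=$(grep '^SELINUX=' "$cfg")
echo "$state"
rm -f "$cfg"
```

A reboot (or setenforce 0 for the running system) is still needed afterwards, as noted above.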
3. On both nodes:
yum clean all
yum makecache
4. Install DRBD
Install from yum here; building from source is error-prone. Run on both nodes:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org   # import the elrepo signing key
yum install http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm   # add the elrepo repository
yum install drbd84-utils kmod-drbd84   # install DRBD
5. Attach a disk. The procedure is the same for KVM, VirtualBox, and VMware Workstation guests. Both nodes must use a disk of the same size with the same device name; here both are 8 GB at /dev/sda.
6. Run fdisk -l to confirm the disk is visible, then partition it:
fdisk /dev/sda
m       (show help)
p       (print the partition table)
n       (create a new partition)
enter   (accept the default first sector)
enter   (accept the default last sector)
w       (write the table and exit)
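The interactive dialog above can also be fed to fdisk non-interactively. A hedged sketch: the block below only builds and counts the keystroke sequence; the commented-out line is what you would actually run, as root, against the real empty disk.

```shell
# Keystrokes matching the dialog above: new primary partition 1, default
# first/last sectors, then write.
keys='n
p
1


w
'
# printf '%s' "$keys" | fdisk /dev/sda   # destructive; only on the real disk
nkeys=$(printf '%s' "$keys" | wc -l)
echo "queued $nkeys keystrokes"
```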
7. DRBD uses port 7789. In production you would normally add an iptables rule for it; in this lab environment, keeping iptables off is enough.
8. The DRBD configuration is modular: drbd.conf is the main configuration file, and the per-module configuration files live under /etc/drbd.d/.
9. [root@drbdAA ~]# vim /etc/drbd.d/global_common.conf
global {
usage-count no;
# minor-count dialog-refresh disable-ip-verification
# cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long 600;
}
common {
handlers {
pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
#fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
# split-brain "/usr/lib/drbd/notify-split-brain.sh root";
# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
#after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
}
startup {
# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
#wfc-timeout 30;
#degr-wfc-timeout 30;
}
options {
# cpu-mask on-no-data-accessible
}
disk {
on-io-error detach; # on an I/O error, detach the backing disk
fencing resource-only;
}
net {
protocol C; # fully synchronous replication
cram-hmac-alg "sha1"; # peer-authentication hash algorithm
shared-secret "mydrbd"; # peer-authentication shared key
}
syncer {
rate 1000M; # resynchronization rate cap
}
}
Add the resource definition:
vim /etc/drbd.d/r0.res
resource r0 {
on drbdAA {
device /dev/drbd0;
disk /dev/sda1;
address 192.168.100.59:7789;
meta-disk internal;
}
on drbdBB {
device /dev/drbd0;
disk /dev/sda1;
address 192.168.100.60:7789;
meta-disk internal;
}
}
Copy the configuration to drbdBB:
scp -r /etc/drbd.d/* root@drbdBB:/etc/drbd.d/
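After the scp it is worth confirming that the two copies really are identical. A sketch using md5 checksums, demonstrated on temp files; on the real nodes you would compare md5sum /etc/drbd.d/r0.res locally and over ssh on drbdBB.

```shell
src=$(mktemp); dst=$(mktemp)
echo 'resource r0 { }' > "$src"   # stand-in for /etc/drbd.d/r0.res
cp "$src" "$dst"                  # stands in for the scp above
if [ "$(md5sum < "$src")" = "$(md5sum < "$dst")" ]; then
  verify=match
else
  verify=differ
fi
echo "configs $verify"
rm -f "$src" "$dst"
```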
Initialize the resource metadata on both nodes:
[root@drbdAA ~]# drbdadm create-md r0
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
[root@drbdBB ~]# drbdadm create-md r0
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
Start the DRBD service on both drbdAA and drbdBB:
systemctl start drbd
Check the startup status:
drbdAA:
[root@drbdAA ~]# cat /proc/drbd
version: 8.4.2 (api:1/proto:86-101)
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by dag@Build64R6, 2012-09-06 08:16:10
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:20970844
drbdBB:
[root@drbdBB ~]# cat /proc/drbd
version: 8.4.2 (api:1/proto:86-101)
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by dag@Build64R6, 2012-09-06 08:16:10
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:20970844
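The fields that matter in each /proc/drbd status line are cs: (connection state), ro: (roles, local/peer) and ds: (disk states, local/peer). A small awk sketch that extracts them from the sample line above:

```shell
# Sample status line copied from the output above.
line='0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----'
fields=$(echo "$line" | awk '{for (i = 1; i <= NF; i++) if ($i ~ /^(cs|ro|ds):/) print $i}')
echo "$fields"
```

Here both ro: sides are Secondary and both ds: sides are Inconsistent, which is why a primary must be chosen next.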
10. Check with the drbd-overview command:
drbdAA:
[root@drbdAA ~]# drbd-overview
0:r0/0 Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
drbdBB:
[root@drbdBB ~]# drbd-overview
0:r0/0 Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
The output above shows that both nodes are currently Secondary, so the next step is to promote one of them to Primary. On the node to be promoted, run drbdsetup /dev/drbd0 primary -o, or alternatively drbdadm -- --overwrite-data-of-peer primary r0.
11. Set drbdAA as the primary node
[root@drbdAA ~]# drbdadm primary --force r0
[root@drbdAA ~]# drbd-overview   # drbdAA is the primary
0:r0/0 SyncSource Primary/Secondary UpToDate/Inconsistent C r---n-
[>...................] sync'ed: 5.1% (19440/20476)M
Note: the initial synchronization is now running; it takes a while to finish.
[root@drbdBB ~]# drbd-overview   # drbdBB is the secondary
0:r0/0 SyncTarget Secondary/Primary Inconsistent/UpToDate C r-----
[==>.................] sync'ed: 17.0% (17016/20476)M
Once the synchronization completes, check again:
[root@drbdAA ~]# drbd-overview
0:r0/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@drbdBB ~]# drbd-overview
0:r0/0 Connected Secondary/Primary UpToDate/UpToDate C r-----
Format the device and mount it:
[root@drbdAA ~]# mkfs.ext4 /dev/drbd0
[root@drbdAA ~]# mount /dev/drbd0 /mnt
[root@drbdAA ~]# mount | grep /dev/drbd0
Switching the Primary and Secondary roles
Note: in the Primary/Secondary model, only one node can be Primary at any given moment. To swap roles, the current Primary must first be demoted to Secondary; only then can the former Secondary be promoted to Primary.
drbdAA:
[root@drbdAA ~]# umount /mnt/
[root@drbdAA ~]# drbdadm secondary r0
Check drbdAA's status:
[root@drbdAA ~]# drbd-overview
0:r0/0 Connected Secondary/Secondary UpToDate/UpToDate C r-----
drbdBB:
[root@drbdBB ~]# drbdadm primary r0
Check drbdBB's status:
[root@drbdBB ~]# drbd-overview
0:r0/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@drbdBB ~]# mount /dev/drbd0 /mnt/
Use the following command to check that the files previously copied to this device on the old primary node still exist:
[root@drbdBB ~]# ll /mnt/
total 20
-rw-r--r-- 1 root root   884 Aug 17 13:50 inittab
drwx------ 2 root root 16384 Aug 17 13:49 lost+found
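The manual switchover above can be wrapped in two small helper functions. A dry-run sketch: with RUN=echo the helpers only print the commands they would run (so this block touches no real cluster); clearing RUN would execute them, assuming drbdadm and the r0 resource from this article.

```shell
demote() {    # run on the current primary
  ${RUN:-echo} umount /mnt
  ${RUN:-echo} drbdadm secondary r0
}
promote() {   # then run on the other node
  ${RUN:-echo} drbdadm primary r0
  ${RUN:-echo} mount /dev/drbd0 /mnt
}
plan=$(RUN=echo demote; RUN=echo promote)
echo "$plan"
```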
DRBD dual-primary configuration example
In drbd 8.4, the command to make a node primary for the first time is:
[root@drbdAA ~]# drbdadm primary --force r0
Example resource configuration for the dual-primary model:
resource r0 {
net {
protocol C;
allow-two-primaries yes;
}
startup {
become-primary-on both;
}
disk {
fencing resource-and-stonith;
}
handlers {
# Make sure the other node is confirmed
# dead after this!
outdate-peer "/sbin/kill-other-node.sh";
}
on drbdAA {
device /dev/drbd0;
disk /dev/sda1;
address 192.168.100.59:7789;
meta-disk internal;
}
on drbdBB {
device /dev/drbd0;
disk /dev/sda1;
address 192.168.100.60:7789;
meta-disk internal;
}
}
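Step 6 of the outline (corosync with crm-managed VIP, Filesystem, and DRBD) is not covered above. As a hedged preview, here is a crm shell sketch of what that configuration might look like; the resource names, the VIP 192.168.100.100, the mount point /mnt, and the operation timings are all assumptions to verify against your pacemaker/crmsh versions:

```shell
# Assumed names throughout; adjust to your cluster. Uses the standard
# ocf:linbit:drbd, ocf:heartbeat:Filesystem and ocf:heartbeat:IPaddr2 agents.
crm configure primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor interval=30s role=Slave \
    op monitor interval=20s role=Master
crm configure ms ms_drbd_r0 p_drbd_r0 \
    meta master-max=1 clone-max=2 clone-node-max=1 notify=true
crm configure primitive p_fs ocf:heartbeat:Filesystem \
    params device=/dev/drbd0 directory=/mnt fstype=ext4
crm configure primitive p_vip ocf:heartbeat:IPaddr2 \
    params ip=192.168.100.100 cidr_netmask=24   # assumed VIP
crm configure group g_service p_fs p_vip
crm configure colocation col_service_on_master inf: g_service ms_drbd_r0:Master
crm configure order ord_promote_then_mount inf: ms_drbd_r0:promote g_service:start
```

The colocation and order constraints ensure the filesystem and VIP only start on the node currently holding the DRBD Master role, which is what makes automatic failover possible.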
To be continued.
This article comes from the "rain" blog; please keep this attribution: http://gushiren.blog.51cto.com/3392832/1687548