Linux HA Cluster: MySQL High Availability with Corosync + Pacemaker + DRBD
Tags: linux, corosync, pacemaker, drbd, mysql
Outline
I. System environment and required packages
II. Preparing the high-availability environment
III. Installing and configuring DRBD
IV. Installing and configuring Corosync
V. Configuring resources with crm
I. System environment and required packages
System environment
CentOS 5.8 x86_64
node1.network.com node1 172.16.1.101
node2.network.com node2 172.16.1.105
Packages
corosync-1.2.7-1.1.el5.x86_64.rpm
pacemaker-1.0.12-1.el5.centos.x86_64.rpm
drbd83-8.3.15-2.el5.centos.x86_64.rpm
kmod-drbd83-8.3.15-3.el5.centos.x86_64.rpm
II. Preparing the high-availability environment
1. Time synchronization
[root@node1 ~]# ntpdate s2c.time.edu.cn
[root@node2 ~]# ntpdate s2c.time.edu.cn

If needed, define a crontab entry for this on each node:
[root@node1 ~]# which ntpdate
/sbin/ntpdate
[root@node1 ~]# echo "*/5 * * * * /sbin/ntpdate s2c.time.edu.cn &> /dev/null" >> /var/spool/cron/root
[root@node1 ~]# crontab -l
*/5 * * * * /sbin/ntpdate s2c.time.edu.cn &> /dev/null
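The same crontab entry is needed on node2. Once SSH trust is in place (step 3 below) it can be pushed remotely; a minimal sketch, assuming ntpdate lives at the same path on node2:

[root@node1 ~]# ssh node2 'echo "*/5 * * * * /sbin/ntpdate s2c.time.edu.cn &> /dev/null" >> /var/spool/cron/root'
[root@node1 ~]# ssh node2 'crontab -l'    # confirm the entry landed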
2. The hostname must match the output of uname -n and be resolvable via /etc/hosts
node1
[root@node1 ~]# hostname node1.network.com
[root@node1 ~]# uname -n
node1.network.com
[root@node1 ~]# sed -i 's@\(HOSTNAME=\).*@\1node1.network.com@g' /etc/sysconfig/network
node2
[root@node2 ~]# hostname node2.network.com
[root@node2 ~]# uname -n
node2.network.com
[root@node2 ~]# sed -i 's@\(HOSTNAME=\).*@\1node2.network.com@g' /etc/sysconfig/network
Add the hosts entries on node1:
[root@node1 ~]# vim /etc/hosts
[root@node1 ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.101 node1.network.com node1
172.16.1.105 node2.network.com node2
Copy this hosts file to node2:
[root@node1 ~]# scp /etc/hosts node2:/etc/
The authenticity of host 'node2 (172.16.1.105)' can't be established.
RSA key fingerprint is 13:42:92:7b:ff:61:d8:f3:7c:97:5f:22:f6:71:b3:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,172.16.1.105' (RSA) to the list of known hosts.
root@node2's password:
hosts                     100%  233     0.2KB/s   00:00
3. SSH mutual trust (key-based authentication)
node1
[root@node1 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
Generating public/private rsa key pair.
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? n    # a key pair was already generated here
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node2
root@node2's password:
Now try logging into the machine, with "ssh 'node2'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[root@node1 ~]# setenforce 0
[root@node1 ~]# ssh node2 'ifconfig'
eth0      Link encap:Ethernet  HWaddr 00:0C:29:D6:03:52
          inet addr:172.16.1.105  Bcast:255.255.255.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fed6:352/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9881 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11220 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5898514 (5.6 MiB)  TX bytes:1850217 (1.7 MiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1112 (1.0 KiB)  TX bytes:1112 (1.0 KiB)
node2 needs the same mutual trust configured in the opposite direction; the steps are identical, as sketched below.
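For completeness, the node2 side mirrors the commands above (a sketch):

[root@node2 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
[root@node2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node1
[root@node2 ~]# ssh node1 'date'    # should print node1's date without a password prompt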
4. Disable iptables and SELinux
node1
[root@node1 ~]# service iptables stop
[root@node1 ~]# vim /etc/sysconfig/selinux
[root@node1 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted
node2
[root@node2 ~]# service iptables stop
[root@node2 ~]# vim /etc/sysconfig/selinux
[root@node2 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted
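Note that service iptables stop does not survive a reboot, and SELINUX=disabled only takes effect after one. To cover both cases, run on each node (a sketch, shown for node1):

[root@node1 ~]# chkconfig iptables off   # keep the firewall off across reboots
[root@node1 ~]# setenforce 0             # put SELinux into permissive mode right now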
III. Installing and configuring DRBD
1. Configure the 163.com yum repo (you could also download the two packages manually; here the yum repo is used directly)
node1
[root@node1 ~]# wget http://mirrors.163.com/.help/CentOS5-Base-163.repo
[root@node1 ~]# yum repolist
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * epel: mirrors.hustunique.com
repo id   repo name                                        status
addons    CentOS-5 - Addons - 163.com                          0
base      CentOS-5 - Base - 163.com                        3,667
epel      Extra Packages for Enterprise Linux 5 - x86_64   6,755
extras    CentOS-5 - Extras - 163.com                        266
updates   CentOS-5 - Updates - 163.com                       593
repolist: 11,281
(Note: yum only reads repo files from /etc/yum.repos.d/, so run wget from that directory or move the downloaded .repo file there first.)
node2
[root@node2 ~]# wget http://mirrors.163.com/.help/CentOS5-Base-163.repo
[root@node2 ~]# yum repolist
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
 * epel: mirrors.hustunique.com
repo id   repo name                                        status
addons    CentOS-5 - Addons - 163.com                          0
base      CentOS-5 - Base - 163.com                        3,667
epel      Extra Packages for Enterprise Linux 5 - x86_64   6,755
extras    CentOS-5 - Extras - 163.com                        266
updates   CentOS-5 - Updates - 163.com                       593
repolist: 11,281
2. Install drbd and kmod-drbd
node1
[root@node1 ~]# yum install -y drbd83 kmod-drbd83
node2
[root@node2 ~]# yum install -y drbd83 kmod-drbd83
3. Prepare a partition for the DRBD device
node1
[root@node1 ~]# fdisk /dev/hda

The number of cylinders for this disk is set to 44384.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-44384, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-44384, default 44384): +1G

Command (m for help): p

Disk /dev/hda: 21.4 GB, 21474836480 bytes
15 heads, 63 sectors/track, 44384 cylinders
Units = cylinders of 945 * 512 = 483840 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1        2068      977098+  83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]# partprobe /dev/hda
[root@node1 ~]# cat /proc/partitions
major minor  #blocks  name

   3     0   20971520 hda
   3     1     977098 hda1
   8     0   20971520 sda
   8     1     104391 sda1
   8     2   20860402 sda2
 253     0   18776064 dm-0
 253     1    2064384 dm-1

node2
[root@node2 ~]# fdisk /dev/hda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 44384.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-44384, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-44384, default 44384): +1G

Command (m for help): p

Disk /dev/hda: 21.4 GB, 21474836480 bytes
15 heads, 63 sectors/track, 44384 cylinders
Units = cylinders of 945 * 512 = 483840 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1        2068      977098+  83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@node2 ~]# partprobe /dev/hda
[root@node2 ~]# cat /proc/partitions
major minor  #blocks  name

   3     0   20971520 hda
   3     1     977098 hda1
   8     0   20971520 sda
   8     1     104391 sda1
   8     2   20860402 sda2
 253     0   18776064 dm-0
 253     1    2064384 dm-1
4. Edit the main configuration file
[root@node1 ~]# cat /etc/drbd.conf
#
# please have a look at the example configuration file in
# /usr/share/doc/drbd83/drbd.conf
#
[root@node1 ~]# cp /usr/share/doc/drbd83-8.3.15/drbd.conf /etc/drbd.conf
cp: overwrite `/etc/drbd.conf'? y
[root@node1 ~]# cat /etc/drbd.conf
# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";

Edit the global/common configuration file:
[root@node1 ~]# vim /etc/drbd.d/global_common.conf
[root@node1 ~]# cat /etc/drbd.d/global_common.conf
global {
        usage-count no;    # opt out of DRBD's usage statistics
        # minor-count dialog-refresh disable-ip-verification
}

common {
        protocol C;

        handlers {
                # These are EXAMPLE handlers only.
                # They may have severe implications,
                # like hard resetting the node under certain circumstances.
                # Be careful when chosing your poison.
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }

        disk {
                on-io-error detach;    # detach the backing disk when an I/O error occurs
                # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
                # no-disk-drain no-md-flushes max-bio-bvecs
        }

        net {
                cram-hmac-alg "sha1";          # peer authentication algorithm
                shared-secret "qYQ1cwOFC6E=";  # shared secret used for authentication
                # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
                # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
                # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        }

        syncer {
                rate 300M;    # cap the resynchronization rate
                # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        }
}
5. Define a resource file, /etc/drbd.d/mysql.res
[root@node1 ~]# vim /etc/drbd.d/mysql.res
[root@node1 ~]# cat /etc/drbd.d/mysql.res
resource mysql {
  on node1.network.com {
    device    /dev/drbd0;
    disk      /dev/hda1;
    address   172.16.1.101:7789;
    meta-disk internal;
  }
  on node2.network.com {
    device    /dev/drbd0;
    disk      /dev/hda1;
    address   172.16.1.105:7789;
    meta-disk internal;
  }
}
Copy the configuration file and the resource file to node2:
[root@node1 ~]# scp -r /etc/drbd.* node2:/etc/
drbd.conf                 100%  133     0.1KB/s   00:00
global_common.conf        100% 1688     1.7KB/s   00:00
mysql.res                 100%  292     0.3KB/s   00:00
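Before initializing metadata it is worth letting drbdadm parse the configuration on both nodes; a quick sanity check (a sketch):

[root@node1 ~]# drbdadm dump mysql              # prints the parsed resource, or fails on syntax errors
[root@node1 ~]# ssh node2 'drbdadm dump mysql'  # same check on the peer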
6. Initialize the defined resource on both nodes and start the service
Initialize the resource; run this on both nodes:
[root@node1 ~]# drbdadm create-md mysql
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
[root@node2 ~]# drbdadm create-md mysql
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

Start the drbd service, again on both nodes:
[root@node1 ~]# service drbd start
Starting DRBD resources: [ d(mysql) s(mysql) n(mysql) ]......
[root@node2 ~]# service drbd start
Starting DRBD resources: [ d(mysql) s(mysql) n(mysql) ].

Check the status:
[root@node1 ~]# cat /proc/drbd
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by mockbuild@builder10.centos.org, 2013-03-27 16:01:26
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:977028

The drbd-overview command shows the same information:
[root@node1 ~]# drbd-overview
  0:mysql  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----

Both nodes are currently in the Secondary role, so one of them must be promoted to Primary. On the node that should become Primary, run:
[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary mysql

Checking the status again shows that the initial synchronization has started:
[root@node1 ~]# cat /proc/drbd
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by mockbuild@builder10.centos.org, 2013-03-27 16:01:26
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:169252 nr:0 dw:0 dr:177024 al:0 bm:10 lo:4 pe:12 ua:64 ap:0 ep:1 wo:b oos:809220
        [==>.................] sync'ed: 17.6% (809220/977028)K
        finish: 0:00:09 speed: 83,904 (83,904) K/sec

Once synchronization completes, check again: both sides are UpToDate and the nodes now have primary/secondary roles.
[root@node1 ~]# drbd-overview    # Primary/Secondary means this node is primary, the peer is secondary
  0:mysql  Connected Primary/Secondary UpToDate/UpToDate C r-----
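To follow the initial synchronization interactively instead of re-running cat, something like this works (a sketch):

[root@node1 ~]# watch -n1 'cat /proc/drbd'    # refresh the sync status every second; Ctrl-C to exit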
7. Create a filesystem and mount it
A DRBD device can only be mounted on the Primary node, so it can only be formatted after a primary has been designated:
[root@node1 ~]# mke2fs -j /dev/drbd0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
122368 inodes, 244257 blocks
12212 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=251658240
8 block groups
32768 blocks per group, 32768 fragments per group
15296 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Create the mount point:
[root@node1 ~]# mkdir /mydata
Mount it:
[root@node1 ~]# mount /dev/drbd0 /mydata
[root@node1 ~]# ls /mydata/
lost+found
Unmount it again:
[root@node1 ~]# umount /mydata/
[root@node1 ~]# drbdadm secondary mysql

Then stop the drbd service on both nodes and make sure it does not start at boot, since from now on the cluster, not the init system, will manage it:
[root@node1 ~]# service drbd stop
Stopping all DRBD resources: .
[root@node1 ~]# chkconfig drbd off
[root@node1 ~]# chkconfig --list drbd
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@node2 ~]# service drbd stop
Stopping all DRBD resources: .
[root@node2 ~]# chkconfig drbd off
[root@node2 ~]# chkconfig --list drbd
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
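Before stopping the service, it is worth verifying replication end to end; the fstab file that shows up in /mydata later in section V suggests a manual switchover test along these lines (a sketch):

[root@node1 ~]# mount /dev/drbd0 /mydata
[root@node1 ~]# cp /etc/fstab /mydata/   # write something through the DRBD device
[root@node1 ~]# umount /mydata
[root@node1 ~]# drbdadm secondary mysql  # demote node1
[root@node2 ~]# drbdadm primary mysql    # promote node2 and check the data arrived
[root@node2 ~]# mount /dev/drbd0 /mydata
[root@node2 ~]# ls /mydata               # expect: fstab  lost+found
[root@node2 ~]# umount /mydata
[root@node2 ~]# drbdadm secondary mysql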
IV. Installing and configuring Corosync
1. Configure the yum repo (you could also download the two packages manually; here the yum repo is used directly)
node1
[root@node1 ~]# cd /etc/yum.repos.d/
[root@node1 yum.repos.d]# wget http://clusterlabs.org/rpm/epel-5/clusterlabs.repo
[root@node1 yum.repos.d]# yum install -y pacemaker corosync
node2
[root@node2 ~]# cd /etc/yum.repos.d/
[root@node2 yum.repos.d]# wget http://clusterlabs.org/rpm/epel-5/clusterlabs.repo
[root@node2 yum.repos.d]# yum install -y pacemaker corosync
2. Edit the main corosync configuration file
[root@node1 ~]# cd /etc/corosync/
[root@node1 corosync]# ls
corosync.conf.example  service.d  uidgid.d
[root@node1 corosync]# cp corosync.conf.example corosync.conf
[root@node1 corosync]# vi corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2      # config file format version; 2 is currently the only valid value, do not change it
        secauth: on     # enable authentication; costs noticeable CPU when aisexec is in use
        threads: 2      # thread count, sized to the number of CPUs and cores
        interface {
                ringnumber: 0               # redundant ring number; with multiple NICs, each NIC can be placed in its own ring
                bindnetaddr: 172.16.1.0     # note: this is the network address, not a host IP
                mcastaddr: 226.94.19.37     # multicast address for heartbeat traffic
                mcastport: 5405             # multicast port for heartbeat traffic
        }
}

logging {
        fileline: off   # print file and line numbers in log messages
        to_stderr: no   # whether to log to standard error (the console); best left off
        to_logfile: yes # whether to log to a file
        to_syslog: no   # whether to log to syslog; enable only one of logfile/syslog
        logfile: /var/log/cluster/corosync.log  # where the log file is written
        debug: off      # whether to enable debug output
        timestamp: on   # print timestamps; useful for troubleshooting, but costs CPU
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

service {
        ver: 0
        name: pacemaker # run pacemaker as a corosync plugin, so starting corosync also starts pacemaker
        # use_mgmtd: yes
}

aisexec {
        user: root
        group: root
}

amf {
        mode: disabled
}

Note: see man corosync.conf for the details of each directive.
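If you are unsure what to put in bindnetaddr, it can be derived from the node's IP and netmask; a sketch using the ipcalc utility from the initscripts package:

[root@node1 ~]# ipcalc -n 172.16.1.101 255.255.255.0
NETWORK=172.16.1.0    # this network address is what bindnetaddr expects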
3. Generate the authkey file
[root@node1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
[root@node1 corosync]# ll
total 40
-r-------- 1 root root  128 Jan  8 14:50 authkey    # note the mode: 400
-rw-r--r-- 1 root root  536 Jan  8 14:48 corosync.conf
-rw-r--r-- 1 root root  436 Jul 28  2010 corosync.conf.example
drwxr-xr-x 2 root root 4096 Jul 28  2010 service.d
drwxr-xr-x 2 root root 4096 Jul 28  2010 uidgid.d
4. Copy the main config file and the authentication key to node2
[root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/
authkey                   100%  128     0.1KB/s   00:00
corosync.conf             100%  536     0.5KB/s   00:00
Since logging to a file is enabled, the log directory must also be created on both nodes:
[root@node1 corosync]# mkdir /var/log/cluster
[root@node1 corosync]# ssh node2 'mkdir /var/log/cluster'
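scp -p preserves the file mode; it is still worth confirming the key stayed root-only readable on node2, since the authkey should never be world-readable (a quick check, as a sketch):

[root@node1 corosync]# ssh node2 'ls -l /etc/corosync/authkey'
-r-------- 1 root root 128 Jan  8 14:50 /etc/corosync/authkey    # expected: mode 400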
5. Start corosync
Start corosync first, then review the startup messages; node1 is shown here:
[root@node1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync):               [  OK  ]
[root@node1 ~]# ssh node2 'service corosync start'
Starting Corosync Cluster Engine (corosync):               [  OK  ]

Check that the corosync engine started properly:
[root@node1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Jan 08 15:37:22 corosync [MAIN  ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.
Jan 08 15:37:22 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

Check that the initial membership notifications went out normally:
[root@node1 corosync]# grep TOTEM /var/log/cluster/corosync.log
Jan 08 15:37:22 corosync [TOTEM ] Initializing transport (UDP/IP).
Jan 08 15:37:22 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jan 08 15:37:22 corosync [TOTEM ] The network interface [172.16.1.101] is now up.
Jan 08 15:37:22 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jan 08 15:37:30 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.

Check whether any errors occurred during startup (these STONITH errors are expected at this point; they are resolved in the next step):
[root@node1 corosync]# grep ERROR: /var/log/cluster/corosync.log
Jan 08 15:37:46 node1.network.com pengine: [3688]: ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
Jan 08 15:37:46 node1.network.com pengine: [3688]: ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
Jan 08 15:37:46 node1.network.com pengine: [3688]: ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity

Check that pacemaker started properly:
[root@node1 corosync]# grep pcmk_startup /var/log/cluster/corosync.log
Jan 08 15:37:22 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Jan 08 15:37:22 corosync [pcmk  ] Logging: Initialized pcmk_startup
Jan 08 15:37:22 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Jan 08 15:37:22 corosync [pcmk  ] info: pcmk_startup: Service: 9
Jan 08 15:37:22 corosync [pcmk  ] info: pcmk_startup: Local hostname: node1.network.com

Check the cluster node status:
[root@node1 ~]# crm status
============
Last updated: Fri Jan  8 18:32:18 2016
Stack: openais
Current DC: node1.network.com - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ node1.network.com node2.network.com ]
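A one-liner to check the daemon on both nodes at once (a sketch; relies on the SSH trust configured earlier, and assumes the init script supports the status action):

[root@node1 ~]# for n in node1 node2; do echo "== $n =="; ssh $n 'service corosync status'; done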
6. Configure default resource properties
crm can be run on either node; all operations are automatically synchronized to the DC.
[root@node1 ~]# crm
crm(live)# status
============
Last updated: Fri Jan  8 19:48:51 2016
Stack: openais
Current DC: node2.network.com - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ node1.network.com node2.network.com ]
crm(live)# configure
crm(live)configure# verify
crm_verify[10513]: 2016/01/08_19:48:54 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
crm_verify[10513]: 2016/01/08_19:48:54 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
crm_verify[10513]: 2016/01/08_19:48:54 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
crm(live)configure# property stonith-enabled=false      # disable STONITH (no fencing devices in this setup)
crm(live)configure# property no-quorum-policy=ignore    # what to do when the partition has no quorum (needed for a 2-node cluster)
crm(live)configure# rsc_defaults target-role=stopped    # newly defined resources stay stopped until started explicitly
crm(live)configure# verify
crm(live)configure# commit

View the resulting definitions in the configuration:
crm(live)configure# show
node node1.network.com
node node2.network.com
property $id="cib-bootstrap-options" \
        dc-version="1.0.12-unknown" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        target-role="stopped"
V. Configuring resources with crm
1. Define the DRBD master/slave resource
First define a primitive resource (ignore_deprecation=true is needed because the ocf:heartbeat:drbd agent has been deprecated in favor of ocf:linbit:drbd):
crm(live)configure# primitive mysqldrbd ocf:heartbeat:drbd params drbd_resource=mysql ignore_deprecation=true op start timeout=240s op stop timeout=100s op monitor role=Master interval=10s timeout=20s op monitor role=Slave interval=20s timeout=20s
crm(live)configure# verify
Then wrap this primitive in a master/slave resource:
crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node node1.network.com
node node2.network.com
primitive mysqldrbd ocf:heartbeat:drbd \
        params drbd_resource="mysql" ignore_deprecation="true" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="100s" \
        op monitor interval="10s" role="Master" timeout="20s" \
        op monitor interval="20s" role="Slave" timeout="20s"
ms ms_mysqldrbd mysqldrbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
property $id="cib-bootstrap-options" \
        dc-version="1.0.12-unknown" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        target-role="stopped"
2. Define the mysqlstore resource
crm(live)configure# primitive mysqlstore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata fstype=ext3 op start timeout=60s op stop timeout=60s
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node node1.network.com
node node2.network.com
primitive mysqldrbd ocf:heartbeat:drbd \
        params drbd_resource="mysql" ignore_deprecation="true" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="100s" \
        op monitor interval="10s" role="Master" timeout="20s" \
        op monitor interval="20s" role="Slave" timeout="20s"
primitive mysqlstore ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mydata" fstype="ext3" \
        op start interval="0" timeout="60s" \
        op stop interval="0" timeout="60s"
ms ms_mysqldrbd mysqldrbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
property $id="cib-bootstrap-options" \
        dc-version="1.0.12-unknown" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        target-role="stopped"
3. Define a colocation constraint
crm(live)configure# colocation mysqlstore_with_ms_mysqldrbd inf: mysqlstore ms_mysqldrbd:Master
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node node1.network.com
node node2.network.com
primitive mysqldrbd ocf:heartbeat:drbd \
        params drbd_resource="mysql" ignore_deprecation="true" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="100s" \
        op monitor interval="10s" role="Master" timeout="20s" \
        op monitor interval="20s" role="Slave" timeout="20s"
primitive mysqlstore ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mydata" fstype="ext3" \
        op start interval="0" timeout="60s" \
        op stop interval="0" timeout="60s"
ms ms_mysqldrbd mysqldrbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation mysqlstore_with_ms_mysqldrbd inf: mysqlstore ms_mysqldrbd:Master
property $id="cib-bootstrap-options" \
        dc-version="1.0.12-unknown" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        target-role="stopped"
4. Define an ordering constraint
crm(live)configure# order mysqlstore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mysqlstore:start
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node node1.network.com
node node2.network.com
primitive mysqldrbd ocf:heartbeat:drbd \
        params drbd_resource="mysql" ignore_deprecation="true" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="100s" \
        op monitor interval="10s" role="Master" timeout="20s" \
        op monitor interval="20s" role="Slave" timeout="20s"
primitive mysqlstore ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mydata" fstype="ext3" \
        op start interval="0" timeout="60s" \
        op stop interval="0" timeout="60s"
ms ms_mysqldrbd mysqldrbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation mysqlstore_with_ms_mysqldrbd inf: mysqlstore ms_mysqldrbd:Master
order mysqlstore_after_ms_mysqldrbd inf: ms_mysqldrbd:promote mysqlstore:start
property $id="cib-bootstrap-options" \
        dc-version="1.0.12-unknown" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        target-role="stopped"
5. Start the configured resources
crm(live)configure# cd
crm(live)# resource
crm(live)resource# start ms_mysqldrbd
crm(live)resource# start mysqlstore
crm(live)resource# cd
crm(live)# status
============
Last updated: Sun Jan 10 00:05:54 2016
Stack: openais
Current DC: node1.network.com - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node1.network.com node2.network.com ]
 Master/Slave Set: ms_mysqldrbd
     Masters: [ node1.network.com ]
     Slaves: [ node2.network.com ]
 mysqlstore     (ocf::heartbeat:Filesystem):    Started node1.network.com

Now check whether /mydata is mounted:
crm(live)# exit
bye
[root@node1 ~]# ls /mydata/
fstab  lost+found
6. Test failover
At this point node2 certainly has no DRBD device mounted:
[root@node2 ~]# ls /mydata/

Put node1 into standby and see whether the resources fail over automatically:
[root@node2 ~]# crm node standby node1.network.com
[root@node2 ~]# crm status
============
Last updated: Sun Jan 10 00:09:43 2016
Stack: openais
Current DC: node1.network.com - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Node node1.network.com: standby
Online: [ node2.network.com ]
 Master/Slave Set: ms_mysqldrbd
     Masters: [ node2.network.com ]
     Stopped: [ mysqldrbd:0 ]
 mysqlstore     (ocf::heartbeat:Filesystem):    Started node2.network.com

The failover succeeded; node2 has mounted the DRBD device:
[root@node2 ~]# ls /mydata/
fstab  lost+found

Bring node1 back online; the resources move back to it:
[root@node2 ~]# crm node online node1.network.com
[root@node2 ~]# crm status
============
Last updated: Sun Jan 10 00:11:22 2016
Stack: openais
Current DC: node1.network.com - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node1.network.com node2.network.com ]
 Master/Slave Set: ms_mysqldrbd
     Masters: [ node1.network.com ]
     Slaves: [ node2.network.com ]
 mysqlstore     (ocf::heartbeat:Filesystem):    Started node1.network.com
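The cluster so far manages only the DRBD device and its filesystem. To complete the MySQL high availability the title promises, a mysqld resource and a virtual IP would typically be layered on top and tied to the same node. A rough sketch, not part of the original walkthrough: it assumes mysqld is installed on both nodes as an LSB init script with its datadir on /mydata, and that 172.16.1.100 is a free address on this network (both are assumptions):

crm(live)configure# primitive mysqld lsb:mysqld op monitor interval=30s timeout=20s
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip=172.16.1.100 nic=eth0
crm(live)configure# colocation mysqld_with_mysqlstore inf: mysqld mysqlstore
crm(live)configure# colocation myip_with_ms_mysqldrbd inf: myip ms_mysqldrbd:Master
crm(live)configure# order mysqld_after_mysqlstore mandatory: mysqlstore:start mysqld:start
crm(live)configure# verify
crm(live)configure# commit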
Source: the "Hello,Linux" blog, http://soysauce93.blog.51cto.com/7589461/1733328