Tags: drbd corosync pacemaker mariadb
Building a Highly Available MariaDB with DRBD and corosync/pacemaker
Introduction to DRBD:
To keep data consistent when a high-availability cluster fails over from one node to another, shared storage is normally unavoidable, and there are generally only two choices: NAS and SAN. NAS shares at the file-system level; its performance is poor and access to it is restricted, which makes it inconvenient in many ways. SAN provides block-level shared storage, but it is expensive. When the budget is limited, DRBD is worth considering.
DRBD is a cross-host block-device mirroring system that runs with one primary and one secondary (only one of the two hosts may write; the secondary host only accepts data sent over from the primary). DRBD works inside the kernel: between the buffer cache and the disk scheduler it inserts a transparent copy step that duplicates every write and ships the copy over TCP/IP to the mirroring peer node, thereby providing data redundancy. Its reliability still needs attention, though: it can occasionally misbehave and lose data through split brain, where both hosts write at the same time and the file system and its data end up corrupted. It should therefore be used inside a high-availability cluster, together with a STONITH device, to guarantee that only one node is primary at a time.
Its working model resembles netfilter: the module providing the functionality lives in the kernel but does not necessarily do anything until rules defined with the user-space management tool (drbdadm) are pushed into the kernel.
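As a quick orientation (not part of the original walkthrough, and using the resource name mystore that is defined later), these are the drbdadm subcommands this setup relies on:
# drbdadm create-md mystore      # write DRBD metadata onto the backing partition
# drbdadm up mystore             # attach the disk and connect to the peer
# drbdadm primary mystore        # promote the local node to Primary
# drbdadm secondary mystore      # demote the local node to Secondary
# drbdadm role mystore           # show local/peer roles, e.g. Primary/Secondary
# drbdadm cstate mystore         # connection state, e.g. Connected
# drbdadm dstate mystore         # disk state, e.g. UpToDate/UpToDate
# cat /proc/drbd                 # full kernel-side status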
Architecture diagram: (the original image is not reproduced here; node1 is 172.16.20.100, node2 is 172.16.20.200, and the service VIP is 172.16.20.50)
Lab workflow:
1. Prepare two virtual machines as cluster nodes; set up time synchronization, hostname-based access and SSH mutual trust beforehand, and configure the IP addresses as shown in the architecture diagram
2. Provide a new partition (here /dev/sda3; the major/minor device numbers may differ between the two nodes, but the size must be identical); do not format it
3. Set up the DRBD primary/secondary pair
4. Install MariaDB to provide the database service
5. Install corosync and pacemaker, and provide the CLI: crmsh
6. Define the resources and bring up the highly available DRBD-backed storage service
Lab environment:
Three virtual machines (two cluster nodes, plus the 172.16.0.1 host that provides NTP and the package share)
Kernel: 2.6.32-504.el6.x86_64
Distribution: CentOS-6.6-x86_64
No STONITH device
I. Configuring DRBD
1. Prerequisites:
Time synchronization, hostname-based access, SSH mutual trust.
Perform the following on both nodes:
① Time synchronization:
# ntpdate 172.16.0.1
31 May 19:54:06 ntpdate[51867]: step time server 172.16.0.1 offset 304.909926 sec
# crontab -e
*/3 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null
② Hostname-based access
# vim /etc/hosts
127.0.0.1       localhost.localdomain localhost.localdomain localhost4 localhost4.localdomain4 localhost
::1             localhost.localdomain localhost.localdomain localhost6 localhost6.localdomain6 localhost
172.16.0.1      server.magelinux.com server
172.16.20.100   node1
172.16.20.200   node2
③ Set up SSH key-based authentication between the root accounts of the nodes:
# ssh-keygen -t rsa -P ''
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.16.20.100
# ssh node1 'ifconfig'
2. Provide the disk partition: (both nodes must provide a storage partition of the same size)
# fdisk /dev/sda
Create a new 5 GB partition; a primary partition or a logical partition (inside an extended one) both work. Here it is /dev/sda3, +5G.
Re-read the partition table so the new partition is recognized:
# partx -a /dev/sda
Note: the new partition does not need to be formatted.
3. Package selection and installation:
Notes:
① Kernel-space package: kmod-drbd84-8.4.5-504.1.el6.x86_64.rpm. It installs only a kernel module; everything else in it is documentation, and there is no configuration file.
The release number must match the kernel's release number (check with # uname -r); kernel patching is extremely strict about this. A quick way to confirm the match is sketched after the package list below.
# uname -r
2.6.32-504.el6.x86_64
② User-space package: there is more choice here and the match is not as strict, but at least the major and minor version numbers should agree.
drbd84-utils-8.9.1-1.el6.elrepo.x86_64.rpm
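Before installing, the kernel release and the kmod package release can be compared directly (a sketch, not part of the original steps; the rpm query simply prints the package's release field):
# uname -r
2.6.32-504.el6.x86_64
# rpm -qp --queryformat '%{RELEASE}\n' kmod-drbd84-8.4.5-504.1.el6.x86_64.rpm
504.1.el6
The "504" in the kmod release corresponds to the "-504.el6" kernel build.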
Install both packages on both nodes:
lftp 172.16.0.1:/pub/Sources/6.x86_64/drbd> get kmod-drbd84-8.4.5-504.1.el6.x86_64.rpm drbd84-utils-8.9.1-1.el6.elrepo.x86_64.rpm
# rpm -ivh kmod-drbd84-8.4.5-504.1.el6.x86_64.rpm drbd84-utils-8.9.1-1.el6.elrepo.x86_64.rpm
warning: drbd84-utils-8.9.1-1.el6.elrepo.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY
Preparing...                ########################################### [100%]
   1:drbd84-utils           ########################################### [ 50%]
   2:kmod-drbd84            ########################################### [100%]
Working. This may take some time ...
Done.
(The signing key was not imported for verification; the warning can be safely ignored.)
4. Set up the DRBD configuration file
Generate a random string to use below as the shared secret for authenticating DRBD's network communication:
# openssl rand -base64 16
5KY86Kw3TzZ4kHbZkrP8Hw==
Edit the configuration file:
# vim /etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    handlers {
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "5KY86Kw3TzZ4kHbZkrP8Hw";
    }
    syncer {
        rate 500M;
    }
}
5. Define the resource:
Add a file that defines the storage resource:
# vim /etc/drbd.d/mystore.res
resource mystore {
    device    /dev/drbd0;
    disk      /dev/sda3;
    meta-disk internal;
    on node1 {
        address 172.16.20.100:7789;
    }
    on node2 {
        address 172.16.20.200:7789;
    }
}
To keep the configuration identical on both nodes, copy the configuration files to node2 with scp over SSH:
# scp -r /etc/drbd.* node2:/etc/
drbd.conf                100%  133     0.1KB/s   00:00
global_common.conf       100% 2105     2.1KB/s   00:00
mystore.res              100%  171     0.2KB/s   00:00
6. Initialize the defined resource on both nodes and start the service:
1) Initialize the resource; run on both node1 and node2:
# drbdadm create-md mystore
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
2) Start the service; run on both node1 and node2:
[root@node1 ~]# service drbd start
Starting DRBD resources: [
     create res: mystore
   prepare disk: mystore
    adjust disk: mystore
     adjust net: mystore
]
.........
[root@node1 ~]#
[root@node2 ~]# drbdadm create-md mystore
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
[root@node2 ~]# service drbd start
Starting DRBD resources: [
     create res: mystore
   prepare disk: mystore
    adjust disk: mystore
     adjust net: mystore
]
.
[root@node2 ~]#
3) Check the startup status and set the primary node:
# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by root@node1.magedu.com, 2015-01-02 12:06:20
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:5252056
Note: Secondary/Secondary and Inconsistent/Inconsistent mean that both nodes are still secondary and that the disk blocks have not yet been aligned, i.e. the devices are not yet synchronized.
The drbd-overview command shows the same information:
# drbd-overview
 0:mystore/0  Connected Secondary/Secondary Inconsistent/Inconsistent
The output above shows that both nodes are currently Secondary, so the next step is to promote one of them to Primary. On the node that should become Primary, run:
# drbdadm primary --force mystore
The --force option is needed the first time a node is promoted; without it the command fails with the following error:
0: State change failed: (-2) Need access to UpToDate data
Command 'drbdsetup-84 primary 0' terminated with exit code 17
Note: alternatively, the following command on the node to be promoted also makes it Primary:
# drbdadm -- --overwrite-data-of-peer primary mystore
4) Format the device, mount it, and verify that replication works
# mke2fs -t ext4 /dev/drbd0
# mount /dev/drbd0 /mnt
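The later transcript shows an issue file under the mount point, so presumably a file was copied in here to exercise replication; a minimal sketch of that verification (the cp command is an assumption, it is not shown in the original):
# cp /etc/issue /mnt/       # write something through /dev/drbd0 on the Primary
# cat /proc/drbd            # ds: should read UpToDate/UpToDate once the initial sync has finished
# drbd-overview             # the peer receives the replicated blocks over TCP port 7789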
5) Switching the primary and secondary roles
Swapping the Primary and Secondary nodes
In a Primary/Secondary DRBD setup only one node can be Primary at any given moment. To swap the roles, the current Primary must first be demoted to Secondary; only then can the former Secondary be promoted to Primary:
Unmount the /dev/drbd# device on the primary node --> demote the primary node --> once the demotion succeeds, promote the secondary node to primary --> mount the /dev/drbd# device on the new primary node and use it
[root@node2 ~]# sync
[root@node2 ~]# umount /mnt
[root@node2 ~]# drbdadm secondary mystore
[root@node2 ~]# drbd-overview
 0:mystore/0  Connected Secondary/Secondary UpToDate/UpToDate
[root@node1 ~]# drbdadm primary mystore
[root@node1 ~]# mount /dev/drbd0 /mnt/
[root@node1 ~]# ls /mnt
issue  lost+found
[root@node1 ~]# touch /mnt/node1.txt
[root@node1 ~]# ls /mnt
issue  lost+found  node1.txt
[root@node1 ~]# umount /mnt
[root@node1 ~]# drbdadm secondary mystore
[root@node1 ~]# drbd-overview
 0:mystore/0  Connected Secondary/Secondary UpToDate/UpToDate
II. Providing the MariaDB service
1. Create a dedicated user:
# groupadd -r -g 27 mysql
# useradd -r -u 27 -g 27 mysql
# id mysql
uid=27(mysql) gid=27(mysql) groups=27(mysql)
Both nodes need this user, and the uid of the mysql user and the gid of the mysql group must be identical on the two nodes.
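A quick consistency check over SSH (the loop itself is a sketch, not in the original):
# for n in node1 node2; do ssh $n 'hostname; id mysql'; done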
2. Install the MariaDB binary tarball
Package: mariadb-5.5.43-linux-x86_64.tar.gz
First create a dedicated mount point for drbd0 on both nodes:
# mkdir /mydata
Promote one node to primary and leave the other as secondary:
[root@node1 ~]# drbd-overview
 0:mystore/0  Connected Secondary/Secondary UpToDate/UpToDate
[root@node1 ~]# drbdadm primary mystore
[root@node1 ~]# drbd-overview
 0:mystore/0  Connected Primary/Secondary UpToDate/UpToDate
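The transcript does not show it, but the device has to be mounted on node1 before the data directory is initialized; presumably something like the following was run at this point (an assumption, based on the later steps that find /mydata/data on the DRBD volume):
[root@node1 ~]# mount /dev/drbd0 /mydata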
Install MariaDB and initialize the database on the primary node:
[root@node1 ~]# tar xf mariadb-5.5.43-linux-x86_64.tar.gz -C /usr/local/
[root@node1 ~]# ln -sv /usr/local/mariadb-5.5.43-linux-x86_64/ /usr/local/mysql
`/usr/local/mysql' -> `/usr/local/mariadb-5.5.43-linux-x86_64/'
[root@node1 ~]# cd /usr/local/mysql
[root@node1 mysql]# ls
bin  COPYING.LESSER  EXCEPTIONS-CLIENT  INSTALL-BINARY  man  README  share  support-files
COPYING  data  include  lib  mysql-test  scripts  sql-bench
[root@node1 mysql]#
[root@node1 mysql]# ll
total 220
……
drwxr-xr-x  4 root root  4096 Jun  1 20:22 sql-bench
drwxr-xr-x  3 root root  4096 Jun  1 20:22 support-files
[root@node1 mysql]# chown -R root:mysql ./*
[root@node1 mysql]# mkdir /mydata/data
[root@node1 mysql]# chown -R mysql:mysql /mydata/data
[root@node1 mysql]# ll /mydata
total 24
drwxr-xr-x 2 mysql mysql  4096 Jun  1 20:31 data
-rw-r--r-- 1 root  root    103 Jun  1 20:02 issue
drwx------ 2 root  root  16384 Jun  1 20:01 lost+found
-rw-r--r-- 1 root  root      0 Jun  1 20:05 node1.txt
[root@node1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydata/data
[root@node1 mysql]# ls /mydata/data
aria_log.00000001  aria_log_control  mysql  performance_schema  test
After initializing MariaDB, provide the related service configuration:
[root@node1 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@node1 mysql]# chkconfig --add mysqld
[root@node1 mysql]# chkconfig mysqld off
[root@node1 mysql]# chkconfig | grep mysqld
mysqld          0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@node1 mysql]# mkdir /etc/mysql
[root@node1 mysql]# cp support-files/my-large.cnf /etc/mysql/my.cnf
[root@node1 mysql]# vim /etc/mysql/my.cnf
Add the following three settings:
datadir = /mydata/data
innodb_file_per_table = on
skip_name_resolve = on
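For clarity, the three settings belong in the [mysqld] section of /etc/mysql/my.cnf; a sketch of the relevant fragment (the surrounding values come from my-large.cnf and are omitted here):
[mysqld]
...
datadir = /mydata/data
innodb_file_per_table = on
skip_name_resolve = on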
Verify that the MariaDB service on the primary node is configured correctly:
[root@node1 mysql]# service mysqld start
Starting MySQL...                  [  OK  ]
[root@node1 mysql]# /usr/local/mysql/bin/mysql
MariaDB [(none)]> GRANT ALL ON *.* TO "root"@"172.16.20.%" IDENTIFIED BY "123";
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> \q
Bye
Demote node1 to secondary and promote node2 to primary so that the MariaDB service can be set up on node2 as well (no database initialization is needed there):
[root@node1 mysql]# service mysqld stop
Shutting down MySQL.               [  OK  ]
[root@node1 mysql]# umount /mydata
[root@node1 mysql]# drbdadm secondary mystore
[root@node1 mysql]# drbd-overview
 0:mystore/0  Connected Secondary/Secondary UpToDate/UpToDate
[root@node1 mysql]#
[root@node2 ~]# drbd-overview
 0:mystore/0  Connected Secondary/Secondary UpToDate/UpToDate
[root@node2 ~]# drbdadm primary mystore
[root@node2 ~]# drbd-overview
 0:mystore/0  Connected Primary/Secondary UpToDate/UpToDate
[root@node2 ~]# mount /dev/drbd0 /mydata
[root@node2 ~]# ls /mydata
data  issue  lost+found  node1.txt
[root@node2 ~]# ll /mydata
total 24
drwxr-xr-x 6 mysql mysql  4096 Jun  1 21:40 data
-rw-r--r-- 1 root  root    103 Jun  1 20:02 issue
drwx------ 2 root  root  16384 Jun  1 20:01 lost+found
-rw-r--r-- 1 root  root      0 Jun  1 20:05 node1.txt
[root@node2 ~]# ll /mydata/data/
total 28720
-rw-rw---- 1 mysql mysql    16384 Jun  1 21:40 aria_log.00000001
-rw-rw---- 1 mysql mysql       52 Jun  1 21:40 aria_log_control
-rw-rw---- 1 mysql mysql 18874368 Jun  1 21:40 ibdata1
-rw-rw---- 1 mysql mysql  5242880 Jun  1 21:40 ib_logfile0
……
[root@node2 ~]# tar xf mariadb-5.5.43-linux-x86_64.tar.gz -C /usr/local/
[root@node2 ~]# ln -sv /usr/local/mariadb-5.5.43-linux-x86_64/ /usr/local/mysql
`/usr/local/mysql' -> `/usr/local/mariadb-5.5.43-linux-x86_64/'
[root@node2 ~]# mkdir /etc/mysql
[root@node1 mysql]# scp /etc/mysql/my.cnf node2:/etc/mysql/
my.cnf                   100% 4974     4.9KB/s   00:00
[root@node1 mysql]# scp /etc/rc.d/init.d/mysqld node2:/etc/rc.d/init.d/
mysqld                   100%   12KB  11.9KB/s   00:00
Verify that node2 works correctly too:
[root@node2 ~]# service mysqld start
Starting MySQL...                  [  OK  ]
[root@node2 ~]# /usr/local/mysql/bin/mysql
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
| testdb             |
+--------------------+
5 rows in set (0.06 sec)
MariaDB [(none)]> create database test2db;
Query OK, 1 row affected (0.01 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
| test2db            |
| testdb             |
+--------------------+
6 rows in set (0.00 sec)
MariaDB [(none)]> \q
Bye
[root@node2 ~]#
Node2 works as well. With that, the MariaDB service is in place.
(If drbd0 goes split brain while MariaDB is being installed and devices /dev/drbd0 through /dev/drbd15 show up, you can stop the service with service drbd stop, delete the disk partition, recreate it, and then run "# drbdadm primary --force mystore" again to re-initialize and resynchronize drbd0. Once the synchronization finishes, MariaDB can be initialized again.
From now on, a node must be promoted to Primary before /dev/drbd0 can be mounted and used on it.
!! To move the mount to the other node, umount first and only then demote; the other node must be promoted to Primary before it may mount the device !!
!! In short, whichever host mounts the device must be in the Primary state when it does so !!)
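Instead of re-partitioning, DRBD 8.4 also allows a manual split-brain recovery in which one node's changes are discarded; a sketch, assuming node2 is chosen as the split-brain victim whose data is thrown away:
[root@node2 ~]# drbdadm disconnect mystore
[root@node2 ~]# drbdadm secondary mystore
[root@node2 ~]# drbdadm connect --discard-my-data mystore
[root@node1 ~]# drbdadm connect mystore      # only needed if node1 is also StandAlone
[root@node1 ~]# drbd-overview                # should return to Connected ... UpToDate/UpToDate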
III. Installing corosync and pacemaker
1. Install the main packages on both nodes
Main packages:
corosync-1.4.7-1.el6.x86_64
pacemaker-1.1.12-4.el6.x86_64
# yum install -y corosync pacemaker
……
Installed:
  corosync.x86_64 0:1.4.7-1.el6                   pacemaker.x86_64 0:1.1.12-4.el6
Dependency Installed:
  clusterlib.x86_64 0:3.0.12.1-68.el6             corosynclib.x86_64 0:1.4.7-1.el6
  libibverbs.x86_64 0:1.1.8-3.el6                 libqb.x86_64 0:0.16.0-2.el6
  librdmacm.x86_64 0:1.0.18.1-1.el6               lm_sensors-libs.x86_64 0:3.1.1-17.el6
  net-snmp-libs.x86_64 1:5.5-49.el6_5.3           pacemaker-cli.x86_64 0:1.1.12-4.el6
  pacemaker-cluster-libs.x86_64 0:1.1.12-4.el6    pacemaker-libs.x86_64 0:1.1.12-4.el6
  perl-TimeDate.noarch 1:1.16-13.el6              resource-agents.x86_64 0:3.9.5-12.el6
Complete!
2. Provide the configuration file:
# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# vim /etc/corosync/corosync.conf
Change the contents to the following:
compatibility: whitetank

totem {
    version: 2
    secauth: on
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 172.16.0.0
        mcastaddr: 239.255.11.11
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

service {
    ver: 0
    name: pacemaker
    use_mgmtd: yes
}

aisexec {
    user: root
    group: root
}
The service section defines pacemaker to run as a corosync plugin.
3. Various checks
Verify that multicast is enabled on the network card:
# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:05:bf brd ff:ff:ff:ff:ff:ff
(# ip addr shows this as well)
If MULTICAST is not listed, enable it with:
# ip link set eth0 multicast on
Generate corosync's authentication key file:
# corosync-keygen
If the entropy pool does not contain enough random data, you can generate more by downloading large files from the network or by typing on the keyboard.
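On an idle lab VM this can take a very long time because corosync-keygen reads /dev/random. A common lab-only workaround (an assumption, not from the original, and not something to do for production keys) is to point /dev/random at /dev/urandom while the key is generated:
# mv /dev/random /dev/random.orig
# ln -s /dev/urandom /dev/random
# corosync-keygen
# rm -f /dev/random
# mv /dev/random.orig /dev/random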
Copy the key file and the corosync configuration file to node2 with their attributes preserved, and make sure the permissions meet the requirements:
# scp -p /etc/corosync/{corosync.conf,authkey} node2:/etc/corosync/
corosync.conf            100% 2757     2.7KB/s   00:00
authkey                  100%  128     0.1KB/s   00:00
[root@node2 ~]# ll /etc/corosync/authkey
-r-------- 1 root root 128 May 30 11:56 authkey
(The key file must have mode 400 or 600; if it does not, fix it manually with chmod.)
Start corosync on both nodes:
[root@node1 corosync]# service corosync start ; ssh node2 'service corosync start'
Starting Corosync Cluster Engine (corosync):               [  OK  ]
Starting Corosync Cluster Engine (corosync):               [  OK  ]
Verify that the corosync engine started correctly:
[root@node1 corosync]# ss -unlp | grep corosync
UNCONN  0  0  172.16.20.100:5404  *:*  users:(("corosync",14270,13))
UNCONN  0  0  172.16.20.100:5405  *:*  users:(("corosync",14270,14))
UNCONN  0  0  239.255.11.11:5405  *:*  users:(("corosync",14270,10))
[root@node1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Jun 01 22:15:57 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Jun 01 22:15:57 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Check that the initial membership notifications were sent out correctly:
[root@node1 corosync]# grep TOTEM /var/log/cluster/corosync.log
Jun 01 22:15:57 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Jun 01 22:15:57 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jun 01 22:15:57 corosync [TOTEM ] The network interface [172.16.20.100] is now up.
Jun 01 22:15:57 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jun 01 22:15:57 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check whether any errors were produced during startup. The error messages below state that pacemaker will soon no longer be allowed to run as a corosync plugin and therefore recommend using cman as the cluster infrastructure; they can be safely ignored here:
[root@node1 corosync]# grep ERROR: /var/log/cluster/corosync.log | grep -v unpack_resources
Jun 01 22:15:57 corosync [pcmk  ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Jun 01 22:15:57 corosync [pcmk  ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
Jun 01 22:15:58 corosync [pcmk  ] ERROR: pcmk_wait_dispatch: Child process mgmtd exited (pid=14282, rc=100)
Notes:
The first and second errors can be safely ignored.
For the third error, read /var/log/messages carefully or run crm_verify -L to check; there is no need to uninstall and reinstall anything. The error is caused by the missing STONITH device and does not affect corosync's operation, so it can be safely ignored as well.
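For reference, checking the CIB with crm_verify at this stage (a sketch; the exact messages vary by version) typically complains about STONITH until stonith-enabled is turned off later on:
# crm_verify -L -V
   error: unpack_resources:  Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources:  Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources:  NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid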
Check that pacemaker started correctly:
[root@node1 corosync]# grep pcmk_startup /var/log/cluster/corosync.log
Jun 01 22:15:57 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Jun 01 22:15:57 corosync [pcmk  ] Logging: Initialized pcmk_startup
Jun 01 22:15:57 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Jun 01 22:15:57 corosync [pcmk  ] info: pcmk_startup: Service: 9
Jun 01 22:15:57 corosync [pcmk  ] info: pcmk_startup: Local hostname: node1
All the checks above are either normal or safely ignorable; run the same commands on node2 and confirm that its checks look the same.
4. Install the command-line client crmsh and the pssh package it depends on:
Download the packages from ftp://172.16.0.1:
lftp 172.16.0.1:/pub/Sources/6.x86_64/crmsh> get pssh-2.3.1-2.el6.x86_64.rpm
lftp 172.16.0.1:/pub/Sources/6.x86_64/corosync> get crmsh-2.1-1.6.x86_64.rpm
crmsh-2.1-1.6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm
Normally crmsh only needs to be installed on one node, but for convenience it can be installed on both nodes.
# yum install --nogpgcheck -y crmsh-2.1-1.6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm
Installed:
  crmsh.x86_64 0:2.1-1.6                    pssh.x86_64 0:2.3.1-2.el6
Dependency Installed:
  python-dateutil.noarch 0:1.4.1-6.el6      python-lxml.x86_64 0:2.2.3-1.1.el6
Complete!
Once installed, the cluster can be inspected and managed with crmsh.
At this point there are 0 resources and 2 nodes:
# crm status
Last updated: Mon Jun  1 22:49:35 2015
Last change: Mon Jun  1 22:16:07 2015
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node1 node2 ]
[root@node1 ~]#
IV. Configuring the resources and bringing up the HA service
1. Preparation before configuring resources
Stop the resource services on both nodes:
# chkconfig drbd off
# chkconfig | grep drbd
drbd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@node2 ~]# service mysqld stop
Shutting down MySQL.               [  OK  ]
[root@node2 ~]# umount /mydata
[root@node2 ~]# drbdadm secondary mystore
[root@node2 ~]# drbd-overview
 0:mystore/0  Connected Secondary/Secondary UpToDate/UpToDate
2. Configure the global cluster properties:
Two properties are required in this lab:
① stonith-enabled=false: we are not using a STONITH device here, and without this setting the cluster reports serious errors;
② no-quorum-policy=ignore: this prevents the situation where, with one of the two nodes offline, the remaining node refuses to start the resources automatically because it is without quorum.
[root@node1 ~]# crm
crm(live)# configure show
node node1
node node2
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node node1
node node2
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2 stonith-enabled=false no-quorum-policy=ignore
crm(live)configure#
3. Configure the DRBD resource
Whether you are defining a master/slave resource or a clone resource, it must first be a primitive: define the primitive resource first, then clone that primitive into a clone resource or a master/slave resource.
Four resources need to be defined this time:
① the IP address: 172.16.20.50
② the MariaDB service
③ the DRBD master/slave pair
④ the Filesystem mount
Define the four primitive resources and one master/slave resource; run verify after each definition so that mistakes can be corrected immediately:
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip="172.16.20.50" nic="eth0" cidr_netmask="16" op monitor interval=10s timeout=20s
crm(live)configure# verify
crm(live)configure# primitive mymariadb lsb:mysqld op monitor interval=10s timeout=20s
crm(live)configure# verify
crm(live)configure# primitive mydrbd ocf:linbit:drbd params drbd_resource="mystore" op monitor role="Master" interval=10s timeout=20s op monitor role="Slave" interval=20s timeout=20s op start timeout=240s op stop timeout=100s
crm(live)configure# verify
crm(live)configure# primitive mydisk ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype=ext4 op monitor interval=20s timeout=40s op start timeout=60s op stop timeout=60s
crm(live)configure# verify
crm(live)configure# ms ms_mydrbd mydrbd meta clone-max=2 clone-node-max=1 master-max=1 master-node-max=1 notify=true
crm(live)configure# verify
Define the colocation constraint (which resources must run together on the same node):
crm(live)configure# colocation myip_with_mymariadb_with_ms_mydrbd_with_mydisk inf: myip mymariadb ms_mydrbd:Master mydisk
crm(live)configure# verify
Define the order constraints (the start order, which proceeds from left to right):
crm(live)configure# order mymariadb_after_mydisk_after_ms_mydrbd_master Mandatory: ms_mydrbd:promote mydisk:start mymariadb:start
crm(live)configure# verify
crm(live)configure# order mymariadb_after_myip Mandatory: myip:start mymariadb:start
crm(live)configure# verify
crm(live)configure# commit
The final resource configuration looks like this:
crm(live)configure# show
node node1
node node2
primitive mydisk Filesystem params device="/dev/drbd0" directory="/mydata" fstype=ext4 op monitor interval=20s timeout=40s op start timeout=60s interval=0 op stop timeout=60s interval=0
primitive mydrbd ocf:linbit:drbd params drbd_resource=mystore op monitor role=Master interval=10s timeout=20s op monitor role=Slave interval=20s timeout=20s op start timeout=240s interval=0 op stop timeout=100s interval=0
primitive myip IPaddr params ip=172.16.20.50 nic=eth0 cidr_netmask=16 op monitor interval=10s timeout=20s
primitive mymariadb lsb:mysqld op monitor interval=10s timeout=20s
ms ms_mydrbd mydrbd meta clone-max=2 clone-node-max=1 master-max=1 master-node-max=1 notify=true
colocation myip_with_mymariadb_with_ms_mydrbd_with_mydisk inf: myip mymariadb ms_mydrbd:Master mydisk
order mymariadb_after_mydisk_after_ms_mydrbd_master Mandatory: ms_mydrbd:promote mydisk:start mymariadb:start
order mymariadb_after_myip Mandatory: myip:start mymariadb:start
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2 stonith-enabled=false no-quorum-policy=ignore
crm(live)configure# cd
After the commit above the resources start running. As shown below, there are 2 nodes and 5 resources in total; node1 is currently the Master and all resources are running on node1:
crm(live)# status
Last updated: Tue Jun  2 00:08:43 2015
Last change: Tue Jun  2 00:07:00 2015
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
5 Resources configured
Online: [ node1 node2 ]
 myip       (ocf::heartbeat:IPaddr):        Started node1
 mymariadb  (lsb:mysqld):                   Started node1
 mydisk     (ocf::heartbeat:Filesystem):    Started node1
 Master/Slave Set: ms_mydrbd [mydrbd]
     Masters: [ node1 ]
     Slaves: [ node2 ]
crm(live)#
4. Post-configuration verification and usage
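As a sketch of how the configuration could be verified (this standby/online test is an assumption, not part of the original transcript): put the current master into standby, watch the resources move to node2, then bring it back online:
[root@node1 ~]# crm node standby node1    # simulate a failure of the current master
[root@node1 ~]# crm status                # myip, mymariadb, mydisk and the drbd Master should all move to node2
[root@node1 ~]# crm node online node1     # bring node1 back; it rejoins as the drbd Slave
From a third host the service should stay reachable through the VIP, e.g. (assuming a MySQL client is installed there):
# mysql -uroot -p123 -h 172.16.20.50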
The highly available MariaDB service built with DRBD and corosync/pacemaker is now complete.
Original post: http://ctrry.blog.51cto.com/9990018/1658665