Tags: linux corosync pacemaker mariadb high-availability storage
Purpose:
Use corosync v1 and pacemaker to provide a highly available mariadb, so that when the cluster moves the service to another node, database access and operations continue without interruption.
Here corosync v1 supplies the underlying heartbeat and cluster messaging layer, while pacemaker provides the CRM and LRM.
Lab environment:
One virtual machine provides NFS shared storage; two more virtual machines run identically configured mariadb services whose data directory lives on the NFS filesystem exported by that storage VM (a fourth host is used later as a remote client for testing). pacemaker can be configured through several interfaces, such as the CLI tools crmsh and pcs, or the GUI tools hawk (web GUI) and LCMC. crmsh is the most powerful, so it is used here. Because the VMs run CentOS 6, which does not ship crmsh, it has to be installed manually; crmsh depends on pssh.
corosync.x86_64 0:1.4.7-1.el6
pacemaker.x86_64 0:1.1.12-4.el6
crmsh-2.1-1.6.x86_64.rpm
pssh-2.3.1-2.el6.x86_64.rpm
mariadb-5.5.43-linux-x86_64.tar.gz (binary tarball)
Architecture diagram:
I. Installing and configuring corosync/pacemaker
1. Preparation for the HA cluster: perform the following on every node (this lab uses 2 nodes):
time synchronization, hostname resolution, mutual ssh trust, and a quorum/arbitration device (no arbitration device is used in this lab).
(1) Time must be synchronized between nodes, using NTP.
On each node:
# crontab -e
*/3 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null
(2) Nodes communicate with each other by hostname, so hostnames must resolve to IP addresses.
(a) It is recommended to implement name resolution with the hosts file:
# vim /etc/hosts
127.0.0.1     localhost.localdomain localhost.localdomain localhost4 localhost4.localdomain4 localhost
::1           localhost.localdomain localhost.localdomain localhost6 localhost6.localdomain6 localhost
172.16.0.1    server.magelinux.com server
172.16.20.100 node1
172.16.20.200 node2
(b) The names used for cluster communication must match the node names shown by "uname -n" or "hostname"; a sketch of setting this persistently follows.
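For example, on node1 the hostname could be set persistently like this (a sketch assuming the usual CentOS 6 conventions; these commands are not in the original post):
# hostname node1
# sed -i 's/^HOSTNAME=.*/HOSTNAME=node1/' /etc/sysconfig/network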
(3) Consider whether a quorum/arbitration device is needed:
A two-node cluster really should have one to guard against split-brain; clusters with more than two nodes can normally rely on quorum votes instead.
In the classroom environment a ping node (172.16.0.1) can be used as the arbitration device.
No arbitration device is used in this lab.
(4) Set up key-based ssh authentication for root between the nodes:
# ssh-keygen -t rsa -P ''
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.16.20.100
# ssh node1 'ifconfig'
2. CentOS 6 ships corosync and pacemaker in its installation tree, so they can be installed directly with yum.
Install corosync and pacemaker on every node:
# yum install -y corosync
……
Installed:
  corosync.x86_64 0:1.4.7-1.el6
Dependency Installed:
  corosynclib.x86_64 0:1.4.7-1.el6        libibverbs.x86_64 0:1.1.8-3.el6
  librdmacm.x86_64 0:1.0.18.1-1.el6       lm_sensors-libs.x86_64 0:3.1.1-17.el6
  net-snmp-libs.x86_64 1:5.5-49.el6_5.3
Complete!
Check the files installed by the package:
# rpm -ql corosync
/etc/corosync
/etc/corosync/corosync.conf.example
/etc/corosync/corosync.conf.example.udpu
/etc/corosync/service.d
/etc/corosync/uidgid.d
/etc/dbus-1/system.d/corosync-signals.conf
/etc/rc.d/init.d/corosync
/etc/rc.d/init.d/corosync-notifyd
/etc/sysconfig/corosync-notifyd
/usr/bin/corosync-blackbox
/usr/libexec/lcrso
/usr/libexec/lcrso/coroparse.lcrso
……
/usr/libexec/lcrso/service_pload.lcrso
/usr/libexec/lcrso/vsf_quorum.lcrso
/usr/libexec/lcrso/vsf_ykd.lcrso
/usr/sbin/corosync
/usr/sbin/corosync-cfgtool
/usr/sbin/corosync-cpgtool
/usr/sbin/corosync-fplay
/usr/sbin/corosync-keygen
/usr/sbin/corosync-notifyd
/usr/sbin/corosync-objctl
/usr/sbin/corosync-pload
/usr/sbin/corosync-quorumtool
/usr/share/doc/corosync-1.4.7
……
/var/log/cluster
# yum install pacemaker -y
Installed:
  pacemaker.x86_64 0:1.1.12-4.el6
Dependency Installed:
  clusterlib.x86_64 0:3.0.12.1-68.el6           libqb.x86_64 0:0.16.0-2.el6
  pacemaker-cli.x86_64 0:1.1.12-4.el6           pacemaker-cluster-libs.x86_64 0:1.1.12-4.el6
  pacemaker-libs.x86_64 0:1.1.12-4.el6          perl-TimeDate.noarch 1:1.16-13.el6
  resource-agents.x86_64 0:3.9.5-12.el6
Complete!
Provide the configuration file:
# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# vim /etc/corosync/corosync.conf
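The original post does not show the edited file. The important changes for this setup are the totem interface section (bind network, multicast address and port), enabling secauth, and declaring pacemaker as a corosync service so it runs as a plugin. A minimal sketch of /etc/corosync/corosync.conf follows; the multicast address 239.254.11.11, port 5405 and log file path match the log output shown later in this post, while the remaining values are assumptions:
compatibility: whitetank
totem {
    version: 2
    secauth: on                      # authenticate cluster traffic with /etc/corosync/authkey
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 172.16.0.0      # network of the heartbeat interface (assumption)
        mcastaddr: 239.254.11.11     # multicast group seen in the corosync log below
        mcastport: 5405
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no
    timestamp: on
}
service {                            # start pacemaker as a corosync plugin (ver 0)
    ver: 0
    name: pacemaker
}
aisexec {
    user: root
    group: root
}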
Verify that the NIC has multicast enabled:
# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:05:bf brd ff:ff:ff:ff:ff:ff
If MULTICAST is not shown, enable it with:
# ip link set eth0 multicast on
Generate the corosync authentication key:
# corosync-keygen
If the entropy pool does not hold enough random data, corosync-keygen will stall; you can feed it by downloading large files from the network or by typing on the keyboard.
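For example, generating disk activity in another terminal is a common way to keep the entropy pool fed (a hypothetical workaround, not from the original post):
# find / -type f -exec md5sum {} \; &> /dev/null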
Copy the generated key file and the corosync configuration file to node2, preserving their attributes, and confirm the permissions are correct:
# scp -p /etc/corosync/{authkey,corosync.conf} node2:/etc/corosync/
authkey                                    100%  128     0.1KB/s   00:00
corosync.conf                              100% 2754     2.7KB/s   00:00
[root@node2 ~]# ll /etc/corosync/
total 24
-r-------- 1 root root  128 May 30 11:56 authkey
-rw-r--r-- 1 root root 2754 May 30 11:48 corosync.conf
-rw-r--r-- 1 root root 2663 Oct 15  2014 corosync.conf.example
-rw-r--r-- 1 root root 1073 Oct 15  2014 corosync.conf.example.udpu
drwxr-xr-x 2 root root 4096 Oct 15  2014 service.d
drwxr-xr-x 2 root root 4096 Oct 15  2014 uidgid.d
Start corosync on both nodes:
[root@node1 corosync]# service corosync start ; ssh node2 'service corosync start'
Starting Corosync Cluster Engine (corosync):               [  OK  ]
Starting Corosync Cluster Engine (corosync):               [  OK  ]
Verify that the corosync engine started correctly:
[root@node1 corosync]# ss -unlp | grep corosync
UNCONN 0 0 172.16.20.100:5404 *:* users:(("corosync",3906,13))
UNCONN 0 0 172.16.20.100:5405 *:* users:(("corosync",3906,14))
UNCONN 0 0 239.254.11.11:5405 *:* users:(("corosync",3906,10))
[root@node1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
May 30 12:08:44 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
May 30 12:08:44 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Check that the initial membership notifications went out correctly:
[root@node1 corosync]# grep TOTEM /var/log/cluster/corosync.log
May 30 12:08:44 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
May 30 12:08:44 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
May 30 12:08:44 corosync [TOTEM ] The network interface [172.16.20.100] is now up.
May 30 12:08:44 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
May 30 12:08:46 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check whether any errors were produced during startup. The messages below say that pacemaker will soon no longer run as a corosync plugin and that cman is recommended as the cluster infrastructure instead; they can safely be ignored here:
[root@node1 corosync]# grep ERROR: /var/log/cluster/corosync.log | grep -v unpack_resources
May 30 12:23:17 corosync [pcmk  ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
May 30 12:23:17 corosync [pcmk  ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
May 30 12:23:18 corosync [pcmk  ] ERROR: pcmk_wait_dispatch: Child process mgmtd exited (pid=4144, rc=100)
Check that pacemaker started correctly:
[root@node1 corosync]# grep pcmk_startup /var/log/cluster/corosync.log
May 30 12:23:17 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
May 30 12:23:17 corosync [pcmk  ] Logging: Initialized pcmk_startup
May 30 12:23:17 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
May 30 12:23:17 corosync [pcmk  ] info: pcmk_startup: Service: 9
May 30 12:23:17 corosync [pcmk  ] info: pcmk_startup: Local hostname: node1
If everything above looks good, run the same checks on node2 to confirm it is healthy as well.
Install crmsh and the pssh package it depends on.
Download the packages from ftp://172.16.0.1:
crmsh-2.1-1.6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm
Normally crmsh only needs to be installed on one node, but for convenience it can be installed on both nodes.
# yum install --nogpgcheck -y crmsh-2.1-1.6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm
After installation you can view the cluster status and start using it:
# crm status
Last updated: Sat May 30 12:48:24 2015
Last change: Sat May 30 12:33:42 2015
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node1 node2 ]
II. Configuring and using mariadb
To make mariadb highly available while keeping reads and writes consistent, shared storage is required; NFS shared storage is used here.
The storage host must grant read/write access to the hosts running the mysql programs, with ownership mysql:mysql (user:group).
The nodes access the data directory as the mysql user; after a failure forces access over to the other node, that node must still access it as the mysql user, so both nodes and the storage host all need a mysql user with identical UID/GID.
The data directory only needs to be initialized from one node; the other node must not initialize it again.
(1) Prepare the NFS shared storage
You can export an existing directory, or create a new partition as the exported storage space.
On the storage host, create a new partition and mount it at /data:
[root@aunt-s ~]# fdisk /dev/sda
[root@aunt-s ~]# partx -a /dev/sda
[root@aunt-s ~]# pvcreate /dev/sda6
[root@aunt-s ~]# vgcreate myvg /dev/sda6
[root@aunt-s ~]# lvcreate -L 4G -n mydata myvg
[root@aunt-s ~]# mke2fs -t ext4 /dev/myvg/mydata
Edit /etc/fstab and add a line so the new volume is mounted automatically at boot:
[root@aunt-s ~]# vim /etc/fstab
/dev/myvg/mydata   /data   ext4   defaults,noatime   0 0
[root@aunt-s ~]# mount -a
[root@aunt-s ~]# mount
……
/dev/mapper/myvg-mydata on /data type ext4 (rw,noatime)
[root@aunt-s data]# mkdir /data/cldata
Edit /etc/exports and add the following line to export the filesystem. Because it is shared storage it must be writable, and root must not be squashed for now:
[root@aunt-s ~]# vim /etc/exports
/data/cldata    172.16.0.0/16(rw,no_root_squash)
The remote nodes access the directory as the mysql user, so its owner and group must be changed to mysql, with the same UID/GID as the mysql user on the nodes:
[root@aunt-s ~]# groupadd -r -g 492 mysql
[root@aunt-s ~]# useradd -r -g 492 -u 492 mysql
[root@aunt-s ~]# id mysql
uid=492(mysql) gid=492(mysql) groups=492(mysql)
[root@aunt-s data]# chown -R mysql:mysql /data/cldata
[root@aunt-s data]# ll -d /data/cldata
drwxr-xr-x 2 mysql mysql 4096 May 30 15:56 /data/cldata
[root@aunt-s data]# exportfs -arv
exporting 172.16.0.0/16:/data/cldata
(2) Configure the mysql service on the nodes
1. Mount the remote NFS export
[root@node1 ~]# showmount -e 172.16.20.150
clnt_create: RPC: Program not registered
Fix: on the NFS server, stop the nfs and rpcbind services, then start them again with rpcbind coming up before nfs; the stop/start order matters because nfs registers itself with rpcbind.
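A minimal sketch of that restart on the NFS server, assuming the stock CentOS 6 init scripts (the exact commands are not shown in the original post):
[root@aunt-s ~]# service nfs stop
[root@aunt-s ~]# service rpcbind stop
[root@aunt-s ~]# service rpcbind start
[root@aunt-s ~]# service nfs start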
After doing this, the export can be queried successfully:
[root@node1 ~]# showmount -e 172.16.20.150
Export list for 172.16.20.150:
/data 172.16.0.0/16
[root@node1 ~]# mkdir /mydata ; ssh node2 'mkdir /mydata'
[root@node1 ~]# mount -t nfs 172.16.20.150:/data /mydata
[root@node1 ~]# mount
……
172.16.20.150:/data on /mydata type nfs (rw,vers=4,addr=172.16.20.150,clientaddr=172.16.20.100)
Create the mysql user and verify that it has write access to the NFS shared storage:
[root@node1 ~]# id mysql
id: mysql: No such user
[root@node1 ~]# groupadd -r -g 492 mysql
[root@node1 ~]# useradd -r -g 492 -u 492 mysql
[root@node1 ~]# su - mysql
su: warning: cannot change directory to /home/mysql: No such file or directory
-bash-4.1$ tree /mydata
/mydata
├── cldata
└── lost+found [error opening dir]
2 directories, 0 files
-bash-4.1$ cd /mydata/cldata/
-bash-4.1$ touch a.txt
-bash-4.1$ ls /mydata/cldata/
a.txt
-bash-4.1$ rm /mydata/cldata/a.txt
-bash-4.1$ exit
logout
[root@node1 ~]#
On node2, create the same mysql user, mount the NFS share, and verify in the same way that the mysql user can write to it. (Process omitted.)
2. Install mariadb and initialize the database
Main package: mariadb-5.5.43-linux-x86_64.tar.gz
First unpack the binary tarball:
[root@node1 ~]# tar xf mariadb-5.5.43-linux-x86_64.tar.gz -C /usr/local
[root@node1 ~]# cd /usr/local
[root@node1 local]# ln -sv mariadb-5.5.43-linux-x86_64/ mysql
'mysql' -> 'mariadb-5.5.43-linux-x86_64/'
[root@node1 local]# cd /usr/local/mysql/
[root@node1 mysql]# ls
bin  COPYING.LESSER  EXCEPTIONS-CLIENT  INSTALL-BINARY  man  README  share  support-files
COPYING  data  include  lib  mysql-test  scripts  sql-bench
[root@node1 mysql]# chown -R root:mysql ./*
[root@node1 mysql]# ll
total 220
drwxr-xr-x 2 root mysql  4096 May 30 16:45 bin
-rw-r--r-- 1 root mysql 17987 Apr 30 02:55 COPYING
……
[root@node1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydata/cldata/
From the second node you can see that the initialization succeeded:
[root@node2 ~]# ls /mydata/cldata/
aria_log.00000001  aria_log_control  mysql  performance_schema  test
Once initialization is done, no more root operations are needed on the NFS share, so the export can be switched back to root_squash:
[root@aunt-s data]# vim /etc/exports
/data    172.16.0.0/16(rw,root_squash)
[root@aunt-s data]# exportfs -arv
exporting 172.16.0.0/16:/data
Set up the configuration file:
[root@node1 mysql]# ls
bin  COPYING.LESSER  EXCEPTIONS-CLIENT  INSTALL-BINARY  man  README  share  support-files
COPYING  data  include  lib  mysql-test  scripts  sql-bench
[root@node1 mysql]# ls support-files/
binary-configure   my-innodb-heavy-4G.cnf   my-small.cnf          mysql.server
magic              my-large.cnf             mysqld_multi.server   SELinux/
my-huge.cnf        my-medium.cnf            mysql-log-rotate
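The listing above includes mysql.server, the init script that the later `service mysqld` calls and the lsb:mysqld cluster resource rely on. Installing it is not shown in the original post; a minimal sketch of that step, assuming the bundled script is used unchanged:
[root@node1 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@node1 mysql]# chmod +x /etc/init.d/mysqld
[root@node1 mysql]# chkconfig --add mysqld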
Copy the sample file to /etc and modify or add the following four items (in the [mysqld] section):
[root@node1 mysql]# cp support-files/my-large.cnf /etc/my.cnf
thread_concurrency = 2
datadir = /mydata/cldata
innodb_file_per_table = on
skip_name_resolve = on
[root@node1 mysql]# service mysqld start
Starting MySQL.                                            [FAILED]
[root@node1 mysql]# service mysqld status
MySQL is not running, but lock file (/var/lock/subsys/mysql[FAILED]
If this happens, leftovers from a previous run were not cleaned up: delete the lock file /var/lock/subsys/mysql. If it still fails, remove the installed files and the generated database files, redo the steps, and the service will start:
[root@node1 mysql]# service mysqld start
Starting MySQL...                                          [  OK  ]
Verify that mysql works correctly:
[root@node1 mysql]# /usr/local/mysql/bin/mysql
……
MariaDB [(none)]> CREATE DATABASE testclusterdb;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> quit
Bye
[root@node1 mysql]# service mysqld stop
Shutting down MySQL.                                       [  OK  ]
[root@node1 mysql]# chkconfig mysqld off
[root@node1 mysql]# chkconfig | grep mysqld
mysqld          0:off   1:off   2:off   3:off   4:off   5:off   6:off
Repeat the above process on node2, but do not run the initialization step (# ./scripts/mysql_install_db --user=mysql --datadir=/mydata/cldata/).
With the mariadb service installed and configured, stop mysqld and unmount the NFS share on both nodes:
[root@node2 ~]# umount /mydata ; ssh node1 'umount /mydata'
III. Combining corosync/pacemaker with mariadb to build highly available storage
Based on the characteristics of highly available mariadb storage, define the cluster in the following order:
① Define three resources:
the IP address, the mariadb service, and the NFS shared storage
② Define colocation constraints
the three resources are tightly bound to one another and must run on the same node
③ Define ordering constraints
start order: IP --> NFS mount --> mariadb service
stop order: mariadb --> NFS unmount --> IP
1. Define two key global properties:
stonith-enabled=false
no-quorum-policy=ignore
[root@node1 ~]# crm
crm(live)# configure property no-quorum-policy=ignore
crm(live)configure# property stonith-enabled=false
2. Define the resources
Note: if a resource is defined outside the configure sub-shell, it is committed automatically as soon as it is defined without errors; only definitions made inside configure can be checked with verify and committed manually.
(1) The IP resource
[root@node1 ~]# crm
crm(live)configure# primitive mdbip ocf:heartbeat:IPaddr params ip=172.16.20.50 nic=eth0 cidr_netmask=16 op monitor interval=10s timeout=20s
(2) The NFS mount
[root@node1 ~]# crm
crm(live)# ra info ocf:heartbeat:Filesystem      (view the parameters needed for mounting a filesystem)
crm(live)# configure primitive mdbstore ocf:heartbeat:Filesystem params device="172.16.20.150:/data" directory="/mydata/" fstype=nfs op monitor interval=10s timeout=40s op start timeout=60s op stop timeout=60
(3) The mariadb service resource
[root@node1 ~]# crm
crm(live)configure# primitive maria lsb:mysqld op monitor interval=10s timeout=20s
crm(live)configure# show
node node1 attributes standby=off
node node2
primitive maria lsb:mysqld op monitor interval=10s timeout=20s
primitive mdbip IPaddr params ip=172.16.20.50 nic=eth0 cidr_netmask=16 op monitor interval=10s timeout=20s
primitive mdbstore Filesystem params device="172.16.20.150:/data" directory="/mydata/" fstype=nfs op monitor interval=10s timeout=40s op start timeout=60s interval=0 op stop timeout=60 interval=0
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2 stonith-enabled=false no-quorum-policy=ignore last-lrm-refresh=1432984183
crm(live)configure#
3. Define the colocation constraint
There are several ways to keep the resources running on one node: one is to define the resources into a group, another is to define colocation constraints. The group method is used here (an explicit-constraint alternative is sketched after the transcript below).
[root@node1 ~]# crm
crm(live)configure# group mdbservice mdbip mdbstore maria
crm(live)configure# show
node node1 attributes standby=off
node node2
primitive maria lsb:mysqld op monitor interval=10s timeout=20s
primitive mdbip IPaddr params ip=172.16.20.50 nic=eth0 cidr_netmask=16 op monitor interval=10s timeout=20s
primitive mdbstore Filesystem params device="172.16.20.150:/data" directory="/mydata/" fstype=nfs op monitor interval=10s timeout=40s op start timeout=60s interval=0 op stop timeout=60 interval=0
group mdbservice mdbip mdbstore maria
property cib-bootstrap-options: dc-version=1.1.11-97629de cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes=2 stonith-enabled=false no-quorum-policy=ignore last-lrm-refresh=1432984183
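A group starts its members in the listed order (and stops them in reverse), so the single group above covers both the colocation and the ordering requirements from the plan in part III. If explicit constraints were preferred instead of a group, they would look roughly like the following sketch (the constraint names are invented for illustration and do not appear in the original post):
crm(live)configure# colocation mdb_together inf: maria mdbstore mdbip
crm(live)configure# order mdb_start_order inf: mdbip mdbstore maria
crm(live)configure# verify
crm(live)configure# commit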
At this point the mariadb high-availability cluster is fully defined and can be tested.
4. Test the high availability
(1) View the cluster status
On node1 you can see that all three resources have started and are running on the same node (node2):
[root@node1 ~]# crm
crm(live)# status
Last updated: Sat May 30 19:59:49 2015
Last change: Sat May 30 19:59:32 2015
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ node1 node2 ]
 Resource Group: mdbservice
     mdbip      (ocf::heartbeat:IPaddr):        Started node2
     mdbstore   (ocf::heartbeat:Filesystem):    Started node2
     maria      (lsb:mysqld):                   Started node2
crm(live)#
(2) Test from a remote third-party host:
Use host 172.16.20.96 ([root@hot-d ~]) for the remote-login test.
In mysql, grant remote login privileges and exit:
[root@node2 ~]# mysql
mysql> GRANT ALL ON *.* TO tom@'172.16.20.96' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)
mysql> \q
Bye
[root@node2 ~]#
Log in to mysql from the new host hot-d, then test resource migration:
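The original post ends before the migration transcript. A sketch of how such a test would typically proceed, using the resources defined above (the commands are assumptions and no output is reproduced, since none exists in the source):
[root@hot-d ~]# mysql -utom -p123 -h172.16.20.50 -e 'SHOW DATABASES;'    # connect through the cluster VIP
[root@node1 ~]# crm node standby node2                                    # push the resources off the currently active node
[root@node1 ~]# crm status                                                # the mdbservice group should now be running on node1
[root@hot-d ~]# mysql -utom -p123 -h172.16.20.50 -e 'SHOW DATABASES;'    # reconnect via the same VIP; testclusterdb should still be listed
[root@node1 ~]# crm node online node2                                     # bring node2 back online when done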
Original article: http://ctrry.blog.51cto.com/9990018/1656701