master1:MHA Manager
master2:MariaDB master
master3:MariaDB slave
master4:MariaDB slave
The /etc/hosts file on each node:
10.201.106.131 master1 master1.com
10.201.106.132 master2 master2.com
10.201.106.133 master3 master3.com
10.201.106.134 master4 master4.com
Master node (master2) configuration:
[root@master2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
innodb_file_per_table = 1
skip_name_resolve = 1
log-bin = master-bin
relay-log = relay-bin
server_id = 1
[root@master2 ~]# systemctl start mariadb.service
Create an account with replication privileges:
MariaDB [(none)]> GRANT REPLICATION SLAVE,REPLICATION CLIENT ON *.* TO 'repluser'@'10.201.106.%' IDENTIFIED BY 'replpass';
MariaDB [(none)]> FLUSH PRIVILEGES;
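As an optional sanity check that is not part of the original post, you can confirm the grant took effect for the account and host mask just created:
-- optional check, not in the original post: verify the replication account exists
MariaDB [(none)]> SHOW GRANTS FOR 'repluser'@'10.201.106.%';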
Check the current binary log file and position:
MariaDB [(none)]> SHOW MASTER STATUS;
+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| master-bin.000001 |      498 |              |                  |
+-------------------+----------+--------------+------------------+
Slave node 1 (master3) configuration:
[root@master3 ~]# vim /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
innodb_file_per_table = 1
skip_name_resolve = 1
log-bin = master-bin
relay-log = relay-bin
server_id = 2
read_only = 1
relay_log_purge = 0
[root@master3 ~]# systemctl restart mariadb.service
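relay_log_purge = 0 keeps relay logs around so MHA can use them to fill replication gaps during a failover, and read_only keeps clients from writing to the slave. A quick way to confirm both settings took effect after the restart (an optional check, not in the original post):
# optional check, not in the original post
[root@master3 ~]# mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('read_only','relay_log_purge');"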
Copy slave 1's configuration file to slave 2:
[root@master3 ~]# scp /etc/my.cnf master4:/etc/
root@master4's password:
my.cnf 100% 713 0.7KB/s 00:00
[root@master3 ~]#
On slave 2, change server_id:
[root@master4 ~]# vim /etc/my.cnf
[mysqld]
server_id = 3
[root@master4 ~]# systemctl restart mariadb.service
Point slave 1 at the master and start replication:
[root@master3 mysql]# mysql
MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='10.201.106.132',MASTER_USER='repluser',MASTER_PASSWORD='replpass',MASTER_LOG_FILE='master-bin.000001',MASTER_LOG_POS=245;
MariaDB [(none)]> START SLAVE;
Check the status:
MariaDB [(none)]> SHOW SLAVE STATUS\G
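The full output is long; the fields worth checking are Slave_IO_Running and Slave_SQL_Running (both should read Yes) and Seconds_Behind_Master. A one-liner to pull just those fields (an optional check, not in the original post):
[root@master3 ~]# mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'   # both threads should say Yes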
Slave node 2 (master4) configuration:
[root@master4 ~]# mysql
MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='10.201.106.132',MASTER_USER='repluser',MASTER_PASSWORD='replpass',MASTER_LOG_FILE='master-bin.000001',MASTER_LOG_POS=245;
MariaDB [(none)]> START SLAVE;
Create an account on the master for MHA to manage the nodes:
[root@master2 ~]# mysql
MariaDB [(none)]> GRANT ALL ON *.* TO 'mhauser'@'10.201.106.%' IDENTIFIED BY 'mhapass';
MariaDB [(none)]> FLUSH PRIVILEGES;
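Since this GRANT runs on the master, it replicates to both slaves as well. Assuming a mysql client is installed on master1 (not shown in the original post), you can verify that the manager can log in to every node:
# assumes the mysql client is present on master1; not part of the original post
[root@master1 ~]# for h in 132 133 134; do mysql -umhauser -pmhapass -h10.201.106.$h -e 'SELECT @@hostname;'; done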
Generate an SSH key pair on the manager node:
[root@master1 ~]# ssh-keygen -t rsa -P ''
Write the public key into the authorized_keys file:
[root@master1 ~]# cat .ssh/id_rsa.pub > .ssh/authorized_keys
Set the permissions:
[root@master1 ~]# chmod 600 .ssh/authorized_keys
Copy the key files to the other three nodes:
[root@master1 ~]# scp -p .ssh/id_rsa .ssh/authorized_keys master2:/root/.ssh
id_rsa 100% 1679 1.6KB/s 00:00
authorized_keys 100% 398 0.4KB/s 00:00
[root@master1 ~]# scp -p .ssh/id_rsa .ssh/authorized_keys master3:/root/.ssh
id_rsa 100% 1679 1.6KB/s 00:00
authorized_keys 100% 398 0.4KB/s 00:00
[root@master1 ~]# scp -p .ssh/id_rsa .ssh/authorized_keys master4:/root/.ssh
id_rsa 100% 1679 1.6KB/s 00:00
authorized_keys 100% 398 0.4KB/s 00:00
[root@master1 ~]#
Test:
[root@master1 ~]# ssh master2 'ifconfig ens33' | grep 13
inet 10.201.106.132 netmask 255.255.255.0 broadcast 10.201.106.255
[root@master1 ~]# ssh master3 'ifconfig ens33' | grep 13
inet 10.201.106.133 netmask 255.255.255.0 broadcast 10.201.106.255
[root@master1 ~]# ssh master4 'ifconfig ens33' | grep 13
inet 10.201.106.134 netmask 255.255.255.0 broadcast 10.201.106.255
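The private key is copied to every node on purpose: MHA needs each node to be able to SSH to every other node (for example, to pull binlogs off a crashed master), not just manager-to-node. If you want to spot-check that full mesh before running masterha_check_ssh later, a loop like the following would do (a hypothetical check, not in the original post):
# hypothetical spot check: every data node should reach every other one without a password
[root@master1 ~]# for src in master2 master3 master4; do for dst in master2 master3 master4; do ssh $src "ssh -o StrictHostKeyChecking=no $dst hostname"; done; done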
[root@master1 ~]# yum -y install mha4mysql-*
[root@master1 ~]# scp mha4mysql-node-0.57-0.el7.noarch.rpm master2:/root
mha4mysql-node-0.57-0.el7.noarch.rpm 100% 35KB 34.5KB/s 00:00
[root@master1 ~]# scp mha4mysql-node-0.57-0.el7.noarch.rpm master3:/root
mha4mysql-node-0.57-0.el7.noarch.rpm 100% 35KB 34.5KB/s 00:00
[root@master1 ~]# scp mha4mysql-node-0.57-0.el7.noarch.rpm master4:/root
mha4mysql-node-0.57-0.el7.noarch.rpm 100% 35KB 34.5KB/s 00:00
[root@master2 ~]# yum -y install mha4mysql-node-0.57-0.el7.noarch.rpm
[root@master3 ~]# yum -y install mha4mysql-node-0.57-0.el7.noarch.rpm
[root@master4 ~]# yum -y install mha4mysql-node-0.57-0.el7.noarch.rpm
[root@master1 ~]# rpm -ql mha4mysql-manager
List the files in the node package:
[root@master1 ~]# rpm -ql mha4mysql-node
[root@master1 ~]# mkdir /etc/masterha
[root@master1 ~]# vim /etc/masterha/app1.cnf
[server default]
user=mhauser
password=mhapass
manager_workdir=/data/masterha/app1
manager_log=/data/masterha/app1/manager.log
remote_workdir=/data/masterha/app1
ssh_user=root
repl_user=repluser
repl_password=replpass
ping_interval=1
[server1]
hostname=10.201.106.132
[server2]
hostname=10.201.106.133
candidate_master=1
[server3]
hostname=10.201.106.134
[root@master1 ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Wed Jan 18 11:29:21 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Jan 18 11:29:21 2017 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Jan 18 11:29:21 2017 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Jan 18 11:29:21 2017 - [info] Starting SSH connection tests..
Wed Jan 18 11:29:22 2017 - [debug]
Wed Jan 18 11:29:21 2017 - [debug] Connecting via SSH from root@10.201.106.132(10.201.106.132:22) to root@10.201.106.133(10.201.106.133:22)..
Wed Jan 18 11:29:21 2017 - [debug] ok.
Wed Jan 18 11:29:21 2017 - [debug] Connecting via SSH from root@10.201.106.132(10.201.106.132:22) to root@10.201.106.134(10.201.106.134:22)..
Wed Jan 18 11:29:22 2017 - [debug] ok.
Wed Jan 18 11:29:23 2017 - [debug]
Wed Jan 18 11:29:21 2017 - [debug] Connecting via SSH from root@10.201.106.133(10.201.106.133:22) to root@10.201.106.132(10.201.106.132:22)..
Wed Jan 18 11:29:22 2017 - [debug] ok.
Wed Jan 18 11:29:22 2017 - [debug] Connecting via SSH from root@10.201.106.133(10.201.106.133:22) to root@10.201.106.134(10.201.106.134:22)..
Wed Jan 18 11:29:23 2017 - [debug] ok.
Wed Jan 18 11:29:24 2017 - [debug]
Wed Jan 18 11:29:22 2017 - [debug] Connecting via SSH from root@10.201.106.134(10.201.106.134:22) to root@10.201.106.132(10.201.106.132:22)..
Wed Jan 18 11:29:23 2017 - [debug] ok.
Wed Jan 18 11:29:23 2017 - [debug] Connecting via SSH from root@10.201.106.134(10.201.106.134:22) to root@10.201.106.133(10.201.106.133:22)..
Wed Jan 18 11:29:24 2017 - [debug] ok.
Wed Jan 18 11:29:24 2017 - [info] All SSH connection tests passed successfully.
[root@master1 masterha]# masterha_check_repl --conf=/etc/masterha/app1.cnf
……
MySQL Replication Health is OK.
[root@master1 masterha]# masterha_manager --conf=/etc/masterha/app1.cnf
Wed Jan 18 16:24:30 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Jan 18 16:24:30 2017 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Jan 18 16:24:30 2017 - [info] Reading server configuration from /etc/masterha/app1.cnf..
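masterha_manager runs in the foreground and blocks the terminal. A common pattern from the MHA documentation (not shown in the original post) is to run it in the background and send its output to the work directory defined in app1.cnf:
# optional: run the manager in the background, logging to the workdir from app1.cnf
[root@master1 ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf < /dev/null > /data/masterha/app1/manager.log 2>&1 &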
Kill the mysqld processes on the master to simulate a failure:
[root@master2 mysql]# killall mysqld mysqld_safe
Check on slave 2:
MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.201.106.133
On slave 1 (now the promoted master), read_only has been turned off:
MariaDB [(none)]> SHOW GLOBAL VARIABLES LIKE 'read_only';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only | OFF |
+---------------+-------+
How to add a new slave (for example, re-adding the failed old master):
1、Take a backup on the new master and note the binary log file name and position;
2、Import the backup on the new slave and start replicating from that position (a sketch follows this list);
3、Configure the new slave's my.cnf (read_only, relay_log_purge), run CHANGE MASTER TO against the new master, and start the slave threads.
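A minimal sketch of steps 1 and 2, assuming mysqldump is acceptable for the data size and that root can run it without extra options; the original post instead wipes the node and replicates from the beginning of the binlog, as shown below:
# step 1: on the new master (10.201.106.133), take a backup that records the binlog coordinates
[root@master3 ~]# mysqldump --all-databases --single-transaction --master-data=2 > /tmp/all.sql
# step 2: copy it to the node being added and restore it there
[root@master3 ~]# scp /tmp/all.sql master2:/tmp/
[root@master2 ~]# mysql < /tmp/all.sql
# the CHANGE MASTER TO coordinates are recorded as a comment near the top of the dump
[root@master2 ~]# head -n 30 /tmp/all.sql | grep 'CHANGE MASTER'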
[root@master2 ~]# vim /etc/my.cnf
[mysqld]
read_only = 1
relay_log_purge = 0
Delete the old node's data and reinitialize it. Never do this in a production environment:
[root@master2 ~]# rm -rf /var/lib/mysql/*
Start the database:
[root@master2 ~]# systemctl start mariadb.service
Configure the new slave:
Set it to replicate from the very beginning (no binlog file or position given):
MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='10.201.106.133',MASTER_USER='repluser',MASTER_PASSWORD='replpass';
MariaDB [(none)]> GRANT ALL ON *.* TO 'mhauser'@'10.201.106.%' IDENTIFIED BY 'mhapass';
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> GRANT REPLICATION SLAVE,REPLICATION CLIENT ON *.* TO 'repluser'@'10.201.106.%' IDENTIFIED BY 'replpass';
Start the slave threads:
MariaDB [(none)]> START SLAVE;
[root@master1 masterha]# masterha_check_repl --conf=/etc/masterha/app1.cnf
MySQL Replication Health is OK.
[root@master1 masterha]# masterha_manager --conf=/etc/masterha/app1.cnf
[root@master1 ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:31234) is running(0:PING_OK), master:10.201.106.133
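To stop monitoring cleanly (for example before planned maintenance), the manager package ships a matching stop command:
[root@master1 ~]# masterha_stop --conf=/etc/masterha/app1.cnf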
Original article: http://blog.51cto.com/zhongle21/2089220