Managing multiple sets of MySQL master/slave servers from a dedicated manager server
Since MHA Manager uses very little CPU and memory, a single MHA Manager can manage a large number of (master, slaves) pairs. It is even possible to manage 100+ pairs from a single manager server.
Running the manager node on one of the slaves
If you have only one (master, slaves) pair, you may not want to allocate dedicated hardware to MHA Manager, because it adds relatively high cost. In such cases, running MHA Manager on one of the slaves makes sense. Note that the current version of MHA Manager connects to the MySQL slave server via SSH even when the MySQL server runs on the same host as MHA Manager, so you need to enable SSH public key authentication from that host to itself.
          M(RW)                                   M(RW), promoted from S1
          |                                           |
  +-------+-------+     --(master crash)-->       +---+---+
S1(R)   S2(R)   S3(R)                           S2(R)   S3(R)
This is the most common replication setting, and MHA works very well here.
          M(RW)                                          M(RW), promoted from S1
          |                                                  |
  +-------+-----------+       --(master crash)-->         +--+------+
S1(R)   S2(R)   Sr(R,no_master=1)                       S2(R)   Sr(R,no_master=1)
In many cases you will want to deploy at least one slave in a remote datacenter. When the master crashes, you may not want to promote that remote slave; instead, one of the slaves in the local datacenter should become the new master. MHA supports this requirement: setting no_master=1 in the configuration file ensures that the slave never becomes the new master.
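The no_master parameter is set per server in the MHA configuration file. A minimal sketch, assuming hypothetical hostnames (host1 for M, host2/host3 for the local slaves, remote1 for Sr):

```ini
[server default]
# ssh_user, repl user/password, etc. go here

[server1]
hostname=host1    # M, the current master

[server2]
hostname=host2    # S1, local slave

[server3]
hostname=host3    # S2, local slave

[server4]
hostname=remote1  # Sr, slave in the remote datacenter
no_master=1       # never promote this slave to master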
  M(RW)-----S0(R,candidate_master=1)              M(RW), promoted from S0
    |                                               |
 +--+--+              --(master crash)-->        +--+--+
S1(R) S2(R)                                     S1(R) S2(R)
In some cases you may want a specific server to be promoted to the new master if the current master crashes. In such cases, setting candidate_master=1 in the configuration file will help.
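Like no_master, candidate_master is set in the per-server section. A minimal sketch, assuming hypothetical hostnames (host0 for the preferred candidate S0):

```ini
[server1]
hostname=host0
candidate_master=1   # prefer this server when electing a new master

[server2]
hostname=host1

[server3]
hostname=host2
```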
  M(RW)<--->M2(R,candidate_master=1)              M(RW), promoted from M2
    |                                               |
 +--+--+              --(master crash)-->        +--+--+
S1(R) S2(R)                                     S1(R) S2(R)
In some cases you may want to use multi-master configurations, and you may want to make the read-only master the new master if the current master crashes. MHA Manager supports multi-master configurations as long as all non-primary masters (M2 in this figure) are read-only.
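MHA itself does not enforce read-only on the non-primary master; you set it yourself using the standard MySQL server variable. A minimal sketch, to be run on M2:

```sql
-- On the non-primary master M2: reject writes from ordinary clients
-- (replication from M1 still applies, since it bypasses read_only)
SET GLOBAL read_only = 1;
-- To persist across restarts, also add read_only=1 to my.cnf
```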
          M(RW)                                     M(RW), promoted from S1
          |                                             |
  +-------+--------+       --(master crash)-->       +--+------+
S1(R)   S2(R)    Sr(R)                             S2(R)    Sr(R)
                   |                                          |
                  Sr2                                        Sr2
In some cases you may want to use three-tier replication like this. MHA can still be used for master failover. In the configuration file, manage the master and all second-tier slaves (in this figure, add M, S1, S2, and Sr to the MHA config file, but do not add Sr2). If the current master (M) fails, MHA automatically promotes one of the second-tier slaves (S1, S2, or Sr; you can also set priorities) to the new master and recovers the remaining second-tier slaves. The third-tier slave (Sr2) is not managed by MHA, but as long as Sr (Sr2's master) is alive, Sr2 can continue replication without any changes.
If Sr crashes, Sr2 cannot continue replication because Sr2's master is Sr. MHA can NOT be used to recover Sr2; this is where support for global transaction IDs is desired. Hopefully this is less serious than a master crash.
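Following the rule above (list M, S1, S2, and Sr, but not Sr2), the config file covers only the first two tiers. A sketch with hypothetical hostnames:

```ini
[server default]
# ssh_user, repl user/password, etc. go here

[server1]
hostname=host1    # M, the current master

[server2]
hostname=host2    # S1

[server3]
hostname=host3    # S2

[server4]
hostname=host4    # Sr -- Sr2 replicates from here and is deliberately not listed
```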
M1(host1,RW) <-----------------> M2(host2,read-only)
      |                                  |
 +----+--------+                         +
S1(host3,R)  S2(host4,R)            S3(host5,R)

=> After failover

             M2(host2,RW)
                  |
 +----------------+----------------+
S1(host3,R)  S2(host4,R)     S3(host5,R)
This structure is also supported. In this case, host5 is a third-tier slave, so MHA does not manage it (MHA does not execute CHANGE MASTER on host5 when the primary master host1 fails). When the current master host1 goes down, host2 becomes the new master, and host5 can keep replicating from host2 without any changes. Here is a configuration file example:
[server default]
multi_tier_slave=1

[server1]
hostname=host1
candidate_master=1

[server2]
hostname=host2
candidate_master=1

[server3]
hostname=host3

[server4]
hostname=host4

[server5]
hostname=host5
Original article: http://blog.csdn.net/leshami/article/details/43702495