I. MySQL Cluster Topology Environment
Management:ClusterManager 222.9.9.161
ndb1:Clusterndb1 222.9.9.162
ndb2:Clusterndb2 222.9.9.163
sql1:Clustersql1 222.9.9.164
sql2:Clustersql2 222.9.9.165
All servers in the topology are clones of the same Hyper-V virtual machine image with only a base CentOS 6.4 operating system, so some basic configuration is needed first.
1. Configure the NIC IP address (this can be done directly in the network connection manager: delete the original "System eth0" entry and create a new wired connection), or from the command line as sketched below
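For reference, a sketch of the same change from the command line on CentOS 6, assuming the interface is eth0 and using the ClusterManager address from the topology above (adjust the IP per node; the netmask is an assumption):
[root@ClusterManager ~]# gedit /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=222.9.9.161
NETMASK=255.255.255.0
[root@ClusterManager ~]# service network restart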
2. Back up my.cnf
[root@ClusterManager ~]# mv /etc/my.cnf /etc/my.cnf.bak
3. Configure /etc/hosts
[root@ClusterManager ~]# gedit /etc/hosts
222.9.9.161 ClusterManager
222.9.9.162 Clusterndb1
222.9.9.163 Clusterndb2
222.9.9.164 Clustersql1
222.9.9.165 Clustersql2
Save and exit (this must be done on all servers; the steps are the same).
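The shell prompts in this guide show each machine's hostname (ClusterManager, Clusterndb1 and so on), so also set the hostname to match the /etc/hosts entries; a sketch for ClusterManager on CentOS 6, assuming the standard sysconfig file:
[root@ClusterManager ~]# gedit /etc/sysconfig/network
HOSTNAME=ClusterManager
[root@ClusterManager ~]# hostname ClusterManager
The second command applies the name immediately; the file makes it persistent across reboots.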
Reboot the server.
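The cluster nodes need unrestricted TCP access to each other (the management port 1186, MySQL port 3306, and the ports used by the data nodes). If node connections fail later, the default CentOS 6 firewall and SELinux are the usual suspects; for a lab environment such as this one, a blunt option (an assumption, not a production recommendation) is:
[root@ClusterManager ~]# service iptables stop
[root@ClusterManager ~]# chkconfig iptables off
[root@ClusterManager ~]# setenforce 0
(also set SELINUX=disabled in /etc/selinux/config to make this permanent)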
II. Configure Passwordless SSH Login
1. Generate the public and private keys (this step must be performed on every server)
[root@ClusterManager ~]# ssh-keygen -t rsa
Press Enter at every prompt.
[root@ClusterManager ~]# cd /root/.ssh/
[root@ClusterManager ~]# ll
Two files are generated in this directory: id_rsa (the private key) and id_rsa.pub (the public key).
2. Import the public key into the authentication file and change permissions
[root@ClusterManager ~]# mv ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
[root@ClusterManager ~]# chmod 700 ~/.ssh
[root@ClusterManager ~]# chmod 600 ~/.ssh/authorized_keys
3. Copy the authentication file to the other four servers
[root@ClusterManager ~]# scp ~/.ssh/authorized_keys root@Clusterndb1:/root/.ssh/
[root@ClusterManager ~]# scp ~/.ssh/authorized_keys root@Clusterndb2:/root/.ssh/
[root@ClusterManager ~]# scp ~/.ssh/authorized_keys root@Clustersql1:/root/.ssh/
[root@ClusterManager ~]# scp ~/.ssh/authorized_keys root@Clustersql2:/root/.ssh/
You will be prompted for the root password of each of the four servers; type it and press Enter.
4. Test
[root@ClusterManager ~]# ssh Clusterndb1
Last login: Tue Sep 22 10:57:45 2015 from clustermanager
[root@Clusterndb1 ~]#
This indicates the configuration is complete; test the other three servers the same way.
5. Copy everything under the /data/ directory to the other four servers (this directory holds the JDK and MySQL Cluster source tarballs used in the steps below)
[root@ClusterManager ~]# scp -r /data/* root@Clusterndb1:/data/
[root@ClusterManager ~]# scp -r /data/* root@Clusterndb2:/data/
[root@ClusterManager ~]# scp -r /data/* root@Clustersql1:/data/
[root@ClusterManager ~]# scp -r /data/* root@Clustersql2:/data/
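Optionally, a quick check that the files arrived, using the passwordless login configured above:
[root@ClusterManager ~]# ssh Clusterndb1 'ls -lh /data/'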
III. Deploy ClusterManager (steps 1-3 are the same when deploying the ndb1, ndb2, sql1 and sql2 nodes)
1. Install build dependencies
[root@ClusterManager ~]# yum install -y cmake gcc gcc-c++ bison ncurses-devel
2. Configure the Java environment
[root@ClusterManager ~]# cd /data
[root@ClusterManager data]# mkdir -p /usr/local/java
[root@ClusterManager data]# tar zxf jdk-7u7-linux-i586.tar.gz -C /usr/local/java
Open a second terminal:
[root@ClusterManager ~]# gedit /etc/profile
export JAVA_HOME=/usr/local/java/jdk1.7.0_07
export CLASSPATH=/usr/local/java/jdk1.7.0_07/lib
export PATH=$JAVA_HOME/bin:$PATH
Save and exit.
[root@ClusterManager ~]# source /etc/profile
[root@ClusterManager ~]# java -version
java version "1.7.0_09-icedtea"
OpenJDK Runtime Environment (rhel-2.3.4.1.el6_3-i386)
OpenJDK Server VM (build 23.2-b09, mixed mode)
Output like this shows that a Java runtime is available and the profile changes have taken effect; see the check below if the reported version is not the JDK just installed.
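If the version string still reports the system OpenJDK (as in the sample output above) rather than the JDK just extracted to /usr/local/java, the new PATH entry may not have taken precedence; a quick check:
[root@ClusterManager ~]# which java
[root@ClusterManager ~]# /usr/local/java/jdk1.7.0_07/bin/java -version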
3. Install mysql-cluster
Switch back to the first terminal:
[root@ClusterManager data]# tar zxf mysql-cluster-gpl-7.2.8.tar.gz
[root@ClusterManager data]# cd mysql-cluster-gpl-7.2.8
[root@ClusterManager mysql-cluster-gpl-7.2.8]# cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DWITH_INNOBASE_STORAGE_ENGINE=ON -DWITH_MYISAM_STORAGE_ENGINE=1 -DEXTRA_CHARSETS=all -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci
[root@ClusterManager mysql-cluster-gpl-7.2.8]# make
[root@ClusterManager mysql-cluster-gpl-7.2.8]# make install
[root@ClusterManager mysql-cluster-gpl-7.2.8]# cd
4. Configure the management node
[root@ClusterManager ~]# groupadd mysql
[root@ClusterManager ~]# useradd mysql -g mysql
[root@ClusterManager ~]# cd /usr/local/mysql
[root@ClusterManager mysql]# mkdir -p clusterconf
[root@ClusterManager mysql]# cp support-files/ndb-config-2-node.ini /usr/local/mysql/clusterconf/ndb-config.ini
[root@ClusterManager mysql]# chown mysql:mysql -R /usr/local/mysql/
[root@ClusterManager mysql]# cd clusterconf/
[root@ClusterManager clusterconf]# gedit ndb-config.ini
—————————————————————————————————
# Copyright (c) 2006, 2010, Oracle and/or its affiliates. All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
# Example Ndbcluster storage engine config file.
#
[ndbd default]
NoOfReplicas= 2
MaxNoOfConcurrentOperations= 10000
DataMemory= 80M
IndexMemory= 24M
TimeBetweenWatchDogCheck= 30000
DataDir= /usr/local/mysql/data
MaxNoOfOrderedIndexes= 512
[ndb_mgmd default]
DataDir= /usr/local/mysql/data
[ndb_mgmd]
Id=1
HostName= 222.9.9.161
DataDir= /usr/local/mysql/data
[ndbd]
Id= 2
HostName= 222.9.9.162
DataDir= /usr/local/mysql/ndbdata
[ndbd]
Id= 3
HostName= 222.9.9.163
DataDir= /usr/local/mysql/ndbdata
[mysqld]
Id= 4
HostName= 222.9.9.164
[mysqld]
Id= 5
HostName= 222.9.9.165
[mysqld]
Id= 6
[mysqld]
Id= 7
# choose an unused port number
# in this configuration 63132, 63133, and 63134
# will be used
[tcp default]
PortNumber= 63132
——————————————————————————————————
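Before starting anything, the file can be sanity-checked by asking ndb_mgmd to parse it and print the resulting configuration. This is only an optional verification sketch; if this version complains about a configuration directory, add --configdir=/usr/local/mysql/mysql-cluster or simply skip the check:
[root@ClusterManager clusterconf]# /usr/local/mysql/bin/ndb_mgmd -f /usr/local/mysql/clusterconf/ndb-config.ini --print-full-config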
IV. Deploy Clusterndb1 and Clusterndb2
1. Same as step 1 of the ClusterManager deployment
2. Same as step 2 of the ClusterManager deployment
3. Same as step 3 of the ClusterManager deployment
4. Configure Clusterndb1
[root@Clusterndb1 ~]# groupadd mysql
[root@Clusterndb1 ~]# useradd mysql -g mysql
[root@Clusterndb1 ~]# cd /usr/local/mysql
[root@Clusterndb1 mysql]# mkdir -p ndbdata
[root@Clusterndb1 mysql]# chown mysql:mysql -R /usr/local/mysql/ndbdata
[root@Clusterndb1 mysql]# cp support-files/my-medium.cnf /etc/my.cnf
[root@Clusterndb1 mysql]# gedit /etc/my.cnf
Append the following at the end:
[mysqld]
datadir=/usr/local/mysql/ndbdata
ndbcluster
ndb-connectstring=222.9.9.161
[mysql_cluster]
ndb-connectstring=222.9.9.161
Also change server-id to 2 (the my-medium.cnf template sets server-id = 1 in its [mysqld] section).
Save and exit.
5. Configure Clusterndb2
[root@Clusterndb2 ~]# groupadd mysql
[root@Clusterndb2 ~]# useradd mysql -g mysql
[root@Clusterndb2 ~]# cd /usr/local/mysql
[root@Clusterndb2 mysql]# mkdir -p ndbdata
[root@Clusterndb2 mysql]# chown mysql:mysql -R /usr/local/mysql/ndbdata
[root@Clusterndb2 mysql]# cp support-files/my-medium.cnf /etc/my.cnf
[root@Clusterndb2 mysql]# gedit /etc/my.cnf
Append the following at the end:
[mysqld]
datadir=/usr/local/mysql/ndbdata
ndbcluster
ndb-connectstring=222.9.9.161
[mysql_cluster]
ndb-connectstring=222.9.9.161
Also change server-id to 3.
Save and exit.
V. Deploy Clustersql1 and Clustersql2
1. Same as step 1 of the ClusterManager deployment
2. Same as step 2 of the ClusterManager deployment
3. Same as step 3 of the ClusterManager deployment
4. Configure Clustersql1
[root@Clustersql1 ~]# groupadd mysql
[root@Clustersql1 ~]# useradd mysql -g mysql
[root@Clustersql1 ~]# cd /usr/local/mysql
[root@Clustersql1 mysql]# cp support-files/my-medium.cnf /etc/my.cnf
Edit /etc/my.cnf and append the following at the end (the datadir must match the directory initialized by mysql_install_db below):
[mysqld]
datadir=/usr/local/mysql/data
ndbcluster
default-storage-engine=ndbcluster
[mysql_cluster]
ndb-connectstring=222.9.9.161
Also change server-id to 4.
Save and exit.
Initialize the database:
[root@Clustersql1 mysql]# /usr/local/mysql/scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
5. Configure Clustersql2
[root@Clustersql2 ~]# groupadd mysql
[root@Clustersql2 ~]# useradd mysql -g mysql
[root@Clustersql2 ~]# cd /usr/local/mysql
[root@Clustersql2 mysql]# cp support-files/my-medium.cnf /etc/my.cnf
Edit /etc/my.cnf and append the following at the end (again, the datadir must match the directory initialized by mysql_install_db below):
[mysqld]
datadir=/usr/local/mysql/data
ndbcluster
default-storage-engine=ndbcluster
[mysql_cluster]
ndb-connectstring=222.9.9.161
Also change server-id to 5.
Save and exit.
Initialize the database:
[root@Clustersql2 mysql]# /usr/local/mysql/scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
VI. Start the MySQL Cluster
The correct startup order is: management node → data nodes → SQL nodes.
The shutdown order is the reverse: SQL nodes → data nodes → management node (see the sketch below).
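Only the startup commands are shown below, so for completeness here is a sketch of a clean shutdown in that order; ndb_mgm's SHUTDOWN command stops the data nodes and the management server together:
[root@Clustersql1 ~]# service mysqld stop
[root@Clustersql2 ~]# service mysqld stop
[root@ClusterManager ~]# /usr/local/mysql/bin/ndb_mgm -e shutdown
(service mysqld stop assumes the init script set up in step 3 below; otherwise stop mysqld some other way, e.g. with mysqladmin shutdown)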
1. Start the management node
[root@ClusterManager ~]# /usr/local/mysql/bin/ndb_mgmd -f /usr/local/mysql/clusterconf/ndb-config.ini
If you want it to start automatically at every boot, add the command to /etc/rc.d/rc.local, for example:
[root@ClusterManager ~]# gedit /etc/rc.d/rc.local
After the line touch /var/lock/subsys/local, add the command on a new line:
/usr/local/mysql/bin/ndb_mgmd -f /usr/local/mysql/clusterconf/ndb-config.ini
Save and exit.
2. Start the data nodes
Add the --initial option only for the first start or when re-initializing a data node; omit it afterwards.
[root@Clusterndb1 ~]# /usr/local/mysql/bin/ndbd --initial
2015-09-22 15:48:17 [ndbd] INFO -- Angel connected to '222.9.9.161:1186'
2015-09-22 15:48:17 [ndbd] INFO -- Angel allocated nodeid: 2
[root@Clusterndb2 ~]# /usr/local/mysql/bin/ndbd --initial
2015-09-22 15:48:18 [ndbd] INFO -- Angel connected to '222.9.9.161:1186'
2015-09-22 15:48:18 [ndbd] INFO -- Angel allocated nodeid: 3
Startup at boot can likewise be configured by adding the following line at the end of /etc/rc.d/rc.local on each data node:
/usr/local/mysql/bin/ndbd
3. Start the SQL nodes
Since the databases were already initialized during configuration, they can be started directly:
[root@Clustersql1 ~]# /usr/local/mysql/bin/mysqld_safe &
[root@Clustersql2 ~]# /usr/local/mysql/bin/mysqld_safe &
Alternatively, set it up as a system service:
[root@Clustersql1 ~]# cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
[root@Clustersql1 ~]# chmod +x /etc/init.d/mysqld
[root@Clustersql1 ~]# chkconfig mysqld on
Configure Clustersql2 the same way; afterwards the SQL nodes can be started with:
[root@Clustersql1 ~]# service mysqld start
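A quick way to confirm that an SQL node actually has NDB support enabled is to list its storage engines and look for ndbcluster (using the full client path, since the mysql symlink is only created in section VII):
[root@Clustersql1 ~]# /usr/local/mysql/bin/mysql -u root -e "show engines;" | grep -i ndb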
4. Log in to the management node and check the cluster
[root@ClusterManager ~]# /usr/local/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm>show
Connected to Management Server at: localhost:1186
Cluster Configuration
------------------------
[ndbd(NDB)]2 node(s)
id=2@222.9.9.162(mysql-5.5.27 ndb-7.2.8, Nodegroup: 0, Master)
id=3@222.9.9.163(mysql-5.5.27 ndb-7.2.8, Nodegroup: 0)
[ndb_mgmd(MGM)]1 node(s)
id=1@222.9.9.161(mysql-5.5.27 ndb-7.2.8)
[mysqld(API)]4 node(s)
id=4@222.9.9.164(mysql-5.5.27 ndb-7.2.8)
id=5@222.9.9.165(mysql-5.5.27 ndb-7.2.8)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
ndb_mgm>exit
VII. Basic High-Availability Tests
1. Data synchronization test
Perform database and table writes on sql1.
First link the mysql client into the system path:
[root@Clustersql1 ~]# ln -s /usr/local/mysql/bin/mysql /usr/bin
Enter the mysql> prompt:
[root@Clustersql1 ~]# mysql -u root
mysql>
Create the test1 database and switch to it:
mysql> create database test1;
Query OK , 1 row affected (0.13 sec)
mysql> use test1;
Database changed
Create the tab1 table:
mysql> create table tab1(id int(4));
Query OK , 0 rows affected (1.70 sec)
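The table lands in the NDB engine only because default-storage-engine=ndbcluster was set in the SQL nodes' my.cnf; to be explicit and independent of that setting, the engine can be named in the statement, for example:
mysql> create table tab1(id int(4)) engine=ndbcluster;
Only NDBCLUSTER tables are stored on the data nodes and therefore visible from every SQL node; tables created with a local engine such as MyISAM or InnoDB are not shared.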
Insert test records:
mysql> insert into test1.tab1 values(1234),(5678);
Query OK , 2 rows affected (0.01 sec)
Records: 2 Duplicates : 0 Warnings : 0
确认添加的表记录
mysql> select * from tab1;
+---------+
| id |
+---------+
| 1234 |
| 5678 |
+---------+
2 rows in set (0.00 sec)
mysql>
Confirm the result on sql2:
[root@Clustersql2 ~]# ln -s /usr/local/mysql/bin/mysql /usr/bin
[root@Clustersql2 ~]# mysql -u root
mysql> show databases;
+------------------------+
| Database |
+------------------------+
| information_schema |
| mysql |
| ndbinfo |
| performance_schema |
| test |
| test1 |
+------------------------+
6 rows in set (0.00 sec)
The output above confirms that the test1 database is visible.
Switch to the test1 database:
mysql> use test1;
Database changed
mysql> select * from tab1;
+---------+
| id |
+---------+
| 1234 |
| 5678 |
+---------+
2 rows in set (0.04 sec)
Confirm that the table records seen here are identical.
2. Data node high-availability test
Shut down the data node Clusterndb1:
[root@Clusterndb1 ~]# netstat -anpt|grep ndbd
[root@Clusterndb1 ~]# killall -9 ndbd
Check the cluster status on the management node:
[root@ClusterManager ~]# /usr/local/mysql/bin/ndb_mgm
ndb_mgm>show
Connected to Management Server at: localhost:1186
Cluster Configuration
------------------------
[ndbd(NDB)]2 node(s)
id=2 (not connected, accepting connect from 222.9.9.162)
id=3@222.9.9.163(mysql-5.5.27 ndb-7.2.8, Nodegroup: 0, Master)
[ndb_mgmd(MGM)]1 node(s)
id=1@222.9.9.161(mysql-5.5.27 ndb-7.2.8)
[mysqld(API)]4 node(s)
id=4@222.9.9.164(mysql-5.5.27 ndb-7.2.8)
id=5@222.9.9.165(mysql-5.5.27 ndb-7.2.8)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
Clusterndb1 is now disconnected and Clusterndb2 has taken over as Master.
Read and write the database from sql1 or sql2, for example by adding one more record to test1.tab1:
mysql> insert into test1.tab1 values(4021);
Query OK , 1 row affected (0.00 sec)
mysql> select * from test1.tab1;
+---------+
| id |
+---------+
| 1234 |
| 4021 |
| 5678 |
+---------+
3 rows in set (0.00 sec)
This test shows that as long as one data node remains available, the MySQL database as a whole stays available.
Restart the ndbd service on the Clusterndb1 node:
[root@Clusterndb1 ~]# /usr/local/mysql/bin/ndbd
2015-09-23 09:29:46 [ndbd] INFO -- Angel connected to '222.9.9.161:1186'
2015-09-23 09:29:46 [ndbd] INFO -- Angel allocated nodeid: 2
Check the cluster status on the management node:
ndb_mgm>show
Connected to Management Server at: localhost:1186
Cluster Configuration
------------------------
[ndbd(NDB)]2 node(s)
id=2@222.9.9.162(mysql-5.5.27 ndb-7.2.8, starting, Nodegroup: 0)
id=3@222.9.9.163(mysql-5.5.27 ndb-7.2.8, Nodegroup: 0, Master)
[ndb_mgmd(MGM)]1 node(s)
id=1@222.9.9.161(mysql-5.5.27 ndb-7.2.8)
[mysqld(API)]4 node(s)
id=4@222.9.9.164(mysql-5.5.27 ndb-7.2.8)
id=5@222.9.9.165(mysql-5.5.27 ndb-7.2.8)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
Clusterndb1 has rejoined the cluster, but it does not take the Master role back from Clusterndb2.
Next, shut down the Clusterndb2 node:
[root@Clusterndb2 ~]# netstat -anpt|grep ndbd
[root@Clusterndb2 ~]# killall -9 ndbd
Check the cluster status on the management node:
ndb_mgm>show
Cluster Configuration
------------------------
[ndbd(NDB)]2 node(s)
id=2@222.9.9.162(mysql-5.5.27 ndb-7.2.8, Nodegroup: 0, Master)
id=3 (not connected, accepting connect from 222.9.9.163)
[ndb_mgmd(MGM)]1 node(s)
id=1@222.9.9.161(mysql-5.5.27 ndb-7.2.8)
[mysqld(API)]4 node(s)
id=4@222.9.9.164(mysql-5.5.27 ndb-7.2.8)
id=5@222.9.9.165(mysql-5.5.27 ndb-7.2.8)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
Clusterndb2 is disconnected and Clusterndb1 has taken over as Master.
Confirm the result on Clustersql1 or Clustersql2:
mysql> select * from test1.tab1;
+---------+
| id |
+---------+
| 1234 |
| 4021 |
| 5678 |
+---------+
3 rows in set (0.00 sec)
This test shows that a data node that went down due to a failure (Clusterndb1) resynchronizes its data from the healthy data node (Clusterndb2) as soon as it recovers.
3. SQL node high-availability test
Shut down the Clustersql1 node:
[root@Clustersql1 ~]# netstat -anptu |grep mysql
[root@Clustersql1 ~]# service mysqld stop
Check the cluster status on the management node:
Cluster Configuration
------------------------
[ndbd(NDB)]2 node(s)
id=2@222.9.9.162(mysql-5.5.27 ndb-7.2.8, Nodegroup: 0, Master)
id=3 (not connected, accepting connect from 222.9.9.163)
[ndb_mgmd(MGM)]1 node(s)
id=1@222.9.9.161(mysql-5.5.27 ndb-7.2.8)
[mysqld(API)]4 node(s)
id=4 (not connected, accepting connect from 222.9.9.164)
id=5@222.9.9.165(mysql-5.5.27 ndb-7.2.8)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
Since Clusterndb2 was shut down earlier, the cluster now consists of only the management node, Clusterndb1 and Clustersql2; verify that it is still usable.
Confirm the result on Clustersql2:
mysql> select * from test1.tab1;
+---------+
| id |
+---------+
| 1234 |
| 4021 |
| 5678 |
+---------+
3 rows in set (0.01 sec)
The result is correct.
Likewise, start Clustersql1 again, shut down Clustersql2, and confirm the result on Clustersql1:
mysql> select * from test1.tab1;
+---------+
| id |
+---------+
| 1234 |
| 4021 |
| 5678 |
+---------+
3 rows in set (0.01 sec)
The result is also correct.
Testing complete.
This article originally appeared on the "伊塔吉" blog; original source: http://jlupp1234.blog.51cto.com/3379002/1703555