Tags: high-availability distributed storage (corosync+pacemaker+drbd+moosefs)
Configuration steps:
(1) Install and configure DRBD; compile and install the Master Server
(2) Install and configure corosync+pacemaker using pcs
(3) Install crmsh and configure the mfs+DRBD+corosync+pacemaker high-availability cluster
(4) Compile and install the Chunk Server and Metalogger hosts
(5) Install the mfs client and test the high-availability cluster
(Personally I recommend installing DRBD first, then the Master Server, and only then the Chunk Server and Metalogger hosts. In an earlier attempt the mounted directory refused writes; after troubleshooting, I ended up reformatting the DRBD-backed disk and reinstalling the Chunk Server and Metalogger hosts.)
DRBD:
DRBD is a software-based, shared-nothing storage replication solution that mirrors the contents of block devices between servers. Data mirroring is real-time and transparent, and can be synchronous (a write returns only after all servers have committed it) or asynchronous (a write returns as soon as the local server has committed it). DRBD's core functionality is implemented in the Linux kernel, at the bottom of the I/O stack, but it cannot magically add upper-layer features such as detecting an ext3 filesystem crash. DRBD sits below the filesystem, closer to the kernel and the I/O stack than the filesystem itself.
MooseFS:
MooseFS (mfs) is often described as object-like storage; it offers strong scalability, high reliability and durability. It distributes files across different physical machines while presenting them to clients as a single transparent pool of storage. It also supports online expansion (a major advantage), chunked file storage, and efficient reads and writes.
The MFS distributed filesystem consists of a metadata server (Master Server), a metadata log server (Metalogger Server), data storage servers (Chunk Servers), and clients.
(1) Master Server: the core of an MFS system. It stores the metadata of every file and handles read/write scheduling, space reclamation, and data replication between chunk servers. MFS currently supports only a single metadata server, so it is a potential single point of failure. To reduce that risk, the metadata server should run on very stable hardware.
(2) Metalogger Server: a backup node for the metadata server. At a configured interval it downloads the files holding metadata, changelogs, and session information from the metadata server to a local directory. If the metadata server fails, the information needed to recover the whole system can be taken from these files.
In addition, backing up via the metalogger is a conventional log-based backup technique; in some failure scenarios it cannot take over the service seamlessly and data loss can still occur. This article therefore builds an active/standby pair for the metadata node on top of a shared (DRBD-replicated) disk instead.
(3) Chunk Server: connects to the metadata server, follows its scheduling, provides storage space, and serves data to clients. MooseFS lets you set a per-directory replica count; if the count is n, every chunk of a file written into the system is replicated to n different chunk servers. Increasing the replica count does not hurt write performance, while it improves read performance and availability; in effect, storage capacity is traded for read performance and availability.
(4) Client: uses mfsmount, via the FUSE kernel interface, to mount the storage managed by the remote master server onto a local directory; from then on the MFS filesystem can be used like any local filesystem.
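The per-directory replica count ("goal") mentioned in (3) is managed with the goal tools that ship with the mfs client. A minimal sketch, assuming an MFS filesystem is already mounted at /mfsdata as it is later in this article (paths and goal value are illustrative):

```shell
# Set the goal (number of copies) recursively for a directory tree.
/usr/local/mfs/bin/mfssetgoal -r 3 /mfsdata

# Verify the goal now in effect.
/usr/local/mfs/bin/mfsgetgoal /mfsdata

# Show on which chunkservers each chunk of a file is actually stored.
/usr/local/mfs/bin/mfsfileinfo /mfsdata/a.txt
```

Raising the goal only takes effect for chunks the system can still replicate, so it needs at least as many chunk servers as the goal value.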
Personal notes:
Distributed storage relies on metadata for scheduling, so the metadata service itself must also be made highly available.
ceph: cloud-oriented (OpenStack, Kubernetes); relatively new at the time, possibly not yet stable.
glusterfs: good for large files; supports block devices and FUSE, mounts directly.
mogilefs: high performance for huge numbers of small files, but its FUSE support is weak and fiddly; supports object storage accessed through language APIs, and having an API is its biggest strength.
fastDFS: a C reimplementation of mogilefs, developed in China; no FUSE support; keeps data in memory, so it is very fast with huge numbers of small files (but that is also a significant limitation).
HDFS: huge large files (modeled on Google's GFS).
moosefs: (the focus of this article, popular in China) stores huge numbers of small files, supports FUSE; adding a standby server that takes over the metadata server's IP effectively gives you HA.
Common high-availability cluster stacks:
Heartbeat+Pacemaker: being phased out
CMAN+rgmanager
CMAN+Pacemaker
Corosync+Pacemaker (corosync only passes messages and does heartbeat detection, nothing else; Pacemaker acts purely as the resource manager)
CMAN+CLVM (usually for block-device HA; CMAN is also being phased out, since corosync has a superior quorum/voting mechanism)
Environment:
OS: CentOS 7
Yum repo: http://mirrors.aliyun.com/repo/
cml1 = Master Server (master): 192.168.5.101 (VIP: 192.168.5.200)
cml2 = Master Server (slave): 192.168.5.102
cml3 = Chunk Server: 192.168.5.104
cml4 = Chunk Server: 192.168.5.105
cml5 = Metalogger Server: 192.168.5.103
cml6 = Client: 192.168.5.129
1. Edit /etc/hosts so the nodes can resolve each other:
[root@cml1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.5.101 cml1 mfsmaster
192.168.5.102 cml2
192.168.5.103 cml5
192.168.5.104 cml3
192.168.5.105 cml4
192.168.5.129 cml6
2. Set up passwordless SSH between the nodes:
[root@cml1 ~]# ssh-keygen
[root@cml1 ~]# ssh-copy-id cml2
3. Set up clock synchronization:
[root@cml1 ~]# crontab -l
*/5 * * * * ntpdate cn.pool.ntp.org
4. Install DRBD:
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# yum install -y kmod-drbd84 drbd84-utils
5. Main configuration files:
/etc/drbd.conf                   # main configuration file
/etc/drbd.d/global_common.conf   # global configuration file
6. View the main configuration file:
[root@cml1 ~]# cat /etc/drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
7. The global configuration file explained:
[root@cml1 ~]# vim /etc/drbd.d/global_common.conf
global {
    usage-count no;   # whether to take part in DRBD usage statistics (default yes); the project uses it to count installs
    # minor-count dialog-refresh disable-ip-verification
}
common {
    protocol C;       # DRBD replication protocol
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        on-io-error detach;   # on an I/O error, detach the backing device
        # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
    }
    net {
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
    }
    syncer {
        rate 1024M;   # network bandwidth cap for primary/secondary synchronization
    }
}
8. Create the resource file:
[root@cml1 ~]# cat /etc/drbd.d/mfs.res
resource mfs {
    protocol C;
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    on cml1 {
        disk /dev/sdb1;
        address 192.168.5.101:7789;
    }
    on cml2 {
        disk /dev/sdb1;
        address 192.168.5.102:7789;
    }
}
9. Copy the configuration files to the other node:
scp -rp /etc/drbd.d/* cml2:/etc/drbd.d/
10. Initialize and bring up DRBD on cml1:
[root@cml1 ~]# drbdadm create-md mfs
initializing activity log
initializing bitmap (160 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[root@cml1 ~]# modprobe drbd
##Check that the kernel module is loaded:
[root@cml1 drbd.d]# lsmod | grep drbd
drbd                  396875  1
libcrc32c              12644  4 xfs,drbd,ip_vs,nf_conntrack
[root@cml1 ~]# drbdadm up mfs
[root@cml1 ~]# drbdadm -- --force primary mfs
Check the status:
[root@cml1 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----s
    ns:0 nr:0 dw:0 dr:912 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:5240636
11. On the peer node (cml2):
[root@cml2 ~]# drbdadm create-md mfs
[root@cml2 ~]# modprobe drbd
[root@cml2 ~]# drbdadm up mfs
12. Format and mount:
[root@cml1 ~]# mkfs.ext4 /dev/drbd1
[root@cml1 ~]# mkdir /usr/local/mfs
[root@cml1 ~]# mount /dev/drbd1 /usr/local/mfs
[root@cml1 ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.8G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M   56M  456M  11% /dev/shm
tmpfs                   tmpfs     512M   33M  480M   7% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  160M  362M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4      5.2G   30M  4.9G   1% /usr/local/mfs
####Note: before the device can be mounted on the secondary, the current primary must first be demoted to secondary; only then can the other node promote it and mount:
###Check the status:
[root@cml1 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:520744 nr:0 dw:252228 dr:300898 al:57 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
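The manual role switch mentioned in the note above can be sketched as follows (for verification only; once Pacemaker manages DRBD later in this article, it performs the demote/promote itself):

```shell
# On cml1 (current primary): unmount the filesystem, then demote.
umount /usr/local/mfs
drbdadm secondary mfs

# On cml2: promote, then mount.
drbdadm primary mfs
mkdir -p /usr/local/mfs
mount /dev/drbd1 /usr/local/mfs

# Each node now reports its new role in /proc/drbd
# (ro:Secondary/Primary on cml1, ro:Primary/Secondary on cml2).
cat /proc/drbd
```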
13. Install and configure the Master Server:
##MFS installation: download the 3.0 tarball:
[root@cml1 src]# yum install zlib-devel -y
[root@cml1 src]# wget https://github.com/moosefs/moosefs/archive/v3.0.96.tar.gz
(1) Install the master:
[root@cml1 src]# useradd mfs
[root@cml1 src]# tar -xf v3.0.96.tar.gz
[root@cml1 src]# cd moosefs-3.0.96/
[root@cml1 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
[root@cml1 moosefs-3.0.96]# make && make install
[root@cml1 moosefs-3.0.96]# ls /usr/local/mfs/
bin  etc  sbin  share  var
(The etc and var directories hold the configuration files and MFS's metadata structures, so back them up regularly to guard against disaster. Running the Master Server as an active/standby pair, as below, addresses this as well.)
##Note: the mfs uid and gid must be identical on every host in the cluster.
(2) Configure the master:
[root@cml1 mfs]# pwd
/usr/local/mfs/etc/mfs
[root@cml1 mfs]# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
##Only .sample files ship, so copy them to .cfg files:
[root@cml1 mfs]# cp mfsexports.cfg.sample mfsexports.cfg
[root@cml1 mfs]# cp mfsmaster.cfg.sample mfsmaster.cfg
(3) Review the default parameters:
[root@cml1 mfs]# vim mfsmaster.cfg
# WORKING_USER = mfs              # user that runs the master server
# WORKING_GROUP = mfs             # group that runs the master server
# SYSLOG_IDENT = mfsmaster        # identity of the master server in syslog messages
# LOCK_MEMORY = 0                 # whether to call mlockall() to keep mfsmaster from being swapped out (default 0)
# NICE_LEVEL = -19                # scheduling priority (default -19 where possible; note: the process must be started as root)
# EXPORTS_FILENAME = /usr/local/mfs/etc/mfs/mfsexports.cfg   # path to the file controlling exported directories and their permissions
# TOPOLOGY_FILENAME = /usr/local/mfs/etc/mfs/mfstopology.cfg # path to mfstopology.cfg
# DATA_PATH = /usr/local/mfs/var/mfs  # data directory; it holds roughly three kinds of files: changelogs, sessions and stats
# BACK_LOGS = 50                  # number of metadata changelog files to keep (default 50)
# BACK_META_KEEP_PREVIOUS = 1     # number of previous metadata copies to keep (default 1)
# REPLICATIONS_DELAY_INIT = 300   # replication delay after startup (default 300 s)
# REPLICATIONS_DELAY_DISCONNECT = 3600  # replication delay after a chunkserver disconnects (default 3600 s)
# MATOML_LISTEN_HOST = *          # IP address to listen on for metalogger connections (default *, any IP)
# MATOML_LISTEN_PORT = 9419       # port to listen on for metalogger connections (default 9419)
# MATOML_LOG_PRESERVE_SECONDS = 600
# MATOCS_LISTEN_HOST = *          # IP address to listen on for chunkserver connections (default *, any IP)
# MATOCS_LISTEN_PORT = 9420       # port to listen on for chunkserver connections (default 9420)
# MATOCL_LISTEN_HOST = *          # IP address to listen on for client connections (default *, any IP)
# MATOCL_LISTEN_PORT = 9421       # port to listen on for client connections (default 9421)
# CHUNKS_LOOP_MAX_CPS = 100000    # maximum chunk checks per second during the chunk loop (default 100000)
# CHUNKS_LOOP_MIN_TIME = 300      # minimum duration of one chunk loop (default 300 s)
# CHUNKS_SOFT_DEL_LIMIT = 10      # soft limit on chunks deleted per chunkserver in one loop (default 10)
# CHUNKS_HARD_DEL_LIMIT = 25      # hard limit on chunks deleted per chunkserver in one loop (default 25)
# CHUNKS_WRITE_REP_LIMIT = 2      # maximum chunks replicated to one chunkserver in one loop
# CHUNKS_READ_REP_LIMIT = 10      # maximum chunks replicated from one chunkserver in one loop
# ACCEPTABLE_DIFFERENCE = 0.1     # maximum allowed difference in space usage between chunkservers (default 0.01, i.e. 1%)
# SESSION_SUSTAIN_TIME = 86400    # how long to sustain a disconnected client session (86400 s = 1 day)
# REJECT_OLD_CLIENTS = 0          # reject clients older than 1.6.0 (0 or 1, default 0)
##These are the official defaults and can be used as-is.
(4) Edit the access-control (exports) file:
[root@cml1 mfs]# vim mfsexports.cfg
*       /       rw,alldirs,maproot=0,password=cml
*       .       rw
##Each entry in mfsexports.cfg is one access rule with three fields: the first is the client IP address or range, the second is the exported directory, and the third sets the permissions the client is granted.
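To illustrate the three-field format, some example rules (these are illustrations only, not part of the cluster built here):

```
# client spec        directory   options
192.168.5.0/24       /           rw,alldirs,maproot=0   # a subnet, whole tree, read-write
192.168.5.129        /public     ro                     # a single host, one subtree, read-only
*                    .           rw                     # "." exports the meta filesystem (trash/undelete access)
```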
(5) The initial metadata file ships as metadata.mfs.empty and must be activated by hand:
[root@cml1 mfs]# cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs
(6) Start the master:
[root@cml1 mfs]# /usr/local/mfs/sbin/mfsmaster start
open files limit has been set to: 16384
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
(7) Check that the process is running:
[root@cml1 mfs]# ps -ef | grep mfs
mfs        8109      1  5 18:40 ?        00:00:02 /usr/local/mfs/sbin/mfsmaster start
root       8123   1307  0 18:41 pts/0    00:00:00 grep --color=auto mfs
(8) Check the listening ports:
[root@cml1 mfs]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State      PID/Program name
tcp        0      0 0.0.0.0:9419       0.0.0.0:*          LISTEN     8109/mfsmaster
tcp        0      0 0.0.0.0:9420       0.0.0.0:*          LISTEN     8109/mfsmaster
tcp        0      0 0.0.0.0:9421       0.0.0.0:*          LISTEN     8109/mfsmaster
(9) To stop the master:
[root@cml1 mfs]# /usr/local/mfs/sbin/mfsmaster stop
sending SIGTERM to lock owner (pid:8109)
waiting for termination ... terminated
##pcs-related configuration (on CentOS 7, pcs is well supported, while crmsh is more involved):
1. On both nodes:
[root@cml1 corosync]# yum install -y pacemaker pcs psmisc policycoreutils-python
2. Start pcsd and enable it at boot:
[root@cml1 corosync]# systemctl start pcsd.service
[root@cml1 corosync]# systemctl enable pcsd
3. Set the password for the hacluster user:
[root@cml1 corosync]# echo 123456 | passwd --stdin hacluster
4. Authenticate the cluster nodes with pcs (it uses the hacluster user and its password by default):
[root@cml1 corosync]# pcs cluster auth cml1 cml2   ##specify which cluster nodes to authorize
cml2: Already authorized
cml1: Already authorized
5. Create the two-node cluster:
[root@cml1 corosync]# pcs cluster setup --name mycluster cml1 cml2 --force
##set up the cluster
6. pcs has now generated the corosync configuration file on the node:
[root@cml1 corosync]# ls
corosync.conf  corosync.conf.example  corosync.conf.example.udpu  corosync.xml.example  uidgid.d
#corosync.conf has been generated:
7. Review the generated file:
[root@cml1 corosync]# cat corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}
nodelist {
    node {
        ring0_addr: cml1
        nodeid: 1
    }
    node {
        ring0_addr: cml2
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
8. Start the cluster:
[root@cml1 corosync]# pcs cluster start --all
cml1: Starting Cluster...
cml2: Starting Cluster...
##This effectively starts pacemaker and corosync:
9. Check the cluster for errors:
[root@cml1 corosync]# crm_verify -L -V
   error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
##Since no STONITH device is configured, disable STONITH next:
10. Disable STONITH:
[root@cml1 corosync]# pcs property set stonith-enabled=false
[root@cml1 corosync]# crm_verify -L -V
[root@cml1 corosync]# pcs property list
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: mycluster
 dc-version: 1.1.16-12.el7_4.2-94ff4df
 have-watchdog: false
 stonith-enabled: false
1. Install crmsh:
We can manage the cluster with crmsh (download it from GitHub, unpack and install). Installing it on a single node is enough, but having it on both nodes makes testing more convenient.
[root@cml1 ~]# cd /usr/local/src/
[root@cml1 src]# ls
crmsh-2.3.2.tar  nginx-1.12.0  nginx-1.12.0.tar.gz  php-5.5.38.tar.gz  zabbix-3.2.7.tar.gz
[root@cml1 src]# tar -xf crmsh-2.3.2.tar
[root@cml1 src]# cd crmsh-2.3.2/
[root@cml1 crmsh-2.3.2]# python setup.py install
2. Manage the cluster with crmsh:
[root@cml1 ~]# crm help

Help overview for crmsh

Available topics:

        Overview                Help overview for crmsh
        Topics                  Available topics
        Description             Program description
        CommandLine             Command line options
        Introduction            Introduction
        Interface               User interface
        Completion              Tab completion
        Shorthand               Shorthand syntax
        Features                Features
        Shadows                 Shadow CIB usage
        Checks                  Configuration semantic checks
        Templates               Configuration templates
        Testing                 Resource testing
        Security                Access Control Lists (ACL)
        Resourcesets            Syntax: Resource sets
        AttributeListReferences Syntax: Attribute list references
        AttributeReferences     Syntax: Attribute references
        RuleExpressions         Syntax: Rule expressions
        Lifetime                Lifetime parameter format
        Reference               Command reference
3. Configure the DRBD+mfs+corosync+pacemaker high-availability cluster with crmsh:
##First unmount the mount point and stop the drbd service:
[root@cml1 ~]# umount /usr/local/mfs/
[root@cml1 ~]# systemctl stop drbd
[root@cml2 ~]# systemctl stop drbd
[root@cml1 ~]# crm
crm(live)# status
Stack: corosync
Current DC: cml2 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Fri Oct 27 19:15:54 2017
Last change: Fri Oct 27 10:52:35 2017 by root via cibadmin on cml1
2 nodes configured
0 resources configured
Online: [ cml1 cml2 ]
No resources
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# property migration-limit=1   ###after one failed start attempt, the resource fails over to the other node
4. Write a systemd unit for mfsmaster:
[root@cml1 mfs]# cat /etc/systemd/system/mfsmaster.service
[Unit]
Description=mfs
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/mfs/sbin/mfsmaster start
ExecStop=/usr/local/mfs/sbin/mfsmaster stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target
##Enable it at boot:
[root@cml1 mfs]# systemctl enable mfsmaster
##Stop the mfsmaster service (the cluster will manage it from now on):
[root@cml1 mfs]# systemctl stop mfsmaster
5. Start the cluster stack:
[root@cml1 src]# systemctl start corosync
[root@cml1 src]# systemctl start pacemaker
[root@cml1 src]# ssh cml2 systemctl start corosync
[root@cml1 src]# ssh cml2 systemctl start pacemaker
6. Configure the DRBD resource:
crm(live)configure# primitive mfs_drbd ocf:linbit:drbd params drbd_resource=mfs op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100
crm(live)configure# verify
crm(live)configure# ms ms_mfs_drbd mfs_drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit
7. Configure the filesystem (mount) resource:
crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd1 directory=/usr/local/mfs fstype=ext4 op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation ms_mfs_drbd_with_mystore inf: mystore ms_mfs_drbd
crm(live)configure# order ms_mfs_drbd_before_mystore Mandatory: ms_mfs_drbd:promote mystore:start
8. Configure the mfs resource:
crm(live)configure# primitive mfs systemd:mfsmaster op monitor timeout=100 interval=30 op start timeout=30 interval=0 op stop timeout=30 interval=0
crm(live)configure# colocation mfs_with_mystore inf: mfs mystore
crm(live)configure# order mystor_befor_mfs Mandatory: mystore mfs
crm(live)configure# verify
crm(live)configure# commit
9. Configure the VIP:
crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=192.168.5.200
crm(live)configure# colocation vip_with_msf inf: vip mfs
crm(live)configure# verify
crm(live)configure# commit
10. Review the configuration:
crm(live)configure# show
node 1: cml1 \
        attributes standby=off
node 2: cml2 \
        attributes standby=off
primitive mfs systemd:mfsmaster \
        op monitor timeout=100 interval=30 \
        op start timeout=30 interval=0 \
        op stop timeout=30 interval=0
primitive mfs_drbd ocf:linbit:drbd \
        params drbd_resource=mfs \
        op monitor role=Master interval=10 timeout=20 \
        op monitor role=Slave interval=20 timeout=20 \
        op start timeout=240 interval=0 \
        op stop timeout=100 interval=0
primitive mystore Filesystem \
        params device="/dev/drbd1" directory="/usr/local/mfs" fstype=ext4 \
        op start timeout=60 interval=0 \
        op stop timeout=60 interval=0
primitive vip IPaddr \
        params ip=192.168.5.200
ms ms_mfs_drbd mfs_drbd \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
colocation mfs_with_mystore inf: mfs mystore
order ms_mfs_drbd_before_mystore Mandatory: ms_mfs_drbd:promote mystore:start
colocation ms_mfs_drbd_with_mystore inf: mystore ms_mfs_drbd
order mystor_befor_mfs Mandatory: mystore mfs
colocation vip_with_msf inf: vip mfs
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.16-12.el7_4.4-94ff4df \
        cluster-infrastructure=corosync \
        cluster-name=webcluster \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        migration-limit=1
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Stack: corosync
Current DC: cml2 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Fri Oct 27 19:27:23 2017
Last change: Fri Oct 27 10:52:35 2017 by root via cibadmin on cml1
2 nodes configured
5 resources configured
Online: [ cml1 cml2 ]
Full list of resources:
 Master/Slave Set: ms_mfs_drbd [mfs_drbd]
     Masters: [ cml1 ]
     Slaves: [ cml2 ]
 mystore        (ocf::heartbeat:Filesystem):    Started cml1
 mfs    (systemd:mfsmaster):    Started cml1
 vip    (ocf::heartbeat:IPaddr):        Started cml1
##Check that the filesystem is mounted on cml1:
[root@cml1 ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.8G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M   41M  472M   8% /dev/shm
tmpfs                   tmpfs     512M   33M  480M   7% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  160M  362M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4      5.2G   30M  4.9G   1% /usr/local/mfs
[root@cml1 ~]# ip addr
2: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4d:47:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.101/24 brd 192.168.5.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet 192.168.5.200/24 brd 192.168.5.255 scope global secondary ens34
##The VIP is now held by cml1 (the master).
I. Install the Metalogger Server: (this step is done on cml5; strictly speaking, with the mfsmaster HA pair in place it is optional.)
As introduced earlier, the Metalogger Server is the backup server for the Master Server, so its installation steps are identical to the Master Server's. Ideally it runs on hardware configured the same as the Master Server. Then, if the master fails, we only need to import the backed-up changelogs into the metadata file, and the backup server can take over from the failed master and continue serving.
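The recovery path described above, if the master (and its DRBD copy) were ever lost entirely, would look roughly like this. This is a hedged sketch for MooseFS 3.x, assuming the metalogger's backup files are copied into the replacement master's DATA_PATH:

```shell
# On the replacement master: fetch the metalogger's backups
# (changelog_ml*.mfs and metadata_ml.mfs.back) into the data directory.
scp cml5:/usr/local/mfs/var/mfs/changelog_ml* /usr/local/mfs/var/mfs/
scp cml5:/usr/local/mfs/var/mfs/metadata_ml.mfs.back /usr/local/mfs/var/mfs/metadata.mfs.back

# In MooseFS 3.x, "mfsmaster -a" replays the changelogs against the last
# metadata backup and starts the daemon with the recovered metadata.
/usr/local/mfs/sbin/mfsmaster -a
```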
1. Copy the tarball over from the master:
[root@cml1 mfs]# scp /usr/local/src/v3.0.96.tar.gz cml5:/usr/local/src/
[root@cml5 src]# tar -xf v3.0.96.tar.gz
[root@cml5 src]# cd moosefs-3.0.96/
[root@cml5 moosefs-3.0.96]# useradd mfs
[root@cml5 moosefs-3.0.96]# yum install zlib-devel -y
[root@cml5 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
[root@cml5 moosefs-3.0.96]# make && make install
2. Configure the Metalogger Server:
[root@cml5 mfs]# cd /usr/local/mfs/etc/mfs/
[root@cml5 mfs]# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
[root@cml5 mfs]# cp mfsmetalogger.cfg.sample mfsmetalogger.cfg
[root@cml5 mfs]# vim mfsmetalogger.cfg
MASTER_HOST = 192.168.5.200   ##point to the VIP
# MASTER_PORT = 9419          ##connection port
# META_DOWNLOAD_FREQ = 24     ##metadata download frequency; by default every 24 hours, i.e. once a day, a metadata.mfs.back file is downloaded from the metadata server. If the master goes down or fails, its metadata.mfs.back is lost, and restoring the whole MFS requires this file from the metalogger; together with the changelog files it is enough to rebuild the damaged distributed filesystem.
3. Start the Metalogger Server:
[root@cml5 ~]# /usr/local/mfs/sbin/mfsmetalogger start
open files limit has been set to: 4096
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
[root@cml5 ~]# netstat -lantp | grep metalogger
tcp        0      0 192.168.113.144:45620   192.168.113.143:9419    ESTABLISHED 1751/mfsmetalogger
[root@cml5 ~]# netstat -lantp | grep 9419
tcp        0      0 192.168.113.144:45620   192.168.113.143:9419    ESTABLISHED 1751/mfsmetalogger
4. Check the generated log files:
[root@cml5 ~]# ls /usr/local/mfs/var/mfs/
changelog_ml_back.0.mfs  changelog_ml_back.1.mfs  metadata.mfs.empty  metadata_ml.mfs.back
II. Install the Chunk Servers (apply the same configuration on both cml3 and cml4)
1. Unpack and compile:
[root@cml3 ~]# useradd mfs   ##the mfs uid and gid must be identical across the whole cluster
[root@cml3 ~]# yum install zlib-devel -y
[root@cml3 ~]# cd /usr/local/src/
[root@cml3 src]# tar -xf v3.0.96.tar.gz
[root@cml3 src]# cd moosefs-3.0.96/
[root@cml3 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfsmount
[root@cml3 moosefs-3.0.96]# make && make install
2. Configure the chunk server:
[root@cml3 moosefs-3.0.96]# cd /usr/local/mfs/etc/mfs/
[root@cml3 mfs]# mv mfschunkserver.cfg.sample mfschunkserver.cfg
[root@cml3 mfs]# vim mfschunkserver.cfg
MASTER_HOST = 192.168.5.200   ##point to the VIP
3. Configure mfshdd.cfg:
mfshdd.cfg tells the Chunk Server which directory to share out for the Master Server to manage. Although any directory can be listed here, it should ideally be backed by a dedicated partition.
[root@cml3 mfs]# cp /usr/local/mfs/etc/mfs/mfshdd.cfg.sample /usr/local/mfs/etc/mfs/mfshdd.cfg
[root@cml3 mfs]# vim /usr/local/mfs/etc/mfs/mfshdd.cfg
/mfsdata
##a directory we define ourselves
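Following the dedicated-partition advice, /mfsdata could be prepared like this (a sketch; /dev/sdb1 is an assumed spare partition on the chunk server, so adjust to your own disks):

```shell
# Format the spare partition and mount it as the chunkserver data directory.
mkfs.ext4 /dev/sdb1          # /dev/sdb1 is an assumption; pick your own device
mkdir -p /mfsdata
mount /dev/sdb1 /mfsdata
chown -R mfs:mfs /mfsdata

# Persist the mount across reboots.
echo '/dev/sdb1 /mfsdata ext4 defaults 0 0' >> /etc/fstab
```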
4. Start the chunk server:
[root@cml3 mfs]# mkdir /mfsdata
[root@cml3 mfs]# chown mfs:mfs /mfsdata/
[root@cml3 mfs]# /usr/local/mfs/sbin/mfschunkserver start
open files limit has been set to: 16384
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
setting glibc malloc arena max to 4
setting glibc malloc arena test to 4
initializing mfschunkserver modules ...
hdd space manager: path to scan: /mfsdata/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly
###Check the connection to the master:
[root@cml3 mfs]# netstat -lantp | grep 9420
tcp        0      0 192.168.113.145:45904   192.168.113.143:9420    ESTABLISHED 9896/mfschunkserver
III. Install the mfs client (on cml6):
1. Install FUSE:
[root@cml6 ~]# lsmod | grep fuse
[root@cml6 ~]# yum install fuse fuse-devel -y
[root@cml6 ~]# modprobe fuse
[root@cml6 ~]# lsmod | grep fuse
fuse                   91874  0
2. Install the mount client:
[root@cml6 ~]# yum install zlib-devel -y
[root@cml6 ~]# useradd mfs
[root@cml6 src]# tar -zxvf v3.0.96.tar.gz
[root@cml6 src]# cd moosefs-3.0.96/
[root@cml6 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver --enable-mfsmount
[root@cml6 moosefs-3.0.96]# make && make install
3. On the client, create a mount point and mount the filesystem:
[root@cml6 moosefs-3.0.96]# mkdir /mfsdata
[root@cml6 moosefs-3.0.96]# chown -R mfs:mfs /mfsdata/
[root@cml6 ~]# /usr/local/mfs/bin/mfsmount -H 192.168.5.200 /mfsdata/ -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
[root@cml6 ~]# df -TH
Filesystem                  Type      Size  Used Avail Use% Mounted on
/dev/mapper/vg_cml-lv_root  ext4       19G  4.9G   13G  28% /
tmpfs                       tmpfs     977M     0  977M   0% /dev/shm
/dev/sda1                   ext4      500M   29M  445M   7% /boot
192.168.5.200:9421          fuse.mfs   38G   14G   25G  36% /mfsdata
[root@cml6 ~]# cd /mfsdata/
[root@cml6 mfsdata]# echo "test" > a.txt
[root@cml6 mfsdata]# ls
a.txt
[root@cml6 mfsdata]# cat a.txt
test
Test: take the master server (master) host down, fail over to the slave, and verify the file survives:
crm(live)# node standby
crm(live)# status
Stack: corosync
Current DC: cml2 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Fri Oct 27 19:55:15 2017
Last change: Fri Oct 27 19:55:01 2017 by root via crm_attribute on cml1
2 nodes configured
5 resources configured
Node cml1: standby
Online: [ cml2 ]
Full list of resources:
 Master/Slave Set: ms_mfs_drbd [mfs_drbd]
     Masters: [ cml2 ]
     Stopped: [ cml1 ]
 mystore        (ocf::heartbeat:Filesystem):    Started cml2
 mfs    (systemd:mfsmaster):    Started cml2
 vip    (ocf::heartbeat:IPaddr):        Started cml2
##The services have failed over to cml2
[root@cml2 ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.7G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M   56M  456M  11% /dev/shm
tmpfs                   tmpfs     512M   14M  499M   3% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  160M  362M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4      5.2G   30M  4.9G   1% /usr/local/mfs
[root@cml2 ~]# ip addr
2: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:5a:c5:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.102/24 brd 192.168.5.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet 192.168.5.200/24 brd 192.168.5.255 scope global secondary ens34
##The mount point and the VIP have moved to cml2
##Remount on the client and check that the service is still healthy:
[root@cml6 ~]# umount /mfsdata/
[root@cml6 ~]# /usr/local/mfs/bin/mfsmount -H 192.168.5.200 /mfsdata/ -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
[root@cml6 ~]# cd /mfsdata/
[root@cml6 mfsdata]# ls
a.txt
[root@cml6 mfsdata]# cat a.txt
test
##The a.txt file written earlier is still there, so the service survived the failover.
This article originally appeared on the "legehappy" 51CTO blog; please keep this attribution: http://legehappy.blog.51cto.com/13251607/1977270