Tags: drbd lvm
Installing and Configuring DRBD
Definition:
DRBD (Distributed Replicated Block Device) is a distributed storage system that lives in the Linux kernel's storage layer. It lets two Linux servers share a block device, along with the file system and data on it, working much like a network RAID 1.
Even if one of the two servers loses power or crashes outright, the data is unaffected.
When data is written to the file system on the local primary node, it is also sent over the network to the remote secondary node; the two nodes keep their data in sync over TCP/IP. If the primary node fails, the secondary holds an identical copy of the data and can take over to continue serving clients. Heartbeat can be used between the two nodes to detect whether the peer is alive, and automatic failover can be handled by a Heartbeat setup without manual intervention (the Heartbeat setup is out of scope here).
Use cases:
DRBD + Heartbeat + MySQL: a primary/standby split (not MySQL's own one-way or two-way replication) with automatic failover
DRBD + Heartbeat + NFS: highly available NFS serving as the back-end shared storage for a cluster
Environment:
[root@scj ~]# cat /etc/issue
CentOS release 6.4 (Final)
Kernel \r on an \m

[root@scj ~]# uname -r
2.6.32-358.el6.i686
dbm135 | 192.168.186.135 | dbm135.51.com
dbm134 | 192.168.186.134 | dbm134.51.com
Preparation:
Disable iptables and SELinux: (dbm135, dbm134)
[root@scj ~]# service iptables stop
[root@scj ~]# setenforce 0
[root@scj ~]# vi /etc/sysconfig/selinux
---------------
SELINUX=disabled
---------------
Set the hostnames:
dbm135:
[root@scj ~]# hostname dbm135.51.com
[root@scj ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=dbm135.51.com

dbm134:
[root@scj ~]# hostname dbm134.51.com
[root@scj ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=dbm134.51.com
Edit /etc/hosts: (dbm135, dbm134)
192.168.186.135 dbm135.51.com dbm135
192.168.186.134 dbm134.51.com dbm134
Time synchronization: (dbm135, dbm134)
[root@scj ~]# /usr/sbin/ntpdate pool.ntp.org    # a cron job can be created for this
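To keep the clocks in sync automatically, a crontab entry along these lines could be used (the 5-minute interval is an arbitrary example, not taken from the original):

```
# root crontab entry (example interval): re-sync the clock every 5 minutes
*/5 * * * * /usr/sbin/ntpdate pool.ntp.org > /dev/null 2>&1
```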
Network: (dbm135, dbm134)
DRBD synchronization places heavy demands on the network, which matters most when the write volume is large and there is a lot of data to sync. It is recommended to place the two dbm servers in the same data center and synchronize over the internal network.
Disk planning:
The disk partitions on the two nodes must be the same size.
Given the size of the database and its future growth, LVM logical volumes are used for partitioning here.
Partitioning: (dbm135)
[root@scj ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00051b9f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        2611    20458496   8e  Linux LVM

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_root: 19.9 GB, 19872612352 bytes
255 heads, 63 sectors/track, 2416 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_swap: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

As fdisk -l shows, there is a 10.7 GB device /dev/sdb; create the logical volume on /dev/sdb:
[root@scj ~]# pvcreate /dev/sdb            # create the physical volume
  Physical volume "/dev/sdb" successfully created
[root@scj ~]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda2  VolGroup lvm2 a--  19.51g     0
  /dev/sdb            lvm2 a--  10.00g 10.00g
[root@scj ~]# vgcreate drbd /dev/sdb       # create volume group "drbd" and add the PV to it
  Volume group "drbd" successfully created
[root@scj ~]# vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   1   2   0 wz--n- 19.51g     0
  drbd       1   0   0 wz--n- 10.00g 10.00g
[root@scj ~]# lvcreate -n dbm -L 9G drbd   # create logical volume "dbm" in volume group "drbd"
  Logical volume "dbm" created
[root@scj ~]# lvs
  LV      VG       Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao--- 18.51g
  lv_swap VolGroup -wi-ao---  1.00g
  dbm     drbd     -wi-a----  9.00g
[root@scj ~]# ls /dev/drbd/dbm             # check the new logical volume's device node
/dev/drbd/dbm
Mount the logical volume:
[root@scj ~]# mkdir /data                  # directory where the database will live
[root@scj ~]# mkfs.ext4 /dev/drbd/dbm      # format the logical volume
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
589824 inodes, 2359296 blocks
117964 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2415919104
72 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@scj ~]# mount /dev/drbd/dbm /data/   # mount the logical volume
[root@scj ~]# df -h                        # confirm the mount
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   19G  1.1G   17G   7% /
tmpfs                          58M     0   58M   0% /dev/shm
/dev/sda1                     485M   30M  430M   7% /boot
/dev/mapper/drbd-dbm          8.9G  149M  8.3G   2% /data
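Since LVM was chosen with future growth in mind, the volume can later be grown online. A hedged sketch (the +1G increment is an arbitrary example):

```
[root@scj ~]# lvextend -L +1G /dev/drbd/dbm    # grow the LV by 1G (example size)
[root@scj ~]# resize2fs /dev/drbd/dbm          # grow the ext4 file system to match
```

Note that once DRBD is layered on top of this volume, the file system lives on the DRBD device instead, and growing it additionally requires running drbdadm resize on the resource after extending the LV on both nodes.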
Note: perform the same disk-planning steps as above on dbm134.
Installing DRBD:
Install DRBD via yum (recommended): (dbm135, dbm134)
[root@scj ~]# cd /usr/local/src/
[root@scj src]# wget http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
[root@scj src]# rpm -ivh elrepo-release-6-6.el6.elrepo.noarch.rpm
[root@scj src]# yum -y install drbd83-utils kmod-drbd83    # this may take a while
[root@scj ~]# modprobe drbd    # load the drbd module
FATAL: Module drbd not found.

Fixing the failure to load the drbd module:
Running yum -y install drbd83-utils kmod-drbd83 also updated the kernel; the server must be rebooted before the updated kernel (and the matching drbd module) takes effect.
Reboot the server.
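After the reboot, you can confirm that the running kernel now matches the newest installed one. A sketch of such a check (not part of the original steps):

```
[root@scj ~]# uname -r                       # kernel currently running
[root@scj ~]# rpm -q --last kernel | head -1 # most recently installed kernel package
```

If the two versions disagree, the machine is still running the old kernel and the drbd module will not load.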
Verify that DRBD installed successfully: (dbm135, dbm134)
[root@scj ~]# modprobe drbd    # load the drbd module
[root@scj ~]# modprobe -l | grep drbd
weak-updates/drbd83/drbd.ko
[root@scj ~]# lsmod | grep drbd
drbd                  311209  0

After a successful install, the drbdadm, drbdmeta, and drbdsetup commands are available in /sbin, along with the /etc/init.d/drbd init script.
Configuring DRBD:
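As a rough sketch of what a DRBD 8.3 resource definition for these two nodes might look like (the resource name r0, device /dev/drbd0, and TCP port 7788 are illustrative assumptions, not taken from the original):

```
# /etc/drbd.conf resource block (sketch; name, device, and port are example values)
resource r0 {
    protocol C;                        # synchronous replication
    on dbm135.51.com {
        device    /dev/drbd0;
        disk      /dev/drbd/dbm;       # the LVM volume created above
        address   192.168.186.135:7788;
        meta-disk internal;
    }
    on dbm134.51.com {
        device    /dev/drbd0;
        disk      /dev/drbd/dbm;
        address   192.168.186.134:7788;
        meta-disk internal;
    }
}
```

With protocol C, a write is not acknowledged until it has reached the peer node, which is what makes the secondary's copy safe to fail over to.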
This article is from the "见" blog; please keep this attribution: http://732233048.blog.51cto.com/9323668/1665979