
DRBD Primary/Secondary Installation and Configuration



DRBD package link: https://pan.baidu.com/s/1eUcXVyU  Password: 00ul

 

1. Resources used:
1.1 OS: CentOS 6.9 minimal
1.2 Two node hosts, node1 and node2:
192.168.1.132 node1
192.168.1.124 node2
1.3 DRBD
disk: /dev/sdb1 10G
DRBD device: /dev/drbd1
DRBD resource: vz1
Mount point: /vz/vz1


2. Set the hostname and IP address, and disable iptables and SELinux

2.1 node1
[root@node1 ~]# hostname
node1
[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.132 node1
192.168.1.124 node2
[root@node1 ~]# service iptables status
iptables: Firewall is not running.
[root@node1 ~]# getenforce
Disabled
[root@node1 ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0C:29:BC:22:FE
inet addr:192.168.1.132 Bcast:192.168.1.255 Mask:255.255.255.0


2.2 node2
[root@node2 ~]# hostname
node2
[root@node2 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.132 node1
192.168.1.124 node2
[root@node2 ~]# service iptables status
iptables: Firewall is not running.
[root@node2 ~]# getenforce
Disabled
[root@node2 ~]# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0C:29:D6:96:1C
inet addr:192.168.1.124 Bcast:192.168.1.255 Mask:255.255.255.0
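
The listings above only verify the settings. A minimal sketch of how they might have been applied on CentOS 6 (run the equivalent on both nodes with the appropriate hostname; these exact commands are an assumption, not part of the steps shown above):

[root@node1 ~]# sed -i 's/^HOSTNAME=.*/HOSTNAME=node1/' /etc/sysconfig/network
[root@node1 ~]# hostname node1
#stop iptables now and keep it off across reboots
[root@node1 ~]# service iptables stop
[root@node1 ~]# chkconfig iptables off
#disable SELinux permanently (effective after a reboot); setenforce 0 drops to permissive mode for the current session
[root@node1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@node1 ~]# setenforce 0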

3. Configure passwordless SSH between node1 and node2

3.1 On node1
[root@node1 ~]# ssh-keygen -t rsa -b 1024
#Press Enter to accept the defaults
[root@node1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
#Enter node2's root password

3.2 On node2
[root@node2 ~]# ssh-keygen -t rsa -b 1024
#Press Enter to accept the defaults
[root@node2 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
#Enter node1's root password
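
A quick check that key-based login works in both directions (an extra step, not shown above):

[root@node1 ~]# ssh node2 hostname
[root@node2 ~]# ssh node1 hostname
#each command should print the remote hostname without prompting for a password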

4. Create a partition of the same size, /dev/sdb1, on node1 and node2

4.1 Partitioning
[root@node1 ~]# fdisk /dev/sdb

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): +10G

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7b1a86d9

Device Boot Start End Blocks Id System
/dev/sdb1 1 1306 10490413+ 83 Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
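
Only the node1 fdisk session is shown above; the same partition has to be created on node2. A non-interactive sketch (an assumption; piping answers into fdisk is fragile but works for this simple layout), followed by a check that the two partition tables match:

[root@node2 ~]# printf 'n\np\n1\n\n+10G\nw\n' | fdisk /dev/sdb
[root@node2 ~]# partprobe /dev/sdb    #re-read the partition table (or reboot if partprobe is unavailable)
[root@node2 ~]# fdisk -l /dev/sdb     #confirm /dev/sdb1 has the same size as on node1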

4.2 Format the partition as ext4 on node1 and node2
[root@node1 ~]# mkfs -t ext4 /dev/sdb1
[root@node1 ~]# tune2fs -c 0 -i 0 /dev/sdb1
[root@node2 ~]# mkfs -t ext4 /dev/sdb1
[root@node2 ~]# tune2fs -c 0 -i 0 /dev/sdb1

5. Install the perl* packages and upload the DRBD package to the /root/tools/vz/ directory on node1 and node2
[root@node1 vz]#yum install -y perl*
[root@node1 vz]# yum install -y lrzsz
[root@node1 ~]# cd /root/tools/vz/
[root@node1 vz]# ls
[root@node1 vz]# rz
[root@node1 vz]# ls
drbd83-utils-8.3.13-1.el6.elrepo.x86_64.rpm

[root@node2 vz]#yum install -y perl*
[root@node2 vz]# yum install -y lrzsz
[root@node2 ~]# cd /root/tools/vz/
[root@node2 vz]# ls
[root@node2 vz]# rz
[root@node2 vz]# ls
drbd83-utils-8.3.13-1.el6.elrepo.x86_64.rpm

6. Install the DRBD package

6.1 Check the kernel version; it must match the DRBD version (since they do not match here, upgrade the kernel first)
[root@node1 vz]# uname -r
2.6.32-696.el6.x86_64
[root@node1 vz]# cd /etc/yum.repos.d/
[root@node1 yum.repos.d]# ls
CentOS-Base.repo CentOS-Debuginfo.repo CentOS-fasttrack.repo CentOS-Media.repo CentOS-Vault.repo
[root@node1 yum.repos.d]#cd /etc/yum.repos.d
[root@node1 yum.repos.d]#wget http://download.openvz.org/openvz.repo
[root@node1 yum.repos.d]#rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ
[root@node1 yum.repos.d]# yum install -y vzkernel
#Configure the OS kernel parameter below in /etc/sysctl.conf; a reboot is required for it to take effect
kernel.sysrq = 1
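
A sketch of adding the parameter and applying it right away (the exact commands are an assumption; editing the existing kernel.sysrq line in /etc/sysctl.conf works just as well):

[root@node1 yum.repos.d]# echo "kernel.sysrq = 1" >> /etc/sysctl.conf
[root@node1 yum.repos.d]# sysctl -w kernel.sysrq=1    #apply immediately without waiting for the reboot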

[root@node2 vz]# uname -r
2.6.32-696.el6.x86_64
[root@node2 vz]# cd /etc/yum.repos.d/
[root@node2 yum.repos.d]# ls
CentOS-Base.repo CentOS-Debuginfo.repo CentOS-fasttrack.repo CentOS-Media.repo CentOS-Vault.repo
[root@node2 yum.repos.d]#cd /etc/yum.repos.d
[root@node2 yum.repos.d]#wget http://download.openvz.org/openvz.repo
[root@node2 yum.repos.d]#rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ
[root@node2 yum.repos.d]# yum install -y vzkernel
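
The drbd kernel module loaded in the next step is provided by the kernel, so both nodes should be rebooted into the newly installed vzkernel before continuing (implied above but not shown):

[root@node1 ~]# reboot
[root@node2 ~]# reboot
#after the reboot, confirm the OpenVZ kernel is running
[root@node1 ~]# uname -r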

6.2 Install DRBD on node1 and node2 and load the kernel module
[root@node1 vz]# rpm -Uvh drbd83-utils-8.3.13-1.el6.elrepo.x86_64.rpm
warning: drbd83-utils-8.3.13-1.el6.elrepo.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY
Preparing... ########################################### [100%]
1:drbd83-utils ########################################### [100%]
[root@node1 vz]# modprobe drbd

[root@node2 ~]# cd /root/tools/vz/
[root@node2 vz]# rpm -Uvh drbd83-utils-8.3.13-1.el6.elrepo.x86_64.rpm
warning: drbd83-utils-8.3.13-1.el6.elrepo.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY
Preparing... ########################################### [100%]
1:drbd83-utils ########################################### [100%]
[root@node2 vz]# modprobe drbd
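
A quick check that the module is actually loaded on both nodes (an extra verification, not shown above):

[root@node1 vz]# lsmod | grep drbd    #the drbd module should be listed
[root@node1 vz]# cat /proc/drbd       #prints the module version once it is loaded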

7. Test writing to the /dev/sdb1 partition on node1 and node2
[root@node1 vz]# dd if=/dev/zero of=/dev/sdb1 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0968801 s, 10.8 MB/s
[root@node2 vz]# dd if=/dev/zero of=/dev/sdb1 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0968801 s, 10.8 MB/s

8. Write the DRBD configuration files

8.1 On node1
[root@node1 vz]# /bin/cp /root/tools/hbvz/global_common.conf -f /etc/drbd.d/
[root@node1 vz]# vi vz1.res
[root@node1 vz]# cat vz1.res
resource vz1 {
on node1 {
device /dev/drbd1;
disk /dev/sdb1;
address 192.168.1.132:7788;
flexible-meta-disk internal;
}
on node2 {
device /dev/drbd1;
disk /dev/sdb1;
address 192.168.1.124:7788;
meta-disk internal;
}
}

[root@node1 vz]# /bin/cp /root/tools/hbvz/vz1.res -f /etc/drbd.d/
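
The contents of the global_common.conf copied into /etc/drbd.d/ are not shown above. A minimal DRBD 8.3-style example of what such a file might contain (an assumption; adjust the sync rate to your network):

global {
    usage-count no;
}
common {
    protocol C;
    syncer {
        rate 100M;
    }
}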

8.2 Copy all of node1's DRBD configuration files to node2
[root@node1 vz]# scp -r /etc/drbd* root@node2:/etc/
drbd.conf 100% 133 0.1KB/s 00:00
vz1.res 100% 293 0.3KB/s 00:00
global_common.conf 100% 1896 1.9KB/s 00:00


9. Start the DRBD service

9.1 On node1
[root@node1 vz]# drbdadm create-md all
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
[root@node1 vz]# drbdadm up all
[root@node1 vz]# drbd-overview
1:vz1 WFConnection Secondary/Unknown Inconsistent/DUnknown C r----s
[root@node1 vz]# /etc/init.d/drbd start
#Type yes when prompted

9.2 On node2
[root@node2 vz]# drbdadm create-md all
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
[root@node2 vz]# drbdadm up all
[root@node2 vz]# drbd-overview
1:vz1 Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
[root@node2 vz]# /etc/init.d/drbd start
Starting DRBD resources: [ ].
#Note the status line above: at this point both nodes are Secondary (backup) nodes
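
To have DRBD come up automatically after a reboot, the init script can also be enabled on both nodes (an optional step, not shown above):

[root@node1 vz]# chkconfig drbd on
[root@node2 vz]# chkconfig drbd on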

10. Make node1 the primary (master) of the vz1 resource
[root@node1 vz]# drbdadm -- --overwrite-data-of-peer primary vz1
[root@node1 vz]# drbd-overview
1:vz1 SyncSource Primary/Secondary UpToDate/Inconsistent C r-----
[>...................] sync'ed: 5.2% (9724/10244)M


#node1 is now the primary node and the initial data synchronization is in progress
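
The initial synchronization of the 10G device takes a while; its progress can be watched with (an extra step, not shown above):

[root@node1 vz]# watch -n1 cat /proc/drbd
#once the sync finishes, drbd-overview reports UpToDate/UpToDate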

11. Create the filesystem on node1 and mount it
[root@node1 vz]# mkfs.ext4 /dev/drbd1
[root@node1 vz]# tune2fs -c 0 -i -1 /dev/drbd1
[root@node1 vz]# mkdir /vz/vz1 -p
[root@node1 vz]# mount /dev/drbd1 /vz/vz1/
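
Note that the filesystem is created on the DRBD device /dev/drbd1 rather than on /dev/sdb1 directly, and only on the current primary; DRBD replicates the blocks to node2. A quick check of the mount (not shown above):

[root@node1 vz]# df -h /vz/vz1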

12. Upload data to /vz/vz1
[root@node1 vz1]# ls
lost+found openstack-sdn.wmv


13. Testing

13.1 Reboot node1, create the /vz/vz1 directory on node2, promote node2 to primary, and mount /dev/drbd1 on /vz/vz1
[root@node2 vz1]# drbdadm -- --overwrite-data-of-peer primary vz1
[root@node2 vz1]# mount /dev/drbd1 /vz/vz1
[root@node2 vz1]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.8G 1.5G 7.8G 16% /
tmpfs 233M 0 233M 0% /dev/shm
/dev/sda1 477M 53M 399M 12% /boot
/dev/drbd1 9.8G 289M 9.0G 4% /vz/vz1
[root@node2 vz1]# cd /vz/vz1
[root@node2 vz1]# ls
lost+found openstack-sdn.wmv
#The data uploaded earlier is all there


[root@node1 ~]# drbd-overview
1:vz1 Connected Secondary/Secondary UpToDate/UpToDate C r-----
[root@node1 ~]# drbd-overview
1:vz1 Connected Secondary/Primary UpToDate/UpToDate C r-----
#On node1 you can see the peer's role change: the resource goes from Secondary/Secondary to Secondary/Primary as node2 takes over as primary
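
For a planned switch back to node1 (not part of this test), the usual sequence would be, assuming nothing is still using the mount:

#on node2: release the device and step down
[root@node2 ~]# umount /vz/vz1
[root@node2 ~]# drbdadm secondary vz1
#on node1: take over and mount
[root@node1 ~]# drbdadm primary vz1
[root@node1 ~]# mount /dev/drbd1 /vz/vz1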

13.2 node2 is now the primary; after it crashes, promote node1 to primary and check that the data written on node2 is still there
[root@node2 vz1]# echo c >/proc/sysrq-trigger
#This is a test environment only; use this command with great caution. node2 has now crashed
[root@node1 ~]# ping 192.168.1.124
PING 192.168.1.124 (192.168.1.124) 56(84) bytes of data.
From 192.168.1.132 icmp_seq=2 Destination Host Unreachable
From 192.168.1.132 icmp_seq=3 Destination Host Unreachable
From 192.168.1.132 icmp_seq=5 Destination Host Unreachable

[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary vz1
[root@node1 ~]# mount /dev/drbd1 /vz/vz1/
[root@node1 ~]# cd /vz/vz1/
[root@node1 vz1]# ls
hadoop脚本.docx iaas平台搭建.mp4 lost+found openstack-sdn.wmv
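
When node2 eventually comes back up (not covered above), it should rejoin as Secondary once its DRBD service is started and resync from node1; if the logs report a split brain instead, manual recovery that discards node2's changes is required. A sketch of the normal case:

[root@node2 ~]# service drbd start
[root@node2 ~]# drbd-overview    #should show node2 as Secondary, syncing from node1 if needed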

 

Original source: https://www.cnblogs.com/derrickrose/p/8398173.html
