CentOS 7 mdadm RAID1
Check the version
# uname -a
Linux zzsrv1.bigcloud.local 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)
Attach the hard disks
Install the mdadm package with yum
# yum -y install mdadm
Warning: RPMDB altered outside of yum.
Installing : libreport-filesystem-2.1.11-10.el7.centos.x86_64   1/2
Installing : mdadm-3.2.6-31.el7.x86_64   2/2
Verifying : libreport-filesystem-2.1.11-10.el7.centos.x86_64   1/2
Verifying : mdadm-3.2.6-31.el7.x86_64   2/2
Installed:
mdadm.x86_64 0:3.2.6-31.el7
Dependency Installed:
libreport-filesystem.x86_64 0:2.1.11-10.el7.centos
Complete!
# fdisk /dev/sdb //partition each disk
# fdisk /dev/sdc
[sdb] Cache data unavailable
[sdb] Assuming drive cache: write through
[sdc] Cache data unavailable
[sdc] Assuming drive cache: write through
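For reference, the same layout can also be created non-interactively; a minimal sketch with parted (not part of the original session), assuming both disks start out blank and each gets one ~10 GB primary partition (sdb1 and sdc1):
# parted -s /dev/sdb mklabel msdos
# parted -s /dev/sdb mkpart primary ext4 1MiB 10GiB
# parted -s /dev/sdc mklabel msdos
# parted -s /dev/sdc mkpart primary ext4 1MiB 10GiB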
# mdadm -C /dev/md0 -l 1 -n 2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
md: bind<sdb1>
md: bind<sdc1>
md: raid1 personality registered for level 1
md/raid1:md0: not clean -- starting background reconstruction
md/raid1:md0: active with 2 out of 2 mirrors
md0: detected capacity change from 0 to 10728898560
md0: unknown partition table
md: resync of RAID array md0
md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
md: using 128k window, over a total of 10477440k.
md: md0: resync done.
RAID1 conf printout:
--- wd:2 rd:2
disk 0, wo:0, o:1, dev:sdb1
disk 1, wo:0, o:1, dev:sdc1
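The 1000 KB/sec and 200000 KB/sec figures in the resync messages are the kernel's per-disk speed limits. As a side note (not from the original article), they can be inspected and raised through sysctl if the initial sync should finish sooner:
# sysctl dev.raid.speed_limit_min //minimum guaranteed resync speed, KB/sec per disk
# sysctl -w dev.raid.speed_limit_max=500000 //example value only; raises the resync ceiling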
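At any point the array's health can be checked; a quick sketch (output omitted here):
# cat /proc/mdstat //member devices, [UU] flags, and resync progress
# mdadm -D /dev/md0 //detailed view: UUID, array state, and each member's role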
Format the array:
# mkfs.ext4 /dev/md0
Aug 15 03:51:49 zzsrv1 kernel: EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
Create the mount point:
# mkdir /data
Mount the array on the directory
# mount /dev/md0 /data
# ll /data/ //list the directory contents
total 16
drwx------. 2 root root 16384 Aug 15 03:51 lost+found
Edit /etc/fstab so the RAID1 array is mounted on the directory at boot
# vi /etc/fstab //append at the end of the file
/dev/md0 /data ext4 defaults 0 0
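As a variation (not in the original article), the filesystem UUID printed by blkid can be used in /etc/fstab instead of the /dev/md0 name, which keeps the mount working even if the md device number changes; <uuid-from-blkid> below is a placeholder:
# blkid /dev/md0 //prints the filesystem UUID
UUID=<uuid-from-blkid> /data ext4 defaults 0 0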
Check the final state
# mdadm -Ds
ARRAY /dev/md0 metadata=1.2 name=zzsrv1.bigcloud.local:0 UUID=bb9031b3:c25b3254:8f306614:f33741b8
Write it to the mdadm configuration file
# mdadm -Ds >> /etc/mdadm.conf
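If the array ever had to be assembled from the initramfs (for example, if it held the root filesystem), the initramfs would also need rebuilding after updating /etc/mdadm.conf; a minimal sketch, not required for a plain /data mount:
# dracut -f //regenerate the initramfs for the running kernel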
Create test files
# mkdir /data/mdadm
# touch /data/a.txt
Reboot to verify
# reboot
Check the array status
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[0] sdc1[1]
10477440 blocks super 1.2 [2/2] [UU]
unused devices: <none>
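To confirm that /etc/fstab remounted the array and the test files survived, something like this can be run (not shown in the original log):
# df -h /data //the ext4 filesystem on /dev/md0 should be mounted here
# ls /data //a.txt, mdadm and lost+found should all still be present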
Simulate a disk failure and reboot; the array now assembles in degraded mode ([_U], only one active mirror):
# reboot
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1]
10477440 blocks super 1.2 [2/1] [_U]
unused devices: <none>
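If the failed member were still listed in /proc/mdstat, it would normally be marked faulty and removed before replacing the disk; a sketch of those manage-mode commands (the original session skips this because the old partition is already gone):
# mdadm /dev/md0 -f /dev/sdb1 //mark the member as faulty
# mdadm /dev/md0 -r /dev/sdb1 //remove it from the array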
# fdisk -l //the surviving disk /dev/sdc still has its original partition
Disk /dev/sdc: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x46134903
Device Boot Start End Blocks Id System
/dev/sdc1 2048 20973567 10485760 83 Linux
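The surviving member's md superblock can also be examined to confirm it still belongs to md0 (a sketch; output omitted):
# mdadm -E /dev/sdc1 //prints the superblock: array UUID, this device's role, and the degraded state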
# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xedffc7e6.
Command (m for help): p //print the partition table
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xedffc7e6
Device Boot Start End Blocks Id System
Command (m for help): n //create a new partition
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048): //accept the default start sector
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +10G //make the partition 10 GB
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): w //write the partition table
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
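As an alternative to repartitioning by hand, the partition table can be cloned from the surviving disk; a common shortcut (not used in the original session), assuming the same layout is wanted on the replacement disk:
# sfdisk -d /dev/sdc | sfdisk /dev/sdb //dump sdc's partition table and write it to sdb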
# mkfs.ext4 /dev/sdb1 //format the partition
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done //formatting complete
# mdadm /dev/md0 -a /dev/sdb1 //hot-add the new partition to /dev/md0
mdadm: added /dev/sdb1
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[2] sdc1[1]
10477440 blocks super 1.2 [2/1] [_U]
[===========>.........] recovery = 55.2% (5792896/10477440) finish=0.3min speed=206889K/sec
unused devices: <none>
Watch the rebuild in the log
# tail -f /var/log/messages
kernel: sdb: sdb1
kernel: md: bind<sdb1>
kernel: md: recovery of RAID array md0
kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
kernel: md: using 128k window, over a total of 10477440k.
kernel: md: md0: recovery done.
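Once the rebuild finishes, it can be useful to be told about future failures automatically. A minimal sketch of mdadm's monitor mode (not part of the original article; the mail address is only an example):
# mdadm --monitor --scan --daemonise --mail=root //watch all arrays in the background and mail on failure/degraded events
On CentOS 7 the mdmonitor service should provide the same thing once a MAILADDR line is present in /etc/mdadm.conf.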
mdadm reference (excerpted from the man page)
Basic syntax: mdadm [mode] [options]
There are 7 modes:
Assemble: assemble and activate a previously defined array.
Build: build a legacy array in which the devices have no superblocks.
Create: create a new array; each device gets a superblock.
Manage: manage an array, e.g. add or remove devices.
Misc: operate on a single device in an array, e.g. erase its superblock, or stop an active array.
Follow or Monitor: monitor the state of RAID 1, 4, 5, 6 and multipath arrays.
Grow: change the capacity of an array or the number of devices in it.
Available [options]:
-A, --assemble: assemble a previously defined array
-B, --build: build a legacy array without superblocks
-C, --create: create a new array
-Q, --query: query a device to see whether it is an md device or part of an md array
-D, --detail: print detailed information about one or more md devices
-E, --examine: print the contents of the md superblock on a device
-F, --follow, --monitor: select Monitor mode
-G, --grow: change the size or shape of an active array
-h, --help: help; used after one of the options above, it shows help for that option
--help-options
-V, --version
-v, --verbose: show more detail
-b, --brief: less detail; used with --detail and --examine
-f, --force
-c, --config=: specify the configuration file; defaults to /etc/mdadm/mdadm.conf
-s, --scan: scan the configuration file or /proc/mdstat for missing information; the configuration file is /etc/mdadm/mdadm.conf
Options for create or build (see the example after this list):
-c, --chunk=: specify the chunk size in kibibytes; defaults to 64
--rounding=: specify the rounding factor for a linear array (== chunk size)
-l, --level=: set the RAID level
For --create: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, multipath, mp.
For --build: linear, raid0, 0, stripe.
-p, --parity=: set the RAID5 parity layout: left-asymmetric, left-symmetric, right-asymmetric, right-symmetric, la, ra, ls, rs; defaults to left-symmetric
--layout=: same as --parity
-n, --raid-devices=: the number of active devices in the array; this number can only be changed with --grow
-x, --spare-devices=: the number of spare devices in the initial array
-z, --size=: the amount of space to use from each device when creating a RAID1/4/5/6 array
--assume-clean: currently only used with --build
-R, --run: if part of a device already appears in another array or filesystem, mdadm normally asks for confirmation before starting the array; this option skips that confirmation
-f, --force: normally mdadm will not create an array with only one device, and when creating a RAID5 array it initially leaves one device as a missing drive; this option overrides those behaviours
-a, --auto{=no,yes,md,mdp,part,p}{NN}:
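To illustrate -l, -n and -x together, here is a hypothetical example (the devices /dev/md1, /dev/sdd1, /dev/sde1 and /dev/sdf1 are made up) that creates a RAID1 array with one hot spare:
# mdadm -C /dev/md1 -l 1 -n 2 -x 1 /dev/sdd1 /dev/sde1 /dev/sdf1 //two active mirrors plus a spare that is rebuilt onto automatically when a mirror fails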
Summary: this exercise walks through the process of creating a RAID1 array with mdadm.
This article is from the "数列求和" blog; please keep this attribution: http://chenzhou312.blog.51cto.com/8139578/1540192