After partitioning the disk with fdisk, it looks like this:
[root@rac1 u01]# fdisk -l
……
Disk /dev/sdk: 536.8 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdk1 1 65270 524281243+ 83 Linux
……
[root@rac1 u01]#
But creating a filesystem on the new partition fails:
[root@rac1 u01]# mkfs -t ext3 /dev/sdk1
mke2fs 1.39 (29-May-2006)
/dev/sdk1 is apparently in use by the system; will not make a filesystem here!
The message says /dev/sdk1 is in use: the device is being held by Device Mapper (DM), which is why mkfs refuses to create a filesystem on it. If we remove the DM maps manually, the filesystem can be created normally. The steps are as follows:
[root@rac1 u01]# dmsetup status
mpath2: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:16 A 0
mpath11p1: 0 1048562487 linear
mpath9: 0 209715200 multipath 2 0 1 0 1 1 A 0 1 0 8:128 A 0
mpath8: 0 629145600 multipath 2 0 1 0 1 1 A 0 1 0 8:112 A 0
mpath7: 0 629145600 multipath 2 0 1 0 1 1 A 0 1 0 8:96 A 0
mpath6: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:80 A 0
mpath5: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:64 A 0
mpath11: 0 1048576000 multipath 2 0 1 0 1 1 A 0 1 0 8:160 A 0
mpath4: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:48 A 0
mpath10: 0 209715200 multipath 2 0 1 0 1 1 A 0 1 0 8:144 A 0
mpath3: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:32 A 0
[root@rac1 u01]# dmsetup remove_all
[root@rac1 u01]# dmsetup status
No devices found
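Note that dmsetup remove_all removes every device-mapper map on the host, not just the ones on /dev/sdk, so it also wipes any LVM or other multipath devices that may be in use. A narrower alternative would be the following sketch (the map names are taken from the status output above, where 8:160 corresponds to /dev/sdk):
# remove the partition map first, then the multipath map built on /dev/sdk (8:160)
dmsetup remove mpath11p1
dmsetup remove mpath11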
[root@rac1 u01]# mkfs -t ext3 /dev/sdk1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
65536000 inodes, 131070310 blocks
6553515 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4000 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
--The filesystem was created successfully.
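As the mkfs output notes, the periodic check policy can be changed with tune2fs. For example, a sketch that disables both the mount-count based and the time based forced checks for this filesystem:
# -c 0 disables the check every 36 mounts, -i 0 disables the 180-day interval
tune2fs -c 0 -i 0 /dev/sdk1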
--And it mounts successfully:
[root@rac1 u01]# mount /dev/sdk1 /u01/backup
[root@rac1 u01]# df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 59G 22G 35G 39% /
/dev/sda1 996M 51M 894M 6% /boot
tmpfs 32G 0 32G 0% /dev/shm
/dev/sda4 145G 188M 138G 1% /u01/dave
/dev/sdk1 493G 198M 467G 1% /u01/backup
--Edit /etc/fstab so that the filesystem is mounted automatically at boot:
[root@rac2 mapper]# vi /etc/fstab
LABEL=/ / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-sda2 swap swap defaults 0 0
/dev/sdk1 /u01/backup ext3 defaults 0 0
After a reboot, however, the filesystem does not come up automatically, and mounting /dev/sdk1 by hand fails as well: at boot multipathd rebuilds its maps on top of /dev/sdk, so the raw partition is once again "in use by the system".
So this approach does not work.
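To confirm that a multipath map has claimed the disk again after the reboot, commands such as the following can be used (a sketch; no output is reproduced here, and the device names follow this particular environment):
# list the multipath maps together with the sd paths behind them
multipath -ll
# show which block devices (by major:minor) each device-mapper map depends on
dmsetup deps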
Supplementary notes:
Device Mapper is a framework in the Linux 2.6 kernel that maps logical devices onto physical devices. With it, users can conveniently implement their own storage-management policies, such as striping, mirroring and snapshots. Popular Linux logical volume managers such as LVM2 (Linux Volume Manager 2), EVMS (Enterprise Volume Management System) and dmraid (Device Mapper RAID tool) are all built on this mechanism. As long as the mapping policy is defined in user space, and a target driver plug-in is written to handle the actual I/O requests, these features can be implemented quite easily.
Device Mapper consists mainly of the mapping layer in kernel space, plus the device-mapper library and the dmsetup tool in user space.
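As a small illustration of defining a mapping from user space, a sketch using the linear target (the map name demo_linear and the reuse of /dev/sdk1 are made up for this example; table sizes are in 512-byte sectors):
# map the first 2097152 sectors (1 GiB) of /dev/sdk1 to a new DM device
echo "0 2097152 linear /dev/sdk1 0" | dmsetup create demo_linear
# the device appears as /dev/mapper/demo_linear; remove it when finished
dmsetup remove demo_linear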
For notes on configuring Multipath, see:
http://blog.csdn.net/tianlesoftware/article/details/5979061
--Get the WWID of the disk:
[root@rac1 mapper]# /sbin/scsi_id -g -u -s /block/sdk
3690b11c00022bc0e000003e55105b786
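The -s /block/<dev> form used above belongs to the older udev shipped with RHEL 5; newer scsi_id versions take the device path directly. A sketch of the equivalent call (the binary location and option spelling may differ between udev versions and should be checked on the target system):
/sbin/scsi_id --whitelisted --device=/dev/sdk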
--Edit the multipath.conf file:
[root@rac1 mapper]# vi /etc/multipath.conf
multipaths {
        multipath {
                wwid 3690b11c00022bc0e000003e55105b786
                alias backup
                path_grouping_policy multibus
                path_checker readsector0
                path_selector "round-robin 0"
                failback manual
                rr_weight priorities
                no_path_retry 5
        }
#       multipath {
#               wwid 1DEC_____321816758474
#               alias red
#       }
}
"/etc/multipath.conf" 177L, 4832C written
--Restart multipathd:
[root@rac1 mapper]# service multipathd restart
Stopping multipathd daemon: [ OK ]
Starting multipathd daemon: [ OK ]
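After the restart, one way to verify that the alias took effect is to list the maps; the LUN configured above should now show up under the name backup instead of an mpathN name (a sketch, output omitted):
# list all multipath maps and the paths behind them
multipath -ll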
--Check the device files under /dev/mapper:
[root@rac1 mapper]# cd /dev/mapper/
[root@rac1 mapper]# ll
total 0
brw-rw---- 1 root disk 253, 9 Feb 20 12:35 backup
brw-rw---- 1 root disk 253, 10 Feb 20 12:35 backupp1
crw------- 1 root root 10, 60 Feb 20 12:35 control
brw-rw---- 1 root disk 253, 8 Feb 20 12:35 mpath10
brw-rw---- 1 root disk 253, 0 Feb 20 12:35 mpath2
brw-rw---- 1 root disk 253, 1 Feb 20 12:35 mpath3
brw-rw---- 1 root disk 253, 2 Feb 20 12:35 mpath4
brw-rw---- 1 root disk 253, 3 Feb 20 12:35 mpath5
brw-rw---- 1 root disk 253, 4 Feb 20 12:35 mpath6
brw-rw---- 1 root disk 253, 5 Feb 20 12:35 mpath7
brw-rw---- 1 root disk 253, 6 Feb 20 12:35 mpath8
brw-rw---- 1 root disk 253, 7 Feb 20 12:35 mpath9
--Mount the filesystem:
[root@rac1 mapper]# mount /dev/mapper/backupp1 /u01/backup
--Verify the mount:
[root@rac1 mapper]# df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 59G 22G 34G 39% /
/dev/sda1 996M 51M 894M 6% /boot
tmpfs 32G 364M 32G 2% /dev/shm
/dev/sda4 145G 188M 138G 1% /u01/dave
/dev/mapper/backupp1 493G 198M 467G 1% /u01/backup
After changing /etc/fstab to point at the multipath device instead of /dev/sdk1, the filesystem is mounted automatically after a reboot (a sketch of the entry is shown below). But this is a two-node setup and the storage is shared: a file created on node 1 cannot be seen on node 2, and testing shows it only becomes visible after the filesystem is re-mounted on the other node.
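A sketch of the corresponding /etc/fstab entry, assuming the same format as the entries listed earlier and the aliased multipath partition device:
/dev/mapper/backupp1 /u01/backup ext3 defaults 0 0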
The test steps are as follows:
[root@rac1 backup]# ll
total 24
-rw-r--r-- 1 root root 0 Feb 20 12:57 bl
drwxr-xr-x 2 root root 4096 Feb 20 12:55 dave
-rw-r--r-- 1 root root 5 Feb 20 12:55 dvd
drwx------ 2 root root 16384 Feb 20 12:10 lost+found
--Create a file named orcl on node 1:
[root@rac1 backup]# touch orcl
--On node 2, umount the directory:
[root@rac2 backup]# umount /u01/backup
umount: /u01/backup: device is busy
umount: /u01/backup: device is busy
[root@rac2 backup]# fuser -km /u01/backup
/u01/backup: 9848c
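fuser -km kills every process that is using the mount point. To list those processes first instead of killing them straight away, something like this can be run (a sketch):
# show, verbosely, the processes holding /u01/backup
fuser -vm /u01/backup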
[root@rac2 ~]# df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 70G 20G 46G 31% /
/dev/sda1 996M 51M 894M 6% /boot
tmpfs 32G 364M 32G 2% /dev/shm
/dev/mapper/backupp1 493G 198M 467G 1% /u01/backup
[root@rac2 ~]# umount /u01/backup
--Confirm the umount succeeded:
[root@rac2 ~]# df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 70G 20G 46G 31% /
/dev/sda1 996M 51M 894M 6% /boot
tmpfs 32G 364M 32G 2% /dev/shm
--Mount it again:
[root@rac2 ~]# mount /dev/mapper/backupp1 /u01/backup
[root@rac2 ~]# cd /u01/backup
[root@rac2 backup]# ll
total 24
-rw-r--r-- 1 root root 0 Feb 20 12:57 bl
drwxr-xr-x 2 root root 4096 Feb 20 12:55 dave
-rw-r--r-- 1 root root 5 Feb 20 12:55 dvd
drwx------ 2 root root 16384 Feb 20 12:10 lost+found
-rw-r--r-- 1 root root 0 Feb 20 14:34 orcl
[root@rac2 backup]#
This time the file created on node 1 is visible on node 2. This behaviour is expected: ext3 is a local, non-cluster filesystem, so each node caches its own view of the metadata and does not see the other node's changes until it re-mounts; in fact, mounting the same ext3 filesystem read-write on two nodes at once risks corrupting it, and a cluster filesystem (or mounting on only one node at a time) would be needed for genuinely concurrent access.
mkfs -t ext3 error "/dev/sdxx is apparently in use by the system"; solution
Original post: http://blog.csdn.net/mao0514/article/details/43083145