Linux Operations, Phase 5 (Part 9): iSCSI & cLVM & gfs2
gfs2 (Global File System version 2): a cluster file system (CFS) that uses the HA stack's messaging layer to announce to the other nodes which locks it currently holds.
cLVM (Clustered Logical Volume Management): turns the shared storage into logical volumes. It borrows the HA stack's heartbeat transport (its messaging mechanism, including its split-brain handling), so every node must run the clvmd service (which requires cman and rgmanager to be started first) for the nodes to communicate with each other.
Prepare four nodes: node{1,2,3} consume the shared storage, node4 provides it and doubles as a jump host.
On node{1,2,3}, prepare the yum repositories, time synchronization, node names, and /etc/hosts; node4 must have passwordless SSH trust with node{1,2,3}.
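A quick pre-flight check from node4 might look like this (a sketch; the node names follow this setup, adjust to your environment):
#for I in {1..3};do ssh node$I 'date';done (clocks must agree across nodes, e.g. synced via ntpdate against a common server)
#for I in {1..3};do ssh node$I 'uname -n';done (each node name must match its /etc/hosts entry)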
(1) Prepare the shared storage
node4-side:
[root@node4 ~]# vim /etc/tgt/targets.conf
default-driver iscsi
<target iqn.2015-07.com.magedu:teststore.disk1>
<backing-store /dev/sdb>
vendor_id magedu
lun 1
</backing-store>
incominguser iscsi iscsi
initiator-address 192.168.41.131
initiator-address 192.168.41.132
initiator-address 192.168.41.133
</target>
[root@node4 ~]# service tgtd restart
[root@node4 ~]# netstat -tnlp(3260/tcp,tgtd)
[root@node4 ~]# tgtadm --lld iscsi --mode target --op show
……
LUN: 1
……
Account information:
iscsi
ACL information:
192.168.41.131
192.168.41.132
192.168.41.133
[root@node4 ~]# alias ha='for I in {1..3};do ssh node$I '
[root@node4 ~]# ha 'rm -rf /var/lib/iscsi/send_targets/*';done
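How the ha alias works: its body deliberately ends inside an open for loop, so the quoted command you type becomes the argument to ssh, and the trailing ;done you append closes the loop. The line above therefore expands to:
#for I in {1..3};do ssh node$I 'rm -rf /var/lib/iscsi/send_targets/*';done
(clearing stale discovery records left over from any earlier iSCSI experiments)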
node{1,2,3}-side:
[root@node1 ~]# vim /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsi
node.session.auth.password = iscsi
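Instead of editing each node's file by hand, the same three lines could be appended from node4 with the ha alias (a sketch; it assumes the stock iscsid.conf still has these settings commented out, so the appended lines take effect):
#ha 'echo "node.session.auth.authmethod = CHAP" >> /etc/iscsi/iscsid.conf';done
#ha 'echo "node.session.auth.username = iscsi" >> /etc/iscsi/iscsid.conf';done
#ha 'echo "node.session.auth.password = iscsi" >> /etc/iscsi/iscsid.conf';done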
node4-side:
[root@node4 ~]# ha 'service iscsi restart';done
[root@node4 ~]# ha 'iscsiadm -m discovery -t st -p 192.168.41.134';done
[root@node4 ~]# ha 'iscsiadm -m node -T iqn.2015-07.com.magedu:teststore.disk1 -p 192.168.41.134 -l';done
[root@node1 ~]# fdisk -l
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
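To confirm the login on every node before going further (a sketch; it assumes the LUN shows up as /dev/sdb everywhere, as it does on node1 above):
#ha 'iscsiadm -m session';done (each node should list iqn.2015-07.com.magedu:teststore.disk1)
#ha 'fdisk -l /dev/sdb';done (the 10GB LUN should appear on all three nodes)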
(2) Install cman, rgmanager, gfs2-utils, and lvm2-cluster
node4-side:
[root@node4 ~]# for I in {1..3};do scp /root/{cman*,rgmanager*,gfs2-utils*,lvm2-cluster*} node$I:/root/;ssh node$I 'yum -y --nogpgcheck localinstall /root/*.rpm';done
node1-side:
[root@node1 ~]# ccs_tool create tcluster
[root@node1 ~]# ccs_tool addfence meatware fence_manual
[root@node1 ~]# ccs_tool addnode -v 1 -n 1 -f meatware node1.magedu.com
[root@node1 ~]# ccs_tool addnode -v 1 -n 2 -f meatware node2.magedu.com
[root@node1 ~]# ccs_tool addnode -v 1 -n 3 -f meatware node3.magedu.com
[root@node1 ~]# service cman start (on the very first start it is best to use the system-config-cluster tool to change the multicast address, so this cluster does not share the default multicast address with another cluster; otherwise it will receive that cluster's sync traffic and fail to start properly. Alternatively, copy node1's /etc/cluster/cluster.conf to the other nodes before starting them.)
[root@node1 ~]# clustat
node1.magedu.com 1 Online, Local
node2.magedu.com 2 Online
node3.magedu.com 3 Online
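Beyond clustat, cman itself can report membership and quorum (a sketch of common checks):
#cman_tool status (cluster name, votes, quorum state)
#cman_tool nodes (member list with join status)
#ccs_tool lsnode (the nodes as defined in /etc/cluster/cluster.conf)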
node2-side:
[root@node2 ~]# service cman start
node3-side:
[root@node3 ~]# service cman start
(3) cLVM configuration:
node1-side:
[root@node1 ~]# rpm -ql lvm2-cluster
/etc/rc.d/init.d/clvmd
/usr/sbin/clvmd
/usr/sbin/lvmconf
[root@node1 ~]# vim /etc/lvm/lvm.conf (this file must be edited on every node)
locking_type = 3 (change this from 1 to 3; type 1 is the default local file-based locking, type 3 uses built-in clustered locking)
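Note that lvm2-cluster also ships /usr/sbin/lvmconf (see the rpm -ql output above), which makes this change for you; from node4 the whole cluster could be switched in one pass (a sketch):
#ha 'lvmconf --enable-cluster';done
#ha 'grep locking_type /etc/lvm/lvm.conf';done (should now report locking_type = 3)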
node4-side:
[root@node4 ~]# ha 'service clvmd start';done
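Remember the dependency noted at the top: clvmd needs cman (and rgmanager) running first. To keep that order across reboots (a sketch):
#ha 'chkconfig cman on;chkconfig rgmanager on;chkconfig clvmd on';done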
node1-side:
[root@node1 ~]# pvcreate /dev/sdb
Writing physical volume data to disk "/dev/sdb"
Physical volume "/dev/sdb" successfully created
[root@node1 ~]# pvs (the PV is also visible on the other nodes)
PV VG Fmt Attr PSize PFree
/dev/sdb      lvm2 a--  10.00G 10.00G
[root@node1 ~]# vgcreate clustervg /dev/sdb
Clustered volume group "clustervg" successfully created
[root@node1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
clustervg   1   0   0 wz--nc 10.00G 10.00G
[root@node1 ~]# lvcreate -L 5G -n clusterlv clustervg
Logical volume "clusterlv" created
[root@node1 ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
clusterlv clustervg -wi-a- 5.00G
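Because the VG is clustered, the new LV should be visible everywhere; a quick check from node4 (a sketch):
#ha 'lvs | grep clusterlv';done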
(4) gfs2 configuration:
node1-side:
[root@node1 ~]# rpm -ql gfs2-utils
/etc/rc.d/init.d/gfs2
/sbin/fsck.gfs2
/sbin/gfs2_convert
/sbin/gfs2_edit
/sbin/gfs2_fsck
/sbin/gfs2_grow
/sbin/gfs2_jadd
/sbin/gfs2_quota
/sbin/gfs2_tool
/sbin/mkfs.gfs2
/sbin/mount.gfs2
/sbin/umount.gfs2
[root@node1 ~]# mkfs.gfs2 -h
#mkfs.gfs2 OPTIONS DEVICE
options:
-b # (block size; defaults to 4096 bytes)
-D (enable debugging output)
-j NUMBER (the number of journals for mkfs.gfs2 to create; create one per node that will mount the filesystem; defaults to 1)
-J # (the size of each journal in megabytes; defaults to 128MB)
-p NAME (the locking protocol to use; there are two, and lock_dlm is the usual choice; lock_nolock is for a single node, though with only one node a regular single-host filesystem suffices and a cluster filesystem is unnecessary)
-t NAME (the lock table field appropriate to the lock module you are using, in the form CLUSTERNAME:LOCKTABLENAME; CLUSTERNAME is the name of the cluster this node belongs to, and LOCKTABLENAME must be unique within that cluster; one cluster can host several cluster filesystems, and the lock table name identifies which node holds locks on which filesystem)
[root@node1 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t tcluster:lktb1 /dev/clustervg/clusterlv (formatting a cluster filesystem takes a while)
This will destroy any data on /dev/clustervg/clusterlv.
Are you sure you want to proceed? [y/n] y
Device: /dev/clustervg/clusterlv
Blocksize: 4096
Device Size 5.00 GB (1310720 blocks)
Filesystem Size: 5.00 GB (1310718 blocks)
Journals: 3
Resource Groups: 20
Locking Protocol: "lock_dlm"
Lock Table: "tcluster:lktb1"
UUID: D8B10B8F-7EE2-A818-E392-0DF218411F2C
[root@node1 ~]# mkdir /mydata
[root@node1 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
node2-side:
[root@node2 ~]# mkdir /mydata
[root@node2 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
[root@node2 ~]# ls /mydata
[root@node2 ~]# touch /mydata/b.txt
[root@node2 ~]# ls /mydata
b.txt
node3-side:
[root@node3 ~]# mkdir /mydata
[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata
[root@node3 ~]# touch /mydata/c.txt
[root@node3 ~]# ls /mydata
b.txt c.txt
node1-side:
[root@node1 ~]# ls /mydata
b.txt c.txt
Note: every node's operations on the CFS are committed to disk immediately and announced to the other nodes, so a cluster filesystem carries a serious performance penalty.
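The mounts above do not survive a reboot. Since gfs2-utils ships an init script (/etc/rc.d/init.d/gfs2, see the rpm -ql output above) that mounts the gfs2 entries listed in /etc/fstab, persistence could be arranged per node like this (a sketch):
#echo '/dev/clustervg/clusterlv /mydata gfs2 defaults 0 0' >> /etc/fstab
#chkconfig gfs2 on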
(5) Tuning and debugging:
[root@node1 ~]# gfs2_tool -h (interface to gfs2 ioctl/sysfs calls)
#gfs2_tool df|journals|gettune|freeze|unfreeze|getargs MOUNT_POINT
#gfs2_tool list
[root@node1 ~]# gfs2_tool list(List the currently mounted GFS2 filesystems)
253:2 tcluster:lktb1
[root@node1 ~]# gfs2_tool journals /mydata (print out information about the journals in a mounted filesystem)
journal2 - 128MB
journal1 - 128MB
journal0 - 128MB
3 journal(s) found.
[root@node1 ~]# gfs2_tool df /mydata
/mydata:
SB lock proto = "lock_dlm"
SB lock table = "tcluster:lktb1"
SB ondisk format = 1801
SB multihost format = 1900
Block size = 4096
Journals = 3
Resource Groups = 20
Mounted lock proto = "lock_dlm"
Mounted lock table = "tcluster:lktb1"
Mounted host data = "jid=0:id=196609:first=1"
Journal number = 0
Lock module flags = 0
Local flocks = FALSE
Local caching = FALSE
Type Total Blocks Used Blocks Free Blocks use%
------------------------------------------------------------------------
data 1310564 99293 1211271 8%
inodes 1211294 23 1211271 0%
[root@node1 ~]# gfs2_tool freeze /mydata (Freeze (quiesce) a GFS2 cluster; any node's operations on the CFS will block until unfreeze)
[root@node1 ~]# gfs2_tool getargs /mydata
statfs_percent 0
data 2
suiddir 0
quota 0
posix_acl 0
upgrade 0
debug 0
localflocks 0
localcaching 0
ignore_local_fs 0
spectator 0
hostdata jid=0:id=196609:first=1
locktable
lockproto
[root@node1 ~]# gfs2_tool gettune /mydata (Print out the current values of the tuning parameters in a running filesystem; to adjust one, use settune with the mount point followed by parameter=value, e.g. #gfs2_tool settune /mydata new_files_directio=1)
new_files_directio = 0
new_files_jdata = 0
quota_scale = 1.0000 (1, 1)
logd_secs = 1
recoverd_secs = 60
statfs_quantum = 30
stall_secs = 600
quota_cache_secs = 300
quota_simul_sync = 64
statfs_slow = 0
complain_secs = 10
max_readahead = 262144
quota_quantum = 60
quota_warn_period = 10
jindex_refresh_secs = 60
log_flush_secs = 60
incore_log_blocks = 1024
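Following the settune form mentioned above, changing a parameter and verifying it might look like this (a sketch):
#gfs2_tool settune /mydata new_files_directio=1 (newly created files are then written with direct I/O)
#gfs2_tool gettune /mydata | grep new_files_directio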
[root@node1 ~]# gfs2_jadd -j 1 /dev/clustervg/clusterlv (add journals; -j 1 is the number of journals to add, not the total node count; when new nodes join the cluster, add journals for them with gfs2_jadd)
[root@node1 ~]# lvextend -L 8G /dev/clustervg/clusterlv (extend the size of the logical volume; think of this as growing the physical boundary)
Extending logical volume clusterlv to 8.00 GB
Logical volume clusterlv successfully resized
[root@node1 ~]# gfs2_grow /dev/clustervg/clusterlv (Expand a GFS2 filesystem, i.e. grow the logical boundary; never skip this step after extending the LV, it is essential)
FS: Mount Point: /mydata
FS: Device: /dev/mapper/clustervg-clusterlv
FS: Size: 1310718 (0x13fffe)
FS: RG size: 65533 (0xfffd)
DEV: Size: 2097152 (0x200000)
The file system grew by 3072MB.
Error fallocating extra space : File too large
gfs2_grow complete.
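A quick check that the new capacity is actually visible (a sketch):
#gfs2_tool df /mydata (total blocks and resource groups should have grown)
#df -h /mydata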
[root@node1 ~]# lvresize -L -3G /dev/clustervg/clusterlv (shrink the logical volume; note that gfs2_grow can only grow a filesystem and GFS2 cannot be shrunk, so reducing the LV underneath a GFS2 filesystem risks data loss)
[root@node1 ~]# gfs2_grow /dev/clustervg/clusterlv
[root@node1 ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
clusterlv clustervg -wi-ao 5.00G
These are notes taken while studying the 马哥运维课程 (Mage Education ops course).
Original post: http://jowin.blog.51cto.com/10090021/1726253