Ceph has supported exporting RBD images over the iSCSI protocol for a long time. This post demonstrates the setup, using an OSD server as the iSCSI target.
Part 1: OSD server side
1. Install a TGT package with RBD support
# echo "deb http://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/ceph-extras.list
# apt-get install tgt
Verify that this tgt build supports the rbd backing store:

# tgtadm --lld iscsi --op show --mode system | grep rbd
    rbd (bsoflags sync:direct)
2. Create an RBD image to export

# rbd create iscsipool/image1 --size 10240 --image-format 2
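The iscsipool pool has to exist before the image can be created in it. A minimal sketch of creating it first, assuming a small cluster (the PG count of 128 is an illustrative value, not from the original post):

```shell
# Create the backing pool if it does not exist yet
# (128 placement groups is an assumption; size it to your cluster)
ceph osd pool create iscsipool 128

# Confirm the image exists after "rbd create" has run
rbd info iscsipool/image1
```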
3. Define the iSCSI target (e.g. in /etc/tgt/targets.conf):

<target iqn.2014-04.rbdstore.example.com:iscsi>
    driver iscsi
    bs-type rbd
    backing-store iscsipool/image1    # format: <iscsi-pool>/<iscsi-rbd-image>
    initiator-address 10.10.2.49      # client address allowed to map the image
</target>
4. Reload or restart tgt so it picks up the new target:

# service tgt reload
or
# service tgt restart
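After reloading, the target and its rbd backing store can be inspected from the target host (the exact output depends on your configuration):

```shell
# List all targets, LUNs and their backing stores known to tgtd
tgtadm --lld iscsi --mode target --op show
```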
5. Disable RBD caching on the target host, since client-side caching behind an iSCSI target risks data inconsistency:

# vim /etc/ceph/ceph.conf
[client]
rbd_cache = false
Part 2: Client side
1. Install open-iscsi

# apt-get install open-iscsi
2. Restart the initiator service:

# service open-iscsi restart
 * Unmounting iscsi-backed filesystems        [ OK ]
 * Disconnecting iSCSI targets                [ OK ]
 * Stopping iSCSI initiator service           [ OK ]
 * Starting iSCSI initiator service iscsid    [ OK ]
 * Setting up iSCSI targets
iscsiadm: No records found                    [ OK ]
 * Mounting network filesystems
3. Discover the targets exported by the tgt server:

# iscsiadm -m discovery -t st -p 10.10.2.50
10.10.2.50:3260,1 iqn.2014-04.rbdstore.example.com:iscsi
4. Log in to the target:

# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2014-04.rbdstore.example.com:iscsi, portal: 10.10.2.50,3260] (multiple)
Login to [iface: default, target: iqn.2014-04.rbdstore.example.com:iscsi, portal: 10.10.2.50,3260] successful.
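Once logged in, the LUN appears as an ordinary local SCSI disk and can be formatted and mounted like any other. The device name and mount point below are assumptions, so confirm the device with lsblk before touching it:

```shell
# WARNING: mkfs destroys existing data. Make sure /dev/sda really is the
# iSCSI-mapped RBD image (check lsblk) before running this.
mkfs.ext4 /dev/sda

# Mount point is illustrative
mkdir -p /mnt/rbd
mount /dev/sda /mnt/rbd
```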
The 10G disk sda is the iSCSI-mapped RBD image:

root@cetune1:~# lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                     8:0    0   10G  0 disk
vda                   253:0    0   24G  0 disk
├─vda1                253:1    0  190M  0 part /boot
├─vda2                253:2    0    1K  0 part
└─vda5                253:5    0 23.8G  0 part
  ├─linux-swap (dm-0) 252:0    0  3.8G  0 lvm  [SWAP]
  └─linux-root (dm-1) 252:1    0   20G  0 lvm  /
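To detach cleanly later, unmount any filesystems on the disk first, then log out of the session. A sketch using the target IQN and portal from the steps above:

```shell
# Log out of this specific target
iscsiadm -m node -T iqn.2014-04.rbdstore.example.com:iscsi -p 10.10.2.50 --logout

# Optionally delete the node record so the target is not
# logged into again automatically on the next service restart
iscsiadm -m node -T iqn.2014-04.rbdstore.example.com:iscsi -p 10.10.2.50 -o delete
```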
Original article: http://blog.csdn.net/wytdahu/article/details/46545235