Single-node Ceph installation on Ubuntu 14.04 (by quqi99)
Ceph theory
See my blog post: http://blog.csdn.net/quqi99/article/details/32939509
Notes:
a, A ceph cluster needs at least 1 mon node and 2 osd nodes to reach the active + clean state (so osd pool default size must be >= 2; note: if you do not want replication, a single osd node also works, you just lower the number of replicated copies from the default 3 to 1, i.e. sudo ceph osd pool set data min_size 1). A metadata (mds) node is only needed when running the ceph filesystem.
So if you only have a single node, run the following commands right after ceph-deploy new to modify ceph.conf:
echo "osd crush chooseleaf type = 0" >> ceph.conf
echo "osd pool default size = 1" >> ceph.conf
The osd crush chooseleaf type parameter is very important; see the explanation at: https://ceph.com/docs/master/rados/configuration/ceph-conf/
b, If the machine has multiple NICs, you can add the public network = {cidr} parameter to the [global] section of ceph.conf
c, An osd block device should preferably be larger than 5G, otherwise there is too little space when the journal is created; alternatively change:
echo "osd journal size = 100" >> ceph.conf
d, For testing, if you do not want to bother with authentication, you can:
echo "auth cluster required = none" >> ceph.conf
echo "auth service required = none" >> ceph.conf
echo "auth client required = none" >> ceph.conf
e, If you do want to use authentication, the procedure is as follows:
Once cephx is enabled, ceph looks for the keyring in the default search path, e.g. /etc/ceph/ceph.$name.keyring. You can add a keyring option to the [global] section of the ceph configuration file to point at this path, but that is not recommended.
Create the client.admin key and keep a copy of it on your client host:
$ ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring
Note: this command will clobber any existing /etc/ceph/ceph.client.admin.keyring.
Create a keyring for your cluster and generate a monitor secret key:
$ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
Copy the monitor keyring created above into the mon data directory of every monitor and name it ceph.mon.keyring, e.g. copy it to monitor mon.a of cluster ceph:
$ cp /tmp/ceph.mon.keyring /var/lib/ceph/mon/ceph-$(hostname)/keyring
Generate a secret key for every OSD, where {$id} is the OSD number:
$ ceph auth get-or-create osd.{$id} mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-{$id}/keyring
Generate a secret key for every MDS, where {$id} is the MDS letter:
$ ceph auth get-or-create mds.{$id} mon 'allow rwx' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mds/ceph-{$id}/keyring
To enable cephx authentication for ceph versions 0.51 and above, add the following to the [global] section of the configuration file:
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
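Afterwards, a quick way to confirm the keys were registered (a sketch, assuming the admin keyring created above is in place under /etc/ceph):
sudo ceph auth list          # list every registered entity and its capabilities
sudo ceph auth get osd.0     # show a single key, e.g. the osd.0 key created above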
Environment preparation
Install osd (one block device, /dev/ceph-volumes/lv-ceph0), mds, mon, client and admin all on the single node node1.
1, Make sure /etc/hosts contains:
127.0.0.1 localhost
192.168.99.116 node1
2, Make sure the machine where ceph-deploy is installed has passwordless ssh access to all other nodes (ssh-keygen && ssh-copy-id othernode)
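A minimal sketch of that step, assuming a second node called node2 (hypothetical here, since this post installs everything on one node):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # generate a key pair without a passphrase
ssh-copy-id node2                           # copy the public key to node2
ssh node2 hostname                          # verify passwordless login works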
Installation steps (note: all of the following operations are performed on the admin node)
1, Prepare two block devices (a block device can be a hard disk or an LVM volume); here we use a file as a simulated raw device
dd if=/dev/zero of=/bak/images/ceph-volumes.img bs=1M count=4096 oflag=direct
sgdisk -g --clear /bak/images/ceph-volumes.img
sudo vgcreate ceph-volumes $(sudo losetup --show -f /bak/images/ceph-volumes.img)
sudo lvcreate -L2G -nceph0 ceph-volumes
sudo lvcreate -L2G -nceph1 ceph-volumes
sudo mkfs.xfs -f /dev/ceph-volumes/ceph0
sudo mkfs.xfs -f /dev/ceph-volumes/ceph1
mkdir -p /srv/ceph/{osd0,osd1,mon0,mds0}
sudo mount /dev/ceph-volumes/ceph0 /srv/ceph/osd0
sudo mount /dev/ceph-volumes/ceph1 /srv/ceph/osd1
If you want to use the raw device directly, just attach it with losetup: sudo losetup --show -f /bak/images/ceph-volumes.img
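To double-check what was just created (standard LVM and coreutils commands, nothing Ceph-specific):
sudo vgs ceph-volumes                   # the volume group backed by the loop device
sudo lvs ceph-volumes                   # the two 2G logical volumes ceph0/ceph1
df -h /srv/ceph/osd0 /srv/ceph/osd1     # the xfs filesystems mounted for the OSDs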
2, Install ceph-deploy
sudo apt-get install ceph ceph-deploy
3, Create the cluster from a working directory, ceph-deploy new {ceph-node} {ceph-other-node}
mkdir ceph-cluster
cd /bak/work/ceph/ceph-cluster
ceph-deploy new node1 #for multiple nodes, list them all here
This generates ceph.conf and ceph.mon.keyring in the current directory (equivalent to running manually: ceph-authtool --create-keyring ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *')
If there is only one node, you also need to run:
echo "osd crush chooseleaf type = 0" >> ceph.conf
echo "osd pool default size = 1" >> ceph.conf
echo "osd journal size = 100" >> ceph.conf
The final ceph.conf looks like this:
[global]
fsid = f1245211-c764-49d3-81cd-b289ca82a96d
mon_initial_members = node1
mon_host = 10.55.61.177
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd crush chooseleaf type = 0
osd pool default size = 1
osd journal size = 100
You can also go on to specify networks for ceph; the following two parameters can be configured in each of the sections:
cluster network = 10.0.0.0/8
public network = 192.168.5.0/24
4, Install the Ceph base packages (ceph, ceph-common, ceph-fs-common, ceph-mds, gdisk), ceph-deploy install {ceph-node} [{ceph-node} ...]
ceph-deploy purgedata node1
ceph-deploy forgetkeys
ceph-deploy install node1 #for multiple nodes, list them all here
This runs: sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get -q -o Dpkg::Options::=--force-confnew --no-install-recommends --assume-yes install -- ceph ceph-mds ceph-common ceph-fs-common gdisk
5, Add a cluster monitor, ceph-deploy mon create {ceph-node}
sudo chown -R hua:root /var/run/ceph/
sudo chown -R hua:root /var/lib/ceph/
ceph-deploy --overwrite-conf mon create node1 #for multiple nodes, list them all here
This is equivalent to:
sudo ceph-authtool /var/lib/ceph/tmp/keyring.mon.$(hostname) --create-keyring --name=mon. --add-key=$(ceph-authtool --gen-print-key) --cap mon 'allow *'
sudo ceph-mon -c /etc/ceph/ceph.conf --mkfs -i $(hostname) --keyring /var/lib/ceph/tmp/keyring.mon.$(hostname)
sudo initctl emit ceph-mon id=$(hostname)
6, Gather the keys from the remote nodes into the current directory, ceph-deploy gatherkeys {ceph-node}
ceph-deploy gatherkeys node1
7, Add osds, ceph-deploy osd prepare {ceph-node}:/path/to/directory
ceph-deploy osd prepare node1:/srv/ceph/osd0
ceph-deploy osd prepare node1:/srv/ceph/osd1
If cephx authentication is in use, you can:
OSD_ID=$(sudo ceph -c /etc/ceph/ceph.conf osd create)
sudo ceph -c /etc/ceph/ceph.conf auth get-or-create osd.${OSD_ID} mon 'allow profile osd' osd 'allow *' | sudo tee ${CEPH_DATA_DIR}/osd/ceph-${OSD_ID}/keyring
8, Activate the OSDs, ceph-deploy osd activate {ceph-node}:/path/to/directory
sudo ceph-deploy osd activate node1:/srv/ceph/osd0
sudo ceph-deploy osd activate node1:/srv/ceph/osd1
If you hit the error "ceph-disk: Error: No cluster conf found", you need to empty /srv/ceph/osd0
9, Copy the admin key to the other nodes: copy ceph.conf and ceph.client.admin.keyring to ceph{1,2,3}:/etc/ceph
ceph-deploy admin node1
10, Verify
sudo ceph -s
sudo ceph osd tree
11, Add new mons
Multiple mons provide high availability:
1) Modify /etc/ceph/ceph.conf, e.g. change: mon_initial_members = node1 node2
2) Push the configuration to the other nodes: ceph-deploy --overwrite-conf config push node1 node2
3) Create the mons: ceph-deploy mon create node1 node2
12, Add a new mds. Only the filesystem needs an mds, and currently the official recommendation is to run only one mds in production.
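A minimal sketch of adding one mds with ceph-deploy (run from the same working directory as the earlier ceph-deploy commands):
ceph-deploy mds create node1    # deploy an mds daemon on node1
sudo ceph mds stat              # check that the mds becomes active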
13, To use it as a filesystem, just mount it: mount -t ceph node1:6789:/ /mnt -o name=admin,secret=<keyring>
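A concrete mount sketch that keeps the admin key in a file instead of on the command line (/etc/ceph/admin.secret is a path chosen here, not a Ceph default):
sudo ceph auth get-key client.admin | sudo tee /etc/ceph/admin.secret
sudo mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret
df -h /mnt    # verify the cephfs mount shows up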
14, Using it as a block device:
sudo modprobe rbd
sudo ceph osd pool set data min_size 2
sudo rbd create --size 1 -p data test1 #create a 1M block device /dev/rbd/{poolname}/imagename
sudo rbd map test1 --pool data
sudo mkfs.ext4 /dev/rbd/data/test1
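Once formatted, the image can be mounted, and later unmapped and removed, roughly like this (/mnt2 is an arbitrary mount point used for illustration):
sudo mkdir -p /mnt2
sudo mount /dev/rbd/data/test1 /mnt2    # use the rbd-backed filesystem
sudo umount /mnt2
sudo rbd unmap /dev/rbd/data/test1      # release the kernel mapping
sudo rbd rm test1 --pool data           # delete the image when it is no longer needed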
15, Command-line operations
1) There are 3 pools by default
$ sudo rados lspools
data
metadata
rbd
Create a pool: $ sudo rados mkpool nova
2) Set the replica count of the data pool to 2. This value is the number of replicas (there are 2 osds in total; if there is only one osd, set it to 1). If you do not set this, commands will hang and never return.
$ sudo ceph osd pool set data min_size 2
set pool 0 min_size to 1
3) Upload a file: $ sudo rados put test.txt ./test.txt --pool=data
4) List the files:
$ sudo rados -p data ls
test.txt
5) Check the object's location
$ sudo ceph osd map data test.txt
osdmap e9 pool 'data' (0) object 'test.txt' -> pg 0.8b0b6108 (0.8) -> up ([0], p0) acting ([0], p0)
$ cat /srv/ceph/osd0/current/0.8_head/test.txt__head_8B0B6108__0
test
6) After adding a new osd, you can watch objects migrate within the cluster with "sudo ceph -w"
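For example, a sketch of adding one more directory-backed osd in the same way as above and watching the rebalance (osd2 is a new, hypothetical directory):
sudo mkdir -p /srv/ceph/osd2
ceph-deploy osd prepare node1:/srv/ceph/osd2
ceph-deploy osd activate node1:/srv/ceph/osd2
sudo ceph -w    # watch placement groups migrate onto the new osd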
16, Integrating Ceph with Cinder, see: http://ceph.com/docs/master/rbd/rbd-openstack/
1) Create the pools
sudo ceph osd pool create volumes 8
sudo ceph osd pool create images 8
sudo ceph osd pool set volumes min_size 2
sudo ceph osd pool set images min_size 2
2) Configure the glance-api, cinder-volume and nova-compute nodes as ceph clients. Since everything runs on one machine in my case, the following steps are not needed:
a, Each needs ceph.conf: ssh {openstack-server} sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
b, Each needs the ceph client installed: sudo apt-get install python-ceph ceph-common
c, Create the cinder user for the volumes pool and the glance user for the images pool, and grant them capabilities
sudo ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
sudo ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
If authentication is involved, the command looks something like this:
ceph --name mon. --keyring /var/lib/ceph/mon/ceph-p01-storage-a1-e1c7g8/keyring auth get-or-create client.nova-compute mon 'allow rw' osd 'allow rwx'
d, Generate keyrings for cinder and glance (ceph.client.cinder.keyring and ceph.client.glance.keyring)
sudo chown -R hua:root /etc/ceph
ceph auth get-or-create client.glance | ssh {glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {glance-api-server} sudo chown hua:root /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {cinder-volume-server} sudo chown hua:root /etc/ceph/ceph.client.cinder.keyring
e, Configure glance in /etc/glance/glance-api.conf. Note: append these lines at the end
default_store=rbd
rbd_store_user=glance
rbd_store_pool=images
show_image_direct_url=True
f, Also generate the ceph key client.cinder.key needed by the libvirt process on the nova-compute node
sudo ceph auth get-key client.cinder | ssh {compute-node} tee /etc/ceph/client.cinder.key
$ sudo ceph auth get-key client.cinder | ssh node1 tee /etc/ceph/client.cinder.key
AQAXe6dTsCEkBRAA7MbJdRruSmW9XEYy/3WgQA==
$ uuidgen
e896efb2-1602-42cc-8a0c-c032831eef17
$ cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>e896efb2-1602-42cc-8a0c-c032831eef17</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
$ sudo virsh secret-define --file secret.xml
Secret e896efb2-1602-42cc-8a0c-c032831eef17 created
$ sudo virsh secret-set-value --secret e896efb2-1602-42cc-8a0c-c032831eef17 --base64 $(cat /etc/ceph/client.cinder.key)
$ rm client.cinder.key secret.xml
vi /etc/nova/nova.conf
libvirt_images_type=rbd
libvirt_images_rbd_pool=volumes
libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=e896efb2-1602-42cc-8a0c-c032831eef17
libvirt_inject_password=false
libvirt_inject_key=false
libvirt_inject_partition=-2
After restarting the nova-compute service, on the compute node you can run:
sudo rbd --keyring /etc/ceph/client.cinder.key --id nova-compute -p cinder ls
g, Configure cinder.conf and restart cinder-volume,
sudo apt-get install librados-dev librados2 librbd-dev python-ceph radosgw radosgw-agent
cinder-volume --config-file /etc/cinder/cinder.conf
volume_driver =cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
glance_api_version= 2
rbd_user = cinder
rbd_secret_uuid = e896efb2-1602-42cc-8a0c-c032831eef17
rbd_ceph_conf=/etc/ceph/ceph.conf
17, Run an instance
wget http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
qemu-img convert -f qcow2 -O raw cirros-0.3.2-x86_64-disk.img cirros-0.3.2-x86_64-disk.raw
glance image-create --name cirros --disk-format raw --container-format ovf --file cirros-0.3.2-x86_64-disk.raw --is-public True
$ glance index
ID Name Disk Format Container Format Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
dbc2b04d-7bf7-4f78-bdc0-859a8a588122 cirros raw ovf 41126400
$ rados -p images ls
rbd_id.dbc2b04d-7bf7-4f78-bdc0-859a8a588122
cinder create --image-id dbc2b04d-7bf7-4f78-bdc0-859a8a588122 --display-name storage1 1
cinder list
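To actually boot an instance from that rbd-backed volume, something like the following should work (a sketch; the flavor, net id and volume id are assumptions, take the volume id from cinder list):
nova boot --flavor m1.tiny --boot-volume <volume-id> --nic net-id=<net-id> vm-from-rbd
nova list    # the instance's root disk now lives in the volumes pool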
To tear everything down and start over:
cd /bak/work/ceph/ceph-cluster/
ceph-deploy purge node1
ceph-deploy purgedata node1
rm -rf /bak/work/ceph/ceph-cluster/*
sudo umount /srv/ceph/osd0
sudo umount /srv/ceph/osd1
mkdir -p /srv/ceph/{osd0,mon0,mds0}
For devstack support of ceph, see: https://review.openstack.org/#/c/65113/
Some debugging experience:
Collecting data
ceph status --format=json-pretty: provides the health status, the status of monitors, osds and placement groups, and the current epoch
ceph health detail --format=json-pretty: provides errors and warnings for things like monitors and placement groups
ceph osd tree --format=json-pretty: provides the status of each osd and where it sits in the cluster
Diagnosing placement groups
ceph health detail
ceph pg dump_stuck --format=json-pretty
ceph pg map <pgNum>
ceph pg <pgNum> query
ceph -w
For example: pg 4.63 is stuck unclean for 2303.828665, current state active+degraded, last acting [2,1]
This means placement group 4.63 belongs to pool 4, has been stuck for 2303.828665 seconds, and the osds [2, 1] acting for this pg are affected (a command sketch for this pg follows the list below).
a, inactive state: usually some osd is down; check with 'ceph pg <pgNum> query'
b, unclean state: objects have not been replicated to the desired number of copies, usually a recovery problem
c, degraded state: can appear when the replica count is larger than the number of osds; 'ceph -w' shows the replication progress
d, undersized state: the placement groups and pgnum do not match, usually a configuration error, e.g. the pool's pgnum is set too high, a problem with the cluster's crush map, or the osds are out of space; in short, something prevents the crush algorithm from choosing osds for the pg
e, stale state: no osd in the pg is reporting its status; the osd may be offline, restart the osd to rebuild the PG
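A sketch of drilling into the example pg 4.63 above with the commands just listed:
ceph pg dump_stuck unclean --format=json-pretty    # list pgs stuck in the unclean state
ceph pg map 4.63                                   # show which osds the pg maps to
ceph pg 4.63 query | less                          # full peering and recovery details for the pg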
Replacing a failed osd or disk
See: http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
In summary (a command sketch follows the two lists below):
Remove the OSD
1, Mark the OSD as out of the cluster
2, Wait for the data migration throughout the cluster (ceph -w)
3, Stop the OSD
4, Remove the OSD from the crushmap (ceph osd crush remove <osd.name>)
5, Delete the OSD’s authentication (ceph auth del <osd.num>)
6, Remove the OSD entry from any of the ceph.conf files.
Adding the OSD
1, Create the new OSD (ceph osd create <cluster-uuid> [osd_num])
2, Create a filesystem on the OSD
3, Mount the disk to the OSD directory
4, Initialize the OSD directory & create auth key
5, Allow the auth key to have access to the cluster
6, Add the OSD to the crushmap
7, Start the OSD
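A minimal command sketch for the removal half, assuming the failed osd is osd.1 on this Ubuntu 14.04 (upstart) node:
ceph osd out osd.1             # mark it out so data migrates away
ceph -w                        # wait until the cluster is back to active+clean
sudo stop ceph-osd id=1        # stop the daemon (upstart job on Ubuntu 14.04)
ceph osd crush remove osd.1    # remove it from the crush map
ceph auth del osd.1            # delete its authentication key
ceph osd rm 1                  # remove the osd from the cluster
# finally delete any osd.1 entry left in ceph.conf on all nodes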
A hung disk that cannot be unmounted
echo offline > /sys/block/$DISK/device/state
echo 1 > /sys/block/$DISK/device/delete
Recovering incomplete PGs
In a ceph cluster, if some node runs out of space it is easy to end up with incomplete PGs, which are very hard to recover. You can use the osd_find_best_info_ignore_history_les trick (after setting the osd_find_best_info_ignore_history_les option in ceph.conf, the PG peering process ignores the last epoch and searches the history log from the beginning for information about this PG to replay). You can also use the reweight-by-utilization knob to keep any single node from running out of space in the first place.
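A sketch of the two knobs mentioned above (the osd id and the 110 threshold are assumptions; treat osd_find_best_info_ignore_history_les as a last resort and remove it again once the PG has peered):
# in ceph.conf, under the section of the osd that holds the incomplete pg, then restart that osd
[osd.1]
osd find best info ignore history les = true
# spread data away from over-full osds: reweight anything above 110% of the average utilization
sudo ceph osd reweight-by-utilization 110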
References:
1, http://blog.scsorlando.com/post/2013/11/21/Ceph-Install-and-Deployment-in-a-production-environment.aspx
2, http://mathslinux.org/?p=441
3, http://blog.zhaw.ch/icclab/deploy-ceph-and-start-using-it-end-to-end-tutorial-installation-part-13/
4, http://dachary.org/?p=1971
5, http://blog.csdn.net/EricGogh/article/details/24348127
6, https://wiki.debian.org/OpenStackCephHowto
7, http://ceph.com/docs/master/rbd/rbd-openstack/#configure-openstack-to-use-ceph
8, http://openstack.redhat.com/Using_Ceph_for_Cinder_with_RDO_Havana
9, http://dachary.org/?p=2374
Original source: http://www.cnblogs.com/wzzkaifa/p/7224593.html