Source: https://my.oschina.net/wangzilong/blog/1549690
A Ceph cluster can contain a mix of disk types, for example some disks are SSD and some are SATA. If certain workloads need fast SSD storage while others are fine on SATA, a pool can be created on specific OSDs.
The basic procedure has 8 steps.
The current cluster only has SATA disks and no SSDs, but that does not affect the result of the experiment.
1 Get the CRUSH map
[root@ceph-admin getcrushmap]# ceph osd getcrushmap -o /opt/getcrushmap/crushmap
got crush map from osdmap epoch 2482
2 Decompile the CRUSH map
[root@ceph-admin getcrushmap]# crushtool -d crushmap -o decrushmap
3 Modify the CRUSH map
Add the following two buckets after the root default bucket:
root ssd {
id -5
alg straw
hash 0
item osd.0 weight 0.01
}
root stat {
id -6
alg straw
hash 0
item osd.1 weight 0.01
}
Add the following rules in the rules section:
rule ssd{
ruleset 1
type replicated
min_size 1
max_size 10
step take ssd
step chooseleaf firstn 0 type osd
step emit
}
rule stat{
ruleset 2
type replicated
min_size 1
max_size 10
step take stat
step chooseleaf firstn 0 type osd
step emit
}
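In these rules, step take ssd / step take stat starts placement at the corresponding root bucket, and step chooseleaf firstn 0 type osd uses individual OSDs as the failure domain, which is necessary here because each custom root contains bare OSDs rather than hosts. In a cluster where each root holds several hosts, a host-level failure domain would be the usual choice; a hypothetical variant (not part of this experiment) would look like:
rule ssd{
ruleset 1
type replicated
min_size 1
max_size 10
step take ssd
step chooseleaf firstn 0 type host
step emit
}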
4 Compile the CRUSH map
[root@ceph-admin getcrushmap]# crushtool -c decrushmap -o newcrushmap
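Before injecting the new map, it can be sanity-checked with crushtool's test mode (standard crushtool options; output not shown here). With the buckets above, rule 1 should map every input only to osd.0 and rule 2 only to osd.1:
[root@ceph-admin getcrushmap]# crushtool -i newcrushmap --test --show-mappings --rule 1 --num-rep 1 --min-x 0 --max-x 9
[root@ceph-admin getcrushmap]# crushtool -i newcrushmap --test --show-mappings --rule 2 --num-rep 1 --min-x 0 --max-x 9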
5 Inject the CRUSH map
[root@ceph-admin getcrushmap]# ceph osd setcrushmap -i /opt/getcrushmap/newcrushmap
set crush map
[root@ceph-admin getcrushmap]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6 0.00999 root stat
1 0.00999 osd.1 up 1.00000 1.00000
-5 0.00999 root ssd
0 0.00999 osd.0 up 1.00000 1.00000
-1 0.58498 root default
-2 0.19499 host ceph-admin
2 0.19499 osd.2 up 1.00000 1.00000
-3 0.19499 host ceph-node1
0 0.19499 osd.0 up 1.00000 1.00000
-4 0.19499 host ceph-node2
1 0.19499 osd.1 up 1.00000 1.00000
# Looking at the osd tree again, the tree has changed: two new buckets named stat and ssd have been added.
6 Create the pools
[root@ceph-admin getcrushmap]# ceph osd pool create ssd_pool 8 8
pool 'ssd_pool' created
[root@ceph-admin getcrushmap]# ceph osd pool create stat_pool 8 8
pool 'stat_pool' created
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2484 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2486 flags hashpspool stripe_width 0
Note: the two pools just created, ssd_pool and stat_pool, both have crush_ruleset 0; this is changed in the next step.
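A pool's rule can also be checked directly with pool get (pre-Luminous variable name, matching the crush_ruleset field shown in the osd dump output above):
[root@ceph-admin getcrushmap]# ceph osd pool get ssd_pool crush_ruleset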
7 Modify the pools' storage rule
[root@ceph-admin getcrushmap]# ceph osd pool set ssd_pool crush_ruleset 1
set pool 28 crush_ruleset to 1
[root@ceph-admin getcrushmap]# ceph osd pool set stat_pool crush_ruleset 2
set pool 29 crush_ruleset to 2
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2488 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2491 flags hashpspool stripe_width 0
# On the Luminous release, the syntax for setting a pool's rule is:
[root@ceph-admin ceph]# ceph osd pool set ssd crush_rule ssd
set pool 2 crush_rule to ssd
[root@ceph-admin ceph]# ceph osd pool set stat crush_rule stat
set pool 1 crush_rule to stat
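On Luminous the rules themselves can also be created from the CLI instead of editing the decompiled map by hand, provided the ssd and stat root buckets already exist; a sketch (the rule names here are arbitrary, and osd is the failure-domain type):
[root@ceph-admin ceph]# ceph osd crush rule create-replicated ssd ssd osd
[root@ceph-admin ceph]# ceph osd crush rule create-replicated stat stat osd
[root@ceph-admin ceph]# ceph osd crush rule ls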
8 Verify
Before verifying, first check whether there are any objects in ssd_pool and stat_pool:
[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
# Neither pool contains any objects
Use the rados command to add an object to each pool:
[root@ceph-admin getcrushmap]# rados -p ssd_pool put test_object1 /etc/hosts
[root@ceph-admin getcrushmap]# rados -p stat_pool put test_object2 /etc/hosts
[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
test_object1
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
test_object2
# The objects were added successfully
[root@ceph-admin getcrushmap]# ceph osd map ssd_pool test_object1
osdmap e2493 pool 'ssd_pool' (28) object 'test_object1' -> pg 28.d5066e42 (28.2) -> up ([0], p0) acting ([0,1,2], p0)
[root@ceph-admin getcrushmap]# ceph osd map stat_pool test_object2
osdmap e2493 pool 'stat_pool' (29) object 'test_object2' -> pg 29.c5cfe5e9 (29.1) -> up ([1], p1) acting ([1,0,2], p1)
The verification above shows that test_object1 is placed on osd.0 (the ssd root) and test_object2 on osd.1 (the stat root), which is the expected result.
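As a further check, the placement groups of each pool can be listed to confirm they all map onto the intended OSD (output omitted here; it varies per cluster):
[root@ceph-admin getcrushmap]# ceph pg ls-by-pool ssd_pool
[root@ceph-admin getcrushmap]# ceph pg ls-by-pool stat_pool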
Original source: https://www.cnblogs.com/wangmo/p/11125697.html