While testing Ceph earlier, the cluster kept showing this warning:
health HEALTH_WARN
        pool cephfs_metadata2 has many more objects per pg than average (too few pgs?)
        pool cephfs_data2 has many more objects per pg than average (too few pgs?)
Check the PG counts:
[root@node1 ~]# ceph osd pool get cephfs_metadata2 pg_num
pg_num: 8
[root@node1 ~]# ceph osd pool get cephfs_metadata2 pgp_num
pgp_num: 8
Then I remembered that this had only been a test installation, and since pg_num can be increased but never decreased, I had just picked an arbitrary small value at the time. The fix is simply to raise it back to a reasonable number.
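As an aside, a rough way to pick a target pg_num is the commonly cited rule of thumb of about 100 PGs per OSD divided by the pool's replica count, rounded up to a power of two. The following is my own sketch, not part of the original post:

# Sketch: estimate a target pg_num from the OSD count and the pool's replica size.
osds=$(ceph osd ls | wc -l)
size=$(ceph osd pool get cephfs_metadata2 size | awk '{print $2}')
target=$(( osds * 100 / size ))
# Round up to the next power of two.
pg=1; while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "suggested pg_num: $pg"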
[root@node1 ~]# ceph osd pool set cephfs_metadata2 pg_num 256
Error E2BIG: specified pg_num 256 is too large (creating 248 new PGs on ~3 OSDs exceeds per-OSD max of 32)
This error came up instead. According to "http://www.selinuxplus.com/?p=782", there is a limit on how many new PGs can be created in a single step. In the end I solved it the brute-force way, raising pg_num a little at a time:
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 40
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 72
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 104
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 136
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 168
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 200
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 232
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pg_num 256
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 40
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 72
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 104
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 136
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 168
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 200
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 232
[root@node1 my-cluster]# ceph osd pool set cephfs_metadata2 pgp_num 256
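The same stepwise increase could also be scripted. A minimal sketch of my own (not from the original post), assuming steps of 32 stay under the per-OSD split limit and that it is safest to wait for PG creation to finish between steps:

# Sketch: bump pg_num and pgp_num in steps of 32 up to 256.
for n in 40 72 104 136 168 200 232 256; do
    ceph osd pool set cephfs_metadata2 pg_num  $n
    ceph osd pool set cephfs_metadata2 pgp_num $n
    # Wait until the newly created PGs are no longer reported as "creating".
    while ceph -s | grep -q creating; do sleep 10; done
done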
After roughly half an hour, the cluster was healthy again.
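For what it's worth, the "per-OSD max of 32" in that error message appears to come from the monitor option mon_osd_max_split_count (default 32). Raising it temporarily might allow jumping straight to 256 in one step, but this is only my assumption and was not verified in the original post:

# Assumption: mon_osd_max_split_count governs the per-OSD split check above.
ceph tell mon.* injectargs '--mon_osd_max_split_count 128'
ceph osd pool set cephfs_metadata2 pg_num 256
ceph osd pool set cephfs_metadata2 pgp_num 256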
Original post: https://www.cnblogs.com/bugutian/p/9771025.html