
Rebuilding a CephFS File System

Posted: 2018-09-11 13:59:08


Resetting CephFS

Remove every file from the existing CephFS by tearing the file system down and rebuilding it from scratch:

Tear down and delete the existing CephFS

Stop all MDS services

systemctl stop ceph-mds@$HOSTNAME  
systemctl status ceph-mds@$HOSTNAME  
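The two commands above act only on the local node (`$HOSTNAME`); with several MDS daemons they must be repeated on every MDS host. A minimal sketch that just prints the commands to run on each host (the hostnames are placeholders, not from this cluster):

```shell
# Print the systemctl commands to run for each MDS host.
# Hostnames passed in are illustrative placeholders; in practice,
# run the printed commands on each host (e.g. via ssh).
mds_stop_cmds() {
    for h in "$@"; do
        echo "systemctl stop ceph-mds@$h"
        echo "systemctl status ceph-mds@$h"
    done
}

mds_stop_cmds mds-host-1 mds-host-2 mds-host-3
```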

View CephFS information

## ceph fs ls 
name: leadorfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

## ceph mds stat
e392: 0/1/1 up, 1 failed

## ceph mon dump
dumped monmap epoch 1

Mark the MDS rank as failed

ceph mds fail 0    

Delete the CephFS file system

ceph fs rm leadorfs --yes-i-really-mean-it      

Delete the metadata and data pools

ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it   
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it   
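Deleting the pools is irreversible, so it is worth confirming first that no file system still references them. On Jewel, `ceph fs ls` prints `No filesystems enabled` once all file systems are gone; a small guard sketch, assuming that output format:

```shell
# Succeed only when the given `ceph fs ls` output reports no file systems.
# Output format assumed from Jewel: "No filesystems enabled".
fs_all_removed() {
    echo "$1" | grep -q "No filesystems enabled"
}

# Real usage (guard before the pool deletes):
# fs_all_removed "$(ceph fs ls)" || { echo "a cephfs still exists" >&2; exit 1; }
fs_all_removed "No filesystems enabled" && echo "safe to delete pools"
```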

Check the cluster state again

## ceph mds stat
e394:

## ceph mds dump
dumped fsmap epoch 397
fs_name cephfs
epoch   397
flags   0
created 0.000000
modified        0.000000
tableserver     0
root    0
session_timeout 0
session_autoclose       0
max_file_size   0
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={}
max_mds 0
in
up      {}
failed
damaged
stopped
data_pools
metadata_pool   0
inline_data     disabled

Rebuild CephFS

Start all MDS services

systemctl start ceph-mds@$HOSTNAME
systemctl status ceph-mds@$HOSTNAME

# Verify:
ceph mds stat

e397:, 3 up:standby
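Every daemon should report up:standby before the file system is recreated. A sketch that pulls the standby count out of a `ceph mds stat` line of the form shown above (format as printed by this Jewel cluster):

```shell
# Pull the standby count out of a `ceph mds stat` line such as
# "e397:, 3 up:standby" (format as printed by this Jewel cluster).
standby_count() {
    echo "$1" | sed -n 's/.*[^0-9]\([0-9][0-9]*\) up:standby.*/\1/p'
}

standby_count "e397:, 3 up:standby"    # prints 3
```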

Recreate the pools and the file system (here named ptcephfs)

ceph osd pool create cephfs_data 512

ceph osd pool create cephfs_metadata 512

ceph fs new ptcephfs cephfs_metadata cephfs_data
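The pg_num of 512 used above matches the common rule of thumb: OSDs × 100 / replica count, rounded up to the next power of two. A sketch of that calculation; the 14 OSDs come from this cluster's osdmap, while a replica size of 3 is an assumption:

```shell
# Suggest a pg_num: (OSDs * 100 / replica size) rounded up to a power of two.
# Rule of thumb only; 14 OSDs are from this cluster, replica size 3 is assumed.
suggest_pg_num() {
    osds=$1; size=$2
    target=$(( osds * 100 / size ))
    pg=1
    while [ "$pg" -lt "$target" ]; do
        pg=$(( pg * 2 ))
    done
    echo "$pg"
}

suggest_pg_num 14 3    # 1400/3 = 466 -> 512
```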

Verify the cluster state

## ceph mds stat

e400: 1/1/1 up {0=jp33e502-4-13.ptengine.com=up:active}, 2 up:standby
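After `ceph fs new`, the new rank passes through up:creating before reaching up:active, so a script should poll rather than check once. A sketch that tests a `ceph mds stat` line (the live call is commented out so the sketch runs standalone; the hostname is a placeholder):

```shell
# Succeed when a `ceph mds stat` line reports an active rank.
is_active() {
    echo "$1" | grep -q "up:active"
}

# Real usage: poll until the rank comes up.
# until is_active "$(ceph mds stat)"; do sleep 2; done
is_active "e400: 1/1/1 up {0=mds-a=up:active}, 2 up:standby" && echo "active"
```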

## ceph mds dump

dumped fsmap epoch 400
fs_name ptcephfs
epoch   400
flags   0
created 2018-09-11 12:48:26.300848
modified        2018-09-11 12:48:26.300848
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in      0
up      {0=25579}
failed
damaged
stopped
data_pools      3
metadata_pool   4
inline_data     disabled
25579:  172.19.4.13:6800/2414848276 'jp33e502-4-13.ptengine.com' mds.0.399 up:active seq 834

Cluster health

ceph -w
    cluster fe946afe-43d0-404c-baed-fb04cd22d20d
     health HEALTH_OK
     monmap e1: 3 mons at {jp33e501-4-11=172.19.4.11:6789/0,jp33e501-4-12=172.19.4.12:6789/0,jp33e502-4-13=172.19.4.13:6789/0}
            election epoch 12, quorum 0,1,2 jp33e501-4-11,jp33e501-4-12,jp33e502-4-13
      fsmap e400: 1/1/1 up {0=jp33e502-4-13.ptengine.com=up:active}, 2 up:standby
     osdmap e2445: 14 osds: 14 up, 14 in
            flags sortbitwise,require_jewel_osds
      pgmap v876685: 1024 pgs, 2 pools, 2068 bytes data, 20 objects
            73366 MB used, 12919 GB / 12990 GB avail
                1024 active+clean


Original post: http://blog.51cto.com/michaelkang/2173728
