Deploying Ceph with the ceph-deploy tool
Lab environment
172.16.99.131 ceph-host-01
172.16.99.132 ceph-host-02
172.16.99.133 ceph-host-03
172.16.99.134 ceph-host-04
172.16.99.135 ceph-host-05
Each host has 2 spare disks.
The Ceph cluster nodes run 64-bit CentOS 7.5. There are 5 Ceph nodes in total; each node runs 2 OSD daemons, one per physical disk.
Disable SELinux and the firewall
setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
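Since ansible is used later in this walkthrough anyway, the same two steps can be pushed to every node at once. A minimal sketch, assuming an inventory group named ceph as in the later examples:
# ansible ceph -m shell -a 'setenforce 0; sed -i "s/SELINUX=.*/SELINUX=disabled/" /etc/selinux/config'
# ansible ceph -m shell -a 'systemctl stop firewalld; systemctl disable firewalld'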
1. Install ceph-deploy
1.1 Configure the hostnames and the hosts file. In this example, ceph-deploy is installed on one of the cluster nodes.
[root@ceph-host-01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.99.131 ceph-host-01
172.16.99.132 ceph-host-02
172.16.99.133 ceph-host-03
172.16.99.134 ceph-host-04
172.16.99.135 ceph-host-05
Alternatively, use ansible to push the updated /etc/hosts file to all nodes:
# ansible ceph -m copy -a 'src=/tmp/hosts dest=/etc/hosts'
1.2 Generate a key pair with ssh-keygen and copy the public key to each node with ssh-copy-id.
[root@ceph-host-01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:iVPfxuQVphRA8v2//XsM+PxzWjYrx5JnnHTbBdNYwTw root@ceph-host-01
The key's randomart image is:
+---[RSA 2048]----+
| ..o.o.=..|
| o o o E.|
| . . + .+.|
| o o = o+ .|
| o S . =..o |
| . .. .oo|
| o=+X|
| +o%X|
| B*X|
+----[SHA256]-----+
Copying the key to ceph-host-02 as an example:
[root@ceph-host-01 ~]# ssh-copy-id ceph-host-02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph-host-02 (172.16.99.132)' can't be established.
ECDSA key fingerprint is SHA256:VsMfdmYFzxV1dxKZi2OSp8QluRVQ1m2lT98cJt4nAFU.
ECDSA key fingerprint is MD5:de:07:2f:5c:13:9b:ba:0b:e5:0e:c2:db:3e:b8:ab:bd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-host-02's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ceph-host-02'"
and check to make sure that only the key(s) you wanted were added.
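The same command has to be repeated for each remaining node; a small loop over the host names from the hosts file above saves typing it four more times:
[root@ceph-host-01 ~]# for h in ceph-host-02 ceph-host-03 ceph-host-04 ceph-host-05; do ssh-copy-id $h; done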
1.3 Install ceph-deploy.
Before installing, configure the yum repos; here we use the relatively new nautilus release.
[root@ceph-host-01 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
enabled=1
gpgcheck=1
type=rpm-md
[Ceph-noarch]
name=Ceph noarch packages
enabled=1
gpgcheck=1
type=rpm-md
[ceph-source]
name=Ceph source packages
enabled=1
gpgcheck=1
type=rpm-md
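As listed, the three stanzas have no baseurl or gpgkey lines, so yum cannot resolve packages from them. A complete file for the nautilus release on el7 would look roughly like the sketch below; the official upstream URLs are assumed here, and a local mirror can be substituted in the same layout:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc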
Note: the official upstream yum repo can also be installed directly, as shown below.
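One common way is to install the ceph-release package straight from the upstream server; the exact package path below is an assumption for the nautilus/el7 release and should be verified against the noarch directory before use:
# yum install -y https://download.ceph.com/rpm-nautilus/el7/noarch/ceph-release-1-1.el7.noarch.rpm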
It is more efficient to push the repo file to all nodes with ansible:
# ansible ceph -m copy -a 'src=/etc/yum.repos.d/ceph.repo dest=/etc/yum.repos.d/ceph.repo'
[root@ceph-host-01 ~]# yum install ceph-deploy python-setuptools python2-subprocess32 -y
2. Create the Ceph monitor role
2.1 ceph-deploy generates several configuration files during deployment, so it is recommended to create a working directory first, e.g. ceph-cluster.
[root@ceph-host-01 ~]# mkdir -pv ceph-cluster
[root@ceph-host-01 ~]# cd ceph-cluster
2.2 Initialize the mon nodes and prepare to create the cluster:
[root@ceph-host-01 ceph-cluster]# ceph-deploy new ceph-host-01 ceph-host-02 ceph-host-03
Edit the generated Ceph cluster configuration file:
[root@ceph-host-01 ceph-cluster]# cat ceph.conf
[global]
fsid = a480fcef-1c4b-48cb-998d-0caed867b5eb
mon_initial_members = ceph-host-01, ceph-host-02, ceph-host-03
mon_host = 172.16.99.131,172.16.99.132,172.16.99.133
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.16.99.0/24
cluster_network = 192.168.9.0/24
# default number of replicas for new pools
osd pool default size = 5
[mon]
# allow pools to be deleted in this cluster
mon_allow_pool_delete = true
[mgr]
mgr modules = dashboard
2.3 Install the Ceph packages on all nodes
Use ceph-deploy to install the Ceph packages, or install ceph manually on each node; the version installed depends on the yum repo configured on each node.
[root@ceph-host-01 ceph-cluster]# ceph-deploy install --no-adjust-repos ceph-host-01 ceph-host-02 ceph-host-03 ceph-host-04 ceph-host-05
# Without --no-adjust-repos, ceph-deploy keeps overwriting the repos with its own default upstream source, which is a common pitfall.
Tip: to install the Ceph packages independently on each cluster node, run:
# yum install ceph ceph-radosgw -y
2.4 Configure the initial mon nodes and gather all keys
[root@ceph-host-01 ceph-cluster]# ceph-deploy mon create-initial
2.5 Check the running services
# ps -ef|grep ceph
ceph 1916 1 0 12:05 ? 00:00:03 /usr/bin/ceph-mon -f --cluster ceph --id ceph-host-01 --setuser ceph --setgroup ceph
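The monitor also runs as a systemd unit (named ceph-mon@<hostname>), so it can be checked the same way:
# systemctl status ceph-mon@ceph-host-01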
2.6 From the admin node, copy the configuration file and admin key to the admin node and all Ceph nodes
[root@ceph-host-01 ceph-cluster]# ceph-deploy admin ceph-host-01 ceph-host-02 ceph-host-03 ceph-host-04 ceph-host-05
On each node, make ceph.client.admin.keyring readable:
# chmod +r /etc/ceph/ceph.client.admin.keyring
Or use ansible to set the permission on all Ceph nodes at once:
# ansible ceph -a 'chmod +r /etc/ceph/ceph.client.admin.keyring'
3. Create the Ceph OSD role (OSD deployment)
Newer versions of ceph-deploy use the single osd create command directly; it is equivalent to the old prepare and activate steps and creates a BlueStore OSD (osd create --bluestore) by default.
ceph-deploy osd create --data /dev/vdb ceph-host-01
ceph-deploy osd create --data /dev/vdb ceph-host-02
ceph-deploy osd create --data /dev/vdb ceph-host-03
ceph-deploy osd create --data /dev/vdb ceph-host-04
ceph-deploy osd create --data /dev/vdb ceph-host-05
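If a disk has been used before (old partition table or leftover LVM/Ceph metadata), osd create may fail; in that case the device can be wiped first with ceph-deploy, for example:
# ceph-deploy disk zap ceph-host-01 /dev/vdb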
4. Create the mgr role
Since Ceph 12 (Luminous), the manager daemon is mandatory. Add an mgr on every machine that runs a monitor, otherwise the cluster stays in a WARN state.
[root@ceph-host-01 ceph-cluster]# ceph-deploy mgr create ceph-host-01 ceph-host-02 ceph-host-03
5. Check cluster health
[root@ceph-host-03 ~]# ceph health
HEALTH_OK
[root@ceph-host-03 ~]# ceph -s
cluster:
id: 02e63c58-5200-45c9-b592-07624f4893a5
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-host-01,ceph-host-02,ceph-host-03 (age 59m)
mgr: ceph-host-01(active, since 4m), standbys: ceph-host-02, ceph-host-03
osd: 5 osds: 5 up (since 87m), 5 in (since 87m)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 5.0 GiB used, 90 GiB / 95 GiB avail
pgs:
Now add the remaining OSDs (the second disk on each host):
ceph-deploy osd create --data /dev/vdc ceph-host-01
ceph-deploy osd create --data /dev/vdc ceph-host-02
ceph-deploy osd create --data /dev/vdc ceph-host-03
ceph-deploy osd create --data /dev/vdc ceph-host-04
ceph-deploy osd create --data /dev/vdc ceph-host-05
Check the status:
[root@ceph-host-01 ceph-cluster]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.18585 root default
-3 0.03717 host ceph-host-01
0 hdd 0.01859 osd.0 up 1.00000 1.00000
5 hdd 0.01859 osd.5 up 1.00000 1.00000
-5 0.03717 host ceph-host-02
1 hdd 0.01859 osd.1 up 1.00000 1.00000
6 hdd 0.01859 osd.6 up 1.00000 1.00000
-7 0.03717 host ceph-host-03
2 hdd 0.01859 osd.2 up 1.00000 1.00000
7 hdd 0.01859 osd.7 up 1.00000 1.00000
-9 0.03717 host ceph-host-04
3 hdd 0.01859 osd.3 up 1.00000 1.00000
8 hdd 0.01859 osd.8 up 1.00000 1.00000
-11 0.03717 host ceph-host-05
4 hdd 0.01859 osd.4 up 1.00000 1.00000
9 hdd 0.01859 osd.9 up 1.00000 1.00000
Check the mounts:
[root@ceph-host-02 ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/vda1 xfs 20G 1.5G 19G 8% /
devtmpfs devtmpfs 475M 0 475M 0% /dev
tmpfs tmpfs 496M 0 496M 0% /dev/shm
tmpfs tmpfs 496M 13M 483M 3% /run
tmpfs tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs tmpfs 100M 0 100M 0% /run/user/0
tmpfs tmpfs 496M 52K 496M 1% /var/lib/ceph/osd/ceph-1
tmpfs tmpfs 496M 52K 496M 1% /var/lib/ceph/osd/ceph-6
Note: functionally a single mon and a single mgr would be enough, but multiple nodes are recommended for high availability. Example commands for adding more:
# ceph-deploy mon add ceph-host-03
# ceph-deploy mgr create ceph-host-03
6. Create and delete Ceph storage pools
6.1 Create
[root@ceph-host-01 ceph-cluster]# ceph osd pool create volumes 128
pool 'volumes' created
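The 128 here is the pg_num for the pool. A rough rule of thumb from the placement-group documentation (linked in the CephFS section below) is to target on the order of 100 PGs per OSD across all pools: pg_num ≈ (OSD count × 100) / replica size, rounded to a power of two. With 10 OSDs and the replica size of 5 set earlier in ceph.conf, that gives (10 × 100) / 5 = 200, so roughly 256 PGs spread across all pools is in the right range.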
6.2 Delete
[root@ceph-host-02 ~]# ceph osd pool rm volumes volumes --yes-i-really-really-mean-it
pool 'volumes' removed
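Deleting the pool only succeeds because mon_allow_pool_delete = true was set in ceph.conf earlier. If the monitors were started without it, the flag can also be enabled at runtime before retrying the rm; a minimal sketch:
# ceph config set mon mon_allow_pool_delete true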
7. Deploy CephFS
http://docs.ceph.com/docs/master/rados/operations/placement-groups/
[root@ceph-host-01 ceph-cluster]# ceph-deploy mds create ceph-host-01 ceph-host-02 ceph-host-03
[root@ceph-host-01 ceph-cluster]# ceph osd pool create data 128
[root@ceph-host-01 ceph-cluster]# ceph osd pool create metadata 64
[root@ceph-host-01 ceph-cluster]# ceph fs new cephfs metadata data
[root@ceph-host-01 ceph-cluster]# ceph fs ls
name: cephfs, metadata pool: metadata, data pools: [data ]
[root@ceph-host-01 ceph-cluster]# ceph mds stat
cephfs:1 {0=ceph-host-02=up:active} 2 up:standby
Multiple active MDS daemons are supported, but the official documentation recommends keeping a single active MDS and leaving the others as standby.
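For reference, the number of active MDS daemons is controlled per filesystem with max_mds; leaving it at 1 follows the recommendation above, and raising it is what enables multiple actives:
# ceph fs set cephfs max_mds 2   # would allow two active MDS daemons
# ceph fs set cephfs max_mds 1   # back to the recommended single active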
8. Mount the filesystem
The client is planned to run on ceph-host-02.
On a physical host, CephFS can be mounted with the mount command, mount.ceph (yum install ceph-common), or ceph-fuse (yum install ceph-fuse). We start with the mount command.
[root@ceph-host-02 ~]# mkdir -p /data/ceph-storage/
[root@ceph-host-02 ~]# chown -R ceph.ceph /data/ceph-storage
[root@ceph-host-02 ~]# ceph-authtool -l /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQClSjZeDRiXGRAAdPMZHwAglZE6fe/NPstd9A==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
[root@ceph-host-02 ~]# mount -t ceph 172.16.99.131:6789:/ /data/ceph-storage/ -o name=admin,secret=AQClSjZeDRiXGRAAdPMZHwAglZE6fe/NPstd9A==
[root@ceph-host-02 ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 17G 1.5G 16G 9% /
devtmpfs devtmpfs 478M 0 478M 0% /dev
tmpfs tmpfs 488M 0 488M 0% /dev/shm
tmpfs tmpfs 488M 6.7M 481M 2% /run
tmpfs tmpfs 488M 0 488M 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 153M 862M 16% /boot
tmpfs tmpfs 98M 0 98M 0% /run/user/0
tmpfs tmpfs 488M 48K 488M 1% /var/lib/ceph/osd/ceph-1
tmpfs tmpfs 98M 0 98M 0% /run/user/1000
172.24.10.21:6789:/ ceph 120G 6.3G 114G 6% /data/ceph-storage
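Passing the key on the command line works, but it also lands in the shell history; mount.ceph can read it from a file instead via the secretfile option. A minimal sketch, with /etc/ceph/admin.secret as an assumed file containing only the key string:
[root@ceph-host-02 ~]# echo 'AQClSjZeDRiXGRAAdPMZHwAglZE6fe/NPstd9A==' > /etc/ceph/admin.secret
[root@ceph-host-02 ~]# mount -t ceph 172.16.99.131:6789:/ /data/ceph-storage/ -o name=admin,secretfile=/etc/ceph/admin.secret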
Cleaning up the environment
$ ceph-deploy purge ceph-host-01 ceph-host-02 ceph-host-03 ceph-host-04 ceph-host-05 # removes all Ceph-related packages
$ ceph-deploy purgedata ceph-host-01 ceph-host-02 ceph-host-03 ceph-host-04 ceph-host-05
$ ceph-deploy forgetkeys
Troubleshooting
Resolving a health warning:
health: HEALTH_WARN
clock skew detected on mon.ceph-host-02, mon.ceph-host-03
This is caused by the node clocks not being in sync; synchronize time on all nodes:
# ansible ceph -a 'yum install ntpdate -y'
# ansible ceph -a 'systemctl stop ntpdate'
# ansible ceph -a 'ntpdate time.windows.com'
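ntpdate only does a one-shot correction, so the skew can come back. Keeping a time daemon running on every node avoids that; a sketch using chrony (the CentOS 7 default) through ansible's yum and service modules:
# ansible ceph -m yum -a 'name=chrony state=present'
# ansible ceph -m service -a 'name=chronyd state=started enabled=yes'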
Original post: https://www.cnblogs.com/dexter-wang/p/12256490.html