1. Add the mirror repository
Run on all servers.
The official Ceph mirrors are slow, so we use the yum repo provided by Aliyun instead.
[root@node2 ceph]# vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
priority=1
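After saving the repo file, rebuilding the yum metadata cache makes sure the new repos are picked up; an optional, minimal sketch:
[root@node2 ceph]# yum clean all
[root@node2 ceph]# yum makecache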
2. Install EPEL
Run on all servers.
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
3. Install NTP to keep the cluster's clocks in sync
Run on all servers, e.g. as sketched below.
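The original does not show the commands for this step; a minimal sketch using the stock CentOS 7 ntp package (adjust the upstream servers in /etc/ntp.conf to your environment):
[root@node2 ceph]# yum install -y ntp ntpdate
[root@node2 ceph]# systemctl enable ntpd
[root@node2 ceph]# systemctl start ntpd
[root@node2 ceph]# ntpq -p    # verify that time sources are reachable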
4. Passwordless SSH login
Run on all servers.
(1) Add every server in the cluster to /etc/hosts.
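For example, with node2 at 172.16.18.22 as used in ceph.conf below (the node1 and master1 addresses here are placeholders; substitute your own):
[root@node2 ceph]# vim /etc/hosts
172.16.18.21    node1      # placeholder address
172.16.18.22    node2
172.16.18.23    master1    # placeholder address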
(2) Generate a key pair with ssh-keygen:
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
ef:3e:32:a2:c1:2d:48:7b:f5:db:16:98:f5:ff:ed:24 root@barrie
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|        .        |
| . .   S+ .      |
| . + o  .o.. .   |
| o = . . .. .E . |
|  . o.  o+o  .o. |
|  .. ..=+.    o+ |
+-----------------+
(3) Copy the local public key to every other server (a loop form is sketched below).
[root@node2 ceph]# ssh-copy-id node1
Check: ssh node1 should now log in directly, without asking for a password.
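To cover every other server in one pass, a small loop over the cluster hostnames works too:
for h in node1 master1; do ssh-copy-id $h; done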
5. Install Ceph
Run on all nodes.
[root@node2 ceph]# yum install ceph -y
[root@node2 ceph]# ceph -v
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
6. Install ceph-deploy
Run only on the ceph-deploy server (node2).
[root@node2 ceph]# yum install ceph-deploy -y
[root@node2 ceph]# ceph-deploy --version
1.5.39
7. Deploy the Ceph cluster
Run only on the ceph-deploy server (node2).
[root@node2 ceph]# ceph-deploy new node2    # overwrites /etc/ceph/ceph.conf; the file names the monitor(s), and this form creates a monitor on node2 only
or
[root@node2 ceph]# ceph-deploy new node2 node1 master1    # creates a cluster with three monitors; note: name all three in one command rather than running three separate lines
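With the three-monitor form, the generated ceph.conf lists all three nodes; it should look roughly like this (the node1 and master1 addresses are placeholders):
mon_initial_members = node2, node1, master1
mon_host = 172.16.18.22,172.16.18.21,172.16.18.23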
8. Edit ceph.conf
[root@node2 ceph]# vim /etc/ceph/ceph.conf
[global]
fsid = 4c137c64-9e09-410e-aee4-c04b0f46294e
mon_initial_members = node2
mon_host = 172.16.18.22
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 3
public network = 172.16.18.0/24
[mon]
mon allow pool delete = true
9. Create the monitor
Run only on the deploy server (node2).
[root@node2 ceph]# mkdir /etc/ceph/ceph-cluster/
[root@node2 ceph]# cd /etc/ceph/
[root@node2 ceph]# cp ceph.conf ceph-cluster/
[root@node2 ceph]# cd ceph-cluster/
[root@node2 ceph-cluster]# ceph-deploy mon create-initial    # generates the various key files in the ceph-cluster directory
[root@node2 ceph-cluster]# cp /etc/ceph/ceph-cluster/ceph.client.admin.keyring /etc/ceph/
The key files have now been generated; see the listing sketched below.
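On a jewel deployment the directory should now hold the admin, mon, and bootstrap keyrings; roughly as follows (this listing is assumed from standard ceph-deploy behavior, not taken from the original):
[root@node2 ceph-cluster]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring
ceph.client.admin.keyring   ceph.conf                   ceph.mon.keyring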
10. Push the config file and keys to the other servers
Run only on the deploy server (node2).
First create the /etc/ceph/ directory on node1 and master1, e.g. as sketched below.
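This can be done from node2 over the passwordless SSH set up in step 4; a minimal sketch:
[root@node2 ceph]# ssh node1 "mkdir -p /etc/ceph"
[root@node2 ceph]# ssh master1 "mkdir -p /etc/ceph"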
[root@node2 ceph]# ceph-deploy --overwrite-conf admin node1 #Pushing: /etc/ceph/ceph.conf & ceph.client.admin.keyring
[root@node2 ceph]# ceph-deploy --overwrite-conf admin master1
Note: ceph-deploy --overwrite-conf config push node1    # pushes only /etc/ceph/ceph.conf
11. Start the mon and check its status
[root@node2 ceph]# systemctl start ceph-mon@node2
[root@node2 ceph]# ceph -s
    cluster 4c137c64-9e09-410e-aee4-c04b0f46294e
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e3: 1 mons at {node2=172.16.18.22:6789/0}
            election epoch 33, quorum 0 node2
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
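HEALTH_ERR with "no osds" is expected at this point: the 64 PGs cannot be placed until OSDs join the cluster. A sketch of the next step using the ceph-deploy 1.5.x host:disk syntax (/dev/sdb is a placeholder; use a real spare disk on each node):
[root@node2 ceph-cluster]# ceph-deploy osd create node2:sdb node1:sdb master1:sdb    # prepares and activates one OSD per node
Once the OSDs are up and in, ceph -s should move toward HEALTH_OK.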