
Building Ceph on CentOS 7.4

Date: 2017-09-18 11:09:02

Tags: ceph


This article uses the ceph-deploy tool to quickly build a working Ceph cluster.


1. Environment Preparation


  • Set the hostname

    [root@admin-node ~]# cat /etc/redhat-release
    CentOS Linux release 7.4.1708 (Core)


    IP            Hostname     Role
    10.10.10.20   admin-node   ceph-deploy
    10.10.10.21   node1        mon
    10.10.10.22   node2        osd
    10.10.10.23   node3        osd


  • Set up name resolution (here we simply edit the /etc/hosts file)

  • Configure on every node

   

    [root@admin-node ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.10.10.20 admin-node
    10.10.10.21 node1
    10.10.10.22 node2
    10.10.10.23 node3
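Since the same entries must land in /etc/hosts on every node, they can be generated from one list instead of typed four times. A minimal sketch (the `NODES` variable and its format are an assumption for illustration; the IPs and hostnames come from the table above):

```shell
# Print the hosts block to append to /etc/hosts on every node.
# NODES pairs each hostname with its IP, using the table above.
NODES="admin-node:10.10.10.20 node1:10.10.10.21 node2:10.10.10.22 node3:10.10.10.23"
hosts_block=$(
    for entry in $NODES; do
        host=${entry%%:*}   # text before the colon
        ip=${entry##*:}     # text after the colon
        printf '%s %s\n' "$ip" "$host"
    done
)
echo "$hosts_block"
```

The same loop body could run `ssh "$host" "echo '$ip $host' >> /etc/hosts"` to push each entry out once passwordless SSH (set up below) is in place.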


  • Configure the yum repositories

  • Configure on every node


    [root@admin-node ~]# mv /etc/yum.repos.d{,.bak}
    [root@admin-node ~]# mkdir /etc/yum.repos.d
    [root@admin-node ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    [root@node3 ceph]# cat /etc/yum.repos.d/ceph.repo
    [Ceph]
    name=Ceph
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
    enabled=1
    gpgcheck=0
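The repo file can be written with a here-document rather than edited by hand, which makes it easy to reproduce on each node (e.g. by scp'ing it out). A sketch, writing to a temp path for illustration; on a real node the target is /etc/yum.repos.d/ceph.repo:

```shell
# Write the Jewel repo file shown above. A temp file stands in for
# /etc/yum.repos.d/ceph.repo so this can run anywhere.
repo_file=$(mktemp)
cat > "$repo_file" <<'EOF'
[Ceph]
name=Ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0
EOF
```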


  • Disable the firewall and SELinux

  • Configure on every node


    [root@admin-node ~]# systemctl stop firewalld.service
    [root@admin-node ~]# systemctl disable firewalld.service
    [root@admin-node ~]# setenforce 0
    [root@admin-node ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config


  • Set up passwordless (key-based) SSH login between the nodes

  • Configure on every node

    [root@admin-node ~]# ssh-keygen
    [root@admin-node ~]# ssh-copy-id 10.10.10.21
    [root@admin-node ~]# ssh-copy-id 10.10.10.22
    [root@admin-node ~]# ssh-copy-id 10.10.10.23
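The three ssh-copy-id calls can be folded into one loop. Shown here as a dry run that only prints the commands (remove the `echo` to actually execute them):

```shell
# Push the admin node's public key to each node in one loop.
# 'echo' makes this a dry run; drop it to run ssh-copy-id for real.
cmds=$(
    for ip in 10.10.10.21 10.10.10.22 10.10.10.23; do
        echo "ssh-copy-id $ip"
    done
)
echo "$cmds"
```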


  • Synchronize time with chrony

  • Configure on every node


    [root@admin-node ~]# yum install chrony -y
    [root@admin-node ~]# systemctl restart chronyd
    [root@admin-node ~]# systemctl enable chronyd
    [root@admin-node ~]# chronyc sources -v    (check sync status; a * marks the source chrony is synchronized to)
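In `chronyc sources` output, the selected, synchronized source is the line whose mode/state columns begin with `^*`. That makes the "is time synced?" check scriptable. A sketch using canned sample output (the `sample` line is a hypothetical source entry, not taken from a live daemon), so the parsing can be shown without chronyd running:

```shell
# A line starting with '^*' in 'chronyc sources' output marks the source
# currently synchronized to. Sample output stands in for the live command.
sample='^* 10.10.10.1    2   6   377    35   -408us[-621us] +/-  142ms'
if printf '%s\n' "$sample" | grep -q '^\^\*'; then
    sync_state=synced
else
    sync_state=not-synced
fi
echo "$sync_state"
```

On a real node, replace `printf '%s\n' "$sample"` with `chronyc sources`.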


2. Installing Ceph (Jewel)


  • Install ceph-deploy

  • Only on the admin-node


    [root@admin-node ~]# yum install ceph-deploy -y


  • On the admin node, create a directory to hold the configuration files and keys that ceph-deploy generates

  • Run only on the admin-node


    [root@admin-node ~]# mkdir /etc/ceph
    [root@admin-node ~]# cd /etc/ceph/


  • Purge old configuration (run the following if you want to reinstall from scratch)

  • Run only on the admin-node


    [root@admin-node ceph]# ceph-deploy purgedata node1 node2 node3
    [root@admin-node ceph]# ceph-deploy forgetkeys


  • Create the cluster

  • Run only on the admin-node


    [root@admin-node ceph]# ceph-deploy new node1


  • Edit the Ceph configuration and set the replica count to 2

  • Run only on the admin-node


    [root@admin-node ceph]# vi ceph.conf
    [global]
    fsid = 183e441b-c8cd-40fa-9b1a-0387cb8e8735
    mon_initial_members = node1
    mon_host = 10.10.10.21
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    filestore_xattr_use_omap = true
    osd pool default size = 2

    
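The one-line change above can also be scripted, guarded so repeated runs do not append duplicates. A sketch; a temp copy stands in for the real /etc/ceph/ceph.conf:

```shell
# Append 'osd pool default size = 2' to ceph.conf only if it is not
# already set. A temp file substitutes for /etc/ceph/ceph.conf here.
conf=$(mktemp)
printf '[global]\nfsid = 183e441b-c8cd-40fa-9b1a-0387cb8e8735\n' > "$conf"
grep -q '^osd pool default size' "$conf" || \
    echo 'osd pool default size = 2' >> "$conf"
```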

  • Install Ceph

  • Run only on the admin-node


    [root@admin-node ceph]# ceph-deploy install admin-node node1 node2 node3


    [node3][DEBUG ] Configure Yum priorities to include obsoletes
    [node3][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
    [node3][INFO  ] Running command: rpm --import https://download.ceph.com/keys/release.asc
    [node3][INFO  ] Running command: rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
    [node3][DEBUG ] Retrieving https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
    [node3][WARNIN] warning: /etc/yum.repos.d/ceph.repo created as /etc/yum.repos.d/ceph.repo.rpmnew
    [node3][DEBUG ] Preparing...                          ########################################
    [node3][DEBUG ] Updating / installing...
    [node3][DEBUG ] ceph-release-1-1.el7                  ########################################
    [node3][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
    [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-noarch'

    The install fails here because a newer ceph-release package was pulled in.
    Fix: run 'yum remove ceph-release' on every node, then re-run the previous command.
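The cleanup has to happen on all four nodes; a loop over ssh covers them. Shown as a dry run that prints each command (remove the `echo` quoting to execute for real):

```shell
# Remove the conflicting ceph-release package on every node, then re-run
# 'ceph-deploy install'. Dry run: the commands are printed, not executed.
cmds=$(
    for node in admin-node node1 node2 node3; do
        echo "ssh $node yum remove -y ceph-release"
    done
)
echo "$cmds"
```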

        

  • Create the initial monitor(s) and gather all the keys

  • Run only on the admin-node


    [root@admin-node ceph]# ceph-deploy mon create-initial
    [root@admin-node ceph]# ls
    ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log
    ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring
    ceph.bootstrap-osd.keyring  ceph.conf                   rbdmap
    [root@admin-node ceph]# ceph -s    (check the cluster status)
        cluster 8d395c8f-6ac5-4bca-bbb9-2e0120159ed9
         health HEALTH_ERR
                no osds
         monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
                election epoch 3, quorum 0 node1
         osdmap e1: 0 osds: 0 up, 0 in
                flags sortbitwise,require_jewel_osds
          pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
                0 kB used, 0 kB / 0 kB avail
                      64 creating


  • Create the OSDs


    [root@node2 ~]# lsblk    (node2 and node3 act as OSD nodes)
    NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    fd0           2:0    1    4K  0 disk 
    sda           8:0    0   20G  0 disk 
    ├─sda1        8:1    0    1G  0 part /boot
    └─sda2        8:2    0   19G  0 part 
      ├─cl-root 253:0    0   17G  0 lvm  /
      └─cl-swap 253:1    0    2G  0 lvm  [SWAP]
    sdb           8:16   0   50G  0 disk /var/local/osd0
    sdc           8:32   0    5G  0 disk 
    sr0          11:0    1  4.1G  0 rom  
    [root@node2 ~]# mkfs.xfs /dev/sdb
    [root@node2 ~]# mkdir /var/local/osd0
    [root@node2 ~]# mount /dev/sdb /var/local/osd0
    [root@node2 ~]# chown ceph:ceph /var/local/osd0
    [root@node3 ~]# mkfs.xfs /dev/sdb
    [root@node3 ~]# mkdir /var/local/osd1
    [root@node3 ~]# mount /dev/sdb /var/local/osd1/
    [root@node3 ~]# chown ceph:ceph /var/local/osd1
    [root@admin-node ceph]# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1    (run on the admin-node)
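The per-node disk preparation above follows one pattern (format /dev/sdb, mount it on /var/local/osdN, hand it to the ceph user), so it can be expressed as a loop run from the admin node. A dry-run sketch that only prints the plan; each line would be executed over ssh, and it assumes /dev/sdb is the OSD disk on both nodes:

```shell
# Disk prep for the OSD nodes: node2 gets osd0, node3 gets osd1.
# Dry run: the commands are collected and printed, not executed.
i=0
plan=""
for node in node2 node3; do
    dir="/var/local/osd$i"
    plan="$plan
ssh $node mkfs.xfs /dev/sdb
ssh $node mkdir -p $dir
ssh $node mount /dev/sdb $dir
ssh $node chown ceph:ceph $dir"
    i=$((i+1))
done
echo "$plan"
```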


  • Copy the keys and configuration files from admin-node to each node

  • Run only on the admin-node


    [root@admin-node ceph]# ceph-deploy admin admin-node node1 node2 node3


  • Make sure ceph.client.admin.keyring has the right permissions

  • Run only on the OSD nodes


    [root@node2 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring


  • Run ceph-deploy from the admin node to prepare the OSDs


    [root@admin-node ceph]# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1


  • Activate the OSDs


    [root@admin-node ceph]# ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1


  • Check the cluster health


    [root@admin-node ceph]# ceph health
    HEALTH_OK
    [root@admin-node ceph]# ceph -s
        cluster 69f64f6d-f084-4b5e-8ba8-7ba3cec9d927
         health HEALTH_OK
         monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
                election epoch 3, quorum 0 node1
         osdmap e14: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
                15459 MB used, 45950 MB / 61410 MB avail
                      64 active+clean
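Right after activation the placement groups may still be peering, so a script should wait for `ceph health` to report HEALTH_OK rather than check once. A small helper sketch (not part of the original walkthrough; it assumes the `ceph` CLI is on PATH):

```shell
# Poll until the cluster reports HEALTH_OK; give up after 30 tries (~150s).
wait_for_health() {
    tries=0
    until ceph health 2>/dev/null | grep -q HEALTH_OK; do
        tries=$((tries+1))
        [ "$tries" -ge 30 ] && return 1
        sleep 5
    done
    echo "cluster healthy"
}
```

Call `wait_for_health` after `ceph-deploy osd activate` before putting any data on the cluster.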




This article is from the blog "若不奋斗,何以称王"; please keep this attribution: http://wangzc.blog.51cto.com/12875919/1966109
