
Deploying a Ceph Cluster on CentOS 7 (the Correct Way)

Posted: 2016-12-08

Tags: ceph, distributed shared storage, storage

Environment

Hostname      OS                              IP address      Ceph version
ceph-node1    CentOS Linux release 7.2.1511   192.168.1.120   jewel
ceph-node2    CentOS Linux release 7.2.1511   192.168.1.121   jewel
ceph-node3    CentOS Linux release 7.2.1511   192.168.1.128   jewel

Preparation

Steps 1 through 7 must be performed on all three Ceph nodes.

Step 8 only needs to be run on ceph-node1.

1: Set the hostname

On CentOS 7 the hostname is set with hostnamectl, which changes it both immediately and persistently (the HOSTNAME= entry in /etc/sysconfig/network is a CentOS 6 mechanism and no longer takes effect):

[root@ceph-node1 ~]# hostnamectl set-hostname ceph-node1

Repeat on the other two nodes with their respective names.

2: Configure the IP address, netmask, and gateway

[root@ceph-node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eno16777736

TYPE="Ethernet"

BOOTPROTO="none"

DEFROUTE="yes"

IPV4_FAILURE_FATAL="no"

IPV6INIT="no"

IPV6_AUTOCONF="yes"

IPV6_DEFROUTE="yes"

IPV6_PEERDNS="yes"

IPV6_PEERROUTES="yes"

IPV6_FAILURE_FATAL="no"

NAME="eth1"

DEVICE="eno16777736"

ONBOOT="yes"

IPADDR="192.168.1.120"

PREFIX="24"

GATEWAY="192.168.1.1"

DNS1="192.168.0.220"

3: Configure the hosts file (the format is IP address first, then hostname)

[root@ceph-node1 ~]# vim /etc/hosts

192.168.1.120 ceph-node1

192.168.1.121 ceph-node2

192.168.1.128 ceph-node3
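An optional sanity check confirms that each node can resolve the others by name:

[root@ceph-node1 ~]# ping -c 1 ceph-node2

[root@ceph-node1 ~]# ping -c 1 ceph-node3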

4: Configure the firewall (port 6789/tcp is used by the monitors; 6800-7100/tcp covers the OSD daemons)

[root@ceph-node1 ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent

success

[root@ceph-node1 ~]# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent

success

[root@ceph-node1 ~]# firewall-cmd --reload

success

[root@ceph-node1 ~]# firewall-cmd --zone=public --list-all

public (default, active)

  interfaces: eno16777736

  sources:

  services: dhcpv6-client ssh

  ports: 6789/tcp 6800-7100/tcp

  masquerade: no

  forward-ports:

  icmp-blocks:

  rich rules:

5: Put SELinux into permissive mode (setenforce 0 takes effect immediately; the sed edit makes it persist across reboots)

[root@ceph-node1 ~]# setenforce 0

[root@ceph-node1 ~]# sed -i "s/enforcing/permissive/g" /etc/selinux/config

6: Configure time synchronization

[root@ceph-node1 ~]# yum -y install ntp ntpdate

[root@ceph-node1 ~]# systemctl restart ntpd.service

[root@ceph-node1 ~]# systemctl enable ntpd.service

Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.

[root@ceph-node1 ~]# systemctl enable ntpdate.service

Created symlink from /etc/systemd/system/multi-user.target.wants/ntpdate.service to /usr/lib/systemd/system/ntpdate.service.
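To confirm the node is actually syncing, ntpq can list the time sources it is talking to (an asterisk marks the currently selected peer):

[root@ceph-node1 ~]# ntpq -p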

7: Add the Ceph Jewel repository and update yum

[root@ceph-node1 ~]# rpm -Uvh https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm

[root@ceph-node1 ~]# yum -y update

8: Configure passwordless SSH login from ceph-node1

[root@ceph-node1 ~]# ssh-keygen

[root@ceph-node1 ~]# ssh-copy-id root@ceph-node2

[root@ceph-node1 ~]# ssh-copy-id root@ceph-node3
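Before handing the nodes over to ceph-deploy, passwordless login can be verified; each command should print the remote hostname without asking for a password:

[root@ceph-node1 ~]# ssh root@ceph-node2 hostname

[root@ceph-node1 ~]# ssh root@ceph-node3 hostname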

Creating the cluster on ceph-node1

1: Install ceph-deploy

[root@ceph-node1 ~]# yum -y install ceph-deploy

2: Create a Ceph cluster with ceph-deploy

[root@ceph-node1 ~]# mkdir /etc/ceph

[root@ceph-node1 ~]# cd /etc/ceph/

[root@ceph-node1 ceph]# ceph-deploy new ceph-node1

[root@ceph-node1 ceph]# ls

ceph.conf  ceph.log  ceph.mon.keyring
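The generated ceph.conf will look roughly like the sketch below (the fsid is a randomly generated UUID, so yours will differ; here it matches the cluster ID seen in the final ceph -s output):

[global]
fsid = 266bfddf-7f45-416d-95df-4e6487e8eb20
mon_initial_members = ceph-node1
mon_host = 192.168.1.120
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx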

3: Install the Ceph binaries on all nodes with ceph-deploy

[root@ceph-node1 ceph]# ceph-deploy install ceph-node1 ceph-node2 ceph-node3

[root@ceph-node1 ceph]# ceph -v

ceph version 9.2.1 (752b6a3020c3de74e07d2a8b4c5e48dab5a6b6fd)

Note that 9.2.1 is an Infernalis release, not Jewel; a Jewel install reports 10.2.x. If ceph -v shows an older version here, the repository added in step 7 was not picked up, and /etc/yum.repos.d/ceph.repo should be checked before continuing.

4: Create the first Ceph monitor on ceph-node1

[root@ceph-node1 ceph]# ceph-deploy mon create-initial
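Besides starting the monitor, create-initial also gathers the cluster keyrings into the working directory once the monitor reaches quorum; a listing afterwards should show files along the lines of ceph.client.admin.keyring and the ceph.bootstrap-*.keyring files:

[root@ceph-node1 ceph]# ls /etc/ceph/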

5: Create OSDs on ceph-node1 (each node carries three spare disks, sdb, sdc, and sdd; disk zap wipes them completely, so make sure they hold nothing you need)

[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1

[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd

[root@ceph-node1 ~]# ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd

[root@ceph-node1 ~]# ceph -s

At this stage the cluster will not yet show HEALTH_OK: the default pool size is 3 and all OSDs sit on a single host, so placement groups remain degraded until the remaining nodes are added in the next section.

Expanding the Ceph cluster

1: Add the public network address to the configuration file

[root@ceph-node1 ~]# cd /etc/ceph/

[root@ceph-node1 ceph]# vim /etc/ceph/ceph.conf
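The entry this step adds is a public network definition under the [global] section; the subnet below follows from the addresses in the environment table, so adjust it if your network differs:

[global]
public network = 192.168.1.0/24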

2: Create a monitor on ceph-node2 and ceph-node3

[root@ceph-node1 ceph]# ceph-deploy mon create ceph-node2

[root@ceph-node1 ceph]# ceph-deploy mon create ceph-node3

3: Add OSDs on ceph-node2 and ceph-node3

[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node2 ceph-node3

[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd

[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd

[root@ceph-node1 ceph]# ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd

[root@ceph-node1 ceph]# ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
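Before tuning the placement groups it is worth confirming that all nine OSDs have registered and are up; ceph osd tree lists them grouped by host:

[root@ceph-node1 ceph]# ceph osd tree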

4: Adjust the number of PGs and PGPs

[root@ceph-node1 ceph]# ceph osd pool set rbd pg_num 256

[root@ceph-node1 ceph]# ceph osd pool set rbd pgp_num 256
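The value 256 comes from the common rule of thumb of roughly 100 placement groups per OSD divided by the replica count: (9 OSDs × 100) / 3 replicas = 300, and rounding to the nearest power of two gives 256. pgp_num should be kept equal to pg_num, since data is only rebalanced onto the new placement groups once pgp_num is raised as well.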

5: Check the cluster status; it should now report HEALTH_OK

[root@ceph-node1 ceph]# ceph -s

    cluster 266bfddf-7f45-416d-95df-4e6487e8eb20

     health HEALTH_OK

     monmap e3: 3 mons at {ceph-node1=192.168.1.120:6789/0,ceph-node2=192.168.1.121:6789/0,ceph-node3=192.168.1.128:6789/0}

            election epoch 8, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3

     osdmap e53: 9 osds: 9 up, 9 in

            flags sortbitwise,require_jewel_osds

      pgmap v158: 256 pgs, 1 pools, 0 bytes data, 0 objects

            320 MB used, 134 GB / 134 GB avail

                 256 active+clean



This article comes from the “庭中有奇树” blog; please retain the source link: http://zhangdl.blog.51cto.com/11050780/1880340
