Tags: ceph
Ceph Quick Installation
Architecture
ceph-deploy / ceph-admin: 192.168.1.214
ceph nodes: 192.168.1.215/216/217
mon: 215
osd: 216/217
一、Operations on 192.168.1.214
ceph-deploy: 192.168.1.214
Install the RPM repositories
yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
vim /etc/yum.repos.d/ceph.repo
{ceph-release} is the name of the stable release and {distro} is the distribution name;
for example, baseurl=http://download.ceph.com/rpm-jewel/el7/noarch for the jewel release on el7.
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
Install ceph-deploy
yum install ceph-deploy -y
useradd ceph-admin
ssh-keygen
Copy the generated public key to every node:
ssh-copy-id cephcluster@192.168.1.215/216/217
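Spelled out, the two steps above look roughly like this (a sketch; it assumes the cephcluster user already exists on the nodes, which is done in the ceph-node section below, and that everything runs as ceph-admin on 192.168.1.214):
su - ceph-admin
ssh-keygen                              # accept the defaults, empty passphrase
ssh-copy-id cephcluster@192.168.1.215   # repeat once per node
ssh-copy-id cephcluster@192.168.1.216
ssh-copy-id cephcluster@192.168.1.217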
To simplify ssh and scp so you do not have to pass the username to ceph-deploy every time:
vi /home/ceph-admin/.ssh/config
Host host215
    Hostname 192.168.1.215
    User cephcluster
Host host216
    Hostname 192.168.1.216
    User cephcluster
Host host217
    Hostname 192.168.1.217
    User cephcluster
chmod 600 /home/ceph-admin/.ssh/config
ceph-node (run on every node: 192.168.1.215/216/217)
Because ceph-deploy does not support entering a password interactively, create a user with sudo privileges on every node:
useradd cephcluster
pssh -h testhost -i "echo cephcluster | sudo passwd --stdin cephcluster"
sed -i '/wheel:x/s/$/,cephcluster/' /etc/group
mkdir -p /home/cephcluster/.ssh
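The post relies on the wheel group for sudo. If wheel does not allow passwordless sudo on your systems, ceph-deploy will still hang on a password prompt; a common fix is a drop-in sudoers file (the file name /etc/sudoers.d/cephcluster is my choice, not from the original post):
echo "cephcluster ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephcluster
sudo chmod 0440 /etc/sudoers.d/cephcluster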
Cluster commands
Purge the configuration:
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
The following command also removes the Ceph packages:
ceph-deploy purge {ceph-node} [{ceph-node}]
1、Create the cluster
ceph-deploy new host215
2、Configuration file settings
vi ceph.conf
[global]
fsid = 9f08666b-6725-4593-90c0-b361ca17e924
mon_initial_members = host215
mon_host = 192.168.1.215
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# set the default number of replicas per pool to 2 (the default is 3)
osd pool default size = 2
# public network
public network = 192.168.1.0/24
# cluster network: OSDs route heartbeat, object replication and recovery traffic over
# the cluster network, which improves performance compared to a single network
cluster network = 192.168.6.0/22
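If you edit ceph.conf again later (for example to adjust the networks), you do not have to copy it to every node by hand; a sketch using the host names of this cluster:
ceph-deploy --overwrite-conf config push host214 host215 host216 host217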
3、Install Ceph on all hosts
ceph-deploy install host214 host215 host216 host217
4、Deploy the mon node
ceph-deploy mon create-initial
If the following error appears, it is because the systemd version is too old and does not support enabling a service@instance unit. Upgrade systemd on the mon node,
and upgrade it on all the other nodes as well while you are at it, otherwise activating the OSDs later will fail the same way:
yum install systemd -y
[host215][INFO ] Running command: sudo systemctl enable ceph.target
[host215][INFO ] Running command: sudo systemctl enable ceph-mon@host215
[host215][WARNIN] Failed to issue method call: No such file or directory
[host215][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.mon][ERROR ] Failed to execute command: systemctl enable ceph-mon@host215
[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors
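A quick way to check and upgrade systemd on every node at once, reusing the pssh hosts file (testhost) from the user-creation step; the exact minimum version depends on your distribution, so this only shows what is installed:
pssh -h testhost -i "systemctl --version | head -1"   # show the systemd version on each node
pssh -h testhost -i "sudo yum install -y systemd"     # upgrade it everywhere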
5、Disk management (OSD)
List the disks:
ceph-deploy disk list host216 host217
Zap the disks (this is a raw-disk setup; be absolutely sure of the disk name, as this destroys all data on it):
ceph-deploy disk zap host217:sdb
ceph-deploy disk zap host216:sdb
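As an extra safeguard (not in the original post), you can list the block devices on the nodes first to make sure sdb really is the empty data disk; this uses the ssh host aliases configured earlier:
ssh host216 lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
ssh host217 lsblk -o NAME,SIZE,TYPE,MOUNTPOINT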
Prepare the OSDs:
Generic form of the command:
ceph-deploy osd prepare {node-name}:{data-disk-partition}[:{journal-disk-partition}]
data-disk-partition: where the OSD data is stored
journal-disk-partition: where the journal is stored
Optimization: the journal can be put on a separate drive, ideally a separate SSD. Keeping the journal on the OSD data disk works, but it costs performance.
ceph-deploy osd prepare host216:sdb:/dev/ssd
ceph-deploy osd prepare host217:sdb:/dev/ssd
Activate the OSDs:
ceph-deploy osd activate host216:sdb1:/dev/ssd
ceph-deploy osd activate host217:sdb1:/dev/ssd
Create OSDs:
The create command prepares an OSD, deploys it to the OSD node and activates it in a single step; it is simply prepare followed by activate run in sequence.
ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
ceph-deploy osd create host216:sdb:/dev/ssd
ceph-deploy osd create host217:sdb:/dev/ssd
Use ceph-deploy to copy the configuration file and the admin keyring to the admin node and the Ceph nodes, so that you no longer need to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph CLI command.
ceph-deploy admin host214 host215 host216 host217
Make sure ceph.client.admin.keyring has the right permissions:
chmod +r /etc/ceph/ceph.client.admin.keyring
Check the cluster health
ceph health
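A few extra status commands help confirm everything settled; with osd pool default size = 2 and two OSDs up, the cluster should eventually report HEALTH_OK with all PGs active+clean (illustrative expectation, not output from the original post):
ceph health      # should end up as HEALTH_OK
ceph -s          # fuller status: mon quorum, OSD count, PG states
ceph osd tree    # confirm the OSDs on host216/host217 are up and in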
Expanding the cluster (add 218 as an OSD node; add mons on 216/217)
Add 218 to the cluster and create its OSD
Installing Ceph on it follows the same steps as above:
ceph-deploy install host218
ceph-deploy osd prepare host218:sdb:/dev/ssd
ceph-deploy osd activate host218:sdb1:/dev/ssd
Add mons on 216/217
ceph-deploy mon add host216
ceph-deploy mon add host217
Check the monitor quorum status
ceph quorum_status --format json-pretty
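For a shorter summary of the same information (standard Ceph CLI; with the two added monitors you should see host215, host216 and host217 in the quorum):
ceph mon stat    # one-line summary: monmap epoch, monitor list, current quorum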
This post comes from the "银狐" blog; please keep the attribution: http://foxhound.blog.51cto.com/1167932/1793335