ceph-admin and ceph-mon run on the same server; ceph-osd1 is one server and ceph-osd2 is another server.
# systemctl stop firewalld.service
# systemctl disable firewalld.service
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# setenforce 0
Reboot the server.
# yum install wget vim curl -y
# yum clean all
# mkdir /etc/yum.repos.d/repo
# cd /etc/yum.repos.d/
# mv *.repo repo/
Download the Aliyun Base repo:
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Download the Aliyun EPEL repo:
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
# sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
Add the Ceph repo:
# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
Rebuild the yum metadata cache:
# yum makecache
Synchronize the time across all nodes.
# yum install ntp ntpdate
The configuration is straightforward and is skipped here...
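For reference, a minimal time-sync setup could look like the following sketch (the NTP server name here is an assumption; use any reachable time source):
# ntpdate -u ntp.aliyun.com
# systemctl enable ntpd
# systemctl start ntpd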
# cat /etc/hosts
192.168.203.100 ceph-admin
192.168.203.150 ceph-osd1
192.168.203.200 ceph-osd2
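Each node's hostname should match its entry in /etc/hosts. If it does not, it can be set on CentOS 7 with, for example:
# hostnamectl set-hostname ceph-admin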
# ssh-keygen -t rsa
Press Enter at every prompt until it completes.
Copy the public key to each of the other servers.
# ssh-copy-id ceph-admin
# ssh-copy-id ceph-osd1
# ssh-copy-id ceph-osd2
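A quick way to confirm passwordless login works from ceph-admin (a sketch; the hostnames are the ones defined in /etc/hosts above):
# ssh ceph-osd1 hostname
# ssh ceph-osd2 hostname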
# mkdir ceph-cluster
# cd ceph-cluster
# yum install ceph ceph-deploy
Note: if you run into problems during installation and need to start over, run the following commands to clear the configuration (a fresh install does not need this).
The following command uninstalls the installed packages:
# ceph-deploy purge ceph-admin ceph-osd1 ceph-osd2
The following command wipes the data:
# ceph-deploy purgedata ceph-admin ceph-osd1 ceph-osd2
The following command removes the keys:
# ceph-deploy forgetkeys
# ceph-deploy install ceph-admin ceph-osd1 ceph-osd2
# ceph-deploy new ceph-admin
After this command runs, a ceph.conf file is generated in the current directory. Open it and add the following line (indicating there are two OSDs):
osd pool default size = 2
# ceph-deploy --overwrite-conf mon create ceph-admin
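For reference, after the edit ceph.conf might look roughly like this (the fsid and mon_host values are illustrative; take them from your own `ceph-deploy new` output):
[global]
fsid = 7fe7736b-3ea6-4c8a-b3bd-81f9355a51c6
mon_initial_members = ceph-admin
mon_host = 192.168.203.100
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2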
Note: if you have several monitor nodes, check carefully that the information displayed is correct.
# ceph-deploy mon create-initial
# ceph daemon mon.`hostname` mon_status
{
    "name": "adm",
    "rank": 0,
    "state": "leader",
    "election_epoch": 3,
    "quorum": [
        0
    ],
    "outside_quorum": [],
    "extra_probe_peers": [],
    "sync_provider": [],
    "monmap": {
        "epoch": 1,
        "fsid": "7fe7736b-3ea6-4c8a-b3bd-81f9355a51c6",
        "modified": "2017-08-27 15:25:30.486560",
        "created": "2017-08-27 15:25:30.486560",
        "mons": [
            {
                "rank": 0,
                "name": "adm",
                "addr": "192.168.203.153:6789\/0"
            }
        ]
    }
}
Allocate disk space for the OSD storage nodes (create the directory on osd1 and osd2 and set its ownership):
# mkdir /data
# chown ceph.ceph -R /data
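Since /data is needed on both OSD nodes, one way to create it from ceph-admin is the following sketch (it assumes the passwordless SSH set up earlier):
# ssh ceph-osd1 "mkdir -p /data && chown -R ceph:ceph /data"
# ssh ceph-osd2 "mkdir -p /data && chown -R ceph:ceph /data"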
Use ceph-deploy on the ceph-admin node to prepare and activate the OSDs.
# ceph-deploy gatherkeys ceph-admin ceph-osd1 ceph-osd2
# ceph-deploy --overwrite-conf osd prepare ceph-osd1:/data ceph-osd2:/data
# ceph-deploy osd activate ceph-osd1:/data ceph-osd2:/data
Sync the configuration file and keyring from the ceph-admin node to the other nodes.
# ceph-deploy admin ceph-admin ceph-osd1 ceph-osd2
# chmod +r /etc/ceph/ceph.client.admin.keyring
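The keyring permission change is needed on every node that will run ceph commands; a sketch using the same SSH access:
# ssh ceph-osd1 chmod +r /etc/ceph/ceph.client.admin.keyring
# ssh ceph-osd2 chmod +r /etc/ceph/ceph.client.admin.keyring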
If none of the steps above reported errors, the Ceph installation is essentially complete.
# ceph -s
    cluster 7fe7736b-3ea6-4c8a-b3bd-81f9355a51c6
     health HEALTH_OK
     monmap e1: 1 mons at {adm=192.168.203.153:6789/0}
            election epoch 3, quorum 0 adm
     osdmap e27: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v4466: 120 pgs, 8 pools, 105 MB data, 173 objects
            13743 MB used, 22012 MB / 35756 MB avail
                 120 active+clean
# ceph health
HEALTH_OK
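A few other standard status commands are handy at this point:
# ceph osd stat
# ceph mon stat
# ceph df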
mon-1 is the hostname of the node where the monitor runs.
# systemctl start ceph-mon@mon-1.service
# systemctl restart ceph-mon@mon-1.service
# systemctl stop ceph-mon@mon-1.service
0 is the ID of the OSD on that node, which can be looked up with `ceph osd tree`.
# systemctl start/stop/restart ceph-osd@0.service
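For example, in this deployment the commands would be something like the following (assuming the monitor id matches the hostname it was created with, and taking an OSD id from `ceph osd tree`):
# systemctl restart ceph-mon@ceph-admin.service
# systemctl restart ceph-osd@3.service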
View OSD information
# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03400 root default
-2 0.01700     host ceph-osd1
 4 0.01700         osd.4           up  1.00000          1.00000
-3 0.01700     host ceph-osd2
 3 0.01700         osd.3           up  1.00000          1.00000
 1       0 osd.1                 down        0          1.00000
 2       0 osd.2                 down        0          1.00000
Mark the OSDs that are down as out
# ceph osd out osd.1
osd.1 is already out.
# ceph osd out osd.2
osd.2 is already out.
Remove the OSDs from the cluster
# ceph osd rm osd.2
removed osd.2
# ceph osd rm osd.1
removed osd.1
# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03400 root default
-2 0.01700     host ceph-osd1
 4 0.01700         osd.4           up  1.00000          1.00000
-3 0.01700     host ceph-osd2
 3 0.01700         osd.3           up  1.00000          1.00000
Remove from CRUSH (an introduction to CRUSH: http://www.cnblogs.com/chenxianpao/p/5568207.html)
# ceph osd crush rm osd.3
Delete the authentication information for osd.3
# ceph auth del osd.3
To use Ceph object storage, an RGW gateway must be deployed. Run the following to create a new RGW instance (ceph-admin is used as the example below):
# ceph-deploy rgw create ceph-admin
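In Jewel the RGW instance listens on port 7480 by default (Civetweb); a quick sanity check could be:
# curl http://ceph-admin:7480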
Write data and read it back
Create an ordinary file and write some data into it, then create a pool. The format is: rados mkpool pool-name
# rados mkpool data
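The object written below can come from any ordinary file; for example (the path /tmp/aaa matches the command that follows, and the contents are arbitrary):
# echo "hello ceph" > /tmp/aaa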
Write the file into the pool. Format: rados put object-name filename --pool=pool-name
# rados put test-object-0 /tmp/aaa --pool=data
Check that the object is in the pool. Format: rados -p pool-name ls
# rados -p data ls
Locate the object. Format: ceph osd map pool-name object-name
# ceph osd map data test-object-2
osdmap e27 pool 'data' (7) object 'test-object-2' -> pg 7.cbbef8c8 (7.0) -> up ([1,0], p1) acting ([1,0], p1)
Read the object back from the pool. Format: rados get object-name --pool=pool-name filename (filename is the file to save it to)
# rados get test-object-0 --pool=data /tmp/myfile
Delete the object from the pool. Format: rados rm object-name --pool=pool-name
# rados rm test-object-0 --pool=data
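If the test pool is no longer needed, it can be deleted as well (the repeated pool name and the flag are a deliberate safety check in the rados CLI):
# rados rmpool data data --yes-i-really-really-mean-it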
Install on the ceph-admin, ceph-osd1, and ceph-osd2 nodes:
# yum localinstall salt-2015.8.1-1.el7.noarch.rpm
# rpm -ivh salt-minion-2015.8.1-1.el7.noarch.rpm
Install salt-master on ceph-admin:
# rpm -ivh salt-master-2015.8.1-1.el7.noarch.rpm
# yum localinstall calamari-server-1.3.3-jewel.el7.centos.x86_64.rpm
# yum install mod_wsgi -y
Initialize calamari
# calamari-ctl initialize
You will be prompted for an account name, email address, and password.
To change the calamari password:
Format: # calamari-ctl change_password --password {password} {user-name}
# calamari-ctl change_password --password 1234567 root
# rpm -ivh diamond-3.4.68-jewel.noarch.rpm
# mv /etc/diamond/diamond.conf.example /etc/diamond/diamond.conf
The data refresh interval can be adjusted; the following two files control it. Edit /etc/graphite/storage-schemas.conf (the default is 60s):
[calamari]
pattern = .*
retentions = 60s:1d,15m:7d
For example, retentions = 60s:1d,15m:7d can be changed to retentions = 30s:1d,15m:7d
Edit /etc/diamond/diamond.conf
By default the line is commented out as #interval = 300; change it to interval = 120
If you do this before initialization, you can edit the template instead; note that initialization overwrites the config with the template file /opt/calamari/salt/salt/base/diamond.conf
Edit the diamond configuration file /etc/diamond/diamond.conf
# Graphite server host
host = adm
This host must be set to the hostname of the calamari management platform server. Diamond collects cluster and hardware data and sends it to the carbon process on the management platform machine, which stores it in the whisper database, so every machine whose data should be collected needs this change. After changing it, restart diamond:
# /etc/init.d/diamond restart
Edit the salt-minion configuration file /etc/salt/minion
master: adm
Run the following on every node; the trailing arguments are the node hostnames.
# ceph-deploy calamari connect ceph-admin ceph-osd1 ceph-osd2
# cat /etc/salt/minion.d/calamari.conf
master: ceph-admin
Restart the service
# systemctl restart salt-minion.service
Perform key acceptance on the salt-master (i.e. the server where calamari-server is installed). List the pending key requests:
# salt-key -L
Accept the key requests
# salt-key -A
Check that the keys were accepted and run a quick test
# salt-key -L
# salt '*' test.ping
# salt '*' ceph.get_heartbeats
Set permissions on the calamari-server log files:
# cd /var/log/calamari
# chmod 777 -R *
# service supervisord restart
romana is the cluster's web management interface; install it on the calamari-server node.
# rpm -ivh romana-1.2.2-36_gc62bb5b.el7.centos.x86_64.rpm
To access the web management platform, open this machine's IP address in a browser; the default port is 80.
Judging from the deployment process, the file-write tests, the monitoring UI, and the overall experience, this stack can be abandoned; it is simply awful.
This article is from the "bamboo" blog; please keep this attribution: http://wuyebamboo.blog.51cto.com/3344855/1963793