Tags: SUSE
SUSE Enterprise Storage is a highly scalable and resilient software-defined storage solution powered by Ceph technology. It lets organizations build cost-effective, highly scalable storage on industry-standard, off-the-shelf servers and disk drives. The latest release is SUSE Enterprise Storage 5; see the SUSE Enterprise Storage feature overview for details.
1.1 Host information
1.2 Host configuration
1.2.1 Passwordless SSH setup
Configure passwordless login for the root user on every node: create an RSA key pair with ssh-keygen, then copy the public key to each node with ssh-copy-id.
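These two steps can be sketched as a short script; the host names ceph01..ceph04 match this article's cluster, and the echo prefixes make it a dry run that only prints the commands:

```shell
# Dry-run sketch of the passwordless-SSH setup.
# Remove the "echo" prefixes to actually execute the commands.
echo 'ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa'   # generate the key pair once on ceph01
for i in 1 2 3 4; do
  echo "ssh-copy-id root@ceph0$i"                 # push the public key to every node
done
```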
1.2.2 Software repository setup
On a host running the vsftpd service, mount the SUSE Linux system image and the SUSE Enterprise Storage 5 image (a free download from SUSE) under the FTP export directories.
Then create the following files in the /etc/zypp/repos.d directory on every node:
[root@ceph01 ~]#cd /etc/zypp/repos.d/
[root@ceph01 repos.d]#ls
SES5.repo SLES12-SP3-12.3-0_1.repo
[root@ceph01 repos.d]#cat SES5.repo
[SES5]
name=SES5
enabled=1
autorefresh=1
baseurl=ftp://hdp01/pub/ses
path=/
type=yast2
keeppackages=0
[root@ceph01 repos.d]#cat SLES12-SP3-12.3-0_1.repo
[SLES12-SP3-12.3-0_1]
name=SLES12-SP3-12.3-0
enabled=1
autorefresh=1
baseurl=ftp://hdp01/pub/sls
path=/
type=yast2
keeppackages=0
--Run the following commands to refresh the repositories:
[root@ceph01 repos.d]#zypper ref
[root@ceph01 repos.d]#zypper lr
Repository priorities are without effect. All enabled repositories share the same priority.
# | Alias | Name | Enabled | GPG Check | Refresh
--+---------------------+-------------------+---------+-----------+--------
1 | SES5 | SES5 | Yes | (r ) Yes | Yes
2 | SLES12-SP3-12.3-0_1 | SLES12-SP3-12.3-0 | Yes | (r ) Yes | Yes
Sync these repository configuration files to the other nodes:
[root@ceph01 repos.d]#for i in 2 3 4;do scp *.repo ceph0$i:/etc/zypp/repos.d;done
[root@ceph01 repos.d]#for i in 2 3 4;do ssh ceph0$i "zypper ref";done
1.2.3 Package dependencies
The dependency in question is python-simplejson, which python-boto-2.42.0-11.3.1 requires at version >= 3.6.5. It is not shipped on the OS or SES media and must be downloaded from the openSUSE community; without it, the first stage of the SES installation reports an error and the whole deployment fails.
[root@ceph01 ~]#rpm -Uvh python-simplejson-3.13.2-2.6.x86_64.rpm
warning: python-simplejson-3.13.2-2.6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 226c7528: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:python-simplejson-3.13.2-2.6 ################################# [100%]
[root@ceph01 ~]#scp python-simplejson-3.13.2-2.6.x86_64.rpm ceph02:/root/
[root@ceph01 ~]#scp python-simplejson-3.13.2-2.6.x86_64.rpm ceph03:/root/
[root@ceph01 ~]#scp python-simplejson-3.13.2-2.6.x86_64.rpm ceph04:/root/
[root@ceph02 ~]#rpm -Uvh python-simplejson-3.13.2-2.6.x86_64.rpm
[root@ceph03 ~]#rpm -Uvh python-simplejson-3.13.2-2.6.x86_64.rpm
[root@ceph04 ~]#rpm -Uvh python-simplejson-3.13.2-2.6.x86_64.rpm
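The node-by-node copy and install above can be collapsed into one loop run from ceph01; this is a dry-run sketch that only prints the commands:

```shell
# Dry-run sketch: distribute and install the RPM on the remaining nodes.
# Remove the "echo" prefixes to actually execute the commands.
PKG=python-simplejson-3.13.2-2.6.x86_64.rpm
for i in 2 3 4; do
  echo "scp $PKG ceph0$i:/root/"
  echo "ssh ceph0$i rpm -Uvh /root/$PKG"
done
```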
1.2.4 NTP service configuration
[root@ceph01 ~]# echo "server ntp1.aliyun.com" >>/etc/ntp.conf
[root@ceph01 ~]# echo "server ntp2.aliyun.com" >>/etc/ntp.conf
[root@ceph01 ~]# echo "server ntp3.aliyun.com" >>/etc/ntp.conf
[root@ceph01 ~]# systemctl enable ntpd
[root@ceph01 ~]# systemctl start ntpd
[root@ceph01 ~]# ntpdate -u ntp1.aliyun.com
[root@ceph01 ~]# for i in 2 3 4;do scp /etc/ntp.conf ceph0$i:/etc/;done
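Note that copying ntp.conf alone does not enable the service on the other nodes; a sketch of the remaining step (again a dry run, remove echo to execute):

```shell
# Dry-run sketch: enable and start ntpd on the remaining nodes after
# /etc/ntp.conf has been copied. Remove "echo" to actually execute.
for i in 2 3 4; do
  echo "ssh ceph0$i 'systemctl enable ntpd && systemctl start ntpd'"
done
```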
1.2.5 Disable the firewall
[root@ceph01 ~]#systemctl disable SuSEfirewall2
[root@ceph02 ~]#systemctl disable SuSEfirewall2
[root@ceph03 ~]#systemctl disable SuSEfirewall2
[root@ceph04 ~]#systemctl disable SuSEfirewall2
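The same four commands can be issued from ceph01 in one loop; this dry-run sketch also adds systemctl stop, since disable alone does not stop an already-running firewall:

```shell
# Dry-run sketch: disable (and stop) SuSEfirewall2 on every node from ceph01.
# Remove the "echo" prefixes to actually execute the commands.
echo "systemctl disable SuSEfirewall2 && systemctl stop SuSEfirewall2"   # local node: ceph01
for i in 2 3 4; do
  echo "ssh ceph0$i 'systemctl disable SuSEfirewall2 && systemctl stop SuSEfirewall2'"
done
```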
2.1 Salt service configuration
The following steps configure the master and minion nodes.
2.1.1 Master node configuration
Here ceph01 acts as the Salt master; all nodes, the master included, run as minions.
[root@ceph01 ~]#zypper in salt-master salt-minion
[root@ceph01 ~]#systemctl enable salt-master.service
[root@ceph01 ~]#systemctl start salt-master.service
[root@ceph01 ~]#systemctl status salt-master.service
[root@ceph01 ~]#netstat -antpl | grep -E '4505|4506'
[root@ceph01 ~]#vi /etc/salt/minion
master: ceph01.xzxj.edu.cn
[root@ceph01 ~]#systemctl enable salt-minion.service
[root@ceph01 ~]#systemctl start salt-minion.service
2.1.2 Minion node configuration
[root@ceph0[2..4] ~]#zypper in salt-minion
[root@ceph0[2..4] ~]#scp ceph01:/etc/salt/minion /etc/salt
[root@ceph0[2..4] ~]#systemctl enable salt-minion
[root@ceph0[2..4] ~]#systemctl start salt-minion
2.1.3 Node authentication
[root@ceph01 ~]#salt-key -F
Local Keys:
master.pem: 78:2f:29:24:6b:a3:ef:25:51:18:b9:b8:59:88:28:8f:bb:3d:a7:f8:30:dd:d8:5e:20:17:16:25:23:43:f8:d0
master.pub: 5a:10:f9:4f:84:e4:a3:b4:ef:09:8a:44:1b:e2:0d:32:2d:cc:ca:c2:b2:55:83:d6:a8:56:84:cd:fe:d6:1d:67
Unaccepted Keys:
ceph01.thinkjoy.tt: a3:32:0e:03:4a:62:90:ea:40:d4:b6:7f:0b:51:fb:43:7a:3f:2f:af:f5:d0:a9:0f:8a:5c:70:b3:55:02:9a:b1
ceph02.thinkjoy.tt: 8a:38:cd:cd:6f:12:39:53:bf:04:f0:93:60:bb:4f:2e:77:1a:ed:21:91:65:30:77:0d:f4:50:21:f5:a9:8e:a1
ceph03.thinkjoy.tt: 38:93:3d:4c:21:95:6e:6a:ca:56:ce:3e:65:ab:b0:f4:a1:4f:c5:52:7e:62:57:7f:b4:7e:11:d3:7e:37:ed:9b
ceph04.thinkjoy.tt: 27:7a:c9:e1:2b:88:30:6e:23:50:9a:71:43:e5:60:92:27:a9:d2:d3:71:2b:69:a9:8a:9d:0a:74:13:d2:8f:39
[root@ceph01 ~]#salt-key --accept-all
The following keys are going to be accepted:
Unaccepted Keys:
ceph01.thinkjoy.tt
ceph02.thinkjoy.tt
ceph03.thinkjoy.tt
ceph04.thinkjoy.tt
Proceed? [n/Y] Y
Key for minion ceph01.thinkjoy.tt accepted.
Key for minion ceph02.thinkjoy.tt accepted.
Key for minion ceph03.thinkjoy.tt accepted.
Key for minion ceph04.thinkjoy.tt accepted.
[root@ceph01 ~]#salt-key --list-all
Accepted Keys:
ceph01.thinkjoy.tt
ceph02.thinkjoy.tt
ceph03.thinkjoy.tt
ceph04.thinkjoy.tt
Denied Keys:
Unaccepted Keys:
Rejected Keys:
2.2 Installing DeepSea
The installation and configuration of SUSE Enterprise Storage 5 is driven mainly by DeepSea. Install it on the Salt master node:
[root@ceph01 ~]#zypper in deepsea
--After the installation completes, the content of /srv/pillar/ceph/master_minion.sls is set to:
[root@ceph01 ~]#cat /srv/pillar/ceph/master_minion.sls
master_minion: ceph01.thinkjoy.tt
By default, the minion nodes do not belong to any DeepSea group, so run the following commands to add them to the default group:
[root@ceph01 ~]#salt ceph01.thinkjoy.tt grains.append deepsea default
ceph01.thinkjoy.tt:
----------
deepsea:
- default
[root@ceph01 ~]#salt ceph02.thinkjoy.tt grains.append deepsea default
ceph02.thinkjoy.tt:
----------
deepsea:
- default
[root@ceph01 ~]#salt ceph03.thinkjoy.tt grains.append deepsea default
ceph03.thinkjoy.tt:
----------
deepsea:
- default
[root@ceph01 ~]#salt ceph04.thinkjoy.tt grains.append deepsea default
ceph04.thinkjoy.tt:
----------
deepsea:
- default
[root@ceph01 ~]#salt -G 'deepsea:*' test.ping
ceph01.thinkjoy.tt:
True
ceph04.thinkjoy.tt:
True
ceph02.thinkjoy.tt:
True
ceph03.thinkjoy.tt:
True
If this step is skipped, the installation later fails with the following error:
No minions matched the target. No command was sent, no jid was assigned.
ERROR: No return received
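The four grains.append calls above can also be issued in one loop and verified afterwards; a dry-run sketch (remove echo to execute):

```shell
# Dry-run sketch: add every minion to DeepSea's "default" group in one loop,
# then verify the grain. Remove the "echo" prefixes to actually execute.
for i in 1 2 3 4; do
  echo "salt ceph0$i.thinkjoy.tt grains.append deepsea default"
done
echo "salt -G 'deepsea:*' grains.get deepsea"   # every minion should report the grain
```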
2.3 Installation
Run the following steps on the Salt master node. To see which operations each stage performs, run deepsea monitor in another terminal.
2.3.1 Provisioning (Stage 0)
This stage updates the operating system on all minion nodes.
[root@ceph01 ~]#export DEV_ENV=true
[root@ceph01 ~]#salt-run state.orch ceph.stage.0
2.3.2 Discovery (Stage 1)
This stage interrogates all minion nodes and creates the pillar configuration files under the /srv/pillar/ceph/proposals directory.
[root@ceph01 ~]#salt-run state.orch ceph.stage.1
2.3.3 Configure (Stage 2)
Before running this stage, the policy.cfg file must be created. It assigns roles to the individual nodes:
[root@ceph01 ~]#vi /srv/pillar/ceph/proposals/policy.cfg
# Cluster assignment
cluster-ceph/cluster/ceph0[1-4].thinkjoy.tt.sls
# Hardware Profile
profile-default/cluster/ceph0[1-4].thinkjoy.tt.sls
profile-default/stack/default/ceph/minions/ceph0[1-4].thinkjoy.tt.yml
# Common configuration
config/stack/default/global.yml
config/stack/default/ceph/cluster.yml
# Role assignment
role-master/cluster/ceph01.thinkjoy.tt.sls
role-admin/cluster/ceph0[1-4].thinkjoy.tt.sls
role-mon/cluster/ceph0[2-4].thinkjoy.tt.sls
role-mon/stack/default/ceph/minions/ceph0[2-4].thinkjoy.tt.yml
#For MGR
role-mgr/cluster/ceph0[1-3].thinkjoy.tt.sls
#For Openattic
role-openattic/cluster/ceph01.thinkjoy.tt.sls
#For Iscsi Service
role-igw/cluster/ceph0[1-4].thinkjoy.tt.sls
#For RGW
role-rgw/cluster/ceph0[2-4].thinkjoy.tt.sls
#For MDS
role-mds/cluster/ceph0[2-4].thinkjoy.tt.sls
#For NFS
role-ganesha/cluster/ceph0[2-4].thinkjoy.tt.sls
[root@ceph01 ~]#salt-run state.orch ceph.stage.2
2.3.4 Deploy (Stage 3)
This stage validates the pillar configuration and deploys the MONs and OSDs.
[root@ceph01 ~]#salt-run state.orch ceph.stage.3
Once it completes, the entire Ceph cluster is configured.
2.3.5 Services (Stage 4)
This stage creates the remaining services (iSCSI gateway, CephFS, RadosGW, openATTIC).
[root@ceph01 ~]#salt-run state.orch ceph.stage.4
openATTIC is a web-based tool for managing and monitoring Ceph; the default user is openattic, with password openattic.
References:
1. SUSE Enterprise Storage 5
2. Hello Salty Goodness
3. SUSE Enterprise Storage 5 Installation Guide
Original article: http://blog.51cto.com/candon123/2121506