
Ceph Installation Tutorial (Part 2)

Published: 2018-05-25 13:34:50


NTP Service Configuration

NTP Client Configuration

# vim /etc/ntp.conf
server 92.0.0.250
### Manually sync the time once
# ntpdate -u 92.0.0.250
### Start the service
# systemctl start ntpd
# systemctl enable ntpd
### Verify synchronization
# ntpq -p
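
To confirm the client actually converged, the `offset` column of `ntpq -p` can be checked in a script. A minimal sketch against a hypothetical captured line (the field positions match real `ntpq -p` output; in practice pipe `ntpq -p` in directly):

```shell
# Hypothetical sample line from `ntpq -p`, captured for illustration.
sample='*92.0.0.250  .GPS.  1 u  32  64  377  0.215  -0.012  0.004'
# Column 9 is the offset from the server in milliseconds; take its absolute value.
offset=$(echo "$sample" | awk '{ print ($9 < 0) ? -$9 : $9 }')
echo "clock offset: ${offset} ms"
```

An offset that stays in the low-millisecond range indicates the node is usable for a Ceph monitor quorum, which is sensitive to clock skew.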

Monitor Node Setup (Method 1)

Run the following commands on the anode node.

Generate a UUID for the Ceph cluster

# uuidgen
cb9321ef-c7b4-48f7-a1bf-5c75deede6ee
# tee /etc/ceph/ceph.conf << EOF
[global]
fsid = cb9321ef-c7b4-48f7-a1bf-5c75deede6ee

[mon.anode]
host = anode
mon addr = 92.0.0.11:6789
EOF
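
Before touching /etc/ceph it can help to render the same heredoc into a scratch directory and inspect the result first. A sketch under that assumption (temp directory instead of the article's real target path; the fsid comes from the kernel's UUID generator, equivalent to `uuidgen`):

```shell
conf_dir=$(mktemp -d)                       # stand-in for /etc/ceph
fsid=$(cat /proc/sys/kernel/random/uuid)    # same result as `uuidgen`
tee "$conf_dir/ceph.conf" > /dev/null << EOF
[global]
fsid = $fsid

[mon.anode]
host = anode
mon addr = 92.0.0.11:6789
EOF
grep "^fsid" "$conf_dir/ceph.conf"
```

Once the rendered file looks right, copy it to /etc/ceph/ceph.conf and reuse the same fsid in every later command that needs it.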

Create the cluster keyring and a monitor secret key

# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

Create the admin keyring, generate the client.admin user, and add it to the keyring

# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

Create a bootstrap-osd keyring, generate the client.bootstrap-osd user, and add it to the keyring

# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'

Add the generated keys to ceph.mon.keyring

# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

Create the monitor map

# monmaptool --create --add anode 92.0.0.11 --fsid cb9321ef-c7b4-48f7-a1bf-5c75deede6ee /tmp/monmap

Create the default data directory

# mkdir -p /var/lib/ceph/mon/ceph-anode

Populate the monitor daemon with the monitor map and keyring

# ceph-mon --mkfs -i anode --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

Update the configuration file

# tee /etc/ceph/ceph.conf << EOF
[global]
fsid = cb9321ef-c7b4-48f7-a1bf-5c75deede6ee
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128
osd crush chooseleaf type = 1

[mon.anode]
host = anode
mon addr = 92.0.0.11:6789
EOF
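
The `osd pool default pg num = 128` above is consistent with a common rule of thumb (an assumption, not stated in this article): roughly (OSD count × 100) / replica count placement groups per pool, rounded up to a power of two. A sketch of that calculation:

```shell
# Rule-of-thumb PG count: (osds * 100) / replicas, rounded up to a power of two.
# This heuristic is a widely used sizing guideline, not part of the tutorial itself.
pg_target() {
  local osds=$1 replicas=$2
  local raw=$(( osds * 100 / replicas ))
  local pg=1
  while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}
pg_target 2 2   # the 2-OSD, size-2 cluster built later in this article
```

For the two-OSD cluster assembled here the formula lands on 128, matching the default written into ceph.conf.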

Mark the monitor daemon as ready

# touch /var/lib/ceph/mon/ceph-anode/done

Start the monitor service

### There are two ways to bring the service up

### Method 1: fix ownership of the data directory
# chown -R ceph:ceph /var/lib/ceph

### Method 2: edit the systemd unit so the daemon runs as root
# vim /usr/lib/systemd/system/ceph-mon@.service
ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser root --setgroup root
# systemctl daemon-reload

### Start the service
# systemctl start ceph-mon@anode
# systemctl enable ceph-mon@anode

### Check the service
# ceph -s
  services:
    mon: 1 daemons, quorum anode
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
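
A script can confirm the quorum size by parsing that `mon:` line. A sketch against a captured sample of the output above (in practice pipe `ceph -s` in directly):

```shell
# Captured sample of the `mon:` line from the `ceph -s` output above.
status='    mon: 1 daemons, quorum anode'
# The field right after `mon:` is the daemon count.
mons=$(echo "$status" | awk '/mon:/ { print $2 }')
echo "monitors in quorum: $mons"
```

The same one-liner reports 2 and then 3 as bnode and cnode join the quorum in the sections below.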

Adding a Monitor Node

Run the following commands on the bnode node.

Copy the configuration file and the client keyring

# scp root@92.0.0.11:/etc/ceph/ceph.conf /etc/ceph/
# scp root@92.0.0.11:/etc/ceph/ceph.client.admin.keyring /etc/ceph/

Fetch the monitor cluster keyring

# ceph auth get mon. -o /tmp/ceph.mon.keyring

Fetch the monitor cluster map

# ceph mon getmap -o /tmp/monmap

Populate the new monitor daemon with the monitor map and keyring

# ceph-mon --mkfs -i bnode --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

Edit the configuration file

### The [mon.anode] section is what lets the ceph mon commands reach the monitor cluster before bnode is configured (delete it and the commands will fail). Once bnode is set up, the anode section can be kept or removed; when configuring cnode later, you can reuse either the anode or the bnode configuration.
# vim /etc/ceph/ceph.conf
[global]
fsid = cb9321ef-c7b4-48f7-a1bf-5c75deede6ee
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128
osd crush chooseleaf type = 1

[mon.bnode]
host = bnode
mon addr = 92.0.0.12:6789

Mark the monitor daemon as ready

# touch /var/lib/ceph/mon/ceph-bnode/done

Start the monitor service

# chown -R ceph:ceph /var/lib/ceph

### Start the service
# systemctl start ceph-mon@bnode
# systemctl enable ceph-mon@bnode

### Check the service
# ceph -s
  services:
    mon: 2 daemons, quorum anode,bnode
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

Add the cnode node the same way (its configuration file can be copied from either anode or bnode).

# ceph -s
  services:
    mon: 3 daemons, quorum anode,bnode,cnode
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

Configuring the Manager Service

Run the following commands on the anode node.

Create the key file

# name="kolla"
# mkdir -p /var/lib/ceph/mgr/ceph-$name
# ceph auth get-or-create mgr.$name mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-$name/keyring

Start the service

# chown -R ceph:ceph /var/lib/ceph
# systemctl start ceph-mgr@$name
# systemctl status ceph-mgr@$name
# systemctl enable ceph-mgr@$name

Check the service

# ceph -s
  services:
    mon: 3 daemons, quorum anode,bnode,cnode
    mgr: kolla(active)
    osd: 2 osds: 2 up, 2 in

Configure the bnode and cnode nodes the same way.

# ceph -s
  services:
    mon: 3 daemons, quorum anode,bnode,cnode
    mgr: kolla(active, starting)
    osd: 2 osds: 2 up, 2 in

Show the mgr command help

# ceph tell mgr help

Monitor Node Setup (Method 2)

Run the following commands on the anode node.

Generate a UUID for the Ceph cluster

# uuidgen
cb9321ef-c7b4-48f7-a1bf-5c75deede6ee
# tee /etc/ceph/ceph.conf << EOF
[global]
fsid = cb9321ef-c7b4-48f7-a1bf-5c75deede6ee
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128
osd crush chooseleaf type = 1
mon initial members = anode, bnode, cnode
mon host = 92.0.0.11, 92.0.0.12, 92.0.0.13
EOF

Create the cluster keyring and a monitor secret key

# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

Create the admin keyring, generate the client.admin user, and add it to the keyring

# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

Create a bootstrap-osd keyring, generate the client.bootstrap-osd user, and add it to the keyring

# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'

Add the generated keys to ceph.mon.keyring

# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

Create the monitor map

# monmaptool --create --add anode 92.0.0.11 --add bnode 92.0.0.12 --add cnode 92.0.0.13 --fsid cb9321ef-c7b4-48f7-a1bf-5c75deede6ee --clobber /tmp/monmap
# monmaptool --print /tmp/monmap

Create the default data directory

# mkdir -p /var/lib/ceph/mon/ceph-anode

Populate the monitor daemon with the monitor map and keyring

# ceph-mon --mkfs -i anode --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

Mark the monitor daemon as ready

# touch /var/lib/ceph/mon/ceph-anode/done

Start the monitor service

### Start the service
# chown -R ceph:ceph /var/lib/ceph
# systemctl start ceph-mon@anode
# systemctl enable ceph-mon@anode

### Check the service
# ceph daemon mon.anode mon_status

Adding a Monitor Node

Run the following commands on the bnode node.

Copy the configuration file, keyring, and monitor map

# scp root@92.0.0.11:/etc/ceph/ceph.conf /etc/ceph/
# scp root@92.0.0.11:/tmp/ceph.mon.keyring /tmp
# scp root@92.0.0.11:/tmp/monmap /tmp

Create the default data directory

# mkdir -p /var/lib/ceph/mon/ceph-bnode

Populate the monitor daemon with the monitor map and keyring

# ceph-mon --mkfs -i bnode --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

Mark the monitor daemon as ready

# touch /var/lib/ceph/mon/ceph-bnode/done

Start the monitor service

### Start the service
# chown -R ceph:ceph /var/lib/ceph
# systemctl start ceph-mon@bnode
# systemctl enable ceph-mon@bnode

### Check the service
# ceph daemon mon.bnode mon_status

Add the cnode node the same way.

Adding OSDs

Method 1: scripted via ceph-volume (dnode)

Run the following commands on the dnode node.

Copy the configuration file and keyrings

# scp root@92.0.0.11:/etc/ceph/ceph.conf /etc/ceph/
# scp root@92.0.0.11:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
# scp root@92.0.0.11:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/

Create the OSD

# ceph-volume lvm create --data /dev/vdb
Running command: ceph-authtool --gen-print-key
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2d6a968f-9c10-4c57-adb6-be0a1341581b
Running command: vgcreate --force --yes ceph-cb9321ef-c7b4-48f7-a1bf-5c75deede6ee /dev/vdb
 stdout: Physical volume "/dev/vdb" successfully created
 stdout: Volume group "ceph-cb9321ef-c7b4-48f7-a1bf-5c75deede6ee" successfully created
Running command: lvcreate --yes -l 100%FREE -n osd-block-2d6a968f-9c10-4c57-adb6-be0a1341581b ceph-cb9321ef-c7b4-48f7-a1bf-5c75deede6ee
 stdout: Logical volume "osd-block-2d6a968f-9c10-4c57-adb6-be0a1341581b" created.
Running command: ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: chown -R ceph:ceph /dev/dm-4
Running command: ln -s /dev/ceph-cb9321ef-c7b4-48f7-a1bf-5c75deede6ee/osd-block-2d6a968f-9c10-4c57-adb6-be0a1341581b /var/lib/ceph/osd/ceph-0/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: got monmap epoch 5
Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQDR0sFaI6fMAxAAhoZKkXR29nUPbWCeAAibkg==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
 stdout: added entity osd.0 auth auth(auid = 18446744073709551615 key=AQDR0sFaI6fMAxAAhoZKkXR29nUPbWCeAAibkg== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 2d6a968f-9c10-4c57-adb6-be0a1341581b --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/vdb
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-cb9321ef-c7b4-48f7-a1bf-5c75deede6ee/osd-block-2d6a968f-9c10-4c57-adb6-be0a1341581b --path /var/lib/ceph/osd/ceph-0
Running command: ln -snf /dev/ceph-cb9321ef-c7b4-48f7-a1bf-5c75deede6ee/osd-block-2d6a968f-9c10-4c57-adb6-be0a1341581b /var/lib/ceph/osd/ceph-0/block
Running command: chown -R ceph:ceph /dev/dm-4
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: systemctl enable ceph-volume@lvm-0-2d6a968f-9c10-4c57-adb6-be0a1341581b
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-2d6a968f-9c10-4c57-adb6-be0a1341581b.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: systemctl start ceph-osd@0
--> ceph-volume lvm activate successful for osd ID: 0
--> ceph-volume lvm activate successful for osd ID: None
--> ceph-volume lvm create successful for: /dev/vdb

Check the cluster status

# ceph -s
  services:
    mon: 3 daemons, quorum anode,bnode,cnode
    mgr: no daemons active
    osd: 1 osds: 1 up, 1 in

Method 2: manual (enode)

Run the following commands on the enode node.

Copy the configuration file and keyrings

# scp root@92.0.0.11:/etc/ceph/ceph.conf /etc/ceph/
# scp root@92.0.0.11:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
# scp root@92.0.0.11:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/

Generate a UUID for the OSD

# UUID=$(uuidgen)

Create a cephx key for the OSD

# OSD_SECRET=$(ceph-authtool --gen-print-key)

Create the OSD

### The ceph command's -i flag normally takes a JSON file; passing "-" makes it read the echoed JSON from stdin as if it were that file
# ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | ceph osd new $UUID -i - -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
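
To see the `-i -` idiom in isolation, the same JSON payload can be piped into any command that reads its file argument from stdin. In this sketch `python3` stands in for `ceph osd new` (a stand-in chosen because it needs no cluster), and the secret is a placeholder, not a real cephx key:

```shell
OSD_SECRET="AQPlaceholderNotARealKey=="   # placeholder value, for illustration only
# Build the JSON payload exactly as the article does.
payload="{\"cephx_secret\": \"$OSD_SECRET\"}"
# `-` means "read the file from stdin"; python3 plays the role of `ceph osd new -i -`.
parsed=$(echo "$payload" | python3 -c 'import json,sys; print(json.load(sys.stdin)["cephx_secret"])')
echo "$parsed"
```

The real command works the same way: `ceph osd new` parses the piped JSON, registers the secret, and prints the new OSD ID, which the article captures in `$ID`.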

Create the OSD data directory

# mkdir /var/lib/ceph/osd/ceph-$ID

Format the disk and mount it on the data directory

# mkfs.xfs /dev/vdb
# mount /dev/vdb /var/lib/ceph/osd/ceph-$ID

Update the fstab configuration

### Append at the end of the file (this entry assumes $ID resolved to 1)
# vim /etc/fstab
/dev/vdb /var/lib/ceph/osd/ceph-1               xfs     defaults        0 0
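
The fstab line hardcodes `ceph-1`, which is only correct if `ceph osd new` returned ID 1. A sketch that derives the line from the variable instead, appending to a temp file rather than the real /etc/fstab (an assumption for safe illustration):

```shell
ID=1                                   # whatever `ceph osd new` printed earlier
tmp_fstab=$(mktemp)                    # stand-in for /etc/fstab
# Build the mount entry from $ID so it cannot drift out of sync with the OSD ID.
printf '/dev/vdb /var/lib/ceph/osd/ceph-%s xfs defaults 0 0\n' "$ID" >> "$tmp_fstab"
cat "$tmp_fstab"
```

After verifying the generated line, append it to the real /etc/fstab so the OSD data directory is remounted on boot.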

Create the OSD keyring file

# ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring --name osd.$ID --add-key $OSD_SECRET

Initialize the OSD data directory

# ceph-osd -i $ID --mkfs --osd-uuid $UUID

Start the service

# chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID
# systemctl enable ceph-osd@$ID
# systemctl start ceph-osd@$ID

Check the cluster status

# ceph -s
  services:
    mon: 3 daemons, quorum anode,bnode,cnode
    mgr: no daemons active
    osd: 2 osds: 2 up, 2 in

Adding an MDS Node

Run the following commands on the anode node.

Create the MDS data directory

# mkdir -p /var/lib/ceph/mds/ceph-anode

Create the keyring

# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-anode/keyring --gen-key -n mds.anode

Import the keyring and set its capabilities

# ceph auth add mds.anode osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-anode/keyring

Edit the configuration file

# vim /etc/ceph/ceph.conf
[global]
fsid = cb9321ef-c7b4-48f7-a1bf-5c75deede6ee
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128
osd crush chooseleaf type = 1

[mon.anode]
host = anode
mon addr = 92.0.0.11:6789

[mds.anode]
host = anode

Start the service

# chown -R ceph:ceph /var/lib/ceph
# systemctl start ceph-mds@anode
# systemctl status ceph-mds@anode
# systemctl enable ceph-mds@anode



Original article: https://www.cnblogs.com/silvermagic/p/9087213.html
