The Manual to Deploy Storm/Hurricane

OS: Ubuntu 14.04

1. Install dependencies and Hurricane on the cluster nodes.

Dependencies:

$ sudo apt-get install binutils libaio1 libboost-system1.54.0 libboost-thread1.54.0 libcrypto++9 libgoogle-perftools4 libjs-jquery libleveldb1 libreadline5 libsnappy1 libtcmalloc-minimal4 libunwind8 python-blinker python-flask python-itsdangerous python-jinja2 python-markupsafe python-pyinotify python-werkzeug xfsprogs libfcgi0ldbl gdebi-core python3-chardet python3-debian python3-six gdisk cryptsetup-bin cryptsetup syslinux liblz4-dev libevent1-dev libsnappy-dev libaio-dev python-setuptools python-boto

 

The Hurricane debs:
Copy the debs from a server that already has them, e.g.:
ems@rack3-client-6:~/sndk-ifos-2.0.0.07/
Note: the release notes do not give a network location for the release, so the packages have to be copied manually.

Install all the debs under main/x86_64/ceph/:
$ sudo dpkg -i *.deb
Then install radosgw*.deb under main/x86_64/client/:
$ sudo dpkg -i radosgw*.deb
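
The dependencies and debs have to be installed on every node of the cluster (the monitor and the three OSD servers). A minimal sketch of fanning the install out over ssh, assuming passwordless ssh and sudo for the ems user and that the debs were already copied to ~/debs on each node (all of these are assumptions, not part of the original steps):
$ for node in rack3-client-6 rack6-storage-4 rack6-storage-5 rack6-storage-6; do
      ssh ems@$node 'cd ~/debs/main/x86_64/ceph && sudo dpkg -i *.deb && cd ../client && sudo dpkg -i radosgw*.deb'
  done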

2. Use ceph-deploy to deploy the cluster
a) Create the ceph monitor on the monitor server (rack3-client-6):
$ mkdir ceph-admin && cd ceph-admin
$ ceph-deploy new rack3-client-6
$ ceph-deploy mon create rack3-client-6
$ ceph-deploy gatherkeys rack3-client-6
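At this point the working directory contains a ceph.conf generated by ceph-deploy new; its [global] section should look roughly like the sketch below (the fsid and monitor address are taken from the ceph -s output in section 6, and the auth lines are the usual ceph-deploy defaults, so treat the exact values as illustrative):
[global]
fsid = 2946b6e6-2948-4b3f-ad77-0f1c5af8eed6
mon_initial_members = rack3-client-6
mon_host = 10.242.43.1
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx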
b) Enable ZS backend:
Install ZS-Shim; it also has to be copied from a server that has it, e.g.:
ems@rack3-client-6:~/sndk-ifos-2.0.0.06/sndk-ifos-2.0.0.06/shim/zs-shim_1.0.0_amd64.deb
Then install the deb:
$ sudo dpkg -i zs-shim_1.0.0_amd64.deb
Next, add the following settings to the [osd] section of ceph.conf:
osd_objectstore = keyvaluestore
enable_experimental_unrecoverable_data_corrupting_features = keyvaluestore
filestore_omap_backend = propdb
keyvaluestore_backend = propdb
keyvaluestore_default_strip_size = 65536
keyvaluestore_backend_library = /opt/sandisk/libzsstore.so
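As a quick sanity check (not part of the original steps), confirm the shim really placed the backend library at the path referenced by keyvaluestore_backend_library:
$ ls -l /opt/sandisk/libzsstore.so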
c) Push the updated ceph.conf to the cluster:
$ ceph-deploy --overwrite-conf config push rack3-client-6
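The [osd] settings also have to reach the OSD servers before their OSDs are created; ceph-deploy accepts several hosts at once, for example (assuming the three OSD server hostnames used in step 3):
$ ceph-deploy --overwrite-conf config push rack6-storage-4 rack6-storage-5 rack6-storage-6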

3. Zap previous OSDs and create new OSDs (run from the admin/monitor node)
$ ceph-deploy disk zap rack6-storage-4:/dev/sdb
$ ceph-deploy osd create rack6-storage-4:/dev/sdb
In this case, we need to zap and create 16 OSDs on each of the 3 OSD servers (rack6-storage-4, rack6-storage-5, rack6-storage-6), as sketched below.
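A minimal sketch of scripting the 48 zap/create pairs, assuming the 16 data disks on each server show up as /dev/sdb through /dev/sdq (every device name beyond sdb is an assumption, not taken from the original):
$ for host in rack6-storage-4 rack6-storage-5 rack6-storage-6; do
      for disk in sd{b..q}; do                 # sdb..sdq = 16 disks per host
          ceph-deploy disk zap $host:/dev/$disk
          ceph-deploy osd create $host:/dev/$disk
      done
  done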

4. Create pools and rbd
Create a replicated pool and set its replica size to 3:
$ sudo ceph osd pool create hcn950 1600 1600
$ sudo ceph osd pool set hcn950 size 3
Create ec pools:
$ sudo ceph osd pool create ec1 1400 1400 erasure
$ sudo ceph osd pool create ec2 1400 1400 erasure
Create a 4 TB image and map it to the pool:
$ sudo rbd create image --size 4194304 -p hcn950
$ sudo rbd map image -p hcn950
Then check the mapped rbd:
$ sudo rbd showmapped

How to calculate pg_num for a replicated pool and an ec pool?
For a replicated pool:
pg_num = (osd_num * 100) / pool size (i.e. the replica count)
For an ec pool:
pg_num = (osd_num * 100) / (k + m)
k=2, m=1 by default.
Use the command below to get the defaults:
$ sudo ceph osd erasure-code-profile get default
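
A quick check of the numbers used above, taking the 48 OSDs reported by ceph -s in section 6:
$ echo $((48 * 100 / 3))        # replicated pool with size 3 -> 1600, matching hcn950
$ echo $((48 * 100 / (2 + 1)))  # ec pool with default k=2, m=1 -> 1600 (the ec pools above use a lower 1400)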

5. Start IO on the Hurricane cluster:
First, install an IO tool such as fio:
$ sudo apt-get install fio
Write a fio job file, for example:
$ cat fio1.fio
[random-writer]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=4096G
numjobs=4
filename=/dev/rbd0
Finally, start fio:
$ sudo fio fio1.fio
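
The same job can also be run straight from the command line without a job file; the following is equivalent to fio1.fio above (the job name random-writer matches the console log in section 6):
$ sudo fio --name=random-writer --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=4096G --numjobs=4 --filename=/dev/rbd0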

6. Check the status of the Hurricane cluster:
Fio console log:
$ sudo fio fio1.fio
random-writer: (g=0): rw=randwrite, bs=32K-32K/32K-32K/32K-32K, ioengine=libaio, iodepth=4
...
random-writer: (g=0): rw=randwrite, bs=32K-32K/32K-32K/32K-32K, ioengine=libaio, iodepth=4
fio-2.1.3
Starting 4 processes
Jobs: 4 (f=4): [wwww] [1.6% done] [0KB/28224KB/0KB /s] [0/882/0 iops] [eta 07d:23h:50m:57s]

Cluster Status:
$ sudo ceph -s
    cluster 2946b6e6-2948-4b3f-ad77-0f1c5af8eed6
     health HEALTH_OK
     monmap e1: 1 mons at {rack3-client-6=10.242.43.1:6789/0}
            election epoch 2, quorum 0 rack3-client-6
     osdmap e334: 48 osds: 48 up, 48 in
      pgmap v12208: 4464 pgs, 4 pools, 3622 GB data, 1023 kobjects
            1609 GB used, 16650 GB / 18260 GB avail
                4464 active+clean
  client io 20949 kB/s wr, 1305 op/s

$ sudo rados df
pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
ec1                        0            0            0            0           0            0            0            0            0
ec2                        0            0            0            0           0            0            0            0            0
hcn950            3799132673      1048329            0            0           0            2            1     17493652    280696934
rbd                        0            0            0            0           0            0            0            0            0
  total used      1691025408      1048329
  total avail    17455972352
  total space    19146997760

$ sudo ceph osd tree

7. Remove OSDs
First, mark osd.0 out of the data distribution:
$ sudo ceph osd out 0
Then, mark osd.0 down:
$ sudo ceph osd down 0
Finally, remove osd.0 from the CRUSH map, delete its auth key, and remove it from the OSD map:
$ sudo ceph osd crush remove osd.0
$ sudo ceph auth del osd.0
$ sudo ceph osd rm 0
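
If several OSDs have to be removed, the same steps can be looped; a small sketch, assuming the ids to drop are osd.0 through osd.15 (this range is only an example, not from the original):
$ for id in $(seq 0 15); do
      sudo ceph osd out $id
      sudo ceph osd down $id
      sudo ceph osd crush remove osd.$id
      sudo ceph auth del osd.$id
      sudo ceph osd rm $id
  done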

8. Restart osd
Refer to: http://ceph.com/docs/master/rados/operations/operating/
To start all daemons of a particular type on a Ceph Node, execute:
$ sudo start ceph-osd-all
To start a specific daemon instance on a Ceph Node, execute:
$ sudo start ceph-osd id=0 # start osd.0 for example

To stop all daemons of a particular type on a Ceph Node, execute:
$ sudo stop ceph-osd-all
To stop a specific daemon instance on a Ceph Node, execute:
$ sudo stop ceph-osd id=0 # stop osd.0 for example
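
Since these are Upstart jobs on Ubuntu 14.04, a specific daemon can also be bounced in one step:
$ sudo restart ceph-osd id=0 # restart osd.0 for example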

9. Rebalance the data when some OSDs hit the "near full" warning
$ sudo ceph osd reweight-by-utilization
Then check the cluster status:
$ sudo ceph -s
or:
$ sudo ceph -w
Recovery IO will appear in the output while the cluster rebalances.
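
Before reweighting, it also helps to see which OSDs actually triggered the warning; ceph health detail lists the near-full OSDs and their utilization:
$ sudo ceph health detail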

 

Original article: http://www.cnblogs.com/AlfredChen/p/4665771.html
