
Linux and Cloud Computing, Stage 2, Chapter 5: Storage Server Setup - Distributed Storage with Ceph

Date: 2016-08-01 23:25:09

Tags: cloud computing, server, Ceph cluster configuration, Linux, block device

Linux and Cloud Computing, Stage 2: Linux Server Setup

Chapter 5: Storage Server Setup - Distributed Storage with Ceph


1 Ceph: Configure a Ceph Cluster

Install the distributed storage system "Ceph" and configure a storage cluster.

In this example, the cluster consists of 1 admin node and 3 storage nodes, as shown below.

                                         |

        +--------------------+           |           +-------------------+

        |   [dlp.srv.world]  |10.0.0.30  |   10.0.0.x|   [   Client  ]   |

        |    Ceph-Deploy     +-----------+-----------+                   |

        |                    |           |           |                   |

        +--------------------+           |           +-------------------+

            +----------------------------+----------------------------+

            |                            |                            |

            |10.0.0.51                   |10.0.0.52                   |10.0.0.53

+-----------+-----------+    +-----------+-----------+    +-----------+-----------+

|   [node01.srv.world]  |    |  [node02.srv.world]   |    |   [node03.srv.world]  |

|     Object Storage    +----+     Object Storage    +----+     Object Storage    |

|     Monitor Daemon    |    |                       |    |                       |

|                       |    |                       |    |                       |

+-----------------------+    +-----------------------+    +-----------------------+

[1] Add a user for Ceph admin on all Nodes.

This example adds the "cent" user.
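The user-creation command itself is not shown here; a minimal sketch, assuming the "cent" user name from this example, is to run the following on every node:

[root@dlp ~]# useradd cent
[root@dlp ~]# passwd cent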

[2] Grant root privileges to the Ceph admin user just added above via sudo settings.

Also install the required packages.

Furthermore, if Firewalld is running on the nodes, allow the SSH service.

Apply all of the above on all nodes.

[root@dlp ~]# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph

[root@dlp ~]# chmod 440 /etc/sudoers.d/ceph

[root@dlp ~]# yum -y install centos-release-ceph-hammer epel-release yum-plugin-priorities

[root@dlp ~]# sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /etc/yum.repos.d/CentOS-Ceph-Hammer.repo

[root@dlp ~]# firewall-cmd --add-service=ssh --permanent

[root@dlp ~]# firewall-cmd --reload

[3] On the Monitor node (Monitor Daemon), if Firewalld is running, allow the required port.

[root@dlp ~]# firewall-cmd --add-port=6789/tcp --permanent

[root@dlp ~]# firewall-cmd --reload

[4] On the Storage nodes (Object Storage), if Firewalld is running, allow the required ports.

[root@dlp ~]# firewall-cmd --add-port=6800-7100/tcp --permanent

[root@dlp ~]# firewall-cmd --reload
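Optionally (not part of the original steps), the active firewall rules can be verified afterwards with the standard firewalld query command:

[root@dlp ~]# firewall-cmd --list-all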

[5] Log in as the Ceph admin user and configure Ceph.

Set up SSH key-pair authentication from the Ceph admin node (it's "dlp.srv.world" in this example) to all storage nodes.

[cent@dlp ~]$ ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/cent/.ssh/id_rsa):

Created directory '/home/cent/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/cent/.ssh/id_rsa.

Your public key has been saved in /home/cent/.ssh/id_rsa.pub.

The key fingerprint is:

54:c3:12:0e:d3:65:11:49:11:73:35:1b:e3:e8:63:5a cent@dlp.srv.world

The key's randomart image is:

 

[cent@dlp ~]$ vi ~/.ssh/config

# create new (define all nodes and users)

Host dlp

    Hostname dlp.srv.world

    User cent

Host node01

    Hostname node01.srv.world

    User cent

Host node02

    Hostname node02.srv.world

    User cent

Host node03

    Hostname node03.srv.world

    User cent

 

[cent@dlp ~]$ chmod 600 ~/.ssh/config

# transfer key file

[cent@dlp ~]$ ssh-copy-id node01

cent@node01.srv.world's password:

 

Number of key(s) added: 1

 

Now try logging into the machine, with:   "ssh 'node01'"

and check to make sure that only the key(s) you wanted were added.

 

[cent@dlp ~]$ ssh-copy-id node02

[cent@dlp ~]$ ssh-copy-id node03
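As an optional check not shown in the original, you can confirm passwordless login works before proceeding:

# should print the node's hostname without asking for a password
[cent@dlp ~]$ ssh node01 hostname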

[6] Install Ceph on all nodes from the admin node.

[cent@dlp ~]$ sudo yum -y install ceph-deploy

[cent@dlp ~]$ mkdir ceph

[cent@dlp ~]$ cd ceph

[cent@dlp ceph]$ ceph-deploy new node01

[cent@dlp ceph]$ vi ./ceph.conf

# add to the end

osd pool default size = 2

# Install Ceph on each Node

[cent@dlp ceph]$ ceph-deploy install dlp node01 node02 node03

# settings for monitoring and keys

[cent@dlp ceph]$ ceph-deploy mon create-initial

[7] Configure the Ceph cluster from the admin node.

Before that, create a directory /storage01 on node01, /storage02 on node02, and /storage03 on node03 in this example (see the sketch just below).
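A minimal sketch, run from the admin node over the SSH connections set up earlier (the directory paths are the ones from this example):

[cent@dlp ceph]$ ssh node01 "sudo mkdir /storage01"
[cent@dlp ceph]$ ssh node02 "sudo mkdir /storage02"
[cent@dlp ceph]$ ssh node03 "sudo mkdir /storage03"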

# prepare Object Storage Daemon

[cent@dlp ceph]$ ceph-deploy osd prepare node01:/storage01 node02:/storage02 node03:/storage03

# activate Object Storage Daemon

[cent@dlp ceph]$ ceph-deploy osd activate node01:/storage01 node02:/storage02 node03:/storage03

# transfer config files

[cent@dlp ceph]$ ceph-deploy admin dlp node01 node02 node03

[cent@dlp ceph]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

# show status (displays as follows if there is no problem)

[cent@dlp ceph]$ ceph health

HEALTH_OK
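For a more detailed view than "ceph health", these standard status commands (not shown in the original) can be run from any node that holds the admin keyring, such as the admin node:

[cent@dlp ceph]$ ceph -s           # overall cluster status
[cent@dlp ceph]$ ceph osd tree     # OSD layout per host
[cent@dlp ceph]$ ceph df           # cluster capacity and pool usage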

[8] By the way, if you would like to clean up the settings and reconfigure, do as follows.

# remove packages

[cent@dlp ceph]$ ceph-deploy purge dlp node01 node02 node03

# remove settings

[cent@dlp ceph]$ ceph-deploy purgedata dlp node01 node02 node03

[cent@dlp ceph]$ ceph-deploy forgetkeys
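In addition, the Ceph quick-start documentation suggests removing the local files that ceph-deploy left in the working directory on the admin node before starting over, for example:

# remove local config, keyrings and logs generated by ceph-deploy
[cent@dlp ceph]$ rm -f ceph.*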

2 Use Ceph as a Block Device

Configure a client to use the Ceph storage as follows.

                                         |

        +--------------------+           |           +-------------------+

        |   [dlp.srv.world]  |10.0.0.30  |   10.0.0.x|   [   Client  ]   |

        |    Ceph-Deploy     +-----------+-----------+                   |

        |                    |           |           |                   |

        +--------------------+           |           +-------------------+

            +----------------------------+----------------------------+

            |                            |                            |

            |10.0.0.51                   |10.0.0.52                   |10.0.0.53

+-----------+-----------+    +-----------+-----------+    +-----------+-----------+

|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |

|     Object Storage    +----+     Object Storage    +----+     Object Storage    |

|     Monitor Daemon    |    |                       |    |                       |

|                       |    |                       |    |                       |

+-----------------------+    +-----------------------+    +-----------------------+  

For example, create a block device and mount it on a client.

[1] First, configure sudo and an SSH key-pair for a user on the Client (see the sketch just below), and then install Ceph on the Client from the Ceph admin node.
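That client-side preparation is not shown in the original; a minimal sketch, mirroring steps [1]-[2] of section 1 (the "cent" user and the host alias "client" are assumptions carried over from this example), might be:

# on the Client, as root: create the admin user and grant sudo
[root@client ~]# useradd cent
[root@client ~]# passwd cent
[root@client ~]# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
[root@client ~]# chmod 440 /etc/sudoers.d/ceph
# on the admin node: add a "client" entry (hostname assumed) to ~/.ssh/config as in section 1, then copy the key
[cent@dlp ~]$ ssh-copy-id client

After this preparation, the two ceph-deploy commands below install Ceph on the Client and push the config and admin keyring to it.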

[cent@dlp ceph]$ ceph-deploy install client

[cent@dlp ceph]$ ceph-deploy admin client

[2] Create a Block device and mount it on a Client.

[cent@client ~]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

# create a 10 GB disk image

[cent@client ~]$ rbd create disk01 --size 10240

# show list

[cent@client ~]$ rbd ls -l

NAME     SIZE PARENT FMT PROT LOCK

disk01 10240M          2

 

# map the image to device

[cent@client ~]$ sudo rbd map disk01

/dev/rbd0

# show mapping

[cent@client ~]$ rbd showmapped

id pool image  snap device

0  rbd  disk01 -    /dev/rbd0

 

# format with XFS

[cent@client ~]$ sudo mkfs.xfs /dev/rbd0

# mount device

[cent@client ~]$ sudo mount /dev/rbd0 /mnt

[cent@client ~]$ df -hT

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs        27G  1.3G   26G   5% /

devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev

tmpfs                   tmpfs     2.0G     0  2.0G   0% /dev/shm

tmpfs                   tmpfs     2.0G  8.4M  2.0G   1% /run

tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/vda1               xfs       497M  151M  347M  31% /boot

/dev/rbd0               xfs        10G   33M   10G   1% /mnt
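Not part of the original walkthrough, but for completeness: to detach the block device again, the standard rbd commands are roughly as follows.

[cent@client ~]$ sudo umount /mnt
[cent@client ~]$ sudo rbd unmap /dev/rbd0
# remove the image entirely if it is no longer needed
[cent@client ~]$ rbd rm disk01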

3 Use Ceph as a Filesystem

Configure a client to use the Ceph storage as follows.

                                         |

        +--------------------+           |           +-------------------+

        |   [dlp.srv.world]  |10.0.0.30  |   10.0.0.x|   [   Client  ]   |

        |    Ceph-Deploy     +-----------+-----------+                   |

        |                    |           |           |                   |

        +--------------------+           |           +-------------------+

            +----------------------------+----------------------------+

            |                            |                            |

            |10.0.0.51                   |10.0.0.52                   |10.0.0.53

+-----------+-----------+    +-----------+-----------+    +-----------+-----------+

|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |

|     Object Storage    +----+     Object Storage    +----+     Object Storage    |

|     Monitor Daemon    |    |                       |    |                       |

|                       |    |                       |    |                       |

+-----------------------+    +-----------------------+    +-----------------------+

For example, mount CephFS on a client.

[1] Create an MDS (MetaData Server) on the node you would like to use as the MDS. This example sets it up on node01.

[cent@dlp ceph]$ ceph-deploy mds create node01

[2] Create at least 2 RADOS pools on the MDS node and activate the MetaData Server.

For the pg_num value specified at the end of the create command, refer to the official documentation below and choose an appropriate value; at the time of writing, it recommends 128 placement groups for pools in clusters with fewer than 5 OSDs, which is why 128 is used here.

 http://docs.ceph.com/docs/master/rados/operations/placement-groups/

[cent@node01 ~]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

# create pools

[cent@node01 ~]$ ceph osd pool create cephfs_data 128

pool 'cephfs_data' created

[cent@node01 ~]$ ceph osd pool create cephfs_metadata 128

pool 'cephfs_metadata' created

# enable pools

[cent@node01 ~]$ ceph fs new cephfs cephfs_metadata cephfs_data

new fs with metadata pool 2 and data pool 1

# show list

[cent@node01 ~]$ ceph fs ls

name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

[cent@node01 ~]$ ceph mds stat

e5: 1/1/1 up {0=node01=up:active}

[3] Mount CephFS on a Client.

[root@client ~]# yum -y install ceph-fuse

# get admin key

[root@client ~]# ssh cent@node01.srv.world "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key

cent@node01.srv.world's password:

[root@client ~]# chmod 600 admin.key

[root@client ~]# mount -t ceph node01.srv.world:6789:/ /mnt -o name=admin,secretfile=admin.key

[root@client ~]# df -hT

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs        27G  1.3G   26G   5% /

devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev

tmpfs                   tmpfs     2.0G     0  2.0G   0% /dev/shm

tmpfs                   tmpfs     2.0G  8.3M  2.0G   1% /run

tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/vda1               xfs       497M  151M  347M  31% /boot

10.0.0.51:6789:/        ceph       80G   19G   61G  24% /mnt
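To make the mount persistent across reboots, a hedged example of an /etc/fstab entry for the kernel CephFS client (the mount point and the /root/admin.key path are assumptions taken from this example) would be:

# /etc/fstab entry for the kernel CephFS client
node01.srv.world:6789:/    /mnt    ceph    name=admin,secretfile=/root/admin.key,noatime,_netdev    0 0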



For the detailed video course, see: http://edu.51cto.com/course/course_id-6574.html



This article comes from the "11830455" blog; please retain the source: http://11840455.blog.51cto.com/11830455/1833048
