
Linux and Cloud Computing, Phase 2, Chapter 5: Storage Server Setup - Distributed Storage with GlusterFS (Basics)


Tags: GlusterFS installation, Distributed setup, Replication setup, Striping setup, Client settings, Striping+Replication

Linux and Cloud Computing, Phase 2: Linux Server Setup

Chapter 5: Storage Server Setup - Distributed Storage with GlusterFS (Basics)


1 GlusterFS Installation

Install GlusterFS to configure a storage cluster.

It is recommended to place GlusterFS volumes on a partition separate from the / partition.

In this example, /dev/sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.
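For reference, the brick filesystem could be prepared roughly as follows. This is only a sketch under assumptions: the spare disk is /dev/sdb and XFS is used; adjust the device name and filesystem to your environment.

# create a partition on the spare disk (assumed to be /dev/sdb) and format it with XFS
[root@node01 ~]# parted --script /dev/sdb "mklabel gpt" "mkpart primary 0% 100%"
[root@node01 ~]# mkfs.xfs -i size=512 /dev/sdb1
# mount it on /glusterfs and make the mount persistent across reboots
[root@node01 ~]# mkdir -p /glusterfs
[root@node01 ~]# mount /dev/sdb1 /glusterfs
[root@node01 ~]# echo "/dev/sdb1 /glusterfs xfs defaults 0 0" >> /etc/fstab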

[1] Install GlusterFS Server on all Nodes in Cluster.

[root@node01 ~]# curl http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo -o /etc/yum.repos.d/glusterfs-epel.repo

# enable EPEL, too

[root@node01 ~]# yum --enablerepo=epel -y install glusterfs-server

[root@node01 ~]# systemctl start glusterd

[root@node01 ~]# systemctl enable glusterd

[2] If Firewalld is running, allow the GlusterFS service on all nodes.

[root@node01 ~]# firewall-cmd --add-service=glusterfs --permanent

success

[root@node01 ~]# firewall-cmd --reload

success

If you mount GlusterFS volumes from clients with the GlusterFS Native Client, the configuration above is sufficient.

[3] GlusterFS also supports NFS (v3), so if you mount GlusterFS volumes from clients over NFS, additionally configure as follows.

[root@node01 ~]# yum -y install rpcbind

[root@node01 ~]# systemctl start rpcbind

[root@node01 ~]# systemctl enable rpcbind

[root@node01 ~]# systemctl restart glusterd

[4] The installation and basic settings of GlusterFS are now complete. Refer to the next section for clustering settings.
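As an optional sanity check before moving on, you can confirm the installed version and that the daemon is running on each node:

[root@node01 ~]# glusterfs --version
[root@node01 ~]# systemctl status glusterd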

2 Distributed Setup

Configure Storage Clustering.

For example, create a distributed volume with 2 servers. This example uses 2 servers, but 3 or more can be used as well.

                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

It is recommended to place GlusterFS volumes on a partition separate from the / partition.

In this example, /dev/sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.

[1] Install GlusterFS Server on all nodes; refer to section 1 above.

[2] Create a Directory for GlusterFS Volume on all Nodes.

[root@node01 ~]# mkdir /glusterfs/distributed

[3] Configure clustering as follows on one node. (Any node is fine.)

# probe the node

[root@node01 ~]# gluster peer probe node02

peer probe: success.

# show status

[root@node01 ~]# gluster peer status

Number of Peers: 1

 

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

 

# create volume

[root@node01 ~]# gluster volume create vol_distributed transport tcp \
node01:/glusterfs/distributed \
node02:/glusterfs/distributed

volume create: vol_distributed: success: please start the volume to access data

# start volume

[root@node01 ~]# gluster volume start vol_distributed

volume start: vol_distributed: success

# show volume info

[root@node01 ~]# gluster volume info

 

Volume Name: vol_distributed

Type: Distribute

Volume ID: 6677caa9-9aab-4c1a-83e5-2921ee78150d

Status: Started

Number of Bricks: 2

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/distributed

Brick2: node02:/glusterfs/distributed

Options Reconfigured:

performance.readdir-ahead: on

[4] To mount the GlusterFS volume on clients, refer to section 7 (Client Settings) below.
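To see the distribution in action, you can create a few files from a client that has mounted the volume and check which brick each one landed on. This is only an illustrative sketch; the file names are arbitrary.

# on a client with vol_distributed mounted on /mnt
[root@client ~]# touch /mnt/file{1..10}
# on each server, list the brick directory; every file exists on exactly one of the bricks
[root@node01 ~]# ls /glusterfs/distributed
[root@node02 ~]# ls /glusterfs/distributed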

3 Replication Setup

Configure Storage Clustering.

For example, create a replicated volume with 2 servers. This example uses 2 servers, but 3 or more can be used as well.

                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

It is recommended to place GlusterFS volumes on a partition separate from the / partition.

In this example, /dev/sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.

[1] Install GlusterFS Server on all nodes; refer to section 1 above.

[2] Create a Directory for GlusterFS Volume on all Nodes.

[root@node01 ~]# mkdir /glusterfs/replica

[3] Configure clustering as follows on one node. (Any node is fine.)

# probe the node

[root@node01 ~]# gluster peer probe node02

peer probe: success.

# show status

[root@node01 ~]# gluster peer status

Number of Peers: 1

 

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

 

# create volume

[root@node01 ~]# gluster volume create vol_replica replica 2 transport tcp \
node01:/glusterfs/replica \
node02:/glusterfs/replica

volume create: vol_replica: success: please start the volume to access data

# start volume

[root@node01 ~]# gluster volume start vol_replica

volume start: vol_replica: success

# show volume info

[root@node01 ~]# gluster volume info

 

Volume Name: vol_replica

Type: Replicate

Volume ID: 0d5d5ef7-bdfa-416c-8046-205c4d9766e6

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/replica

Brick2: node02:/glusterfs/replica

Options Reconfigured:

performance.readdir-ahead: on

[4] To mount the GlusterFS volume on clients, refer to section 7 (Client Settings) below.
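A simple way to confirm replication is to create a file from a client and verify that it appears on both bricks; the self-heal status can be queried as well. This is an illustrative sketch, not part of the required setup.

# on a client with vol_replica mounted on /mnt
[root@client ~]# touch /mnt/testfile
# the same file should exist on both bricks
[root@node01 ~]# ls /glusterfs/replica
[root@node02 ~]# ls /glusterfs/replica
# list any entries still pending self-heal
[root@node01 ~]# gluster volume heal vol_replica info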

4 Striping Setup

Configure Storage Clustering.

For example, create a striped volume with 2 servers. This example uses 2 servers, but 3 or more can be used as well.

 

                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

It is recommended to place GlusterFS volumes on a partition separate from the / partition.

In this example, /dev/sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.

[1] Install GlusterFS Server on all nodes; refer to section 1 above.

[2] Create a Directory for GlusterFS Volume on all Nodes.

[root@node01 ~]# mkdir /glusterfs/striped

[3] Configure clustering as follows on one node. (Any node is fine.)

# probe the node

[root@node01 ~]# gluster peer probe node02

peer probe: success.

# show status

[root@node01 ~]# gluster peer status

Number of Peers: 1

 

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

 

# create volume

[root@node01 ~]# gluster volume create vol_striped stripe 2 transport tcp \
node01:/glusterfs/striped \
node02:/glusterfs/striped

volume create: vol_striped: success: please start the volume to access data

# start volume

[root@node01 ~]# gluster volume start vol_striped

volume start: vol_striped: success

# show volume info

[root@node01 ~]# gluster volume info

 

Volume Name: vol_striped

Type: Stripe

Volume ID: b6f6b090-3856-418c-aed3-bc430db91dc6

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/striped

Brick2: node02:/glusterfs/striped

Options Reconfigured:

performance.readdir-ahead: on

[4] To mount the GlusterFS volume on clients, refer to section 7 (Client Settings) below.
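To observe the striping, write a file from a client that is well above the stripe block size and compare the on-disk usage on each brick; roughly half of the data should sit on each. This is an illustrative sketch; the file name and size are arbitrary.

# on a client with vol_striped mounted on /mnt
[root@client ~]# dd if=/dev/zero of=/mnt/bigfile bs=1M count=10
# each brick stores only part of the file's data
[root@node01 ~]# du -h /glusterfs/striped/bigfile
[root@node02 ~]# du -h /glusterfs/striped/bigfile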

5 Distributed+Replication

Configure Storage Clustering.

For example, create a distributed + replicated volume with 4 servers.

                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#3] |10.0.0.53 | 10.0.0.54| [GlusterFS Server#4] |
|   node03.srv.world   +----------+----------+   node04.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

It is recommended to place GlusterFS volumes on a partition separate from the / partition.

In this example, /dev/sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.

[1] Install GlusterFS Server on all nodes; refer to section 1 above.

[2] Create a Directory for GlusterFS Volume on all Nodes.

[root@node01 ~]# mkdir /glusterfs/dist-replica

[3] Configure clustering as follows on one node. (Any node is fine.)

# probe the node

[root@node01 ~]# gluster peer probe node02

peer probe: success.

[root@node01 ~]# gluster peer probe node03

peer probe: success.

[root@node01 ~]# gluster peer probe node04

peer probe: success.

# show status

[root@node01 ~]# gluster peer status

Number of Peers: 3

 

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

 

Hostname: node03

Uuid: 79cff591-1e98-4617-953c-0d3e334cf96a

State: Peer in Cluster (Connected)

 

Hostname: node04

Uuid: 779ab1b3-fda9-46da-af95-ba56477bf638

State: Peer in Cluster (Connected)

 

# create volume

[root@node01 ~]# gluster volume create vol_dist-replica replica 2 transport tcp \
node01:/glusterfs/dist-replica \
node02:/glusterfs/dist-replica \
node03:/glusterfs/dist-replica \
node04:/glusterfs/dist-replica

volume create: vol_dist-replica: success: please start the volume to access data

# start volume

[root@node01 ~]# gluster volume start vol_dist-replica

volume start: vol_dist-replica: success

# show volume info

[root@node01 ~]# gluster volume info

 

Volume Name: vol_dist-replica

Type: Distributed-Replicate

Volume ID: 784d2953-6599-4102-afc2-9069932894cc

Status: Started

Number of Bricks: 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/dist-replica

Brick2: node02:/glusterfs/dist-replica

Brick3: node03:/glusterfs/dist-replica

Brick4: node04:/glusterfs/dist-replica

Options Reconfigured:

performance.readdir-ahead: on

[4] To mount the GlusterFS volume on clients, refer to section 7 (Client Settings) below.
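Note that brick order matters when creating a replicated volume: consecutive bricks form a replica set, so here node01/node02 mirror each other and node03/node04 mirror each other. If the volume needs to grow later, bricks must be added in multiples of the replica count and the existing data redistributed. The following is only a sketch; node05 and node06 are hypothetical extra servers assumed to have been probed and prepared the same way as the others.

# add one more replica pair (hypothetical node05/node06), then redistribute existing data
[root@node01 ~]# gluster volume add-brick vol_dist-replica \
node05:/glusterfs/dist-replica \
node06:/glusterfs/dist-replica
[root@node01 ~]# gluster volume rebalance vol_dist-replica start
[root@node01 ~]# gluster volume rebalance vol_dist-replica status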

6 Striping+Replication

Configure Storage Clustering.

For example, create a striped + replicated volume with 4 servers.

                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#3] |10.0.0.53 | 10.0.0.54| [GlusterFS Server#4] |
|   node03.srv.world   +----------+----------+   node04.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

It is recommended to place GlusterFS volumes on a partition separate from the / partition.

In this example, /dev/sdb1 is mounted on the /glusterfs directory on all nodes for the GlusterFS configuration.

[1] Install GlusterFS Server on all nodes; refer to section 1 above.

[2] Create a Directory for GlusterFS Volume on all Nodes.

[root@node01 ~]# mkdir /glusterfs/strip-replica

[3] Configure clustering as follows on one node. (Any node is fine.)

# probe the node

[root@node01 ~]# gluster peer probe node02

peer probe: success.

[root@node01 ~]# gluster peer probe node03

peer probe: success.

[root@node01 ~]# gluster peer probe node04

peer probe: success.

# show status

[root@node01 ~]# gluster peer status

Number of Peers: 3

 

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

 

Hostname: node03

Uuid: 79cff591-1e98-4617-953c-0d3e334cf96a

State: Peer in Cluster (Connected)

 

Hostname: node04

Uuid: 779ab1b3-fda9-46da-af95-ba56477bf638

State: Peer in Cluster (Connected)

 

# create volume

[root@node01 ~]# gluster volume create vol_strip-replica stripe 2 replica 2 transport tcp \
node01:/glusterfs/strip-replica \
node02:/glusterfs/strip-replica \
node03:/glusterfs/strip-replica \
node04:/glusterfs/strip-replica

volume create: vol_strip-replica: success: please start the volume to access data

# start volume

[root@node01 ~]# gluster volume start vol_strip-replica

volume start: vol_strip-replica: success

# show volume info

[root@node01 ~]# gluster volume info

 

Volume Name: vol_strip-replica

Type: Striped-Replicate

Volume ID: ec36b0d3-8467-47f6-aa83-1020555f58b6

Status: Started

Number of Bricks: 1 x 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/strip-replica

Brick2: node02:/glusterfs/strip-replica

Brick3: node03:/glusterfs/strip-replica

Brick4: node04:/glusterfs/strip-replica

Options Reconfigured:

performance.readdir-ahead: on

[4] To mount the GlusterFS volume on clients, refer to section 7 (Client Settings) below.
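To verify that all bricks and their related processes (such as the NFS server and self-heal daemon) are online, the volume status can be checked from any node; the output is omitted here.

[root@node01 ~]# gluster volume status vol_strip-replica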

7 Client Settings

These are the settings for GlusterFS clients to mount GlusterFS volumes.

[1] To mount with the GlusterFS Native Client, configure as follows.

[root@client ~]# curl http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo -o /etc/yum.repos.d/glusterfs-epel.repo

[root@client ~]# yum -y install glusterfs glusterfs-fuse

# mount vol_distributed volume on /mnt

[root@client ~]# mount -t glusterfs node01.srv.world:/vol_distributed /mnt

[root@client ~]# df -hT

Filesystem                           Type            Size  Used Avail Use% Mounted on

/dev/mapper/centos-root              xfs              27G  1.1G   26G   5% /

devtmpfs                             devtmpfs        2.0G     0  2.0G   0% /dev

tmpfs                                tmpfs           2.0G     0  2.0G   0% /dev/shm

tmpfs                                tmpfs           2.0G  8.3M  2.0G   1% /run

tmpfs                                tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/vda1                            xfs             497M  151M  347M  31% /boot

node01.srv.world:/vol_distributed fuse.glusterfs   40G   65M   40G   1% /mnt
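To make the native-client mount persistent across reboots, an fstab entry along the following lines can be added. This is a sketch; _netdev is used so the mount waits until the network is up.

[root@client ~]# echo "node01.srv.world:/vol_distributed /mnt glusterfs defaults,_netdev 0 0" >> /etc/fstab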

[2] NFS (v3) is also supported, so it is possible to mount with NFS.

Configure it on the GlusterFS servers first; refer to step [3] in section 1.

[root@client ~]# yum -y install nfs-utils

[root@client ~]# systemctl start rpcbind rpc-statd

[root@client ~]# systemctl enable rpcbind rpc-statd

[root@client ~]# mount -t nfs -o mountvers=3 node01.srv.world:/vol_distributed /mnt

[root@client ~]# df -hT

Filesystem                           Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root              xfs        27G  1.1G   26G   5% /

devtmpfs                             devtmpfs  2.0G     0  2.0G   0% /dev

tmpfs                                tmpfs     2.0G     0  2.0G   0% /dev/shm

tmpfs                                tmpfs     2.0G  8.3M  2.0G   1% /run

tmpfs                                tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/vda1                            xfs       497M  151M  347M  31% /boot

node01.srv.world:/vol_distributed nfs        40G   64M   40G   1% /mnt
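Similarly, the NFS mount can be made persistent with an entry like the one below. This is a sketch; mountvers=3 is kept because the built-in Gluster NFS server only speaks NFSv3.

[root@client ~]# echo "node01.srv.world:/vol_distributed /mnt nfs defaults,_netdev,mountvers=3 0 0" >> /etc/fstab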




For the full video course, see: http://edu.51cto.com/course/course_id-6574.html



This article is from the "11830455" blog; please contact the author before reprinting.


Original article: http://11840455.blog.51cto.com/11830455/1833056
