Tags: gluster
Unlike a local file system, GlusterFS is a distributed file system (DFS), also called a network file system: a file system that lets files be shared across multiple hosts over a network, so that users on many machines can share files and storage space.
In such a file system, clients do not access the underlying storage blocks directly. Instead they communicate with servers over the network using a specific protocol, and thanks to that protocol design, both client and server can restrict access to the file system according to access control lists or grants.
GlusterFS is an open-source distributed file system with strong scale-out capability: it can grow to several petabytes of storage and serve thousands of clients. It aggregates physically distributed storage resources over TCP/IP or InfiniBand RDMA (a networking technology that supports many concurrent connections) and manages the data under a single global namespace. Built on a stackable user-space design, GlusterFS delivers excellent performance across a wide variety of workloads.
[root@lb01 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@lb01 ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Mar 18 00:27:08 lb01 systemd[1]: Starting firewalld - dynamic firewall daemon...
Mar 18 00:27:12 lb01 systemd[1]: Started firewalld - dynamic firewall daemon.
Mar 18 01:29:20 lb01 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Mar 18 01:29:22 lb01 systemd[1]: Stopped firewalld - dynamic firewall daemon.
[root@lb01 ~]# getenforce
Disabled
[root@lb01 ~]# hostname -I
10.0.0.5 172.16.1.5
[root@lb02 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@lb02 ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
[root@lb02 ~]# getenforce
Disabled
[root@lb02 ~]# hostname -I
10.0.0.6 172.16.1.6
Note: configuring the hosts file is a critical step, and every machine needs it.
[root@web03 gv0]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.41 backup   # gluster server
172.16.1.17 web03    # gluster server
172.16.1.18 web04
172.16.1.21 cache01  # client that will mount the volume
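The host entries above can be added idempotently so the step is safe to re-run on every machine. A minimal sketch, shown against a temp file so it can be dry-run without root (on the real machines, point it at /etc/hosts; the IPs are the ones from this setup):

```shell
# Append each cluster entry only if it is not already present (idempotent).
hosts=$(mktemp)   # stand-in for /etc/hosts during a dry run
for entry in "172.16.1.41 backup" "172.16.1.17 web03" "172.16.1.18 web04" "172.16.1.21 cache01"; do
    grep -qF "$entry" "$hosts" || echo "$entry" >> "$hosts"
done
# Running the loop a second time must not create duplicates:
for entry in "172.16.1.41 backup" "172.16.1.17 web03" "172.16.1.18 web04" "172.16.1.21 cache01"; do
    grep -qF "$entry" "$hosts" || echo "$entry" >> "$hosts"
done
wc -l < "$hosts"   # still 4 lines
```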
Run the same steps on both backup and web03:
yum install -y centos-release-gluster    # adds the GlusterFS yum repository
yum -y install glusterfs-server
systemctl start glusterd
Note: you can switch to a faster mirror:
sed -i 's#http://mirror.centos.org#https://mirrors.shuosc.org#g' /etc/yum.repos.d/CentOS-Gluster-3.12.repo
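The substitution can be dry-run on a sample line before touching the real repo file (the baseurl below is an assumed example, not copied from the actual repo file); afterwards, clear the yum cache so the new mirror takes effect:

```shell
# Dry-run the mirror rewrite on one sample baseurl line.
echo 'baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-3.12/' \
  | sed 's#http://mirror.centos.org#https://mirrors.shuosc.org#g'
# Once the real repo file is edited, refresh the metadata:
#   yum clean all && yum makecache
```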
[root@backup ~]# gluster peer probe web03
peer probe: success.
(To remove a node from the trusted pool later, the command is: gluster peer detach <hostname>)
Check that the trusted pool was established:
[root@web03 gv0]# gluster peer status
Number of Peers: 1
Hostname: backup
Uuid: 55a0673a-fc85-45de-92f4-ffe4c6918806
State: Peer in Cluster (Connected)
[root@backup ~]# gluster peer status
Number of Peers: 1
Hostname: web03
Uuid: f89a8665-f94a-4246-a626-4a9e0dc6be49
State: Peer in Cluster (Connected)
Create the data directories on the servers:
[root@web03 ~]# mkdir -p /data/gv0
[root@backup ~]# mkdir -p /data/gv0
Create a distributed volume; here it is named test (any name works):
[root@lb01 ~]# gluster volume create test 172.16.1.5:/data/exp1/ 172.16.1.6:/data/exp1/ force
volume create: test: success: please start the volume to access data
Note: the trailing force allows the bricks to be created on the system (root) partition; by default, Gluster refuses to place bricks there.
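Alternatively, force becomes unnecessary if the bricks live on a dedicated data partition. A hedged sketch, assuming a spare disk /dev/sdb on each server (not part of the original setup):

```shell
# Prepare a dedicated brick partition (run on each server).
mkfs.xfs /dev/sdb
mkdir -p /data/brick1
mount /dev/sdb /data/brick1
echo '/dev/sdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
# Bricks on a non-root partition no longer need the force flag:
gluster volume create test 172.16.1.5:/data/brick1/gv0 172.16.1.6:/data/brick1/gv0
```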
View the volume info; the output should be identical on every server:
[root@lb01 ~]# gluster volume info test
Volume Name: test
Type: Distribute
Volume ID: 03f9176c-4e7d-49dc-a8f5-9f9f8cc08867
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 172.16.1.5:/data/exp1
Brick2: 172.16.1.6:/data/exp1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Start the volume:
[root@lb01 ~]# gluster volume start test
volume start: test: success
[root@cache01 ~]# yum install centos-release-gluster -y
[root@cache01 ~]# yum install -y glusterfs glusterfs-fuse
[root@cache01 ~]# mount -t glusterfs 172.16.1.17:/test /mnt/
When running the mount command on the client, what follows the colon after the IP or hostname is the volume name, not a directory!
I went back and forth on this for two days, and every failure traced back to the mount command. If you put a brick directory after the colon instead of the volume name, you get this error:
[root@cache01 ~]# mount.glusterfs 172.16.1.17:/data/gv0 /mnt/
Mount failed. Please check the log file for more details.
The reason for the failure is recorded in /var/log/glusterfs/mnt.log.
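To make the client mount survive reboots, an fstab entry can be used. A minimal sketch with the server and volume name from the working mount above (the mount options are a common choice, not taken from the original post):

```shell
# Hypothetical persistent-mount entry; _netdev delays mounting until the network is up.
fstab_line="172.16.1.17:/test /mnt glusterfs defaults,_netdev 0 0"
echo "$fstab_line"
# To persist it (as root):  echo "$fstab_line" >> /etc/fstab && mount -a
```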
Create a directory from the client:
[root@cache01 mnt]# mkdir nihao
[root@cache01 mnt]# ll
total 4
drwxr-xr-x 2 root root 4096 Mar 27 2018 nihao
[root@backup gv0]# ll    # present on the backup server, as expected
total 0
drwxr-xr-x 2 root root 6 Mar 27 03:18 nihao
[root@web03 gv0]# ll    # present on the web03 server, as expected
total 0
drwxr-xr-x 2 root root 6 Mar 18 00:43 nihao
At this point, a basic GlusterFS service is up and working.
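A few quick health checks can confirm the volume stays healthy (commands assume the test volume and the /mnt mount point used above):

```shell
gluster volume status test     # on a server: brick processes and ports
gluster volume info test       # on a server: configuration summary
df -h /mnt                     # on the client: capacity of the mounted volume
```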
Create the data directories:
[root@lb01 ~]# mkdir /data/exp3
[root@lb02 ~]# mkdir /data/exp4
Create a replicated volume named repl:
[root@lb01 data]# gluster volume create repl replica 2 transport tcp 172.16.1.5:/data/exp3/ 172.16.1.6:/data/exp4 force
volume create: repl: success: please start the volume to access data
View the volume info:
[root@lb01 data]# gluster volume info repl
Volume Name: repl
Type: Replicate
Volume ID: 98182696-e065-4bdb-9a11-f6a086092983
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.1.5:/data/exp3
Brick2: 172.16.1.6:/data/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Start the volume:
[root@lb01 data]# gluster volume start repl
volume start: repl: success
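To see the replication in action, a file written through a client mount should appear in both bricks. A hedged sketch (mounting on /mnt is an assumption; the original text does not mount this volume):

```shell
# On a client:
mount -t glusterfs 172.16.1.5:/repl /mnt
touch /mnt/replica-test
# The same entry should now exist in each brick:
ls /data/exp3/replica-test     # on lb01
ls /data/exp4/replica-test     # on lb02
```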
Create the data directories:
[root@lb01 data]# mkdir /data/exp5
[root@lb02 data]# mkdir /data/exp6
Create a striped volume named raid0:
[root@lb01 data]# gluster volume create raid0 stripe 2 transport tcp 172.16.1.5:/data/exp5/ 172.16.1.6:/data/exp6 force
volume create: raid0: success: please start the volume to access data
View the volume info:
[root@lb01 data]# gluster volume info raid0
Volume Name: raid0
Type: Stripe
Volume ID: f8578fab-11c7-4aaa-b58f-b1825233cfd6
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.1.5:/data/exp5
Brick2: 172.16.1.6:/data/exp6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Start the volume:
[root@lb01 data]# gluster volume start raid0
volume start: raid0: success
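Striping can be checked the same way: a large file written through a client mount should be split roughly evenly across the two bricks. A hedged sketch (the mount point and file size are assumptions, not from the original post):

```shell
# On a client:
mount -t glusterfs 172.16.1.5:/raid0 /mnt
dd if=/dev/zero of=/mnt/bigfile bs=1M count=100
# Each brick should hold roughly half of the 100 MB:
du -sh /data/exp5              # on lb01
du -sh /data/exp6              # on lb02
```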
Original post: http://blog.51cto.com/13520772/2091142