
RHCS Cluster Installation and Configuration

Posted: 2014-08-07 19:23:01

Tags: rhcs ricci luci gfs2

A cluster is a group of computers running the Red Hat High Availability Add-On.

Lab environment: RHEL 6.5, iptables and SELinux disabled

Lab hosts: 192.168.2.251 (luci node)

                192.168.2.137 and 192.168.2.138 (ricci nodes)

All three hosts must be configured with the High Availability yum repositories:

[base]
name=Instructor Server Repository
baseurl=http://192.168.2.251/pub/rhel6.5
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# HighAvailability rhel6.5
[HighAvailability]
name=Instructor HighAvailability Repository
baseurl=http://192.168.2.251/pub/rhel6.5/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# LoadBalancer packages
[LoadBalancer]
name=Instructor LoadBalancer Repository
baseurl=http://192.168.2.251/pub/rhel6.5/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# ResilientStorage
[ResilientStorage]
name=Instructor ResilientStorage Repository
baseurl=http://192.168.2.251/pub/rhel6.5/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# ScalableFileSystem
[ScalableFileSystem]
name=Instructor ScalableFileSystem Repository
baseurl=http://192.168.2.251/pub/rhel6.5/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

All three hosts must also have their clocks synchronized (e.g. with NTP).

192.168.2.251 must be able to resolve itself and the other two hosts; 137 and 138 must each resolve themselves and each other (e.g. via /etc/hosts).
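A minimal /etc/hosts sketch covering the resolution requirements above; the hostnames (node1.example.com and so on) are assumptions, since the original does not show them:

```shell
# Sketch: the /etc/hosts entries each node needs (hostnames are assumed,
# not taken from the original article). Written to a demo path here;
# on the real hosts you would append these lines to /etc/hosts itself.
cat > /tmp/hosts.demo <<'EOF'
192.168.2.251   luci.example.com    luci
192.168.2.137   node1.example.com   node1
192.168.2.138   node2.example.com   node2
EOF
grep node1 /tmp/hosts.demo
```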

Install ricci on 137 and 138 (yum install ricci -y), and luci on 251 (yum install luci -y).

After installation, set a password for the ricci user, then start ricci and set both ricci and luci to start at boot:

#passwd ricci

#/etc/init.d/ricci start

#chkconfig ricci on

#chkconfig luci on

Start luci:

#/etc/init.d/luci start

The startup output prints the management URL (luci listens on https port 8084 by default). Open that link in Firefox, and log in with the root user and password to reach the web configuration interface.

Add a cluster. While creating the cluster, luci automatically installs the required packages on both nodes, and both nodes reboot.

After creation, confirm that the cluster daemons (cman, rgmanager, ricci, and modclusterd) are running.

On the command line, use clustat to check the status of both nodes.

Under /etc/cluster on nodes 137 and 138, cluster.conf and cman-notify.d are generated automatically.
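For reference, a sketch of roughly what the generated cluster.conf looks like at this stage; the cluster name, node names, and node ids are assumptions, not values from the original article:

```shell
# Sketch of the generated /etc/cluster/cluster.conf (names are assumed).
# Written to a demo path; on a real node this file is managed by luci.
cat > /tmp/cluster.conf.demo <<'EOF'
<?xml version="1.0"?>
<cluster config_version="1" name="mycluster">
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1"/>
    <clusternode name="node2.example.com" nodeid="2"/>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
</cluster>
EOF
grep "clusternode name" /tmp/cluster.conf.demo
```

The two_node="1" / expected_votes="1" combination is how cman handles quorum in a two-node cluster.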

Add a fence device.

On 251, install the fence-virtd, fence-virtd-libvirt, and fence-virtd-multicast packages:

# yum install fence-virtd fence-virtd-libvirt fence-virtd-multicast -y

Then run the interactive configuration:
# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:
Available backends:
libvirt 0.1
Available listeners:
multicast 1.0
Listener modules are responsible for accepting requests
from fencing clients.
Listener module [multicast]:
The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.
The multicast address is the address that a client will use to
send fencing requests to fence_virtd.
Multicast IP Address [225.0.0.12]:
Using ipv4 as family.
Multicast IP Port [1229]:
Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.
Interface [none]: br0
The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.
Key File [/etc/cluster/fence_xvm.key]:
Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.
Backend module [libvirt]:
The libvirt backend module is designed for single desktops or
servers. Do not use in environments where virtual machines
may be migrated between hosts.
Libvirt URI [qemu:///system]:
Configuration complete.
=== Begin Configuration ===
backends {
libvirt {
uri = "qemu:///system";
}
}
listeners {
multicast {
key_file = "/etc/cluster/fence_xvm.key";
interface = "br0";
port = "1229";
address = "225.0.0.12";
family = "ipv4";
}
}
fence_virtd {
backend = "libvirt";
listener = "multicast";
module_path = "/usr/lib64/fence-virt";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y

Apart from the answers changed above (the interface, set to br0; shown in red in the original), just press Enter at every other prompt to accept the defaults.

Then, on 251, generate the key file under /etc/cluster with the following command:

#dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1

Copy the generated fence_xvm.key to /etc/cluster on 137 and 138 with scp, then start the fence daemon:

#/etc/init.d/fence_virtd start

In the web interface, add the fence device under Fence Devices.

Then go into each cluster node, add a fence method, and add a fence instance to that method.

The Domain field of the fence instance is the UUID of that node's virtual machine.

Test fencing by isolating one of the nodes with the fence command (e.g. fence_node <nodename>), and check the fence log under /var/log/cluster/:

#cat /var/log/cluster/fence.log

Add a failover domain (Failover Domain); the name is up to you.

Add resources (Resources):

A floating IP address (the virtual IP the service will be reached on; 192.168.2.111 here).

A script resource (the httpd init script, /etc/init.d/httpd).

Install httpd on both nodes, 137 and 138.

Add a service group (Service Group) and add both resources to the group.

Then start the group you just added. Whichever node the group runs on will have httpd started on it automatically.

Write a test file under /var/www/html on 137:

#echo `hostname` > index.html

Then browse to the floating IP you bound, 192.168.2.111, and you will see your test page.

Use clusvcadm -e www and clusvcadm -d www to enable and disable the www service group, and clusvcadm -r www -m <member> to relocate the service group to another node.

Add shared storage.

On 251, carve out an LVM logical volume to export.

On 251, install the iSCSI target tools:

# yum install scsi-target-utils.x86_64 -y

On 137 and 138, install the initiator tools:

#yum install iscsi-initiator-utils -y

Then edit /etc/tgt/targets.conf to export the logical volume, and start tgtd:

/etc/init.d/tgtd start
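The targets.conf stanza (shown only as a screenshot in the original) typically looks like the sketch below; the target IQN, the backing-store path, and the initiator-address restrictions are assumptions:

```shell
# Sketch of /etc/tgt/targets.conf exporting the LV (IQN and LV path are
# assumed). Written to a demo path here; on 251 edit /etc/tgt/targets.conf.
cat > /tmp/targets.conf.demo <<'EOF'
<target iqn.2014-08.com.example:server.target1>
    backing-store /dev/vg0/iscsi
    initiator-address 192.168.2.137
    initiator-address 192.168.2.138
</target>
EOF
grep backing-store /tmp/targets.conf.demo
```

Restricting by initiator-address keeps hosts other than the two cluster nodes from logging in to the target.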

On 137 and 138, discover and log in to the shared target with iscsiadm, typically:

#iscsiadm -m discovery -t sendtargets -p 192.168.2.251

#iscsiadm -m node -l

Create a primary partition on this disk, then format it:

#mkfs.ext4 /dev/sda1

In the web interface, add a Filesystem resource for this partition.

As before, add it to the service group.

#clusvcadm -d www

The device is mounted on /var/www/html (the filesystem resource's mount point).

Start the service:

#clusvcadm -e www

#clustat     check whether the service started successfully

Network filesystem: GFS2

First stop the service group:

clusvcadm -d www

Then delete the wwwdata resource in the web interface.

On the shared disk, create a partition with partition id 8e (Linux LVM).

Then turn it into a clustered LVM volume (pvcreate the partition, vgcreate a clustered volume group, lvcreate the logical volume; the exact names appear only in the original screenshots).

Configure /etc/lvm/lvm.conf: set locking_type = 3 so that LVM uses clustered locking (clvmd).

Format the partition as GFS2 and mount it. A typical invocation (the cluster name, filesystem name, and volume path are placeholders, since the original showed them only in a screenshot):

#mkfs.gfs2 -p lock_dlm -t <clustername>:<fsname> -j 2 /dev/<vg>/<lv>

Set the SELinux security context on the mount point.

Write this partition into /etc/fstab so it is mounted at boot.
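A sketch of the fstab entry, assuming the hypothetical /dev/clustervg/demo GFS2 volume mounted on /var/www/html:

```shell
# Sketch: the /etc/fstab line for the GFS2 volume (device name is assumed).
# Written to a demo file here, not the real /etc/fstab.
cat > /tmp/fstab.demo <<'EOF'
/dev/clustervg/demo  /var/www/html  gfs2  _netdev,defaults  0 0
EOF
grep gfs2 /tmp/fstab.demo
```

The _netdev option keeps the mount from being attempted before the network (and the iSCSI session) is up.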

In the web interface, add a global filesystem (GFS2) resource and add it to the service group.

And with that, the basic RHCS configuration is done!





This article comes from the "9244137" blog; please keep this attribution: http://9254137.blog.51cto.com/9244137/1537016
