Tags: sles ha
I: Lab environment
Node | OS          | IP (eth0)     | DRBD_IP (eth1) | VIP            | DRBD_DISK
web1 | SLES 11 SP2 | 192.168.10.11 | 172.16.1.1     | 192.168.10.100 | /dev/sdb
web2 | SLES 11 SP2 | 192.168.10.12 | 172.16.1.2     | 192.168.10.100 | /dev/sdb
Notes:
Both web1 and web2 have a full OS installation (GNOME desktop environment)
The IP addresses are configured as shown in the table above
Both web1 and web2 have an additional 2 GB disk attached for DRBD use
The hosts file entries have been added:
192.168.10.11 web1
192.168.10.12 web2
Passwordless SSH access between the nodes has been set up
The clocks of the two nodes are synchronized
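A quick way to spot-check that the two clocks agree (a simple sketch run from web1):
web1:~ # for i in 1 2; do ssh web$i date; done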
II: Install the SLES HA extension (which includes DRBD) and apache2
1. Copy the downloaded SLE-HA-11-SP2-x86_64-GM-CD1.iso to any directory; here it is /root
2. Run yast2 repositories to open the software repository configuration
3. Click "Add", select "Local ISO Image", then click "Next", as shown below
4. Enter the repository name and the path to the ISO, then keep clicking "Next" until the repository has been added, as shown below
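The repository can also be added without the GUI; a sketch using zypper's iso media type, assuming the ISO stays under /root and reusing the alias sles11sp2-ha that appears later in this post:
web1:~ # zypper addrepo "iso:/?iso=/root/SLE-HA-11-SP2-x86_64-GM-CD1.iso" sles11sp2-ha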
5. View the configured repositories and refresh them
web1:~ # zypper lr
# | Alias                                             | Name                                              | Enabled | Refresh
--+---------------------------------------------------+---------------------------------------------------+---------+--------
1 | SUSE-linux-Enterprise-Server-11-sp2 11.2.2-1.234  | SUSE-linux-Enterprise-Server-11-sp2 11.2.2-1.234  | Yes     | No
2 | sles11sp2-ha                                       | sles11sp2-ha                                      | Yes     | No
web1:~ # zypper ref
Repository 'sles11sp2-ha' is up to date.
All repositories have been refreshed.
6. Run yast2 sw_single, select "High Availability" under "Patterns", then click "Accept" to install the "High Availability" pattern, as shown below
Repeat the same steps on web2
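If you prefer the command line, the pattern can likely be installed with zypper on both nodes instead; a sketch that assumes the HA pattern is named ha_sles (verify the name with zypper search -t pattern):
web1:~ # for i in 1 2; do ssh web$i "zypper -n install -t pattern ha_sles"; done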
7. Install apache2
web1:~ # for i in 1 2; do ssh web$i zypper -n install apache2; done
8. Prevent the apache2 service from starting automatically at boot (the cluster will manage it later)
web1:~ # for i in 1 2; do ssh web$i chkconfig apache2 off; done
III: Configure DRBD
1. Disable the drbd service from starting at boot, then copy the sample configuration file drbd.conf to /etc
web1:~ # for i in 1 2; do ssh web$i chkconfig drbd off; done
web1:~ # for i in 1 2; do ssh web$i cp /usr/share/doc/packages/drbd-utils/drbd.conf /etc/drbd.conf; done
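The copied drbd.conf is only a wrapper that pulls in the files edited in the following steps; on DRBD 8.4 it typically contains just two include directives:
include "drbd.d/global_common.conf";
include "drbd.d/*.res";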
2. Global common configuration
web1:~ # cd /etc/drbd.d/
web1:/etc/drbd.d # cat global_common.conf    # the modified or added parts (shown in red in the original post) are usage-count no, protocol C, cram-hmac-alg, shared-secret and the syncer rate
global {
        # usage-count yes;
        usage-count no;
        # minor-count dialog-refresh disable-ip-verification
}
common {
        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }
        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }
        options {
                # cpu-mask on-no-data-accessible
        }
        disk {
                # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
                # disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
        }
        net {
                # protocol timeout max-epoch-size max-buffers unplug-watermark
                # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
                # after-sb-1pri after-sb-2pri always-asbp rr-conflict
                # ping-timeout data-integrity-alg tcp-cork on-congestion
                # congestion-fill congestion-extents csums-alg verify-alg
                # use-rle
                protocol C;
                cram-hmac-alg sha1;
                shared-secret "wjcyf";
        }
        syncer {
                rate 150M;
        }
}
3. Add a resource named r0
web1:/etc/drbd.d # touch r0.res
web1:/etc/drbd.d # cat r0.res
resource r0 {
device /dev/drbd0;
disk /dev/sdb;
meta-disk internal;
on web1 {
address 172.16.1.1:7789;
}
on web2 {
address 172.16.1.2:7789;
}
}
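Before creating metadata, the resource definition can be sanity-checked with drbdadm; if the file parses cleanly, the configuration is printed back, otherwise the offending line is reported:
web1:/etc/drbd.d # drbdadm dump r0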
4. Copy global_common.conf and r0.res to the matching directory on web2
web1:/etc/drbd.d # scp global_common.conf r0.res web2:/etc/drbd.d/
5. Create metadata for resource r0
web1:~ # drbdadm create-md r0
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
web1:~ # ssh web2 drbdadm create-md r0
NOT initializing bitmap
Writing meta data...
initializing activity log
New drbd meta data block successfully created.
6. Start the drbd service
web1:~ # /etc/init.d/drbd start
Starting DRBD resources: [
create res: r0
prepare disk: r0
adjust disk: r0
adjust net: r0
]
web2:~ # /etc/init.d/drbd start
Starting DRBD resources: [
create res: r0
prepare disk: r0
adjust disk: r0
adjust net: r0
]
7. Check the current status
web1:~ # drbd-overview
0:r0/0 Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
8. Make web1 the primary node to start the initial data sync
web1:~ # drbdadm -- --overwrite-data-of-peer primary r0
web1:~ # cat /proc/drbd
version: 8.4.1 (api:1/proto:86-100)
GIT-hash:91b4c048c1a0e06777b5f65d312b38d47abaea80 build by phil@fat-tyre, 2011-12-20 12:43:15
0:cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:536912 nr:0 dw:0 dr:541336 al:0 bm:32 lo:0 pe:2 ua:4 ap:0 ep:1 wo:b oos:1561500
[====>...............] sync'ed: 25.6% (1561500/2097052)K
finish: 0:00:14 speed: 107,108 (107,108) K/sec
web1:~ # cat /proc/drbd
version: 8.4.1 (api:1/proto:86-100)
GIT-hash:91b4c048c1a0e06777b5f65d312b38d47abaea80 build by phil@fat-tyre, 2011-12-20 12:43:15
0:cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:2097052 nr:0 dw:0 dr:2097716 al:0 bm:128 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
The ds:UpToDate/UpToDate state above shows that the initial sync has completed
9. Create a filesystem, mount it on /srv/www/htdocs, place a test page in it, then unmount it
web1:~ # mkfs.xfs /dev/drbd0
web1:~ # mount /dev/drbd0 /srv/www/htdocs
web1:~ # echo "SuSE Test Page">/srv/www/htdocs/index.html
web1:~ # umount /srv/www/htdocs
10. Demote web1 back to secondary and stop the drbd service on both nodes
web1:~ # drbdadm secondary r0
web1:~ # for i in 1 2; do ssh web$i /etc/init.d/drbd stop; done
DRBD configuration is now complete
IV: Configure the HA stack
1. On either node (here web1), run yast2 cluster to open the cluster communication channel dialog; select the interface used for node communication, enter the multicast address and port, check "Auto Generate Node ID", then click "Next", as shown below
2. On the cluster security page, just click "Next", as shown below
3. On the cluster service page, just click "Next", as shown below
4. Keep clicking "Next" until the wizard finishes
5. Copy the configuration file /etc/corosync/corosync.conf to the matching directory on web2
web1:~ # scp /etc/corosync/corosync.conf web2:/etc/corosync/
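For reference, the totem section that the YaST dialog writes into /etc/corosync/corosync.conf looks roughly like the sketch below; the bindnetaddr assumes the cluster communicates over the eth1 network (172.16.1.0), and the multicast address 226.94.1.1 and port 5405 are placeholder values rather than values from the original post:
totem {
        version: 2
        interface {
                ringnumber: 0
                # network used for cluster communication (eth1 in this setup)
                bindnetaddr: 172.16.1.0
                mcastaddr: 226.94.1.1    # placeholder multicast address
                mcastport: 5405          # placeholder port
        }
}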
6. Start the openais service
web1:~ # rcopenais start
Starting OpenAIS/Corosync daemon(corosync): starting... OK
web1:~ # ssh web2 rcopenais start
Starting OpenAIS/Corosync daemon(corosync): starting... OK
7. Enable the openais service to start at boot
web1:~ # for i in 1 2; do ssh web$i chkconfig openais on; done
web1:~ # for i in 1 2; do ssh web$i chkconfig openais --list; done
openais 0:off 1:off 2:off 3:on 4:off 5:on 6:off
openais 0:off 1:off 2:off 3:on 4:off 5:on 6:off
8. Check the current cluster status
web1:~ # crm status
============
Last updated: Wed Jul 1 17:06:26 2015
Last change: Wed Jul 1 17:01:16 2015 by hacluster via crmd on web1
Stack: openais
Current DC: web1 - partition with quorum
Version: 1.1.6-b988976485d15cb702c9307df55512d323831a5e
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ web2 web1 ]
The output above shows that web1 and web2 are both online and that the DC is web1, but no resources have been configured yet
V: Configure and add resources
1. Set the hacluster user's password to admin
web1:~ # for i in 1 2; do ssh web$i 'echo admin | passwd --stdin hacluster'; done
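Note that --stdin is not supported by every passwd build; if it is missing on your install, chpasswd is a common alternative (a sketch setting the same password):
web1:~ # for i in 1 2; do ssh web$i "echo hacluster:admin | chpasswd"; done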
2. On web1, run hb_gui & to open the cluster configuration window, click the connect button in the top-left corner, select the hacluster user and enter the password admin, as shown below
3. Click "OK" to log in; after logging in the window looks like the figure below
As the figure also shows, there are no resources yet
4. Click "CRM Config", change the "No Quorum Policy" value on the right to "ignore", untick "Stonith Enabled", then click "Apply" in the bottom-right corner to save the configuration, as shown below
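The same two cluster properties can also be set from the crm shell instead of the GUI; the equivalent commands are roughly:
web1:~ # crm configure property no-quorum-policy=ignore
web1:~ # crm configure property stonith-enabled=false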
5. Add a resource: click "Resources", then click "Add" on the right, choose "Master", then click "OK", as shown below
6. In the dialog that appears, enter the ID "ms_webdrbd" and tick the three checkboxes at the bottom, then click "Forward", as shown below
7. In the next dialog, just click "OK", as shown below
8. Add the child resource for ms_webdrbd, as shown below
9. Click "Forward" and set the instance attribute drbd_resource to "r0", leaving the rest at their defaults, as shown below
10. Click "Apply" twice to save the configuration, then click "Management" on the left to view the configured resource, as shown below
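For readers without the GUI, a crm shell sketch that creates roughly the same master/slave resource; the child resource name webdrbd and the monitor intervals are my own choices, and only the ID ms_webdrbd and drbd_resource=r0 come from the steps above:
web1:~ # crm configure primitive webdrbd ocf:linbit:drbd \
        params drbd_resource=r0 \
        op monitor interval=30s role=Master \
        op monitor interval=40s role=Slave
web1:~ # crm configure ms ms_webdrbd webdrbd \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true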
11. Add a group named websrv containing webstore, apache2 and webip: click "Resources" – "Add" and choose "Group" as the resource type, as shown below
12. Click "OK" and enter the group name "websrv", as shown below
13. Click "Forward", then click "OK", as shown below
14. Add the webstore child resource
15. Then add the apache2 child resource
16. Then add the webip child resource
17. Finally click "Cancel" to stop adding child resources
18. Click "Management" to view all the resources that have been added
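Again as a non-GUI reference, a crm shell sketch of what the group roughly corresponds to; the device, mount point, filesystem type and VIP come from earlier steps, while the exact agents and extra parameters the GUI uses may differ:
web1:~ # crm configure primitive webstore ocf:heartbeat:Filesystem \
        params device=/dev/drbd0 directory=/srv/www/htdocs fstype=xfs
web1:~ # crm configure primitive apache2 lsb:apache2
web1:~ # crm configure primitive webip ocf:heartbeat:IPaddr2 params ip=192.168.10.100
web1:~ # crm configure group websrv webstore apache2 webip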
19. Do not start anything yet; click "Constraints" – "Add" and choose "Resource Colocation", as shown below
20. Set the parameters as shown below (webstore must run on the node where ms_webdrbd holds the Master role)
21. Click "OK" to save, then add a "Resource Order" constraint, as shown below
22. Set the parameters as shown below (websrv may only start after ms_webdrbd has been promoted)
23. Finally click "OK" to save the configuration
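The two constraints map to crm shell commands along these lines (the constraint IDs are placeholders of my own):
web1:~ # crm configure colocation webstore_on_drbd_master inf: webstore ms_webdrbd:Master
web1:~ # crm configure order websrv_after_drbd inf: ms_webdrbd:promote websrv:start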
24. Start all resources: first start ms_webdrbd by right-clicking "ms_webdrbd" and selecting "start", as shown below
25. Then start websrv (using the same method as for ms_webdrbd)
26. The state after all resources have started is shown in the figure below
This completes the configuration and the addition of resources
VI: High availability testing
1. Right-click "web1" in the figure above and select "standby", as shown below
2. The resources fail over to web2, as shown below
3. Bring web1 back online by right-clicking "web1" and selecting "Active"; the resources do not fail back to web1 (because DRBD is involved, a split-brain is best resolved manually when it occurs, so it is better not to configure automatic failback)
4. Throughout the resource switchover, requests to http://192.168.10.100 keep returning the "SuSE Test Page"
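A simple way to watch this from a client machine is to poll the VIP in a loop during the switchover (assuming curl is installed; wget -qO- works the same way):
client:~ # while true; do curl -s http://192.168.10.100; sleep 1; done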
This completes the SLES 11 HA configuration
Original post: http://wjcaiyf.blog.51cto.com/7105309/1670049