This section covers the following work: (1) selection of the high-availability architecture
IB (InfiniBand) virtualization
Dual-controller disk-array support
Disk-array MMP parameter tuning
Lustre service script adaptation
Lustre resource script
Lwfsd resource script
Adding virtual-machine services to Pacemaker
Software installation [all]
pacemaker-libs-1.1.12-4.el6.x86_64
pacemaker-1.1.12-4.el6.x86_64
pacemaker-cts-1.1.12-4.el6.x86_64
pacemaker-libs-devel-1.1.12-4.el6.x86_64
pacemaker-cluster-libs-1.1.12-4.el6.x86_64
pacemaker-remote-1.1.12-4.el6.x86_64
pacemaker-cli-1.1.12-4.el6.x86_64
pacemaker-doc-1.1.12-4.el6.x86_64
pcs-0.9.123-9.0.1.el6.centos.x86_64
Dual-controller disk-array MMP setup: set mmp_update_interval to 60 s [all]
tune2fs -O mmp /dev/sdf
tune2fs -E mmp_update_interval=60 -f /dev/sdf
Clean up leftovers from the old environment [optional] [all]
rm -fr /etc/cluster/cluster.conf
rm -fr /var/lib/pacemaker/cib/*
rm -fr /var/lib/pcsd/*
Set a password for the hacluster user on every node [on every node to be added]
echo 123456 |passwd --stdin hacluster
Disable corosync start-on-boot [all]
chkconfig corosync off
Enable the pcsd service on boot [on every node to be added]
chkconfig pcsd on
Then start the following service on each node so it can accept pcs commands [on every node to be added]
service pcsd start
Node authentication script [run on one node already in the cluster, to add a new node]. Command format: sh node_auth.sh gio033
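The node_auth.sh script itself is not shown in the original; the sketch below is a hypothetical reconstruction, assuming the hacluster password set above (123456) and pcs 0.9 syntax. With DRY_RUN set, the pcs commands are echoed instead of executed, so the logic can be exercised without a live cluster.

```shell
# Hypothetical reconstruction of node_auth.sh (the original script is not
# shown). With DRY_RUN set, the pcs commands are echoed, not executed.
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

node_auth() {
    node="$1"
    [ -n "$node" ] || { echo "usage: node_auth <node-name>" >&2; return 1; }
    # Authenticate the new node using the hacluster account set up above.
    run pcs cluster auth "$node" -u hacluster -p 123456
    # Join it to the running cluster, starting and enabling it.
    run pcs cluster node add "$node" --start --enable
}
# Usage: node_auth gio033
```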
Join the nodes into one cluster [one]
pcs cluster setup --enable --name clustre raid-dev vm02 vm03 [initial setup]
pcs cluster node add gio031 --start --enable [adding later nodes]
Start the cluster [all]
pcs --debug cluster start
Set cluster properties [one]
pcs property set start-failure-is-fatal=false
pcs property set stonith-enabled=false
pcs property set symmetric-cluster=false
Set resource defaults [one]
pcs resource defaults migration-threshold=5
pcs resource defaults failure-timeout=10s
Create resources [one]
Create resource colocations [all]; on each node to be added, run: sh colocation.sh
Set the Lustre resources' order constraints to optional
Disable all resources [one]
for i in `pcs resource|awk '{print $1}'`;do pcs resource disable $i;done
Enable all resources [one]
for i in `pcs resource|awk '{print $1}'`;do pcs resource enable $i;done
Clean up the state of all resources
for i in `pcs resource|awk '{print $1}'`;do pcs resource cleanup $i;done
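The three loops above share one pattern. A small helper can factor it out; this is a sketch assuming `pcs resource` lists one resource per line with its name in the first field:

```shell
# Extract resource names (first field) from `pcs resource` output,
# assuming one resource per line.
resource_names() { awk '{print $1}'; }

# Apply one pcs action (disable | enable | cleanup) to every resource.
for_all_resources() {
    action="$1"
    pcs resource | resource_names | while read -r r; do
        pcs resource "$action" "$r"
    done
}
# for_all_resources disable   # stop everything
# for_all_resources enable    # start everything
# for_all_resources cleanup   # clear failure state
```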
yum install -y python-virtinst virt-viewer virt-manager libvirt-client libvirt qemu-img qemu-kvm gpxe-roms-qemu
Modify the host network configuration files as follows:
/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="none"
IPV6INIT="no"
MTU="1500"
NM_CONTROLLED="no"
ONBOOT="yes"
BRIDGE=br0
/etc/sysconfig/network-scripts/ifcfg-br0
DEVICE="br0"
BOOTPROTO="static"
IPV6INIT="no"
MTU="1500"
NM_CONTROLLED="no"
ONBOOT="yes"
IPADDR="20.0.11.16"
GATEWAY="20.0.255.254"
NETMASK="255.0.0.0"
TYPE="Bridge"
qemu-img create /root/kvmtem.img 40G
virt-install --name=kvmtem --ram 8192 --vcpus=8 -f /root/kvmtem.img --cdrom rhel-6.4.iso --network bridge=br0 --force --graphics vnc,listen=*,port=5908 --noreboot
This automatically launches the KVM graphical installer; shut the virtual machine down once installation finishes.
virsh attach-device kvmtem ./mellax02.xml --config
virsh attach-device kvmtem ./mellax01.xml --config
An example mellax*.xml follows; the bus and slot values can be obtained with lspci | grep Mella
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x82' slot='0x01' function='0x0'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
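The host-side address in the stanza above maps directly onto an `lspci` address of the form bus:slot.function. The helper below is a hypothetical convenience for generating such files; it assumes the default PCI domain 0000 and hard-codes the guest-side address (slot 0x05) as in the example.

```shell
# Generate a <hostdev> stanza from an lspci address like "82:00.0".
# Hypothetical helper; guest-side slot 0x05 is fixed, as in the example.
mk_hostdev() {
    addr="$1"              # e.g. from: lspci | grep Mella
    bus=${addr%%:*}        # "82"
    rest=${addr#*:}        # "00.0"
    slot=${rest%%.*}       # "00"
    func=${rest#*.}        # "0"
    cat <<EOF
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x${bus}' slot='0x${slot}' function='0x${func}'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
EOF
}
# mk_hostdev 82:00.0 > mellax01.xml
```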
Start the virtual machine again and install the Lustre file system, the IB driver, lwfsd, and the lwfsd Pacemaker resource script (copy it into /usr/lib/ocf/resource.d/heartbeat/). Then shut the virtual machine down. This VM is used only as a clone template, so install all required software first.
Install the virtualization packages on the host that will run the clones as well:
yum install -y python-virtinst virt-viewer virt-manager libvirt-client libvirt qemu-img qemu-kvm gpxe-roms-qemu
scp 18.0.11.1:/root/kvmtem.img /root/vnio008.img
When many virtual machines are to be cloned, this step can be scripted and run via C3.
mkdir /mnt/loop
mount -o loop,offset=$((512 * 2099200)) ./vnio008.img /mnt/loop
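The offset passed to mount is the partition's start sector multiplied by the 512-byte sector size; 2099200 is assumed here to be the root partition's start sector as reported by `fdisk -l` on the image:

```shell
# Byte offset for mounting a partition inside a raw image:
#   offset = start_sector * sector_size
# 2099200 is assumed to be the root partition's start sector
# (check with: fdisk -l ./vnio008.img).
SECTOR_SIZE=512
START_SECTOR=2099200
OFFSET=$((SECTOR_SIZE * START_SECTOR))
echo "$OFFSET"   # value used in: mount -o loop,offset=$OFFSET ...
```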
rm -f /mnt/loop/etc/udev/rules.d/70-persistent-net.rules
sed -i "s/^HOSTNAME.*$/HOSTNAME=vnio008/" /mnt/loop/etc/sysconfig/network
sed -i "s/^IPADDR=.*$/IPADDR=20.0.211.8/" /mnt/loop/etc/sysconfig/network-scripts/ifcfg-eth0
sed -i "s/^UUID/# UUID/" /mnt/loop/etc/sysconfig/network-scripts/ifcfg-eth0
sed -i "s/^HWADDR/# HWADDR/" /mnt/loop/etc/sysconfig/network-scripts/ifcfg-eth0
sed -i "s/^IPADDR=.*$/IPADDR=19.0.211.8/" /mnt/loop/etc/sysconfig/network-scripts/ifcfg-ib0
sed -i "s/^UUID/# UUID/" /mnt/loop/etc/sysconfig/network-scripts/ifcfg-ib0
sed -i "s/^HWADDR/# HWADDR/" /mnt/loop/etc/sysconfig/network-scripts/ifcfg-ib0
sed -i "s/^IPADDR=.*$/IPADDR=18.0.211.8/" /mnt/loop/etc/sysconfig/network-scripts/ifcfg-ib1
sed -i "s/^UUID/# UUID/" /mnt/loop/etc/sysconfig/network-scripts/ifcfg-ib1
sed -i "s/^HWADDR/# HWADDR/" /mnt/loop/etc/sysconfig/network-scripts/ifcfg-ib1
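The per-clone edits above can be parameterized into one function; this is a sketch assuming the naming scheme used here (clone vnioNNN gets eth0=20.0.211.N, ib0=19.0.211.N, ib1=18.0.211.N) and that the image is already mounted at the given root, as in the mount command above.

```shell
# Customize a mounted clone image: set hostname and per-interface IPs,
# comment out UUID/HWADDR, drop the persistent-net udev rules.
# Assumes the 20/19/18.0.211.N addressing convention used in this setup.
customize_clone() {
    root="$1"; name="$2"; n="$3"
    rm -f "$root/etc/udev/rules.d/70-persistent-net.rules"
    sed -i "s/^HOSTNAME.*$/HOSTNAME=$name/" "$root/etc/sysconfig/network"
    for dev in eth0 ib0 ib1; do
        f="$root/etc/sysconfig/network-scripts/ifcfg-$dev"
        [ -f "$f" ] || continue
        case $dev in
            eth0) net=20 ;; ib0) net=19 ;; ib1) net=18 ;;
        esac
        sed -i -e "s/^IPADDR=.*$/IPADDR=$net.0.211.$n/" \
               -e "s/^UUID/# UUID/" -e "s/^HWADDR/# HWADDR/" "$f"
    done
}
# customize_clone /mnt/loop vnio008 8
```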
virt-install --name=kvm --ram 8192 --vcpus=8 -f ./kvm.img --network bridge=cloudbr0 --force --graphics vnc,listen=*,port=5908 --noreboot --import --noautoconsole
When two or more virtual machines run on one host at the same time, each virtual NIC must use a unique slot/function combination; see the installation section above for details. Eight virtual NICs are available in total.
virsh autostart vnio008
virsh attach-device vnio008 ./mellax02.xml --config
virsh attach-device vnio008 ./mellax01.xml --config
virsh start vnio008
virsh list --all
Note: below, commands marked [ALL] must be executed on every node (C3 can be used); commands marked [ONE] need only be executed on one host, which should preferably stay the same. These operations are performed on the physical machines.
[ALL] cat <<-EOF >> /etc/hosts
[mds00] cexec cluster: "mkdir -p /root/.ssh; rm -f /root/.ssh/id_rsa*"
[mds00] cpush cluster: /root/.ssh/* /root/.ssh
[ALL] yum -y erase corosync pcs rubygems pacemaker-cli pacemaker-libs pacemaker pacemaker-cluster-libs
[ALL] rm -f /etc/cluster/cluster.conf
[ALL] yum install -y cman corosync
[ALL] yum localinstall -y pcs*.rpm rubygems*.rpm pacemaker*.rpm glib2*.rpm ccs*.rpm
[ALL] chkconfig corosync off
[ALL] /etc/init.d/corosync stop
[ALL] chkconfig NetworkManager off
[ALL] /etc/init.d/NetworkManager stop
[ALL] chkconfig pcsd on
[ALL] echo CMAN_QUORUM_TIMEOUT=0 > /etc/sysconfig/cman
[ALL] echo 123456 | passwd --stdin hacluster
[ALL] service pcsd start
[ONE] pcs cluster auth node01 node02
[ONE] pcs cluster setup --start --name 0102 node01 node02
[ONE] pcs property set stonith-enabled=false
[ONE] pcs property set symmetric-cluster=false
[ONE] pcs property set no-quorum-policy=stop
[ONE,OPT] pcs cluster enable --all
[ONE] pcs cluster status
Note: below, commands marked [guest] must be executed in every virtual machine (C3 can be used); commands marked [ONE] need only be executed on one physical machine, which should preferably stay the same. The [ONE] operations are performed on the physical host.
[guest] yum -y localinstall pacemaker-remote-*.rpm resource-agents-*.rpm pacemaker-libs-*.rpm glib2-*.rpm pacemaker-cli-*.rpm pacemaker-cts-*.rpm pacemaker-cluster-libs-*.rpm
[guest] echo 123456 | passwd --stdin hacluster
[guest] echo CMAN_QUORUM_TIMEOUT=0 > /etc/sysconfig/cman
[guest] chkconfig corosync off
[guest] /etc/init.d/corosync stop
[guest] chkconfig NetworkManager off
[guest] /etc/init.d/NetworkManager stop
[guest] chkconfig pacemaker_remote on
[guest] /etc/init.d/pacemaker_remote start
[one] dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
Copy the generated authkey to every node, physical and virtual alike. A node may not yet have the /etc/pacemaker directory; create it yourself if needed.
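The copy step can be looped over all nodes; a sketch assuming password-less ssh to each node (set up earlier) and using only ssh/scp. With DRY set, the commands are echoed instead of executed, so the loop can be checked without live nodes.

```shell
# Echo instead of execute when DRY is set (for checking the loop offline).
do_cmd() { if [ -n "$DRY" ]; then echo "$@"; else "$@"; fi; }

# Create /etc/pacemaker on each node and push the shared authkey to it.
copy_authkey() {
    for n in "$@"; do
        do_cmd ssh "$n" mkdir -p /etc/pacemaker
        do_cmd scp /etc/pacemaker/authkey "$n:/etc/pacemaker/authkey"
    done
}
# copy_authkey node01 node02 vnio006 vnio008
```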
The first step below is executed on the physical node hosting the virtual machine in question.
[kvm-host] virsh dumpxml vnio006 > /root/vnio006.xml
[one] pcs resource create vm-vnio006 VirtualDomain hypervisor="qemu:///system" config="/root/vnio006.xml" meta remote-node=vnio006 --disabled
[one] pcs constraint location vm-vnio006 prefers nio006=500 nio005=100
[one] pcs resource enable vm-vnio006
Once the virtual machine and the pacemaker_remote service inside it are both running, use pcs status to verify that the VM is included in the cluster as a node (container); then execute the following commands, adjusting the resource names and VM hostnames to your situation.
[one] pcs resource create lwfsd-vnio006 ocf:heartbeat:lwfsd vhost=vnio006
[one] pcs constraint location lwfsd-vnio006 prefers vnio006=500
[one] pcs resource enable lwfsd-vnio006
[one] pcs constraint order start vm-vnio006 then start lwfsd-vnio006
[one] pcs constraint order stop lwfsd-vnio006 then stop vm-vnio006
[ONE,optional] pcs resource update vm-vnio006 op start timeout=900
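The per-VM sequence above can be wrapped in one function so it is repeatable for each guest. A sketch following the vm-&lt;host&gt; / lwfsd-&lt;host&gt; naming convention used above; with DRY set, the pcs commands are echoed rather than executed.

```shell
# Register one VM as a remote node plus its lwfsd resource, with the
# location and ordering constraints used above. DRY=1 echoes commands.
add_vm_node() {
    vm="$1"; xml="$2"
    p() { if [ -n "$DRY" ]; then echo pcs "$@"; else pcs "$@"; fi; }
    p resource create "vm-$vm" VirtualDomain hypervisor="qemu:///system" \
        config="$xml" meta remote-node="$vm" --disabled
    p resource enable "vm-$vm"
    p resource create "lwfsd-$vm" ocf:heartbeat:lwfsd vhost="$vm"
    p constraint location "lwfsd-$vm" prefers "$vm=500"
    p resource enable "lwfsd-$vm"
    p constraint order start "vm-$vm" then start "lwfsd-$vm"
    p constraint order stop "lwfsd-$vm" then stop "vm-$vm"
}
# add_vm_node vnio006 /root/vnio006.xml
```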
Original source: http://www.cnblogs.com/wangtao1993/p/5901690.html