
34J-3_4 corosync Cluster Configuration In Depth



Environment

NFS: 192.168.1.121 CentOS6.7

Node1:192.168.1.131 CentOS7.2

Node2:192.168.1.132 CentOS7.2

Node3:192.168.1.133 CentOS7.2


Preparation

[root@node1 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.131   node1

192.168.1.132   node2

192.168.1.133   node3

[root@node1 ~]# ssh-keygen -t rsa -P ''

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node1

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node2

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node3

[root@node1 ~]# yum -y install ansible

[root@node1 ~]# vim /etc/ansible/hosts 

[ha]

192.168.1.131

192.168.1.132
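A quick connectivity check before moving on; ansible's ping module does an SSH round-trip rather than ICMP:

[root@node1 ~]# ansible ha -m ping

Every host in [ha] should answer with "ping": "pong".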


03 corosync Cluster Configuration In Depth

On Node1, Node2, and Node3:

# yum -y install corosync pacemaker


[root@node1 ~]# cd /etc/corosync/

[root@node1 corosync]# cp corosync.conf.example corosync.conf

[root@node1 corosync]# vim corosync.conf

Change (enable authentication and encryption):

    crypto_cipher: none  →  crypto_cipher: aes128
    crypto_hash: none    →  crypto_hash: sha1

and add:

    secauth: on

Check:

    bindnetaddr: 192.168.1.0    # the local network segment; no change needed for this example

Add the following before the logging section:

nodelist {
    node {
        ring0_addr: 192.168.1.131
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.1.132
        nodeid: 2
    }
    node {
        ring0_addr: 192.168.1.133
        nodeid: 3
    }
}


The resulting corosync.conf:

[root@node1 corosync]# grep -v "^[[:space:]]*#" corosync.conf

totem {
    version: 2

    crypto_cipher: aes128
    crypto_hash: sha1
    secauth: on

    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastaddr: 239.255.1.1
        mcastport: 5405
        ttl: 1
    }
}

nodelist {
    node {
        ring0_addr: 192.168.1.131
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.1.132
        nodeid: 2
    }
    node {
        ring0_addr: 192.168.1.133
        nodeid: 3
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no
    debug: off
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}

quorum {
    provider: corosync_votequorum
}
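With provider corosync_votequorum and three nodes, each node holds one vote and quorum needs two, so the cluster stays in service as long as any two of the three nodes remain online.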


Generate the auth key

[root@node1 corosync]# corosync-keygen

The auth key file:

[root@node1 corosync]# ll

total 20

-r-------- 1 root root  128 Sep 22 13:46 authkey
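Note that corosync-keygen gathers entropy from /dev/random and can block for a while on an idle virtual machine. corosync 2.x also offers a less secure key generated from /dev/urandom (check corosync-keygen(8) for your version):

[root@node1 corosync]# corosync-keygen -l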


[root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/

[root@node1 corosync]# scp -p authkey corosync.conf node3:/etc/corosync/

[root@node1 corosync]# systemctl start corosync.service 

[root@node2 corosync]# systemctl start corosync.service

[root@node3 ~]# systemctl start corosync.service
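Membership can also be read straight from corosync's CMAP database (exact key names may differ slightly between corosync 2.x releases):

[root@node1 corosync]# corosync-cmapctl | grep members

All three ring0 addresses should show up with a status of joined.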


Check the logs

[root@node2 corosync]# tail -f /var/log/cluster/corosync.log 

Sep 22 16:23:01 [36631] node2 corosync notice  [TOTEM ] A new membership (192.168.1.132:4) was formed. Members joined: 2

Sep 22 16:23:01 [36631] node2 corosync notice  [QUORUM] Members[1]: 2

Sep 22 16:23:01 [36631] node2 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.

Sep 22 16:23:01 [36631] node2 corosync notice  [TOTEM ] A new membership (192.168.1.131:8) was formed. Members joined: 1

Sep 22 16:23:01 [36631] node2 corosync notice  [QUORUM] This node is within the primary component and will provide service.

Sep 22 16:23:01 [36631] node2 corosync notice  [QUORUM] Members[2]: 1 2

Sep 22 16:23:01 [36631] node2 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.

Sep 22 16:23:09 [36631] node2 corosync notice  [TOTEM ] A new membership (192.168.1.131:12) was formed. Members joined: 3

Sep 22 16:23:09 [36631] node2 corosync notice  [QUORUM] Members[3]: 1 2 3

Sep 22 16:23:09 [36631] node2 corosync notice  [MAIN  ] Completed service synchronization, ready to provide service.


[root@node1 corosync]# vim /etc/sysconfig/pacemaker

Uncomment (remove the leading #):

PCMK_logfile=/var/log/pacemaker.log

[root@node1 corosync]# systemctl start pacemaker.service

[root@node2 corosync]# systemctl start pacemaker.service 

[root@node3 corosync]# systemctl start pacemaker.service 


[root@node2 corosync]# crm_mon

Last updated: Thu Sep 22 16:47:06 2016          Last change: Thu Sep 22 16:43:05 2016 by hacluster via crmd on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 0 resources configured


Online: [ node1 node2 node3 ]


The cluster itself is now configured; next, configure the resources.


[root@node1 ~]# ls *rpm            

crmsh-2.1.4-1.1.x86_64.rpm  pssh-2.3.1-4.2.x86_64.rpm  python-pssh-2.3.1-4.2.x86_64.rpm

[root@node1 ~]# yum -y install *rpm

[root@node1 ~]# crm

crm(live)# 


HA Web Service plan:

vip: 192.168.1.80, ocf:heartbeat:IPaddr

httpd: systemd

nfs shared storage: ocf:heartbeat:Filesystem

[root@NFS ~]# mkdir /www/htdocs -pv

[root@NFS ~]# echo "<h1>Test Page on NFS Server</h1>" > /www/htdocs/index.html

[root@NFS ~]# vim /etc/exports 

/www/htdocs     192.168.1.0/24(rw)

[root@NFS ~]# service nfs start
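To re-export and verify the share without restarting the service (both commands are part of standard nfs-utils):

[root@NFS ~]# exportfs -arv

[root@NFS ~]# showmount -e localhost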


On Node1, Node2, and Node3 (install httpd, verify the NFS mount, then clean up):

# yum -y install httpd

# mount -t nfs 192.168.1.121:/www/htdocs /var/www/html/

# systemctl start httpd.service 

# systemctl stop httpd.service 

# systemctl disable httpd.service    # pacemaker will manage httpd, so it must not start on its own at boot

# umount /var/www/html/
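While httpd was running with the NFS export mounted, the test page should have been reachable from each node, for example:

# curl http://192.168.1.131/

<h1>Test Page on NFS Server</h1>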


Error:

[root@node1 ~]# crm_verify -L -V

   error: unpack_resources:     Resource start-up disabled since no STONITH resources have been defined

   error: unpack_resources:     Either configure some or disable STONITH with the stonith-enabled option

   error: unpack_resources:     NOTE: Clusters with shared data need STONITH to ensure data integrity

Errors found during check: config not valid


Fix:

[root@node1 ~]# crm

crm(live)# configure property stonith-enabled=false

crm(live)# configure commit

INFO: apparently there is nothing to commit

INFO: try changing something first
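The INFO messages are expected: a configure property command issued from the top-level crm(live)# prompt is applied immediately, so a following commit finds nothing left to do.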

crm(live)# configure

crm(live)configure# show

node 1: node1

node 2: node2

node 3: node3

property cib-bootstrap-options: \

        have-watchdog=false \

        dc-version=1.1.13-10.el7_2.4-44eb2dd \

        cluster-infrastructure=corosync \

        stonith-watchdog-timeout=false \

        stonith-enabled=false

[root@node1 ~]# crm configure

crm(live)configure# primitive webip ocf:heartbeat:IPaddr2 params ip="192.168.1.80" op monitor interval=30s timeout=20s

crm(live)configure# verify

crm(live)configure# show

node 1: node1

node 2: node2

node 3: node3

primitive webip IPaddr2 \

        params ip=192.168.1.80 \

        op monitor interval=30s timeout=20s

property cib-bootstrap-options: \

        have-watchdog=false \

        dc-version=1.1.13-10.el7_2.4-44eb2dd \

        cluster-infrastructure=corosync \

        stonith-watchdog-timeout=false \

        stonith-enabled=false

crm(live)configure# delete webip

crm(live)configure# show

node 1: node1

node 2: node2

node 3: node3

property cib-bootstrap-options: \

        have-watchdog=false \

        dc-version=1.1.13-10.el7_2.4-44eb2dd \

        cluster-infrastructure=corosync \

        stonith-watchdog-timeout=false \

        stonith-enabled=false

crm(live)configure# primitive webip ocf:heartbeat:IPaddr2 params ip="192.168.1.80" op monitor interval=20s timeout=10s

crm(live)configure# verify

WARNING: webip: specified timeout 10s for monitor is smaller than the advised 20s

crm(live)configure# edit

In the editor, raise the monitor timeout from 10s to the advised 20s; the buffer then reads:

node 1: node1

node 2: node2

node 3: node3

primitive webip IPaddr2 \

    params ip=192.168.1.80 \

    op monitor interval=20s timeout=20s

property cib-bootstrap-options: \

    have-watchdog=false \

    dc-version=1.1.13-10.el7_2.4-44eb2dd \

    cluster-infrastructure=corosync \

    stonith-watchdog-timeout=false \

    stonith-enabled=false

# vim: set filetype=pcmk:

crm(live)configure# verify

crm(live)configure# primitive webserver systemd:httpd op monitor interval=20s timeout=20s

crm(live)configure# verify

crm(live)configure# primitive webstore ocf:heartbeat:Filesystem params device="192.168.1.121:/www/htdocs" directory="/var/www/html" fstype="nfs" op start timeout=60s op stop timeout=60s op monitor interval=20s timeout=40s

crm(live)configure# verify

crm(live)configure# colocation web_server_with_webstore_and_webip inf: webserver ( webip webstore )

crm(live)configure# verify

crm(live)configure# show

node 1: node1

node 2: node2

node 3: node3

primitive webip IPaddr2 \

        params ip=192.168.1.80 \

        op monitor interval=20s timeout=20s

primitive webserver systemd:httpd \

        op monitor interval=20s timeout=20s

primitive webstore Filesystem \

        params device="192.168.1.121:/www/htdocs" directory="/var/www/html" fstype=nfs \

        op start timeout=60s interval=0 \

        op stop timeout=60s interval=0 \

        op monitor interval=20s timeout=40s

colocation web_server_with_webstore_and_webip inf: webserver ( webip webstore )

property cib-bootstrap-options: \

        have-watchdog=false \

        dc-version=1.1.13-10.el7_2.4-44eb2dd \

        cluster-infrastructure=corosync \

        stonith-watchdog-timeout=false \

        stonith-enabled=false

crm(live)configure# order webstore_after_webip Mandatory: webip webstore

crm(live)configure# order webserver_after_webstore Mandatory: webstore webserver

crm(live)configure# verify

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

Last updated: Fri Sep 23 09:35:56 2016          Last change: Fri Sep 23 09:35:34 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 3 resources configured


Online: [ node1 node2 node3 ]
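A note on the constraints just committed: in the colocation, the parenthesized set ( webip webstore ) ties the two resources together as equals, and webserver must run on whichever node hosts that set; the two Mandatory orders then bring up the IP before mounting the NFS filesystem, and mount the filesystem before launching httpd.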


[root@node1 ~]# crm node standby

[root@node1 ~]# crm status

Last updated: Fri Sep 23 09:43:31 2016          Last change: Fri Sep 23 09:42:45 2016 by root via crm_attribute on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 3 resources configured


Node node1: standby

Online: [ node2 node3 ]


[root@node1 ~]# crm node online

crm(live)configure# location webservice_pref_node1 webip 100: node1

crm(live)configure# verify

crm(live)configure# commit

crm(live)# status

Last updated: Fri Sep 23 09:52:47 2016          Last change: Fri Sep 23 09:51:33 2016 by root via crm_attribute on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 3 resources configured


Online: [ node1 node2 node3 ]

crm(live)configure# property default-resource-stickiness=50

crm(live)configure# commit

crm(live)configure# cd

crm(live)# node standby

crm(live)# status

Last updated: Fri Sep 23 09:56:48 2016          Last change: Fri Sep 23 09:56:41 2016 by root via crm_attribute on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 3 resources configured


Node node1: standby

Online: [ node2 node3 ]

crm(live)# node online

crm(live)# status

Last updated: Fri Sep 23 09:57:04 2016          Last change: Fri Sep 23 09:57:02 2016 by root via crm_attribute on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 3 resources configured


Online: [ node1 node2 node3 ]


04 corosync Cluster Configuration - Configuration Example (pcs)


Clear the cluster configuration

crm(live)configure# edit

After deleting the resource and constraint definitions in the editor, what remains is:

node 1: node1 \

    attributes standby=off

node 2: node2

node 3: node3

property cib-bootstrap-options: \

    have-watchdog=false \

    dc-version=1.1.13-10.el7_2.4-44eb2dd \

    cluster-infrastructure=corosync \

    stonith-watchdog-timeout=false \

    stonith-enabled=false \

    default-resource-stickiness=50

# vim: set filetype=pcmk:
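Deleting the primitive, colocation, order and location lines in the editor is one way to clear things out; depending on the crmsh version, configure erase can also wipe everything except the node entries in a single step.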


crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

Last updated: Fri Sep 23 11:07:47 2016          Last change: Fri Sep 23 11:07:04 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 0 resources configured


Online: [ node1 node2 node3 ]


On Node1, Node2, and Node3:

# yum -y install pcs

# systemctl start pcsd.service 


[root@node1 ~]# vim /etc/ansible/hosts 

[ha]

192.168.1.131

192.168.1.132

192.168.1.133

[root@node1 ~]# ansible ha -m shell -a 'echo mageedu | passwd --stdin hacluster'

[root@node1 ~]# pcs cluster auth node1 node2 node3

Username: hacluster

Password: 

node1: Authorized

node3: Authorized

node2: Authorized


[root@node1 ~]# pcs status

Cluster name: 

WARNING: corosync and pacemaker node names do not match (IPs used in setup?)

Last updated: Fri Sep 23 12:17:48 2016          Last change: Fri Sep 23 11:07:04 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 0 resources configured


Online: [ node1 node2 node3 ]


Full list of resources:



PCSD Status:

  node1 (192.168.1.131): Unable to authenticate

  node2 (192.168.1.132): Unable to authenticate

  node3 (192.168.1.133): Unable to authenticate


Daemon Status:

  corosync: active/disabled

  pacemaker: active/disabled

  pcsd: active/disabled


# vim /etc/corosync/corosync.conf

Change:

    secauth: on

to:

    secauth: off

  

[root@node1 ~]# ansible ha -m service -a 'name=corosync state=restarted'

192.168.1.131 | SUCCESS => {

    "changed": true, 

    "name": "corosync", 

    "state": "started"

}

192.168.1.132 | SUCCESS => {

    "changed": true, 

    "name": "corosync", 

    "state": "started"

}

192.168.1.133 | SUCCESS => {

    "changed": true, 

    "name": "corosync", 

    "state": "started"

}


[root@node1 ~]# ansible ha -m service -a 'name=pacemaker state=stopped'

[root@node1 ~]# ansible ha -m service -a 'name=corosync state=stopped'

[root@node1 ~]# pcs cluster setup --name=mycluster node1 node2 node3 --force


[root@node1 ~]# pcs cluster start --all
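pcs cluster start only brings the daemons up for the current boot (which is why the Daemon Status sections below report active/disabled). To have them start automatically on every node:

[root@node1 ~]# pcs cluster enable --all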

[root@node1 ~]# corosync-cfgtool -s

Printing ring status.

Local node ID 1

RING ID 0

        id      = 192.168.1.131

        status  = ring 0 active with no faults


[root@node1 ~]# pcs resource create webip ocf:heartbeat:IPaddr ip="192.168.1.80" op monitor interval=20s timeout=10s

[root@node1 ~]# pcs status

Cluster name: mycluster

Last updated: Fri Sep 23 12:40:55 2016          Last change: Fri Sep 23 12:40:29 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 1 resource configured


Online: [ node1 ]

OFFLINE: [ node2 node3 ]


Full list of resources:


 webip  (ocf::heartbeat:IPaddr):        Started node1


PCSD Status:

  node1: Online

  node2: Online

  node3: Online


Daemon Status:

  corosync: active/disabled

  pacemaker: active/disabled

  pcsd: active/disabled

  

[root@node1 ~]# pcs resource delete webip

[root@node1 ~]# pcs status

Cluster name: mycluster

Last updated: Fri Sep 23 12:44:06 2016          Last change: Fri Sep 23 12:43:20 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 0 resources configured


Online: [ node1 ]

OFFLINE: [ node2 node3 ]


Full list of resources:



PCSD Status:

  node1: Online

  node2: Online

  node3: Online


Daemon Status:

  corosync: active/disabled

  pacemaker: active/disabled

  pcsd: active/disabled

  

[root@node1 ~]# pcs resource create webip ocf:heartbeat:IPaddr ip="192.168.1.80" op monitor interval=20s timeout=10s

[root@node1 ~]# pcs resource create webstore ocf:heartbeat:Filesystem device="192.168.1.121:/www/htdocs" directory="/var/www/html" fstype="nfs" op start timeout=60s op stop timeout=60s op monitor interval=20s timeout=40s


[root@node1 ~]# pcs status

Cluster name: mycluster

Last updated: Fri Sep 23 12:54:11 2016          Last change: Fri Sep 23 12:51:33 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 2 resources configured


Online: [ node1 ]

OFFLINE: [ node2 node3 ]


Full list of resources:


 webip  (ocf::heartbeat:IPaddr):        Started node1

 webstore       (ocf::heartbeat:Filesystem):    Started node1


PCSD Status:

  node1: Online

  node2: Online

  node3: Online


Daemon Status:

  corosync: active/disabled

  pacemaker: active/disabled

  pcsd: active/disabled


[root@node1 ~]# pcs resource create webserver systemd:httpd op monitor interval=30s timeout=20s

[root@node1 ~]# pcs status

Cluster name: mycluster

Last updated: Fri Sep 23 12:55:56 2016          Last change: Fri Sep 23 12:55:36 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 3 resources configured


Online: [ node1 ]

OFFLINE: [ node2 node3 ]


Full list of resources:


 webip  (ocf::heartbeat:IPaddr):        Started node1

 webstore       (ocf::heartbeat:Filesystem):    Started node1

 webserver      (systemd:httpd):        Started node1


PCSD Status:

  node1: Online

  node2: Online

  node3: Online


Daemon Status:

  corosync: active/disabled

  pacemaker: active/disabled

  pcsd: active/disabled


[root@node1 ~]# pcs resource group add webservice webip webstore webserver

[root@node1 ~]# pcs status

Cluster name: mycluster

Last updated: Fri Sep 23 12:59:38 2016          Last change: Fri Sep 23 12:59:15 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 3 resources configured


Online: [ node1 ]

OFFLINE: [ node2 node3 ]


Full list of resources:


 Resource Group: webservice

     webip      (ocf::heartbeat:IPaddr):        Started node1

     webstore   (ocf::heartbeat:Filesystem):    Started node1

     webserver  (systemd:httpd):        Started node1


PCSD Status:

  node1: Online

  node2: Online

  node3: Online


Daemon Status:

  corosync: active/disabled

  pacemaker: active/disabled

  pcsd: active/disabled


Put node1 and node2 into standby:

[root@node1 ~]# pcs cluster standby node1

[root@node1 ~]# pcs cluster standby node2


Bring node1 and node2 back online:

[root@node1 ~]# pcs cluster unstandby node1

[root@node1 ~]# pcs cluster unstandby node2


Add constraints

[root@node1 ~]# pcs constraint location add webservice_pref_node1 webservice node1 100

[root@node1 ~]# pcs status

Cluster name: mycluster

Last updated: Fri Sep 23 13:22:02 2016          Last change: Fri Sep 23 13:21:24 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 3 resources configured


Online: [ node1 node2 ]

OFFLINE: [ node3 ]


Full list of resources:


 Resource Group: webservice

     webip      (ocf::heartbeat:IPaddr):        Started node1

     webstore   (ocf::heartbeat:Filesystem):    Started node1

     webserver  (systemd:httpd):        Started node1


PCSD Status:

  node1: Online

  node2: Online

  node3: Online


Daemon Status:

  corosync: active/disabled

  pacemaker: active/disabled

  pcsd: active/disabled

  

[root@node1 ~]# pcs constraint location add webip_pref_node1 webip node1 INFINITY

[root@node1 ~]# pcs status

Cluster name: mycluster

Last updated: Fri Sep 23 13:24:57 2016          Last change: Fri Sep 23 13:24:21 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

3 nodes and 3 resources configured


Online: [ node1 node2 ]

OFFLINE: [ node3 ]


Full list of resources:


 Resource Group: webservice

     webip      (ocf::heartbeat:IPaddr):        Started node1

     webstore   (ocf::heartbeat:Filesystem):    Started node1

     webserver  (systemd:httpd):        Started node1


PCSD Status:

  node1: Online

  node2: Online

  node3: Online


Daemon Status:

  corosync: active/disabled

  pacemaker: active/disabled

  pcsd: active/disabled
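The configured location constraints can be reviewed at any time:

[root@node1 ~]# pcs constraint list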


View the cluster-wide properties

[root@node1 ~]# pcs property list --all  
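Individual properties are set much as they were from crmsh earlier, for example:

[root@node1 ~]# pcs property set default-resource-stickiness=50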

This article comes from the "追梦" blog; please retain the source link: http://sihua.blog.51cto.com/377227/1855699
