
34J-2 Getting Started with a corosync Cluster


Tags: linux, corosync

Environment

Node1:192.168.1.131 CentOS7.2

Node2:192.168.1.132 CentOS7.2


Preparation

[root@node1 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.131   node1

192.168.1.132   node2

[root@node1 ~]# ssh-keygen -t rsa -P ''

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node1

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub node2
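Passwordless SSH can be verified with a quick remote command, and the same hosts entries should be present on node2 as well (a small check, not part of the original steps):

[root@node1 ~]# ssh node2 'date'

[root@node1 ~]# scp /etc/hosts node2:/etc/hosts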

[root@node1 ~]# yum -y install ansible

[root@node1 ~]# vim /etc/ansible/hosts 

[ha]

192.168.1.131

192.168.1.132
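With the inventory in place, connectivity to both nodes can be confirmed with the ping module (a quick sanity check, assuming the SSH keys set up above are in effect):

[root@node1 ~]# ansible ha -m ping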


On both Node1 and Node2, install pcs:

# yum -y install pcs
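Alternatively, since ansible is already configured, pcs can be installed on both nodes in one step with the yum module (an equivalent approach, not shown in the original):

[root@node1 ~]# ansible ha -m yum -a 'name=pcs state=present'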


[root@node1 ~]# ansible ha -m service -a 'name=pcsd state=started enabled=yes'

[root@node1 ~]# ansible ha -m shell -a 'echo "mageedu" | passwd --stdin hacluster'
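Before authenticating, it is worth confirming that pcsd is actually active on both nodes (an extra check, not in the original):

[root@node1 ~]# ansible ha -m shell -a 'systemctl is-active pcsd'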

[root@node1 ~]# pcs cluster auth node1 node2 -u hacluster

Password: 

node1: Already authorized

node2: Already authorized

[root@node2 ~]# pcs cluster auth node1 node2 -u hacluster  

Password: 

node1: Already authorized

node2: Already authorized


[root@node1 ~]# pcs cluster setup --name mycluster node1 node2

Shutting down pacemaker/corosync services...

Redirecting to /bin/systemctl stop  pacemaker.service

Redirecting to /bin/systemctl stop  corosync.service

Killing any remaining services...

Removing all cluster configuration files...

node1: Succeeded

node2: Succeeded

Synchronizing pcsd certificates on nodes node1, node2...

node1: Success

node2: Success


Restarting pcsd on the nodes in order to reload the certificates...

node1: Success

node2: Success


[root@node1 ~]# cd /etc/corosync/

[root@node1 corosync]# vim corosync.conf

Modify the logging section as follows:

logging {

    to_logfile: yes

    logfile: /var/log/cluster/corosync.log

}   

[root@node1 corosync]# scp corosync.conf node2:/etc/corosync/
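For reference, a pcs-generated corosync.conf for this two-node cluster looks roughly like the following; this is a sketch, and the exact options on your system may differ:

totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}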


Start the cluster:

[root@node1 corosync]# pcs cluster start --all 

node2: Starting Cluster...

node1: Starting Cluster...
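To have corosync and pacemaker start automatically at boot, the cluster can also be enabled on all nodes (optional; not part of the original steps):

[root@node1 corosync]# pcs cluster enable --all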


Check the communication status of each node (a status of "no faults" means the ring is healthy):

[root@node1 corosync]# corosync-cfgtool -s

Printing ring status.

Local node ID 1

RING ID 0

        id      = 192.168.1.131

        status  = ring 0 active with no faults


Check cluster membership via the quorum API:

[root@node1 corosync]# corosync-cmapctl | grep members

runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0

runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.1.131) 

runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1

runtime.totem.pg.mrp.srp.members.1.status (str) = joined

runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0

runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.1.132) 

runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1

runtime.totem.pg.mrp.srp.members.2.status (str) = joined
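Quorum status itself can be checked with corosync-quorumtool, which reports the expected votes, total votes, and whether the partition is quorate (an additional check, not in the original):

[root@node1 corosync]# corosync-quorumtool -s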


View the cluster status:

[root@node1 corosync]# pcs status


Check the cluster configuration for errors:

[root@node1 corosync]# crm_verify -L -V

   error: unpack_resources:     Resource start-up disabled since no STONITH resources have been defined

   error: unpack_resources:     Either configure some or disable STONITH with the stonith-enabled option

   error: unpack_resources:     NOTE: Clusters with shared data need STONITH to ensure data integrity

Errors found during check: config not valid


Fix: since no fencing devices are configured in this test environment, disable STONITH; a second run of crm_verify should then produce no output:

[root@node1 corosync]# pcs property set stonith-enabled=false

[root@node1 corosync]# crm_verify -L -V    

[root@node1 corosync]# pcs property list

Cluster Properties:

 cluster-infrastructure: corosync

 cluster-name: mycluster

 dc-version: 1.1.13-10.el7_2.4-44eb2dd

 have-watchdog: false

 stonith-enabled: false
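For a two-node test cluster it is also common to relax the quorum policy, since losing one node means losing quorum; this is optional and goes beyond the original steps:

[root@node1 corosync]# pcs property set no-quorum-policy=ignore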

 

Next, install crmsh and its pssh dependencies from locally downloaded RPM packages on both nodes:

[root@node1 ~]# ls

anaconda-ks.cfg             Pictures

crmsh-2.1.4-1.1.x86_64.rpm  pssh-2.3.1-4.2.x86_64.rpm

Desktop                     Public

Documents                   python-pssh-2.3.1-4.2.x86_64.rpm

Downloads                   Templates

Music                       Videos

[root@node1 ~]# yum install *rpm

[root@node1 ~]# scp *rpm node2:/root

[root@node2 ~]# yum install *rpm -y


Display the cluster status with crmsh:

[root@node1 ~]# crm status

Last updated: Wed Sep 21 17:31:26 2016          Last change: Wed Sep 21 17:16:32 2016 by root via cibadmin on node1

Stack: corosync

Current DC: node1 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum

2 nodes and 0 resources configured


Online: [ node1 node2 ]


[root@node1 ~]# yum -y install httpd

[root@node1 ~]# echo "<h1>Node1.magedu.com</h1>" > /var/www/html/index.html 

[root@node1 ~]# systemctl start httpd.service 

[root@node1 ~]# systemctl enable httpd.service

[root@node2 ~]# yum -y install httpd

[root@node2 ~]# echo "<h1>Node2.magedu.com</h1>" > /var/www/html/index.html

[root@node2 ~]# systemctl start httpd.service

[root@node2 ~]# systemctl enable httpd.service
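Each node's page can be checked locally before handing the service over to the cluster (a simple verification, not in the original):

[root@node1 ~]# curl http://192.168.1.131

<h1>Node1.magedu.com</h1>

[root@node1 ~]# curl http://192.168.1.132

<h1>Node2.magedu.com</h1>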


[root@node1 ~]# crm ra

crm(live)ra# cd

crm(live)# configure

crm(live)configure# primitive webip ocf:heartbeat:IPaddr params ip=192.168.1.80
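After defining the resource, crmsh applies it only once the configuration is verified and committed. A minimal way to finish this step (a sketch, assuming the configure session above is still open):

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status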


This article originally appeared on the "追梦" blog; please retain the source link: http://sihua.blog.51cto.com/377227/1855281
