
Configuring a MongoDB Delayed Replica Node

Date: 2016-08-22 16:31:09

Tags: mongodb, delayed replication

Background:

  In a typical MongoDB master/slave setup or replica set, replication is effectively real-time. That means an erroneous operation on the primary quickly propagates to every member, corrupting the data across the whole cluster. To guard against this, you can pick one mongod instance in the replica set and configure it for delayed replication. When a mistake is made on the primary, that one member remains unaffected for a while, and its still-intact copy of the data can be used for recovery.

  That is MongoDB's delayed replication feature: after an operation on the primary, the delayed member does not apply it immediately, but only after a configured interval has passed.
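For example, once a bad write has hit the primary, the still-clean data can be pulled off the delayed member with mongodump before its delay window elapses (a sketch only; hosts, port, database name, and paths are placeholders for your environment):

# Dump the unaffected database from the delayed member
mongodump --host <delayed-member-host> --port 27017 --db <dbname> --out /backup/delayed
# After stopping the faulty operation, restore onto the primary
mongorestore --host <primary-host> --port 27017 /backup/delayed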


Configuration:

  Using my lab environment as an example, here is my MongoDB replica set:

cmh0:PRIMARY> rs.status()
{
        "set" : "cmh0",
        "date" : ISODate("2016-08-22T02:43:16.240Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.52.128:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 82,
                        "optime" : Timestamp(1470581983, 1),
                        "optimeDate" : ISODate("2016-08-07T14:59:43Z"),
                        "electionTime" : Timestamp(1471833721, 1),
                        "electionDate" : ISODate("2016-08-22T02:42:01Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "192.168.52.135:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 71,
                        "optime" : Timestamp(1470581983, 1),
                        "optimeDate" : ISODate("2016-08-07T14:59:43Z"),
                        "lastHeartbeat" : ISODate("2016-08-22T02:43:15.138Z"),
                        "lastHeartbeatRecv" : ISODate("2016-08-22T02:43:14.978Z"),
                        "pingMs" : 0,
                        "lastHeartbeatMessage" : "could not find member to sync from",
                        "configVersion" : 1
                },
                {
                        "_id" : 3,
                        "name" : "192.168.52.135:27019",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 75,
                        "optime" : Timestamp(1470581983, 1),
                        "optimeDate" : ISODate("2016-08-07T14:59:43Z"),
                        "lastHeartbeat" : ISODate("2016-08-22T02:43:15.138Z"),
                        "lastHeartbeatRecv" : ISODate("2016-08-22T02:43:15.138Z"),
                        "pingMs" : 0,
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}

  At this point no delayed member has been configured, so data is still replicated in real time:

cmh0:PRIMARY> use cmhtest
switched to db cmhtest
cmh0:PRIMARY> db.cmh.insert({"name":"ChenMinghui"})
WriteResult({ "nInserted" : 1 })
cmh0:PRIMARY> rs.printReplicationInfo()
configured oplog size:   990MB
log length start to end: 195secs (0.05hrs)
oplog first event time:  Mon Aug 22 2016 10:51:22 GMT+0800 (CST)
oplog last event time:   Mon Aug 22 2016 10:54:37 GMT+0800 (CST)
now:                     Mon Aug 22 2016 10:55:00 GMT+0800 (CST)
cmh0:PRIMARY> rs.printSlaveReplicationInfo()
source: 192.168.52.135:27017
        syncedTo: Mon Aug 22 2016 10:54:37 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 
source: 192.168.52.135:27019
        syncedTo: Mon Aug 22 2016 10:54:37 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary

  As you can see, both secondaries replicated the data at the same time, in real time.


  Now configure 192.168.52.135:27017 as a delayed member:

cmh0:PRIMARY> cfg=rs.conf();
{
        "_id" : "cmh0",
        "version" : 1,
        "members" : [
                {
                        "_id" : 1,
                        "host" : "192.168.52.128:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {
                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "192.168.52.135:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {
                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 3,
                        "host" : "192.168.52.135:27019",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {
                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatTimeoutSecs" : 10,
                "getLastErrorModes" : {
                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
}
cmh0:PRIMARY> cfg.members[1].priority=0
0
cmh0:PRIMARY> cfg.members[1].slaveDelay=30
30
cmh0:PRIMARY> rs.reconfig(cfg);
{ "ok" : 1 }
cmh0:PRIMARY> rs.conf()
{
        "_id" : "cmh0",
        "version" : 2,
        "members" : [
                {
                        "_id" : 1,
                        "host" : "192.168.52.128:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {
                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "192.168.52.135:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 0,
                        "tags" : {
                        },
                        "slaveDelay" : 30,
                        "votes" : 1
                },
                {
                        "_id" : 3,
                        "host" : "192.168.52.135:27019",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {
                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatTimeoutSecs" : 10,
                "getLastErrorModes" : {
                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
}

  You can see that the 192.168.52.135:27017 member now shows "slaveDelay" : 30, meaning its replication is deferred by 30 seconds. (Note that MongoDB requires a delayed member to have "priority" : 0, which is why that was set first in the steps above.)
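One refinement not shown above, recommended in the MongoDB documentation though optional for this experiment: a delayed member is usually also made hidden, so that client reads are never routed to its stale data. The reconfig steps would then become:

cmh0:PRIMARY> cfg = rs.conf()
cmh0:PRIMARY> cfg.members[1].priority = 0
cmh0:PRIMARY> cfg.members[1].hidden = true
cmh0:PRIMARY> cfg.members[1].slaveDelay = 30
cmh0:PRIMARY> rs.reconfig(cfg)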

  You can test this yourself; the replication lag should be roughly 30 seconds. One caveat: the system clocks of all the mongod hosts must be kept in sync (for example via NTP), otherwise delayed replication can misbehave and the member may fail to sync even after the scheduled delay has elapsed.
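As a quick check (hypothetical output; timestamps omitted, and the reported lag will vary around the configured 30 seconds), insert a document on the primary and immediately inspect replication state:

cmh0:PRIMARY> db.cmh.insert({"name":"delaytest"})
WriteResult({ "nInserted" : 1 })
cmh0:PRIMARY> rs.printSlaveReplicationInfo()
source: 192.168.52.135:27017
        30 secs (0.01 hrs) behind the primary
source: 192.168.52.135:27019
        0 secs (0 hrs) behind the primary

The delayed member should report about 30 seconds behind the primary until it applies the write, while the normal secondary stays current.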




This post is from the blog "扮演上帝的小丑"; please contact the author before republishing.


Original article: http://icenycmh.blog.51cto.com/4077647/1841001
