
A Robust MongoDB Cluster: Sharding with Replica Sets

2015-08-17 11:58:05


1. MongoDB sharding + replica sets

A robust cluster design

Multiple config servers, multiple mongos servers, every shard a replica set, and a correctly set write concern (w)
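
What "a correctly set w" means in practice: with each shard a three-member replica set, a write can require acknowledgement from a majority of the set's members, so the failure of a single node cannot lose acknowledged data. A minimal sketch in the mongo shell (the collection name orders is made up for illustration):

mongos> // "orders" is a hypothetical collection, for illustration only
mongos> db.orders.insert({"orderid" : 1, "total" : 99}, {"writeConcern" : {"w" : "majority", "wtimeout" : 5000}})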

Architecture diagram

 

(architecture diagram image not reproduced)

Notes:

1. This experiment runs all the mongod instances on a single machine, using different ports and dbpaths to tell them apart

2. Nine mongod instances in total, grouped into three replica sets (shard1, shard2, shard3), each with one primary and two secondaries

3. The number of mongos processes is not limited; the recommendation is to run a mongos on each application server, so that every application server talks to its own local mongos. If one server stops working, the other application servers keep talking to their own mongos unaffected

4. This experiment simulates two application servers (two mongos services)

5. In production every shard should be a replica set, so that the loss of a single server does not put the shard out of service


 

Deployment environment

Create the directories

[root@Master cluster2]# mkdir -p shard{1,2,3}/node{1,2,3}
[root@Master cluster2]# mkdir -p shard{1,2,3}/logs
[root@Master cluster2]# ls shard*
shard1:
logs  node1  node2  node3

shard2:
logs  node1  node2  node3

shard3:
logs  node1  node2  node3
[root@Master cluster2]# mkdir -p config/logs
[root@Master cluster2]# mkdir -p config/node{1,2,3}
[root@Master cluster2]# ls config/
logs  node1  node2  node3

[root@Master cluster2]# mkdir -p mongos/logs

Start the config servers

 

Config server plan (dbpath / logpath / port):

/data/mongodb/config/node1      /data/mongodb/config/logs/node1.log     10000
/data/mongodb/config/node2      /data/mongodb/config/logs/node2.log     20000
/data/mongodb/config/node3      /data/mongodb/config/logs/node3.log     30000

#Start the three config servers as planned: the same as starting a single config server, just repeated three times

[root@Master cluster2]# mongod --dbpath config/node1 --logpath config/logs/node1.log --logappend --fork --port 10000
[root@Master cluster2]# mongod --dbpath config/node2 --logpath config/logs/node2.log --logappend --fork --port 20000
[root@Master cluster2]# mongod --dbpath config/node3 --logpath config/logs/node3.log --logappend --fork --port 30000
[root@Master cluster2]# ps -ef|grep mongod|grep -v grep
mongod    2329     1  0 20:05 ?        00:00:02 /usr/bin/mongod -f /etc/mongod.conf
root      2703     1  1 20:13 ?        00:00:00 mongod --dbpath config/node1 --logpath config/logs/node1.log --logappend --fork --port 10000
root      2716     1  1 20:13 ?        00:00:00 mongod --dbpath config/node2 --logpath config/logs/node2.log --logappend --fork --port 20000
root      2729     1  1 20:13 ?        00:00:00 mongod --dbpath config/node3 --logpath config/logs/node3.log --logappend --fork --port 30000
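
Note that these are started as plain mongod processes, which is enough for this lab. mongod also has a --configsvr flag that marks the instance as a config server (and switches its default port to 27019); a production start line would look more like this sketch, with the same paths as above:

[root@Master cluster2]# mongod --configsvr --dbpath config/node1 --logpath config/logs/node1.log --logappend --fork --port 10000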

Start the routing (mongos) servers

Mongos server plan (dbpath / logpath / port):

——      /data/mongodb/mongos/logs/node1.log     40000
——      /data/mongodb/mongos/logs/node2.log     50000

#The number of mongos processes is not limited; usually each application server runs one mongos

[root@Master cluster2]# mongos --port 40000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos1.log  --logappend --fork
[root@Master cluster2]# mongos --port 50000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos2.log  --logappend --fork
[root@Master cluster2]# ps -ef|grep mongos|grep -v grep
root      2809     1  0 20:18 ?        00:00:00 mongos --port 40000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos1.log --logappend --fork
root      2862     1  0 20:19 ?        00:00:00 mongos --port 50000 --configdb localhost:10000,localhost:20000,localhost:30000 --logpath mongos/logs/mongos2.log --logappend --fork
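
A quick way to confirm each router is answering is the standard ping admin command, which should print { "ok" : 1 } for a healthy mongos (a sketch):

[root@Master cluster2]# mongo --port 40000 --eval "printjson(db.adminCommand({ping:1}))"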

Configure the replica sets

Per the plan, configure and start the three replica sets shard1, shard2, and shard3

#shard1 is shown here as the example

#Start the three mongod processes

[root@Master cluster2]# mongod --replSet shard1 --dbpath shard1/node1 --logpath shard1/logs/node1.log --logappend --fork --port 10001
[root@Master cluster2]# mongod --replSet shard1 --dbpath shard1/node2 --logpath shard1/logs/node2.log --logappend --fork --port 10002
[root@Master cluster2]# mongod --replSet shard1 --dbpath shard1/node3 --logpath shard1/logs/node3.log --logappend --fork --port 10003

#Initialize the replica set shard1

[root@Master cluster2]# mongo --port 10001
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:10001/test
> use admin
switched to db admin
> rsconf={"_id" : "shard1","members" : [{"_id" : 0, "host" : "localhost:10001"}]}
{
        "_id" : "shard1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:10001"
                }
        ]
}
> rs.initiate(rsconf)
{ "ok" : 1 }
shard1:OTHER> rs.add("localhost:10002")
{ "ok" : 1 }
shard1:PRIMARY> rs.add("localhost:10003")
{ "ok" : 1 }
shard1:PRIMARY> rs.conf()
{
        "_id" : "shard1",
        "version" : 3,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:10001",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "localhost:10002",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "localhost:10003",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatTimeoutSecs" : 10,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
} 
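
Instead of adding members one at a time with rs.add(), the whole membership can be declared up front, so a single rs.initiate() replaces the later rs.add() calls; a sketch of the equivalent one-step initialization for shard1:

> rsconf={"_id" : "shard1","members" : [{"_id" : 0, "host" : "localhost:10001"},{"_id" : 1, "host" : "localhost:10002"},{"_id" : 2, "host" : "localhost:10003"}]}
> rs.initiate(rsconf)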

Configure shard2 and shard3 as replica sets the same way as shard1

[root@Master cluster2]# mongod --replSet shard2 --dbpath shard2/node1 --logpath shard2/logs/node1.log --logappend --fork --port 20001
[root@Master cluster2]# mongod --replSet shard2 --dbpath shard2/node2 --logpath shard2/logs/node2.log --logappend --fork --port 20002
[root@Master cluster2]# mongod --replSet shard2 --dbpath shard2/node3 --logpath shard2/logs/node3.log --logappend --fork --port 20003
[root@Master cluster2]# mongo --port 20001
> use admin
> rsconf={"_id" : "shard2","members" : [{"_id" : 0, "host" : "localhost:20001"}]}
> rs.initiate(rsconf)
shard2:OTHER> rs.add("localhost:20002")
shard2:PRIMARY> rs.add("localhost:20003")
shard2:PRIMARY> rs.conf()
{
        "_id" : "shard2",
        "version" : 3,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:20001",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "localhost:20002",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "localhost:20003",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatTimeoutSecs" : 10,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
}

  

[root@Master cluster2]# mongod --replSet shard3 --dbpath shard3/node1 --logpath shard3/logs/node1.log --logappend --fork --port 30001
[root@Master cluster2]# mongod --replSet shard3 --dbpath shard3/node2 --logpath shard3/logs/node2.log --logappend --fork --port 30002
[root@Master cluster2]# mongod --replSet shard3 --dbpath shard3/node3 --logpath shard3/logs/node3.log --logappend --fork --port 30003
[root@Master cluster2]# mongo --port 30001
connecting to: 127.0.0.1:30001/test
> use admin
> rsconf={"_id" : "shard3","members" : [{"_id" : 0, "host" : "localhost:30001"}]}
> rs.initiate(rsconf)
shard3:OTHER> rs.add("localhost:30002")
shard3:PRIMARY> rs.add("localhost:30003")
shard3:PRIMARY> rs.conf()
{
        "_id" : "shard3",
        "version" : 3,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:30001",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 1,
                        "host" : "localhost:30002",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                },
                {
                        "_id" : 2,
                        "host" : "localhost:30003",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : 0,
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatTimeoutSecs" : 10,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                }
        }
}

Add the (replica set) shards

#Connect to a mongos and switch to admin; adding shards must be done through a router node

[root@Master cluster2]# mongo --port 40000
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:40000/test
mongos> use admin
switched to db admin
mongos> db.runCommand({"addShard":"shard1/localhost:10001"})
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({"addShard":"shard2/localhost:20001"})
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> db.runCommand({"addShard":"shard3/localhost:30001"})
{ "shardAdded" : "shard3", "ok" : 1 }
mongos> db.runCommand({listshards:1})
{
        "shards" : [
                {
                        "_id" : "shard1",
                        "host" : "shard1/localhost:10001,localhost:10002,localhost:10003"
                },
                {
                        "_id" : "shard2",
                        "host" : "shard2/localhost:20001,localhost:20002,localhost:20003"
                },
                {
                        "_id" : "shard3",
                        "host" : "shard3/localhost:30001,localhost:30002,localhost:30003"
                }
        ],
        "ok" : 1
}
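
Only one seed member per set was named in addShard; mongos discovers the remaining members on its own, which is why listshards already reports all three hosts for each shard. The sh.addShard() helper wraps the same command, and more seed members can be listed in the string (a sketch):

mongos> sh.addShard("shard2/localhost:20001,localhost:20002,localhost:20003")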

Enable sharding for databases and collections

Enable sharding on a database with the command:

> db.runCommand( { enablesharding : "<database name>"} );

Running this lets the database span shards; without this step, all of the database's data stays on a single shard

Once sharding is enabled for a database, its different collections can be placed on different shards

But each individual collection still lives entirely on one shard; to shard a single collection as well, some extra setup on the collection is needed

#For example, enable sharding on the test database; connect to a mongos process

[root@Master cluster2]# mongo --port 50000
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:50000/test
mongos> use admin
switched to db admin
mongos> db.runCommand({"enablesharding":"test"})
{ "ok" : 1 }

To shard an individual collection, give the collection a shard key with the following command:

> db.runCommand( { shardcollection : "<namespace>", key : { <field> : 1 } });

Note:  a. The system automatically creates an index on the shard key of a sharded collection (the user can also create it ahead of time)

         b. A sharded collection can have only one unique index, the one on the shard key; other unique indexes are not allowed

#Shard the collection test.yujx

mongos> db.runCommand({"shardcollection":"test.yujx","key":{"_id":1}})
{ "collectionsharded" : "test.yujx", "ok" : 1 }

Generate test data

mongos> use test
switched to db test
mongos> for(var i=1;i<=88888;i++) db.yujx.save({"id":i,"a":123456789,"b":888888888,"c":100000000})
WriteResult({ "nInserted" : 1 })
mongos> db.yujx.count()
88888
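
Inserting 88,888 documents with one save() per call costs a round trip each; the shell's Bulk API batches them and is much faster for a load like this (a sketch):

mongos> var bulk = db.yujx.initializeUnorderedBulkOp()
mongos> for(var i=1;i<=88888;i++) bulk.insert({"id":i,"a":123456789,"b":888888888,"c":100000000})
mongos> bulk.execute()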

Check the collection's shard distribution

mongos> db.yujx.stats()
{
        "sharded" : true,
        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
        "userFlags" : 1,
        "capped" : false,
        "ns" : "test.yujx",
        "count" : 88888,
        "numExtents" : 9,
        "size" : 9955456,
        "storageSize" : 22523904,
        "totalIndexSize" : 2918832,
        "indexSizes" : {
                "_id_" : 2918832
        },
        "avgObjSize" : 112,
        "nindexes" : 1,
        "nchunks" : 3,
        "shards" : {
                "shard1" : {
                        "ns" : "test.yujx",
                        "count" : 8,
                        "size" : 896,
                        "avgObjSize" : 112,
                        "numExtents" : 1,
                        "storageSize" : 8192,
                        "lastExtentSize" : 8192,
                        "paddingFactor" : 1,
                        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
                        "userFlags" : 1,
                        "capped" : false,
                        "nindexes" : 1,
                        "totalIndexSize" : 8176,
                        "indexSizes" : {
                                "_id_" : 8176
                        },
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("55d15366716d7504d5d74c4c")
                        }
                },
                "shard2" : {
                        "ns" : "test.yujx",
                        "count" : 88879,
                        "size" : 9954448,
                        "avgObjSize" : 112,
                        "numExtents" : 7,
                        "storageSize" : 22507520,
                        "lastExtentSize" : 11325440,
                        "paddingFactor" : 1,
                        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
                        "userFlags" : 1,
                        "capped" : false,
                        "nindexes" : 1,
                        "totalIndexSize" : 2902480,
                        "indexSizes" : {
                                "_id_" : 2902480
                        },
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("55d1543eabed7d6d4a71d25e")
                        }
                },
                "shard3" : {
                        "ns" : "test.yujx",
                        "count" : 1,
                        "size" : 112,
                        "avgObjSize" : 112,
                        "numExtents" : 1,
                        "storageSize" : 8192,
                        "lastExtentSize" : 8192,
                        "paddingFactor" : 1,
                        "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
                        "userFlags" : 1,
                        "capped" : false,
                        "nindexes" : 1,
                        "totalIndexSize" : 8176,
                        "indexSizes" : {
                                "_id_" : 8176
                        },
                        "ok" : 1,
                        "$gleStats" : {
                                "lastOpTime" : Timestamp(0, 0),
                                "electionId" : ObjectId("55d155346f36550e3c5f062c")
                        }
                }
        },
        "ok" : 1
}
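
Note how skewed the distribution is (8 / 88879 / 1 documents): with a monotonically increasing shard key such as the auto-generated ObjectId _id, every new insert falls into the chunk holding the current maximum, so one shard absorbs nearly all writes until the balancer splits and migrates chunks. A hashed shard key spreads inserts evenly from the start; a sketch (test.yujx2 is a made-up collection name for illustration):

mongos> // test.yujx2 is hypothetical, for illustration only
mongos> db.runCommand({"shardcollection":"test.yujx2","key":{"_id":"hashed"}})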

Check the database's sharding

mongos> db.printShardingStatus()
--- Sharding Status --- 
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("55d152a35348652fbc726a10")
}
  shards:
        {  "_id" : "shard1",  "host" : "shard1/localhost:10001,localhost:10002,localhost:10003" }
        {  "_id" : "shard2",  "host" : "shard2/localhost:20001,localhost:20002,localhost:20003" }
        {  "_id" : "shard3",  "host" : "shard3/localhost:30001,localhost:30002,localhost:30003" }
  balancer:
        Currently enabled:  yes
        Currently running:  yes
                Balancer lock taken at Sun Aug 16 2015 20:43:49 GMT-0700 (PDT) by Master.Hadoop:50000:1439781543:1804289383:Balancer:846930886
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                2 : Success
                1 : Failed with error could not acquire collection lock for test.yujx to migrate chunk [{ : MinKey },{ : MaxKey }) :: caused by :: Lock for migrating chunk [{ : MinKey }, { : MaxKey }) in test.yujx is taken., from shard1 to shard2
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "test",  "partitioned" : true,  "primary" : "shard1" }
                test.yujx
                        shard key: { "_id" : 1 }
                        chunks:
                                shard1  1
                                shard2  1
                                shard3  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("55d157cca0c90140e33a9342") } on : shard3 Timestamp(3, 0) 
                        { "_id" : ObjectId("55d157cca0c90140e33a9342") } -->> { "_id" : ObjectId("55d157cca0c90140e33a934a") } on : shard1 Timestamp(3, 1) 
                        { "_id" : ObjectId("55d157cca0c90140e33a934a") } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 Timestamp(2, 0) 

#Or query the config database through mongos

mongos> use config
switched to db config
mongos> db.shards.find()
{ "_id" : "shard1", "host" : "shard1/localhost:10001,localhost:10002,localhost:10003" }
{ "_id" : "shard2", "host" : "shard2/localhost:20001,localhost:20002,localhost:20003" }
{ "_id" : "shard3", "host" : "shard3/localhost:30001,localhost:30002,localhost:30003" }
mongos> db.databases.find()
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "shard1" }
mongos> db.chunks.find()
{ "_id" : "test.yujx-_id_MinKey", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("55d15738679c4d5f9108eba0"), "ns" : "test.yujx", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : ObjectId("55d157cca0c90140e33a9342") }, "shard" : "shard3" }
{ "_id" : "test.yujx-_id_ObjectId(‘55d157cca0c90140e33a9342‘)", "lastmod" : Timestamp(3, 1), "lastmodEpoch" : ObjectId("55d15738679c4d5f9108eba0"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("55d157cca0c90140e33a9342") }, "max" : { "_id" : ObjectId("55d157cca0c90140e33a934a") }, "shard" : "shard1" }
{ "_id" : "test.yujx-_id_ObjectId(‘55d157cca0c90140e33a934a‘)", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("55d15738679c4d5f9108eba0"), "ns" : "test.yujx", "min" : { "_id" : ObjectId("55d157cca0c90140e33a934a") }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard2" }

Single points of failure

Since this experiment was done to get acquainted with MongoDB, and simulating every failure would take too much time, the failure scenarios are not walked through one by one here; for an analysis of the failure scenarios, see:
http://blog.itpub.net/27000195/viewspace-1404402/
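
One drill that is easy to run in this layout: kill a shard's primary and watch the replica set elect a new one (a sketch; the pgrep pattern assumes shard1's primary is still the mongod on port 10001):

#assumes shard1's primary is still the mongod on port 10001
[root@Master cluster2]# kill $(pgrep -f "port 10001")
[root@Master cluster2]# mongo --port 10002 --eval "rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })"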


Original post (Chinese): http://www.cnblogs.com/yangmengdx3/p/4736086.html
