| Hostname | IP address | Role | Purpose |
|---|---|---|---|
| mongodb1 | 192.168.56.80 | config server | Configuration server |
| mongodb2 | 192.168.56.81 | mongod server / mongos | Shard server / query router |
| mongodb3 | 192.168.56.82 | mongod server | Shard server |
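The nodes are addressed by hostname throughout, so each machine must be able to resolve the others. Assuming no DNS on this lab network, a minimal /etc/hosts sketch for all three machines:

# append to /etc/hosts on mongodb1, mongodb2 and mongodb3
192.168.56.80   mongodb1
192.168.56.81   mongodb2
192.168.56.82   mongodb3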
[root@mongodb1 db]# mongod -h
Options:
...
Sharding options:
  --configsvr           declare this is a config db of a cluster;
                        default port 27019; default dir /data/configdb
  --configsvrMode arg   Controls what config server protocol is in use.
                        When set to "sccc" keeps server in legacy
                        SyncClusterConnection mode even when the service
                        is running as a replSet
  --shardsvr            declare this is a shard db of a cluster;
                        default port 27018
[root@mongodb1 db]# cat /etc/mongod.conf
port = 27017
dbpath = /data/db
logpath = /data/log/config.log
logappend = true
fork = true
configsvr = true
#replSet = configreplset
[root@mongodb1 db]# mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3767
child process started successfully, parent exiting
rs.initiate( {
  _id: "configReplSet",
  configsvr: true,
  members: [
    { _id: 0, host: "<host1>:<port1>" },
    { _id: 1, host: "<host2>:<port2>" },
    { _id: 2, host: "<host3>:<port3>" }
  ]
} )
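The hosts and ports above are placeholders. A hypothetical instantiation for a three-member config server replica set on these lab machines (this walkthrough actually runs a single config server on port 27017, so the hosts and the default config-server port 27019 below are illustrative assumptions, not what was deployed here):

rs.initiate( {
  _id: "configReplSet",
  configsvr: true,
  members: [
    { _id: 0, host: "mongodb1:27019" },
    { _id: 1, host: "mongodb2:27019" },
    { _id: 2, host: "mongodb3:27019" }
  ]
} )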
[root@mongodb2 ~]# mongos -h
Options:
...
Sharding options:
  --configdb arg        Connection string for communicating with config
                        servers. Acceptable forms:
                        CSRS: <config replset name>/<host1:port>,<host2:port>,[...]
[root@mongodb2 ~]# cat /etc/mongos.conf
logpath = /data/log/mongos.log
logappend = true
configdb = mongodb1:27017
port = 27018
fork = true
[root@mongodb2 ~]# mongos -f /etc/mongos.conf
2016-06-21T20:27:15.748+0800 W SHARDING [main] Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.
about to fork child process, waiting until server is ready for connections.
forked process: 3878
child process started successfully, parent exiting
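This warning is expected here, since the test setup runs a single config server. Using the CSRS form shown in the mongos help output above, a production-style three-member configdb line might instead read as follows (hypothetical hosts, assuming the config servers were started as the configReplSet replica set):

configdb = configReplSet/mongodb1:27019,mongodb2:27019,mongodb3:27019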
[root@mongodb2 ~]# cat /etc/mongod.conf
port = 27017
dbpath = /data/db
logpath = /data/log/mongod.log
logappend = true
fork = true
#replSet = rs0
oplogSize = 500
shardsvr = true
[root@mongodb2 db]# mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3944
child process started successfully, parent exiting
[root@mongodb2 db]# mongo 127.0.0.1:27018
MongoDB shell version: 3.2.6
connecting to: 127.0.0.1:27018/test
Server has startup warnings:
2016-06-21T20:27:15.759+0800 I CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2016-06-21T20:27:15.760+0800 I CONTROL  [main]
mongos>
mongos> sh.addShard("mongodb2:27017")
{ "shardAdded" : "shard0000", "ok" : 1 }
mongos> sh.addShard("mongodb3:27017")
{ "shardAdded" : "shard0001", "ok" : 1 }
mongos> show dbs
config  0.000GB
mongos> sh.enableSharding("test");
{ "ok" : 1 }
mongos> use test
switched to db test
mongos> for(i=0;i<100;i++){
...   db.users.insert(
...     {
...       "i":i,
...       "username":"user"+i,
...       "age":Math.floor(Math.random()*120),
...       "created":new Date()
...     }
...   );
... }
WriteResult({ "nInserted" : 1 })
mongos> db.users.createIndex({i:1,username:1})
{
    "raw" : {
        "mongodb2:27017" : {
            "createdCollectionAutomatically" : false,
            "numIndexesBefore" : 1,
            "numIndexesAfter" : 2,
            "ok" : 1
        }
    },
    "ok" : 1
}
mongos> sh.shardCollection("test.users",{"i":1,"username":1})
{ "collectionsharded" : "test.users", "ok" : 1 }
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("576932a3f43b00b3956fecb7")
  }
  shards:
    {  "_id" : "shard0000",  "host" : "mongodb2:27017" }
    {  "_id" : "shard0001",  "host" : "mongodb3:27017" }
  active mongoses:
    "3.2.6" : 1
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    {  "_id" : "test",  "primary" : "shard0000",  "partitioned" : true }
      test.users
        shard key: { "i" : 1, "username" : 1 }
        unique: false
        balancing: true
        chunks:
          shard0000  1
        { "i" : { "$minKey" : 1 }, "username" : { "$minKey" : 1 } } -->> { "i" : { "$maxKey" : 1 }, "username" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)
mongos> for(i=100;i<200000;i++){
...   db.users.insert(
...     {
...       "i":i,
...       "username":"user"+i,
...       "age":Math.floor(Math.random()*120),
...       "created":new Date()
...     }
...   );
... }
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("576932a3f43b00b3956fecb7")
  }
  shards:
    {  "_id" : "shard0000",  "host" : "mongodb2:27017" }
    {  "_id" : "shard0001",  "host" : "mongodb3:27017" }
  active mongoses:
    "3.2.6" : 1
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
      7 : Success
  databases:
    {  "_id" : "test",  "primary" : "shard0000",  "partitioned" : true }
      test.users
        shard key: { "i" : 1, "username" : 1 }
        unique: false
        balancing: true
        chunks:
          shard0000  8
          shard0001  7
        { "i" : { "$minKey" : 1 }, "username" : { "$minKey" : 1 } } -->> { "i" : 1, "username" : "user1" } on : shard0001 Timestamp(2, 0)
        { "i" : 1, "username" : "user1" } -->> { "i" : 13, "username" : "user13" } on : shard0001 Timestamp(3, 0)
        { "i" : 13, "username" : "user13" } -->> { "i" : 20, "username" : "user20" } on : shard0001 Timestamp(4, 0)
        { "i" : 20, "username" : "user20" } -->> { "i" : 27, "username" : "user27" } on : shard0001 Timestamp(5, 0)
        { "i" : 27, "username" : "user27" } -->> { "i" : 34, "username" : "user34" } on : shard0001 Timestamp(6, 0)
        { "i" : 34, "username" : "user34" } -->> { "i" : 41, "username" : "user41" } on : shard0001 Timestamp(7, 0)
        { "i" : 41, "username" : "user41" } -->> { "i" : 48, "username" : "user48" } on : shard0001 Timestamp(8, 0)
        { "i" : 48, "username" : "user48" } -->> { "i" : 55, "username" : "user55" } on : shard0000 Timestamp(8, 1)
        { "i" : 55, "username" : "user55" } -->> { "i" : 62, "username" : "user62" } on : shard0000 Timestamp(1, 9)
        { "i" : 62, "username" : "user62" } -->> { "i" : 69, "username" : "user69" } on : shard0000 Timestamp(1, 10)
        { "i" : 69, "username" : "user69" } -->> { "i" : 76, "username" : "user76" } on : shard0000 Timestamp(1, 11)
        { "i" : 76, "username" : "user76" } -->> { "i" : 83, "username" : "user83" } on : shard0000 Timestamp(1, 12)
        { "i" : 83, "username" : "user83" } -->> { "i" : 90, "username" : "user90" } on : shard0000 Timestamp(1, 13)
        { "i" : 90, "username" : "user90" } -->> { "i" : 97, "username" : "user97" } on : shard0000 Timestamp(1, 14)
        { "i" : 97, "username" : "user97" } -->> { "i" : { "$maxKey" : 1 }, "username" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 15)
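The same chunk distribution can be read more compactly from the config database through mongos; a small sketch:

// count chunks per shard for test.users; the totals should match the
// "shard0000  8 / shard0001  7" figures in sh.status() above
db.getSiblingDB("config").chunks.aggregate([
  { $match: { ns: "test.users" } },
  { $group: { _id: "$shard", nChunks: { $sum: 1 } } }
])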
for ( var x=97; x<97+26; x++ ){
  for( var y=97; y<97+26; y+=6 ) {
    var prefix = String.fromCharCode(x) + String.fromCharCode(y);
    db.runCommand( { split : "myapp.users" , middle : { email : prefix } } );
  }
}
sh.splitFind( "records.people", { "zipcode": "63109" } )
sh.splitAt( "records.people", { "zipcode": "63109" } )
db.adminCommand( { moveChunk : "myapp.users",
                   find : { username : "smith" },
                   to : "mongodb-shard3.example.net" } )
var shServer = [ "sh0.example.net", "sh1.example.net", "sh2.example.net", "sh3.example.net", "sh4.example.net" ];
for ( var x=97; x<97+26; x++ ){
  for( var y=97; y<97+26; y+=6 ) {
    var prefix = String.fromCharCode(x) + String.fromCharCode(y);
    db.adminCommand({ moveChunk : "myapp.users", find : { email : prefix }, to : shServer[(y-97)/6] })
  }
}
use test
db.runCommand({
  "dataSize": "test.users",
  "keyPattern": { username: 1 },
  "min": { "username": "user36583" },
  "max": { "username": "user43229" }
})
{ "size" : 0, "numObjects" : 0, "millis" : 0, "ok" : 1 }
db.runCommand( { mergeChunks: "test.users",
                 bounds: [ { "username": "user68982" },
                           { "username": "user95197" } ]
} )
{ "ok" : 1 }
use config
db.settings.save( { _id:"chunksize", value: <sizeInMB> } )
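Here `<sizeInMB>` is a placeholder. For example, to lower the chunk size from the default 64 MB to 32 MB:

use config
db.settings.save( { _id: "chunksize", value: 32 } )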
db.runCommand( { shardCollection : "test.users" , key : { email : 1 } , unique : true } );
db.fs.chunks.createIndex( { files_id : 1 , n : 1 } )
db.runCommand( { shardCollection : "test.fs.chunks" , key : { files_id : 1 , n : 1 } } )
db.runCommand( { shardCollection : "test.fs.chunks" , key : { files_id : 1 } } )
mongos> use config
switched to db config
mongos> show tables
actionlog
changelog
chunks
collections
databases
lockpings
locks
mongos
settings
shards
tags
version
mongos> db.databases.find()
{ "_id" : "test", "primary" : "shard0000", "partitioned" : true }
mongos> use admin
switched to db admin
mongos> db.runCommand({listShards:1})
{
    "shards" : [
        {
            "_id" : "shard0000",
            "host" : "mongodb2:27017"
        },
        {
            "_id" : "shard0001",
            "host" : "mongodb3:27017"
        }
    ],
    "ok" : 1
}
mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("576932a3f43b00b3956fecb7")
  }
  shards:
    {  "_id" : "shard0000",  "host" : "mongodb2:27017" }
    {  "_id" : "shard0001",  "host" : "mongodb3:27017" }
  active mongoses:
    "3.2.6" : 1
  balancer:
    Currently enabled:  yes
    Currently running:  no
    Failed balancer rounds in last 5 attempts:  0
    Migration Results for the last 24 hours:
      7 : Success
  databases:
    {  "_id" : "test",  "primary" : "shard0000",  "partitioned" : true }
      test.users
        shard key: { "i" : 1, "username" : 1 }
        unique: false
        balancing: true
        chunks:
          shard0000  8
          shard0001  7
        { "i" : { "$minKey" : 1 }, "username" : { "$minKey" : 1 } } -->> { "i" : 1, "username" : "user1" } on : shard0001 Timestamp(2, 0)
        { "i" : 1, "username" : "user1" } -->> { "i" : 13, "username" : "user13" } on : shard0001 Timestamp(3, 0)
        { "i" : 13, "username" : "user13" } -->> { "i" : 20, "username" : "user20" } on : shard0001 Timestamp(4, 0)
        { "i" : 20, "username" : "user20" } -->> { "i" : 27, "username" : "user27" } on : shard0001 Timestamp(5, 0)
        { "i" : 27, "username" : "user27" } -->> { "i" : 34, "username" : "user34" } on : shard0001 Timestamp(6, 0)
        { "i" : 34, "username" : "user34" } -->> { "i" : 41, "username" : "user41" } on : shard0001 Timestamp(7, 0)
        { "i" : 41, "username" : "user41" } -->> { "i" : 48, "username" : "user48" } on : shard0001 Timestamp(8, 0)
        { "i" : 48, "username" : "user48" } -->> { "i" : 55, "username" : "user55" } on : shard0000 Timestamp(8, 1)
        { "i" : 55, "username" : "user55" } -->> { "i" : 62, "username" : "user62" } on : shard0000 Timestamp(1, 9)
        { "i" : 62, "username" : "user62" } -->> { "i" : 69, "username" : "user69" } on : shard0000 Timestamp(1, 10)
        { "i" : 69, "username" : "user69" } -->> { "i" : 76, "username" : "user76" } on : shard0000 Timestamp(1, 11)
        { "i" : 76, "username" : "user76" } -->> { "i" : 83, "username" : "user83" } on : shard0000 Timestamp(1, 12)
        { "i" : 83, "username" : "user83" } -->> { "i" : 90, "username" : "user90" } on : shard0000 Timestamp(1, 13)
        { "i" : 90, "username" : "user90" } -->> { "i" : 97, "username" : "user97" } on : shard0000 Timestamp(1, 14)
        { "i" : 97, "username" : "user97" } -->> { "i" : { "$maxKey" : 1 }, "username" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 15)
mongos> sh.getBalancerState()
true
use config
sh.setBalancerState( true )
db.settings.update(
   { _id: "balancer" },
   { $set: { activeWindow : { start : "<start-time>", stop : "<stop-time>" } } },
   { upsert: true }
)
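For instance, to allow migrations only during a hypothetical overnight window (the times are HH:MM strings, evaluated on the config server's clock):

use config
sh.setBalancerState( true )
db.settings.update(
   { _id: "balancer" },
   { $set: { activeWindow : { start : "23:00", stop : "06:00" } } },
   { upsert: true }
)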
use config
db.settings.update({ _id : "balancer" }, { $unset : { activeWindow : true } })
mongos> sh.stopBalancer()
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
mongos> sh.getBalancerState()
false
use config
while( sh.isBalancerRunning() ) {
  print("waiting...");
  sleep(1000);
}
db.settings.update( { _id: "balancer" }, { $set : { stopped: true } } , { upsert: true } )
mongos> sh.setBalancerState(true)
db.settings.update( { _id: "balancer" }, { $set : { stopped: false } } , { upsert: true } )
mongos> sh.disableBalancing("test.users")
mongos> sh.enableBalancing("test.users")
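Per-collection balancing state is recorded in the noBalance field of the collection's document in config.collections; a quick way to check (a sketch):

// "noBalance" : true means balancing is currently disabled for this collection
db.getSiblingDB("config").collections.findOne({ _id: "test.users" })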
mongos> db.adminCommand( { listShards: 1 } )
{
    "shards" : [
        {
            "_id" : "shard0000",
            "host" : "mongodb2:27017"
        },
        {
            "_id" : "shard0001",
            "host" : "mongodb3:27017"
        }
    ],
    "ok" : 1
}
use admin
db.runCommand( { removeShard: "mongodb0" } )
{"msg" : "draining ongoing","state" : "ongoing","remaining" : {"chunks" : 42,"dbs" : 1},"ok" : 1}
mongos> db.runCommand( { removeShard: "shard0001" } )
{
    "msg" : "removeshard completed successfully",
    "state" : "completed",
    "shard" : "shard0001",
    "ok" : 1
}
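Draining can take a long time, and removeShard may be issued repeatedly; each call simply reports the current progress. A hedged polling sketch:

// re-issue removeShard until the drain reports "completed";
// repeating the command does not restart the drain
var res;
do {
  res = db.getSiblingDB("admin").runCommand({ removeShard: "shard0001" });
  printjson(res);
  sleep(5000);  // pause 5 seconds between polls
} while (res.ok === 1 && res.state !== "completed");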
db.runCommand( { movePrimary: "products", to: "mongodb1" })
{ "primary" : "mongodb1", "ok" : 1 }
| Name | Description |
|---|---|
| sh._adminCommand() | Runs a database command against the admin database, like db.runCommand(), but guarantees it runs only on a mongos. |
| sh.getBalancerLockDetails() | Reports on the active balancer lock, if it exists. |
| sh._checkFullName() | Tests a namespace to determine whether it is well formed. |
| sh._checkMongos() | Tests whether the mongo shell is connected to a mongos instance. |
| sh._lastMigration() | Reports on the last chunk migration. |
| sh.addShard() | Adds a shard to the cluster. |
| sh.addShardTag() | Associates a shard with a tag, to support tag-aware sharding. |
| sh.addTagRange() | Associates a range of shard key values with a tag, to support tag-aware sharding. |
| sh.removeTagRange() | Removes an association between a range of shard keys and a shard tag. Use to manage tag-aware sharding. |
| sh.disableBalancing() | Disables balancing for a single collection in a sharded database; does not affect the balancing of other collections in the cluster. |
| sh.enableBalancing() | Re-enables balancing for a collection previously disabled with sh.disableBalancing(). |
| sh.enableSharding() | Enables sharding on the specified database. |
| sh.getBalancerHost() | Returns the name of the mongos responsible for the balancing process. |
| sh.getBalancerState() | Returns a boolean reporting whether the balancer is enabled. |
| sh.help() | Returns help text for the sh methods. |
| sh.isBalancerRunning() | Returns a boolean reporting whether the balancer is currently migrating chunks. |
| sh.moveChunk() | Migrates a chunk in a sharded cluster. |
| sh.removeShardTag() | Removes the association between a shard and a tag. |
| sh.setBalancerState() | Enables or disables the balancer, which migrates chunks between shards. |
| sh.shardCollection() | Enables sharding for a collection. |
| sh.splitAt() | Divides an existing chunk into two chunks at the specified shard key value. |
| sh.splitFind() | Divides an existing chunk that contains a document matching a query into two roughly equal chunks. |
| sh.startBalancer() | Enables the balancer and waits for balancing to start. |
| sh.status() | Reports the status of the sharded cluster, like db.printShardingStatus(). |
| sh.stopBalancer() | Disables the balancer and waits for any in-progress balancing rounds to complete. |
| sh.waitForBalancer() | Internal. Waits for a change in balancer state. |
| sh.waitForBalancerOff() | Internal. Waits until the balancer stops running. |
| sh.waitForDLock() | Internal. Waits for the specified distributed lock in the sharded cluster. |
| sh.waitForPingChange() | Internal. Waits for a change in ping state from one of the mongos in the sharded cluster. |
Original article: http://blog.csdn.net/su377486/article/details/51736846