Database systems with large data sets and high throughput applications can challenge the capacity of a single server. High query rates can exhaust the CPU capacity of the server. Larger data sets exceed the storage capacity of a single machine. Finally, working set sizes larger than the system’s RAM stress the I/O capacity of disk drives.
MongoDB sharding addresses the problem that a single MongoDB instance can no longer keep up as data volume and read/write traffic grow. With sharding, MongoDB splits the data into multiple parts and distributes them across multiple shards. Each shard therefore handles fewer requests and stores less data, and as the cluster grows, both the total throughput and the total capacity of the cluster grow with it.
A sharded cluster has the following components: shards, query routers, and config servers.
Shards store the data. To provide high availability and data consistency, in a production sharded cluster, each shard is a replica set [1]. For more information on replica sets, see Replica Sets.
Query Routers, or mongos instances, interface with client applications and direct operations to the appropriate shard or shards. A client sends requests to a mongos, which then routes the operations to the shards and returns the results to the clients. A sharded cluster can contain more than one mongos to divide the client request load, and most sharded clusters have more than one mongos for this reason.
If you deploy multiple mongos instances, a proxy such as HAProxy or LVS can be placed in front of them to spread client requests across the routers; the proxy must be configured for client affinity so that requests from the same client always reach the same mongos. mongos instances are typically deployed on the application servers themselves.
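For example, a client application never talks to the shards directly; it only connects to one or more mongos routers. A minimal sketch using Python and PyMongo, where the router hostnames mongos1 and mongos2 and the mydb.users collection are hypothetical:

```python
# Requires: pip install pymongo
from pymongo import MongoClient

# Connect to the mongos routers, never to the shards directly.
# "mongos1"/"mongos2" are hypothetical hostnames; PyMongo discovers
# the sharded topology and spreads requests across the routers.
client = MongoClient("mongodb://mongos1:27017,mongos2:27017/")

# Reads and writes look exactly like they do against a standalone mongod;
# the mongos routes each operation to the appropriate shard or shards.
client.mydb.users.insert_one({"user_id": 1, "name": "alice"})
print(client.mydb.users.find_one({"user_id": 1}))
```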
Config servers store the cluster’s metadata. This data contains a mapping of the cluster’s data set to the shards. The query router uses this metadata to target operations to specific shards.
In other words, the config servers hold the cluster’s metadata (routing and shard configuration). A mongos does not persist the shard and routing information itself; it only caches it in memory, while the config servers are the authoritative store. When a mongos first starts, or is restarted, it loads its configuration from the config servers, and when that configuration changes, all mongos instances are notified so they can refresh their state and keep routing correctly. This metadata is critical: if all config servers are down, the entire sharded cluster becomes unusable. Production deployments therefore always run multiple config servers.
Changed in version 3.2: Starting in MongoDB 3.2, config servers for sharded clusters can be deployed as a replica set. The replica set config servers must run the WiredTiger storage engine. MongoDB 3.2 deprecates the use of three mirrored mongod instances for config servers.
Data Partitioning
MongoDB distributes data, or shards, at the collection level. Sharding partitions a collection’s data by the shard key.
Shard Keys
To shard a collection, you need to select a shard key. A shard key is either an indexed field or an indexed compound field that exists in every document in the collection. MongoDB divides the shard key values into chunks and distributes the chunks evenly across the shards. To divide the shard key values into chunks, MongoDB uses either range based partitioning or hash based partitioning. See the Shard Key documentation for more information.
Note that once a collection has been sharded, its shard key cannot be changed.
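As a concrete illustration, sharding a collection comes down to two admin commands run against a mongos: enableSharding for the database, then shardCollection with the chosen shard key. A minimal PyMongo sketch, using a hypothetical mydb.users collection sharded on user_id:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos1:27017/")  # must connect to a mongos

# Enable sharding for the database, then shard the collection.
# The shard key field must be backed by an index; shardCollection
# creates the index automatically if the collection is empty.
client.admin.command("enableSharding", "mydb")
client.admin.command("shardCollection", "mydb.users",
                     key={"user_id": 1})  # range-based shard key
```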
Range Based Sharding
For range-based sharding, MongoDB divides the data set into ranges determined by the shard key values to provide range based partitioning. Consider a numeric shard key: If you visualize a number line that goes from negative infinity to positive infinity, each value of the shard key falls at some point on that line. MongoDB partitions this line into smaller, non-overlapping ranges called chunks where a chunk is a range of values from some minimum value to some maximum value.
Given a range based partitioning system, documents with “close” shard key values are likely to be in the same chunk, and therefore on the same shard.
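The chunk lookup itself is just an ordered range search over the split points. The following toy sketch (plain Python, not actual MongoDB code, with made-up split points) mimics range partitioning of a numeric shard key and shows that close key values fall into the same chunk:

```python
import bisect

# Hypothetical chunk boundaries over a numeric shard key:
# chunk 0: (-inf, 10), chunk 1: [10, 25), chunk 2: [25, +inf)
split_points = [10, 25]

def chunk_for(shard_key_value):
    """Return the index of the chunk whose range contains the value."""
    return bisect.bisect_right(split_points, shard_key_value)

print(chunk_for(11), chunk_for(12))  # 1 1  -- "close" keys share a chunk
print(chunk_for(99))                 # 2
```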
Hash Based Sharding
For hash based partitioning, MongoDB computes a hash of a field’s value, and then uses these hashes to create chunks.
With hash based partitioning, two documents with “close” shard key values are unlikely to be part of the same chunk. This ensures a more random distribution of a collection in the cluster.
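In PyMongo this is just a different shard key specification: the value "hashed" instead of 1. A sketch against a hypothetical mydb.events collection:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos1:27017/")

# Shard on a hash of user_id rather than on its raw value; documents
# with adjacent user_id values will usually land in different chunks.
client.admin.command("shardCollection", "mydb.events",
                     key={"user_id": "hashed"})
```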
Performance Distinctions between Range and Hash Based Partitioning
Range based partitioning supports more efficient range queries. Given a range query on the shard key, the query router can easily determine which chunks overlap that range and route the query to only those shards that contain these chunks.
However, range based partitioning can result in an uneven distribution of data, which may negate some of the benefits of sharding. For example, if the shard key is a linearly increasing field, such as time, then all requests for a given time range will map to the same chunk, and thus the same shard. In this situation, a small set of shards may receive the majority of requests and the system would not scale very well.
Hash based partitioning, by contrast, ensures an even distribution of data at the expense of efficient range queries. Hashed key values result in random distribution of data across chunks and therefore shards. But random distribution makes it more likely that a range query on the shard key will not be able to target a few shards but would more likely query every shard in order to return a result.
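The write-hotspot problem with a monotonically increasing shard key is easy to see in a toy simulation (plain Python, not MongoDB code): with range partitioning every recent key routes to the last chunk, while hashing spreads the same keys across all chunks:

```python
import hashlib
from collections import Counter

NUM_CHUNKS = 4

def range_chunk(key, max_key=1000):
    # Toy range partitioning: the key space [0, max_key) is cut
    # into NUM_CHUNKS equal ranges.
    return min(key * NUM_CHUNKS // max_key, NUM_CHUNKS - 1)

def hash_chunk(key):
    # Toy hash partitioning: route on a hash of the key instead.
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % NUM_CHUNKS

# A monotonically increasing key (e.g. a timestamp): the most recent
# inserts all hit the last range chunk, while hashing spreads them out.
recent_keys = range(900, 1000)
print(Counter(range_chunk(k) for k in recent_keys))  # {3: 100} -- hot chunk
print(Counter(hash_chunk(k) for k in recent_keys))   # roughly even
```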
Customized Data Distribution with Tag Aware Sharding
MongoDB allows administrators to direct the balancing policy using tag aware sharding. Administrators create and associate tags with ranges of the shard key, and then assign those tags to the shards. Then, the balancer migrates tagged data to the appropriate shards and ensures that the cluster always enforces the distribution of data that the tags describe.
Tags are the primary mechanism to control the behavior of the balancer and the distribution of chunks in a cluster. Most commonly, tag aware sharding serves to improve the locality of data for sharded clusters that span multiple data centers.
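In the mongo shell, administrators do this with the sh.addShardTag() and sh.addTagRange() helpers, which write to the config database. The PyMongo sketch below shows roughly what those helpers record, assuming a hypothetical shard named shard0000 and a records.users collection sharded on zipcode; in practice you would normally call the shell helpers rather than touching the config database directly:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos1:27017/")
config = client.config

# Tag a shard (what sh.addShardTag("shard0000", "NYC") does):
config.shards.update_one({"_id": "shard0000"},
                         {"$addToSet": {"tags": "NYC"}})

# Associate a shard key range with the tag (what sh.addTagRange() does);
# the balancer keeps chunks in this range on "NYC"-tagged shards.
ns = "records.users"
min_key, max_key = {"zipcode": "10001"}, {"zipcode": "10300"}
config.tags.replace_one(
    {"_id": {"ns": ns, "min": min_key}},
    {"_id": {"ns": ns, "min": min_key},
     "ns": ns, "min": min_key, "max": max_key, "tag": "NYC"},
    upsert=True)
```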
Maintaining a Balanced Data Distribution
The addition of new data or the addition of new servers can result in data distribution imbalances within the cluster, such as when a particular shard contains significantly more chunks than another shard, or when a chunk is significantly larger than the other chunks.
MongoDB ensures a balanced cluster using two background processes: splitting and the balancer.
Splitting
Splitting is a background process that keeps chunks from growing too large. When a chunk grows beyond a specified chunk size, MongoDB splits the chunk in half. Inserts and updates trigger splits. Splits are an efficient meta-data change. To create splits, MongoDB does not migrate any data or affect the shards.
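The split threshold is the chunk size, 64 MB by default, which is stored in the config database's settings collection. A hedged PyMongo sketch that lowers it to a hypothetical 32 MB, equivalent to db.settings.save({_id: "chunksize", value: 32}) run against the config database in the mongo shell:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos1:27017/")

# The default chunk size is 64 MB; the setting lives in config.settings
# and its value is expressed in megabytes.
client.config.settings.update_one(
    {"_id": "chunksize"},
    {"$set": {"value": 32}},
    upsert=True)
```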
Balancing
The balancer is a background process that manages chunk migrations. The balancer can run from any of the mongos instances in a cluster.
When the distribution of a sharded collection in a cluster is uneven, the balancer process migrates chunks from the shard that has the largest number of chunks to the shard with the least number of chunks until the collection balances. For example: if collection users has 100 chunks on shard 1 and 50 chunks on shard 2, the balancer will migrate chunks from shard 1 to shard 2 until the collection achieves balance.
The shards manage chunk migrations as a background operation between an origin shard and a destination shard. During a chunk migration, the destination shard is sent all the current documents in the chunk from the origin shard. Next, the destination shard captures and applies all changes made to the data during the migration process. Finally, the metadata regarding the location of the chunk on config server is updated.
If there’s an error during the migration, the balancer aborts the process leaving the chunk unchanged on the origin shard. MongoDB removes the chunk’s data from the origin shard after the migration completes successfully.
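The balancer's on/off switch also lives in the config database; the shell helper sh.stopBalancer() effectively sets stopped: true on the balancer settings document (and waits for in-flight migrations to finish). A hedged PyMongo sketch that checks and then disables the balancer, e.g. for a maintenance window:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos1:27017/")
settings = client.config.settings

# stopped: true means the balancer is disabled.
doc = settings.find_one({"_id": "balancer"}) or {}
print("balancer enabled:", not doc.get("stopped", False))

# Disable the balancer (what sh.stopBalancer() effectively records):
settings.update_one({"_id": "balancer"},
                    {"$set": {"stopped": True}}, upsert=True)
```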
Adding shards to or removing shards from a sharded cluster affects the balance of the entire cluster.
Adding and Removing Shards from the Cluster
Adding a shard to a cluster creates an imbalance since the new shard has no chunks. While MongoDB begins migrating data to the new shard immediately, it can take some time before the cluster balances.
When removing a shard, the balancer migrates all chunks from a shard to other shards. After migrating all data and updating the meta data, you can safely remove the shard.
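Both operations are admin commands issued through a mongos: addShard and removeShard. A PyMongo sketch with a hypothetical replica-set shard rs1; note that removeShard is polled by reissuing the command until its state reaches "completed":

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos1:27017/")

# Add a shard that is a replica set ("rs1" and its host are hypothetical).
client.admin.command("addShard", "rs1/shardhost1:27018")

# Start draining a shard; repeat the command to poll its progress.
# The response's "state" moves from "started" through "ongoing" to
# "completed"; only then is the shard actually removed.
print(client.admin.command("removeShard", "rs1"))
```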
Source: http://11472293.blog.51cto.com/11462293/1793822