jredis is the Java client for Redis; it routes requests across nodes through sharding, and I had long been curious how that sharding actually works. Digging through the source shows that the so-called sharding is simply consistent hashing, and a fairly compact implementation at that. The main class is redis.clients.util.Sharded&lt;R, S&gt;; I have annotated the key parts below:
```java
public class Sharded<R, S extends ShardInfo<R>> { // S wraps a Redis node's info, e.g. name and weight

    public static final int DEFAULT_WEIGHT = 1;   // default weight is 1
    private TreeMap<Long, S> nodes;               // holds the virtual nodes
    private final Hashing algo;                   // the hash algorithm
    ......

    public Sharded(List<S> shards, Hashing algo, Pattern tagPattern) {
        this.algo = algo;
        this.tagPattern = tagPattern;
        initialize(shards);
    }

    private void initialize(List<S> shards) {
        // a sorted map backed by a red-black tree, ordered by key;
        // keys are longs, so the ring can hold up to 2^32 slots
        nodes = new TreeMap<Long, S>();
        for (int i = 0; i != shards.size(); ++i) {
            final S shardInfo = shards.get(i);
            if (shardInfo.getName() == null)
                // each real Redis node maps to many virtual nodes; hashing the
                // virtual node names spreads them evenly over the 2^32 integers
                for (int n = 0; n < 160 * shardInfo.getWeight(); n++) {
                    nodes.put(this.algo.hash("SHARD-" + i + "-NODE-" + n), shardInfo);
                }
            else
                // same idea, but the virtual node names are derived
                // from the node's own name and weight
                for (int n = 0; n < 160 * shardInfo.getWeight(); n++) {
                    nodes.put(this.algo.hash(shardInfo.getName() + "*" + shardInfo.getWeight() + n), shardInfo);
                }
            resources.put(shardInfo, shardInfo.createResource());
        }
    }

    /**
     * Hash the key and look up the real node S behind it.
     */
    public S getShardInfo(byte[] key) {
        // everything on the ring at or after the key's hash
        SortedMap<Long, S> tail = nodes.tailMap(algo.hash(key));
        if (tail.isEmpty()) {
            // nothing past the key: wrap around to the first virtual node
            return nodes.get(nodes.firstKey());
        }
        return tail.get(tail.firstKey()); // the first virtual node clockwise
    }
    ......
}
```
The whole algorithm can be summarized as follows. First, build a ring of 2^32 integers. Hash each virtual node onto the ring; this indirectly places the real nodes on the ring as well, since every virtual node points back at a real node. To route a key, hash it and walk clockwise around the ring to the nearest virtual node; the real node behind that virtual node is where the key belongs.
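Stripped of the Jedis plumbing, the lookup described above fits in a few lines using only the JDK's TreeMap. This is a minimal sketch, not Jedis's actual implementation; the node names, the virtual-node naming scheme, and the MD5-based hash here are my own assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class MiniRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    public MiniRing(List<String> nodes) {
        for (String node : nodes) {
            // 160 virtual nodes per real node, the same ratio the Jedis code uses
            for (int n = 0; n < 160; n++) {
                ring.put(hash(node + "-VN-" + n), node);
            }
        }
    }

    public String getNode(String key) {
        // first virtual node clockwise from the key's hash, wrapping at the end
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        Long slot = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(slot);
    }

    // fold the first 8 bytes of an MD5 digest into a long
    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xFF);
            return h;
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The same key always lands on the same node, and removing one node only remaps the keys that were on its virtual nodes; everything else stays put.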
The core idea of consistent hashing is to insert a layer of virtual nodes so that real nodes can be added or removed without disturbing the overall mapping. Solving a problem by adding a layer of indirection should feel familiar: layered designs in software, the operating system mediating between applications and hardware, the JVM giving Java its cross-platform portability.
One question worth asking is how many virtual nodes each real node should get. Readers who looked closely at the code will have noticed the constant 160. It is an empirical value: too many virtual nodes hurts performance, too few hurts balance. The weight parameter then scales this per node, which is easy to understand: the more virtual nodes a real node owns, the higher the probability that a key lands on it.
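The effect of weight is easy to demonstrate. The sketch below (node names and key pattern are made up; it borrows the 160-virtual-nodes-per-unit-of-weight ratio from the code above) gives one node weight 2 and counts how 30,000 keys distribute across three nodes:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class WeightDemo {
    public static Map<String, Integer> distribute(int keys) {
        TreeMap<Long, String> ring = new TreeMap<>();
        // "heavy" gets weight 2 (320 virtual nodes), the others weight 1 (160 each)
        String[] nodes = {"heavy", "light1", "light2"};
        int[] weights = {2, 1, 1};
        for (int i = 0; i < nodes.length; i++)
            for (int n = 0; n < 160 * weights[i]; n++)
                ring.put(hash(nodes[i] + "*" + n), nodes[i]);

        // route every key and tally which node it lands on
        Map<String, Integer> counts = new HashMap<>();
        for (int k = 0; k < keys; k++) {
            SortedMap<Long, String> tail = ring.tailMap(hash("key-" + k));
            String node = tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
            counts.merge(node, 1, Integer::sum);
        }
        return counts;
    }

    // fold the first 8 bytes of an MD5 digest into a long
    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xFF);
            return h;
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```

With an even hash, the weight-2 node should receive roughly half the keys, about twice what each weight-1 node gets.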
References
http://blog.csdn.net/sparkliang/article/details/5279393
http://my.oschina.net/u/90679/blog/188750
The djb2 algorithm:
```c
unsigned long hash(unsigned char *str)
{
    // the hash seed
    unsigned long hash = 5381;
    int c;

    // walk the string one character at a time
    while (c = *str++)
        // hash << 5 is hash * 32; adding hash again gives hash * 33,
        // then the character's ASCII code is added on top
        hash = ((hash << 5) + hash) + c; /* hash * 33 + c */

    return hash;
}
```
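A straight Java port of djb2 is below (a sketch; Java's long wraps modulo 2^64, which matches unsigned long on typical 64-bit C platforms):

```java
public class Djb2 {
    public static long hash(String str) {
        long h = 5381; // the magic seed
        for (char c : str.toCharArray()) {
            // (h << 5) + h == h * 32 + h == h * 33, then add the character code
            h = ((h << 5) + h) + c;
        }
        return h;
    }
}
```

For example, hash("a") is one round from the seed: 5381 * 33 + 97 = 177670.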
As for why 5381 was chosen as the seed, searching around turns up the following observations about this magic constant:
5381 is odd
5381 is prime
5381 is a deficient number
its binary digits are fairly evenly spread: 001/010/100/000/101
I am no algorithms expert, so I honestly cannot say how these properties affect the hash output; I would welcome an explanation from someone who knows.
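At least the first three properties are easy to check mechanically (a throwaway verification, nothing more):

```java
public class SeedCheck {
    public static boolean isOdd(int n) {
        return (n & 1) == 1;
    }

    public static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int i = 2; (long) i * i <= n; i++)
            if (n % i == 0) return false;
        return true;
    }

    // deficient: the sum of proper divisors is less than the number itself
    // (trivially true for any prime, whose only proper divisor is 1)
    public static boolean isDeficient(int n) {
        int sum = 0;
        for (int i = 1; i < n; i++)
            if (n % i == 0) sum += i;
        return sum < n;
    }
}
```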
Redis implements this djb hash as follows (the code lives in src/dict.c):
```c
// the hash seed, 5381 by default
static uint32_t dict_hash_function_seed = 5381;

// set the hash seed
void dictSetHashFunctionSeed(uint32_t seed) {
    dict_hash_function_seed = seed;
}

// get the hash seed
uint32_t dictGetHashFunctionSeed(void) {
    return dict_hash_function_seed;
}

/* And a case insensitive hash function (based on djb hash) */
unsigned int dictGenCaseHashFunction(const unsigned char *buf, int len) {
    // start from the seed
    unsigned int hash = (unsigned int)dict_hash_function_seed;

    // walk the buffer, repeatedly multiplying by 33 and adding
    // each character's ASCII code after lowercasing it
    while (len--)
        hash = ((hash << 5) + hash) + (tolower(*buf++)); /* hash * 33 + c */
    return hash;
}
```
Redis makes one small change to djb hash: it lowercases the input as it goes, so the result is independent of case.
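The case-folding trick is easy to reproduce in Java (a sketch; Java's int wraps modulo 2^32 just like C's unsigned int):

```java
public class CaseDjb {
    public static int hash(byte[] buf, int seed) {
        int h = seed;
        for (byte b : buf) {
            // lowercase the byte before mixing, so "KEY" and "key" collide on purpose
            h = ((h << 5) + h) + Character.toLowerCase((char) (b & 0xFF));
        }
        return h;
    }
}
```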
The MurmurHash2 algorithm:
```c
uint32_t MurmurHash2(const void *key, int len, uint32_t seed)
{
    // 'm' and 'r' are mixing constants generated offline.
    // They're not really 'magic', they just happen to work well.
    const uint32_t m = 0x5bd1e995;
    const int r = 24;

    // Initialize the hash to a 'random' value
    uint32_t h = seed ^ len;

    // Mix 4 bytes at a time into the hash
    const unsigned char *data = (const unsigned char *)key;
    while (len >= 4) {
        // each iteration folds 4 bytes into one uint32_t
        uint32_t k = *(uint32_t *)data;
        k *= m;
        k ^= k >> r;
        k *= m;
        h *= m;
        h ^= k;
        data += 4;
        len -= 4;
    }

    // Handle the last few bytes of the input array:
    // the 1-3 trailing bytes are shifted into place and mixed in
    switch (len) {
    case 3: h ^= data[2] << 16;
    case 2: h ^= data[1] << 8;
    case 1: h ^= data[0]; h *= m;
    };

    // Do a few final mixes of the hash to ensure the last few
    // bytes are well-incorporated.
    h ^= h >> 13;
    h *= m;
    h ^= h >> 15;

    return h;
}
```
Redis's own dictGenHashFunction in src/dict.c is the same MurmurHash2, with the seed taken from dict_hash_function_seed:

```c
unsigned int dictGenHashFunction(const void *key, int len) {
    /* 'm' and 'r' are mixing constants generated offline.
       They're not really 'magic', they just happen to work well. */
    uint32_t seed = dict_hash_function_seed;
    const uint32_t m = 0x5bd1e995;
    const int r = 24;

    /* Initialize the hash to a 'random' value */
    uint32_t h = seed ^ len;

    /* Mix 4 bytes at a time into the hash */
    const unsigned char *data = (const unsigned char *)key;

    while (len >= 4) {
        uint32_t k = *(uint32_t *)data;

        k *= m;
        k ^= k >> r;
        k *= m;

        h *= m;
        h ^= k;

        data += 4;
        len -= 4;
    }

    /* Handle the last few bytes of the input array */
    switch (len) {
    case 3: h ^= data[2] << 16;
    case 2: h ^= data[1] << 8;
    case 1: h ^= data[0]; h *= m;
    };

    /* Do a few final mixes of the hash to ensure the last few
     * bytes are well-incorporated. */
    h ^= h >> 13;
    h *= m;
    h ^= h >> 15;

    return (unsigned int)h;
}
```
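A Java port of the routine is below (a sketch; Java's int multiplication wraps modulo 2^32 exactly like uint32_t, and the >>> operator supplies the unsigned shifts that the C code gets from unsigned types):

```java
public class Murmur2 {
    public static int hash(byte[] data, int seed) {
        final int m = 0x5bd1e995;
        final int r = 24;
        int len = data.length;
        int h = seed ^ len;
        int i = 0;

        // mix 4 little-endian bytes at a time into the hash
        while (len >= 4) {
            int k = (data[i] & 0xFF)
                  | ((data[i + 1] & 0xFF) << 8)
                  | ((data[i + 2] & 0xFF) << 16)
                  | ((data[i + 3] & 0xFF) << 24);
            k *= m;
            k ^= k >>> r;
            k *= m;
            h *= m;
            h ^= k;
            i += 4;
            len -= 4;
        }

        // fold in the 0-3 trailing bytes (fall-through is intentional,
        // mirroring the C switch)
        switch (len) {
            case 3: h ^= (data[i + 2] & 0xFF) << 16;
            case 2: h ^= (data[i + 1] & 0xFF) << 8;
            case 1: h ^= (data[i] & 0xFF); h *= m;
        }

        // final avalanche mixes
        h ^= h >>> 13;
        h *= m;
        h ^= h >>> 15;
        return h;
    }
}
```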
References:
http://lenky.info/archives/2012/12/2150
Redis 2.8.9 source, src/dict.h and src/dict.c
Original post: http://www.cnblogs.com/lsx1993/p/4632442.html