Tags: Redis cluster
Redis Cluster is a facility for sharing data across multiple Redis nodes.
Redis Cluster does not support commands that operate on multiple keys in different slots, because those would require moving data between nodes; that would sacrifice Redis-level performance and could cause unpredictable errors under heavy load.
Redis Cluster provides a degree of availability through partitioning: in practice, it can keep serving commands while some nodes are down or unreachable.
Advantages of Redis Cluster:
Data is automatically split across the nodes.
The cluster keeps serving commands when a subset of its nodes fail or are unreachable.
Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots.
Redis Cluster has 16384 hash slots. To decide which slot a key belongs to, the key is hashed with CRC16 and the result is taken modulo 16384. Each node in the cluster is responsible for a subset of the slots. For example, in a cluster with 3 nodes:
Node A holds slots 0 to 5500.
Node B holds slots 5501 to 11000.
Node C holds slots 11001 to 16383.
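The key-to-slot mapping can be sketched in a few lines of Python. This follows the published Redis Cluster specification (CRC16-CCITT/XMODEM, then modulo 16384); the hash-tag rule, which hashes only the part between `{` and `}` when a non-empty tag is present, is included for completeness. It is an illustration, not Redis source code:

```python
# Sketch of Redis Cluster's key-to-slot mapping, per the Cluster spec.

def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def keyslot(key: bytes) -> int:
    """Slot of a key; only the '{...}' hash tag is hashed when one is present."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:            # the tag must be non-empty
            key = key[start + 1:end]
    return crc16(key) % 16384
```

Hash tags are why keys like `{user1}.name` and `{user1}.surname` land in the same slot, which in turn is how multi-key operations can be made cluster-safe.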
This design makes it easy to add or remove nodes. For example, to add a new node D, I move some slots from nodes A, B, and C over to D. To remove node A, I move A's slots to B and C, then remove the now-empty node A from the cluster.
Because moving hash slots from one node to another does not require stopping service, adding or removing nodes, or changing how many slots a node holds, never makes the cluster unavailable.
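The bookkeeping involved is easy to see with a toy calculation. This is illustration only: in a real cluster the slots (and their keys) are migrated live, e.g. with `redis-trib.rb reshard`, and a node's slots need not form one contiguous range:

```python
# Toy slot bookkeeping: how many slots change hands when node D joins a
# 3-node cluster. Not the real migration protocol, just the arithmetic.
def even_split(nodes, total_slots=16384):
    """Assign contiguous, near-even slot ranges (lo, hi) to each node."""
    base, extra = divmod(total_slots, len(nodes))
    ranges, start = {}, 0
    for i, node in enumerate(nodes):
        count = base + (1 if i < extra else 0)
        ranges[node] = (start, start + count - 1)
        start += count
    return ranges

before = even_split(["A", "B", "C"])      # A gets 5462 slots, B and C 5461 each
after = even_split(["A", "B", "C", "D"])  # every node ends up with 4096
moved_to_d = after["D"][1] - after["D"][0] + 1
```

Either way, all 16384 slots stay covered, which is exactly what `redis-trib.rb check` verifies later in this post.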
To remain usable when some nodes fail or most nodes cannot communicate, Redis Cluster uses a master-slave replication model in which every master node has N-1 replicas.
In our example cluster with nodes A, B, and C, without replication, if node B fails the whole cluster becomes unavailable because the slot range 5501-11000 is missing.
However, if at cluster creation time (or later) we add one slave per master, say A1, B1, and C1, the cluster consists of three master nodes and three slave nodes. When node B fails, the cluster elects B1 as the new master and keeps serving, so the cluster never becomes unavailable due to uncovered slots.
If both B and B1 fail, however, the cluster is still unavailable.
Redis Cluster does not guarantee strong consistency. In practice this means that, under certain conditions, the cluster may lose acknowledged writes.
The first reason is that the cluster uses asynchronous replication. A write proceeds as follows:
The client writes a command to master node B.
Master B replies to the client with the command's status.
Master B propagates the write to its slaves B1, B2, and B3.
The master replicates the command only after it has already replied to the client. If every request had to wait for replication to complete, the master's throughput would drop dramatically, so this is a trade-off between performance and consistency.
Note: Redis Cluster may provide a way to perform synchronous writes in the future.
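The loss window described above can be made concrete with a toy model (illustration only, not Redis internals): the master acknowledges the write before the replica has a copy, so a master crash inside that window loses a write the client believed succeeded.

```python
# Toy model of asynchronous replication; not Redis code.
class Replica:
    def __init__(self):
        self.data = {}

class Master:
    def __init__(self, replica):
        self.data = {}
        self.backlog = []            # writes not yet shipped to the replica
        self.replica = replica

    def write(self, key, value):
        self.data[key] = value
        self.backlog.append((key, value))
        return "OK"                  # acknowledged BEFORE replication happens

    def flush(self):
        """Replication catching up (normally this happens almost immediately)."""
        while self.backlog:
            key, value = self.backlog.pop(0)
            self.replica.data[key] = value

replica = Replica()
master = Master(replica)
ack = master.write("foo", "bar")                # the client already sees OK here...
lost_if_crash_now = "foo" not in replica.data   # ...but the replica has nothing yet
master.flush()                                  # normal case: the write soon arrives
```

If the master dies between `write` and `flush` and the replica is promoted, the acknowledged write is gone; that is the window the asynchronous model accepts in exchange for throughput.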
The other situation in which Redis Cluster can lose writes is a network partition in which a client is isolated on the minority side together with at least one master.
For example, suppose the cluster has six nodes A, B, C, A1, B1, and C1, where A, B, and C are masters and A1, B1, and C1 are their respective slaves, plus a client Z1.
If a network partition occurs, the cluster may split into two sides: the majority side with nodes A, C, A1, B1, and C1, and the minority side with node B and client Z1.
Z1 can still write to master B. If the partition heals quickly, the cluster continues normally; but if it lasts long enough for the majority side to elect B1 as the new master, the writes Z1 sent to B are lost.
Note that there is a maximum window during which Z1 can keep writing to master B while the partition lasts. This window is called the node timeout, and it is an important Redis Cluster configuration option.
2. Prepare two hosts
192.168.1.3 master.abc.com
192.168.1.4 datanode.abc.com
Unless stated otherwise, commands below are run on the .3 host (192.168.1.3).
3. Install ruby and rubygems
yum -y install gcc openssl-devel libyaml-devel libffi-devel readline-devel zlib-devel gdbm-devel ncurses-devel gcc-c++ automake autoconf
yum -y install ruby rubygems    # install ruby and rubygems
# switch to a faster gem source
gem sources -l
gem sources --remove http://rubygems.org/
gem sources -a http://ruby.taobao.org/
gem sources -l
gem install redis --version 3.0.0    # install the redis gem
Successfully installed redis-3.0.0
1 gem installed
Installing ri documentation for redis-3.0.0...
Installing RDoc documentation for redis-3.0.0...
4. Install the tcl package (tcl is needed to run Redis's test suite)
yum install tcl-devel.x86_64 tcl.x86_64
5. Download and build redis-3.0.2.tar.gz
tar -xvzf redis-3.0.2.tar.gz
cd redis-3.0.2
make && make install
cd src
cp redis-trib.rb /usr/local/bin
mkdir /etc/redis
mkdir /var/log/redis
6. Configure Redis
[root@hadoop1 redis-3.0.2]# vim redis.conf
daemonize yes    # default no; run the Redis server as a daemon
port 6379
pidfile /var/run/redis-6379.pid
dbfilename dump-6379.rdb
appendfilename "appendonly-6379.aof"
cluster-config-file nodes-6379.conf
cluster-enabled yes
cluster-node-timeout 5000
appendonly yes
Copy the config file and change the port in each copy:
cp redis.conf /etc/redis/redis-6379.conf
cp redis.conf /etc/redis/redis-6380.conf
cp redis.conf /etc/redis/redis-6381.conf
scp redis.conf 192.168.1.4:/etc/redis/redis-6382.conf
scp redis.conf 192.168.1.4:/etc/redis/redis-6383.conf
scp redis.conf 192.168.1.4:/etc/redis/redis-6384.conf
sed -i "s/6379/6380/g" /etc/redis/redis-6380.conf
sed -i "s/6379/6381/g" /etc/redis/redis-6381.conf
sed -i "s/6379/6382/g" /etc/redis/redis-6382.conf
sed -i "s/6379/6383/g" /etc/redis/redis-6383.conf
sed -i "s/6379/6384/g" /etc/redis/redis-6384.conf
由于4作为从节点,所以只需把82-84配置文件拷过去即可
Start Redis and check that it is listening:
redis-server redis-6379.conf
redis-server redis-6380.conf
redis-server redis-6381.conf
[root@hadoop1 redis]# netstat -tlnp | grep redis
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 16852/redis-server
tcp 0 0 0.0.0.0:6380 0.0.0.0:* LISTEN 16858/redis-server
tcp 0 0 0.0.0.0:6381 0.0.0.0:* LISTEN 16862/redis-server
tcp 0 0 0.0.0.0:16379 0.0.0.0:* LISTEN 16852/redis-server
tcp 0 0 0.0.0.0:16380 0.0.0.0:* LISTEN 16858/redis-server
tcp 0 0 0.0.0.0:16381 0.0.0.0:* LISTEN 16862/redis-server
tcp 0 0 :::6379 :::* LISTEN 16852/redis-server
tcp 0 0 :::6380 :::* LISTEN 16858/redis-server
tcp 0 0 :::6381 :::* LISTEN 16862/redis-server
tcp 0 0 :::16379 :::* LISTEN 16852/redis-server
tcp 0 0 :::16380 :::* LISTEN 16858/redis-server
tcp 0 0 :::16381 :::* LISTEN 16862/redis-server
The 16xxx ports above are the cluster bus ports (client port + 10000), used for node-to-node communication. Host .4 must start its instances as well:
redis-server redis-6382.conf
redis-server redis-6383.conf
redis-server redis-6384.conf
[root@hadoop2 redis]# netstat -tlnp | grep redis
tcp 0 0 0.0.0.0:6382 0.0.0.0:* LISTEN 10572/redis-server
tcp 0 0 0.0.0.0:6383 0.0.0.0:* LISTEN 10578/redis-server
tcp 0 0 0.0.0.0:6384 0.0.0.0:* LISTEN 10582/redis-server
tcp 0 0 0.0.0.0:16382 0.0.0.0:* LISTEN 10572/redis-server
tcp 0 0 0.0.0.0:16383 0.0.0.0:* LISTEN 10578/redis-server
tcp 0 0 0.0.0.0:16384 0.0.0.0:* LISTEN 10582/redis-server
tcp 0 0 :::6382 :::* LISTEN 10572/redis-server
tcp 0 0 :::6383 :::* LISTEN 10578/redis-server
tcp 0 0 :::6384 :::* LISTEN 10582/redis-server
tcp 0 0 :::16382 :::* LISTEN 10572/redis-server
tcp 0 0 :::16383 :::* LISTEN 10578/redis-server
tcp 0 0 :::16384 :::* LISTEN 10582/redis-server
7. Create the cluster
[root@hadoop1 redis]# redis-trib.rb create --replicas 1 192.168.1.3:6379 192.168.1.3:6380 192.168.1.3:6381 192.168.1.4:6382 192.168.1.4:6383 192.168.1.4:6384
>>> Creating cluster
Connecting to node 192.168.1.3:6379: OK
Connecting to node 192.168.1.3:6380: OK
Connecting to node 192.168.1.3:6381: OK
Connecting to node 192.168.1.4:6382: OK
Connecting to node 192.168.1.4:6383: OK
Connecting to node 192.168.1.4:6384: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.1.4:6382
192.168.1.3:6379
192.168.1.4:6383
Adding replica 192.168.1.3:6380 to 192.168.1.4:6382
Adding replica 192.168.1.4:6384 to 192.168.1.3:6379
Adding replica 192.168.1.3:6381 to 192.168.1.4:6383
M: 178b951fd08ae3250d06719509ea45258c6cef73 192.168.1.3:6379
slots:5461-10922 (5462 slots) master
S: 76ea836e326fbd4b2a242f5da01b9005c131eb46 192.168.1.3:6380
replicates d9a852afad1669adf3561a57dbfa77b250ae32bb
S: 4e54918bfc57e0895d68ad0f6bea1d104b18e0f6 192.168.1.3:6381
replicates 85ad96dbdb49fd901de6f9f1431662c7ab58a208
M: d9a852afad1669adf3561a57dbfa77b250ae32bb 192.168.1.4:6382
slots:0-5460 (5461 slots) master
M: 85ad96dbdb49fd901de6f9f1431662c7ab58a208 192.168.1.4:6383
slots:10923-16383 (5461 slots) master
S: b37d1d090fdfffdeaeccc09f7978eeedc36b342a 192.168.1.4:6384
replicates 178b951fd08ae3250d06719509ea45258c6cef73
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 192.168.1.3:6379)
M: 178b951fd08ae3250d06719509ea45258c6cef73 192.168.1.3:6379
slots:5461-10922 (5462 slots) master
M: 76ea836e326fbd4b2a242f5da01b9005c131eb46 192.168.1.3:6380
slots: (0 slots) master
replicates d9a852afad1669adf3561a57dbfa77b250ae32bb
M: 4e54918bfc57e0895d68ad0f6bea1d104b18e0f6 192.168.1.3:6381
slots: (0 slots) master
replicates 85ad96dbdb49fd901de6f9f1431662c7ab58a208
M: d9a852afad1669adf3561a57dbfa77b250ae32bb 192.168.1.4:6382
slots:0-5460 (5461 slots) master
M: 85ad96dbdb49fd901de6f9f1431662c7ab58a208 192.168.1.4:6383
slots:10923-16383 (5461 slots) master
M: b37d1d090fdfffdeaeccc09f7978eeedc36b342a 192.168.1.4:6384
slots: (0 slots) master
replicates 178b951fd08ae3250d06719509ea45258c6cef73
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
This creates a new cluster. The --replicas 1 option asks for one slave to be created for every master; the remaining arguments are the addresses of the cluster instances: 3 masters and 3 slaves.
8. Check the cluster status
[root@hadoop1 redis]# redis-trib.rb check 192.168.1.3:6379
Connecting to node 192.168.1.3:6379: OK
Connecting to node 192.168.1.4:6383: OK
Connecting to node 192.168.1.4:6384: OK
Connecting to node 192.168.1.3:6380: OK
Connecting to node 192.168.1.4:6382: OK
Connecting to node 192.168.1.3:6381: OK
>>> Performing Cluster Check (using node 192.168.1.3:6379)
M: 178b951fd08ae3250d06719509ea45258c6cef73 192.168.1.3:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: 85ad96dbdb49fd901de6f9f1431662c7ab58a208 192.168.1.4:6383
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: b37d1d090fdfffdeaeccc09f7978eeedc36b342a 192.168.1.4:6384
slots: (0 slots) slave    # note: at this point 6384 is a slave
replicates 178b951fd08ae3250d06719509ea45258c6cef73
S: 76ea836e326fbd4b2a242f5da01b9005c131eb46 192.168.1.3:6380
slots: (0 slots) slave
replicates d9a852afad1669adf3561a57dbfa77b250ae32bb
M: d9a852afad1669adf3561a57dbfa77b250ae32bb 192.168.1.4:6382
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 4e54918bfc57e0895d68ad0f6bea1d104b18e0f6 192.168.1.3:6381
slots: (0 slots) slave
replicates 85ad96dbdb49fd901de6f9f1431662c7ab58a208
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
9. Test the cluster
[root@hadoop1 redis]# redis-cli -c -p 6379 -h 192.168.1.3
192.168.1.3:6379> set tank tank1    # set a test value
-> Redirected to slot [4407] located at 192.168.1.4:6382
OK    # redirected to 192.168.1.4:6382; the key is stored there
192.168.1.4:6382> set foo bar
-> Redirected to slot [12182] located at 192.168.1.4:6383
OK
192.168.1.4:6383> get foo    # the value is retrieved
"bar"
192.168.1.4:6383> get tank
-> Redirected to slot [4407] located at 192.168.1.4:6382
"tank1"
[root@hadoop2 redis]# redis-cli -c -p 6383 -h 192.168.1.4    # on host .4 (hadoop2), port 6383
192.168.1.4:6383> get foo
"bar"
192.168.1.4:6383> get tank
-> Redirected to slot [4407] located at 192.168.1.4:6382    # redirected straight to .4:6382
"tank1"
[root@hadoop1 redis]# ps -aux | grep redis
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
root 16852 0.2 1.3 137440 9532 ? Ssl 19:51 0:04 redis-server *:6379 [cluster]
root 16858 0.2 1.3 137440 9612 ? Ssl 19:51 0:04 redis-server *:6380 [cluster]
root 16862 0.2 1.0 137440 7672 ? Ssl 19:51 0:04 redis-server *:6381 [cluster]
root 17061 0.0 0.1 103256 844 pts/0 S+ 20:23 0:00 grep redis
[root@hadoop1 redis]# kill -9 16852    # kill the 6379 master on .3 to see whether data is lost
10. Check the cluster after the failure
[root@hadoop1 redis]# redis-trib.rb check 192.168.1.3:6380
Connecting to node 192.168.1.3:6380: OK
Connecting to node 192.168.1.4:6383: OK
Connecting to node 192.168.1.3:6381: OK
Connecting to node 192.168.1.4:6384: OK
Connecting to node 192.168.1.4:6382: OK
>>> Performing Cluster Check (using node 192.168.1.3:6380)
S: 76ea836e326fbd4b2a242f5da01b9005c131eb46 192.168.1.3:6380
slots: (0 slots) slave
replicates d9a852afad1669adf3561a57dbfa77b250ae32bb
M: 85ad96dbdb49fd901de6f9f1431662c7ab58a208 192.168.1.4:6383
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 4e54918bfc57e0895d68ad0f6bea1d104b18e0f6 192.168.1.3:6381
slots: (0 slots) slave
replicates 85ad96dbdb49fd901de6f9f1431662c7ab58a208
M: b37d1d090fdfffdeaeccc09f7978eeedc36b342a 192.168.1.4:6384
slots:5461-10922 (5462 slots) master    # 6384 has been promoted to master
0 additional replica(s)
M: d9a852afad1669adf3561a57dbfa77b250ae32bb 192.168.1.4:6382
slots:0-5460 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
11. The cluster is still usable after the failover
[root@hadoop1 redis]# redis-cli -c -p 6380 -h 192.168.1.3
192.168.1.3:6380> get foo
-> Redirected to slot [12182] located at 192.168.1.4:6383
"bar"
192.168.1.4:6383> get tank
-> Redirected to slot [4407] located at 192.168.1.4:6382
"tank1"
[root@hadoop2 redis]# redis-cli -c -p 6380 -h 192.168.1.3
192.168.1.3:6380> get foo
-> Redirected to slot [12182] located at 192.168.1.4:6383
"bar"
192.168.1.4:6383> get tank
-> Redirected to slot [4407] located at 192.168.1.4:6382
"tank1"
This post originally appeared on the "我的铁屋" blog; please retain this attribution: http://zouqingyun.blog.51cto.com/782246/1661007