
Redis documentation translation: LRU cache


Using Redis as an LRU cache

When Redis is used as a cache, sometimes it is handy to let it automatically evict old data as you add new data. This behavior is very well known in the community of developers, since it is the default behavior of the popular memcached system.

LRU is actually only one of the supported eviction methods. This page covers the more general topic of the Redis maxmemory directive that is used in order to limit the memory usage to a fixed amount, and it also covers in depth the LRU algorithm used by Redis, which is actually an approximation of exact LRU.


Maxmemory configuration directive

The maxmemory configuration directive is used in order to configure Redis to use a specified amount of memory for the data set. It is possible to set the configuration directive using the redis.conf file, or later using the CONFIG SET command at runtime.

For example, in order to configure a memory limit of 100 megabytes, the following directive can be used inside the redis.conf file:
maxmemory 100mb
Setting maxmemory to zero results in no memory limit. This is the default behavior for 64 bit systems, while 32 bit systems use an implicit memory limit of 3GB.
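As mentioned above, the limit can also be changed while the server is running. A minimal sketch using redis-cli (the 100mb value is only an example):

redis-cli CONFIG SET maxmemory 100mb   # apply the limit at runtime
redis-cli CONFIG GET maxmemory         # verify it; the value is reported in bytes
redis-cli CONFIG REWRITE               # optionally persist the change back to redis.conf (Redis 2.8+, requires a config file)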

When the specified amount of memory is reached, it is possible to select among different behaviors, called policies. Redis can just return errors for commands that could result in more memory being used, or it can evict some old data in order to return back to the specified limit every time new data is added.


Eviction policies

The exact behavior Redis follows when the maxmemory limit is reached is configured using the maxmemory-policy configuration directive.

The following policies are available:
    
  • noeviction: return errors when the memory limit was reached and the client is trying to execute commands that could result in more memory to be used (most write commands, but DEL and a few more exceptions).
  • allkeys-lru: evict keys trying to remove the less recently used (LRU) keys first, in order to make space for the new data added.
  • volatile-lru: evict keys trying to remove the less recently used (LRU) keys first, but only among keys that have an expire set, in order to make space for the new data added.
  • allkeys-random: evict random keys in order to make space for the new data added.
  • volatile-random: evict random keys in order to make space for the new data added, but only evict keys with an expire set.
  • volatile-ttl: in order to make space for the new data added, evict only keys with an expire set, and try to evict keys with a shorter time to live (TTL) first.
The policies volatile-lru, volatile-random and volatile-ttl behave like noeviction if there are no keys to evict matching the prerequisites.
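For reference, a minimal sketch of how a policy is selected, either in redis.conf or at runtime (allkeys-lru here is just an example value):

# in redis.conf
maxmemory 100mb
maxmemory-policy allkeys-lru

# or at runtime via redis-cli
redis-cli CONFIG SET maxmemory-policy allkeys-lru
redis-cli CONFIG GET maxmemory-policy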

Picking the right eviction policy is important and depends on the access pattern of your application; however, you can reconfigure the policy at runtime while the application is running, and monitor the number of cache misses and hits using the Redis INFO output in order to tune your setup.
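A rough sketch of that tuning loop with redis-cli; the keyspace_hits and keyspace_misses counters from INFO stats give the hit rate, and the policy can be swapped without a restart if the miss rate is too high:

redis-cli INFO stats | grep -E 'keyspace_hits|keyspace_misses'   # hit rate = hits / (hits + misses)
redis-cli CONFIG SET maxmemory-policy allkeys-lru                # try a different policy at runtime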

In general, as a rule of thumb:
  • Use the allkeys-lru policy when you expect a power-law distribution in the popularity of your requests, that is, you expect that a subset of elements will be accessed far more often than the rest. This is a good pick if you are unsure.
  • Use the allkeys-random if you have a cyclic access where all the keys are scanned continuously, or when you expect the distribution to be uniform (all elements likely accessed with the same probability).
  • Use the volatile-ttl if you want to be able to provide hints to Redis about what are good candidates for expiration by using different TTL values when you create your cache objects (see the sketch after this list).
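Under volatile-ttl, the TTL you assign when writing each key effectively hints how soon it becomes an acceptable eviction candidate. A minimal sketch (the key names and TTL values are made up):

redis-cli SET cache:page:home "<html>...</html>" EX 60   # short TTL: fine to evict soon
redis-cli SET cache:user:1001 "{...}" EX 3600             # longer TTL: keep around longer
redis-cli TTL cache:user:1001                              # remaining time to live, in seconds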

The volatile-lru and volatile-random policies are mainly useful when you want to use a single instance for both caching and to have a set of persistent keys. However it is usually a better idea to run two Redis instances to solve such a problem.

It is also worth noting that setting an expire to a key costs memory, so using a policy like allkeys-lru is more memory efficient since there is no need to set an expire for the key to be evicted under memory pressure.

How the eviction process works

It is important to understand that the eviction process works like this:

  • A client runs a new command, resulting in more data added.
  • Redis checks the memory usage, and if it is greater than the maxmemory limit, it evicts keys according to the policy.
  • A new command is executed, and so forth.

So we continuously cross the boundaries of the memory limit, by going over it, and then by evicting keys to return back under the limits.
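One way to watch this cycle from the outside is to compare current memory usage with the configured limit and to track the eviction counter; a rough sketch with redis-cli (the fields shown are part of the standard INFO output):

redis-cli INFO memory | grep used_memory_human   # current memory usage
redis-cli CONFIG GET maxmemory                   # the configured limit
redis-cli INFO stats | grep evicted_keys         # total number of keys evicted so far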

If a command results in a lot of memory being used (like a big set intersection stored into a new key), for some time the memory limit can be surpassed by a noticeable amount.
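As a concrete illustration, the kind of command the paragraph above has in mind is something like SINTERSTORE, which computes the intersection of two (here, hypothetical) large sets and stores the result into a new key in a single step; memory only drops back under the limit once eviction catches up:

redis-cli SINTERSTORE big:intersection big:set:a big:set:b   # one command, potentially a large new value
redis-cli INFO memory | grep used_memory_human               # may briefly read above maxmemory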


Approximated LRU algorithm

The Redis LRU algorithm is not an exact implementation. This means that Redis is not able to pick the best candidate for eviction, that is, the key that was accessed the furthest in the past. Instead it will try to run an approximation of the LRU algorithm, by sampling a small number of keys, and evicting the one that is the best (with the oldest access time) among the sampled keys.

However since Redis 3.0 (that is currently in beta) the algorithm was improved to also take a pool of good candidates for eviction. This improved the performance of the algorithm, making it able to approximate more closely the behavior of a real LRU algorithm.

What is important about the Redis LRU algorithm is that you are able to tune the precision of the algorithm by changing the number of samples to check for every eviction. This parameter is controlled by the following configuration directive:
maxmemory-samples 5
The reason why Redis does not use a true LRU implementation is that it costs more memory. However the approximation is virtually equivalent for an application using Redis. The following is a graphical comparison of how the LRU approximation used by Redis compares with true LRU.

[Graph: the LRU approximation used by Redis 2.8 and Redis 3.0 (5 and 10 samples) compared with true LRU]

The test to generate the above graphs filled a Redis server with a given number of keys. The keys were accessed from the first to the last, so that the first keys are the best candidates for eviction using an LRU algorithm. Later 50% more keys are added, in order to force half of the old keys to be evicted.

You can see three kinds of dots in the graphs, forming three distinct bands:

  • The light gray band are objects that were evicted.
  • The gray band are objects that were not evicted.
  • The green band are objects that were added.
In a theoretical LRU implementation we expect that, among the old keys, the first half will be expired. The Redis LRU algorithm will instead only probabilistically expire the older keys.

As you can see, Redis 3.0 does a better job with 5 samples compared to Redis 2.8; however, most objects that are among the latest accessed are still retained by Redis 2.8. Using a sample size of 10 in Redis 3.0, the approximation gets very close to the behavior of true LRU.

Note that LRU is just a model to predict how likely a given key will be accessed in the future. Moreover, if your data access pattern closely resembles the power law, most of the accesses will be in the set of keys that the approximated LRU algorithm will be able to handle well.

In simulations we found that, using a power law access pattern, the difference between true LRU and the Redis approximation was minimal or non-existent.

However you can raise the sample size to 10, at the cost of some additional CPU usage, in order to closely approximate true LRU, and check if this makes a difference in your cache miss rate.

It is very simple to experiment in production with different values for the sample size by using the CONFIG SET maxmemory-samples <count> command.
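For instance, a sketch of raising the precision at runtime and then re-checking the hit/miss counters to see whether the extra CPU cost pays off:

redis-cli CONFIG SET maxmemory-samples 10                        # closer to true LRU, more CPU per eviction
redis-cli INFO stats | grep -E 'keyspace_hits|keyspace_misses'   # re-check the miss rate after the change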


Original source: http://blog.csdn.net/guobangli/article/details/46627561
