Which maxmemory policies are compatible with Redis expiration mechanisms?
Is it only volatile-ttl? Does noeviction stop old records from expiring?
See here from redis.conf:
MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
is reached. You can select among six behaviors:
volatile-lru -> remove the key with an expire set using an LRU algorithm
allkeys-lru -> remove any key according to the LRU algorithm
volatile-random -> remove a random key with an expire set
allkeys-random -> remove a random key, any key
volatile-ttl -> remove the key with the nearest expire time (minor TTL)
noeviction -> don't evict at all, just return an error on write operations
Note: with any of the above policies, Redis will return an error on write
operations, when there are no suitable keys for eviction.
At the date of writing these commands are: set setnx setex append
incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
getset mset msetnx exec sort
The default is:
maxmemory-policy noeviction
If you keep the policy at the default 'noeviction' or if you choose any of the volatile-* ones without actually having expirable keys in the database, the data will remain in Redis indefinitely. Do remember, however, that if you do not delete data from Redis and keep adding more, you'll eventually run out of memory.
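If you're unsure whether your dataset even contains expirable keys, a quick check is to count the keys that carry a TTL; only those are candidates for the volatile-* policies. A minimal redis-py sketch (the connection details and SCAN batch size are assumptions):
import redis
client = redis.Redis(host='localhost', port=6379, db=0)
# Count keys that have an expiry set; only these can be evicted
# by the volatile-* policies.
with_ttl = 0
for key in client.scan_iter(count=1000):  # SCAN avoids blocking the server like KEYS would
    if client.ttl(key) >= 0:  # TTL of -1 means no expiry is set
        with_ttl += 1
print(f'{with_ttl} keys are eligible for volatile-* eviction')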
Related
I have keys I want to keep indefinitely in redis provided I have enough memory. However, if redis runs low on memory, then I'd like it to remove the oldest keys first. I looked at the "eviction policy" options and it appears redis doesn't support this out of the box. https://support.redislabs.com/hc/en-us/articles/203290657-What-eviction-policies-do-you-support-
How could I implement this myself using commands available as part of the redis-client api?
Here's some pseudocode that might work, to give a flavor of what I need (see the sketch after this list):
1. Get the first N keys from a list sorted by key date asc.
2. Delete the oldest keys.
3. Repeat until memory is no longer constrained.
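A minimal redis-py sketch of that idea, using a sorted set as an insertion-time index (the index key name, batch size, and memory threshold are all hypothetical; nothing here is a built-in Redis feature):
import time
import redis
client = redis.Redis(host='localhost', port=6379, db=0)
INDEX = 'keys:by-insert-time'  # hypothetical sorted set scoring keys by insertion time

def tracked_set(key, value):
    # Write the key and record its insertion time in the index.
    client.set(key, value)
    client.zadd(INDEX, {key: time.time()})

def evict_oldest(batch=100):
    # 1. Get the first N keys from the index, sorted by date ascending.
    oldest = client.zrange(INDEX, 0, batch - 1)
    if not oldest:
        return False
    # 2. Delete the oldest keys and drop them from the index.
    client.delete(*oldest)
    client.zrem(INDEX, *oldest)
    return True

def evict_until_under(limit_bytes):
    # 3. Repeat until memory is no longer constrained.
    while client.info('memory')['used_memory'] > limit_bytes:
        if not evict_oldest():
            break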
The eviction policy determines what happens when a database reaches its memory limit. To make room for new data, older data is evicted (removed) according to the selected policy.
You can select a policy from the reference link below based on your requirement. The one I am using in the example below is allkeys-lru.
reference link
Example -
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK
Example in Python -
import redis
# Connect to a local Redis instance and change the eviction policy at runtime.
client = redis.Redis(host='localhost', port=6379, db=0)
client.config_set('maxmemory-policy', 'allkeys-lru')
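You can confirm the change took effect with a CONFIG GET from the same client:
client.config_get('maxmemory-policy')  # {'maxmemory-policy': 'allkeys-lru'}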
We are using Redis for storing cart data. We see that some of the carts older than a month are no longer available. I assumed the data would have been persisted and should be available at any time. Are there any settings I should review to check why some of the old data is getting deleted? There is no TTL set when storing the data.
It may be that your Redis instance has reached its maxmemory limit. Take a look at two settings in redis.conf: maxmemory and maxmemory-policy.
When maxmemory is reached, Redis follows the action specified by maxmemory-policy, which could be allkeys-lru or noeviction, among others. If the policy is an LRU one, older data will be dropped.
As the Redis docs say:
noeviction: return errors when the memory limit was reached and the client is trying to execute commands that could result in more memory to be used (most write commands, but DEL and a few more exceptions).
allkeys-lru: evict keys by trying to remove the less recently used (LRU) keys first, in order to make space for the new data added.
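To check whether this is what is happening to your carts, inspect both settings on the live instance (the values shown here are only illustrative):
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "0"
127.0.0.1:6379> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"
A maxmemory of 0 means no limit. If it is non-zero and the policy is one of the allkeys-* ones, rarely touched carts can be evicted even though they have no TTL.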
I want to use Redis with a perfect LRU eviction policy. For experimentation purposes, I set maxmemory so that Redis can hold at most 10 key/value pairs of fixed data size. I set maxmemory-policy to allkeys-lru and maxmemory-samples to 10. When I insert a new key/value pair, Redis does not remove the least recently used pair. Why is it not removing the oldest key when the sample size equals the maximum number of keys it can hold?
So what should the behaviour of the allkeys-lru eviction policy be when maxmemory-samples equals the maximum possible number of key/value pairs in Redis?
My problem is: I have a set of values, each of which has to have an expire value.
code:
set a:11111:22222 someValue
expire a:11111:22222 604800 // usually equals a week
In a perfect world I would have put all those values in a hash and given each of them its appropriate expire value, but Redis does not allow expire on hash fields.
The problem is that I also have a process that needs to get all those keys about once an hour:
keys a:*
This command is really expensive and, according to the Redis documentation, can cause performance issues. I have about 25,000-30,000 keys at any given moment.
Does someone know how I can solve such a problem?
Thumbs up guaranteed (-;
Roy
Let me propose an alternative solution.
Rather than asking Redis to scan all the keys, why not perform a background dump, and parse the dump to extract the keys? This way, there is zero impact on the Redis instance itself.
Parsing the dump file is not as scary as it sounds, because you can use the excellent redis-rdb-tools package:
https://github.com/sripathikrishnan/redis-rdb-tools
You can either convert the dump file into a json file, and then parse the json file, or use the Python API to extract the keys by yourself.
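For example (commands as documented in the redis-rdb-tools README; the dump path and key pattern are assumptions):
rdb --command json /var/redis/6379/dump.rdb > dump.json
rdb --command json --key "a:.*" /var/redis/6379/dump.rdb  # only the keys you care about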
As you've already mentioned, using KEYS is not a good solution to get your keys:
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using sets.
Source: Redis docs for KEYS
As the docs are suggesting, you should build your own indices!
A common way of building an index is to use a sorted set. You can read more about how it works in my question over here.
Building references to your a:* keys using a sorted set will also allow you to select only the required keys in relation to a date or any other integer value, in case you're filtering the results!
And yes: it would be awesome if hashes could expire. Sadly, it looks like it's not going to happen, but there are in fact creative alternatives to take care of it yourself.
Why don't you use a sorted set?
Here is a sample data creation sequence.
redis 127.0.0.1:6379> setex a:11111:22222 604800 someValue
OK
redis 127.0.0.1:6379> zadd user:index 1385112435 a:11111:22222 // 1384507635 + 604800
(integer) 1
redis 127.0.0.1:6379> setex a:11111:22223 604800 someValue2
OK
redis 127.0.0.1:6379> zadd user:index 1385113289 a:11111:22223 // 1384508489 + 604800
(integer) 1
redis 127.0.0.1:6379> zrangebyscore user:index 1385112435 1385113289
1) "a:11111:22222"
2) "a:11111:22223"
This has no select performance issue, but it costs more memory and adds insert overhead.
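With that index in place, the hourly job can read only the still-live keys and prune stale index entries instead of running KEYS. A redis-py sketch of the same pattern (the index name matches the example above):
import time
import redis
client = redis.Redis(host='localhost', port=6379, db=0)
now = int(time.time())
# Fetch only keys whose expiry timestamp is still in the future.
live_keys = client.zrangebyscore('user:index', now, '+inf')
# SETEX removes the keys themselves when they expire, but not their
# index entries, so prune anything whose expiry has already passed.
client.zremrangebyscore('user:index', '-inf', now)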
How accurate is the dbsize command in redis?
I've noticed that the count of keys returned by dbsize does not match the number of actual keys returned by the keys command.
Here's an example:
redis-cli dbsize
(integer) 3057
redis-cli keys "*" | wc -l
2072
Why is dbsize so much higher than the actual number of keys?
I would say it is linked to key expiration.
Key/value stores like Redis or memcached cannot afford to define a physical timer per object to expire. There would be too many of them. Instead they define a data structure to easily track items to be expired, and multiplex all the expiration events to a single physical timer. They also tend to implement a lazy strategy to deal with these events.
With Redis, when an item expires, nothing happens immediately. However, before each item access, a check is systematically done to avoid returning expired items, and potentially to delete the item. On top of this lazy strategy, every 100 ms a scavenger algorithm is triggered to physically expire a number of items (i.e. remove them from the main dictionary). The number of keys considered at each iteration depends on the expiration workload (the algorithm is adaptive).
The consequence is that Redis may have a backlog of items to expire at any given point in time when there is a steady flow of expiration events.
Now, coming back to the question: the DBSIZE command just returns the size of the main dictionary, so it includes expired items that have not yet been removed. The KEYS command walks through the whole dictionary, accessing individual keys, so it excludes all expired items. The two numbers may therefore not match.
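You can reproduce the effect by creating short-lived keys and comparing the two counts just after they expire (a minimal sketch; the size of the gap depends on how quickly the active expiration cycle catches up, so it may be small):
import time
import redis
client = redis.Redis(host='localhost', port=6379, db=0)
# Create keys that expire after one second.
for i in range(1000):
    client.setex(f'tmp:{i}', 1, 'x')
time.sleep(2)  # every tmp:* key is now logically expired
print(client.dbsize())            # may still count expired-but-unreclaimed keys
print(len(client.keys('tmp:*')))  # 0: KEYS checks expiry and skips them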