I'm a newbie to Redis and I'm reading the book Redis in Action. In section 2.1 ("Login and cookie caching") there is a clean_sessions function:
import time

QUIT = False
LIMIT = 10000000

def clean_sessions(conn):
    while not QUIT:
        size = conn.zcard('recent:')
        if size <= LIMIT:
            time.sleep(1)
            continue

        # find out the range in the `recent:` ZSET
        end_index = min(size - LIMIT, 100)
        tokens = conn.zrange('recent:', 0, end_index - 1)

        # delete the corresponding data
        session_keys = []
        for token in tokens:
            session_keys.append('viewed:' + token)

        conn.delete(*session_keys)
        conn.hdel('login:', *tokens)
        conn.zrem('recent:', *tokens)
It deletes login tokens and the corresponding data when there are more than 10 million records. My questions are:
Why delete at most 100 records per iteration?
Why not just delete size - LIMIT records at once?
Is there some performance consideration?
Thanks, all responses are appreciated :)
I guess there are multiple reasons for that choice.
Redis is a single-threaded event loop. It means a large command (for instance a large zrange, or a large del, hdel or zrem) will be processed faster than several small commands, but with an impact on the latency for the other sessions. If a large command takes one second to execute, all the clients accessing Redis will be blocked for one second as well.
A first reason is therefore to minimize the impact of these cleaning operations on the other client processes. By segmenting the activity into several small commands, it gives other clients a chance to execute their commands as well.
A second reason is the size of the communication buffers in the Redis server. A large command (or a large reply) may take a lot of memory. If millions of items are to be cleaned out, the reply of the zrange command or the input of the del, hdel and zrem commands can represent megabytes of data. Past a certain limit, Redis will close the connection to protect itself. So it is better to avoid dealing with very large commands or very large replies.
A third reason is the memory of the Python client. If millions of items have to be cleaned out, Python will have to maintain very large list objects (tokens and session_keys). They may or may not fit in memory.
The proposed solution is incremental: whatever the number of items to delete, it will avoid consuming a lot of memory on both the client and Redis sides. It will also avoid hitting the communication buffer limit (which would result in the connection being closed), and it will limit the impact on the performance of the other processes accessing Redis.
Note that the 100 value is arbitrary. A smaller value will allow for better latencies at the price of a lower session cleaning throughput. A larger value will increase the throughput of the cleaning algorithm at the price of higher latencies.
It is actually a classical trade-off between the throughput of the cleaning algorithm, and the latency of other operations.
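To make the trade-off concrete, here is a sketch (mine, not the book's code) where the batch size is a parameter you can tune; it assumes redis-py and a connection created with decode_responses=True so tokens come back as strings:

import time
import redis

LIMIT = 10000000
QUIT = False

def clean_sessions_batched(conn, batch_size=100):
    # Smaller batch_size -> lower latency impact on other clients;
    # larger batch_size -> higher cleaning throughput.
    while not QUIT:
        size = conn.zcard('recent:')
        if size <= LIMIT:
            time.sleep(1)
            continue

        # Never touch more than batch_size tokens per iteration.
        end_index = min(size - LIMIT, batch_size)
        tokens = conn.zrange('recent:', 0, end_index - 1)

        # Pipeline the three deletions into one round trip,
        # while each individual command stays small.
        pipe = conn.pipeline(transaction=False)
        pipe.delete(*['viewed:' + token for token in tokens])
        pipe.hdel('login:', *tokens)
        pipe.zrem('recent:', *tokens)
        pipe.execute()

conn = redis.Redis(decode_responses=True)
clean_sessions_batched(conn, batch_size=100)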
Related
I have an application that needs to use Redis with the following requirements:
A producer storing tens of millions of string records up to 128 bytes each
Indexing the records so that each worker can access records from its own predetermined range X to Y, allowing multiple workers to process in parallel
Deleting the processed records and storing the results back in redis under a different index
Which redis data type is optimal for this?
I am considering sorted sets, where I would write the original strings in one set and the results in another, though I have read somewhere that they come with a 64-byte overhead, and I'd like to save as much memory as possible since that allows me to process more records. An alternative is plain SET key value, where I would use, say, indices 0-100,000,000 for the records to be processed and 100,000,000-200,000,000 for the corresponding result records.
Does anyone know how much memory overhead exists for each solution or can even propose a better one?
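To make the two options concrete, here is roughly what I have in mind (a sketch only, with made-up key names like 'records', 'results' and 'record:<i>'):

import redis

conn = redis.Redis(decode_responses=True)

# Option 1: sorted sets, using the record index as the score.
# A worker responsible for indices X..Y fetches its slice with ZRANGEBYSCORE.
def store_record_zset(index, payload):
    conn.zadd('records', {payload: index})   # note: members must be unique

def fetch_range_zset(x, y):
    return conn.zrangebyscore('records', x, y)

def finish_record_zset(index, payload, result):
    conn.zrem('records', payload)
    conn.zadd('results', {result: index})

# Option 2: plain string keys, one key per index.
# MGET over the worker's range replaces ZRANGEBYSCORE.
def store_record_string(index, payload):
    conn.set('record:%d' % index, payload)

def fetch_range_string(x, y):
    return conn.mget(['record:%d' % i for i in range(x, y + 1)])

def finish_record_string(index, result):
    conn.delete('record:%d' % index)
    conn.set('result:%d' % index, result)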
I want to delete multiple redis keys using a single delete command on redis client.
Is there any limit in the number of keys to be deleted?
I will be using del key1 key2 ....
There's no hard limit on the number of keys, but the query buffer limit does provide a bound. Connections are closed when the buffer hits 1 GB, so practically speaking this is somewhat difficult to hit.
Docs:
https://redis.io/topics/clients
However! You may want to take into consideration that Redis is single-threaded: a time-consuming command will block all other commands until completed. Depending on your use-case this may make a good case for "chunking" up your deletes into groups of, say, 1000 at a time, because it allows other commands to squeeze in between. (Whether or not this is tolerable is something you'll need to determine based on your specific scenario.)
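For example, a simple way to chunk the deletes with redis-py (the 1000 figure is just the example size above) might look like this sketch:

import redis

def delete_in_chunks(conn, keys, chunk_size=1000):
    # Delete keys in groups so no single DEL blocks the server for long.
    deleted = 0
    for start in range(0, len(keys), chunk_size):
        chunk = keys[start:start + chunk_size]
        deleted += conn.delete(*chunk)   # DEL key1 key2 ... for this chunk only
    return deleted

conn = redis.Redis()
removed = delete_in_chunks(conn, ['key:%d' % i for i in range(10000)])
print('removed %d keys' % removed)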
Redis allows storing data in 16 different 'databases' (0 to 15). Is there a way to get the utilized memory & disk space per database? The INFO command only lists the number of keys per database.
No, you cannot control each database individually. These "databases" are just for logical partitioning of your data.
What you can do (depending on your specific requirements and setup) is spin up multiple Redis instances, where each one handles a different task and each one has its own redis.conf file with a memory cap. Disk space can't be capped, though, at least not at the Redis level.
Side note: bear in mind that the number of databases (16) is not hardcoded - you can set it in redis.conf.
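For example, from the client side you could keep an eye on each instance like this (a sketch with made-up ports; the memory cap itself is set per instance in its redis.conf, e.g. with a maxmemory line):

import redis

# One instance per workload, each started with its own redis.conf
# (e.g. "maxmemory 512mb" in the sessions instance - example value only).
instances = {
    'sessions': redis.Redis(port=6379),
    'cache': redis.Redis(port=6380),
}

for name, conn in instances.items():
    info = conn.info('memory')
    print(name, info['used_memory_human'])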
I did it by calling DUMP on all the keys in a Redis DB and measuring the total number of bytes used. This will slow down your server and take a while. It seems the size DUMP returns is about 4 times smaller than the actual memory use. These numbers will give you an idea of which db is using the most space.
Here's my code:
https://gist.github.com/mathieulongtin/fa2efceb7b546cbb6626ee899e2cfa0b
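In outline, the approach looks something like this (a sketch of the same idea, not the gist itself; it assumes redis-py and the default 16 databases):

import redis

def approx_db_sizes(host='localhost', port=6379, num_dbs=16):
    # Rough per-database size estimate obtained by summing DUMP lengths.
    # Slow on big databases, and DUMP sizes underestimate real memory use.
    sizes = {}
    for db in range(num_dbs):
        conn = redis.Redis(host=host, port=port, db=db)
        total = 0
        for key in conn.scan_iter(count=1000):
            payload = conn.dump(key)
            if payload is not None:   # the key may have expired mid-scan
                total += len(payload)
        sizes[db] = total
    return sizes

print(approx_db_sizes())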
Does someone know about the internals of Redis' LRU-based eviction / deletion?
How does Redis ensure that the older (lesser used) keys are deleted first (in case we do not have volatile keys and we are not setting TTL expiration)?
I know for sure that Redis has a configuration parameter "maxmemory-samples" that governs a sample size that it uses for removing keys - so if you set a sample size of 10 then it samples 10 keys and removes the oldest from amongst these.
What I don't know is whether it samples these keys completely randomly, or whether it somehow has a mechanism that allows it to automatically sample from the equivalent of an "older / less used generation".
This is what I found at antirez.com/post/redis-as-LRU-cache.html: "The whole point of using a 'sample three' algorithm is to save memory. I think this is much more valuable than precision, especially since these randomized algorithms are rarely well understood. An example: sampling with just three objects will expire 666 objects out of a dataset of 999 with an error rate of only 14% compared to the perfect LRU algorithm. And in the 14% of the remaining there are hardly elements that are in the range of very used elements. So the memory gain will pay for the precision without doubts."
So although Redis samples randomly (implying that this is not true LRU, but an approximation of it), the accuracy is relatively high, and increasing the sample size increases it further. However, if someone needs exact LRU (zero tolerance for error), then Redis may not be the correct choice.
Architecture, as they say, is about trade-offs, so use this (Redis LRU) approach to trade a little accuracy for raw performance.
Since v3.0.0 (2014) the LRU algorithm uses a pool of 16 keys, populated with the best candidates out of the different samplings of N keys (where N is defined by maxmemory-samples).
Every time a key needs to be evicted, N new keys are selected randomly and checked against the pool. If they're better candidates (older keys), they're added to it, while the worst candidates (the most recently used keys) are taken out, keeping the pool at a constant size of 16 keys.
At the end of the round, the best eviction candidate is selected from the pool.
Source: code and comments in the evict.c file of the Redis source code
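To make the mechanism concrete, here is a toy Python simulation of that pool idea (my own illustration, not Redis code); keys with smaller last-access times are "older" and therefore better eviction candidates:

import random

POOL_SIZE = 16   # mirrors EVPOOL_SIZE in evict.c
SAMPLES = 5      # mirrors the maxmemory-samples setting (default 5)

def pick_eviction_candidate(last_access, pool):
    # last_access maps key -> last-access timestamp (a stand-in for the LRU clock).
    sampled = random.sample(list(last_access), min(SAMPLES, len(last_access)))
    # Merge the newly sampled keys into the pool (skip ones already there).
    pool_keys = {key for _, key in pool}
    pool.extend((last_access[key], key) for key in sampled if key not in pool_keys)
    # Keep only the POOL_SIZE oldest candidates.
    pool.sort()
    del pool[POOL_SIZE:]
    # Evict the best (oldest) candidate currently in the pool.
    _, victim = pool.pop(0)
    return victim

# Example: key:0 was accessed longest ago, so it is the ideal victim.
last_access = {'key:%d' % i: i for i in range(1000)}
pool = []
print(pick_eviction_candidate(last_access, pool))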
I'm currently migrating some data to Redis and I'm considering using a sorted set to store approximately 1.4e6 items (with associated scores/counts). Is this number of items likely to exceed a practical limit, making it too painful to use the set? I plan on running 64-bit Redis, so available memory for the data should not be a problem. Does anyone have experience with a sorted set this size? If so, how are your insertion and query times for the set?
It depends what you want to do with the set. The simple operations are mostly O(log n), which means they take only about twice as long for a million-item set as they do for a thousand-item set. Unless you have something seriously broken in your config, like a memory limit smaller than the set, performance shouldn't be a problem.
Where you need to be careful is with operations on multiple sets, particularly union - that will take a thousand times longer for the million-item set. In practical terms this isn't necessarily a problem, though - either it will be fast enough for your purposes anyway (Redis has commands documented as too slow for production use that are still best measured in milliseconds), or you can adjust the order of operations to avoid running union on really large sets.
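If you want numbers for your own data before committing, a rough benchmark sketch with redis-py (key and member names invented; it loads about 1.4 million members, so expect it to use a few hundred MB) could look like this:

import random
import time
import redis

conn = redis.Redis()
N = 1400000        # roughly the 1.4e6 items in question
BATCH = 10000

# Load in batches so each ZADD stays reasonably small.
for start in range(0, N, BATCH):
    mapping = {'item:%d' % i: random.random()
               for i in range(start, min(start + BATCH, N))}
    conn.zadd('test:zset', mapping)

# Time a typical O(log n) operation (ZRANK) against random members.
queries = 1000
t0 = time.time()
for _ in range(queries):
    conn.zrank('test:zset', 'item:%d' % random.randrange(N))
elapsed = time.time() - t0
print('avg ZRANK latency: %.3f ms' % (1000.0 * elapsed / queries))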
Our site has a sorted set with about 2 million items (email addresses) with integer scores, and it took up about 320 MB of memory.