Why is Redis's maximum number of expired keys the maximum number of keys written per second divided by 4 - redis

Why is Redis's maximum number of expired keys the maximum number of keys written per second divided by 4?
The Redis cyclic deletion (active expiration) strategy is based on probability. I think that even if the keys sampled each cycle are different, the maximum number of already-expired keys must still be 1/4 of the total memory.
Why not sort keys by expiration time instead? That would be better than probability; a probability-based approach feels unreliable to me.
I didn't find a reasonable explanation in books or the official documentation.
Official document: How Redis expires keys
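For context, the official document describes the active expiration cycle roughly as: sample 20 keys that have a TTL, delete the ones that are already expired, and repeat immediately if more than 25% of the sample was expired. A minimal Python sketch of that loop (a model only, not Redis source; the dict-based bookkeeping is illustrative):

import random

SAMPLE_SIZE = 20          # keys tested per iteration (20 per the official docs)
REPEAT_THRESHOLD = 0.25   # repeat while more than 25% of the sample was expired

def active_expire_cycle(expires, now):
    # `expires` is a dict mapping key -> expiration timestamp.
    while expires:
        sample = random.sample(list(expires), min(SAMPLE_SIZE, len(expires)))
        expired = [k for k in sample if expires[k] <= now]
        for key in expired:
            del expires[key]          # physically remove the expired key
        # Stop once 25% or fewer of the sampled keys were expired; this is
        # what bounds the share of dead-but-undeleted keys to roughly 1/4.
        if len(expired) / len(sample) <= REPEAT_THRESHOLD:
            break

Because the loop only stops once at most a quarter of a random sample is expired, the expected fraction of already-expired keys still held in memory stays around 1/4, which appears to be where the "divided by 4" figure in the documentation comes from.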

Related

Redis multi Key or multi Hash field

I have about 300k rows of data like this: Session:Hist:[account]
Session:Hist:100000
Session:Hist:100001
Session:Hist:100002
.....
Each has 5-10 children [session]:[time]
b31c2a43-e61b-493a-b8d4-ff0729fe89de:1846971068807
5552daa2-c9f6-4635-8a7c-6f027b4aa1a3:1846971065461
.....
I have 2 options (both sketched in code below):
Using Hash, key is Session:Hist:[account], field is [session], value is [time]
Using Hash flat all account, key is Session:Hist, field is [account]:[session], value is [time]
My Redis setup has 1 master and 4-5 slaves, used to store & push sessions (about 300k * 5 in 2h) every day, and cleared at the end of the day!
So the question is: which option is better for performance (faster master-slave sync / smaller memory footprint / faster under huge request volume)? Thanks for your help!
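For reference, a minimal redis-py sketch of the two layouts, using the illustrative account/session/time values from above:

import redis

r = redis.Redis()

account = "100000"
session = "b31c2a43-e61b-493a-b8d4-ff0729fe89de"
time_ms = 1846971068807

# Option 1: one hash per account; field = session, value = time
r.hset(f"Session:Hist:{account}", session, time_ms)

# Option 2: one flat hash for all accounts; field = account:session, value = time
r.hset("Session:Hist", f"{account}:{session}", time_ms)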
Comparing the two options mentioned, option #2 is less optimal.
According to official Redis documentation:
It is worth noting that small hashes (i.e., a few elements with small values) are encoded in a special way in memory that makes them very memory efficient.
More details here.
So having one huge hash with key Session:Hist would affect memory consumption. It would also affect clustering (sharding) since you would have one hash (hot-spot) located on one instance which would get hammered.
Option #1 does not suffer from the problems mentioned above, as long as you have many well-distributed hashes keyed as Session:Hist:[account] (i.e. all accounts have a similar number of sessions, rather than a few dominant accounts with a huge number of sessions).
If, however, sessions may be unevenly distributed across accounts, you could try (and measure) the efficiency of option 1a:
Key: Session:Hist:[account]:[session without its last two characters]
field: [session's last two characters]
value: [time]
Example:
Key: Session:Hist:100000:b31c2a43-e61b-493a-b8d4-ff0729fe89
field: de
value: 1846971068807
This way, each hash will only contain up to 256 fields (assuming the last 2 characters of the session are hex, there are 256 possible combinations). This would be optimal if redis.conf defines hash-max-zipmap-entries 256.
Obviously, option 1a would require some modifications in your application, but with proper benchmarking (i.e. measuring the memory savings) you could decide whether it's worth the effort.
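A hedged redis-py sketch of option 1a, splitting the session id between the key and the field (the helper name is ours, not from the answer):

import redis

r = redis.Redis()

def store_session_1a(account, session, time_ms):
    # The key carries the session id minus its last two characters;
    # the field is just those two characters; the value is the time.
    prefix, suffix = session[:-2], session[-2:]
    r.hset(f"Session:Hist:{account}:{prefix}", suffix, time_ms)

store_session_1a("100000", "b31c2a43-e61b-493a-b8d4-ff0729fe89de", 1846971068807)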

In Redis, how many keys in one HSET is too many?

I've carefully read https://redis.io/topics/memory-optimization but I'm still confused. Basically, it says to split top-level keys into hash maps (HSET), but what about the number of keys within each HSET?
Say I have 1,000,000 keys for a certain prefix, each with a unique value, and suppose they're integer-looking, like "123456789". If I "shard" the keys by taking the first two characters (e.g. "12") as the hash key and the remainder as the field (e.g. "3456789"), then each hash is theoretically going to have 1,000,000 / 100 = 10,000 fields. Is that too many?
My (default) config is:
redis-store:6379> config get hash-*
1) "hash-max-ziplist-entries"
2) "512"
3) "hash-max-ziplist-value"
4) "64"
So, if I shard the 1,000,000 keys per prefix this way, I'll have fewer than 512 hashes; actually, I'll have 100 (e.g. "12" or "99"). But what about within each one? There will theoretically be 10,000 fields each. Does that mean I break the limit and can't benefit from the space optimization that hash maps offer?
You can use this formula to calculate the internal hash-table overhead of a HASH key:
3 * next_power(n) * size_of(pointer)
where n is the number of fields in your HASH. I assume you are using a 64-bit build of Redis, so size_of(pointer) is 8. So for a HASH with 10,000 fields you would have at least 3 * 10,000 * 8 = 240,000 bytes of overhead.
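A quick worked example of that formula (the next_power helper simply rounds up to the next power of two, which is how hash-table sizes grow):

def next_power(n):
    # Round n up to the next power of two.
    p = 1
    while p < n:
        p *= 2
    return p

POINTER_SIZE = 8                       # bytes, on a 64-bit build
n = 10_000                             # fields in the hash
overhead = 3 * next_power(n) * POINTER_SIZE
print(overhead)                        # 3 * 16384 * 8 = 393,216 bytes, above the 240,000-byte lower bound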
UPDATED
Please keep in mind that hash-max-ziplist-entries is not a silver bullet. See the article Under the hood of Redis #2: a ziplist's memory can be estimated as roughly 21 * n, and at the same time, while saving up to 10x RAM, write speed can drop by up to 30x and read speed by up to 100x. So with a total of 1,000,000 entries in a single HASH you could hit a critical performance breakdown.
You can read more about Redis HASH internals in Under the hood of Redis #1.
After some extensive research I've finally understood how hash-max-ziplist-entries works.
https://www.peterbe.com/plog/understanding-redis-hash-max-ziplist-entries
Basically, you keep everything in one hash map, or break it up into multiple hash maps if you need to store more fields than hash-max-ziplist-entries is set to.
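A hedged redis-py sketch of the prefix-sharding scheme from the question, plus an OBJECT ENCODING check to see whether a shard still uses the compact encoding (key names are illustrative):

import redis

r = redis.Redis()

def shard_set(key, value, prefix="myprefix"):
    # The first two characters pick the hash; the remainder becomes the field.
    bucket, field = key[:2], key[2:]
    r.hset(f"{prefix}:{bucket}", field, value)

shard_set("123456789", "some value")

# "ziplist" (or "listpack" on newer Redis) means the compact encoding is in use;
# "hashtable" means the shard has outgrown hash-max-ziplist-entries / -value.
print(r.execute_command("OBJECT", "ENCODING", "myprefix:12"))

With 10,000 fields per shard and the default hash-max-ziplist-entries of 512, each shard would report hashtable, so finer sharding (e.g. by the first three or four characters) would be needed to stay within the compact encoding.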

Real time analytic processing system design

I am designing a system that should analyze a large number of user transactions and produce aggregated measures (such as trends, etc.).
The system should work fast, be robust and scalable.
System is java based (on Linux).
The data arrives from a system that generates log files (CSV-based) of user transactions.
The system generates a file every minute, and each file contains the transactions of different users (sorted by time); each file may contain thousands of users.
A sample data structure for a CSV file:
10:30:01,user 1,...
10:30:01,user 1,...
10:30:02,user 78,...
10:30:02,user 2,...
10:30:03,user 1,...
10:30:04,user 2,...
.
.
.
The system I am planning should process the files and perform some analysis in real-time.
It has to gather the input, send it to several algorithms and other systems, and store computed results in a database. The database does not hold the actual input records, only high-level aggregated analysis of the transactions, for example trends.
The first algorithm I am planning to use requires at least 10 user records for best operation; if it cannot find 10 records within 5 minutes, it should use whatever data is available.
I would like to use Storm for the implementation, but I would prefer to leave this discussion in the design level as much as possible.
A list of system components:
A task that monitors incoming files every minute.
A task that reads the file, parses it, and makes it available for other system components and algorithms.
A component to buffer 10 records per user (for no longer than 5 minutes); when 10 records are gathered, or 5 minutes have passed, it is time to send the data to the algorithm for further processing.
Since the requirement is to supply at least 10 records to the algorithm, I thought of using Storm field grouping (which means the same task gets called for the same user) and tracking the collection of 10 records per user inside the task. Of course, I plan to have several of these tasks, each handling a portion of the users.
There are other components that work on a single transaction, for them I plan on creating other tasks that receive each transaction as it gets parsed (in parallel to other tasks).
I need your help with #3.
What are the best practices for designing such a component?
It obviously needs to maintain up to 10 records per user.
A key-value map may help. Is it better to have the map managed in the task itself or to use a distributed cache?
For example Redis, a key-value store (which I have never used before).
Thanks for your help
I have worked with Redis quite a bit, so I'll comment on your thought of using Redis.
#3 has 3 requirements:
Buffer per user
Buffer of 10 records
Should expire after 5 min
1. Buffer Per User:
Redis is just a key-value store. Although it supports a wide variety of datatypes, they are always values mapped to a string key. So you should decide how to identify each user uniquely in case you need a per-user buffer, because in Redis you will never get an error when you overwrite a key with a new value. One solution might be to check for the key's existence before writing.
2. Buffer of 10 records: You can obviously implement a queue in Redis, but restricting its size is left to you. For example, use LPUSH and LTRIM, or use LLEN to check the length and decide whether to trigger your process. The key associated with this queue should be the one you decided on in part 1.
3. Buffer expires in 5 min: This is the toughest part. In Redis, every key, irrespective of the underlying datatype of its value, can have an expiry, but the expiration process is silent: you won't get notified when a key expires. So you would silently lose your buffer if you relied on this property. One workaround is to keep an index that maps a timestamp to the keys that need to be expired at that timestamp. Then, in the background, you can read the index every minute, manually delete each key (after reading it) from Redis, and call your desired process with the buffer data. For such an index you can look at Sorted Sets, where the timestamp is the score and the set member is the key you wish to delete at that timestamp (the unique per-user key decided in part 1, which maps to a queue). You can use ZRANGEBYSCORE to read all set members whose score is up to a specified timestamp.
Overall:
Use Redis List to implement a queue.
Use LLEN to make sure you are not exceeding your limit of 10.
Whenever you create a new list, make an entry in the index [sorted set] with the score set to the current timestamp + 5 min and the member set to the list's key.
When LLEN reaches 10, read the list, then remove the key from the index [sorted set] and from the db [delete the key -> list], and trigger your process with the data.
Every minute, take the current timestamp, read the index, and for every key whose score has passed: read its data, remove the key from the db, and trigger your process.
This is how I might implement it; there may well be better ways to model your data in Redis.
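A hedged redis-py sketch of the list + sorted-set-index approach described above (key names, the 5-minute window, and the process_buffer callback are illustrative assumptions, not part of the original answer):

import time
import redis

r = redis.Redis()
BUFFER_SIZE = 10
WINDOW_SECONDS = 5 * 60
INDEX_KEY = "buffer:expiry-index"      # sorted set: member = buffer key, score = deadline

def add_record(user_id, record, process_buffer):
    buf_key = f"buffer:{user_id}"
    if r.rpush(buf_key, record) == 1:
        # First record for this buffer: register its 5-minute deadline in the index.
        r.zadd(INDEX_KEY, {buf_key: time.time() + WINDOW_SECONDS})
    if r.llen(buf_key) >= BUFFER_SIZE:
        flush(buf_key, process_buffer)

def flush(buf_key, process_buffer):
    records = r.lrange(buf_key, 0, -1)
    r.delete(buf_key)
    r.zrem(INDEX_KEY, buf_key)
    process_buffer(records)

def flush_expired(process_buffer):
    # Run this every minute: flush buffers whose 5-minute deadline has passed.
    for buf_key in r.zrangebyscore(INDEX_KEY, "-inf", time.time()):
        flush(buf_key, process_buffer)

In a real deployment you would probably wrap the list write and the index update in a pipeline or Lua script so a crash between the two cannot leave an orphan buffer.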
For your requirements 1 & 2: [Apache Flume or Kafka]
For your requirement #3: [Esper Bolt inside Storm. To accomplish this in Redis you would have to re-implement the Esper logic yourself.]

Accuracy of redis dbsize command

How accurate is the dbsize command in redis?
I've noticed that the count of keys returned by dbsize does not match the number of actual keys returned by the keys command.
Here's an example:
redis-cli dbsize
(integer) 3057
redis-cli keys "*" | wc -l
2072
Why is dbsize so much higher than the actual number of keys?
I would say it is linked to key expiration.
Key/value stores like Redis or memcached cannot afford to define a physical timer per object to expire. There would be too many of them. Instead they define a data structure to easily track items to be expired, and multiplex all the expiration events to a single physical timer. They also tend to implement a lazy strategy to deal with these events.
With Redis, when an item expires, nothing happens immediately. However, before each item access, a check is systematically done to avoid returning expired items, and the item is potentially deleted at that point. On top of this lazy strategy, every 100 ms a scavenger algorithm is triggered to physically expire a number of items (i.e. remove them from the main dictionary). The number of keys considered at each iteration depends on the expiration workload (the algorithm is adaptive).
The consequence is that Redis may have a backlog of items to expire at any given point in time when there is a steady flow of expiration events.
Now, coming back to the question: the DBSIZE command just returns the size of the main dictionary, so it includes expired items that have not yet been removed. The KEYS command walks through the whole dictionary, accessing individual keys, so it excludes all expired items. The two numbers may therefore not match.
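A small redis-py sketch that tends to reproduce the mismatch by creating many short-TTL keys (it flushes the current database, so only run it against a throwaway instance):

import time
import redis

r = redis.Redis()
r.flushdb()

# Create keys that all expire almost immediately.
for i in range(10_000):
    r.set(f"tmp:{i}", i, ex=1)

time.sleep(2)   # every key is now logically expired

# DBSIZE counts dictionary entries, including expired-but-not-yet-removed keys;
# KEYS touches each key, so expired ones are filtered out (and lazily deleted).
print("dbsize:", r.dbsize())
print("keys  :", len(r.keys("*")))

Depending on how far the scavenger has progressed, dbsize may report anything between 0 and 10,000, while keys will only ever report the non-expired ones.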

Redis Internals - LRU Implementation For Sampling

Does anyone know about the internals of Redis's LRU-based eviction / deletion?
How does Redis ensure that the older (lesser used) keys are deleted first (in case we do not have volatile keys and we are not setting TTL expiration)?
I know for sure that Redis has a configuration parameter "maxmemory-samples" that governs a sample size that it uses for removing keys - so if you set a sample size of 10 then it samples 10 keys and removes the oldest from amongst these.
What I don't know is whether it samples these keys completely randomly, or whether it somehow has a mechanism that allows it to sample from the equivalent of an "older / less-used generation"?
This is what I found at antirez.com/post/redis-as-LRU-cache.html: the whole point of using a "sample three" algorithm is to save memory. I think this is much more valuable than precision, especially since these randomized algorithms are rarely well understood. An example: sampling with just three objects will expire 666 objects out of a dataset of 999 with an error rate of only 14% compared to the perfect LRU algorithm, and within the remaining 14% there are hardly any elements that are among the most-used ones. So the memory gain will pay for the loss of precision without doubt.
So although Redis samples randomly (implying that this is not true LRU, but an approximation algorithm), the accuracy is relatively high, and increasing the sample size will increase it further. However, if someone needs exact LRU (zero tolerance for error), then Redis may not be the correct choice.
Architecture, as they say, is about tradeoffs, so use this (Redis LRU) approach to trade off accuracy for raw performance.
Since v3.0.0 (2014) the LRU algorithm uses a pool of 15 keys, populated with the best candidates out of the different samplings of N keys (where N is defined by maxmemory-samples).
Every time a key needs to be evicted, N new keys are selected randomly and checked against the pool. If they're better candidates (older keys), they're added to it, while the worst candidates (the most recently used keys) are taken out, keeping the pool at a constant size of 15 keys.
At the end of the round, the best eviction candidate is selected from the pool.
Source: code and comments in the evict.c file of the Redis source code
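A simplified Python sketch of the sampling-plus-pool mechanism described above (the data structures and idle-time bookkeeping are illustrative; the real logic lives in evict.c):

import random

POOL_SIZE = 15            # eviction pool size kept between rounds (per the answer above)
MAXMEMORY_SAMPLES = 5     # N, i.e. the maxmemory-samples setting

pool = []                 # list of (idle_time, key), kept sorted; best candidate last

def feed_pool(last_access, now):
    # Sample N random keys and merge the best (longest-idle) candidates into the pool.
    sample = random.sample(list(last_access), min(MAXMEMORY_SAMPLES, len(last_access)))
    for key in sample:
        pool.append((now - last_access[key], key))
    pool.sort()
    del pool[:-POOL_SIZE]  # keep only the POOL_SIZE keys with the greatest idle time

def evict_one(last_access, now):
    # Pick the best candidate (largest idle time) from the pool and evict it.
    feed_pool(last_access, now)
    while pool:
        idle, key = pool.pop()          # best candidate is at the end
        if key in last_access:          # skip stale entries for keys already deleted
            del last_access[key]
            return key
    return None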