Redis key eviction clarification

I'm evaluating open-source Redis and I'm confused about the maxmemory setting and eviction policies.
Suppose my maxmemory is 4GB, the data size is also 4GB, and maxmemory-policy is set to allkeys-lru. My question is: when new keys are written to Redis, are the evicted keys lost from the system permanently, or are they stored on disk so that I can reuse them whenever required?
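For reference, the setup described above would look like this in redis.conf (the 4GB value is taken from the question):

    maxmemory 4gb
    maxmemory-policy allkeys-lru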
Thank you,

Related

Redis replication for large data to new slave

I have a Redis master with 30 GB of data on a machine with 90 GB of memory. We have this setup because we have fewer writes and more reads; normally we would provision a machine with RAM equal to 3x the DB size.
The problem here is that one slave became corrupt, and later when we added it back using Sentinel, it got stuck in the wait_bgsave state on the master (based on the INFO output on the master).
The reason was that:
client-output-buffer-limit slave 256mb 64mb 60
This was set on the master, and since that much memory was not available, it broke replication for the new slave.
I saw the question Redis replication and client-output-buffer-limit, where a similar issue is discussed, but my question has a broader scope.
We can't use a lot of memory. So, what are the possible ways to do replication in this context and prevent any failure on the master (with respect to memory and latency impacts)?
I have a few things in mind:
1 - Should I do diskless replication? Will it have any impact on the latency of writes and reads?
2 - Should I just copy the dump file from another slave to this new slave and restart Redis? Will that work?
3 - Should I increase the slave output-buffer-limit to a greater value? If yes, then how much? I want to do this temporarily until replication completes and then revert it back to the normal setting. I am skeptical about this approach.
You got this problem because you have a slow replica that cannot read the replication data as fast as needed.
To solve the problem, you can try to increase the client-output-buffer-limit. You can also try to disable persistence on the replica while it is syncing from the master and re-enable it afterwards; with persistence disabled, the replica might consume the data faster. However, if the bandwidth between master and replica is really small, you might need to consider re-deploying your replica closer to the master so that there is more bandwidth.
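For example, the persistence toggle can be done at runtime on the replica with CONFIG SET; a rough sketch (the save points restored afterwards are only an example and should match your own configuration):

    # on the replica, before triggering the full sync
    CONFIG SET appendonly no
    CONFIG SET save ""
    # on the replica, after the sync has completed
    CONFIG SET appendonly yes
    CONFIG SET save "900 1 300 10"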
1 - Should I do diskless replication? Will it have any impact on the latency of writes and reads?
IMHO, this has nothing to do with diskless replication.
2 - Should I just copy the dump file from another slave to this new slave and restart Redis? Will that work?
NO, it won't work.
3 - Should I increase the slave output-buffer-limit to a greater value? If yes, then how much? I want to do this temporarily until replication completes and then revert it back to the normal setting.
YES, you can try to increase the limit. In your case, since your data size is 30 GB, a hard limit of 30 GB should solve the problem. However, that is quite large and might have other impacts, so you need to do some benchmarking to find the right limit.
YES, you can change this setting dynamically with the CONFIG SET command.
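A rough sketch of both steps (the 30 GB hard/soft limits are given in bytes, and the second command restores the limits from the question once the replica has caught up):

    CONFIG SET client-output-buffer-limit "slave 32212254720 32212254720 60"
    # ... wait for the replica to finish syncing ...
    CONFIG SET client-output-buffer-limit "slave 268435456 67108864 60"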

Large amount of redis keys are evicted unexpectedly even though memory not reach max configuration

I am experiencing a very strange case in production with Redis: a large number of keys are evicted unexpectedly even though memory has not reached the configured maximum.
The current Redis settings are maxmemory = 7GB and maxmemory-policy = volatile-ttl.
Most of the keys are given a TTL when they are stored in Redis.
The graph below shows a large drop in the Redis key count even though memory at the time was only 3.5GB (well below 7GB).
According to my understanding, Redis evicts keys only when memory reaches maxmemory, and even then it only drops keys gradually, as needed to make room for new keys.
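One thing I plan to check is whether the drop comes from TTL expiration rather than eviction, for example by comparing the counters reported by INFO stats (keys removed because their TTL elapsed are counted as expired_keys, while keys removed by the maxmemory policy are counted as evicted_keys):

    INFO stats
    # expired_keys:...  keys removed because their TTL elapsed
    # evicted_keys:...  keys removed by the maxmemory policy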
Thank you very much!

When does Redis read key from AOF persistence?

I might be wrong but still asking this question. ;-)
So I am planning to use Redis as persistent storage (primary storage), with AOF enabled. I know Redis will load this data during server start-up. Let us say I have 10GB of data and 5GB of RAM. If I try to look up a key which is not loaded into RAM, will Redis check the AOF and load that data into RAM by offloading unused keys?
You cannot have less memory than your data size in Redis. In your example, Redis would run out of memory during start-up. You can find more answers here: http://redis.io/topics/faq

Redis in-memory data Storage

Redis is an in-memory database that is also persisted to disk.
Q1: So I wonder, does this mean that when the Redis server starts, it will automatically load all the data on disk into memory?
Q2: And when writing data to Redis, will it update both in memory and on disk?
Can anyone please help me answer my two questions?
Q1: So I wonder, does this mean that when the Redis server starts, it will automatically load all the data on disk into memory?
Yes. Depending on the configuration, Redis takes snapshots of memory to disk, and when Redis is restarted it can automatically load the latest snapshot back into memory.
Q2: And when writing data to Redis, will it update both in memory and on disk?
Redis prioritizes writes in memory, and writes to disk happen in the background. So the answer is yes, it writes data to both memory and disk, but a server failure may cause data loss, since Redis is not required to persist data to disk before acknowledging a write.
Check the official docs about persistence to learn more about the topic.
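As a rough illustration, the persistence behaviour is controlled by redis.conf directives along these lines (the values here are only examples):

    save 900 1            # RDB: snapshot if at least 1 key changed within 900 seconds
    appendonly yes        # AOF: append every write operation to the log
    appendfsync everysec  # fsync the AOF roughly once per second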

How to make Redis choose LRU eviction policy for only some of the keys?

Is there a way to make Redis apply an LRU (least recently used) eviction policy to only specific keys? I want one set of keys to be persistent and never be evicted even if there's not enough memory. On the other hand, I want another set of keys to be freely evicted when memory is low.
Redis has an eviction policy which might be good for your case.
You can set maxmemory-policy to volatile-lru, which causes Redis to:
remove the key with an expire set using an LRU algorithm
This means that keys without a TTL are not volatile and will therefore not be evicted, while keys that have a TTL will be removed in least-recently-used order.
Actually, volatile-lru is the default policy, so all you have to do is make sure a TTL is set on the keys you are willing to lose when memory gets full.
Edit: Since version 3.0 the default eviction policy is "noeviction". (changelog)
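A minimal sketch of that approach with redis-cli (the key names are made up for illustration):

    CONFIG SET maxmemory-policy volatile-lru
    SET cache:page:42 "some value" EX 3600    # has a TTL, eligible for eviction under memory pressure
    SET config:feature-flags "some value"     # no TTL, never evicted under volatile-lru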