Redis taking too much memory

I set up a Redis (version 4.0.6) Sentinel cluster on two CentOS 6 VMs. Both the master and the slave Redis servers have maxmemory set to 10GB and maxmemory_policy set to volatile-lru.
The problem is that both servers are using a lot of memory.
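For reference, the relevant redis.conf settings for that setup would look roughly like this (a sketch based on the description above, not copied from the actual config):

# Cap the dataset at 10GB (the value reported as maxmemory in INFO below);
# when the cap is hit, evict only keys that have a TTL, least recently used first
maxmemory 10000000000
maxmemory-policy volatile-lru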
Master
used_memory:8959732536
used_memory_human:8.34G
used_memory_rss:14763728896
used_memory_rss_human:13.75G
used_memory_peak:10002148536
used_memory_peak_human:9.32G
used_memory_peak_perc:89.58%
used_memory_overhead:1344839894
used_memory_startup:761776
used_memory_dataset:7614892642
used_memory_dataset_perc:85.00%
total_system_memory:20957556736
total_system_memory_human:19.52G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:10000000000
maxmemory_human:9.31G
maxmemory_policy:volatile-lru
mem_fragmentation_ratio:1.65
mem_allocator:jemalloc-3.6.0
active_defrag_running:0
lazyfree_pending_objects:0
Slave
used_memory:8927665872
used_memory_human:8.31G
used_memory_rss:16422535168
used_memory_rss_human:15.29G
used_memory_peak:10000009472
used_memory_peak_human:9.31G
used_memory_peak_perc:89.28%
used_memory_overhead:1340505548
used_memory_startup:761792
used_memory_dataset:7587160324
used_memory_dataset_perc:84.99%
total_system_memory:20957556736
total_system_memory_human:19.52G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:10000000000
maxmemory_human:9.31G
maxmemory_policy:volatile-lru
mem_fragmentation_ratio:1.84
mem_allocator:jemalloc-3.6.0
active_defrag_running:0
lazyfree_pending_objects:0
Redis is taking 14064.8 MB and 15664.2 MB on the master and slave respectively.
I do have a lot of data stored in Redis. Most of the keys have an expiry set on them and some have no expiry.
The problem is: even after setting maxmemory to 10 GB, why is Redis taking around 15 GB in the VM?
I see that used memory is below 10 GB while the RSS memory is 15 GB.
I did run MEMORY PURGE, which clears some of the RSS memory, but it gets repopulated within a few minutes and keeps growing.
Any suggestion on how I can control the memory consumption, or a permanent solution for this issue? Should I increase the RAM in the VM? If yes, how much RAM should I add to handle this situation?

RSS memory will always be larger than the memory Redis actually uses for the dataset. It appears that, in your case, you're also suffering from memory fragmentation, so you should consider enabling the active defragmenter.
That said, allocating more RAM to your servers will allow them to reach higher fragmentation rates, so the more you add, the longer it will take to reach memory pressure. Since fragmentation is usage dependent, it is hard to say accurately how much more you'll need, but in most cases fragmentation plateaus after a while, so that should give you some indication.
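As a rough sketch, the defragmenter can be enabled at runtime on Redis 4.0+ builds that use the bundled jemalloc (the thresholds below are illustrative starting points, not tuned recommendations):

# Turn on the active defragmenter
redis-cli CONFIG SET activedefrag yes
# Illustrative thresholds: only start defragging once at least 100MB is wasted ...
redis-cli CONFIG SET active-defrag-ignore-bytes 104857600
# ... and fragmentation exceeds 10%
redis-cli CONFIG SET active-defrag-threshold-lower 10
# Persist the change to redis.conf so it survives a restart
redis-cli CONFIG REWRITE

If the server refuses to enable it, the binary was most likely not compiled with the modified jemalloc that defragmentation requires.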

Related

Redis replication for large data to new slave

I have a Redis master which holds 30 GB of data on a machine with 90 GB of memory. We have this setup because we have fewer writes and more reads; normally we would use a machine with RAM about 3x the DB size.
The problem here is that one slave became corrupt, and later, when we added it back using Sentinel, it got stuck in the wait_bgsave state on the master (according to INFO on the master).
The reason was this setting:
client-output-buffer-limit slave 256mb 64mb 60
It was set on the master, and since enough memory is not available, it breaks replication for the new slave.
I saw the question Redis replication and client-output-buffer-limit, where a similar issue is discussed, but my question has a broader scope.
We can't use a lot of memory. So, what are the possible ways to do replication in this context to prevent any failure on the master (w.r.t. memory and latency impacts)?
I have a few things in mind:
1 - Should I do diskless replication - will it have any impact on the latency of writes and reads?
2 - Should I just copy the dump file from another slave to this new slave and restart Redis? Will that work?
3 - Should I increase the output-buffer-limit for slaves to a greater limit? If yes, then how much? I want to do this for some time until replication completes and then revert it back to the normal setting. I am skeptical about this approach.
You got this problem because you have a slow replica that cannot read the replication data as fast as needed.
In order to solve the problem, you can try to increase the client-output-buffer-limit for replicas. You can also try to disable persistence on the replica while it is syncing from the master, and re-enable persistence after that. By disabling persistence, the replica might consume the data faster. However, if the bandwidth between the master and the replica is really small, you might need to consider re-deploying your replica so that it sits near the master and has more bandwidth.
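A minimal sketch of the second suggestion, assuming the replica uses both AOF and RDB snapshots (the save schedule shown is only an example; restore whatever schedule the replica originally used):

# On the replica: turn off AOF and RDB snapshotting while the sync runs
redis-cli CONFIG SET appendonly no
redis-cli CONFIG SET save ""
# Once INFO replication reports master_link_status:up, turn persistence back on
redis-cli CONFIG SET appendonly yes
redis-cli CONFIG SET save "900 1 300 10"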
1 - Should I do diskless replication - will it have any impact on the latency of writes and reads?
IMHO, it has nothing to do with diskless replication.
2 - Should I just copy the dump file from another slave to this new slave and restart Redis? Will that work?
NO, it won't work.
3 - Should I increase the output-buffer-limit for slaves to a greater limit? If yes, then how much? I want to do this for some time until replication completes and then revert it back to the normal setting.
YES, you can try to increase the limit. In your case, since your data size is 30 GB, a hard limit of 30 GB should solve the problem. However, that's a lot and might have other impacts; you need to do some benchmarking to find the right limit.
YES, you can dynamically change this setting with the CONFIG SET command.
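For illustration, raising the limit at runtime and reverting it afterwards might look like this (the 8GB/4GB figures are hypothetical, not a recommendation):

# Temporarily raise the slave output buffer: hard limit 8GB, soft limit 4GB over 60 seconds
redis-cli CONFIG SET client-output-buffer-limit "slave 8589934592 4294967296 60"
# After the new slave finishes its initial sync, revert to the original setting (256MB / 64MB / 60s)
redis-cli CONFIG SET client-output-buffer-limit "slave 268435456 67108864 60"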

Behaviour of redis client-output-buffer-limit during resynchronization

I'm assuming that during replica resynchronisation (full or partial), the master will attempt to send data as fast as possible to the replica. Wouldn't this mean the replica output buffer on the master would rapidly fill up since the speed the master can write is likely to be faster than the throughput of the network? If I have client-output-buffer-limit set for replicas, wouldn't the master end up closing the connection before the resynchronisation can complete?
Yes, the Redis master will close the connection and the synchronization will start from the beginning again. But please find some details below:
Do you need to touch this configuration parameter, and what is the purpose/benefit/cost of doing so?
There is an (almost) zero chance of this happening with the default configuration and reasonably modern hardware.
"By default normal clients are not limited because they don't receive data without asking (in a push way), but just after a request, so only asynchronous clients may create a scenario where data is requested faster than it can read." (quoted from the documentation)
Even if it does happen, the replication will start from the beginning, but it may lead to an infinite loop in which the slaves continuously ask for synchronization over and over. Each time, the Redis master has to fork to take a whole memory snapshot (perform a BGSAVE) and can use up to 3 times the RAM of the initial snapshot size during synchronization. That causes higher CPU utilization, memory spikes, network utilization (if any) and IO.
General recommendations to avoid production issues when tweaking this configuration parameter:
Don't decrease this buffer, and before increasing its size make sure you have enough memory on your box.
Consider the total amount of RAM as the snapshot memory size (doubled for the copy-on-write BGSAVE process) plus the size of any other buffers configured plus some extra capacity.
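As a hypothetical back-of-the-envelope example (all figures are illustrative, not taken from the question above):

dataset / snapshot size:             30 GB
copy-on-write headroom for BGSAVE:  +30 GB (worst case, the snapshot is effectively doubled)
replica output buffer hard limit:    +8 GB
other buffers and extra capacity:    +a few GB
total RAM to provision:             ~70 GB or more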
Please find more details here

Redis memory usage vs space taken up by back ups

I'm looking at Redis backed-up RDB files for a web application. There are 4 such files (for 4 different Redis servers working concurrently), with sizes of 13G + 1.6G + 66M + 14M = ~15G.
However, these same 4 instances seem to be taking 43.8 GB of memory (according to New Relic). Why such a large discrepancy between how much space Redis data takes up in memory vs. on disk? Could it be a misconfiguration, and can the issue be helped?
I don't think there is any problem.
First of all, the data is stored in a compressed format in the RDB file, so its size on disk is less than what it is in memory. How small the RDB file is depends on the type of data, but it can be around 20-80% of the memory used by Redis.
Another reason your memory usage could appear higher than the actual usage (you can compare the memory reported by New Relic with the value obtained from the redis-cli INFO memory command) is memory fragmentation. Whenever Redis needs more memory, it gets the memory allocated from the OS, but it will not release it easily (when a key expires or is deleted). This is not a big issue, as Redis will ask for more memory only after using the extra memory that it already holds. You can also check the memory fragmentation using the redis-cli INFO memory command.
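For example, a quick way to compare the two numbers and check the fragmentation ratio (assuming redis-cli can reach the instance on its default port):

# Logical memory used by Redis vs. what the OS has handed it (RSS), and the ratio between the two
redis-cli INFO memory | grep -E 'used_memory_human|used_memory_rss_human|mem_fragmentation_ratio'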

When does Redis read key from AOF persistence?

I might be wrong but still asking this question. ;-)
So I am planning to use Redis as persistent (primary) storage. I have AOF enabled. I know Redis will load this data during server start-up. Let us say I have 10 GB of data and 5 GB of RAM: if I try to search for a key which is not loaded in RAM, will Redis check the AOF and load that data into RAM by offloading any unused keys?
You cannot have less memory than your data size in Redis. In your example, Redis would run out of memory during start-up. You can find more answers here: http://redis.io/topics/faq

Can redis be configured to save only to disk and not in memory?

I am facing some scaling issues with my Redis instances and was wondering if there's a way to configure Redis to save data only to disk (and not hold it in memory). That way I could just increase disk space and not RAM.
Right now my instances are getting stuck and just hang when they reach the memory limit.
Thanks!
No - Redis, at the moment, is an in-memory database. That means that all the data it manages resides first and foremost in RAM.