I am running a cron job that sets approximately 1.5 million keys in my Redis cluster within 1 minute. The keys I am setting are approximately 68 bytes long. My Redis cluster configuration:
Number of masters: 2.
Replication factor: 1.
Eviction policy: volatile-lru.
Maxmemory per node: 8 GB.
The average TTL of the keys is about 300 seconds. Since the keys are not expiring, the cluster fills up after some time. How can we resolve this?
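For context, here is a minimal sketch (assuming redis-py's cluster client and a hypothetical key layout) of how such keys could be written with an explicit TTL; with volatile-lru, only keys that carry a TTL are candidates for eviction, so writing them without EX would leave the policy nothing it is allowed to evict:

```python
from redis.cluster import RedisCluster

# Hypothetical cluster endpoint; any reachable node is enough for discovery.
rc = RedisCluster(host="redis-node-1", port=6379)

# Keys (~68 bytes) written with an explicit 300-second TTL, so they both
# expire on their own and stay eligible for volatile-lru eviction.
for i in range(1_500_000):
    key = f"cron:batch:{i:010d}"      # hypothetical key layout
    rc.set(key, "payload", ex=300)    # EX 300 -> TTL of 300 seconds
```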
We have many datacenters, but datacenter1 is the main one.
The master in datacenter1 is monitored by Sentinel, so if the master goes down, one of its replicas becomes master, and all data is synced continuously.
We want one Redis replica in each datacenter that replicates all data from datacenter1 but without the ability to become master (the replicas always get data from datacenter1; only replica 1 should be able to become master, and the other replicas must not be able to).
Is there a Redis config for this, or any other idea?
Redis Multi Datacenter
Redis config [1] has a replica-priority parameter which should serve your purpose.
The replica priority is an integer number published by Redis in the INFO
output. It is used by Redis Sentinel in order to select a replica to promote
into a master if the master is no longer working correctly.
A replica with a low priority number is considered better for promotion, so
for instance if there are three replicas with priority 10, 100, 25 Sentinel
will pick the one with priority 10, that is the lowest.
However a special priority of 0 marks the replica as not able to perform the
role of master, so a replica with priority of 0 will never be selected by
Redis Sentinel for promotion.
By default the priority is 100.
The idea would be to give the replica in datacenter1 a low (non-zero) replica-priority so it is preferred for promotion, and to set replica-priority 0 on the replicas in the other datacenters, since a priority of 0 marks a replica as never eligible for promotion by Sentinel.
[1] redis.conf file of Redis version 6.2.6: https://github.com/redis/redis/blob/6.2.6/redis.conf
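As a sketch of that idea, assuming redis-py and hypothetical hostnames, the priorities could be applied at runtime like this (the same values can also be set in each node's redis.conf):

```python
import redis

# Replica in datacenter1: low non-zero priority, so Sentinel prefers it
# for promotion if the master fails.
dc1_replica = redis.Redis(host="dc1-replica-1", port=6379)   # hypothetical host
dc1_replica.config_set("replica-priority", 10)

# Replicas in the other datacenters: priority 0 means Sentinel will never
# promote them; they keep replicating the data from datacenter1.
for host in ("dc2-replica-1", "dc3-replica-1"):              # hypothetical hosts
    redis.Redis(host=host, port=6379).config_set("replica-priority", 0)
```

Note that CONFIG SET alone is not persisted across restarts; either add replica-priority to redis.conf as well or issue CONFIG REWRITE afterwards.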
I am using a cluster-enabled Redis setup (each shard having 1 primary node and 1 read replica), and I am attempting to store ~13 million unique entries, but I am noticing that the key count on each of the 5 primary nodes of my cluster is only ~1.6 million (~8 million total). However, when hitting the cache with each of the 13 million entries, there are definitely not 5 million cache misses; in fact, there are only ~5 cache misses per 3 million entries. There are no TTL issues here, as the TTL for all entries is set to months from now.
Is there a good explanation for the seemingly 5 million missing entries?
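For what it's worth, a small sketch (assuming redis-py and placeholder primary endpoints) of cross-checking the per-shard counts directly with DBSIZE, which counts every key held by that node regardless of TTL:

```python
import redis

# Placeholder endpoints for the 5 primaries.
primaries = [("primary-1", 6379), ("primary-2", 6379), ("primary-3", 6379),
             ("primary-4", 6379), ("primary-5", 6379)]

total = 0
for host, port in primaries:
    count = redis.Redis(host=host, port=port).dbsize()  # keys on this primary
    print(f"{host}: {count} keys")
    total += count
print(f"total across primaries: {total}")
```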
Our programmer forgot to set an expire time on the keys.
Now Redis has fifty million keys that never expire, and these keys use about 15 GB.
(The maxmemory setting is 17 GB, with the allkeys-lru policy.)
My Questions:
If we set maxmemory to 14 GB, smaller than the total size of these keys, can we force Redis to evict them?
If Redis does so, can we read/write as normal while it is evicting?
(Or is it perhaps not possible to do anything at all?)
We also use RDB dumps. Should we stop them before reducing maxmemory?
Thank you.
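As a minimal sketch, assuming redis-py and a placeholder host, this is what lowering maxmemory at runtime looks like, plus how eviction progress can be watched afterwards:

```python
import redis

r = redis.Redis(host="localhost", port=6379)   # placeholder host

# Lower maxmemory at runtime; with allkeys-lru, Redis starts evicting
# the least recently used keys as soon as used memory exceeds the new limit.
r.config_set("maxmemory", "14gb")

# Watch progress: evicted_keys grows while memory is being freed.
print("evicted_keys:", r.info("stats")["evicted_keys"])
print("used_memory_human:", r.info("memory")["used_memory_human"])
```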
I went through the Redis Cluster documentation, and it says there are 16384 slots in a Redis cluster (cluster mode enabled). Does this mean that there can be a maximum of only 16384 master nodes in a cluster?
If yes, then how do we scale beyond 16384 master nodes?
If no, then how will it work, since at least two masters would have to be assigned the same hash slot?
The key space is split into 16384 slots, effectively setting an upper limit of 16384 master nodes for the cluster size (however, the suggested maximum is on the order of ~1000 nodes).
For more information, refer to the cluster specification: https://redis.io/topics/cluster-spec
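To illustrate how keys are mapped onto those 16384 slots (rather than onto nodes directly), here is a small Python sketch of the slot calculation described in the spec: CRC16 of the key, honouring {hash tags}, modulo 16384:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """Map a key to one of the 16384 cluster slots, honouring {hash tags}."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:              # non-empty tag between { and }
            key = key[start + 1:end]
    return crc16(key) % 16384

print(hash_slot(b"user:1000"))           # some slot in 0..16383
print(hash_slot(b"{user:1000}.cart") == hash_slot(b"{user:1000}.profile"))  # True
```

Slots, not nodes, are the unit of sharding: each master owns a subset of the 16384 slots, so resizing the cluster only moves slots between masters.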
Hope this helps :)
I want to use Redis with a perfect LRU eviction policy. For experimentation purposes I set the maxmemory of Redis so that it can hold at most 10 key/value pairs of a fixed data size. I set maxmemory-policy to allkeys-lru and maxmemory-samples to 10. When I insert a new key/value pair, it does not remove the least recently used key/value pair. Why is it not removing that key when the sample size equals the maximum number of keys the instance can hold?
So what should the behaviour of the allkeys-lru eviction policy be when maxmemory-samples equals the maximum possible number of key/value pairs in Redis?
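Here is a minimal sketch of that experiment, assuming redis-py and a disposable, non-clustered test instance with placeholder sizes. Keep in mind that allkeys-lru is an approximate LRU: Redis evicts the best candidate from a random sample (plus an eviction pool), and the sample may contain repeats, so even a large maxmemory-samples does not guarantee perfect LRU behaviour:

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)   # placeholder; use a throwaway test instance
r.flushall()                                   # WARNING: wipes the test instance
r.config_set("maxmemory-policy", "allkeys-lru")
r.config_set("maxmemory-samples", 10)
r.config_set("maxmemory", "2mb")               # placeholder limit: roughly ~10 payloads fit

value = "x" * 100_000                          # ~100 KB fixed-size payload

# Fill past the limit while touching key:0 so it stays "recently used".
for i in range(20):
    r.set(f"key:{i}", value)
    r.get("key:0")
    time.sleep(0.01)

# Under (approximate) LRU, key:0 should usually survive; older untouched keys should not.
print(sorted(k.decode() for k in r.keys("key:*")))
print("evicted_keys:", r.info("stats")["evicted_keys"])
```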