Redis as perfect LRU

I want to use Redis with a perfect LRU eviction policy. For experimentation purposes I set maxmemory so that Redis can hold at most 10 key/value pairs of fixed data size. I set maxmemory-policy allkeys-lru and maxmemory-samples 10. When I insert a new key/value pair, Redis does not remove the least recently visited key/value pair. Why is it not removing the oldest key when the sample size is the same as the maximum number of keys it can hold?
So what should the behaviour of the allkeys-lru eviction policy be when maxmemory-samples equals the maximum number of key/value pairs Redis can hold?
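For reference, a minimal sketch of this experiment using redis-cli; the maxmemory value below is a placeholder for whatever limit fits 10 fixed-size pairs:
redis-cli CONFIG SET maxmemory 1mb
redis-cli CONFIG SET maxmemory-policy allkeys-lru
redis-cli CONFIG SET maxmemory-samples 10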

Related

Redis is not expiring keys

I am running a cron job which sets approximately 1.5 million keys in my Redis cluster in 1 minute. The keys I am setting are approximately 68 bytes long. My Redis cluster configuration:
Number of masters: 2.
Replication factor: 1.
Eviction policy: volatile-lru.
Maxmemory of one node: 8 GB.
Average TTL of keys: about 300 seconds.
As the keys are not expiring, the cluster fills up after some time. How can we resolve this?
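For context, a key with a 300-second TTL like the ones described is typically written as follows (key name and value are placeholders):
redis-cli SET job:12345 payload EX 300
redis-cli TTL job:12345    # seconds remaining until expiry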

Can we set Redis maxmemory smaller than the total size of non-expiring keys to enforce key eviction (allkeys-lru)?

Our programmer forgot to set key expiry times.
Now Redis has fifty million keys that never expire, and these keys use about 15 GB
(the maxmemory setting is 17 GB, with allkeys-lru).
My questions:
If we set maxmemory to 14 GB, smaller than the total size of these keys, can we force Redis to evict them?
If Redis does so, can we read/write as normal while it is evicting?
(Or might it not be possible to do anything at all?)
We also use RDB dumps. Should we stop them before reducing maxmemory?
Thank you.
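For reference, a sketch of the change being asked about; maxmemory can be lowered at runtime without a restart (the 14 GB value is taken from the question):
redis-cli CONFIG SET maxmemory 14gb
redis-cli CONFIG GET maxmemory    # verify the new limit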

Redis - Loss of old data

We are using Redis for storing cart data. We see that some of the carts older than a month are no longer available. I assumed the data would be persisted and available at any time. Are there any settings I should review to check why some of the old data is getting deleted? There is no TTL set when storing the data.
Maybe your Redis instance reached its maxmemory limit. Take a look at two settings in redis.conf: maxmemory and maxmemory-policy.
When maxmemory is reached, Redis takes the action specified by maxmemory-policy, which could be allkeys-lru or noeviction. If the policy is an LRU one, the least recently used data is dropped.
As the Redis docs say:
noeviction: return errors when the memory limit was reached and the client is trying to execute commands that could result in more memory to be used (most write commands, but DEL and a few more exceptions).
allkeys-lru: evict keys by trying to remove the less recently used (LRU) keys first, in order to make space for the new data added.
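A quick way to check these two settings and the current memory usage on a running instance (a sketch using redis-cli):
redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy
redis-cli INFO memory    # compare used_memory against maxmemory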

Which maxmemory policies allow expiration in Redis?

Which maxmemory policies are compatible with Redis expiration mechanisms?
Is it only volatile-ttl? Does noeviction keep old records from being removed?
See here from redis.conf:
MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
is reached. You can select among six behaviors:
volatile-lru -> remove the key with an expire set using an LRU algorithm
allkeys-lru -> remove any key according to the LRU algorithm
volatile-random -> remove a random key with an expire set
allkeys-random -> remove a random key, any key
volatile-ttl -> remove the key with the nearest expire time (minor TTL)
noeviction -> don't expire at all, just return an error on write operations
Note: with any of the above policies, Redis will return an error on write
operations, when there are no suitable keys for eviction.
At the date of writing these commands are: set setnx setex append
incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
getset mset msetnx exec sort
The default is:
maxmemory-policy noeviction
If you keep the policy at the default 'noeviction' or if you choose any of the volatile-* ones without actually having expirable keys in the database, the data will remain in Redis indefinitely. Do remember, however, that if you do not delete data from Redis and keep adding more, you'll eventually run out of memory.
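As a side note, per-key expiry is honored regardless of the maxmemory policy, while a key written without a TTL is never removed by the expiration mechanism at all. A minimal illustration (key names are placeholders):
redis-cli SET cart:1 data          # no expiry set
redis-cli TTL cart:1               # -1: no TTL, never expires on its own
redis-cli SET cart:2 data EX 60    # expires after 60 seconds under any maxmemory-policy
redis-cli TTL cart:2               # counts down from 60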

Increase the default memory and the hash/set key limit of Redis

I am writing a JAR file that fetches a large amount of data from an Oracle DB and stores it in Redis. The details are stored properly, but the set keys and hash keys I have defined in the JAR appear to be limited in the Redis DB. There should be nearly 200 hash keys and 300 set keys, but I get only 29 keys when running KEYS * in Redis. Please help on how to increase the limit of Redis memory or of the hash/set key storage size.
Note: I changed
hash-max-zipmap-entries 1024
hash-max-zipmap-value 64
manually in the redis.conf file, but it is not taking effect. Does it need to be changed anywhere else?
There is no limit on the number of set or hash keys you can put in a Redis instance, other than the size of the available memory (check the maxmemory and maxmemory-policy parameters).
The hash-max-zipmap-entries parameter is completely unrelated: it only controls memory optimization.
I suggest using the MONITOR command to check which queries are sent to the Redis instance.
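A sketch of that check: run MONITOR in a second terminal while the JAR is writing, then compare what you see against the key count actually stored:
redis-cli MONITOR    # streams every command the server receives
redis-cli DBSIZE     # number of keys currently in the selected database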
hash-max-zipmap-value keeps hashes in Redis memory-optimized: lookups inside these compact hashes are amortized O(N), so longer values would in turn increase the latency of the system.
These settings are available in redis.conf.
If a hash grows beyond the specified limits, it is internally converted to the regular hash-table encoding, and it then loses the memory advantage that small hashes provide.
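A way to observe the encoding switch described above (hash name and values are placeholders; depending on the Redis version the compact encoding is reported as zipmap, ziplist, or listpack):
redis-cli HSET h field1 short
redis-cli OBJECT ENCODING h    # compact encoding, e.g. ziplist
redis-cli HSET h field2 this-value-is-deliberately-longer-than-sixty-four-bytes-to-force-the-conversion
redis-cli OBJECT ENCODING h    # hashtable: the memory optimization is gone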