We are using Redis to store cart data. We see that some carts older than a month are no longer available. I assumed the data would have been persisted and would be available at any time. Are there any settings I should review to find out why some of the old data is getting deleted? No TTL is set when storing the data.
Maybe you are hitting your Redis maxmemory limit. Take a look at two settings in redis.conf: maxmemory and maxmemory-policy.
When maxmemory is reached, Redis takes the action specified by maxmemory-policy, which could be allkeys-lru, noeviction, or another policy. If the policy is an LRU one, the least recently used data will be dropped.
As the Redis docs say:
noeviction: return errors when the memory limit was reached and the client is trying to execute commands that could result in more memory to be used (most write commands, but DEL and a few more exceptions).
allkeys-lru: evict keys by trying to remove the less recently used (LRU) keys first, in order to make space for the new data added.
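To confirm whether eviction is the culprit, you can inspect the live configuration and the eviction counter. A minimal sketch, assuming a local Redis on the default port and the redis-py client:

import redis

client = redis.Redis(host='localhost', port=6379, db=0)

# A 'maxmemory' of 0 means no limit; anything else makes eviction possible.
print(client.config_get('maxmemory'))
print(client.config_get('maxmemory-policy'))

# INFO stats includes an evicted_keys counter; a non-zero value confirms
# that keys have been evicted since the server started.
print(client.info('stats')['evicted_keys'])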
We have two separate sets of keys in one Redis instance (set1 and set2). All keys in both sets have an expire time set.
If the Redis instance hits its max memory cap, we want keys from set1 (and only from set1!) to be evicted to free some memory, but we need a guarantee that keys from set2 will not be evicted before their time limit and will thus always expire in the normal way.
Is there any possibility to achieve this?
Thanks in advance!
Redis doesn't provide this fine-grained a level of control over eviction. You're restricted to the following options:
noeviction: New values aren't saved when the memory limit is reached. When a database uses replication, this applies to the primary database.
allkeys-lru: Keeps most recently used keys; removes least recently used (LRU) keys
allkeys-lfu: Keeps frequently used keys; removes least frequently used (LFU) keys
volatile-lru: Removes least recently used keys with the expire field set to true.
volatile-lfu: Removes least frequently used keys with the expire field set to true.
allkeys-random: Randomly removes keys to make space for the new data added.
volatile-random: Randomly removes keys with expire field set to true.
volatile-ttl: Removes keys with expire field set to true and the shortest remaining time-to-live (TTL) value.
The best you could do would be to set the policy to noeviction and then write your own cache-invalidation process. Or maybe set it to volatile-ttl, make the set2 keys non-volatile (no expire), and remove them manually. A fair bit of work and possibly not worth it.
The documentation describing these options also provides some good insight into how Redis actually removes things and might be worth perusing.
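For illustration, here is a rough sketch of that volatile-ttl idea, assuming redis-py: set1 keys get a real TTL (so they stay evictable), while set2 keys are stored without one and their intended expiry is tracked in a sorted set (the name set2:expiries is invented for this example):

import time
import redis

client = redis.Redis(host='localhost', port=6379, db=0)

def put_set1(key, value, ttl_seconds):
    # Volatile key: eligible for eviction under volatile-ttl.
    client.set(key, value, ex=ttl_seconds)

def put_set2(key, value, ttl_seconds):
    # Non-volatile key: a volatile-* policy will never evict it.
    client.set(key, value)
    client.zadd('set2:expiries', {key: time.time() + ttl_seconds})

def reap_set2():
    # Run periodically: delete set2 keys whose intended expiry has passed.
    now = time.time()
    for key in client.zrangebyscore('set2:expiries', 0, now):
        client.delete(key)
        client.zrem('set2:expiries', key)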
I need to find out whether data in a row has been modified, so that the next time the value is fetched the client knows that this particular row has been modified in the Redis database, somewhat like an SCN value in SQL. Is that possible?
While Redis does not expose a last-modification timestamp for keys, you can easily achieve what you are looking for by storing that information in separate keys, (possibly) named after the keys you are tracking: for every modification of the key named key, for example, you would also immediately set a key named key_modified_on with the current timestamp.
To make the operation transaction-like, you could use a MULTI/EXEC transaction (or even a Lua script, if needed):
var transaction = database.CreateTransaction();
// Queue both writes; they are sent to the server atomically on execute.
transaction.StringSetAsync("mykey", "myvalue");
transaction.StringSetAsync("mykey_modified_on", DateTime.UtcNow.ToString("O"));
// EXEC: either both keys are set, or neither is.
await transaction.ExecuteAsync();
With that being said, Redis exposes an idle time for each key (the time elapsed since it was last touched by a read or write operation) through the OBJECT IDLETIME command, provided (according to the documentation) maxmemory-policy is set to an LRU policy or noeviction and maxmemory is set. In that case, you can just use the KeyIdleTimeAsync() method:
var idleTime = await database.KeyIdleTimeAsync("mykey");
I have keys I want to keep indefinitely in Redis, provided I have enough memory. However, if Redis runs low on memory, I'd like it to remove the oldest keys first. I looked at the eviction policy options and it appears Redis doesn't support this out of the box. https://support.redislabs.com/hc/en-us/articles/203290657-What-eviction-policies-do-you-support-
How could I implement this myself using commands available in the redis-client API?
Here's some pseudocode to give a flavor of what I need:
1. Get the first N keys from a list sorted by key date asc.
2. Delete the oldest keys.
3. Repeat until memory is no longer constrained.
The eviction policy determines what happens when a database reaches its memory limit. To make room for new data, older data is evicted (removed) according to the selected policy.
You can select a policy based on your requirements from the Redis documentation on key eviction. The one I am using in the example below is allkeys-lru.
Example -
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK
Example in Python -
import redis

client = redis.Redis(host='localhost', port=6379, db=0)
# Note: eviction only kicks in once a maxmemory limit is configured as well.
client.config_set('maxmemory-policy', 'allkeys-lru')
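If you do want to implement the question's pseudocode yourself rather than rely on allkeys-lru, one approach (a sketch, assuming redis-py; keys:by-date and the memory budget are invented for this example) is to track each key's insertion time in a sorted set and reap the oldest entries when memory gets tight:

import time
import redis

client = redis.Redis(host='localhost', port=6379, db=0)
MEMORY_LIMIT = 100 * 1024 * 1024  # assumed budget of 100 MB
BATCH = 10                        # how many of the oldest keys to drop per pass

def put(key, value):
    # Write the value and record its insertion time in one round trip.
    pipe = client.pipeline()
    pipe.set(key, value)
    pipe.zadd('keys:by-date', {key: time.time()})
    pipe.execute()

def evict_oldest_if_needed():
    # Steps 1-3 of the pseudocode: delete oldest keys until under the limit.
    while client.info('memory')['used_memory'] > MEMORY_LIMIT:
        oldest = client.zrange('keys:by-date', 0, BATCH - 1)
        if not oldest:
            break
        client.delete(*oldest)
        client.zrem('keys:by-date', *oldest)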
There are two systems sharing a Redis database; one system only reads from Redis, the other updates it.
The read system is so busy that Redis can't handle the load. To reduce the number of requests to Redis, I found MGET, but I also found MULTI.
I'm sure MGET will reduce the number of requests, but will MULTI do the same? I think MULTI forces the Redis server to keep some state about the transaction and to collect the requests in the transaction from the client one by one, so the total number of requests sent stays the same and only the returned results are combined together. Is that right?
So if I just read KeyA, keyB, keyC in MULTI while the other write system changes KeyB's value, what will happen?
Short Answer: You should use MGET
MULTI is used for transactions, and it won't reduce the number of requests. Also, the MULTI command MIGHT be deprecated in the future, since there's a better choice: Lua scripting.
So if I just read KeyA, keyB, keyC in MULTI while the other write system changes KeyB's value, what will happen?
Since MULTI (with EXEC) ensures a transaction, all three GET commands (read operations) execute atomically. If the update is applied before the transaction executes, you'll get the new value. Otherwise, you'll get the old value.
By the way, there's another option to reduce RTT: PIPELINE. However, in your case, MGET should be the best option.
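For comparison, a short sketch of the three options in redis-py (localhost and the key names are assumptions):

import redis

client = redis.Redis(host='localhost', port=6379, db=0)

# MGET: one command, one round trip; values come back in key order.
values = client.mget('KeyA', 'keyB', 'keyC')

# Pipelining: still three GET commands server-side, but one round trip.
pipe = client.pipeline(transaction=False)
pipe.get('KeyA')
pipe.get('keyB')
pipe.get('keyC')
values = pipe.execute()

# MULTI/EXEC: redis-py's transactional pipeline; the three reads execute
# atomically, but the server still processes three GETs plus MULTI/EXEC.
tx = client.pipeline(transaction=True)
tx.get('KeyA')
tx.get('keyB')
tx.get('keyC')
values = tx.execute()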
I am writing a JAR that fetches a large amount of data from an Oracle DB and stores it in Redis. The details are stored properly, but the set and hash keys I have defined in the JAR seem to be limited in the Redis DB. There should be nearly 200 hash and 300 set keys, but I am getting only 29 keys when running KEYS * in Redis. Please help on how to increase the limit of the Redis memory, or of the hash or set key storage size.
Note: I changed the
hash-max-zipmap-entries 1024
hash-max-zipmap-value 64
manually in the redis.conf file, but it's not reflected. Does it need to be changed anywhere else?
There is no limit on the number of set or hash keys you can put in a Redis instance, other than the size of the available memory (check the maxmemory and maxmemory-policy parameters).
The hash-max-zipmap-entries parameter is completely unrelated: it only controls memory optimization.
I suggest using the MONITOR command to check which queries are actually sent to the Redis instance.
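As a quick sanity check, remember that KEYS * and DBSIZE count top-level keys only; fields inside a hash and members of a set do not add keys, which often explains a lower-than-expected count. A sketch, assuming redis-py and a local instance (some_hash is a hypothetical key name):

import redis

client = redis.Redis(host='localhost', port=6379, db=0)

# Top-level keys only; hash fields and set members are not counted here.
print(client.dbsize())

# Count the fields inside one hash ('some_hash' is a hypothetical name).
print(client.hlen('some_hash'))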
hash-max-zipmap-value keeps the hash key/value storage in Redis optimized: looking up a field in these compactly encoded hashes is amortized O(N), so longer values will in turn increase the latency of the system.
These settings are available in redis.conf.
If a hash grows beyond the specified number of entries, it is internally converted to the regular hash encoding and thereby loses the memory advantage that the compact encoding provides.
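You can watch this conversion happen with OBJECT ENCODING. A sketch, assuming redis-py and a recent Redis server (where the compact encoding is called listpack, formerly ziplist/zipmap, with the default entry threshold of 128):

import redis

client = redis.Redis(host='localhost', port=6379, db=0)

# A small hash stays in the compact encoding.
client.hset('small_hash', mapping={f'f{i}': i for i in range(10)})
print(client.object('encoding', 'small_hash'))  # e.g. b'listpack'

# Exceeding hash-max-*-entries converts it to a regular hash table.
client.hset('big_hash', mapping={f'f{i}': i for i in range(2000)})
print(client.object('encoding', 'big_hash'))    # e.g. b'hashtable'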