While poking through Redis I came across the WATCH command, which is part of Redis transactions.
The related section on Transactions explains in a bit more detail how WATCH works with other Redis concurrency commands.
However, one thing that confuses me is: What happens if I call WATCH on keys, but don't (for whatever reason) UNWATCH them? Is there some cache of WATCH'd keys that fills and then starts discarding older WATCH'd keys? Will this cause latency issues?
Any comments would be helpful :)
What happens if I call WATCH on keys, but don't (for whatever reason) UNWATCH them?
These keys will be added to a watched key list.
Is there some cache of WATCH'd keys that fills and then starts discarding older WATCH'd keys?
No. Watched keys won't be removed unless you unwatch them (by calling UNWATCH, EXEC, or DISCARD) or the connection is closed.
Will this cause latency issues?
If any keys are watched, then each time you modify a key (whether it's a watched key or not), Redis spends some CPU cycles on watch-related bookkeeping. So you'd better unwatch keys as soon as possible.
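To make the lifecycle above concrete, here is a toy in-memory model of how a server might track WATCHed keys per client and invalidate pending transactions on every write. This is not Redis source code, just a sketch of the bookkeeping described above; all class and variable names are illustrative.

```python
# Toy model: keys are watched per client, every write checks the watcher
# list, and EXEC fails (returns None) if a watched key was touched.

class ToyStore:
    def __init__(self):
        self.data = {}
        self.watchers = {}   # key -> set of client ids watching it
        self.dirty = set()   # client ids whose pending EXEC must fail

    def watch(self, client_id, key):
        self.watchers.setdefault(key, set()).add(client_id)

    def unwatch(self, client_id):
        for clients in self.watchers.values():
            clients.discard(client_id)

    def set(self, client_id, key, value):
        # Extra work on EVERY write while watchers exist: any other client
        # watching this key has its pending transaction invalidated.
        for other in self.watchers.get(key, ()):
            if other != client_id:
                self.dirty.add(other)
        self.data[key] = value

    def exec(self, client_id):
        ok = client_id not in self.dirty
        self.dirty.discard(client_id)
        self.unwatch(client_id)  # EXEC always clears this client's watches
        return self.data.copy() if ok else None

store = ToyStore()
store.watch("client-a", "balance")
store.set("client-b", "balance", 100)   # touches a watched key
print(store.exec("client-a"))           # None: transaction aborted
```

Note that `exec` clears the client's watch list whether or not it succeeds, mirroring the behavior described in the answer.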
Backstory: The keyspace of the Redis database in question reports a large number of expired keys, and memory usage is maxed out. The application using this database is experiencing (rare) intermittent timeouts, and I thought (in my limited knowledge) that perhaps Redis is having to evict keys each time a new key is created.
So to my question: how do I tell Redis to remove all the expired keys?
Secondarily -- is it possible to access/see expired keys with redis-cli?
Here's a slice of the INFO I'm looking at:
maxmemory_policy:allkeys-lru
expired_keys:24326586
evicted_keys:134022997
keyspace_hits:2684031719
keyspace_misses:186380210
slave_expires_tracked_keys:0
active_defrag_key_hits:0
active_defrag_key_misses:0
db2:keys=12994468,expires=3193,avg_ttl=1891176
Answer for myself, posterity, and any other Redis newbies out there. I was looking at the wrong "database". I was under the WRONG impression that Redis only had a single database, but looking at my question you can see "db2". I looked into that and found that Redis can have up to 16 databases, identified by a zero-based index. In this case:
SELECT 2
That selects "db2", and now running DBSIZE gives a more accurate output.
Oye -- so the problem is that the keys are still there! When Redis expires a key it deletes it, so these keys had not actually expired.
Whoops! I'm leaving my question up because someone else might ask the same thing and be on the wrong route.
I will be storing key-value pairs in Redis but the number of keys will be just 4. Since there will be multiple processes updating the values parallelly, I plan to use Redis transactions using WATCH, MULTI and EXEC commands.
My algorithm is something like this:
GET key
WATCH key
MULTI
SET key new_val
EXEC
My main concern is that, since WATCH uses optimistic locking, when I will have multiple processes (much more than the number of keys, which are only 4) trying to update values, the transaction failure rate will be very high.
Is this correct? Is there a way to prevent this?
It is unclear why you'd need a transaction here unless you're doing something in your application with the reply for GET key. I'll therefore assume that you are using the value for something meaningful; otherwise, you can drop the transaction semantics and just call SET key new_val.
Optimistic locking is mainly intended for cases where there is low contention for the resources. Since the use case you're describing is clearly the opposite, it would probably result in high failure rates. This isn't to say that Redis and your application will not work, but it does mean there is a potential for a lot of wasted effort.
I would advise that you consider switching if possible to using Redis' server-side Lua scripts. These are blocking, atomic, and let you read and meaningfully manipulate the data in Redis programmatically. See the EVAL command for details.
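To illustrate the retry pattern the question describes, here is a sketch of the GET/WATCH/MULTI/SET/EXEC loop, simulated against a plain dict (with a version counter standing in for WATCH) so it runs without a Redis server. With a real client the structure is the same: retry until EXEC succeeds. All names here are illustrative, and the `interleaved_writer` hook simulates a concurrent process racing us.

```python
def update_with_retries(store, versions, key, transform, interleaved_writer=None):
    """Retry a read-modify-write until no concurrent write intervened."""
    retries = 0
    while True:
        seen_version = versions.get(key, 0)   # WATCH key (remember its version)
        value = store.get(key)                # GET key

        if interleaved_writer:                # simulate a concurrent writer
            interleaved_writer(store, versions)
            interleaved_writer = None         # it only races us once

        # MULTI ... EXEC: commit only if the key is unchanged since WATCH
        if versions.get(key, 0) == seen_version:
            store[key] = transform(value)
            versions[key] = seen_version + 1
            return retries
        retries += 1                          # EXEC returned nil: start over

def bump(store, versions):
    store["counter"] = store.get("counter", 0) + 10
    versions["counter"] = versions.get("counter", 0) + 1

store, versions = {"counter": 0}, {}
retries = update_with_retries(store, versions, "counter",
                              lambda v: (v or 0) + 1, interleaved_writer=bump)
print(retries, store["counter"])  # 1 retry; final value 11
```

With only 4 keys and many writers, almost every attempt would hit the retry branch, which is the wasted effort the answer describes; a server-side Lua script avoids the loop entirely because it runs atomically.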
Problem:
I want to set a TTL on a key (to avoid it lasting forever), but I do NOT want that specific key to be evicted.
When I set the TTL I know when it will be safe to expire that cache, but it is NOT safe to expire the cache before this time, and eviction presents the risk of having this cache expire too early.
Context:
I am using Redis to cache an object in multiple languages, if the underlying data changes however I want to remove all associated caches from Redis.
The way I went about solving this problem was to create a SET in Redis that contains a reference to the keys in every language. My concern is that if that SET is evicted, I lose the reference to the other keys and risk having them persist in the cache when they shouldn't.
What I am looking for
A Redis command that looks something like
PLEASE_DO_NOT_EVICT key
while not preventing that key from expiring after the TTL runs out.
Thanks very much for taking the time to read and answer!
While I could use wildcard matching to find all of the associated keys, this is WAY slower than SMEMBERS, and I am doing this in an environment where every millisecond counts: these objects are created and deleted very frequently, so this query happens very often.
Not having a TTL on these objects means they start building up in memory, which is undesirable. And they do tend to stop being referenced after a while.
Having a no-eviction policy seems risky, and I would very much want to avoid it.
When creating:
SADD 'object:id:group', 'object:id:spanish'
SETEX 'object:id:spanish', 100, 'Este es el object en espaniol'
EXPIRE 'object:id:group', 100
When expiring the group because the object changed:
SMEMBERS 'object:id:group'
=> 'object:id:spanish', 'object:id:english'
DEL 'object:id:spanish', 'object:id:english'
DEL 'object:id:group'
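The group-key pattern above can be sketched in a few lines, using a dict in place of Redis so it runs standalone: a set key lists every per-language cache key, and invalidation reads the set, then deletes the members and the set itself. The function and key names are illustrative, and TTLs are omitted for brevity.

```python
store = {}

def cache_translation(object_id, lang, payload):
    group = f"object:{object_id}:group"
    key = f"object:{object_id}:{lang}"
    store.setdefault(group, set()).add(key)   # SADD group key
    store[key] = payload                      # SETEX key ttl payload

def invalidate(object_id):
    group = f"object:{object_id}:group"
    for key in store.get(group, set()):       # SMEMBERS group
        store.pop(key, None)                  # DEL each member key
    store.pop(group, None)                    # DEL the group key itself

cache_translation("42", "spanish", "Este es el object en espaniol")
cache_translation("42", "english", "This is the object in English")
invalidate("42")
print(store)  # {} -- all associated keys gone
```

The risk the question raises sits in `invalidate`: if the group key were evicted first, SMEMBERS would return nothing and the per-language keys would be orphaned until their TTLs fire.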
You can set the maxmemory-policy to its default value of "noeviction". In this mode, no keys are evicted.
I used Redis's FLUSHDB to flush all the data in Redis, but it caused redis-server to go away. I suspect the problem could be the cleanup of a large number of keys. So is there any way to flush Redis more smoothly? Maybe taking more time to flush all the data?
flushall is "delete all keys" as described here: http://redis.io/commands/flushall
Delete operations are blocking operations.
Large delete operations may block Redis for a minute or more (e.g., deleting a 16 GB hash with tons of fields).
You should write a script that uses a cursor to do this.
//edit:
I found my old answer here and wanted to be more specific providing resources:
For a large number of keys, use SCAN to iterate over them with a cursor and do a graceful cleanup in smaller batches.
For a large hash, use either the UNLINK command for an asynchronous delete, or HSCAN to iterate over it with a cursor and do a graceful cleanup.
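The batched cleanup can be sketched as follows. The `fake_scan` helper is a stand-in for a real SCAN call (it pages through a snapshot of the key list and returns cursor 0 when done), so the example runs without a Redis server; with a real client you would loop on the server-returned cursor the same way.

```python
def fake_scan(keys, cursor, count):
    """Return (next_cursor, batch); next_cursor 0 means iteration is done."""
    batch = keys[cursor:cursor + count]
    next_cursor = cursor + count
    return (0 if next_cursor >= len(keys) else next_cursor), batch

store = {f"key:{i}": i for i in range(10)}

all_keys = sorted(store)   # snapshot of the keys to clean up
cursor, batches = 0, 0
while True:
    cursor, batch = fake_scan(all_keys, cursor, count=3)
    for key in batch:
        store.pop(key, None)   # DEL (or UNLINK) each key in the batch
    batches += 1
    if cursor == 0:
        break

print(len(store), batches)  # 0 keys left, deleted in 4 batches
```

Because each batch is small, no single delete blocks the server for long, which is the point of the cursor-based approach over one big FLUSHDB.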
I'm using MSETNX (http://redis.io/commands/msetnx) as a locking system, whereby all keys are locked only if no locks already exist.
If a machine holding a lock dies, that lock will be stuck locked - this is a problem.
My ideal answer would be that all keys expire in 15 seconds by default, so even if a machine dies, its held locks will auto-reset in a short time. This way I don't have to call EXPIRE on every key I set.
Is this possible in any way?
To build a reliable lock that is highly available, please check this document: http://redis.io/topics/distlock
The algorithm is still in beta, but it has been stress-tested in a few sessions and is likely to be far more reliable than a single-instance approach anyway.
There are reference implementations for a few languages (linked in the doc).
Redis doesn't have a built-in way to do MSETNX and expire all keys together atomically. Nor can you set a default expiry time for keys.
You could consider instead:
1. Using a WATCH/MULTI/EXEC block that wraps multiple 'SET key value EX 15 NX', or
2. Doing this using a Lua server-side script.
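Option 1 can be sketched like this: take each lock with NX/EX-style semantics, and if any key is already held, roll back the locks taken so far so the acquisition is all-or-nothing. Locks are simulated here with a dict of (owner, deadline) pairs so the example runs standalone; all names are illustrative.

```python
import time

locks = {}  # key -> (owner, expiry_timestamp)

def acquire_all(keys, owner, ttl, now=None):
    """Acquire every key or none; expired locks may be re-acquired."""
    now = time.monotonic() if now is None else now
    taken = []
    for key in keys:
        holder = locks.get(key)
        if holder is not None and holder[1] > now:   # live lock: fail
            for k in taken:                           # roll back our locks
                locks.pop(k, None)
            return False
        locks[key] = (owner, now + ttl)               # SET key owner EX ttl NX
        taken.append(key)
    return True

print(acquire_all(["a", "b"], "worker-1", ttl=15, now=0))    # True
print(acquire_all(["b", "c"], "worker-2", ttl=15, now=5))    # False: "b" held
print(acquire_all(["b", "c"], "worker-2", ttl=15, now=20))   # True: "b" expired
```

The per-key deadline gives the auto-reset the question asks for: a dead machine's locks simply time out, without a separate EXPIRE call per key. Note the rollback in the failure path is what MSETNX gives you for free but SET NX does not.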