Redis - decrease TTL by single command

Let's say I have a Redis record with TTL = 1 hour.
Then some event occurs and I want to reset the TTL of this item to min(current TTL, 5 min). So it would decrease the TTL to 5 minutes if it is not already lower.
The underlying use case is that the cache can be invalidated too frequently, and the "old" cache is almost as good as the fresh one as long as it is not older than 5 minutes from the first change.
I know I can fetch the TTL with one command and update it with a second, but I would prefer to set it with a single command for various reasons. Is there a way?
Edit: there will be many keys I need to decrease with a single command, so I would like to avoid data round-trips between Redis and the client library for each record.

There is no single command to do that, but you can wrap the logic in a server-side Lua script and invoke that with a single command. Refer to the EVAL command for more information.
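For example, a minimal sketch of such a script (the key names and the 300-second cap in the invocation below are just placeholders):

```lua
-- Cap the TTL of every key in KEYS at ARGV[1] seconds.
-- TTL returns -2 for a missing key and -1 for a key without an
-- expiry; neither is greater than the cap, so both are skipped here.
local cap = tonumber(ARGV[1])
for _, key in ipairs(KEYS) do
  local ttl = redis.call('TTL', key)
  if ttl > cap then
    redis.call('EXPIRE', key, cap)
  end
end
return 1
```

Invoked as EVAL "<script>" 2 user:1 user:2 300, it caps both keys' TTLs at 5 minutes in a single round-trip; with SCRIPT LOAD and EVALSHA you can avoid resending the script body each time.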

Related

Queries frequency stats in Redis?

In my application I am implementing a server-side cache using Redis (for a MySQL database). When data changes in the database, I want to completely clear the cache to invalidate the old data.
However, I would like to see some statistics about how often different keys are queried in Redis, so that I can manually pre-fetch frequently queried data, making it available immediately after clearing the cache.
Is there any way to see these statistics directly in Redis? Or what is a common solution to this problem?
You can use the OBJECT command.
OBJECT FREQ returns the logarithmic access frequency counter of
the object stored at the specified key. This subcommand is available
when maxmemory-policy is set to an LFU policy.
https://redis.io/commands/object
redis-cli --hotkeys can also help, for redis-cli version 4.x and above.

Is it possible to see all the requests processed by redis?

I want to get all the commands processed by Redis, without using the MONITOR command, because MONITOR only shows the commands being processed right now, and that is not my case: I want to know the commands processed over the last 2 days. Is it possible to see the commands Redis processed in the last 2 days?
No, that is not possible. You might be able to get close if you have AOF persistence enabled and the append-only file hasn't been rewritten during that time, since it logs every write command.

Should I always use pipelining when there are more than 1 command in Redis?

I am new to Redis and a little bit confused about when I should use pipelining: should I use it every time there is more than one command to be sent?
For example, if I want to send 10 SET commands to the Redis server at a time, should I simply run the 10 commands one by one, or should I pipeline them?
Are there any disadvantages to pipelining 10 SET commands instead of sending them one by one?
when I should use pipelining
Pipelining is used to reduce RTT, so that you can improve performance when you need to send many commands to Redis.
should I use it all the time when there are more than 1 command to be sent?
It depends; you should consider it case by case.
if I want to send 10 SET commands to redis server at a time, should I simply run the 10 commands one by one or should I pipeline them?
Pipelining these commands will be much faster than sending the 10 commands one by one. However, in this particular case, the best choice is the MSET command.
Are there any disadvantage to pipeline 10 SET commands instead of sending them one by one?
With pipelining, Redis needs to consume more memory to hold the results of all these piped commands before sending them to the client. So if you pipe too many commands, that might be a problem.

Is the data always available with a Rename in Redis?

When I run a rename command, I think it does something like this:
Use the new name for the new data
Remove the reference for the old name
Remove the old data (this can take some time if it's large)
For clients accessing this data, is there ever a time when any of these happen?
The key does not exist
The data is not in a good state
Redis hangs during access
What steps are performed during a Redis rename command?
What steps are performed during a Redis rename command?
Since Redis executes commands on a single thread, the rename will be atomic, so the answer to 1 and 2 is no. The part about "removing old data" applies only if the destination key already points to a large structure that Redis needs to delete (Redis will clobber it). The original data object will not be copied; only hash table entries pointing to it might be moved around. Since rehashing in Redis is incremental, this should be essentially constant time.
Redis will always "hang" on slow commands due to the single-threaded command execution. So for 3, the answer can always be yes depending on what you're doing, but in this case only if the rename triggers a significantly large implicit delete.
Edit: as of Redis 4.0 you can specify the config option lazyfree-lazy-server-del yes (the default is no), and the server will delete asynchronously for side-effect deletes such as this. In other words, instead of the delete blocking, the object will be queued for background deletion. This effectively makes RENAME constant time. See the sample config: https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf
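Per that edit, the relevant redis.conf fragment looks like this:

```
# Redis 4.0+: free objects that are deleted as a side effect of other
# commands (e.g. a value clobbered by RENAME) in a background thread
# instead of blocking the command.
lazyfree-lazy-server-del yes
```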

Safely setting keys with StackExchange.Redis while allowing deletes

I am trying to use Redis as a cache that sits in front of an SQL database. At a high level I want to implement these operations:
Read a value from Redis; if it's not there, generate the value by querying SQL, and push it into Redis so we don't have to compute it again.
Write a value to Redis, because we just made some change to our SQL database and we know that we might have already cached it and it's now invalid.
Delete a value, because we know the value in Redis is now stale; we suspect nobody will want it, but it's too much work to recompute now. We're OK letting the next client who does operation #1 compute it again.
My challenge is understanding how to implement #1 and #3 with StackExchange.Redis. If I naively implement #1 with a simple read of the key and a push, it's entirely possible that between my computing the value from SQL and pushing it in, any number of other SQL operations have happened and also tried to push their values into Redis via #2 or #3. For example, consider this ordering:
Client #1 wants to do operation #1 [Read] from above. It tries to read the key, sees it's not there.
Client #1 calls to SQL database to generate the value.
Client #2 does something to SQL and then does operation #2 [Write] above. It pushes some newly computed value into Redis.
Client #3 comes along, does some other operation in SQL, and wants to do operation #3 [Delete] to Redis, knowing that if there's something cached there, it's no longer valid.
Client #1 pushes its (now stale) value to Redis.
So how do I implement my operation #1? Redis offers a WATCH primitive that makes this fairly easy to do against the bare metal, where I would be able to observe that other things happened to the key besides Client #1's read, but it's not supported by StackExchange.Redis because of how it multiplexes commands. Its conditional operations aren't quite sufficient here, since "push only if the key doesn't exist" doesn't prevent the race I explained above. Is there a pattern/best practice used here? This seems like a fairly common pattern that people would want to implement.
One idea I do have is to use a separate key that gets incremented each time I do some operation on the main key, and then use StackExchange.Redis' conditional operations against that key, but that seems kludgy.
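That version-key idea can also be pushed server-side, so the compare and the write happen atomically in one round-trip. A minimal sketch as a Redis Lua script (the two-key layout, a data key plus a sibling version key, is an assumption for illustration, not part of the question):

```lua
-- KEYS[1] = data key, KEYS[2] = its version key
-- ARGV[1] = computed value, ARGV[2] = version observed before the SQL query
local current = redis.call('GET', KEYS[2])
if current == ARGV[2] then
  redis.call('SET', KEYS[1], ARGV[1])
  return 1  -- stored: nobody touched the key while we queried SQL
end
return 0    -- version moved on; drop the now-stale value
```

Operations #2 and #3 would then INCR the version key whenever they write or invalidate, which makes any in-flight #1 write a no-op.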
This looks like a question about the right cache invalidation strategy rather than a question about Redis. Why I think so: Redis WATCH/MULTI is a kind of optimistic locking strategy, and this kind of locking is not suitable for most caching cases, where an expensive DB read query is exactly the problem the cache is meant to solve. In your description of operation #3 you write:
It's too much work to recompute now. We're OK letting the next client who does operation #1 compute it again.
So we can continue with update-on-read as the update strategy. Here are some more questions before we continue:
What happens when 2 clients start to perform operation #1? Both of them can fail to find the value in Redis, both perform the SQL query, and then both write the result to Redis. So we need a guarantee that just one client updates the cache.
How can we be sure of the right sequence of writes (operation #3)?
Why not optimistic locking
Optimistic concurrency control assumes that multiple transactions can frequently complete without interfering with each other. While running, transactions use data resources without acquiring locks on those resources. Before committing, each transaction verifies that no other transaction has modified the data it has read. If the check reveals conflicting modifications, the committing transaction rolls back and can be restarted.
You can read about the phases of OCC transactions on Wikipedia, but in a few words: if there is no conflict, you update your data; if there is a conflict, you resolve it, typically by aborting the transaction and restarting it if you still need to update the data.
Redis WATCH/MULTI is a kind of optimistic locking, so it can't help you here: you do not find out that your cache key was modified by someone else until you try to commit.
What works?
Every time you hear somebody talk about locking, a few words later you hear about compromises: performance, and consistency vs. availability. The last pair is the most important.
In most high-load systems, availability is the winner. What does that mean for caching? Usually this:
Each cache key holds some metadata along with the value: state, version and lifetime. The lifetime is not the Redis TTL; usually, if your key should be in the cache for X time, the lifetime in the metadata is X + Y, where Y is some extra time to guarantee the update process completes.
You never delete a key directly; you just update its state or lifetime.
Each time your application reads data from the cache, it should make a decision: if the data has state "valid", use it; if the data has state "invalid", try to update it, or use the obsolete data.
How to update on read (the important part is this "hand-made" mix of optimistic and pessimistic locking):
Try to take a pessimistic lock (in Redis, with SET and its NX and EX options).
If that fails, return the obsolete data (remember, we still need availability).
If it succeeds, perform the SQL query and write the result to the cache.
Read the version from Redis again and compare it with the version read previously.
If the version is the same, mark the state as "valid".
Release the lock.
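The compare-and-mark steps above (read the version again, mark "valid" only if it is unchanged) are themselves a race unless they run atomically, so they are a natural candidate for a small Lua script. A sketch, where keeping the metadata in a hash with version and state fields is my assumption:

```lua
-- KEYS[1] = metadata hash for the cache entry
-- ARGV[1] = version observed before the SQL query
local v = redis.call('HGET', KEYS[1], 'version')
if v == ARGV[1] then
  redis.call('HSET', KEYS[1], 'state', 'valid')
  return 1
end
return 0  -- the version changed while we were querying SQL
```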
How to invalidate (your operations #2, #3):
Increment the cache version and set the state to "invalid".
Update the lifetime/TTL if needed.
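Under the same assumed hash layout, this invalidation step is a single short Lua script:

```lua
-- KEYS[1] = metadata hash; ARGV[1] = new lifetime in seconds (optional)
redis.call('HINCRBY', KEYS[1], 'version', 1)
redis.call('HSET', KEYS[1], 'state', 'invalid')
if ARGV[1] then
  redis.call('HSET', KEYS[1], 'lifetime', ARGV[1])
end
return redis.call('HGET', KEYS[1], 'version')
```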
Why so complicated?
We can almost always get and return a value from the cache and rarely have a cache-miss situation, so we do not get the cache invalidation cascade hell where many processes try to update one key.
We still have ordered key updates.
Just one process at a time can update a key.
I have a queue!
Sorry, you had not said so before, or I would not have written all of the above. If you have a queue, everything becomes simpler:
Each modification operation should push a job to the queue.
Only the async worker should execute SQL and update the key.
You still need the "state" (valid/invalid) on the cache key to separate the application logic from the cache.
Is this the answer?
Actually yes and no at the same time. This is one possible solution. Cache invalidation is a much more complex problem with many possible solutions; some of them are simple, others complex. In most cases it depends on the real business requirements of the concrete application.