Can I rotate cache expiration in a Redis cluster?

I have a redis cluster with several replica nodes, holding a cache of a time-consuming complex db query. The cache expires every minute, and lately, with enough traffic volume, I've been getting client timeouts while the cache is rebuilding and waiting for that complex db query to complete.
What I'd like to do is set it up so one node expires every even minute while the other expires every odd minute; that way, if one node is rebuilding the cache, the other node can still serve it. Does Redis have such a feature, or is there a recommended workaround for a scenario like this? I couldn't find any docs on this. Thank you!

In a Redis cluster, the primary expires the key and instructs its replicas to expire it too by propagating a DEL command to them over the replication link.
If you want the value to always be available to your clients, then you need a process that refreshes the key at the cadence you want, and you use the expiration/cache-miss scenario only as a fallback in case the refresh process fails.
If you really want to use two keys and have them expire every other minute, you can use EXPIREAT or its precise version PEXPIREAT. But this sounds unnecessary.
You can use TTL (or PTTL) to consult on the time left for a key.
If your clients access the cache key in bursts and you just want to avoid some of them getting a cache miss every minute and timing out, you can fetch both the value and the TTL; if the TTL is lower than some reasonable threshold, respond to your client immediately and then trigger the query that refreshes the cache.
You can use a simple Lua script to fetch both the value and the TTL of the key with one request to the Redis server. You can also do the same with pipelining or transactions; I just like to promote Lua scripting, as it is a more powerful tool.
local val = {}
val[1] = redis.call('GET', KEYS[1])
if val[1] then
  val[2] = redis.call('PTTL', KEYS[1])
  return val
else
  return false
end
You use it as:
EVAL "local val = {} val[1] = redis.call('GET', KEYS[1]) if val[1] then val[2] = redis.call('PTTL', KEYS[1]) return val else return false end" 1 data
1) "queryResult"
2) (integer) 1664
You get both the value and the TTL and then you can trigger a proactive refresh if your cache key is close to expire.
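As a rough client-side sketch of that idea, assuming node-redis v4 and reusing the script and the "data" key from the example above (the 10-second threshold and the rebuildCache() helper are made up for illustration):
import { createClient } from 'redis';

const client = createClient();
await client.connect();

const REFRESH_THRESHOLD_MS = 10_000; // refresh when fewer than 10 seconds of TTL remain

// Hypothetical helper that re-runs the slow query and SETs the key with a fresh TTL.
declare function rebuildCache(): Promise<void>;

async function getWithProactiveRefresh(): Promise<string | null> {
  // The GET+PTTL script from above, in one round trip.
  const reply = await client.eval(
    "local val = {} val[1] = redis.call('GET', KEYS[1]) if val[1] then val[2] = redis.call('PTTL', KEYS[1]) return val else return false end",
    { keys: ['data'] }
  );
  if (reply === null) return null; // cache miss: fall back to the slow query

  const [value, pttl] = reply as [string, number];
  if (pttl < REFRESH_THRESHOLD_MS) {
    // Answer the caller immediately and rebuild the cache in the background.
    rebuildCache().catch(console.error);
  }
  return value;
}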

Related

Why does Redis server clear memory as I push data?

As I am climbing the Redis learning curve, I developed a simple application that consumes market data for government bonds, and for each price it routinely asks a web service for bond analytics at that price.
Analytics are provided by an API web service that might be hit several times as prices arrive every second. The response is a JSON payload like this one: {"md":2.9070078608390455,"paridad":0.7710514176999993,"price":186.0,"ticker":"GO26","tir":0.10945225427543438,"vt":241.22904871224668, "price":185}
My strategy with Redis is to cache that payload in string format under a key formed by ticker + price (i.e. "GO26185"). That way I reduce service hits and also response time. So from here, if a value is not in Redis, I ask the API; otherwise, I read it from Redis.
The problem I have is that while running this routine, as I push different KEY/VALUE pairs into Redis, the ones I already have in memory disappear.
I.e., dbsize increases as soon as I push information, but decreases when there are no new values.
Although I set expiration to one day (in seconds):
await client.set(
  rediskey,
  JSON.stringify(response.data),
  {
    EX: 86399,
  }
);
Is there any configuration I might be missing to tell Redis to persist that data and avoid clearing the cache randomly?
Just to clarify, a glance at how SET keys disappear while I register new ones:
127.0.0.1:6379> dbsize
(integer) 946
127.0.0.1:6379> dbsize
(integer) 1046
127.0.0.1:6379> dbsize
(integer) 1048
127.0.0.1:6379> dbsize
(integer) 1048
127.0.0.1:6379> dbsize
(integer) 0 << Here all my keys have disappeared
I am answering my own question. The problem was that I hadn't blocked the Redis port, and an attacker was connecting to my Redis server, causing it to reset. It seems the attack was coming through the replication nodes.
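For anyone hitting the same symptom: the usual fix is to stop exposing Redis to the internet without authentication. A minimal redis.conf hardening sketch (the directives are standard; the values are only illustrative):
# listen only on localhost or a private interface
bind 127.0.0.1
# refuse outside connections when no password is configured
protected-mode yes
# require AUTH before accepting commands
requirepass use-a-long-random-password-here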

Redis - any way to trigger an event when a value is no longer being actively written to?

I have a use case where I'm streaming and processing live data into an Elasticache Redis cluster. In essence, I want to kick off an event when all events of a certain type have completed (i.e. the size of a value is no longer growing over the course of 60 seconds).
For example:
foo [event1]
foo [event1, event2]
foo [event1, event2]
foo [event1, event2] -> triggers some event if this key/value is constant for 60 seconds.
Is this at all possible?
I would suggest that, as part of all "changing" commands, you also set a marker key with a 60-second TTL. You can then subscribe to the expiration of that key using Redis keyspace notifications.
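A minimal sketch of that approach with node-redis v4 (the key names "foo" and "foo:active", the list data type, and database 0 are assumptions for the example; notify-keyspace-events must include at least "Ex"):
import { createClient } from 'redis';

const client = createClient();
const subscriber = client.duplicate();
await client.connect();
await subscriber.connect();

// Same effect as "notify-keyspace-events Ex" in redis.conf.
await client.configSet('notify-keyspace-events', 'Ex');

// On every write to the data key, also refresh the 60-second marker key.
async function recordEvent(event: string) {
  await client.rPush('foo', event);
  await client.set('foo:active', '1', { EX: 60 });
}

// When the marker expires, nothing has written to "foo" for 60 seconds.
await subscriber.subscribe('__keyevent@0__:expired', (expiredKey) => {
  if (expiredKey === 'foo:active') {
    console.log('foo has been quiet for 60 seconds, firing the completion event');
  }
});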

How to have one Redis client wait for all the other Redis clients to respond?

I have one Redis server, and multiple Redis clients. Each Redis client is a WebSocket+HTTP server that, amongst others, manages WebSocket connections. These WebSocket+HTTP servers are hidden behind a load balancer.
The WebSocket+HTTP servers provide a GET /health HTTP endpoint. I would like this endpoint to provide the total number of current WebSocket connections, across the whole cluster.
When one hits GET /health, then obviously, the load balancer will dispatch the request to only one WebSocket+HTTP server instance.
How can I make one WebSocket+HTTP server instance ask for all the other instances how many WebSocket connections they currently manage?
I thought of the following steps:
The instance uses CLIENT LIST to know how many Redis clients there currently are (say n);
The instance then publishes WEBSOCKET_CONNECTION_COUNT_REQUEST to Redis (with the assumption that all Redis clients are subscribed to this event);
The instance finally waits for n WEBSOCKET_CONNECTION_COUNT_RESPONSEs, sums up the counts, and returns it over HTTP.
What do you think about the above approach? Isn't it a bit too convoluted? I have the feeling I'm maybe a bit overengineering...
I initially thought that instances could INCR/DECR a count inside the Redis storage, but I'm not sure how to handle instances being killed (as the count should then be decremented accordingly). I think an ad-hoc solution would be preferable. Still open to ideas though.
I'd use a sorted set, where the members are WS server ids and the score is the timestamp of their last "ping".
Have each WS "ping" periodically (e.g. every 10 seconds) by updating that sorted set with its id. You can use a Lua script to get the time from the server and set the member's score to make everything nice and atomic:
redis.replicate_commands()
local t = redis.call('TIME')
return redis.call('ZADD', KEYS[1], tonumber(t[1]), ARGV[1])
So if your sorted set is called "wsservers" and the WS's id is foo, you can call the script after loading it with EVALSHA <script-sha1> 1 wsservers foo.
To return the count, all you need to do is a range query by score on the sorted set over the last period (i.e. the last 11 seconds) and count the results. You can also use this opportunity to trim old, dead servers. Of course, a Lua script is my preferred approach, and this one does both tasks without having to send the raw member list down the line to the calling client:
redis.replicate_commands()
local t = redis.call('TIME')
local cutoff = tonumber(t[1]) - 11
local live = redis.call('ZRANGEBYSCORE', KEYS[1], cutoff, '+inf')
redis.call('ZREMRANGEBYSCORE', KEYS[1], '-inf', cutoff)
return #live
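For completeness, a sketch of what the calling side of GET /health might look like with node-redis v4, reusing the counting script above (the sorted set name "wsservers" comes from the answer; the rest is illustrative):
import { createClient } from 'redis';

const client = createClient();
await client.connect();

// The same counting/trimming script as above.
const countLiveScript = `
redis.replicate_commands()
local t = redis.call('TIME')
local cutoff = tonumber(t[1]) - 11
local live = redis.call('ZRANGEBYSCORE', KEYS[1], cutoff, '+inf')
redis.call('ZREMRANGEBYSCORE', KEYS[1], '-inf', cutoff)
return #live
`;

// Inside the GET /health handler: one round trip returns the number of live WS servers.
const liveServers = await client.eval(countLiveScript, { keys: ['wsservers'] });
console.log(`live WS servers: ${liveServers}`);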

Alternatives to slow DEL large key

There is async UNLINK in the upcoming Redis 4, but until then, what are some good alternatives to implementing DELete of large set keys with no or minimal blocking?
Is RENAME to some unique name followed by EXPIRE 1 second a good solution? RENAME first so that the original key name becomes available for use. Freeing the memory right away is not of immediate concern, Redis can do async garbage collection when it can.
EXPIRE will not eliminate the delay, only delay it until the server actually expires the value (note that Redis uses an approximate expiration algorithm). Once the server gets to actually expiring the value, it will issue a DEL command that will block the server until the value is deleted.
If you are unable to use v4's UNLINK, the best way you could go about deleting a large set is by draining it incrementally. This can be easily accomplished with a server-side Lua script to reduce the bandwidth, such as this one:
local target = KEYS[1]
local count = tonumber(ARGV[1]) or 100
local reply = redis.call('SPOP', target, count)
if #reply > 0 then
  return #reply
else
  return nil
end
To drain, repeatedly call the script above with the name of the key to be deleted, with or without a count argument, until you get a nil Redis reply.
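A sketch of that drain loop with node-redis v4 (the key name "bigset" and the batch size of 500 are placeholders):
import { createClient } from 'redis';

const client = createClient();
await client.connect();

// The same SPOP-based script as above.
const drainScript = `
local count = tonumber(ARGV[1]) or 100
local reply = redis.call('SPOP', KEYS[1], count)
if #reply > 0 then
  return #reply
else
  return nil
end
`;

// Pop batches of 500 members until the script returns nil (null on the client side).
let removed: unknown;
do {
  removed = await client.eval(drainScript, { keys: ['bigset'], arguments: ['500'] });
} while (removed !== null);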

How to automatically remove an expired key from a set?

127.0.0.1:6379> keys *
1) "trending_showrooms"
2) "trending_hashtags"
3) "trending_mints"
127.0.0.1:6379> sort trending_mints by *->id DESC LIMIT 0 12
1) "mint_14216"
2) "mint_14159"
3) "mint_14158"
4) "mint_14153"
5) "mint_14151"
6) "mint_14146"
The keys have expired, but they are still inside the set. I need to remove the expired keys from the set automatically in Redis.
You can't set a TTL on individual members within the SET.
This blog post dives a bit deeper into the issue and provides a few workarounds.
https://quickleft.com/blog/how-to-create-and-expire-list-items-in-redis/
Hope that helps.
Please read this page entirely: https://redis.io/topics/notifications
Summing up, you must have a sentinel program listening to PUB/SUB messages, and you must alter the redis.conf file to enable keyevent expire notifications:
in redis.conf:
notify-keyspace-events Ex
In order to enable the feature a non-empty string is used, composed of multiple characters, where every character has a special meaning according to the following table:
E     Keyevent events, published with __keyevent@<db>__ prefix.
x     Expired events (events generated every time a key expires)
Then the sentinel program must listen to the channel __keyevent@0__:expired if your database is 0; change the database number if you are using any other than zero.
Then, when you subscribe to the channel and receive the name of the key that expired, you simply issue SREM trending_mints <key> to remove it from the set.
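A minimal sketch of such a listener with node-redis v4, assuming database 0 and that the members of trending_mints are exactly the names of the keys that expire:
import { createClient } from 'redis';

const client = createClient();
const subscriber = client.duplicate();
await client.connect();
await subscriber.connect();

// Requires notify-keyspace-events to include "Ex" (see the redis.conf change above).
await subscriber.subscribe('__keyevent@0__:expired', async (expiredKey) => {
  // Drop the expired key's name from the set.
  await client.sRem('trending_mints', expiredKey);
});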
IMPORTANT
The expired events are generated when a key is accessed and is found to be expired by one of the above systems, as a result there are no guarantees that the Redis server will be able to generate the expired event at the time the key time to live reaches the value of zero.
If no command targets the key constantly, and there are many keys with a TTL associated, there can be a significant delay between the time the key time to live drops to zero, and the time the expired event is generated.
Basically expired events are generated when the Redis server deletes the key and not when the time to live theoretically reaches the value of zero.
So keys will be deleted due to expiration, but the notification is not guaranteed to arrive at the moment the TTL reaches zero.
ALSO, if your sentinel program misses the PUB/SUB message, well... that's it, you won't be notified another time! (This is also covered in the link above.)