Why does the Redis server clear memory as I push data?

As I climb the Redis learning curve, I developed a simple application that consumes market data for government bonds; for each price, a routine asks a web service for the bond's analytics at that price.
Analytics are provided by an API web service that may be hit several times as prices arrive every second. The response is a JSON payload like this one: {"md":2.9070078608390455,"paridad":0.7710514176999993,"price":186.0,"ticker":"GO26","tir":0.10945225427543438,"vt":241.22904871224668, "price":185}
My strategy with Redis is to cache that payload as a string under a key formed by ticker + price (e.g. "GO26185"). That way I reduce service hits and also improve query response time. So the flow is: if a value is already in Redis I use it; if not, I ask the API and cache the result.
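Roughly, the lookup looks like this (a simplified sketch using the same node-redis client as below; fetchAnalytics stands in for the actual web-service call):

async function getAnalytics(ticker, price) {
  const rediskey = ticker + price;                       // e.g. "GO26185"
  const cached = await client.get(rediskey);
  if (cached) return JSON.parse(cached);                 // cache hit: no web-service call

  const response = await fetchAnalytics(ticker, price);  // cache miss: hit the web service
  await client.set(rediskey, JSON.stringify(response.data), { EX: 86399 });
  return response.data;
}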
The problem I have is that while this routine runs, as I push different key/value pairs into Redis, the ones I already have in memory disappear.
i.e. DBSIZE increases as soon as I push information, but drops when there are no new values.
This happens even though I set the expiration to one day (in seconds):
await client.set(
  rediskey,
  JSON.stringify(response.data),
  { EX: 86399 }
);
Is there any configuration I might be missing to tell Redis to persist that data and avoid clearing the cache seemingly at random?
Just to clarify, a glance at how SET keys disappear while new ones are being registered:
127.0.0.1:6379> dbsize
(integer) 946
127.0.0.1:6379> dbsize
(integer) 1046
127.0.0.1:6379> dbsize
(integer) 1048
127.0.0.1:6379> dbsize
(integer) 1048
127.0.0.1:6379> dbsize
(integer) 0   << here all my keys have disappeared

I am answering my own question. The problem was that I hadn't blocked the Redis port, and an attacker was connecting to my Redis server and resetting it, apparently by abusing the replication mechanism.
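For anyone hitting the same thing, the usual hardening directives in redis.conf look like this (standard options, not specific to my setup; use your own password and interface):

# redis.conf
bind 127.0.0.1           # listen only on localhost (or a private interface)
protected-mode yes       # refuse outside connections when no password is set
requirepass <strong-password-here>   # require AUTH from every client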


Can I rotate cache expiration in a Redis cluster

I have a redis cluster with several replica nodes, holding a cache of a time-consuming complex db query. The cache expires every minute, and lately with enough traffic volume I've had client timeouts while the cache is rebuilding and waiting for that complex db query to complete.
What I'd like to do is set it up so one node expires every even minute while the other expires every odd minute; this way, if one node is rebuilding the cache, the other node can serve it. Does Redis have such a feature, or is there a recommended workaround for a scenario like this? I couldn't find any docs on this. Thank you!
In a Redis cluster, the primary will expire the key and instruct its replicas to expire it too by propagating a DEL command to them over the replication link.
If you want the value to always be available to your clients, then you need a process that refreshes the key at the cadence you want, and then treat the expiration/cache-miss scenario as a fallback in case the refresh process fails.
If you really want to use two keys and have them expire every other minute, you can use EXPIREAT or its precise version PEXPIREAT. But this sounds unnecessary.
You can use TTL (or PTTL) to consult on the time left for a key.
If your clients access the cache key in bursts and you just want to avoid some of them getting a cache miss every minute and therefore timing out, you can fetch the value together with its TTL; if the TTL is below a reasonable threshold, respond to your client immediately and then trigger the query to refresh the key.
You can use a simple Lua script to query both the value and the TTL of the key in one request to the Redis server. You could also do the same with pipelining or transactions; I just like to promote Lua scripting because it is a more powerful tool.
local val = {}
val[1] = redis.call('GET', KEYS[1])
if val[1] then
  val[2] = redis.call('PTTL', KEYS[1])
  return val
else
  return false
end
You invoke it like this:
EVAL "local val = {} val[1] = redis.call('GET', KEYS[1]) if val[1] then val[2] = redis.call('PTTL', KEYS[1]) return val else return false end" 1 data
1) "queryResult"
2) (integer) 1664
You get both the value and the TTL, and you can trigger a proactive refresh if the cache key is close to expiring.
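For reference, a minimal node-redis (v4) sketch of the same pattern; the 5-second threshold and rebuildCache are assumptions for the example:

// Read the value and its TTL in one round trip, then refresh proactively
// if the key is about to expire. rebuildCache is a hypothetical helper.
import { createClient } from 'redis';

const client = createClient();
await client.connect();

const SCRIPT = `
local val = {}
val[1] = redis.call('GET', KEYS[1])
if val[1] then
  val[2] = redis.call('PTTL', KEYS[1])
  return val
else
  return false
end`;

async function getWithProactiveRefresh(key) {
  const reply = await client.eval(SCRIPT, { keys: [key] });
  if (!reply) return null;                 // cache miss: caller rebuilds synchronously
  const [value, pttl] = reply;
  if (pttl < 5000) rebuildCache(key);      // fire-and-forget refresh before expiry
  return value;
}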

Experimenting with key volatility in redis

I need to clarify a concept about the Redis EXPIRE operation.
Imagine I write the following code:
HMSET myself name "Sam" age "21"
EXPIRE myself 60
This sets the hash myself = {'name': 'Sam', 'age': '21'} (using a Python dictionary to illustrate the concept). Moreover, it sets myself to expire after 60 seconds.
What happens to the EXPIRE setting if I perform a couple of operations on myself? E.g.:
HINCRBY myself age 1
HSET myself gender f
Will EXPIRE remain intact, or will it be removed? And taking it a step further, do we Redis coders have any control over whether EXPIRE stays or not in such cases?
Expire will remain, and the TTL will continue to decrease.
From the Redis docs:
altering the field value of a hash with HSET (...) will leave the timeout untouched
As Maurice Meyer said above, you can use TTL myself to get the remaining time to live of the key myself, and use that for your experiments.
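A small node-redis (v4) sketch of the experiment, showing that the TTL set by EXPIRE survives later HSET / HINCRBY calls on the same key:

import { createClient } from 'redis';

const client = createClient();
await client.connect();

await client.hSet('myself', { name: 'Sam', age: '21' });
await client.expire('myself', 60);           // key expires in 60 seconds

await client.hIncrBy('myself', 'age', 1);    // update an existing field
await client.hSet('myself', 'gender', 'f');  // add a new field

console.log(await client.ttl('myself'));     // still counting down from 60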

How to automatically remove an expired key from a set?

127.0.0.1:6379> keys *
1) "trending_showrooms"
2) "trending_hashtags"
3) "trending_mints"
127.0.0.1:6379> sort trending_mints by *->id DESC LIMIT 0 12
1) "mint_14216"
2) "mint_14159"
3) "mint_14158"
4) "mint_14153"
5) "mint_14151"
6) "mint_14146"
The keys have expired, but their names are still inside the set. I need Redis to remove the expired keys from the set automatically.
You can't set a TTL on individual members within the SET.
This blog post dives a bit deeper on the issue and provides a few workarounds.
https://quickleft.com/blog/how-to-create-and-expire-list-items-in-redis/
Hope that helps.
Please read this page entirely: https://redis.io/topics/notifications
Summing up, you must have a sentinel program listening to Pub/Sub messages, and you must alter the redis.conf file to enable keyevent expired notifications:
in redis.conf:
notify-keyspace-events Ex
In order to enable the feature a non-empty string is used, composed of
multiple characters, where every character has a special meaning
according to the following table
E Keyevent events, published with __keyevent@<db>__ prefix.
x Expired events (events generated every time a key expires)
Then the sentinel program must listen to the channel __keyevent@0__:expired, if your database is 0. Change the database number if you're using any other than zero.
Then, when you receive an expiring key's name on that channel, you simply issue SREM trending_mints <key> to remove it from the set.
IMPORTANT
The expired events are generated when a key is accessed and is found
to be expired by one of the above systems, as a result there are no
guarantees that the Redis server will be able to generate the expired
event at the time the key time to live reaches the value of zero.
If no command targets the key constantly, and there are many keys with
a TTL associated, there can be a significant delay between the time
the key time to live drops to zero, and the time the expired event is
generated.
Basically expired events are generated when the Redis server deletes
the key and not when the time to live theoretically reaches the value
of zero.
So keys will be deleted due to expiration, but the notification is not guaranteed to occur at the moment the TTL reaches zero.
ALSO, if your sentinel program misses the PUB/SUB message, well... that's it, you won't be notified another time! (this is also on the link above)
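A sketch of that sentinel with node-redis (v4); the channel assumes database 0, and trending_mints is the set from the question:

import { createClient } from 'redis';

const client = createClient();
await client.connect();

// Enable keyevent notifications for expired keys
// (equivalent to notify-keyspace-events Ex in redis.conf).
await client.configSet('notify-keyspace-events', 'Ex');

// Pub/Sub needs its own connection in node-redis v4.
const subscriber = client.duplicate();
await subscriber.connect();

await subscriber.subscribe('__keyevent@0__:expired', async (expiredKey) => {
  // Prune the expired key's name from the set.
  await client.sRem('trending_mints', expiredKey);
});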

Set a value to a key with ttl

Is it possible to SETNX a key with a value and a TTL in a single command in Redis?
I am trying to implement locking in Redis, and http://redis.io/commands/hsetnx seems like the best way to do that. It is atomic and returns 0 if the field is already present. Is it possible to HSETNX with a TTL?
e.g.
HSETNX myhash mykey "myvalue" 10
#and key expires after 10 seconds, and a subsequent HSETNX after 10 seconds returns a value 1 i.e. it behaves as if mykey is not present in myhash
The main problem is that Redis has no support for field expiration in hashes.
You can only expire the entire hash by calling EXPIRE on myhash.
So you should consider using ordinary Redis strings instead of hashes, because they support the SETEX operation.
It'll work fine unless you want to take advantage of HGETALL, HKEYS or HVALS on your hash myhash:
SETEX mynamespace:mykey 10 "myvalue"
mynamespace is not a hash here, it is simply a prefix, but in most cases it works just as a hash does. The only difference is that there is no efficient way to tell which keys are stored in a given namespace or to get them all with a single command.
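A sketch of that string-based approach with node-redis (v4); plain SET accepts NX and EX together, so the "set only if absent, with a TTL" step is a single atomic command:

import { createClient } from 'redis';

const client = createClient();
await client.connect();

// Returns 'OK' if the key was set (lock acquired), null if it already exists.
const acquired = await client.set('mynamespace:mykey', 'myvalue', {
  NX: true, // only set if the key does not exist (SETNX behaviour)
  EX: 10,   // expire after 10 seconds
});

if (acquired) {
  // we hold the lock for up to 10 seconds
}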

Autoincrement in Redis

I'm starting to use Redis, and I've run into the following problem.
I have a bunch of objects, let's say Messages in my system. Each time a new User connects, I do the following:
INCR some global variable, let's say g_message_id, and save INCR's return value (the current value of g_message_id).
LPUSH the new message (including the id and the actual message) into a list.
Other clients use the value of g_message_id to check if there are any new messages to get.
Problem is, one client could INCR the g_message_id, but not have time to LPUSH the message before another client tries to read it, assuming that there is a new message.
In other words, I'm looking for a way to do the equivalent of adding rows in SQL, and having an auto-incremented index to work with.
Notes:
I can't use the list indexes, since I often have to delete parts of the list, making it invalid.
My situation in reality is a bit more complex, this is a simpler version.
Current solution:
The best solution I've come up with and what I plan to do is use WATCH and Transactions to try and perform an "autoincrement" myself.
But this is such a common use case in Redis that I'm surprised there is no existing answer for it, so I'm worried I'm doing something wrong.
If I'm reading correctly, you are using g_message_id both as an id sequence and as a flag to indicate new message(s) are available. One option is to split this into two variables: one to assign message identifiers and the other as a flag to signal to clients that a new message is available.
Clients can then compare the current / prior value of g_new_message_flag to know when new messages are available:
> INCR g_message_id
(integer) 123
# construct the message with id=123 in code
> MULTI
OK
> INCR g_new_message_flag
QUEUED
> LPUSH g_msg_queue "{\"id\": 123, \"msg\": \"hey\"}"
QUEUED
> EXEC
Possible alternative, if your clients can support it: you might want to look into the Redis publish/subscribe commands, e.g. clients could publish notifications of new messages and subscribe to one or more message channels to receive them. You could keep g_msg_queue to maintain a backlog of N messages for new clients, if necessary.
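A sketch of that publish/subscribe alternative with node-redis (v4); the channel name new-messages is made up for the example:

import { createClient } from 'redis';

const client = createClient();
await client.connect();

// Writer: push the message onto the backlog, then notify subscribers.
async function publishMessage(id, msg) {
  const payload = JSON.stringify({ id, msg });
  await client.lPush('g_msg_queue', payload);    // keep the backlog
  await client.publish('new-messages', payload); // wake up listeners
}

// Reader: subscribing requires a dedicated connection in node-redis v4.
const subscriber = client.duplicate();
await subscriber.connect();
await subscriber.subscribe('new-messages', (payload) => {
  console.log('new message', JSON.parse(payload));
});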
Update based on comment: If you want each client to detect that there are available messages, pop all that are available, and zero out the list, one option is to use a transaction to read the list:
# assuming the message queue contains "123", "456", "789"..
# a client detects there are new messages, then runs this:
> WATCH g_msg_queue
OK
> MULTI
OK
> LRANGE g_msg_queue 0 100000
QUEUED
> DEL g_msg_queue
QUEUED
> EXEC
1) 1) "789"
   2) "456"
   3) "123"
2) (integer) 1
Update 2: Given the new information, here's what I would do (a reader-side code sketch follows these steps):
Have your writer clients use RPUSH to append new messages to the list. This lets the reader clients start at 0 and iterate forward over the list to get new messages.
Readers need to only remember the index of the last message they fetched from the list.
Readers watch g_new_message_flag to know when to fetch from the list.
Each reader client will then use "LRANGE list start stop" to fetch the new messages. Suppose a reader client has already seen a total of 5 messages; it would run "LRANGE g_msg_queue 5 14" to get the next 10. Suppose 3 are returned, so it remembers the index 8. You can make the range as large as you want, and can walk through the list in small batches.
The reaper client should set a WATCH on the list and delete it inside a transaction, aborting if any client is concurrently reading from it.
When a reader client tries LRANGE and gets 0 messages it can assume the list has been truncated and reset its index to 0.
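A sketch of that reader loop with node-redis (v4); the key name and batch size come from the example above, and lastIndex is per-reader state:

import { createClient } from 'redis';

const client = createClient();
await client.connect();

let lastIndex = 0; // how many messages this reader has already consumed

async function fetchNewMessages() {
  // Grab up to 10 messages starting at lastIndex (LRANGE's stop index is inclusive).
  const batch = await client.lRange('g_msg_queue', lastIndex, lastIndex + 9);
  if (batch.length === 0) {
    // Nothing returned: either no new messages, or the reaper truncated the list.
    const len = await client.lLen('g_msg_queue');
    if (len === 0) lastIndex = 0; // list was reset, start over
    return [];
  }
  lastIndex += batch.length;
  return batch;
}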
Do you really need unique sequential IDs? You can use UUIDs for uniqueness and timestamps to check for new messages. If you keep the clocks on all your servers properly synchronized then timestamps with a one second resolution should work just fine.
If you really do need unique sequential IDs then you'll probably have to set up a Flickr style ticket server to properly manage the central list of IDs. This would, essentially, move your g_message_id into a database with proper transaction handling.
You can simulate auto-incrementing a unique key for new rows: use DBSIZE to get the current number of rows, then in your code increment that number by 1 and use it as the key for the new row. It's simple, but note that the DBSIZE read and the subsequent write are not atomic, and DBSIZE also changes when keys are deleted or expire, so under concurrent writers this can produce duplicate keys.