I'm writing a program that converts numbers into serial numbers according to some rules and checks whether a serial number has already been used. I use Redis for the check.
First, GET num1 from the slave. When the result is not nil, the serial number is already used, so return 'used'.
Second, if the result is nil, SET num1 on the master and return 'new' (once returned, the number counts as 'used').
The problem is that the master may crash before it finishes syncing with the slave, so the number may never reach the slave. In that case a GET of num1 from the slave returns nil, so the program answers 'new', but num1 is already used.
How can I ensure data consistency between master and slave in Redis?
Read about the WAIT command - it lets you block until a specified number of slaves have acknowledged the most recent write before you take further action.
Even so, Redis uses asynchronous replication and it is not possible to guarantee that a slave actually received a given write. There will always be a window for data loss.
Replication Docs
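As a sketch with redis-py (the 1-replica requirement and the 100 ms timeout here are arbitrary choices, and num1 is the key from the question):

import redis

r = redis.Redis()  # connection to the master

# Mark the serial number as used, then block until at least 1 replica
# has acknowledged the write, or 100 ms have passed.
r.set("num1", "used")
acked = r.wait(1, 100)
if acked < 1:
    # The replica may not have the write yet; treat the state as
    # uncertain rather than trusting a subsequent read from the slave.
    pass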
I want to cache time series data stored in MySQL.
For the cache strategy I use the following logic: split all data into one-hour blocks; when a user requests data, first try to range over it in Redis, then for every block missing from the result load that block from the MySQL DB and push it into the Redis time series; if no data exists for a block, push a single empty value at the end of the block.
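For concreteness, a rough sketch of that read path (assuming redis-py's RedisTimeSeries client and a pre-created series; load_block_from_mysql is a hypothetical helper):

import redis

r = redis.Redis()
BLOCK_MS = 3600 * 1000  # one-hour blocks, in milliseconds

def get_range(key, start_ms, end_ms):
    points = r.ts().range(key, start_ms, end_ms)
    cached = {ts - ts % BLOCK_MS for ts, _ in points}  # blocks with at least one value
    block = start_ms - start_ms % BLOCK_MS
    while block <= end_ms:
        if block not in cached:
            rows = load_block_from_mysql(key, block, block + BLOCK_MS)  # hypothetical
            for ts, value in rows:
                r.ts().add(key, ts, value)
            if not rows:
                # empty marker at the end of the block so it counts as cached
                r.ts().add(key, block + BLOCK_MS - 1, 0.0)
        block += BLOCK_MS
    return r.ts().range(key, start_ms, end_ms)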
Problem: my algorithm assumes that if at least one value of a block is returned, then the whole block was cached. This assumption breaks when Redis reaches its memory limit and evicts data.
Question: can I specify an eviction strategy that guarantees that if one value of a block is evicted, then all data from that block is evicted?
I didn't find any solution.
Application 1 sets a value in Redis.
We have two instances of application 2 running, and we would like only one instance to read this value from Redis (please note application 2 takes around 30 seconds to 1 minute to process the data).
Can instance 1 of application 2 acquire a lock on the Redis key created by application 1, so that instance 2 of application 2 will not read it and perform the same operation?
No, there's no concept of a record lock in Redis. If you need to achieve some sort of locking, you have to use other data structures to mimic that behavior. For example:
List: you can use a list and have each consumer POP items from it, or...
Redis Stream: use a Redis Stream with a consumer group, so that each consumer in your group only sees a portion of the whole data that needs to be processed, with the guarantee that an item delivered to one consumer is not going to be delivered to another one.
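A minimal sketch of the list variant in Python (the queue key jobs and the handler process are assumed names):

import redis

r = redis.Redis()

# Application 1 enqueues the value instead of just SETting it:
r.lpush("jobs", "value-to-process")

# Each instance of application 2 blocks here; BRPOP pops atomically,
# so any given item is handed to exactly one instance.
_, item = r.brpop("jobs")
process(item)  # hypothetical handler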
I have one Redis server and multiple Redis clients. Each Redis client is a WebSocket+HTTP server that, among other things, manages WebSocket connections. These WebSocket+HTTP servers are hidden behind a load balancer.
The WebSocket+HTTP servers provide a GET /health HTTP endpoint. I would like this endpoint to provide the total number of current WebSocket connections, across the whole cluster.
When one hits GET /health, then obviously, the load balancer will dispatch the request to only one WebSocket+HTTP server instance.
How can I make one WebSocket+HTTP server instance ask all the other instances how many WebSocket connections they currently manage?
I thought of the following steps:
The instance uses CLIENT LIST to know how many Redis clients there currently are (say n);
The instance then publishes WEBSOCKET_CONNECTION_COUNT_REQUEST to Redis (with the assumption that all Redis clients are subscribed to this event);
The instance finally waits for n WEBSOCKET_CONNECTION_COUNT_RESPONSEs, sums up the counts, and returns it over HTTP.
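A sketch of those steps with redis-py (note that CLIENT LIST counts every connection, including this instance's own pub/sub connection, so n is only approximate):

import time
import redis

r = redis.Redis()

def total_ws_connections(timeout=1.0):
    n = len(r.client_list())  # rough upper bound on responders
    pubsub = r.pubsub()
    pubsub.subscribe("WEBSOCKET_CONNECTION_COUNT_RESPONSE")
    r.publish("WEBSOCKET_CONNECTION_COUNT_REQUEST", "")
    total, received, deadline = 0, 0, time.time() + timeout
    while received < n and time.time() < deadline:
        msg = pubsub.get_message(timeout=0.1)
        if msg and msg["type"] == "message":
            total += int(msg["data"])
            received += 1
    pubsub.close()
    return total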
What do you think about the above approach? Isn't it a bit convoluted? I have the feeling I might be overengineering this...
I initially thought that instances could INCR/DECR a count inside the Redis storage, but I'm not sure how to handle instances being killed (as the count should then be decremented accordingly). I think an ad-hoc solution would be preferable. Still open to ideas though.
I'd use a sorted set, where the members are WS server ids and the score is the timestamp of their last "ping".
Have each WS "ping" periodically (e.g. every 10 seconds) by updating that sorted set with its id. You can use a Lua script to get the time from the server and set the member's score to make everything nice and atomic:
redis.replicate_commands()  -- needed before calling the non-deterministic TIME
local t = redis.call('TIME')  -- t[1] = seconds, t[2] = microseconds (Lua tables are 1-indexed)
return redis.call('ZADD', KEYS[1], tonumber(t[1]), ARGV[1])
So if your sorted set is called "wsservers" and the WS's id is foo, you can call the script after loading it with EVALSHA <script-sha1> 1 wsservers foo.
To return the count, all you need to do is a range by score on the sorted set over the last period (i.e. 11 seconds) and count the results. You can also use this opportunity to trim old dead servers. Of course, a Lua script is my preferred approach, and this one does both tasks w/o having to actually send the raw WS members down the line to the calling client:
redis.replicate_commands()  -- needed here too: the script writes after calling TIME
local t = redis.call('TIME')
-- members that pinged within the window are considered live
local live = redis.call('ZRANGEBYSCORE', KEYS[1], tonumber(t[1]) - 11, '+inf')
-- trim servers whose last ping fell out of the window (presumed dead)
redis.call('ZREMRANGEBYSCORE', KEYS[1], '-inf', tonumber(t[1]) - 11)
return #live
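Wiring this up from Python might look like the following sketch (PING_LUA and COUNT_LUA stand for the two scripts above; register_script handles the EVALSHA loading for you):

import redis

r = redis.Redis()

ping = r.register_script(PING_LUA)    # first script: record a server's ping
count = r.register_script(COUNT_LUA)  # second script: count live servers and trim

# on each WS server, roughly every 10 seconds:
ping(keys=["wsservers"], args=["foo"])

# in the GET /health handler:
live_servers = count(keys=["wsservers"])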
I have multiple writers overwriting the same key in Redis. How do I guarantee that only the chosen one writes last?
Can I perform write synchronisation in Redis without synchronising the writers first?
Background:
In my system a single dispatcher sends work to various workers. Each worker then writes its result to Redis, overwriting the same key. I need to be sure that only the last worker that received work from the dispatcher ends up writing to Redis.
Use an ordered set (ZSET): add your entry with a score equal to the unix timestamp, then delete all but the top rank.
A Redis Ordered set is a set, where each entry also has a score. The set is ordered according to the score, and the position of an element in the ordered set is called Rank.
In order:
Remove all the entries with a score equal to or less than the one you are adding (ZREMRANGEBYSCORE). Since members of a set are unique, if your value is already present the ZADD would merely update its score; clearing lower-scored entries first keeps the logic consistent, so the entry that survives is the one with the highest rank.
Add your value to the zset (ZADD).
Delete by rank all the entries except the one with the HIGHEST rank (ZREMRANGEBYRANK).
You should do all of this inside a transaction (pipeline).
Example in Python (redis-py 3.x, where zadd takes a mapping):
# timestamp: the time when the dispatcher sent the message to this worker
key = "key_zset:%s" % id
pipeline = self._redis_connection.db.pipeline(transaction=True)
# drop every entry at or below this timestamp, so a duplicate value
# cannot survive with a stale score
pipeline.zremrangebyscore(key, 0, timestamp)
pipeline.zadd(key, {"value": timestamp})
pipeline.zremrangebyrank(key, 0, -2)  # keep only the highest-ranked entry
pipeline.execute(raise_on_error=True)
If I were you, I would use redlock.
Before you write to that key, you acquire the lock for it, then update it and then release the lock.
I use Node.js, so it would look something like this; not actually correct code, but you get the idea.
Promise.all(startPromises)
  .bind(this)  // .bind is not native Promise API (Bluebird provides it); it carries `this` through the chain
  .then(acquireLock)
  .then(withLock)
  .then(releaseLock)
  .catch(handleErr)

function acquireLock(key) {
  // acquire a 3-second lock on the key's lock resource
  return redis.rl.lock(`locks:${key}`, 3000)
}

function withLock(lock) {
  this.lock = lock
  // do stuff here after getting the lock
}

function releaseLock() {
  this.lock.unlock()
}
You can use a Redis pipeline with a transaction.
Redis is a single-threaded server and executes commands sequentially. When a pipeline with a transaction is used, the server executes all of the pipeline's commands atomically.
Transactions
MULTI, EXEC, DISCARD and WATCH are the foundation of transactions in Redis. They allow the execution of a group of commands in a single step, with two important guarantees:
All the commands in a transaction are serialized and executed sequentially. It can never happen that a request issued by another client is served in the middle of the execution of a Redis transaction. This guarantees that the commands are executed as a single isolated operation.
A simple read-modify-write example in Python (redis-py). Note that inside a transactional pipeline a plain GET is only queued, so the value must be read in WATCH mode first:
with redis_client.pipeline(transaction=True) as pipe:
    pipe.watch("mykey")           # optimistic lock: the transaction aborts if "mykey" changes
    val = int(pipe.get("mykey"))  # while WATCHing, the pipeline executes reads immediately
    val = val * val % 10
    pipe.multi()                  # start buffering the transactional part
    pipe.set("mykey", val)
    pipe.execute()                # raises redis.WatchError if "mykey" was modified meanwhile
127.0.0.1:6379> keys *
1) "trending_showrooms"
2) "trending_hashtags"
3) "trending_mints"
127.0.0.1:6379> sort trending_mints by *->id DESC LIMIT 0 12
1) "mint_14216"
2) "mint_14159"
3) "mint_14158"
4) "mint_14153"
5) "mint_14151"
6) "mint_14146"
The keys have expired, but they are still members of the set. I need Redis to remove the expired keys from the set automatically.
You can't set a TTL on individual members within the SET.
This blog post dives a bit deeper on the issue and provides a few workarounds.
https://quickleft.com/blog/how-to-create-and-expire-list-items-in-redis/
Hope that helps.
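For example, one common workaround along those lines (a sketch, not necessarily the linked post's exact approach) is to switch the plain set to a sorted set whose scores are expiry timestamps, trimming on read:

import time
import redis

r = redis.Redis()

# add a member that should "expire" 60 seconds from now
r.zadd("trending_mints", {"mint_14216": time.time() + 60})

# on read, first drop everything whose expiry has passed...
r.zremrangebyscore("trending_mints", "-inf", time.time())
# ...then read the still-live members
live = r.zrange("trending_mints", 0, -1)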
Please read this page entirely: https://redis.io/topics/notifications
Summing up, you must have a sentinel program (a watcher process of your own, not Redis Sentinel) listening to PUB/SUB messages, and you must alter the redis.conf file to enable keyevent expired notifications:
in redis.conf:
notify-keyspace-events Ex
In order to enable the feature a non-empty string is used, composed of multiple characters, where every character has a special meaning according to the following table:
E     Keyevent events, published with __keyevent@<db>__ prefix.
x     Expired events (events generated every time a key expires)
Then the sentinel program must listen to the channel __keyevent@0__:expired, if your database is 0. Change the database number if you are using any other than zero.
Then when you subscribe to the channel and receive the key which is expiring, you simply issue a SREM trending_mints <key> to remove it from the set.
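In redis-py, that subscriber could be sketched like this (database 0 and the trending_mints set as above):

import redis

r = redis.Redis(db=0)
pubsub = r.pubsub()
pubsub.subscribe("__keyevent@0__:expired")  # requires notify-keyspace-events "Ex"

for message in pubsub.listen():
    if message["type"] != "message":
        continue
    expired_key = message["data"].decode()
    # the key itself is already gone; drop the stale reference from the set
    r.srem("trending_mints", expired_key)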
IMPORTANT
The expired events are generated when a key is accessed and is found to be expired by one of the above systems; as a result there are no guarantees that the Redis server will be able to generate the expired event at the time the key's time to live reaches the value of zero.
If no command targets the key constantly, and there are many keys with a TTL associated, there can be a significant delay between the time the key's time to live drops to zero and the time the expired event is generated.
Basically expired events are generated when the Redis server deletes the key, and not when the time to live theoretically reaches the value of zero.
So keys will be deleted due to expiration, but the notification is not guaranteed to arrive at the moment the TTL reaches zero.
ALSO, if your sentinel program misses the PUB/SUB message, well... that's it, you won't be notified again! (this is also covered in the link above)