Redis supports pub/sub.
How can I have a client retrieve the last value and subscribe to changes, in such a way that I don't miss messages?
Here is the problem with GET + SUBSCRIBE:
1. I GET the last value from Redis.
2. I SUBSCRIBE to changes.
3. Before I store the value from step 1, I receive an update, and therefore update my cache with it.
4. I naively proceed to store the value from step 1, overwriting the value from step 3.
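A common mitigation (a sketch of mine, not from the post): subscribe before the initial GET, so no update can slip in between the read and the subscription, then apply updates strictly in arrival order. This assumes every writer SETs the key and then PUBLISHes the full new value; get_and_follow and on_value are hypothetical names.

import redis

r = redis.Redis(decode_responses=True)

def get_and_follow(key, channel, on_value):
    p = r.pubsub(ignore_subscribe_messages=True)
    p.subscribe(channel)        # 1. subscribe first, so no update can slip past
    on_value(r.get(key))        # 2. then read the current value
    for msg in p.listen():      # 3. then apply updates in publish order
        on_value(msg["data"])   # assumes publishers send the full new value

A message published between steps 1 and 2 will be applied again after the GET, so on_value should be idempotent; since each message carries the full value, re-applying one is harmless.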
Related
How should you handle the fact that events received via webhooks can arrive in random order?
For instance, given the following ordered events:
A: invoiceitem.created (with quantity of 1)
B: invoiceitem.updated (with quantity going from 1 to 3)
C: invoiceitem.updated (with quantity going from 3 to 2)
How do you make sure receiving C-A-B does not result in corrupted data (i.e. ending with a quantity of 3 instead of 2)?
You could reject the webhook if the previous_attributes in Event#data do not correspond to the current state, but then you are stuck if your local model was already updated, as you will never find yourself in the state the webhook expects.
Or you can treat any webhook as just a hint to retrieve and update an object: disregard the data sent by the webhook and always re-fetch the object.
Even if you receive events ordered as update/delete/create it should still work: update would in fact create the object, delete would delete it, and create would fail to retrieve the object and do nothing.
But it feels like a waste of resources to re-fetch the data every time when the webhook already offers it as event data.
This question was asked before but the answers don't cover the above solutions.
Thanks
If your application is sensitive to changes like these that can occur close together in time, you really should just use the event as a signal to retrieve the object, as #koopajah noted in their comment. That's the only way to ensure you have the latest state.
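To make the "use the event as a signal" approach concrete, here is a minimal sketch in Python. fetch_invoice_item, upsert_local, and delete_local are hypothetical stand-ins for your API client and local persistence layer, not part of any real SDK.

def handle_webhook(event):
    # Treat the event purely as a hint: only the object id is used.
    object_id = event["data"]["object"]["id"]
    current = fetch_invoice_item(object_id)   # always re-fetch the latest state
    if current is None:
        delete_local(object_id)               # gone upstream: treat as a delete
    else:
        upsert_local(object_id, current)      # create-or-update to fetched state

This is order-insensitive: whichever order C-A-B arrive in, each delivery converges the local copy to the current upstream state.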
I'm taking snapshots for the current session, but I only want to see the latest event records every time. So is there a way to empty the channel's sub-buffers manually?
I am trying to make a notification system with Redis rather than MySQL, which is what I use for the rest of the system. The reason is that I don't really need to save that much data, so it can be kept in memory, and I want it to be lightweight and fast.
The notifications will be kept temporarily. What I mean by that is that I do not want to save all notifications, only the 50 latest unseen notifications for each user. So the first thing I thought about was using a linked list with a capped length of 50.
I would need to save this information for the notification:
postId
commentId
type
time
userId
username
image
So perhaps a JSON serialized string like this:
{"postId":1,"commentId":10,"type":1,"time":1462960058,"userId":2,"username":"Alexander","image":"ntfpRrgx.png"}
The notifications would be output like this on the client side:
Alexander commented on your post.
Alexander replied to your comment.
Where the type determines what kind of notification it is. I can handle "type" checks client side and output the notification format accordingly. But here is the part I am having difficulty with.
1) I need to be able to save the notifications in an ordered way so that I know which notification is newest.
2) I need to be able to know when a notification has been seen, so that it is no longer counted as unseen.
3) I need to have a count of unseen notifications that I can show to the user. And If the user clicks on a notification, I need to mark that as a seen notification and decrement the count of unseen notifications.
4) I need to be able to mark all notifications as marked seen if the user wishes to do that.
5) I need to be able to get a subset of the notifications, whether seen or unseen, like an offset and limit on MySQL. For example, the user sees the newest 5 notifications, but he could click a next button and see the next 5, and the next 5 and so on.
I have no idea how to do all of this on Redis.
The key for the list or set could be user:1:notification. I know a list preserves insertion order, and we can add and remove from the head and tail. But how do I achieve all these points?
1: You can use Redis sorted set (zset) operations, using the timestamp as the score and the event id (or the entire event JSON) as the member:
ZADD my-set-key timestamp event-id
Then, to get a page of the newest items, use the ZREVRANGE command. If you choose to use the event id as the member, you need an additional structure to store the event fields; I would recommend a hash per event: HSET event-id field value.
2: You can remove an item by member (event-id):
ZREM my-set-key event-id
3: Assuming your zset only keeps unseen notifications, you can use ZCARD to get the size of the set:
ZCARD my-set-key
4: You can remove the entire set in one shot using:
DEL my-set-key
5: You can paginate using ZRANGE/ZREVRANGE:
ZREVRANGE my-set-key start stop
If you need to keep both seen and unseen items, then you need an extra zset to which you only add, and from which you never remove items once they are seen.
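Putting these pieces together, a sketch in Python with redis-py (my code, not part of the answer above; it assumes redis-py 3+, where ZADD takes a mapping, and uses the serialized notification JSON itself as the member, so no extra hash is needed):

import json
import time
import redis

r = redis.Redis(decode_responses=True)

def _key(user_id, state):
    return "user:{0}:notifications:{1}".format(user_id, state)

def add_notification(user_id, notif):
    # (1) the timestamp score keeps members ordered by recency
    r.zadd(_key(user_id, "unseen"), {json.dumps(notif): time.time()})
    # cap the set at the 50 newest members (ranks count up from the lowest score)
    r.zremrangebyrank(_key(user_id, "unseen"), 0, -51)

def unseen_count(user_id):
    # (3) ZCARD on the unseen set is the badge count
    return r.zcard(_key(user_id, "unseen"))

def mark_seen(user_id, member):
    # (2) move one member from unseen to seen, preserving its score
    score = r.zscore(_key(user_id, "unseen"), member)
    if score is not None:
        r.zrem(_key(user_id, "unseen"), member)
        r.zadd(_key(user_id, "seen"), {member: score})

def mark_all_seen(user_id):
    # (4) merge everything into the seen set, then drop the unseen set
    r.zunionstore(_key(user_id, "seen"),
                  [_key(user_id, "seen"), _key(user_id, "unseen")])
    r.delete(_key(user_id, "unseen"))

def page(user_id, state, offset, limit):
    # (5) ZREVRANGE paginates newest-first, like LIMIT offset, count in MySQL
    return r.zrevrange(_key(user_id, state), offset, offset + limit - 1)

The client can echo back the exact member string (the JSON) when the user clicks a notification, which is what mark_seen needs to identify it.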
Node.js & Redis:
I have a LIST (users:waiting) storing a queue of users waiting to join games.
I have a SORTED SET (games:waiting) of games waiting for users. This is updated by the servers every 30s with a new date, so I can ensure that if a server crashes, its game is no longer used. If the server is running and the game fills up, it'll remove itself from the sorted set.
Each game has a SET (game:id:users) containing the users that are in it. Each game can accept no more than 6 players.
Multiple servers are using BRPOP to pick up users from the LIST (users:waiting).
Once a server has a user id, it gets the waiting game ids, then runs SCARD on each game's game:id:users SET. If the result is less than 6, it adds the user to the set.
The problem:
If multiple servers are doing this at once, we could end up with more than 6 users being added to a set. For example, if one server runs SCARD and immediately afterwards another runs SADD, the number in the set will have increased but the first server won't know.
Is there any way of preventing this?
You need transactions, which Redis supports: http://redis.io/topics/transactions
In your case in particular, you want to pay attention to the WATCH command: http://redis.io/topics/transactions#cas
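For this particular check-then-add race, here is a WATCH-based sketch. It's in Python with redis-py to match the other examples on this page (the question itself is Node.js, where the same WATCH/MULTI/EXEC pattern applies); the key name follows the question.

import redis

r = redis.Redis()

def try_join(game_id, user_id, max_players=6):
    key = "game:{0}:users".format(game_id)
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)                   # EXEC aborts if key changes
                if pipe.scard(key) >= max_players:
                    pipe.unwatch()
                    return False                  # game is already full
                pipe.multi()                      # start the transactional part
                pipe.sadd(key, user_id)
                pipe.execute()                    # raises WatchError if we raced
                return True
            except redis.WatchError:
                continue                          # another server got in; re-check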
I'm making a game that uses redis to store game state. It keeps track of locations and players pretty well, but I don't have a good way to clean up inactive players.
Every time a player moves (it's a semi-slow-moving game; think 1-5 frames per second), I update a hash with the new location and remove the old location key.
What would be the best way to keep track of active players? I've thought of the following:
Set some key on the user to expire, and update it on every heartbeat or move. The problem is the locations are stored in a hash, so if the user key expires the player will still be in the same spot.
Same, but use pub/sub to listen for the expiration and finish cleaning up (seems overly complicated)
Store heartbeats in a sorted set, have a process run every X seconds to look for old players. Update score every heartbeat.
Completely revamp the way I store locations so I can use expire... somehow?
Any other ideas?
Perhaps use separate redis data structures (though the same database) to track user activity and user location.
For instance, track users currently online separately using redis sets:
[My code snippet is in Python using the redis-py bindings, and is adapted from the example app in Flask (a Python micro-framework); the example app and the framework are both by Armin Ronacher.]
from redis import Redis
from time import time

r1 = Redis(db=1)

When the function below is called, it creates a key based on the current unix time in minutes, and then adds a user to a set having that key. I would imagine you would want to set the expiry at, say, 10 minutes, so at any given time you have 10 live keys (one per minute).

def record_online(player_id):
    now = int(time())                            # current unix time in seconds
    k1 = "playersOnline:{0}".format(now // 60)   # one key per minute bucket
    r1.sadd(k1, player_id)                       # record the player in this minute's set
    r1.expire(k1, 600)                           # EXPIRE takes a relative TTL in seconds
So to get all active users, just union all of the live keys (in this example, that's 10 keys, a purely arbitrary number), like so:
def active_users(listOfKeys):
    # SUNION returns every member that appears in any of the given sets
    return r1.sunion(listOfKeys)
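For completeness, a sketch of building that key list under the same minute-bucket naming as above (last_n_minute_keys is my name, not part of the original snippet):

def last_n_minute_keys(n=10):
    now = int(time())
    # one key per minute, going back n minutes from the current minute
    return ["playersOnline:{0}".format((now // 60) - i) for i in range(n)]

online_now = active_users(last_n_minute_keys())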
This solves your clean-up issue because of the TTL: inactive users never appear in the live keys, since those keys constantly recycle. Inactive users are only keyed to old timestamps, which do not persist in this example (though they could be written to a permanent store by redis before expiry). In any event, this clears inactive users from your active redis db.