I'm new to Redis. I'm designing a pub/sub mechanism in which there's a dedicated channel for every client (business client) that has at least one user (browser) connected. Those users then receive information about the client to which they belong.
I need Redis because I have a distributed system: there is a backend that pushes data to the corresponding client channels, and a webapp with its own server (multiple instances) that holds the user connections (WebSockets).
To summarize:
The backend is the publisher and the webapp server is the subscriber.
A Client has multiple Users.
There is one channel per Client with at least one User connected.
If a Client has no connected Users, no channel exists.
The backend pushes data to every existing Client channel.
Each Webapp server consumes data only from the Client channels that correspond to the Users connected to it.
So, to reduce work, I don't want my backend to push data for Clients that have no connected Users. It seems I need a way to share the list of connected Users from my Webapp with my Backend, so that the Backend can decide which Client channels to publish to. The obvious place to share that piece of data is the same Redis instance.
My approach is to have a key in Redis with something like this:
[USERS: User1/ClientA/WebappServer1, User2/ClientB/WebappServer1,
User3/ClientA/WebappServer2]
So here comes my question...
How can I avoid stale data if, for example, one of my Webapp nodes crashes and doesn't get the chance to remove its list of connected Users from Redis?
Thanks a lot!
Firstly, good luck with the overall project - sounds challenging and fun :)
I'd use a slightly different design to keep track of the users: have each Client/Webapp maintain a set (possibly a sorted set with login time as the score) of its users. Set a TTL on the set and have the client/webapp refresh it periodically; if the owning process crashes, the refresh stops and the key simply expires.
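A minimal sketch of that idea using Jedis; the key naming (users:WebappServer1) and the 30-second TTL are assumptions, not part of the answer:

```java
import redis.clients.jedis.Jedis;

public class PresenceRegistry {
    private static final int TTL_SECONDS = 30;           // assumed heartbeat window
    private final Jedis jedis = new Jedis("localhost", 6379);
    private final String key;                             // e.g. "users:WebappServer1"

    public PresenceRegistry(String webappServerId) {
        this.key = "users:" + webappServerId;
    }

    // Called when a user's websocket connects to this webapp node.
    public void addUser(String userId, String clientId) {
        jedis.zadd(key, System.currentTimeMillis(), userId + "/" + clientId);
        jedis.expire(key, TTL_SECONDS);
    }

    // Called when a user's websocket disconnects cleanly.
    public void removeUser(String userId, String clientId) {
        jedis.zrem(key, userId + "/" + clientId);
    }

    // Run this every few seconds from a scheduled task; if the webapp node
    // crashes, the refresh stops and the whole set expires on its own.
    public void heartbeat() {
        jedis.expire(key, TTL_SECONDS);
    }
}
```

The backend can then read those sets (or their union across webapp nodes) to decide which Client channels are worth publishing to; a missing or expired set simply means no connected Users for that node.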
Related
We want to track each user's TURN usage separately. I inspected the TURN REST API; AFAIU it is only used to authorize a user that already exists in the Coturn DB. This is the point I couldn't figure out exactly. I can use an ICE server list that includes different username/credential pairs, but I must have created those username/credential pairs in the Coturn DB beforehand. Am I right? If so, I have no idea how to do this. Detect the user's request to use TURN from the frontend -> generate credentials as in "CoTURN: How to use TURN REST API?", which I have already achieved -> if this is a new user, my backend should somehow go to my EC2 instance and run the "turnadmin create user" command -> then I let the WebRTC connection proceed -> then track the usage of that specific user and send it back to my backend somehow.
Is this the right scenario? If not, how should it work? Is there another way to manage multiple users and their data usage? Any help would be appreciated.
AFAIU, to get the stats data we must use the Redis database. I tried to use it and was able to see the traffic data (with psubscribe turn/realm/*/user/*/allocation/*/traffic), but the other subscription events never fired (psubscribe turn/realm/*/user/*/allocation/*/traffic/peer or psubscribe turn/realm/*/user/*/allocation/*/total_traffic, even after the allocation is deleted). I also tried to get past traffic data from the Redis DB, but I couldn't find out how. In Redis, the KEYS * command only shows "status" keys.
Even if I get this traffic data, I couldn't figure out how to use it with multiple users. Currently in our project we have one user (in Coturn terms) and the other users use TURN through that single user.
BTW, we tried to track the usage on the peer connection object created from the RTCPeerConnection interface, and I noticed that the incoming byte counts are lower than what Redis reports. So I think there is some loss, and I should calculate usage from the TURN side.
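For the subscription part, here is a minimal sketch with Jedis, assuming coturn is pointed at this Redis instance via redis-statsdb and publishes on the channel layout shown above (the split index is an assumption based on that layout):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class CoturnTrafficListener {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);   // coturn's stats Redis instance
        // psubscribe blocks the calling thread and invokes onPMessage per event.
        jedis.psubscribe(new JedisPubSub() {
            @Override
            public void onPMessage(String pattern, String channel, String message) {
                // channel looks like: turn/realm/<realm>/user/<username>/allocation/<id>/traffic
                String username = channel.split("/")[4];
                System.out.println("traffic event for " + username + ": " + message);
            }
        }, "turn/realm/*/user/*/allocation/*/traffic");
    }
}
```

Per-user accounting then becomes a matter of summing the traffic events by the username extracted from the channel name, which is why each user needs distinct TURN credentials rather than sharing one Coturn user.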
I'm designing a Java Spring based real-time notification and chat system using Redis and WebSockets (with SockJS and STOMP). The requirement is for each user to subscribe to a unique channel (the channel name will be the user id), because notifications can be targeted at a single user and chat conversations can be 1-on-1. The very reason I'm using Redis is to get an event triggered in the corresponding application server (there are many) where the user is connected via WebSocket. As I understand it, when a publish happens to, say, "user1", and I want the onMessage handler fired for just that target user:
Do I need to maintain one Redis connection per user?
Is it okay to open 15k connections at a time, with 15k unique subscriptions, for that many users connected to the system at once?
Since you have tagged the question with Redisson, I presume you are using it already. If your choice of WebSocket framework is flexible, i.e. not limited to SockJS with STOMP, you could consider the netty-socketio project. It is written by the author of Redisson, and the integration between the two couldn't be more natural.
Netty-socketio is fully compatible with the popular Socket.IO client-side JS library, and it is used commercially by plenty of companies.
It doesn't require one Redis connection per user, and there are known deployments whose usage already exceeds your requirement.
This is mentioned in the project's README file.
Customer feedback in 2014:
"To stress test the solution we run 30 000 simultaneous websocket clients and managed to peak at total of about 140 000 messages per second with less than 1 second average delay." (c) Viktor Endersz - Kambi Sports Solutions
I would like to have one server and a few clients. The server will be my own Java application that uses CacheFactory. I will be reading all my static data from a database and populating the cache even before it is requested by any client. While the cache is being populated on the server, it will also be propagated to all clients connected to the server. Once the cache population is complete, I would like to give a green signal to all clients to start requesting data. Is there something I need to do so that the server sends an event to the clients, or so that the clients receive an event signalling the completion of cache pre-heating?
Thanks,
Yash
One way to accomplish this would be to create a region on the server and the client (say /server-ready) for notification only. The client will register interest for all keys in this region. The client will also register a CacheListener for this region.
When the server is done loading data, you can put an entry in the server-ready region. The event will be sent to the client and afterCreate() method on the CacheListener will be invoked, which could serve as a notification to your clients that the server is done populating data.
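A rough sketch of the client side with the Apache Geode / GemFire client API; the locator address and key names are assumptions, and older GemFire releases use the com.gemstone.gemfire packages instead of org.apache.geode:

```java
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class ServerReadyClient {
    public static void main(String[] args) {
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334)        // assumed locator address
                .setPoolSubscriptionEnabled(true)          // required to receive server events
                .create();

        Region<String, String> serverReady = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
                .addCacheListener(new CacheListenerAdapter<String, String>() {
                    @Override
                    public void afterCreate(EntryEvent<String, String> event) {
                        // Fires when the server puts the "ready" entry after pre-loading.
                        System.out.println("Server finished loading: " + event.getKey());
                    }
                })
                .create("server-ready");

        serverReady.registerInterest("ALL_KEYS");          // subscribe to all keys in the region
    }
}
```

On the server, once pre-loading finishes, a simple serverReadyRegion.put("dataLoaded", "done") is enough to trigger afterCreate() on every subscribed client.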
I understand that, roughly speaking, Trello uses Redis for a transient data store.
Is anyone able to elaborate further on the part it plays in the application?
We use Redis on Trello for ephemeral data that we would be okay with losing. We do not persist the data in Redis to disk, and we run it with the allkeys-lru eviction policy, so we only store things there that can be evicted at any time with only very minor inconvenience to users (e.g. momentarily seeing an incorrect user status). That being said, we give it more than 5x the space it needs for its actual working set and have it sample 10 keys when choosing one to evict, so we really never see anything we're using get kicked out.
It's our pubsub server. When a user does something to a board or a card, we want to send a message with that delta to all websocket-connected clients that are subscribed to the object that changed, so all of our Node processes are subscribed to a pubsub channel that propagates those messages, and they propagate that out to the appropriately permissioned and subscribed websockets.
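The publish side of such a scheme is a one-liner in any Redis client. A hedged sketch with Jedis; the object: channel naming is made up for illustration, not Trello's actual scheme (Trello's processes are Node, not Java):

```java
import redis.clients.jedis.Jedis;

public class DeltaPublisher {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Publish a change on a board/card so every process subscribed to that
    // object's channel can forward it to its own connected websockets.
    public void publishDelta(String objectId, String deltaJson) {
        jedis.publish("object:" + objectId, deltaJson);
    }
}
```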
We SORT OF use it to back socket.io, but since we only use the websockets, and since socket.io is too chatty to scale like we need it to at the moment, we have a patch that disables all but the one channel that is necessary to us.
For our users who don't have websockets, we have to keep a list of the actions that have happened on each object channel since the user's last poll request. For that we use a list, which we cap at the most recent 100 elements, plus an auxiliary counter of how many elements have ever been added to the list since it was created. So when we're answering a poll request from such a browser, we can check the last element it reports having seen and send down only the messages that have been added to the queue since then. That gets a poll request down to just a permissions check and a single Redis key check in most cases, which is very fast.
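A rough sketch of that capped list plus counter with Jedis; the key names are invented, and the writes are left non-transactional for brevity:

```java
import java.util.List;
import redis.clients.jedis.Jedis;

public class ActionLog {
    private static final int MAX = 100;                  // keep the most recent 100 actions
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Record a new action on an object channel (board/card).
    public void append(String objectId, String actionJson) {
        String listKey = "actions:" + objectId;
        jedis.lpush(listKey, actionJson);                // newest action at the head
        jedis.ltrim(listKey, 0, MAX - 1);                // cap the list at 100 entries
        jedis.incr(listKey + ":count");                  // total ever added, never trimmed
    }

    // Answer a poll: return only the actions added since the client's last-seen count.
    public List<String> since(String objectId, long lastSeenCount) {
        String raw = jedis.get("actions:" + objectId + ":count");
        long total = raw == null ? 0 : Long.parseLong(raw);
        long missed = Math.min(total - lastSeenCount, MAX);
        if (missed <= 0) {
            return List.of();                            // client is up to date
        }
        return jedis.lrange("actions:" + objectId, 0, missed - 1);
    }
}
```

If the client has missed more than 100 actions, the counter still tells you so, and the client can fall back to a full refresh of the object.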
We store some ephemeral data about the active status of connected users in Redis, because that data changes frequently and it is not necessary to persist it to disk.
We store short-lived keys to support OAuth logins in Redis.
We love Redis; once you have an instance of it up and running, you want to use it for all kinds of things. The only real trouble we have had with it is with slow-consuming clients eating up the available space.
We use MongoDB for our more traditional database needs.
Trello uses Redis with Socket.IO (RedisStore) for scaling, with the following two features:
key-value store, to set and get values for a connected client
as a pub-sub service
Resources:
Look at the code for RedisStore in Socket.IO here: https://github.com/LearnBoost/socket.io/blob/master/lib/stores/redis.js
Example of Socket.IO with RedisStore: http://www.ranu.com.ar/2011/11/redisstore-and-rooms-with-socketio.html
Every time the client/browser connects to the Mochiweb server, it creates a new Loop process, doesn't it? So, if I want to transfer a message from one client to another (a typical chat system), I should use the self() of Loop to store all connected clients' PIDs, shouldn't I?
If something (or everything) is wrong so far, please briefly explain how the system works: where is the server process and where is the client process?
How do I send a message to a client's Loop process using its PID? I mean, where do I put the "receive" in the Loop?
Here's a good article about a Mochiweb web chat implementation. HTTP clients don't have PIDs, as HTTP is a stateless protocol. You can use cookies to connect a request to a unique visitor of the chat room.
First, do your research: check out this article, and this one, and then this last one.
Let the mochiweb processes bring chat data into your other application server (it could be a gen_server, a worker in your OTP app with many supervisors, other distributed workers, etc.). You should not depend on the PID of the mochiweb process. Have another way of uniquely identifying your users: cookies, session ids, auth tokens, etc., something managed only by your application. Let the mochiweb processes just deliver chat data to your servers as soon as it's available. You could do some kind of queuing in mnesia where every user has a message queue into which other users post chat messages; then on each connection the mochiweb processes just ask mnesia whether there is a message available for the user. In summary, it will depend on the chat methodology: HTTP long polling/COMET, REST, server push, keep-alive connections, and so on. Just keep it fault tolerant, and do not involve the mochiweb processes in the chat engine; let mochiweb be only the transport and do your chat jungle behind it!
You can use several data structures to avoid using PIDs for identity. Take a queue() as an example. Imagine you have a replicated mnesia database with a RAM table in which you have implemented a clearly and uniquely identifiable queue() per user. A (mochiweb) process holding a connection to the user only holds the identity of that user's session. It then uses this identity to keep checking his queue() in mnesia at regular intervals (if you intend to do it this way, keeping the mochiweb process alive for as long as the user's session lasts). This means that no matter which process PID a user is connected through, as long as the process has the user's identity it can fetch (read) messages from his message queue(). It also makes it possible for a user to have multiple client sessions at once. The same process can use this identity to dump messages from this user into other users' queues().
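The mnesia specifics are Erlang territory, but the shape of the design, a per-user message queue keyed by a session/user identity rather than by the transport process, can be sketched in plain Java; every name here is illustrative only:

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative only: messages are keyed by a user identity (session id, auth token, ...),
// never by the identity of the connection process that happens to be serving the user.
public class ChatMailboxes {
    private final Map<String, Queue<String>> mailboxes = new ConcurrentHashMap<>();

    // Another user posts a chat message into the recipient's queue.
    public void post(String recipientUserId, String message) {
        mailboxes.computeIfAbsent(recipientUserId, id -> new ConcurrentLinkedQueue<>())
                 .add(message);
    }

    // Any transport holding the user's identity can drain the queue,
    // so the same user may be served through several client sessions at once.
    public String poll(String userId) {
        Queue<String> queue = mailboxes.get(userId);
        return queue == null ? null : queue.poll();
    }
}
```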