Redis clients broadcast problems (in the context of Socket.IO)

So I've read some articles about scaling Socket.IO. For various reasons I don't want to use the built-in Socket.IO scaling mechanism (mostly because it seems inefficient: from my point of view it publishes a lot more to Redis than required).
So I came up with this simple idea:
Each Socket.IO server creates Redis pub/sub/store clients, connects to Redis and subscribes to a channel. Now, when I want to broadcast data, I just publish it to Redis; all other Socket.IO servers get it and push it to their users.
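In code, the idea would look roughly like this (a minimal sketch using the classic node_redis callback API; `io` is assumed to be an existing Socket.IO server, and the channel name is an arbitrary choice):

```js
const redis = require('redis');

// Two connections: a client in subscriber mode cannot issue regular
// commands, so publishing needs its own connection.
const pub = redis.createClient();
const sub = redis.createClient();

sub.subscribe('broadcast'); // channel name is an arbitrary choice
sub.on('message', (channel, message) => {
  const { event, data } = JSON.parse(message);
  io.sockets.emit(event, data); // push to this server's local users
});

// Any server that wants to broadcast cluster-wide just publishes;
// every server (including this one) receives it via the subscription.
function broadcast(event, data) {
  pub.publish('broadcast', JSON.stringify({ event, data }));
}
```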
There is a problem, though (which I think is also a problem for Socket.IO's built-in mechanism). Let's say I want to know the number of all connected users. There are at least two ways of doing that:
Server A publishes give_me_clients to Redis. Each Socket.IO server then counts its connections and publishes number_of_clients. Server A grabs this data, combines it and sends it to the client.
Each server updates number_of_clients_for::ID_HERE in Redis whenever a user connects to or disconnects from it. Server A then just fetches that data and combines it. This might be more efficient; a sketch follows below.
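For concreteness, the second option might look something like this (again a sketch with node_redis; SERVER_ID is assumed to be unique per process, and KEYS is used only to keep it short, SCAN would be kinder to a busy instance):

```js
const redis = require('redis');
const client = redis.createClient();

// SERVER_ID is assumed to be unique per Socket.IO process.
const KEY = 'number_of_clients_for::' + SERVER_ID;

io.sockets.on('connection', (socket) => {
  client.incr(KEY);
  socket.on('disconnect', () => client.decr(KEY));
});

// Server A: fetch every per-server counter and sum them up.
function countAllClients(callback) {
  client.keys('number_of_clients_for::*', (err, keys) => {
    if (err || keys.length === 0) return callback(err, 0);
    client.mget(keys, (err, counts) => {
      if (err) return callback(err);
      callback(null, counts.reduce((sum, n) => sum + Number(n), 0));
    });
  });
}
```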
There are problems with these solutions though:
Server A is not aware of the other servers, so it does not know when it should stop listening for number_of_clients. One could fix that by making Server A aware of the other servers: whenever a server connects to Redis, it publishes new_server (Server A grabs the data and stores it in memory). But what happens when the Redis-Socket.IO connection breaks? Is there a way for Redis to notify clients that one of the clients has disconnected?
Essentially the same problem as above: when a Socket.IO server crashes, how do we clear its number_of_clients data?
So the real question is: can Redis notify (publish to a channel) clients that the connection with one of them has just ended?

After a lot of testing it seems that Redis does not have such functionality. I've also found that scaling Socket.IO is a real pain.
So I've switched from Socket.IO to ws (see this link). It is low level (but perfect for my use) and it only supports WebSockets (in all major versions). But then again I only want to support WebSockets and FlashSocket (which I have to implement manually, but that's fine).
The advantage is that I can easily create a cluster of such servers. HAProxy works with them almost out of the box (after some minor tuning), and the servers can easily communicate on a local network (via UDP, or via a central TCP server if the cluster is big); a rough sketch follows below.
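The server-to-server part can stay very small. A rough sketch of UDP fan-out between ws servers (the peer list is hard-coded here; a real cluster would also have to deal with datagram loss and size limits):

```js
const dgram = require('dgram');
const WebSocket = require('ws');

const PEERS = [{ host: '10.0.0.2', port: 41234 }]; // hypothetical peer list
const udp = dgram.createSocket('udp4');
const wss = new WebSocket.Server({ port: 8080 });

udp.bind(41234);

// A datagram from a peer gets pushed to this server's local clients.
udp.on('message', (msg) => {
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) client.send(msg.toString());
  });
});

// Broadcasting: deliver locally, then forward to every peer.
function broadcast(data) {
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) client.send(data);
  });
  PEERS.forEach((p) => udp.send(Buffer.from(data), p.port, p.host));
}
```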
The disadvantage is that one has to manually implement some nice features like heartbeats, broadcasting, rooms, etc. You also won't have a long-polling fallback, but that's fine in my case. Scaling is still more important, imho.

Related

Can the clients of a RabbitMQ cluster reconnect to another Node if the one they are connected to fails?

I feel like I am missing something very fundamental here.
I can bring up a RabbitMQ cluster with three nodes (rabbit1, rabbit2 and rabbit3) without an issue. Then when I start writing my microservices it seems like each client connects to only one rabbit instance. So let's say I have all my services connect to rabbit1.
If rabbit1 then goes down will my entire infrastructure blow up? Do the services have a way of switching to another rabbit node? It seems like they cannot, in which case, what is the point of having a cluster?
In case someone else runs into this and, like me, has trouble finding it in the documentation: RabbitMQ does not manage client connection auto-recovery. From the docs:
Some client libraries provide a mechanism for automatic recovery from network connection failures... Other clients may consider network failure recovery to be a responsibility of the application.
So first check if your library offers auto-recovery; if not, you'll have to implement it yourself.
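For example, with amqplib (Node.js), which leaves recovery to the application, a minimal failover loop might look like this (the node URLs and retry delay are assumptions; production code would add backoff and re-declare its topology on each new connection):

```js
const amqp = require('amqplib');

const NODES = ['amqp://rabbit1', 'amqp://rabbit2', 'amqp://rabbit3'];
let attempt = 0;

async function connectWithFailover(onReady) {
  const url = NODES[attempt++ % NODES.length]; // rotate through the cluster
  try {
    const conn = await amqp.connect(url);
    conn.on('error', () => {}); // 'close' follows; avoid an unhandled crash
    conn.on('close', () => {
      // Node went down or the network broke: try the next node.
      setTimeout(() => connectWithFailover(onReady), 1000);
    });
    await onReady(conn);
  } catch (err) {
    setTimeout(() => connectWithFailover(onReady), 1000);
  }
}

connectWithFailover(async (conn) => {
  const ch = await conn.createChannel();
  // Re-declare queues and consumers here: topology is per-connection state.
});
```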

Redis connection settings for app "surviving" redis connectivity issues

I'm using Azure Redis Cache for certain performance-monitoring services. Basically, when events like page loads occur, I send a fire-and-forget command to Redis to record the event. My goal is for my app to function fine whether or not it can contact the Redis server. I'm looking for a best practice for this scenario, and I would be OK with losing some events if necessary. I've been finding that even though I'm using fire-and-forget, the app stalls when the web server runs into high latency or connectivity issues with the Redis server.
I'm using StackExchange.Redis. Any best-practice configuration options or programming practices for this scenario?
The way I was implementing a singleton pattern on the connection turned out to be blocking requests. Once I fixed this, my app behaves as I want (e.g. it still functions when the Redis connection dies).
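The question is about StackExchange.Redis, but the shape of the fix is not C#-specific. Here is the same pattern sketched in Node.js with ioredis (the host and key names are made up): create the connection once, never block a request waiting on it, and let fire-and-forget writes fail silently.

```js
const Redis = require('ioredis');

// Created once at module load; request handlers never wait on it.
const redis = new Redis({
  host: 'my-cache.example.net', // hypothetical host
  retryStrategy: (times) => Math.min(times * 100, 5000), // keep retrying
  maxRetriesPerRequest: 1, // fail fast instead of queueing work forever
});
redis.on('error', () => { /* swallow: the app must survive outages */ });

// Fire-and-forget: record the event, ignore the outcome.
function recordEvent(name) {
  redis.incr('events:' + name).catch(() => {});
}
```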

Are CCS servers load-balancing upstreams, in the case of multiple app servers?

Here's my scenario.
Client devices send upstream messages to CCS, which forwards them, as intended, to my app server (C libstrophe + stunnel), which acknowledges and processes them.
If I launch two instances of my app server (two persistent XMPP channels), I see upstream messages from the devices dispatched to server instance #1 or server instance #2, never both, and apparently well balanced.
From a scalability standpoint this would be a nice feature to use in production, but as I found no confirmation of this behavior in the docs, I wonder if I can rely on it.
Thanks for your help.

Redis PUBLISH/SUBSCRIBE limits

I'm considering Redis for a section of the architecture of a new project. It will consist of a lot of clients (node.js connections) SUBSCRIBING to particular channels, with one process PUBLISHING to those channels as needed.
I'm curious about the limits of the PUBLISH/SUBSCRIBE commands and how to mitigate them. An obvious limit is the number of open file descriptors on the machine running Redis, so at some point I'll need to implement master-slave replication or consistent hashing across multiple Redis instances.
Does anyone have any solutions about how to scale this architecture with Redis' PubSub?
Redis pub/sub scales really easily, since master/slave replication automatically propagates published messages to all slaves.
The easiest way is to load balance the node.js connections with, for instance, HAProxy, and run a Redis slave on each web server that syncs with a single master to which you publish the messages.
I can't give you exact numbers, since that greatly depends on the underlying system, but this should scale extremely well, and you don't need to manually manage which server the clients connect to. You obviously need some way to handle session state, so you might need that anyway, but it's a lot easier to do in the load balancer than in your application.
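As a sketch of that layout (hosts and the channel name are assumptions, node_redis API): each web server subscribes through its local slave, and only the publishing process talks to the master.

```js
const redis = require('redis');

// On each web server: subscribe through the local slave.
const sub = redis.createClient({ host: '127.0.0.1' });
sub.subscribe('updates');
sub.on('message', (channel, message) => {
  // fan the message out to this server's connected node.js clients here
});

// In the single publishing process: write to the master only;
// replication carries the message out to every slave's subscribers.
const pub = redis.createClient({ host: 'redis-master.internal' });
pub.publish('updates', JSON.stringify({ hello: 'world' }));
```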

Faye or Redis Pubsub

I thought I understood this technology, but maybe I don't. What's the difference between the two? Why would you choose one over the other?
Use case: ~realtime updates.
I'm the author of Faye. Conceptually, Faye and Redis pub/sub do very similar things, indeed the latest release of Faye can use Redis as a back-end. As Tom says, Redis is appropriate for inter-process messaging within your server cluster since the Redis client will get access to your whole Redis database.
Faye is more appropriate if you want to provide a publicly accessible pub/sub service over the web, for example to power the UI for your website. It only does pub/sub, not any other storage like Redis provides, and works over HTTP and WebSocket rather than over a raw TCP socket. It also allows for user-defined client- and server-side extensions to expand the messaging protocol it uses.
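For reference, standing up Faye with the Redis back-end mentioned above takes only a few lines (using the faye and faye-redis packages; the mount point and ports are arbitrary choices):

```js
const http = require('http');
const faye = require('faye');

const bayeux = new faye.NodeAdapter({
  mount: '/faye',
  timeout: 45,
  engine: {
    type: require('faye-redis'), // the Redis back-end
    host: 'localhost',
    port: 6379,
  },
});

const server = http.createServer();
bayeux.attach(server);
server.listen(8000);

// Any HTTP/WebSocket client can now publish and subscribe, e.g.:
// new faye.Client('http://localhost:8000/faye').subscribe('/news', ...);
```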
Redis publish/subscribe is a very simple system for internal use in a server cluster: it requires an open connection to Redis (unauthenticated, and giving complete access to everything in Redis).
Obviously this is the most efficient way to handle scenarios where it is appropriate, but if you need authentication, reliable delivery, or HTTP connections, you will need to add a more complete messaging system on top of Redis. Faye is one of the options in this space.