How to subscribe to a Redis channel from a Nuxt app - vue.js

I have a Nuxt.js frontend app and a PHP backend running on a different server.
I'm setting up a real-time chat. The backend publishes to a Redis server once a new message has been sent (i.e. "hey, I received a new message for room 'foo', go get it and notify the other recipients"). So I'm supposed to subscribe to a specific Redis channel and then notify the others.
My question is more about how I should approach this.
Do I need something like socket.io to talk to my Redis server?
Should I just use redis.createClient() to create a Redis client and then subscribe (dynamically) to each and every room I'm already in or get added to?
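The browser itself can't open a raw Redis connection, so one common pattern is a small Node-side bridge: a single Redis subscriber that forwards published messages to browsers over Socket.io (or Nuxt's own server/WebSocket layer). A rough sketch, assuming the node-redis and socket.io packages; the port, channel names and event names are placeholders:

```ts
import { createClient } from "redis";
import { Server } from "socket.io";

// Standalone Socket.io server the Nuxt client connects to from the browser.
const io = new Server(3001, { cors: { origin: "*" } });

// Dedicated Redis connection for subscribing (a subscribing client
// can't issue regular commands on the same connection).
const sub = createClient({ url: "redis://localhost:6379" });
await sub.connect();

// One subscription per chat-room channel; names here are made up.
await sub.subscribe("chat:foo", (message) => {
  io.to("foo").emit("new-message", JSON.parse(message));
});

io.on("connection", (socket) => {
  // The Nuxt client asks to join the rooms it belongs to.
  socket.on("join", (room: string) => socket.join(room));
});
```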

Related

Load balancing WebSocket with Redis and RabbitMQ

Consider a small chat server. In this server, the actual processing of messages is done by nodes of a service called "chat". This service, along with a "user" service, sits behind a "gateway" service in front, which is the only service that actually communicates with users and is in charge of passing the requests it receives to the other services via the RabbitMQ channel they share.
In a system designed like this, each user is connected to one of the instances of the "gateway" service and, when sending and receiving messages, indirectly communicates with the private "chat" or "user" services behind it. To load balance this, we have an Nginx reverse proxy on the edge that distributes requests across the "gateway" instances. But since the WebSocket connection is real-time, "chat" instances must also be able to send messages to the specific "gateway" instance in charge of a given user for user-specific messages, and to all "gateway" instances for site-wide messages. This is a problem, since with RabbitMQ I don't believe we can target a specific subscriber, and even if we could, we don't know which instance a given user is connected to right now.
Therefore, since we are using Socket.io for the WebSocket connection, I am thinking of adding a new Redis node to the stack to allow this communication between the different instances of the "gateway" service. This is directly supported by Socket.io (a minimal setup is sketched after this question), works alright, and removes the limitations imposed by RabbitMQ. However, we are still using RabbitMQ to route a message from a "chat" instance to a "gateway" instance, which then propagates it through the Redis service until the right "gateway" instance with access to the user is found and the message is delivered to them.
This adds unnecessary lag to user-specific outbound messages. So here I am asking if anyone has a better idea of how this problem should be approached and how to decrease this lag.
Personally, I have the idea of adding Socket.io to the "chat" services (with no client access) and using its backend to send messages directly to the Redis store, so that the "gateway" instance connected to the user can route them straight to that user, bypassing the whole RabbitMQ hop for this type of message.
It might be important to mention that none of these services exist just to do this specific thing: RabbitMQ is heavily used as the message broker for communication between different services, and the "gateway" service works with multiple other services for data aggregation, authentication, and data validation and transformation. The above example is a simplified version of the problem at hand, with the minimum number of moving parts that I could easily describe here.
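For reference, the Redis adapter setup that Socket.io supports for linking the "gateway" instances looks roughly like the sketch below; the port, Redis URL and the per-user room naming are assumptions, not details from the question:

```ts
import { Server } from "socket.io";
import { createClient } from "redis";
import { createAdapter } from "@socket.io/redis-adapter";

// Run on every "gateway" instance: Socket.io then relays room/user emits
// between instances through Redis pub/sub, so any instance can reach any user.
const pubClient = createClient({ url: "redis://localhost:6379" });
const subClient = pubClient.duplicate();
await Promise.all([pubClient.connect(), subClient.connect()]);

const io = new Server(3000, { adapter: createAdapter(pubClient, subClient) });

io.on("connection", (socket) => {
  // Joining a per-user room lets any instance target this user by room name.
  socket.on("auth", (userId: string) => socket.join(`user:${userId}`));
});
```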
Edit: to send messages directly to the Socket.io Redis store without loading the whole socket.io library, the following package can apparently be used:
https://github.com/socketio/socket.io-redis-emitter
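Along the lines of that edit, using the emitter from a "chat" instance might look roughly like this (assuming the @socket.io/redis-emitter package together with node-redis; room and event names are illustrative):

```ts
import { createClient } from "redis";
import { Emitter } from "@socket.io/redis-emitter";

// A "chat" instance publishes straight into the Socket.io Redis channel;
// whichever "gateway" instance holds the user's socket delivers it.
const redisClient = createClient({ url: "redis://localhost:6379" });
await redisClient.connect();

const emitter = new Emitter(redisClient);

// Target the per-user room without knowing which gateway instance owns it.
emitter.to("user:42").emit("new-message", { roomId: "foo", body: "hello" });
```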

Max channel limit and max subscription limit

I am trying to use Redis pub/sub for managing WebSockets from a client chat app.
Every unique user gets a channel named uuid_channel, so I will have thousands of channels, which a group of servers (each holding open WebSocket connections to clients) are subscribed to.
When messages are published to these channels in Redis, the subscribers on a particular server will get the notification and send it back to the client via the WebSocket.
Is Redis pub/sub okay for this use case?
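One way to keep the per-server subscription count down, sketched here as an assumption rather than anything stated in the question, is a single pattern subscription covering every per-user channel (node-redis shown; the channel naming follows the uuid_channel scheme above):

```ts
import { createClient } from "redis";

// Open WebSockets held by this server, keyed by user uuid (filled in elsewhere).
const socketsByUuid = new Map<string, { send(data: string): void }>();

const sub = createClient({ url: "redis://localhost:6379" });
await sub.connect();

// One PSUBSCRIBE instead of thousands of SUBSCRIBEs; every server still sees
// every message and forwards only to users it actually holds a socket for.
await sub.pSubscribe("*_channel", (message, channel) => {
  const uuid = channel.replace(/_channel$/, "");
  socketsByUuid.get(uuid)?.send(message);
});
```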

RabbitMQ Connect/Disconnect Notifications

I am new to RabbitMQ and I am working on an application that will receive information from many devices and route all messages into a couple of queues depending on the MQTT topic. I was able to get all of this working easily, but now I am looking into how to push a message to a queue when a client connects or disconnects from RabbitMQ in order to update the current status of the client in my database. Is there a way to do this?
Event Exchange Plugin
Client connections, channels, queues, consumers, and other parts of the system naturally generate events. For example, when a connection is accepted, authenticated, and access to the target virtual host is authorised, it will emit an event of type connection_created. When a connection is closed or fails for any reason, a connection_closed event is emitted.
Unfortunately, the exchange created by rabbitmq_event_exchange appears only after the bindings from definition.json have been imported, which means that amq.rabbitmq.event cannot be bound to a queue via the configuration and must be bound after start-up.
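A start-up binding of that sort might look like the sketch below (amqplib assumed; the queue name and credentials are illustrative, with connection.created / connection.closed being the event types mentioned above):

```ts
import amqp from "amqplib";

// Bind a queue to the event exchange at application start-up, since the
// binding can't come from the imported definitions.
const conn = await amqp.connect("amqp://guest:guest@localhost:5672");
const ch = await conn.createChannel();

const { queue } = await ch.assertQueue("client-presence", { durable: false });

// amq.rabbitmq.event is a topic exchange; the routing key carries the event type.
await ch.bindQueue(queue, "amq.rabbitmq.event", "connection.created");
await ch.bindQueue(queue, "amq.rabbitmq.event", "connection.closed");

await ch.consume(queue, (msg) => {
  if (!msg) return;
  // The event's details (user, client address, ...) arrive as message headers.
  console.log(msg.fields.routingKey, msg.properties.headers);
  ch.ack(msg);
});
```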

How to check if the RabbitMQ server is alive using the REST API

I am totally new to the Spring framework. I am trying to create a project with connectivity to RabbitMQ, and even before I publish a message, I want to check whether the queues are alive or not. Is it possible to ping a queue to see if it is alive?
RabbitMQ has a management API. You can use it to check the status of queues, exchanges, and bindings.
If you are working in PHP, there is a library that can be used.
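For example, the aliveness check exposed by the management plugin can be hit with a plain HTTP request; the port and the guest/guest credentials below are the defaults of a local broker and are assumptions here:

```ts
// %2F is the URL-encoded default vhost "/".
const auth = Buffer.from("guest:guest").toString("base64");

// The aliveness test declares a test queue on the vhost, publishes and
// consumes a message, and answers {"status":"ok"} when the broker is healthy.
const res = await fetch("http://localhost:15672/api/aliveness-test/%2F", {
  headers: { Authorization: `Basic ${auth}` },
});
console.log(res.status, await res.json());

// A single queue can be inspected the same way:
// GET http://localhost:15672/api/queues/%2F/<queue-name>
```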

Using APNs in a messaging app

I'm working on a messaging app (something like WhatsApp) and I have a dilemma about implementing its main functionality: sending a message from client1 to client2.
The thing is, I'm using a centralized server design where clients use NSURLConnection to send messages to the server. The server doesn't keep and manage open sockets and can't push a message to one of the clients, so clients have a timer and query the server every 2 seconds to see if a new message is waiting for them.
The problem with this approach is that querying the server every 2 seconds seems to kill the battery very fast, so I thought that instead of the clients querying the server, I could use APNs: when client1 sends a message to the server, the server sends a push notification to client2, and client2 then fetches the data from the server.
Will this approach work for a massive messaging app with heavy push notification use?
Yes. I would say this approach is okay and will perform well.
You could also create a socket connection while your application is running in the foreground, but the APNs way (your preferred way) will also work when the user has quit your app.
APNs can handle huge load. There were only very few delays, as far as I noticed.
The push system on iOS is just an HTTP connection to Apple that keeps the response channel open for some hours (like loading a web page for some hours).
It will use around +10% of your battery.
So it's best not to create another keep-alive HTTP/socket connection, and to re-use Apple's channel (APNs) to save the end user's battery.
In your app you will receive the push notification, parse its JSON data, and then pull/sync with your own server.
You should also keep in mind what to do when your app is not running in the foreground (then you might display the received message as an APNs alert, as WhatsApp does).
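On the server side, the "wake client2 with a push, then let it pull the message" flow could be sketched like this, assuming a Node backend and the node-apn package; the key path, IDs, bundle identifier and payload shape are placeholders:

```ts
import apn from "apn";

// Token-based APNs provider; all credentials below are placeholders.
const provider = new apn.Provider({
  token: {
    key: "AuthKey_ABC123.p8",
    keyId: "ABC123",
    teamId: "TEAM123",
  },
  production: false,
});

// Called when client1's message has been stored for client2.
async function notifyRecipient(deviceToken: string, roomId: string) {
  const note = new apn.Notification();
  note.alert = "You have a new message";
  note.payload = { roomId }; // just enough for the app to know what to fetch
  note.topic = "com.example.chat"; // the app's bundle id (placeholder)

  const result = await provider.send(note, deviceToken);
  console.log(`sent: ${result.sent.length}, failed: ${result.failed.length}`);
}
```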