I am trying to use Redis pub/sub to manage websockets for a client chat app.
Each unique user gets a channel named uuid_channel, so there will be thousands of channels. A group of servers, each holding open websocket connections to clients, subscribes to these channels.
When a message is published to one of these channels in Redis, the subscriber on the server holding that user's connection gets the notification and forwards it to the client over the websocket.
Is Redis pub/sub OK for this use case?
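For scale, the detail that matters is that each server should only SUBSCRIBE to the channels of its own connected users, so a published message only wakes the servers that can actually deliver it. A rough sketch of that per-server routing layer (names like `LocalRouter` are illustrative, and the Redis client calls are left as comments):

```python
# Sketch of the per-server routing layer for the "one channel per user" design.
# The Redis SUBSCRIBE/UNSUBSCRIBE calls are shown as comments; this class is
# the local bookkeeping that sits between the Redis subscriber and websockets.

class LocalRouter:
    """Tracks which user channels have a websocket on *this* server instance."""

    def __init__(self):
        self._sockets = {}  # channel name -> websocket-like object

    def channel_for(self, user_id):
        return f"{user_id}_channel"

    def register(self, user_id, websocket):
        # Called when a client opens a websocket to this server.
        # In the real system you would also SUBSCRIBE to this channel in Redis,
        # so this server's subscription set mirrors its connected users.
        self._sockets[self.channel_for(user_id)] = websocket

    def unregister(self, user_id):
        # Called on disconnect; also UNSUBSCRIBE in Redis so dead channels
        # stop generating traffic to this server.
        self._sockets.pop(self.channel_for(user_id), None)

    def on_redis_message(self, channel, payload):
        # Callback for the Redis subscriber loop. Returns True if this server
        # held the target websocket and delivered the payload.
        ws = self._sockets.get(channel)
        if ws is not None:
            ws.send(payload)
            return True
        return False
```

The key invariant is that subscribe/unsubscribe tracks connect/disconnect exactly; with that in place, pub/sub fan-out stays proportional to where clients actually are.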
Related
I have multiple gRPC server instances running behind a load balancer and a large number of clients, each of which subscribes to one of the instances.
I have a use case where I need to stream a message from the server to a group of clients, and I am wondering: can I store all of the clients' streams in a central DB, i.e. Redis, so that when I want to stream a message, one of the instances fetches all the stream connections belonging to that client group and uses them to send the message?
I have a Nuxt.js frontend app and a PHP backend running on a different server.
I'm setting up real-time chat. The backend publishes to a Redis server once a new message has been sent (i.e. "hey, I received a new message for room 'foo', go fetch it and notify the other recipients"). So I'm supposed to subscribe to a specific Redis channel and then notify the others.
My thinking is more about the way I should approach this.
Do I need something like socket.io to talk to my Redis server?
Should I just use redis.createClient() to create a Redis client and then subscribe (dynamically) to each new room I'm already in or get added to?
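The dynamic subscribe/unsubscribe bookkeeping described above is small enough to sketch on its own (shown in Python; the same shape applies with Node's redis.createClient(). The `room:<name>` channel naming is illustrative, not a required convention):

```python
# Tracks which room channels this process has already subscribed to, so we
# issue SUBSCRIBE/UNSUBSCRIBE exactly once per room membership change.

class RoomSubscriptions:
    def __init__(self):
        self._channels = set()

    def join(self, room):
        """Return the Redis channel to SUBSCRIBE to, or None if already subscribed."""
        channel = f"room:{room}"
        if channel in self._channels:
            return None
        self._channels.add(channel)
        return channel

    def leave(self, room):
        """Return the channel to UNSUBSCRIBE from, or None if we never subscribed."""
        channel = f"room:{room}"
        if channel in self._channels:
            self._channels.remove(channel)
            return channel
        return None
```

Whatever library sits in front of the websocket, the backend side only needs a plain Redis subscriber plus this kind of membership set.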
I am new to RabbitMQ and I am working on an application that will receive information from many devices and route all messages into a couple of queues depending on the MQTT topic. I was able to get all of this working easily, but now I am looking into how to push a message to a queue when a client connects or disconnects from RabbitMQ in order to update the current status of the client in my database. Is there a way to do this?
Event Exchange Plugin
Client connections, channels, queues, consumers, and other parts of the system naturally generate events. For example, when a connection is accepted, authenticated, and access to the target virtual host is authorised, it emits an event of type connection_created. When a connection is closed or fails for any reason, a connection_closed event is emitted.
Unfortunately, the rabbitmq_event_exchange is created after bindings are imported from definitions.json, which means the amq.rabbitmq.event exchange cannot be bound to a queue via the configuration and must be bound after startup.
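Once a queue is bound (after startup) to the topic exchange `amq.rabbitmq.event` with a routing key like `connection.#`, the consumer's job reduces to mapping event routing keys to status updates. This sketch is framework-agnostic pure logic; using the `name` header as the client identifier is an assumption — inspect the headers your broker actually emits and pick the property that identifies your devices:

```python
# Hypothetical consumer callback for a queue bound to "amq.rabbitmq.event"
# with routing key "connection.#". The event type IS the routing key
# ("connection.created" / "connection.closed"); client metadata arrives in
# the message headers.

def handle_event(routing_key, headers, statuses):
    client = headers.get("name")  # assumption: connection name identifies the device
    if routing_key == "connection.created":
        statuses[client] = "online"
    elif routing_key == "connection.closed":
        statuses[client] = "offline"
    return statuses
```

In a real consumer this function would be wired into an AMQP client's delivery callback and `statuses` would be your database write.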
I have TCP server (listener) software written in C#. Many devices (approximately 5000) will connect to the server asynchronously and send/receive messages to/from it. Now, I have two questions.
I have to send a reply to every received message. Which approach should I use: asynchronous (reply as soon as the message is received) or synchronous (sending replies from a dedicated reply task)?
How can I stress test my server? I can communicate with 1-2 computers successfully, but I don't know whether my software will hold up with 5000 devices.
Judging from what you're saying, your server or listener is expected to be available to respond to multiple requests at any given time. The key is how it has been implemented. Does the server support multi-client response; in other words, can it fulfill requests from multiple clients at the same time, perhaps using multiple threads? Or does it use a queue to keep track of all requests and then serve them in an orderly fashion, or some other method to serve requests?
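For the stress-test half of the question, one low-effort starting point is an in-process harness: a loopback echo listener plus N concurrent clients doing request/reply round trips. Scale N toward your target, then run clients from several machines once you hit per-host socket limits. A minimal sketch (in Python for brevity; the same shape works in C# with async sockets — the echo reply protocol is an assumption):

```python
import socket
import threading

def _echo_conn(conn):
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

def _accept_loop(listener):
    while True:
        try:
            conn, _ = listener.accept()
        except OSError:
            return  # listener closed: shut down
        threading.Thread(target=_echo_conn, args=(conn,), daemon=True).start()

def run_clients(port, n_clients, n_messages):
    """Each simulated device opens one connection and does n_messages round trips."""
    replies = []  # list.append is atomic in CPython, safe as a shared counter
    def client(i):
        with socket.create_connection(("127.0.0.1", port)) as s:
            for m in range(n_messages):
                msg = f"dev{i}-msg{m}".encode()
                s.sendall(msg)
                buf = b""
                while len(buf) < len(msg):  # TCP may split the echo
                    buf += s.recv(1024)
                if buf == msg:
                    replies.append(1)
    threads = [threading.Thread(target=client, args=(i,)) for i in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(replies)

def stress(n_clients=50, n_messages=5):
    """Returns how many round trips succeeded; expect n_clients * n_messages."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    listener.listen(128)
    port = listener.getsockname()[1]
    threading.Thread(target=_accept_loop, args=(listener,), daemon=True).start()
    try:
        return run_clients(port, n_clients, n_messages)
    finally:
        listener.close()
```

Point the clients at your real C# server instead of the built-in echo listener to measure it; watch for accept backlog drops and reply latency as N grows.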
I'm trying to get some feedback on recommendations for a service 'roster' in my specific application. I have a server app that maintains persistent socket connections with clients. I want to develop the server further to support distributed instances: server "A" would need to be able to broadcast data to the other online server instances, and the same goes for all other active instances.
Options I am trying to research:
Redis / ZooKeeper / Doozer - Each server instance would register itself with the configuration server, and all connected servers would receive configuration updates as they change. What then?
Maintain end-to-end connections with each server instance and iterate over the list for each outgoing message?
Some custom UDP multicast, but I would need to roll my own added reliability on top of it.
Custom message broker - A service that runs and maintains a registry as each server connects and informs it. Maintains a connection with each server to accept data and re-broadcast it to the other servers.
Some reliable UDP multicast transport where each server instance just broadcasts directly and no roster is maintained.
Here are my concerns:
I would love to avoid relying on external apps like ZooKeeper or Doozer, but I would obviously use them if they're the best solution.
With a custom message broker, I wouldn't want it to become a throughput bottleneck, which might mean I also have to run multiple message brokers and use a load balancer when scaling?
Multicast doesn't require any external processes if I manage to roll my own, but otherwise I would maybe need to use ZMQ, which again leaves me with a dependency.
I realize that I am also talking about message delivery, but it goes hand in hand with the solution I go with.
By the way, my server is written in Go. Any ideas on a best recommended way to maintain scalability?
* EDIT of goal *
What I am really asking is what is the best way to implement broadcasting data between instances of a distributed server given the following:
Each server instance maintains persistent TCP socket connections with its remote clients and passes messages between them.
Messages need to be able to be broadcast to the other running instances so they can be delivered to relevant client connections.
Low latency is important because the messaging can be high speed.
Ordering and reliability are important.
* Updated Question Summary *
If you have multiple servers / multiple end points that need to pub/sub between each other, what is a recommended mode of communication between them? One or more message brokers to re-pub messages to a roster of the discovered servers? Reliable multicast directly from each server?
How do you connect multiple end points in a distributed system while keeping latency low, speed high, and delivery reliable?
Assuming all of your client-facing endpoints are on the same LAN (which they can be for the first reasonable step in scaling), reliable UDP multicast would allow you to send published messages directly from the publishing endpoint to any of the endpoints who have clients subscribed to the channel. This also satisfies the low-latency requirement much better than proxying data through a persistent storage layer.
Multicast groups
A central database (say, Redis) could track a map of multicast groups (IP:PORT) <--> channels.
When an endpoint receives a new client with a new channel to subscribe to, it can ask the database for the channel's multicast address and join the multicast group.
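As a sketch of that registry: the channel → group mapping can live in a Redis hash (allocated with something like HSETNX so two endpoints racing on a new channel agree on one group). The in-memory stand-in below shows the allocation flow, drawing addresses from the administratively scoped 239.0.0.0/8 multicast range; the pool size and port are illustrative:

```python
# In-memory stand-in for the central channel -> multicast group map.
# In production this lives in Redis so all endpoints see the same allocation.

MULTICAST_POOL = [f"239.1.1.{i}:5000" for i in range(1, 255)]  # 239/8 = admin-scoped

class GroupRegistry:
    def __init__(self):
        self._by_channel = {}
        self._next = 0

    def group_for(self, channel):
        # First request for a channel allocates the next group; later requests
        # (from any endpoint) get the same IP:PORT and simply join that group.
        if channel not in self._by_channel:
            self._by_channel[channel] = MULTICAST_POOL[self._next % len(MULTICAST_POOL)]
            self._next += 1
        return self._by_channel[channel]
```

An endpoint would then join the returned group with a standard IP_ADD_MEMBERSHIP socket option on its UDP socket.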
Reliable UDP multicast
When an endpoint receives a published message for a channel, it sends the message to that channel's multicast socket.
Message packets will contain ordered identifiers per server per multicast group. If an endpoint receives a message without having received the previous message from a server, it will send a "not acknowledged" (NAK) message back to the publishing server for any messages it missed.
The publishing server tracks a list of recent messages, and resends NAK'd messages.
To handle the edge case of a server sending only one message that fails to reach another server, servers can periodically send a packet count to the multicast group covering the lifetime of their NAK queue ("I've sent 24 messages"), giving other servers a chance to NAK messages they missed.
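The receiver-side gap detection and sender-side resend buffer described above are small pieces of bookkeeping; a sketch (class and field names are illustrative, and the wire format, retransmission timers, and buffer aging policy are all left out):

```python
class NakTracker:
    """Receiver side: detect gaps in per-server sequence numbers."""

    def __init__(self):
        self._next_expected = {}  # server id -> next sequence number expected

    def receive(self, server, seq):
        """Return the list of missing sequence numbers to NAK (empty if in order)."""
        expected = self._next_expected.get(server, seq)  # first packet sets the baseline
        missing = list(range(expected, seq))
        self._next_expected[server] = max(expected, seq + 1)
        return missing

class ResendBuffer:
    """Sender side: keep recent messages so NAK'd sequences can be resent."""

    def __init__(self, capacity=1024):
        self._capacity = capacity
        self._messages = {}  # seq -> payload
        self._seq = 0

    def publish(self, payload):
        """Record a message and return the sequence number it was sent with."""
        self._messages[self._seq] = payload
        if len(self._messages) > self._capacity:
            self._messages.pop(min(self._messages))  # drop the oldest entry
        self._seq += 1
        return self._seq - 1

    def resend(self, seq):
        return self._messages.get(seq)  # None if it aged out of the buffer
```

The unavoidable policy question is what a receiver does when `resend` returns None on the sender, i.e. the message aged out; that is exactly the class of problem protocols like PGM already answer.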
You might want to just implement PGM (Pragmatic General Multicast) instead.
Persistent storage
If you do end up storing data long-term, storage services can join the multicast groups just like endpoints... but store the messages in a database instead of sending them to clients.