I found this answer, which explains how to broadcast a message from a controller to a channel:
How to broadcast a message from a Phoenix Controller to a Channel?
What I need, however, is to publish a message to a channel from a Mix task or potentially a Toniq worker. In this case the publishing process may or may not be on the same machine.
Now, if I understand correctly, channels either use some in-memory pub/sub or can be backed by other distributed queues.
If this is correct, how could I have the channels backed by Redis and have my worker/task publish to Redis in a way that triggers a broadcast to the connected sockets?
Related
I'm looking at turning a monolithic application into a microservice-oriented application, and in doing so I will need a robust messaging system for inter-process communication. The idea is for the microservice processes to be run on a cluster of servers for HA, with the requests to be processed added to a message queue that all the applications can access. I'm looking at using Redis both as a KV store for transient data and as a message broker, via the ServiceStack framework for .NET, but I worry that the eventual consistency applied by Redis will make processing of the requests unreliable. This is how I understand Redis to function with regard to MQ:
Client 1 posts a request to a queue on node 1
Node 1 will inform all listeners on that queue, using pub/sub, of the existence of the request and will also push the request to node 2 asynchronously.
The listeners on node 1 will pull the request from the node; only one of them will obtain it, as it should be. An update about the removal of the request is sent to node 2 asynchronously but will take some time to arrive.
The initial request is received by node 2 (assuming a bit of delay in RTT), which will go ahead and inform the listeners connected to it using pub/sub. Before the update from node 1 regarding the removal of the request from the queue arrives, a listener on node 2 may also pull the request. The result is that two listeners end up processing the same request, which would cause havoc in our system.
Is there anything in Redis or in the ServiceStack Redis MQ implementation that would prevent the described scenario from occurring? Or is there something else about replication in Redis that I have misunderstood? Or should I abandon the Redis/ServiceStack approach for MQ and instead use something like RabbitMQ, which I understand to be ACID-compliant?
It's not possible for the same message to be processed twice in Redis MQ, as the message worker pops the message off the Redis-List-backed MQ, and all Redis operations are atomic, so no other message worker will have access to messages that have already been removed from the List.
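For illustration, here is a minimal sketch of that atomicity using the Jedis client (not ServiceStack.Redis) and a made-up queue key; BRPOP removes an element atomically, so each message is handed to exactly one of the competing workers:

import redis.clients.jedis.Jedis;
import java.util.List;

public class Worker implements Runnable {
    private final String name;

    public Worker(String name) { this.name = name; }

    @Override
    public void run() {
        // Each worker uses its own connection; Jedis instances are not thread-safe.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            while (true) {
                // BRPOP blocks until an element is available and removes it atomically,
                // so a given message is delivered to exactly one of the competing workers.
                List<String> popped = jedis.brpop(0, "mq:requests");
                System.out.println(name + " processing " + popped.get(1));
            }
        }
    }

    public static void main(String[] args) {
        new Thread(new Worker("worker-1")).start();
        new Thread(new Worker("worker-2")).start();
    }
}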
ServiceStack.Redis (which Redis MQ uses) only supports Redis Sentinel for HA. Although Redis supports multiple replicas, they only contain a read-only view of the master dataset, so all write operations, such as List add/remove operations, can only happen on the single master instance.
One notable difference between using Redis MQ and a purpose-built MQ like RabbitMQ is that Redis doesn't support ACKs: if the message worker process that pops the message off the MQ crashes, its message is lost. With RabbitMQ, if the stateful connection holding an un-ACK'd message dies, the message is restored by the RabbitMQ server back to the queue.
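To make the contrast concrete, here is a hedged sketch of a consumer using the RabbitMQ Java client with manual acknowledgements (the queue name and processing step are placeholders): if the consumer dies before basicAck is sent, the broker requeues the message for another consumer, which is the behaviour Redis MQ cannot provide.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class AckedConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("work", true, false, false, null);

        boolean autoAck = false; // manual acknowledgements
        channel.basicConsume("work", autoAck, (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), "UTF-8");
            process(body); // if the process crashes before the ack below, the broker redelivers the message
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, consumerTag -> { /* consumer cancelled */ });
    }

    private static void process(String body) {
        System.out.println("processing " + body);
    }
}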
I am trying to create a subscriber in my Spring Boot application. My objective is that the publisher will send multiple messages to a topic and I have to receive those messages and process them. I noticed that the "handleMessage" of both Paho and Apache ActiveMQ processes one message at a time. Is it possible to make it concurrent?
I have tried the following
Replaced Paho with ActiveMQ
Provided concurrency in my listener container
Provided prefetch in my subscribe URL
Please let me know if there is any way to make my MQTT subscriber process multiple messages concurrently.
Thank You
If you supply your own thread pool, you can have the handleMessage method hand each incoming message off to the pool for processing and then immediately move on to hand the next message to the pool.
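As a rough sketch of that idea with the Paho Java client (topic name and pool size are just placeholders), messageArrived only submits the work to an ExecutorService and returns, so the client can deliver the next message while earlier ones are still being processed:

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentSubscriber {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8); // placeholder pool size
        MqttClient client = new MqttClient("tcp://localhost:1883", "concurrent-subscriber");

        client.setCallback(new MqttCallback() {
            @Override
            public void connectionLost(Throwable cause) { }

            @Override
            public void messageArrived(String topic, MqttMessage message) {
                byte[] payload = message.getPayload();
                // Hand the message to the pool and return immediately so the
                // next message can be delivered while this one is still processing.
                pool.submit(() -> process(topic, payload));
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) { }
        });

        client.connect();
        client.subscribe("my/topic"); // placeholder topic
    }

    private static void process(String topic, byte[] payload) {
        System.out.println(topic + ": " + new String(payload));
    }
}

One trade-off to keep in mind: once messageArrived returns, the client considers the message handled, so work still queued in the pool can be lost if the process crashes.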
I am working on Redis SMQ persistence. My question is this: while the publisher is publishing messages, the consumer stops suddenly. When the consumer connects again, is it possible to resume consuming messages from where it stopped?
No - Redis' Pub/Sub has no persistence, and once a message has been published, it is sent only to the connected subscribed clients. Afterwards, the message is gone forever.
With standard Pub/Sub you can use Lua scripts to persist your messages: check whether the channel has any listeners, and if it doesn't, store the message in Redis under a key derived from the channel. When the subscriber comes back, it checks whether anything is waiting for it under that channel key. A second option is to use Redis Streams. Check this gist.
Use two Redis connections: one for Pub/Sub and a second for LPOP/RPOP.
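A hedged sketch of that pattern with Jedis (key and channel names are invented): the publisher LPUSHes each message onto a list and PUBLISHes a wake-up notification, while the consumer always reads the actual messages from the list with RPOP, both when it reconnects and whenever a notification arrives, so nothing published during downtime is lost.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class CatchUpConsumer {
    // Pop and handle everything currently waiting in the list.
    static void drain(Jedis jedis) {
        String msg;
        while ((msg = jedis.rpop("backlog:events")) != null) {
            System.out.println("got " + msg);
        }
    }

    public static void main(String[] args) {
        // Connection 1: used for RPOP; catches up on anything published while we were down.
        Jedis worker = new Jedis("localhost", 6379);
        drain(worker);

        // Connection 2: dedicated to Pub/Sub, since subscribe() blocks the connection.
        // The channel only carries "wake up" notifications; the list holds the real messages.
        Jedis pubsub = new Jedis("localhost", 6379);
        pubsub.subscribe(new JedisPubSub() {
            @Override
            public void onMessage(String channel, String message) {
                drain(worker);
            }
        }, "events:notify");
    }
}

On the publishing side, the corresponding calls would be jedis.lpush("backlog:events", payload) followed by jedis.publish("events:notify", "new").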
The idea is:
I have N WCF services connected and subscribed to the same Redis message channel. These services use this channel to exchange messages in order to sync some caches and other data.
How can each service ignore its own messages? I.e., how do I publish to everyone but myself?
It looks like Redis Pub/Sub doesn't support such filtering, so the solution is to use a set of individual channels, one per publisher, plus a common channel to synchronize the subscriptions between them. Here is a golang example of a no-echo chat application.
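A sketch of that idea with Jedis (the channel naming is invented for the example): every service publishes to its own channel under a shared prefix, pattern-subscribes to the prefix, and simply skips anything that arrives on its own channel.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

import java.util.UUID;

public class NoEchoService {
    public static void main(String[] args) {
        String ownChannel = "cache-sync:" + UUID.randomUUID();

        // Publishing connection: each service writes only to its own channel.
        Jedis publisher = new Jedis("localhost", 6379);
        publisher.publish(ownChannel, "invalidate:user:42"); // example message

        // Subscribing connection: listen to every publisher's channel via a pattern
        // and ignore anything that came from our own channel.
        Jedis subscriber = new Jedis("localhost", 6379);
        subscriber.psubscribe(new JedisPubSub() {
            @Override
            public void onPMessage(String pattern, String channel, String message) {
                if (ownChannel.equals(channel)) {
                    return; // our own message: skip it
                }
                System.out.println("sync from " + channel + ": " + message);
            }
        }, "cache-sync:*");
    }
}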
I am looking to implement RabbitMQ on Google Compute Engine to handle messages for my Android and iOS messaging app. I have heard that RabbitMQ can be quite power-hungry, so I am wondering what the best solution to combat this is.
Do I use a different protocol like MQTT, or do I use something like GCM to handle the connection to and from the apps and let RabbitMQ just handle queuing the messages?
You would never want to make a direct connection from a mobile device to your RabbitMQ server, especially if the app on the device is a consumer. A RabbitMQ consumer has to keep a connection open (or poll continuously) to check whether messages are pending for it, which is expensive on a mobile device. You would want a web server to handle the actual HTTP POST/GET of messages from devices. The web server will do two things:
Save the message to DB (along with the source and intended destination info)
Queue APN/GCM push messages to a RabbitMQ exchange (RabbitMQ being the broker here)
You will need to build a daemon to monitor RabbitMQ for these queued push messages. The daemon's sole task would be to maintain a connection to Apple's or Google's push messaging services and notify your apps that they have a message pending. If a device is notified of a pending message, it contacts the web server to consume the message.
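A hedged sketch of both halves with the RabbitMQ Java client (the queue name and the push call are placeholders): the web server publishes one entry per pending message after saving it to the DB, and the daemon consumes those entries and forwards a wake-up to APNs/GCM.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class PushPipeline {

    // Web server side: called after the message has been saved to the DB.
    static void enqueuePushNotification(Channel channel, String deviceToken) throws Exception {
        channel.basicPublish("", "push-notifications",
                MessageProperties.PERSISTENT_TEXT_PLAIN,
                deviceToken.getBytes("UTF-8"));
    }

    // Daemon side: watch the queue and forward each entry to the push service.
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("push-notifications", true, false, false, null);

        channel.basicConsume("push-notifications", true, (consumerTag, delivery) -> {
            String deviceToken = new String(delivery.getBody(), "UTF-8");
            // Placeholder for the call that tells APNs or GCM/FCM to notify the device
            // that a message is waiting for it on the web server.
            System.out.println("notify device " + deviceToken);
        }, consumerTag -> { });
    }
}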