I am looking to implement RabbitMQ on Google Compute Engine to handle messages for my Android and iOS messaging app. I have heard that RabbitMQ can be quite power hungry, so I am wondering what the best way to combat this is.
Do I use a different protocol like MQTT, or do I use something like GCM to handle the connection to and from the apps and let RabbitMQ just handle queuing the messages?
You would never want to make a direct connection from a mobile device to your RabbitMQ server, especially if the app on the device is a consumer. A RabbitMQ consumer has to keep a connection to the broker open (or poll it repeatedly) to learn about pending messages, and that is exactly the kind of always-on network activity that drains a phone's battery. You would want a web server to handle the actual HTTP POST/GET of messages from devices. The web server will do two things:
Save the message to the DB (along with the source and intended destination info)
Queue an APNs/GCM push notification to a RabbitMQ exchange (RabbitMQ being the broker here)
You will need to build a daemon to monitor RabbitMQ for these queued push notifications. The daemon's sole task is to maintain a connection to Apple's and Google's push messaging services and notify your apps that they have a message pending. When a device is notified of a pending message, it contacts the web server to consume the message.
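A minimal sketch of that pipeline, assuming Python with the pika client; the queue name push_notifications, the job fields, and send_push_notification are made-up illustrations, not part of any existing API:

    import json
    import pika

    # Connect to RabbitMQ (assumes a broker on localhost with default credentials)
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="push_notifications", durable=True)  # survives broker restarts

    # --- web server side: called after the message has been saved to the DB ---
    def enqueue_push_job(device_token, platform, message_id):
        job = {"device_token": device_token,   # APNs/GCM token of the recipient
               "platform": platform,           # "ios" or "android"
               "message_id": message_id}       # DB id the device fetches later over HTTP
        channel.basic_publish(exchange="",     # default exchange routes by queue name
                              routing_key="push_notifications",
                              body=json.dumps(job),
                              properties=pika.BasicProperties(delivery_mode=2))  # persistent

    # --- daemon side: holds the connection to APNs/GCM and forwards wake-up pings ---
    def send_push_notification(job):
        # hypothetical stand-in for the real APNs/GCM call
        print("would notify", job["platform"], "device", job["device_token"])

    def on_push_job(ch, method, properties, body):
        send_push_notification(json.loads(body))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="push_notifications",
                          on_message_callback=on_push_job, auto_ack=False)
    # channel.start_consuming()  # the daemon would block here, consuming push jobs

The device itself never speaks AMQP; it only receives the push notification and then fetches the actual message from the web server over HTTP.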
We're using RabbitMQ in a new project. We'll have IoT devices communicating with queues.
For the devices to send info to the cloud we don't see any issues; however, sometimes we need to deliver messages from our backend to the IoT devices. For this we let the devices open an exclusive queue. This works perfectly as long as the devices are online. When they aren't, the queue is closed and no messages can be sent to it anymore.
Is there a way to keep the queue open, so messages are kept until the IoT device comes back online?
Vice versa: is there some way to have guaranteed delivery starting at the IoT device? For example: energy measurements every 15 minutes. If the connection drops, messages should be stored on disk (to prevent message loss in case of a power cut) and sent later once the connection comes back online. Does a service or client library exist that implements this, or do we need to develop this ourselves?
Is there a way to keep the queue open, so messages are kept until the IoT device comes back online?
Use a regular queue, and make sure it's durable if you'd like it to survive RabbitMQ restarts.
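For example, with the Python pika client (the queue name is illustrative), a durable, non-exclusive, non-auto-delete queue stays on the broker and buffers messages while the device is offline:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Not exclusive and not auto-delete: the queue survives the device disconnecting.
    # durable=True keeps the queue definition across broker restarts; publish the
    # messages with delivery_mode=2 if they must survive a restart as well.
    channel.queue_declare(queue="device-42-commands",  # hypothetical per-device queue
                          durable=True,
                          exclusive=False,
                          auto_delete=False)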
Is there some way to have guaranteed delivery starting at the IoT device?
That depends on the library you are using, but you don't say which library or protocol you're using (AMQP vs. MQTT, for instance).
Some libraries offer automatic reconnect and re-creation of topology (queues, exchanges, etc.), but I'm not aware of any that offer local storage of messages until the broker is available again. You'll have to code that yourself.
Please carefully read the documentation on publisher confirmations and consumer acknowledgements, as both are necessary for reliable messaging.
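As an illustration, with the Python pika client the two mechanisms look roughly like this (queue name and payload are made up):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="measurements", durable=True)

    # Publisher confirmations: after confirm_delivery(), basic_publish raises if the
    # broker cannot confirm (or route, with mandatory=True) the message.
    channel.confirm_delivery()
    channel.basic_publish(exchange="",
                          routing_key="measurements",
                          body=b'{"kwh": 1.25}',
                          properties=pika.BasicProperties(delivery_mode=2),  # persistent
                          mandatory=True)

    # Consumer acknowledgements: ack only after processing, so an unprocessed message
    # is re-delivered if the consumer dies.
    def handle(ch, method, properties, body):
        print("processing", body)                       # your own processing here
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="measurements", on_message_callback=handle, auto_ack=False)
    channel.start_consuming()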
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Our cloud has several exchanges and a set of credentials, called a CredentialsBucket, assigned to each set of IoT devices. When an IoT device registers, we provide it these credentials, which include a durable queue and exchange. When the IoT device pushes messages, they go to the cloud through the exchange, where we do an additional security check using HMAC.
When the cloud sends a message, it sends it directly to the device's queue (no persistent messages in our case), and the IoT device does the same kind of security check.
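The HMAC check itself can be sketched with Python's standard hmac module; the shared secret and message layout here are illustrative assumptions, not something RabbitMQ provides:

    import hashlib
    import hmac

    SHARED_SECRET = b"per-device-secret-from-the-credentials-bucket"  # assumption

    def sign(payload: bytes) -> str:
        return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

    def verify(payload: bytes, received_signature: str) -> bool:
        # constant-time comparison to avoid timing attacks
        return hmac.compare_digest(sign(payload), received_signature)

    # The device sends (payload, signature); the cloud drops the message if verify()
    # fails, and the device runs the same check on messages coming from the cloud.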
I have a webpage connecting to a RabbitMQ broker using JavaScript/WebSockets, exposed by a Spring app deployed in Tomcat. Messages are produced at 1 per second by an external application and are rendered on the webpage. The JavaScript subscription is durable.
The issue I'm experiencing is that when the network connection is broken on the JavaScript client for a period of time (say 60 seconds), the first ~24 seconds of messages are missing. I've looked through the logs of the app deployed in Tomcat, and the missing messages seem to be up until the following log statement:
org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler - DEBUG - TCP connection to broker closed in session 14
I think this is the point at which the endpoint realises the JavaScript client is disconnected and decides to close the connection to the broker, resulting in future messages queueing up.
My question is: how can I ensure that the messages sent between the time the network is severed and the time the endpoint realises the client is disconnected are not lost? Should the endpoint put the messages back on the queue somehow? Maybe there's a way to make it transactional?
Thanks in advance.
The RabbitMQ team monitors this mailing list and only sometimes answers questions on StackOverflow.
Your Tomcat application should not acknowledge messages from RabbitMQ until it confirms that your JavaScript client has received them. This way, any messages that aren't ack-ed by the JS client won't be ack-ed by Tomcat, and RabbitMQ will re-deliver them.
I don't know how your JS app and Tomcat interact, but you may have to implement your own ack process there.
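Conceptually (sketched here with Python's pika rather than Spring, and with a made-up queue name and forwarding function) the consumer side looks like this:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    def forward_to_websocket_client(body) -> bool:
        # Hypothetical stand-in: should return True only once the JS client has
        # acknowledged the frame over its WebSocket session.
        return False

    def on_message(ch, method, properties, body):
        if forward_to_websocket_client(body):
            ch.basic_ack(delivery_tag=method.delivery_tag)    # safe to drop it now
        else:
            # Client unreachable: leave the message on the queue for re-delivery
            ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

    channel.basic_consume(queue="events", on_message_callback=on_message, auto_ack=False)
    channel.start_consuming()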
I wish to run an experiment in which the publisher loses connection with the broker, enqueues messages in its own local queue, and then, when it regains connectivity, sends all its queued messages to the broker. How can I do this, since if I call close connection I can no longer send (it raises an exception)? A trick I can think of is to use a network of two brokers and simulate the above by breaking the connection between the two brokers. Is there an API call that I can use to do this?
This is very much like Facebook Messenger or WhatsApp acting as a publisher, enqueuing our to-send messages while we are offline and sending them once we are connected.
There are plenty of solutions you could use to break the connection for testing; here is a non-comprehensive list:
Make a script that can set/unset a firewall rule on your environment, blocking the connection port
If you are working with VMs, you can suspend/resume the one running ActiveMQ; you can even automate it with tools like Vagrant (vagrant suspend, then vagrant up)
Tweak the connection manually by accessing the ActiveMQ JMX console
Develop an ActiveMQ plugin able to kill connections on demand (or maybe one already exists?)
Now, in order to get the behavior you wish, there are two options:
1) Make sure your connection uses failover so it can be re-established, and store your messages on disk before sending them with your producer (a rough sketch of this follows below).
2) Produce to a local broker embedded in your app, and connect this one to the remote broker.
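A sketch of option 1, assuming a Python publisher talking STOMP to ActiveMQ through the stomp.py library; the buffer file and destination name are illustrative:

    import os
    import stomp

    BUFFER_FILE = "outbox.log"           # local on-disk buffer (assumption)
    DESTINATION = "/queue/measurements"  # illustrative queue name

    def buffer_locally(message: str):
        # Write to disk first so a crash or power cut cannot lose the message
        with open(BUFFER_FILE, "a") as f:
            f.write(message + "\n")

    def flush_buffer(conn: stomp.Connection):
        # Once connectivity is back, replay everything and clear the buffer.
        # A failure mid-flush keeps the file, so duplicates are possible:
        # this is at-least-once delivery, not exactly-once.
        if not os.path.exists(BUFFER_FILE):
            return
        with open(BUFFER_FILE) as f:
            for line in f:
                conn.send(destination=DESTINATION, body=line.strip())
        os.remove(BUFFER_FILE)

    def publish(conn: stomp.Connection, message: str):
        buffer_locally(message)
        try:
            if not conn.is_connected():
                conn.connect(wait=True)   # reconnect attempt
            flush_buffer(conn)
        except Exception:
            pass                          # broker unreachable: buffer is kept for later

    conn = stomp.Connection([("localhost", 61613)])  # ActiveMQ STOMP connector
    publish(conn, '{"kwh": 1.25}')

Option 2 achieves the same effect by letting the embedded local broker's own persistent store play the role of the buffer file.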
The idea is:
I have N WCF services that are connected and subscribed to the same Redis message channel. These services use this channel to exchange messages to sync some caches and other data.
How can each service ignore its own messages? I.e., how do I publish to all but me?
It looks like Redis PUB/SUB doesn't support such filtering, so the solution is to use a set of individual channels, one per publisher, and a common channel for synchronizing the subscriptions between them. Here is a Go example of a no-echo chat application.
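A variant of that idea, sketched here in Python with redis-py rather than Go: every service publishes on its own channel under a shared prefix, pattern-subscribes to the prefix, and skips whatever arrives on its own channel (the sync: prefix and the handler are made up):

    import uuid
    import redis

    MY_CHANNEL = "sync:" + uuid.uuid4().hex   # this instance's own publish channel

    r = redis.Redis()
    pubsub = r.pubsub()
    pubsub.psubscribe("sync:*")               # listen to every publisher's channel

    def publish(message: str):
        r.publish(MY_CHANNEL, message)        # others receive it, we will ignore it

    def handle(data):
        print("cache-sync message from another instance:", data)

    def listen():
        for item in pubsub.listen():
            if item["type"] != "pmessage":
                continue                      # skip subscription confirmations
            if item["channel"].decode() == MY_CHANNEL:
                continue                      # ignore our own messages
            handle(item["data"])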
I'm currently in the middle of developing a web application which needs a WebSocket connection to receive notifications of events from the server.
The clients are separated in groups and all the clients in a group must receive the same event notifications.
I thought that ActiveMQ could probably support this model, using a different queue for each group of clients. It would also be relatively easy to push events to ActiveMQ using STOMP, and then use STOMP-over-WebSockets for the clients.
The problem I see is that messages should not be consumed by only one client, but distributed to all the clients connected to the queue.
Also, the messages should not be stored: if a client is not connected when the event is generated, it should never receive it.
I don't know ActiveMQ very well, so I'm not sure if this is possible or if there is another easy solution I could use instead of writing my own message server.
Thanks
ActiveMQ 5.4.1 supports WebSockets natively (just like Stomp, JMS, etc.).
There is the concept of queues (you mentioned these), but also of topics. In a queue, a single message will be received by exactly one consumer; in a topic it goes to all the subscribers. See: http://activemq.apache.org/how-does-a-queue-compare-to-a-topic.html
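For instance, over STOMP an ActiveMQ topic is addressed with the /topic/ prefix. A rough Python sketch using the stomp.py library (the group/topic name is made up; browser clients would do the equivalent over STOMP-over-WebSockets):

    import stomp

    class GroupEventListener(stomp.ConnectionListener):
        # assumes stomp.py >= 6, where listeners receive a Frame object
        def on_message(self, frame):
            print("event for my group:", frame.body)

    conn = stomp.Connection([("localhost", 61613)])   # ActiveMQ STOMP connector
    conn.set_listener("", GroupEventListener())
    conn.connect(wait=True)

    # Every client in a group subscribes to the same topic, so each event fans out
    # to all of them; clients that are offline at publish time simply miss it, which
    # matches the "don't store events for absent clients" requirement.
    conn.subscribe(destination="/topic/group-42.events", id="sub-1", ack="auto")

    conn.send(destination="/topic/group-42.events", body='{"event": "something happened"}')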
There are some Stomp-WebSocket JS libraries floating around. Kaazing has a bundle that includes ActiveMQ and supports JMS API/Stomp protocol over WebSockets with support for older browsers, different client technologies, and Cross-Site security.
Look at Pusher; otherwise you'll need something that supports topic-based pub/sub. You could look at Redis or RabbitMQ.