I'd like to use MQTT to send control information to a device, but I'm concerned that leaving the MQTT client and cell data connection up (basically in long-polling mode) is somehow bad, whether in terms of data charges, network usage, battery life, or some other aspect.
Another approach might be to send an SMS to the device when it has a message to pick up - but that seems to defeat the purpose of MQTT and also introduces a long delay while dialing and setting up the GPRS connection.
Is there any reason I should be concerned about this approach?
I think this approach is quite valid. Think of it this way: your app's long polling transfers a very small volume of data as long as it just polls, so
the data usage should be minuscule
the battery is impacted mainly by the data you send in addition to the keep-alives, which is typically at least an order of magnitude more than the polling traffic itself
as a reference: ActiveSync, which runs all the time, is nothing more than a fancy form of long polling
You may want to look at MQTT-SN, which is designed to run over UDP and therefore does not need an active connection. Really Small Message Broker (RSMB) is an implementation of an MQTT-SN broker, and will bridge to Mosquitto.
The other approach is to use the retain flag on messages. That way, a control app can send the message and the device will get it as soon as it reconnects, regardless of whether the app is still online. In all cases, the user experience on the app side should differentiate between the request being sent and it being honored or refused, so you will need tri-state controls (on, off, pending).
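As a small illustration of the retained-message approach, here is a minimal sketch using the Eclipse Paho Java client (the broker URL, topic, and payload are placeholder assumptions):

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class RetainedControlPublisher {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "control-app");
        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        client.connect(options);

        // QoS 1 + retain: the broker keeps the last control value and delivers it
        // to the device as soon as it (re)connects and subscribes to the topic.
        MqttMessage message = new MqttMessage("on".getBytes());
        message.setQos(1);
        message.setRetained(true);
        client.publish("devices/device-42/control", message);

        client.disconnect();
    }
}
```

On the device side the control can stay in the "pending" state until a confirmation message comes back, which maps naturally onto the tri-state UI described above.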
I am using the IoT Hub device client SDK on an embedded device. The application sends a telemetry message to IoT Hub periodically. The IoT device connects to a wireless router, and the router connects to the internet via its WAN port.
When the wireless router loses its internet connection, the IoT device is not notified about the disconnection immediately; it takes about 60 s. Before that, the device keeps sending telemetry messages with IoTHubDeviceClient_LL_SendEventAsync(), and all those messages get queued in the SDK layer and eat memory. Since this is an embedded device with limited resources, memory gets used up and the program is killed by a low-memory killer.
Is there a way to specify the total size of IoT messages that can be queued in the SDK layer, so that IoTHubDeviceClient_LL_SendEventAsync() fails immediately once the quota is exceeded?
Actually this is also needed for the normal scenario. When the IoT device sends a message, the message seems to be queued in a lower layer and flushed out at some later time, and I don't see any API that controls the flush. That creates another problem: even when there is an internet connection, the application has no control over how many messages are queued or for how long, and therefore no control over how much memory the process uses. On my device, a system monitor kills processes that use too much memory.
The question is what you would do even in that case, when sending fails because the queue is full: do you then lose the information for lack of storage capacity? From an IoT perspective, I would recommend considering whether your device is a reliable IoT device that handles these edge cases as well. Knowing the limits of the device, and how long it may be without an internet connection, helps you mitigate these risks from your application rather than from the SDK.
From the GitHub documentation, the default sendMessageAsync method throws a timeout exception if your message sending fails, unless you have some kind of retry policy implemented (according to the documentation, the C SDK does not allow custom retry policies:
https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-reliability-features-in-sdks).
According to the documentation, in the case of a connection failure the SDK will, based on the retry policy (if you have set one), try to re-establish the connection one way or another and queue the messages created in the meantime:
https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/connection_and_messaging_reliability.md
So, the expectation here is that the SDK does not take responsibility for memory limits; that is up to the application to deal with. Since your device has these limitations, I would recommend implementing your own queuing mechanism (perhaps set a no-retry policy and avoid SDK-level queuing that way), as sketched below. That way you control what happens when there is no internet connection, and you control the memory limits. Maybe your business case even accepts that you calculate an average value and store one message over time instead of fifty, etc.
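As a rough, hedged sketch of such an application-level guard (the idea is language-agnostic; it is shown here in plain Java, with the queue sitting in front of whatever send call your SDK exposes):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Application-level guard: cap how many bytes may wait to be sent, and reject
// (or drop/aggregate) new telemetry once that memory budget is reached.
public class BoundedTelemetryQueue {
    private final Deque<byte[]> pending = new ArrayDeque<>();
    private final long maxBytes;
    private long queuedBytes = 0;

    public BoundedTelemetryQueue(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    /** Returns false instead of queuing when the memory budget is exhausted. */
    public synchronized boolean offer(byte[] message) {
        if (queuedBytes + message.length > maxBytes) {
            return false; // caller decides: drop, aggregate, or persist elsewhere
        }
        pending.addLast(message);
        queuedBytes += message.length;
        return true;
    }

    /** Called from the send loop whenever the connection is believed to be up. */
    public synchronized byte[] poll() {
        byte[] message = pending.pollFirst();
        if (message != null) {
            queuedBytes -= message.length;
        }
        return message;
    }
}
```

The send loop would only hand messages from this queue to the SDK while connectivity is believed to be up, so the SDK's internal "waiting to send" list stays close to empty.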
If this is not what you want, the documentation also says that you can set a timeout for the queue. It is a timeout rather than a memory limit, but maybe you can investigate this a bit deeper:
"There are two timeout controls in this system. An original one in the iothub_client_ll layer - which controls the "waiting to send" queue - and a modern one in the protocol transport layer - that applies to the "in progress" list. However, since IoTHubClient_LL_DoWork causes the Telemetry messages to be immediately* processed, sent and moved to the "in progress" list, the first timeout control is virtually non-applicable.
Both can be fine-tuned by users through IoTHubClient_LL_SetOption, and because of that removing the original control could cause a break for existing customers. For that reason it has been kept as is, but it will be re-designed when we move to the next major version of the product."
I am using RabbitMQ with Spring AMQP, with:
large message (>100MB, 102400KB)
small bandwidth (<512Kbps)
low heartbeat interval (10 seconds)
single broker
It will take at least 1600 seconds (200 × 8) to consume the message, which is far more than my heartbeat interval. From https://stackoverflow.com/a/42363685/418439:
If the message transfer time between nodes (60 seconds?) > heartbeat time between nodes, it will cause the cluster to disconnect and lose the message
Will I also face the disconnection issue even if I am using a single broker?
Do the heartbeat and the consumer use the same thread, such that while the consumer is consuming it is not possible to perform a heartbeat?
If so, what can I do to consume the message without increasing the heartbeat interval or reducing my message size?
Update:
I have received another answer and comments after I posted my own answer. Thanks for the feedback. Just to clarify, I do not use AMQP for file transfer. The data is actually JSON messages; some are simple and small, but some contain complex information, including some freehand drawing. Besides saving the data at the data center, we also save a copy of the message at the branch level via AMQP, for the case where connectivity to the data center is not available.
So, the real questions here are a bit more fundamental, and those are: (1) is it appropriate to perform a large file transfer via AMQP, and (2) what purpose does the heartbeat serve?
Heartbeats
First off, let's address the heartbeat question. As the RabbitMQ documentation clearly states, the purpose of the heartbeat is "to ensure that the application layer promptly finds out about disrupted connections."
The reason for this is simple. In an ordinary AMQP usage, there may be several seconds, even minutes between the arrival of successive messages. Without data being exchanged across a TCP session, many firewalls and other networking equipment automatically close ports to lower exposure to the enterprise network. Heartbeats further help mitigate a fundamental weakness in TCP, which is the difficulty of detecting a dropped connection. Networks experience failure, and TCP is not always able to detect that on its own.
So, the bottom line here is that, while you're transferring a large message, the connection is active and the heartbeat function serves no useful purpose, and can cause you trouble. It's best to turn it off in such cases.
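If you do decide to tune or disable heartbeats, a minimal sketch with the plain RabbitMQ Java client (which Spring AMQP uses underneath; the host is a placeholder) could look like this:

```java
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class HeartbeatConfig {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.example.com");

        // 0 disables heartbeats entirely; alternatively a large value (e.g. 1800 s)
        // keeps them but gives a slow transfer enough headroom before the broker
        // considers the connection dead.
        factory.setRequestedHeartbeat(0);

        try (Connection connection = factory.newConnection()) {
            // open channels, declare queues, publish/consume as usual ...
        }
    }
}
```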
AMQP For Moving Large Files?
The second issue, and I believe the more important question, is how large files should be dealt with. To answer this, let's first consider what a message queue does: it sends messages -- small bits of data which communicate something to another computer system. The operative word here is small. Messages typically contain one of four things: 1. commands (go do something), 2. events (something happened), 3. requests (give me some data), and 4. responses (here is your data). A full discussion of these is beyond the scope here, but suffice it to say that each of these can generally be expressed in a small message of less than 100 kB.
Indeed, the AMQP protocol, which underlies RabbitMQ, is a fairly chatty protocol. It requires large messages be divided into multiple segments of no more than 131kB. This can add a significant amount of overhead to a large file transfer, especially when compared to other file transfer mechanisms (FTP, for instance). Secondly, the message has to be fully processed by the broker before it is made available in a queue, and it ties up valuable resources on the broker while this is being done. For one, the whole message must fit into RAM on the broker due to its architecture. This solution may work for one client and one broker, but it will break quickly when scaling out is attempted.
Finally, compression is often desirable when transferring files - HTTP supports gzip compression automatically. AMQP does not. It is quite common in message-oriented applications to send a message containing a resource locator (e.g. a URL) pointing to the larger data file, which is then accessed via appropriate means.
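A hedged sketch of that claim-check pattern with Spring AMQP (the exchange, routing key, and URL are placeholder assumptions): the broker only ever sees a tiny pointer message, and the consumer fetches the large file itself.

```java
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ClaimCheckPublisher {
    public static void main(String[] args) {
        CachingConnectionFactory connectionFactory =
                new CachingConnectionFactory("rabbitmq.example.com");
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);

        // Publish a small JSON "claim check" instead of the 100 MB payload;
        // the consumer downloads the real file from shared storage over HTTP.
        String claimCheck = "{\"fileUrl\":\"https://files.example.com/drawings/123.json\"}";
        rabbitTemplate.convertAndSend("branch.exchange", "drawing.created", claimCheck);

        connectionFactory.destroy();
    }
}
```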
The moral of the story
As the adage goes: "to the man with a hammer, everything looks like a nail." AMQP is not a hammer; it's a precision scalpel. It has a very specific purpose, and narrow applicability within that purpose. Using it for something other than its intended purpose will lead to stability and reliability problems in whatever it is you are designing, and to overall dissatisfaction with your end product.
Will I also face the disconnection issue even if I am using a single broker?
Yes
Do the heartbeat and the consumer use the same thread, such that if the consumer is consuming, it is not possible to perform a heartbeat?
I can't confirm the threading, but from what I observe, while the Java RabbitMQ consumer is consuming a message it does not perform heartbeat acknowledgement. If the time to consume is longer than 3 × the heartbeat timeout (due to a large message and/or low bandwidth), the MQ server will close the AMQP connection.
If so, what can I do to consume the message without increasing the heartbeat interval or reducing my message size?
I resolved my issue by increasing the heartbeat timeout. No further code change was required.
I'm working with GameKit.framework and I'm trying to create a reliable communication between two iPhones.
I'm sending packages with the GKMatchSendDataReliable mode.
The documentation says:
GKMatchSendDataReliable
The data is sent continuously until it is successfully received by the intended recipients or the connection times out.
Reliable transmissions are delivered in the order they were sent. Use this when you need to guarantee delivery.
Available in iOS 4.1 and later. Declared in GKMatch.h.
I have experienced some problems on a bad WiFi connection. GameKit does not declare the connection lost, but some packets never arrive.
Can I count on a 100% reliable communication when using GKMatchSendDataReliable or is Apple just using fancy names for something they didn't implement?
My users also complain that some data may be accidentally lost during the game. I wrote a test app and found out that GKMatchSendDataReliable is not really reliable. On a weak internet connection (e.g. EDGE), some packets are regularly lost without any error from the Game Center API.
So the only option is to add an extra transport layer for truly reliable delivery.
I wrote a simple lib for this purpose: RoUTP. It saves all sent messages until an acknowledgement is received for each, resends lost messages, and buffers received messages in case of a broken sequence.
In my tests the combination "RoUTP + GKMatchSendDataUnreliable" works even better than "RoUTP + GKMatchSendDataReliable" (and of course better than pure GKMatchSendDataReliable, which is not really reliable).
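To illustrate the idea behind such a reliability layer (a simplified sketch, not RoUTP's actual code, written here in Java rather than Objective-C): every outgoing message gets a sequence number and is kept until acknowledged, unacknowledged messages are resent on a timer, and out-of-order arrivals are buffered until the gap is filled.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Simplified ack/resend layer on top of an unreliable send primitive.
public class ReliabilityLayer {
    private final SortedMap<Long, byte[]> unacked = new TreeMap<>();  // sent, awaiting ack
    private final SortedMap<Long, byte[]> reorder = new TreeMap<>();  // received out of order
    private long nextSendSeq = 0;
    private long nextExpectedSeq = 0;

    /** Wrap and send a payload; keep a copy until the peer acknowledges it. */
    public synchronized void send(byte[] payload, UnreliableTransport transport) {
        long seq = nextSendSeq++;
        unacked.put(seq, payload);
        transport.sendUnreliable(seq, payload);
    }

    /** Peer acknowledged everything up to and including ackSeq. */
    public synchronized void onAck(long ackSeq) {
        unacked.headMap(ackSeq + 1).clear();
    }

    /** Resend whatever is still unacknowledged (call periodically on a timer). */
    public synchronized void resendPending(UnreliableTransport transport) {
        unacked.forEach(transport::sendUnreliable);
    }

    /** Deliver in order; buffer anything that arrives ahead of a gap. */
    public synchronized void onReceive(long seq, byte[] payload, MessageHandler handler) {
        if (seq < nextExpectedSeq) {
            return; // duplicate of something already delivered
        }
        reorder.put(seq, payload);
        while (reorder.containsKey(nextExpectedSeq)) {
            handler.handle(reorder.remove(nextExpectedSeq));
            nextExpectedSeq++;
        }
    }

    public interface UnreliableTransport { void sendUnreliable(long seq, byte[] payload); }
    public interface MessageHandler { void handle(byte[] payload); }
}
```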
It is nearly 100% reliable, but maybe not what you need sometimes. For example, if you drop out of the network, all the data you send via GKMatchSendDataReliable will still be delivered, in the order you sent it.
This is brilliant for turn-based games, for example, but if fast reaction is necessary, a network dropout does not simply forget the missed packets; the peer will receive all the now-late packets until it catches up to real time again.
The one case where GKMatchSendDataReliable does not deliver the data is a connection timeout.
I think this would also be the case when you close the app.
I'm trying to communicate between 2 XMPP clients, but this is not like messaging or chatting. It's more like an event caused at one end and an action performed at the other (in real time). I would like there to be no latency when Client A sends packets to Client B. If some latency is unavoidable, is there any way to minimize it so that it goes unnoticed? Is this possible, or can it be done by some other means?
First of all, that is still messaging.
As for your latency, there will always be some latency when sending data between processes. You haven't said what tolerance levels you are looking for, as opposed to what you are getting, so it is hard to say what you should do to improve them.
The biggest factors in any current latency you have will be message size and network speed. Of course, direct point-to-point communication would remove one hop for your message, but without knowing your application there is no way of saying whether this is an acceptable direction.
A small message should be delivered in a few milliseconds on a fast network. If it is a slow network, then your problems lie outside of any communications protocol.
I have a project coming up where I need to send and receive messages through a specific mobile operator, which only provides an SMPP interface. The whole project will be a hosted website. I have already read quite a lot, but I do not yet quite understand what is actually needed from my side to use the protocol.
Should my application try to maintain a constant connection to the smpp?
Can I simply connect, send a message and then disconnect?
Is receiving messages based on push or pull?
Thanks for the help.
SMPP is a peer-to-peer protocol. That means that the SMS gateway (your side) and the SMSC (your mobile operator) need to have a proper bind/connection established. Even when there are no SMS or DLRs to send/receive, there is a continuous exchange of SMPP PDUs (enquire_link/enquire_link_resp) that ensures the bind stays established.
In detail, if you send an enquire_link PDU and get no response (enquire_link_resp), the bind is broken. Your SMS won't be delivered (they will remain enqueued in your gateway store), and you won't receive MOs (incoming SMS) or DLRs (delivery reports). To recover, you should re-initiate the connection.
So, my answer would be that you need a constant connection to the SMSC.
You state that you want to receive messages; as a result, at least a bind_receiver is needed. Because you don't know when messages are going to come in, you will have to be constantly connected, rather than disconnecting after each event.
With regards to your question about "push or pull", this depends on how you solve the first problem. If you can build a solution that is constantly connected, the result will be a push (the carrier will push the message to you as soon as they receive it). If (for some reason) you cannot maintain a constant connection, you'll end up building a pull mechanism: you'll connect to the carrier every X seconds to see if they have a message waiting for you.
I do need to highlight 2 pitfalls though:
A number of carriers in the world do not store or even accept messages if you are not connected; therefore, depending on which carrier you interact with, you might be forced to use a continuous connection.
Most carriers do not allow you to open and close connections in quick succession. Once you disconnect, you cannot reconnect for a time frame of X seconds.
Therefore a constant connection is really the way to go. Alternatively, you can look into a company like Nexmo, which will provide you with an HTTP call every time a message arrives.
I'm not sure which language you're developing your application in, but if you use any of the popular languages (Java, PHP, Perl) there are modules out there that handle basic SMPP connectivity for you. A quick Google search for your language and "SMPP client" will give you a list of references.
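As a rough sketch of what a persistent transceiver bind looks like with one such library (here the open-source jsmpp client for Java; the host, port, credentials, and exact API details are assumptions you would adapt to whichever library you pick):

```java
import org.jsmpp.bean.BindType;
import org.jsmpp.bean.NumberingPlanIndicator;
import org.jsmpp.bean.TypeOfNumber;
import org.jsmpp.session.BindParameter;
import org.jsmpp.session.SMPPSession;

public class SmppBindExample {
    public static void main(String[] args) throws Exception {
        SMPPSession session = new SMPPSession();

        // The library exchanges enquire_link/enquire_link_resp on this interval
        // to keep the bind alive even when no SMS or DLRs are flowing.
        session.setEnquireLinkTimer(30000); // milliseconds

        // A transceiver bind lets you both submit messages and be pushed
        // incoming MOs and DLRs over the same long-lived connection.
        session.connectAndBind("smsc.example.com", 2775,
                new BindParameter(BindType.BIND_TRX, "systemId", "password", "cp",
                        TypeOfNumber.UNKNOWN, NumberingPlanIndicator.UNKNOWN, null));

        // ... register a MessageReceiverListener to handle incoming deliver_sm PDUs,
        // then keep the session bound for the lifetime of the application.
    }
}
```

The session stays bound for as long as the application runs, with enquire_link keeping the connection alive, which matches the push model described above.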