Using the azure-iot-sdk for Python, I have a program that opens a connection to the IoT Hub and continually listens for direct methods, using the MQTT protocol. This works as expected. I have a second Python program, invoked hourly from cron, that connects to the IoT Hub and updates the device twin for my device, again over MQTT. Everything is working fine.
However, I've come across a statement in the documentation that a device can only have one MQTT connection at a time, and that opening a second one causes the first to drop. I'm not seeing this behavior, but is what I'm doing unsupported?
Should I have a single program doing both tasks and sharing a single connection?
Yes, that is correct: you can't have more than one connection with the same device ID to the IoT Hub. Over time you will see inconsistent behavior, and that scenario is unsupported. You should use a single program, with a single device ID, doing both tasks.
Depending on the scenario, you may want to consider using an iothubowner connection string for service-side operations: managing your IoT hub and, optionally, sending messages, scheduling jobs, invoking direct methods, or sending desired property updates to your IoT devices or modules.
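For reference, a minimal sketch of what that single program could look like with the azure-iot-device Python SDK; the connection string, reported property name, and hourly interval below are placeholders, not taken from the question:

    import time
    from azure.iot.device import IoTHubDeviceClient, MethodResponse

    # Placeholder connection string
    CONNECTION_STRING = "HostName=...;DeviceId=...;SharedAccessKey=..."

    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

    def handle_method(request):
        # Answer every direct method with a 200 and echo the payload back
        response = MethodResponse.create_from_method_request(
            request, status=200, payload={"received": request.payload}
        )
        client.send_method_response(response)

    client.on_method_request_received = handle_method
    client.connect()

    try:
        while True:
            # Hourly twin update on the same MQTT connection, replacing the cron job
            client.patch_twin_reported_properties({"lastReport": time.time()})
            time.sleep(3600)
    finally:
        client.shutdown()

This keeps a single MQTT session open for both the method listener and the twin updates, so the two programs never compete for the same device ID.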
I've set up a remote kernel running through SSH to which I connect using my Spyder IDE, and have just added 2-factor authentication (2FA) on the SSH connections using Duo.
Now when I attempt to connect, I get 4 different push notifications, and once I approve some or all of them, Spyder connects and gives me the IPython prompt; and for each attempt below I did approve all 4.
On my first attempt, it didn't display a result when testing with something like 2+2.
On my second attempt, everything appeared to be working fine.
However, I am aware that there are 5 channels involved (shell, iopub, hb, stdin, control) as I can see on this Jupyter client doc page.
Is there any way I can, once connected to the remote kernel, test each of the individual 5 channels and check that they are all working properly?
And can you think of a reason why I would receive 4 push notifications rather than 5? Is it possible that one of the channels isn't used, or is only connected later on demand, or something like that?
UPDATE: After running netstat on the server side, I can see that the control channel is not connected, but the other four (shell, iopub, hb, stdin) are. I'm still unsure what I miss out on by not using the control channel, and whether Spyder provides the same features as the control channel by other means; this page says:
Control: This channel is identical to Shell, but operates on a separate socket to avoid queueing behind execution requests. The control channel is used for shutdown and restart messages, as well as for debugging messages.
For a smoother user experience, we recommend running the control channel in a separate thread from the shell channel, so that e.g. shutdown or debug messages can be processed immediately without waiting for a long-running shell message to be finished processing (such as an expensive execute request).
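As a partial check you can probe most of the channels yourself from the client side with jupyter_client. A rough sketch, assuming a BlockingKernelClient and a placeholder connection-file path; it exercises hb, shell, and iopub, while stdin and control are left out:

    from jupyter_client import BlockingKernelClient

    kc = BlockingKernelClient()
    kc.load_connection_file("/path/to/kernel-1234.json")  # placeholder path
    kc.start_channels()

    # hb channel: the heartbeat should be beating if the kernel is reachable
    print("hb beating:", kc.hb_channel.is_beating())

    # shell + iopub channels: run a trivial statement and read the replies
    msg_id = kc.execute("2 + 2")
    reply = kc.get_shell_msg(timeout=10)   # execute_reply arrives on shell
    print("shell status:", reply["content"]["status"])

    while True:
        msg = kc.get_iopub_msg(timeout=10)  # output arrives on iopub
        if (msg["parent_header"].get("msg_id") == msg_id
                and msg["msg_type"] == "execute_result"):
            print("iopub result:", msg["content"]["data"]["text/plain"])
            break

    kc.stop_channels()

If the shell reply comes back but no execute_result ever appears on iopub, that points at the iopub channel as the one misbehaving.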
I'm trying to develop a high-frequency message dispatching application, and I'm observing the behavior of the SDK when reading messages from the ModuleClient connected to the edgeHub using the "MQTT on TCP Only" transport settings.
It seems that there is no way to read multiple messages at a time (as a batch) from the edgeHub (I think this is related to the underlying protocol).
So the result is that one must sequentially read a message, process it, and send the ack to the hub.
Is there a way to process multiple messages at a time, without waiting for the processing of the previous one to finish?
Is this "limitation" tied to the underlying protocol?
I'm using Microsoft.Azure.Devices.Client 1.37.2 on a .NET Core 3.1 application deployed on Azure Kubernetes (AKS) by using Azure IoT Edge on Kubernetes workload.
You are correct, you cannot use batching with the MQTT protocol. This is a limitation of IoT Hub when using MQTT.
IoT Hub only supports batch send over AMQP and HTTPS at the moment.
The MQTT implementation loops over the batch and sends each message
individually.
Ref: https://github.com/Azure/azure-iot-sdk-csharp
I suggest you add a new feature request if you need IoT Hub to support batching when connecting over MQTT: https://feedback.azure.com/forums/321918-azure-iot-hub-dps-sdks
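The question is about the .NET SDK, but the general pattern for not waiting on the previous message is the same in any of the device SDKs: hand each received message to a worker pool so the receive handler returns immediately. A hedged sketch in Python with the azure-iot-device ModuleClient, where process_message stands in for the real work:

    from concurrent.futures import ThreadPoolExecutor
    from azure.iot.device import IoTHubModuleClient

    pool = ThreadPoolExecutor(max_workers=8)

    def process_message(message):
        ...  # placeholder: parse message.data and do the actual processing

    def on_message(message):
        # Return immediately; the work runs on a worker thread,
        # so a slow message never blocks receipt of the next one.
        pool.submit(process_message, message)

    client = IoTHubModuleClient.create_from_edge_environment()
    client.on_message_received = on_message
    client.connect()

The hub still delivers the messages one by one over MQTT, but their processing overlaps.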
We're using RabbitMQ in a new project. We'll have IoT devices communicating with queues.
For the devices sending info to the cloud we don't see any issues; however, sometimes we need to deliver messages from our backend to the IoT devices. For this we let the devices open an exclusive queue. This works perfectly as long as the devices are online. When they aren't, the queue is closed and no messages can be sent to it anymore.
Is there a way to keep the queue open, so messages are kept until the IoT device comes back online?
Vice versa: is there some way to have guaranteed delivery starting at the IoT device? For example: energy measurements every 15 minutes. If the connection drops, messages should be stored on disk (to prevent message loss in case of a power cut) and sent later when the connection comes back online. Does a service or client library exist that implements this, or do we need to develop it ourselves?
Is there a way to keep the queue open, so messages are kept until the
IoT device comes back online?
Use a regular queue, and make sure it's durable if you'd like it to survive RabbitMQ restarts.
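A minimal pika sketch of that; the queue name and payload are placeholders. Declaring the queue durable and marking the messages persistent means both the queue and its contents survive a broker restart, and the queue keeps messages while the device is offline (unlike an exclusive queue, it is not deleted when the consumer's connection closes):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Durable queue: survives broker restarts and stays around while the device is offline
    channel.queue_declare(queue="device-42-inbox", durable=True)

    # Persistent message (delivery_mode=2): the message itself is written to disk
    channel.basic_publish(
        exchange="",
        routing_key="device-42-inbox",
        body=b'{"cmd": "reboot"}',
        properties=pika.BasicProperties(delivery_mode=2),
    )
    connection.close()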
Is there some way to have guaranteed delivery starting at the IoT
device.
That depends on the library you are using, but you don't tell us which library or which protocol you're using (AMQP vs MQTT, for instance).
Some libraries offer automatic reconnect and re-creation of topology (queues, exchanges, etc) but I'm not aware of any that offer local storage of messages until the broker is available again. You'll have to code that yourself.
Please carefully read the documentation with regard to publisher confirmations and consumer acknowledgements, as those are both necessary for reliable messaging.
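For reference, both mechanisms look roughly like this with pika (queue name, payload, and handler are placeholders):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="telemetry", durable=True)

    # Publisher confirmations: after confirm_delivery(), basic_publish raises
    # an exception if the broker does not take responsibility for the message.
    channel.confirm_delivery()
    channel.basic_publish(exchange="", routing_key="telemetry", body=b"42.7 kWh")

    # Consumer acknowledgements: only ack after the message is fully processed,
    # so an unprocessed message is redelivered if the consumer dies.
    def handle(ch, method, properties, body):
        ...  # placeholder: store the measurement
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="telemetry", on_message_callback=handle, auto_ack=False)
    channel.start_consuming()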
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Our Cloud has several exchanges and sets of credentials, called CredentialsBucket, assigned to a set of IoT devices. When an IoT device registers, we provide it with these credentials, which include a durable queue and an exchange. When the IoT device pushes messages, they go to the Cloud through the exchange, where we do an additional security check using HMAC.
When the Cloud sends a message, it sends it directly to the device's queue (no persistent messages in our case), and the IoT device does the same kind of security check.
When using DeviceClient I can send messages using SendEvent and files using SendBlob, but I could not find a way to receive an acknowledgement that messages/files have been received by Azure IoT Hub.
The only way I found to solve this is using serviceClient.GetFileNotificationReceiver().
Am I missing something or is this the only way?
Also it seems I need SharedAccessKeyName to use ServiceClient. But this is not present in e.g. tokens created by DeviceExplorer (which I use for DeviceClient). Any advice is appreciated.
For the Java and C SDKs there are IotHubEventCallback and IOTHUB_CLIENT_EVENT_CONFIRMATION_CALLBACK, but for C# no such interface is implemented.
So, for C#, a message has been sent successfully if DeviceClient.SendEventAsync() completes without throwing an exception; otherwise it has failed.
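For comparison, the same "success if no exception" pattern in the Python device SDK looks roughly like this (a hedged sketch; the connection string and payload are placeholders):

    from azure.iot.device import IoTHubDeviceClient, Message

    client = IoTHubDeviceClient.create_from_connection_string(
        "HostName=...;DeviceId=...;SharedAccessKey=..."  # placeholder
    )

    try:
        # The synchronous send_message returns once the SDK reports the send
        # as complete; an exception here is the failure signal.
        client.send_message(Message('{"temp": 21.5}'))
        print("delivered")
    except Exception as err:
        print("delivery failed:", err)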
Alternatively, you can use the Event Hub-compatible endpoint to monitor the status of operations on your IoT hub: D2C messages, file uploads, and so on.
For ServiceClient, you need the Azure IoT Hub connection string, not a device connection string. You can find it under Configuration in Device Explorer.
Why can't the application server send messages directly to the application? Why do you need the C2DM service in the middle?
To send a message from the server side you have two possibilities:
The client polls for new messages at certain intervals. Downside: not a real-time solution. If you poll too frequently it will drain the battery and consume your quota (if you don't have an unlimited data plan). Generally you do a lot of unnecessary work and traffic, as most polls will return no messages.
Stay connected all the time. Downside: hard to deliver technically, as phones can close connections when going into sleep mode (at least nothing guarantees that they won't). Also, you are running a background application 24/7.
The current state of C2DM will give you:
The ability to get messages even when your application is not running as Android will start your application (the part of it you configured, not necessarily the whole UI) when a message arrives.
A central, shared channel to deliver such messages. If 10 applications need real-time notifications on your phone this is one single facility, not 10 applications running and polling in parallel.
The promise: As this is the sanctioned API by Google for push messaging you can expect it to be optimized in the future. One improvement can be carrier-level messaging to initiate a C2DM session. That would mean you can put 100% of the "smart" part of your phone asleep.
Because the application can't (or isn't supposed to) act as a server.
If you would like to send messages to your app directly, then your application would need to have some sort of server listening on some port. This is bad because:
connections are usually firewalled, so you can't just listen on some port,
your device can be turned off or without connectivity (so your app server would need to retry),
the app server would need to know the address of your device,
the app would need to be running (at least the server module) all the time, which isn't battery friendly.