Using Azure IoT - telemetry from a Windows desktop application

I work for a company that manufactures large scientific instruments, with a single instrument having 100+ components: pumps, temperature sensors, valves, switches and so on. I write the WPF desktop software that customers use to control their instrument, which is connected to the PC via a serial or TCP connection. The concept is the same either way - to change a pump's speed, for example, I would send a "command" to the instrument, where an FPGA and custom firmware take care of handling that command. The desktop software also needs to display dozens of "readback" values (temperatures, pressures, valve states, etc.), which are again retrieved by issuing a "command" to request a particular readback value from the instrument.
We're considering implementing some kind of telemetry service, whereby the desktop application will record maybe a couple of dozen readback values, each having its own interval - weekly, daily, hourly, per minute or per second.
Now, I could write my own telemetry solution, whereby I record the data locally to disk and then upload it to a server (say) once a week, but I've been wondering if I could utilise Azure IoT for collecting the data instead. After wading through the documentation and concepts, I'm still none the wiser! I get the feeling it is designed for "physical" IoT devices that are directly connected to the internet, rather than for data being sent from a desktop application?
Assuming this is feasible, I'd be grateful for any pointers to the relevant areas of Azure IoT. Also, how would I map a single instrument and all its components (valves, pumps, etc) to an Azure IoT "device"? I'm assuming each component would be a device, in which case is it possible to group multiple devices together to represent one customer instrument?
Finally, how is the collected data reported on? Is there something built-in to Azure, or is it essentially a glorified database that would require bespoke software to analyse the recorded data?

Azure IoT would give you:
Device SDKs for connecting (MQTT or AMQP), sending telemetry, receiving commands, receiving messages, reporting properties, and receiving property update requests.
An HA/DR service (IoT Hub) for managing devices and their authentication, and for configuring telemetry routes (i.e. where incoming messages are sent).
Service SDKs for managing devices, sending commands, requesting property updates, and sending messages.
If it matches your solution, you could also make use of the Device Provisioning Service, where devices connect and are assigned an IoT hub. This would make sense, for instance, if you have devices around the world and wish to have them connect to the closest IoT hub you have deployed.
Those are the building blocks. You'd integrate the device SDK into your WPF app. It doesn't have to be a physical device, but the fact it has access to sensor data makes it behave like one, and that seems like a good fit. Then you'd build a service app using the service SDKs to manage the fleet of WPF apps (each of which represents an instrument with components, right?). For monitoring telemetry, it would depend on how you choose to route it. By default, it goes to an Event Hubs instance created for you; you'd use the Event Hubs SDK to subscribe to those messages. Alternatively, or in addition, those telemetry messages could be routed to Azure Storage, where you could perform historical analysis. There are other routing options. A minimal device-side sketch follows.
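To make that concrete, here is a minimal sketch of sending telemetry from a desktop app with the Azure IoT device SDK (the Microsoft.Azure.Devices.Client NuGet package). The connection string, component name, and payload are placeholders, and mapping one instrument to one device identity is just one possible modeling choice, not the only one:

```csharp
// Minimal sketch: sending one telemetry message from a desktop app via the
// Azure IoT device SDK. Connection string and payload are placeholders.
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class TelemetrySender
{
    static async Task Main()
    {
        // One instrument registered as one IoT Hub device identity.
        var client = DeviceClient.CreateFromConnectionString(
            "HostName=<your-hub>.azure-devices.net;DeviceId=<instrument-id>;SharedAccessKey=<key>",
            TransportType.Mqtt);

        // A readback value serialized as JSON; the component name travels as a
        // message property so it can be used in routing queries.
        var body = Encoding.UTF8.GetBytes("{\"pumpSpeed\": 1200}");
        using var message = new Message(body)
        {
            ContentType = "application/json",
            ContentEncoding = "utf-8"
        };
        message.Properties.Add("component", "pump1");

        await client.SendEventAsync(message);
        await client.CloseAsync();
    }
}
```

On the question of mapping components to devices: registering one device identity per instrument and carrying the component name in the message (as above) is one common approach; device twin tags can then be used to group and query instruments per customer.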
Does that help?

Related

Designing a BLE application with a frequent connect/disconnect scenario: how should I optimize reconnection?

I am working on a BLE application on an embedded platform where there are frequent connect/disconnect events. The issue I am seeing is that re-connection takes too long. The high frequency of connect/disconnect is part of the usage scenario, so I can't change that. What I can do is make the re-connection more efficient. I noticed that the bulk of the re-connection time is spent on service/characteristic discovery of the other device.
I still want to make sure the services/characteristics of the connecting device haven't changed. Instead of discovering all the services, could we use a characteristic that holds a hash of all the services/characteristics on the device? Each device could then compare the received hash with the stored one, and only perform a full service discovery on a mismatch. Is there a precedent for doing this in BLE?
Bluetooth Low Energy (BLE) allows devices to leave their transmitters off most of the time; that is how it achieves the "Low Energy" in its name.
I would expect a central device to subscribe to notifications from the peripheral. That way the peripheral only turns on and transmits when there are updates.
The other approach would be to put the data (or hash) in the advertising data (manufacturer data or service data), like many sensor beacons do. That way you might not need to connect at all, or only connect when needed. As for precedent: this is essentially what the GATT caching feature introduced in Bluetooth 5.1 does, with a standard Database Hash characteristic that a client reads to decide whether a full rediscovery is needed.
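For illustration, here is what the subscribe-to-notifications approach looks like with the Windows WinRT BLE API (Windows.Devices.Bluetooth). An embedded stack will have different calls, but the GATT steps (enable notifications via the CCCD, then handle value changes) are the same; the address and UUIDs are placeholders:

```csharp
// Sketch of a central subscribing to GATT notifications (WinRT API).
using System;
using System.Threading.Tasks;
using Windows.Devices.Bluetooth;
using Windows.Devices.Bluetooth.GenericAttributeProfile;
using Windows.Storage.Streams;

class BleSubscriber
{
    public static async Task SubscribeAsync(ulong bluetoothAddress, Guid serviceUuid, Guid characteristicUuid)
    {
        // Connect and locate the characteristic we care about.
        BluetoothLEDevice device = await BluetoothLEDevice.FromBluetoothAddressAsync(bluetoothAddress);
        GattDeviceServicesResult services = await device.GetGattServicesForUuidAsync(serviceUuid);
        GattCharacteristicsResult characteristics =
            await services.Services[0].GetCharacteristicsForUuidAsync(characteristicUuid);
        GattCharacteristic characteristic = characteristics.Characteristics[0];

        // Handle updates pushed by the peripheral; it only transmits when it has news.
        characteristic.ValueChanged += (sender, args) =>
        {
            DataReader reader = DataReader.FromBuffer(args.CharacteristicValue);
            // ... parse the payload ...
        };

        // Writing the Client Characteristic Configuration Descriptor enables notifications.
        await characteristic.WriteClientCharacteristicConfigurationDescriptorAsync(
            GattClientCharacteristicConfigurationDescriptorValue.Notify);
    }
}
```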

IoT Edge best practices

We have around 9000 devices in the field.
These devices are deployed in groups of 1-100 on customers' premises.
The devices are not capable of azure-iot-sdk integration.
The devices have a web-service API.
The devices should appear as first-class devices in Azure.
We like the IoT Edge module provisioning feature.
We want to evaluate whether modules could gather data from the devices and send it to IoT Hub for further processing.
We found this feature overview of IoT Edge: https://learn.microsoft.com/de-de/azure/iot-edge/iot-edge-as-gateway
The transparent and protocol translation patterns are out of scope given the above facts; the identity translation pattern seems to fit.
We want a 1:1 relationship between a module and a real device.
We therefore assume the following POC, in the hope of clarification and best practices:
we implement an IoT Edge module (azure-iot-sdk-java)
we open a module connection to IoT Edge and subscribe to desired properties
the module identity receives, as desired properties, the IP of the real device and the Azure device identity connection string
we open a device connection to IoT Edge by adding GatewayHostName to the device connection string, as described here: https://learn.microsoft.com/de-de/azure/iot-edge/iot-edge-as-gateway
we request data from the real device and send it via the Azure device identity
This somehow mixes two patterns and seems kind of odd to us.
Can you point out best practices and risks with this approach?
Yes, I agree that the identity translation pattern could fit your scenario.
There are three patterns for using an IoT Edge device as a gateway: transparent, protocol translation, and identity translation; you can refer to the link above for more of an introduction to these three patterns.
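To make the POC concrete, here is a sketch of the module-side flow you describe. It uses the C# device SDK for illustration (you mention azure-iot-sdk-java; the shape is the same there), and the desired-property names and gateway hostname are placeholders. One practical risk worth noting: the leaf-device TLS connection through the gateway only succeeds if the module's container trusts the edge root CA certificate.

```csharp
// Sketch of the identity-translation flow: one module proxies one real device.
// Desired-property names ("deviceIp", "deviceConnectionString") are placeholders.
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Shared;

class IdentityTranslationModule
{
    static async Task Main()
    {
        // 1. The module connects to the edge hub using its own module identity.
        ModuleClient module = await ModuleClient.CreateFromEnvironmentAsync(TransportType.Amqp_Tcp_Only);
        await module.OpenAsync();

        // 2. Desired properties deliver the real device's IP and its cloud identity.
        Twin twin = await module.GetTwinAsync();
        string deviceIp = (string)twin.Properties.Desired["deviceIp"];
        string connectionString = (string)twin.Properties.Desired["deviceConnectionString"];

        // 3. The leaf-device connection is opened *through* the gateway by
        //    appending GatewayHostName to the device connection string.
        DeviceClient leafDevice = DeviceClient.CreateFromConnectionString(
            connectionString + ";GatewayHostName=<edge-gateway-hostname>");

        // 4. Poll the device's web-service API (not shown) and forward readings
        //    under the device's own identity, so it appears first-class in Azure.
        byte[] payload = /* fetched from the device's web-service API via deviceIp */ new byte[0];
        await leafDevice.SendEventAsync(new Message(payload));
    }
}
```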

Duplex messaging or Azure Queue Service

All,
We have a requirement to develop an Azure-based platform in which the user can configure multiple pharmaceutical instruments, start measurements on them, and analyze the measured data. The typical components of the platform would be the following:
1 - A .NET 4-based client application running on the computer connected to each instrument. This client application should receive the start-measurement command from the Azure platform, perform the measurement, and upload the result back to Azure.
2 - A set of services (probably REST-based) which will get the results from the client application and update the database in the cloud
3 - A set of services and business logic which can be used to perform analysis on the data
4 - An ASP.NET web application where the user can view instrument details, start measurements, etc.
There is two-way communication between the Azure platform and the client application, i.e. the client needs to push results to Azure, and Azure needs to initiate measurements on the instrument via the client application.
In such a scenario, what is the recommended approach for the Azure platform to communicate with the clients? Is it any of the following?
1 - Create a duplex service between the client and server and provide a callback interface to start the measurement
2 - Create a command queue using Azure message queues for each client. When a measurement needs to be started, a message will be put on the queue. The client app will always read from the queue and execute the command.
Or do we have other ways to do this? Any help is appreciated.
We do not fully understand your scenario and the constraints around it, but as a pointer, we have seen a lot of customers use Azure storage queues to implement a master-worker scenario: some component adds a message to the appropriate queue to request work (taking measurements, in your case), and workers poll the queue to process that work (the client computer connected to your instrument, in this case).
In terms of storing the results back, your master component could give the client SAS access to write results to a specific blob in an Azure storage account, and have your service and business logic monitor for the existence of that blob to start your analysis.
The above approach decouples your client from the server and makes communication asynchronous via storage. Again, these are just pointers, and you would be the best person to pick the approach that suits your requirements. A sketch of the queue side follows.
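As a concrete illustration of the queue-based command pattern, here is a sketch using the current Azure.Storage.Queues package; the queue name and message format are placeholders:

```csharp
// Master enqueues commands; the client app polls, processes, and deletes them.
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;

class MeasurementWorker
{
    static async Task Main()
    {
        var queue = new QueueClient("<storage-connection-string>", "instrument-42-commands");
        await queue.CreateIfNotExistsAsync();

        // The master side: enqueue a command for this instrument's client app.
        await queue.SendMessageAsync("start-measurement");

        // The client (worker) side: poll for commands and process them.
        while (true)
        {
            var messages = (await queue.ReceiveMessagesAsync(maxMessages: 5)).Value;
            foreach (var msg in messages)
            {
                // ... start the measurement, upload results to blob storage via SAS ...
                await queue.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);
            }
            await Task.Delay(TimeSpan.FromSeconds(5)); // simple polling interval
        }
    }
}
```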
For communication between the server and the client, you could use SignalR (http://signalr.net/). There are also two forms of messaging system supported "as a service" on Azure, namely Service Bus and message queues - see this link: http://msdn.microsoft.com/en-us/library/hh767287.aspx
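For completeness, here is what the SignalR option might look like on the client, shown with the modern ASP.NET Core SignalR client (Microsoft.AspNetCore.SignalR.Client); the hub URL and method names are placeholders:

```csharp
// The client holds a persistent connection; the server pushes "StartMeasurement".
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

class MeasurementClient
{
    static async Task Main()
    {
        var connection = new HubConnectionBuilder()
            .WithUrl("https://<your-platform>/hubs/instruments")
            .WithAutomaticReconnect()
            .Build();

        // Server -> client: start a measurement on the attached instrument.
        connection.On<string>("StartMeasurement", async measurementId =>
        {
            // ... run the measurement on the instrument ...
            // Client -> server: report the result back.
            await connection.InvokeAsync("ReportResult", measurementId, "<result payload>");
        });

        await connection.StartAsync();
        await Task.Delay(-1); // keep the client alive
    }
}
```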

Tools to monitor and debug SaaS Services

What tools come in handy to debug and monitor SaaS services built on WCF in a production environment?
FYI - No access to the actual server whatsoever. No remoting in, and no access to the file system.
There are dozens of 'dotcom monitors' (e.g. site24x7.com), but they can only monitor parameters that are publicly available, like site uptime, response times, etc.
If you want to monitor memory usage and other parameters known only from the 'inside', then you have two choices: either install a monitoring agent on the server (in most cases a pain), or send 'signals' from your code to some external event handling and notification service. I recommend AlertGrid (http://alert-grid.com) for the latter purpose; it is very flexible and extremely easy to integrate.
AlertGrid doesn't require installation, access to the file system, etc.; it just gathers the data you send and lets you build notification rules. Examples (a code sketch follows the list):
you can send a parameter like memory usage and build a rule 'if memory_usage > threshold -> send SMS to admin'
you can send data related to your application. If you have an application processing orders, you can send the number of processed orders in the signal and build notification rules around that
if you have some logic triggered periodically (cron, a Windows service), you can send a signal each time your logic executes, to check that it runs on schedule
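AlertGrid's actual API isn't shown in this thread, but the 'send a signal' idea boils down to an HTTP POST from inside your service to an external endpoint. A generic sketch, with a placeholder URL and payload:

```csharp
// Generic "send a signal" sketch: POST a named metric to a monitoring endpoint.
// The URL and JSON shape are placeholders, not AlertGrid's real API.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SignalSender
{
    private static readonly HttpClient Http = new HttpClient();

    public static Task SendSignalAsync(string name, double value) =>
        Http.PostAsync(
            "https://<monitoring-endpoint>/signals",
            new StringContent(
                $"{{\"name\":\"{name}\",\"value\":{value}}}",
                Encoding.UTF8, "application/json"));
}

// e.g. SignalSender.SendSignalAsync("memory_usage_mb", GC.GetTotalMemory(false) / 1048576.0);
```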
(I am a developer on AlertGrid's team; in case of any questions, please feel free to ask.)
What exactly do you want to monitor? If you only care about availability then good old ping might be enough :)

Offline client and messages to azure

I'm playing around with Windows Azure and I would like to build a cloud-hosted server application that receives messages from many different clients, such as mobile and desktop.
I would like to build the client so that they work while in "offline-mode", i.e. I would like the client to build up a local queue of messages that are sent to the azure server as soon as they get online.
Can I accomplish this using WCF and/or the Azure queuing mechanism, so that I don't have to worry about whether the client is online or offline when I write the code?
You won't need queuing in the cloud to accomplish this. For the client app to be "offline enabled", you need to do the queuing on the client. For this there are many options: a local database, XML files, etc. Whenever the app senses network availability, you can upload your queue to Azure. And yes, you can use WCF for that. A minimal sketch is below.
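Here is a minimal sketch of that store-and-forward idea, with a file-backed queue and a placeholder upload delegate (a real implementation would remove messages only after each upload is confirmed):

```csharp
// Client-side store-and-forward: append messages to a local file while offline,
// flush them when the network comes back. File path and upload call are placeholders.
using System;
using System.IO;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

class OfflineQueue
{
    private readonly string _path = "pending-messages.log";

    public void Enqueue(string message) =>
        File.AppendAllLines(_path, new[] { message });

    public async Task FlushIfOnlineAsync(Func<string, Task> uploadAsync)
    {
        if (!NetworkInterface.GetIsNetworkAvailable() || !File.Exists(_path))
            return;

        foreach (var message in File.ReadAllLines(_path))
            await uploadAsync(message); // e.g. a WCF or REST call to the Azure service

        File.Delete(_path); // everything delivered; clear the local queue
    }
}
```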
For the client queue/sync stuff you could take a look at the Sync Framework.
I haven't found a great need for the queue so far. Maybe it's just that I'm not seeing it from my app's point of view. It could also be that the data you can store in a queue message is minimal. You basically store short text strings (like record IDs), and then you have to do something with the ID when you pull it from the queue, such as look it up, delete it, whatever.
In my app, I didn't use the queue at all, just as Peter suggests. I wrote directly to table storage (accessed via its REST interface using StorageClient) from the client. If you want to look at a concrete example, take a look at http://www.netalerts.mobi/traffic. Like you, I wanted to learn Azure, so I built a small web site.
There's a worker role that wakes up every 60 seconds. Using one thread, it retrieves any new data from its source (screen-scraping a web page). New entries are stored directly in table storage (no need for a queue). Another thread deletes entries in table storage that are older than a specified threshold (there's no issue with running multiple threads against table storage). And then I'm working on a third thread, which is designed to send notifications to handheld devices.
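This answer predates the current SDKs; the same "write directly to table storage" idea looks like this with today's Azure.Data.Tables package (the table name and entity shape are placeholders):

```csharp
// Write an entry straight to table storage, no queue in between.
using System;
using System.Threading.Tasks;
using Azure.Data.Tables;

class TrafficWriter
{
    static async Task Main()
    {
        var table = new TableClient("<storage-connection-string>", "trafficalerts");
        await table.CreateIfNotExistsAsync();

        // PartitionKey/RowKey chosen so entries from one source sort together.
        var entity = new TableEntity("source-1", DateTime.UtcNow.Ticks.ToString("d19"))
        {
            ["Headline"] = "scraped alert text",
            ["RetrievedAt"] = DateTime.UtcNow
        };
        await table.AddEntityAsync(entity);
    }
}
```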
The app itself is a web role, obviously.