Duplex messaging or Azure Queue Service - WCF

All,
We have a requirement to develop an Azure-based platform in which users can configure multiple pharmaceutical instruments, start measurements on them, and analyze the measured data. The typical components of the platform will be the following:
1 - A .NET 4-based client application running on the computer connected to each instrument. This client application should receive the start-measurement command from the Azure platform, perform the measurement, and send the result back to Azure.
2 - A set of services (probably REST-based) which will get the results from the client application and update the database in the cloud.
3 - A set of services and business logic which can be used to perform analysis on the data.
4 - An ASP.NET web application where the user can view instrument details, start measurements, etc.
There is two-way communication between the Azure platform and the client application, i.e. the client needs to send results to Azure, and Azure needs to initiate measurements on the instrument via the client application.
In such a scenario, what is the recommended approach for the Azure platform to communicate with the clients? Is it either of the following?
1 - Create a duplex service between the client and server and provide a callback interface to start the measurement.
2 - Create a command queue, using Azure message queues, for each client. When a measurement needs to be started, a message is put on the queue. The client app continuously reads from the queue and executes the command.
Or are there other ways to do this? Any help is appreciated.

We do not fully understand your scenario and its constraints, but as a pointer, we have seen a lot of customers use Azure storage queues to implement a master-worker scenario: some component adds a message to the appropriate queue to get work done (take measurements, in your case), and workers poll the queue to process that work (the client computer connected to your instrument, in this case).
In terms of storing the results back, your master component could give the client SAS access to write results to a specific blob in an Azure storage account, and your service and business logic could monitor for the existence of that blob to start your analysis.
The above approach decouples your client from the server and makes communication asynchronous via storage. Again, these are just pointers, and you would be the best person to pick the approach that suits your requirements.
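To make the master-worker idea concrete, here is a minimal sketch of the pattern with an in-memory queue standing in for the Azure storage queue (the real code would use the storage queue client, e.g. azure-storage-queue's QueueClient with send_message/receive_messages). The command and field names are illustrative, not part of any Azure API:

```python
import queue
from typing import Optional

# In-memory stand-in for an Azure storage queue; in production this would
# be a QueueClient pointed at one queue per client/instrument.
command_queue = queue.Queue()

def master_enqueue_measurement(instrument_id: str) -> None:
    """Master (Azure side): put a 'start measurement' command on the queue."""
    command_queue.put({"command": "start_measurement", "instrument": instrument_id})

def worker_poll_once() -> Optional[dict]:
    """Worker (client PC): poll the queue once and execute any command found."""
    try:
        msg = command_queue.get_nowait()
    except queue.Empty:
        return None  # nothing to do; a real worker would sleep and poll again
    if msg["command"] == "start_measurement":
        # run the measurement here, then write the result to blob storage via SAS
        return {"instrument": msg["instrument"], "status": "measured"}
    return None
```

The key property is that the master never talks to the client directly; it only writes to storage, which the client polls on its own schedule.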

For communication between the server and the client, you could use SignalR (http://signalr.net/). There are also two forms of messaging systems supported "as a service" on Azure: Service Bus and message queues - see http://msdn.microsoft.com/en-us/library/hh767287.aspx

Related

How to organize scheduled data polling during the application scaling?

I have a microservice that, among other things, is used as a "caching proxy" (I'm not sure that this term is correct). It sits between the application API and the Azure API. This microservice periodically fetches some data from Azure for several resources and stores it in Redis. The application API, on the other side, requests the resource data but reads it from Redis rather than from Azure itself.
(This is done in order to limit the number of requests hitting the Azure API when there is a high load on the application API.)
The periodic polling is currently implemented as a naive "while not canceled: fetch, update Redis, and sleep for 15 seconds" loop.
This worked well while I had only one instance of the microservice. But now, due to new requirements, my microservice scales automatically, and that means that if there are 5 instances running right now, I'm hitting the Azure API 5 times more often than I should.
My question is: how can I fix this so that there is one request to the Azure API per resource every 15 seconds, no matter how many microservice instances I have?
My constraints are:
make minimal changes, since the microservice is already in production;
use the existing resources as much as possible (apart from Redis the microservice is already using message queues - Azure Service Bus).
Ideas I have:
make only one instance a "master" - only this instance will fetch data from Azure. But what should I do when auto-scaling shuts this instance down? How can I detect this and elect a new master instance? Maybe I could store the master instance identifier in a short-lived key in Redis and prolong it every time the resource data is retrieved from Azure? If the key is absent from Redis, a new master instance is elected.
use Azure Service Bus message scheduling - on startup, each microservice instance schedules a message for 15 seconds in the future, which will be received by only one microservice instance. On receiving this message, that instance fetches the data from Azure, updates Redis, and schedules another message for 15 seconds later. Another instance can then receive that message and do the same: fetch data, update Redis, and schedule the next message. But I don't know how to avoid the parallel message chains that are initiated when several microservice instances are started/restarted.
Anyway, I don't see any good solution for my problem and would appreciate a hint.
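The short-lived Redis key idea can be sketched as a simple lease: whichever instance holds (or successfully takes over) the key does the polling, and the key's TTL is longer than the polling interval so the lease expires soon after the master dies. The dict below stands in for Redis; with real Redis the acquire step would be a single atomic `SET key value NX EX ttl` (in redis-py, `r.set(key, value, nx=True, ex=20)`). Key names and TTLs are illustrative:

```python
# In-memory stand-in for Redis: key -> (holder instance id, expiry time).
_store = {}

LOCK_KEY, LOCK_TTL = "poller-lock", 20.0  # TTL deliberately > 15 s poll interval

def try_acquire(instance_id: str, now: float) -> bool:
    """Emulates SET key value NX EX: succeeds if the key is absent or expired."""
    holder = _store.get(LOCK_KEY)
    if holder is None or holder[1] <= now:
        _store[LOCK_KEY] = (instance_id, now + LOCK_TTL)
        return True
    return holder[0] == instance_id  # the current master keeps its lease

def poll_azure_if_master(instance_id: str, now: float) -> bool:
    """Every instance calls this every 15 s; only the lease holder polls Azure."""
    if not try_acquire(instance_id, now):
        return False
    # ... fetch from the Azure API and update the Redis cache here ...
    _store[LOCK_KEY] = (instance_id, now + LOCK_TTL)  # prolong the lease
    return True
```

When the master is scaled down, it simply stops renewing; after at most one TTL, another instance's regular 15-second tick acquires the expired key and becomes the new master, with no explicit failure detection needed.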

Triggering an update on all microservices

Using ASP.NET Core microservices, both API and worker roles, running in Azure Service Fabric.
We use Service Bus to do inter-microservice communication.
Consider the following situation:
Each microservice holds a local, in-mem copy of cached objects of type X.
One worker role is responsible for processing a message that would result in a rebuild of this cache for all instances.
We are having multiple nodes, and thus multiple instances of each microservice in Service Fabric.
What would be the best approach to trigger this update?
I thought of the following approaches:
Calling SF for all service replicas and firing an HTTP POST at each replica to trigger the update
This, however, does not seem to work, as worker roles don't expose any APIs
Creating a specific 'broadcast' topic that each instance registers a subscription for, thus using a pub/sub mechanism
I fail to see how I can make sure each instance has its own subscription, and also how to avoid ending up with ghost subscriptions when something like a crash happens
You can use the OSS library Service Fabric Pub Sub for this.
Every service partition can create its own subscription for messages of a given type.
It uses the partition identifier for subscriptions, so crashes and moves won't result in ghost subscriptions.
It uses regular SF remoting, so you won't need to expose APIs for messaging.
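As a sketch of the pub/sub option in general, here is an in-memory model of a broadcast topic with one subscription per instance. With Azure Service Bus, each instance would create a subscription named after a stable identifier (e.g. its partition/replica id), so a restart reuses the same subscription, and setting the subscription's AutoDeleteOnIdle property lets the service clean up subscriptions left behind by crashed instances. The class and method names below are illustrative:

```python
# In-memory sketch of a broadcast (fan-out) topic: every subscription
# receives its own copy of each published message.
class BroadcastTopic:
    def __init__(self):
        self._subscriptions = {}  # instance id -> pending messages

    def subscribe(self, instance_id: str) -> None:
        # Idempotent: re-subscribing after a restart reuses the same name.
        self._subscriptions.setdefault(instance_id, [])

    def unsubscribe(self, instance_id: str) -> None:
        # In Service Bus, AutoDeleteOnIdle would do this for dead instances.
        self._subscriptions.pop(instance_id, None)

    def publish(self, message: str) -> None:
        for inbox in self._subscriptions.values():
            inbox.append(message)  # each live subscription gets a copy

    def receive(self, instance_id: str) -> list:
        inbox = self._subscriptions.get(instance_id, [])
        delivered, inbox[:] = inbox[:], []
        return delivered
```

A "rebuild cache" message published once is then delivered to every instance, each of which rebuilds its local in-memory copy of X independently.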

Using Azure IoT - telemetry from a Windows desktop application

I work for a company that manufactures large scientific instruments, with a single instrument having 100+ components: pumps, temperature sensors, valves, switches and so on. I write the WPF desktop software that customers use to control their instrument, which is connected to the PC via a serial or TCP connection. The concept is the same though - to change a pump's speed for example, I would send a "command" to the instrument, where an FPGA and custom firmware would take care of handling that command. The desktop software also needs to display dozens of "readback" values (temperatures, pressures, valve states, etc.), and are again retrieved by issuing a "command" to request a particular readback value from the instrument.
We're considering implementing some kind of telemetry service, whereby the desktop application will record maybe a couple of dozen readback values, each having its own interval - weekly, daily, hourly, per minute or per second.
Now, I could write my own telemetry solution, whereby I record the data locally to disk then upload to a server (say) once a week, but I've been wondering if I could utilise Azure IoT for collecting the data instead. After wading through the documentation and concepts I'm still none the wiser! I get the feeling it is designed for "physical" IoT devices that are directly connected to the internet, rather than data being sent from a desktop application?
Assuming this is feasible, I'd be grateful for any pointers to the relevant areas of Azure IoT. Also, how would I map a single instrument and all its components (valves, pumps, etc) to an Azure IoT "device"? I'm assuming each component would be a device, in which case is it possible to group multiple devices together to represent one customer instrument?
Finally, how is the collected data reported on? Is there something built-in to Azure, or is it essentially a glorified database that would require bespoke software to analyse the recorded data?
Azure IoT would give you:
Device SDKs for connecting (MQTT or AMQP), sending telemetry, receiving commands, receiving messages, reporting properties, and receiving property update requests.
An HA/DR service (IoT Hub) for managing devices and their authentication, configuring telemetry routes (where to route the incoming messages).
Service SDKs for managing devices, sending commands, requesting property updates, and sending messages.
If it matches your solution, you could also make use of the Device Provisioning Service, where devices connect and are assigned an IoT hub. This would make sense, for instance, if you have devices around the world and wish to have them connect to the closest IoT hub you have deployed.
Those are the building blocks. You'd integrate the device SDK into your WPF app. It doesn't have to be a physical device, but the fact that it has access to sensor data makes it behave like one, and that seems like a good fit. Then you'd build a service app using the Service SDKs to manage the fleet of WPF apps (each of which represents an instrument with components, right?). For monitoring telemetry, it would depend on how you choose to route it. By default, it goes to an Event Hub instance created for you. You'd use the Event Hub SDK to subscribe to those messages. Alternatively, or in addition, those telemetry messages could be routed to Azure Storage, where you could perform historical analysis. There are other routing options.
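On the component-to-device mapping question: one common approach is to avoid registering each of the 100+ components as a separate IoT device and instead use one device identity per instrument, carrying the component name inside each telemetry message. The payload field names below are illustrative (not part of any Azure schema), and the real send would go through the device SDK (e.g. IoTHubDeviceClient.send_message in the Python SDK):

```python
import json

def build_telemetry(instrument_id: str, component: str,
                    value: float, unit: str, timestamp: float) -> str:
    """Build one telemetry message for a single instrument-as-device.

    The component name travels inside the payload, so a single IoT Hub
    device identity can cover all of the instrument's pumps, valves, etc.
    """
    return json.dumps({
        "instrumentId": instrument_id,
        "component": component,   # e.g. "pump-2", "valve-17"
        "value": value,
        "unit": unit,
        "ts": timestamp,
    })

# With the real azure-iot-device SDK, this JSON string would be wrapped in a
# Message and passed to IoTHubDeviceClient.send_message(...).
```

Routing rules and downstream queries can then filter or group by the component field, which gives you the "group of components per instrument" view without multiplying device registrations.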
Does that help?

Message Queuing solution for WCF

We are developing a WCF-based data analytics application which will receive data from multiple instruments, store it in a database, and make it available for data analysis. Since the data flow into the application can be high, we are planning to use a queuing solution; our initial choice is MSMQ. Can you please let me know if there are any alternative solutions?

Offline client and messages to azure

I'm playing around with Windows Azure, and I would like to build a cloud server application that receives messages from many different clients, such as mobile and desktop.
I would like to build the clients so that they work while in "offline mode", i.e. I would like each client to build up a local queue of messages that are sent to the Azure server as soon as it gets online.
Can I accomplish this using WCF and/or Azure's queuing mechanism, so that I don't have to worry about whether the client is online or offline when I write the code?
You won't need queuing in the cloud to accomplish this. For the client app to be "offline enabled" you need to do the queuing on the client. For this there are many options: a local database, XML files, etc. Whenever the app senses network availability, you can upload your queue to Azure. And yes, you can use WCF for that.
For the client queue/sync stuff you could take a look at the Sync Framework.
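A minimal sketch of the client-side store-and-forward idea described above; `send_to_azure` and `is_online` are stand-ins for the real WCF call and connectivity check, and a production version would persist the pending queue to disk so messages survive an app restart:

```python
from collections import deque

class OfflineSender:
    """Queue messages locally while offline; flush when connectivity returns."""

    def __init__(self, send_to_azure, is_online):
        self._pending = deque()      # persist this to disk in a real app
        self._send = send_to_azure   # stand-in for the WCF/HTTP call
        self._is_online = is_online  # stand-in for a connectivity check

    def enqueue(self, message: str) -> None:
        self._pending.append(message)
        self.flush()                 # opportunistic: try to send immediately

    def flush(self) -> int:
        """Send queued messages in order; return how many were sent."""
        sent = 0
        while self._pending and self._is_online():
            self._send(self._pending[0])
            self._pending.popleft()  # remove only after a successful send
            sent += 1
        return sent
```

Because the caller only ever calls `enqueue`, the application code is identical whether the client is online or offline, which is exactly the property asked for in the question.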
I haven't found a great need for the queue so far. Maybe it's just that I'm not seeing it in my app view. Could also be that the data you can store in the queue is minimal. You basically store short text strings (like record ids), and then you have to do something with the ID when you pull it from the queue, such as look it up, delete it, whatever.
In my app, I didn't use the queue at all, just as Peter suggests. I wrote directly to table storage (accessed via its REST interface using StorageClient) from the client. If you want to look at a concrete example, take a look at http://www.netalerts.mobi/traffic. Like you, I wanted to learn Azure, so I built a small web site.
There's a worker role that wakes up every 60 seconds. Using one thread, it retrieves any new data from its source (screen scraping a web page). New entries are stored directly in table storage (no need for a queue). Another thread deletes entries in table storage that are older than a specified threshold (there's no issue with running multiple threads against table storage). And then I'm working on a third thread, which is designed to send notifications to handheld devices.
The app itself is a web role, obviously.