Offline client and messages to Azure - WCF

I'm playing around with Windows Azure and I would like to build a cloud-hosted server application that receives messages from many different clients, such as mobile and desktop.
I would like to build the clients so that they work while in "offline mode", i.e. I would like each client to build up a local queue of messages that are sent to the Azure server as soon as it gets online.
Can I accomplish this using WCF and/or Azure's queuing mechanism, so that I don't have to worry about whether the client is online or offline when I write the code?

You won't need queuing in the cloud to accomplish this. For the client app to be "offline enabled" you need to do the queuing on the client. There are many options for this: a local database, XML files, etc. Whenever the app senses network availability, you can upload your queue to Azure. And yes, you can use WCF for that.
For the client queue/sync stuff you could take a look at the Sync Framework.
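A minimal sketch of that client-side queue, assuming a hypothetical MessageServiceClient WCF proxy with a SubmitMessage operation (both are placeholders for whatever contract you define): pending messages are persisted to a local file and flushed whenever the network is available.

    // Sketch only: MessageServiceClient/SubmitMessage are hypothetical WCF proxy names.
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Net.NetworkInformation;

    class OfflineMessageQueue
    {
        private readonly string _queueFile = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            "pending-messages.txt");

        public void Enqueue(string message)
        {
            File.AppendAllLines(_queueFile, new[] { message }); // persist first, send later
            TryFlush();
        }

        public void TryFlush()
        {
            if (!NetworkInterface.GetIsNetworkAvailable() || !File.Exists(_queueFile))
                return;

            var pending = new Queue<string>(File.ReadAllLines(_queueFile));
            var client = new MessageServiceClient(); // hypothetical generated WCF proxy
            try
            {
                while (pending.Count > 0)
                {
                    client.SubmitMessage(pending.Peek()); // hypothetical service operation
                    pending.Dequeue();
                }
            }
            catch (System.ServiceModel.CommunicationException)
            {
                // Went offline mid-flush: keep whatever is left for the next attempt.
            }
            finally
            {
                File.WriteAllLines(_queueFile, pending.ToArray());
            }
        }
    }

The app could also hook NetworkChange.NetworkAvailabilityChanged (or a timer) to call TryFlush whenever connectivity returns.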

I haven't found a great need for the queue so far. Maybe it's just that I'm not seeing a use for it in my app. It could also be that the data you can store in a queue message is minimal: you basically store short text strings (like record IDs), and then you have to do something with the ID when you pull it from the queue, such as look it up, delete it, whatever.
In my app, I didn't use the queue at all, just as Peter suggests. I wrote directly to table storage (accessed via its REST interface using StorageClient) from the client. If you want to look at a concrete example, take a look at http://www.netalerts.mobi/traffic. Like you, I wanted to learn Azure, so I built a small web site.
There's a worker role that wakes up every 60 seconds. Using one thread, it retrieves any new data from its source (screen scraping a web page). New entries are stored directly in table storage (no need for a queue). Another thread deletes entries in table storage that are older than a specified threshold (there's no issue with running multiple threads against table storage). And then I'm working on a third thread which is designed to send notifications to handheld devices.
The app itself is a web role, obviously.
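The worker-role part of that is essentially a polling loop. A minimal sketch of the pattern (the fetch/purge helpers below are hypothetical placeholders, not the actual site's code):

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            while (true)
            {
                // Pull any new data from the source and write it straight to table storage.
                FetchNewEntriesAndStore();

                // Purge table-storage entries older than the retention threshold.
                DeleteExpiredEntries(TimeSpan.FromDays(1));

                Thread.Sleep(TimeSpan.FromSeconds(60)); // wake up every 60 seconds
            }
        }

        private void FetchNewEntriesAndStore() { /* screen-scrape the source, insert entities */ }
        private void DeleteExpiredEntries(TimeSpan maxAge) { /* query and delete old entities */ }
    }

The site described above runs the fetch, purge and notification work on separate threads; a single loop is shown here only to keep the sketch short.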

Using Azure IoT - telemetry from a Windows desktop application

I work for a company that manufactures large scientific instruments, with a single instrument having 100+ components: pumps, temperature sensors, valves, switches and so on. I write the WPF desktop software that customers use to control their instrument, which is connected to the PC via a serial or TCP connection. The concept is the same though - to change a pump's speed, for example, I would send a "command" to the instrument, where an FPGA and custom firmware take care of handling that command. The desktop software also needs to display dozens of "readback" values (temperatures, pressures, valve states, etc.), which are again retrieved by issuing a "command" to request a particular readback value from the instrument.
We're considering implementing some kind of telemetry service, whereby the desktop application will record maybe a couple of dozen readback values, each having its own interval - weekly, daily, hourly, per minute or per second.
Now, I could write my own telemetry solution, whereby I record the data locally to disk then upload to a server (say) once a week, but I've been wondering if I could utilise Azure IoT for collecting the data instead. After wading through the documentation and concepts I'm still none the wiser! I get the feeling it is designed for "physical" IoT devices that are directly connected to the internet, rather than data being sent from a desktop application?
Assuming this is feasible, I'd be grateful for any pointers to the relevant areas of Azure IoT. Also, how would I map a single instrument and all its components (valves, pumps, etc) to an Azure IoT "device"? I'm assuming each component would be a device, in which case is it possible to group multiple devices together to represent one customer instrument?
Finally, how is the collected data reported on? Is there something built-in to Azure, or is it essentially a glorified database that would require bespoke software to analyse the recorded data?
Azure IoT would give you:
Device SDKs for connecting (MQTT or AMQP), sending telemetry, receiving commands, receiving messages, reporting properties, and receiving property update requests.
An HA/DR service (IoT Hub) for managing devices and their authentication, configuring telemetry routes (where to route the incoming messages).
Service SDKs for managing devices, sending commands, requesting property updates, and sending messages.
If it matches your solution, you could also make use of the Device Provisioning Service, where devices connect and are assigned an IoT hub. This would make sense, for instance, if you have devices around the world and wish to have them connect to the closest IoT hub you have deployed.
Those are the building blocks. You'd integrate the device SDK into your WPF app. It doesn't have to be a physical device, but the fact that it has access to sensor data makes it behave like one, and that seems like a good fit. Then you'd build a service app using the Service SDKs to manage the fleet of WPF apps (each of which represents an instrument with components, right?). For monitoring telemetry, it would depend on how you choose to route it. By default, it goes to an Event Hub instance created for you. You'd use the Event Hubs SDK to subscribe to those messages. Alternatively, or in addition, those telemetry messages could be routed to Azure Storage where you could perform historical analysis. There are other routing options.
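As a rough illustration of the device-side piece (the connection string and the JSON payload shape are placeholders), the WPF app could use the Microsoft.Azure.Devices.Client SDK to push each readback value as a telemetry message:

    using System;
    using System.Globalization;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Azure.Devices.Client;

    static class InstrumentTelemetry
    {
        private static readonly DeviceClient Client = DeviceClient.CreateFromConnectionString(
            "<device-connection-string>", TransportType.Mqtt);

        public static async Task SendReadbackAsync(string component, string name, double value)
        {
            // One telemetry message per readback; "component" identifies the pump/valve/sensor.
            string json = string.Format(CultureInfo.InvariantCulture,
                "{{\"component\":\"{0}\",\"name\":\"{1}\",\"value\":{2},\"ts\":\"{3:o}\"}}",
                component, name, value, DateTime.UtcNow);

            using (var message = new Message(Encoding.UTF8.GetBytes(json)))
            {
                await Client.SendEventAsync(message);
            }
        }
    }

In the framing above, each WPF app/instrument would be one IoT Hub device identity, with the component name carried in the message body (or in message properties) rather than every pump and valve being registered as its own device.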
Does that help?

Duplex messaging or Azure Queue Service

All,
We have a requirement to develop an Azure-based platform in which the user can configure multiple pharmaceutical instruments, start measurements on them and analyze the measured data. The typical components of the Azure-based platform will be the following:
1 - A .NET 4 based client application running on the computer connected to each instrument. This client application should receive the start-measurement command from the Azure platform, perform the measurement and send the result back to Azure.
2 - A set of services [probably REST based] which will get the results from the client application and update the database in the cloud.
3 - A set of services and business logic which can be used to perform analysis on the data.
4 - An ASP.NET web application where the user can view instrument details, start measurements, etc.
There is two-way communication between the Azure platform and the client application, i.e. the client needs to send the results back to Azure and Azure needs to initiate a measurement on the instrument via the client application.
In such a scenario, what is the recommended approach for the Azure platform to communicate with the clients? Is it one of the following:
1 - Create a duplex service between the client and server and provide a callback interface to start the measurement.
2 - Create a command queue using an Azure message queue for each client. When a measurement needs to be started, a message is put on the queue. The client app continually reads from the queue and executes the command.
Or are there other ways to do this? Any help is appreciated.
We do not fully understand your scenario and the constraints around it, but as a pointer: we have seen a lot of customers use Azure storage queues to implement a master-worker scenario, where some component adds a message to the appropriate queue to get work done (take measurements, in your case) and workers poll the queue to process that work (the client computer connected to your instrument, in this case).
In terms of storing the results back, your master component could provide SAS access so the client can write results back to a specific blob in an Azure storage account, and have your service and business logic monitor the existence of that blob to start your analysis.
The above approach decouples your client from the server and makes the communication asynchronous via storage. Again, these are just pointers, and you are the best person to pick the approach that suits your requirements.
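A minimal sketch of that queue-per-client idea using the WindowsAzure.Storage SDK (the queue name, connection string and command format are placeholders):

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    class MeasurementWorker
    {
        static void Main()
        {
            CloudStorageAccount account = CloudStorageAccount.Parse("<storage-connection-string>");
            CloudQueue queue = account.CreateCloudQueueClient()
                                      .GetQueueReference("instrument-42-commands");
            queue.CreateIfNotExists();

            // Master side (runs in the cloud): enqueue a "start measurement" command.
            queue.AddMessage(new CloudQueueMessage("start-measurement:sample-7"));

            // Worker side (the PC attached to the instrument): poll and execute commands.
            while (true)
            {
                CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(5)); // hidden while processing
                if (msg == null) { Thread.Sleep(TimeSpan.FromSeconds(10)); continue; }

                ExecuteMeasurement(msg.AsString); // drive the instrument, then upload results (e.g. via SAS to a blob)
                queue.DeleteMessage(msg);         // remove only after the work succeeded
            }
        }

        static void ExecuteMeasurement(string command) { /* instrument-specific logic */ }
    }

In practice the enqueue call would live in your Azure-side service and the polling loop in the client application on the instrument PC; they are shown together here only for brevity.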
For communication between the server and the client, you could also use SignalR (http://signalr.net/). There are two forms of messaging systems supported "as a service" on Azure: Service Bus and Message Queues - see http://msdn.microsoft.com/en-us/library/hh767287.aspx

Event notification on new e-mail in IBM Domino

Is it possible to subscribe to mail events on an IBM Domino server?
I need a service similar to the one provided by Microsoft Exchange Event Notification, where you can subscribe to events and get notified when there are changes - e.g. the arrival of a new e-mail. I need the solution to be server side, since I can't rely on users having their client running.
Unfortunately, as per my comment above, there is no pre-packaged equivalent to the push, pull and streaming subscription services that EWS supports. A Notes client can get notifications via the Notes RPC protocol, and there's also obviously some technology in IBM's Notes Traveler mobile product, but nothing that I'm aware of as a pre-packaged web service or even as a notifications API. You would have to build it. There are a variety of ways you could go about it.
For push or streaming subscriptions, one way would be a Notes C API plugin using the Extension Manager, running on the server and monitoring the mailboxes. You might be able to use a DSAPI plugin into Domino's HTTP stack to manage the incoming connections and feed the data out to subscribers, but honestly I have no idea whether Domino's HTTP stack can handle the persistent connections that are implied in the subscription model. Alternatively, the Extension Manager plugin could quickly send the data over to code written in any other language you want, running on any web stack you want. Of course, you'll have to deal with security through all the linked-together parts.
For pull subscriptions, it's really more of a polling architecture, with state saved somewhere so that only changes since the last call are delivered. You have any number of options for that. You could use Domino's built-in HTTP server, obviously, so you could write your own Domino-hosted web service for this. You could also use the Domino Data Service, which is a REST API, to do this - with all necessary state information being stored on the client side. (On a quick look, I don't see a good option for getting all new docs since a specified date-time via the Domino Data Service, but it might be possible.)
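A very rough sketch of that pull/polling idea in C# (the URL and the "since" query parameter are hypothetical placeholders - the exact Domino Data Service endpoints and filters would have to be checked against the documentation):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class MailPoller
    {
        private static readonly HttpClient Http = new HttpClient();
        private DateTime _lastSeenUtc = DateTime.UtcNow;   // state lives on the subscriber side

        public async Task PollOnceAsync()
        {
            // Hypothetical REST call: "give me documents changed since _lastSeenUtc".
            string url = "https://domino.example.com/mail/user.nsf/api/data/documents"
                       + "?since=" + Uri.EscapeDataString(_lastSeenUtc.ToString("o"));

            string json = await Http.GetStringAsync(url);
            // Parse the JSON and raise "new mail" events for any documents returned...
            _lastSeenUtc = DateTime.UtcNow;
        }
    }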
I do worry a bit about scalability of any custom solution for this. My understanding is that Microsoft has quite a bit of caching and optimization in their services in order to address scale. Obviously, you can build whatever you need for that into your own web service, but it will likely add a lot of effort.

Do I really need reliable sessions for my services? (description inside)

Our company leases a music service to its clients. The product consists of an automated MP3 player and daily renewals/updates of the customer's music library (MP3 songs) downloaded to their machines. So far we use an ugly solution for the MP3 updates, synchronizing server and client folders using GBridge. This is obviously a disadvantage, as we force our clients to download our whole music library (currently 25,000 songs) while most of them will never play songs from all of our music categories (pop, rock, etc.). Most importantly, we can only offer one subscription package (our whole music library) while our competitors offer packages by category at lower prices. For those reasons we decided to turn to WCF.
The service uses PerCall instancing mode and implements two operations, invoked from a winform client application with the classic request-reply pattern.
The first operation retrieves from a database the categories a client is allowed to download from (request) and sends back to the client a list of these categories (reply).
The second operation is used for downloading. The client first downloads an XML version of the server's database. A similar XML file lies on the client side. The client app checks which songs, in each of the categories returned from the first operation, are missing from its own XML compared to the server's XML file. If any files (elements in the XML) are missing, it downloads them one file at a time. After each download, the client updates its XML and does the same comparison again until all files (elements) match in the two XML files.
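A sketch of that client-side comparison (the song/id/category element and attribute names are assumptions, not the actual schema):

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    static class CatalogueDiff
    {
        // Songs present in the server's XML but missing from the local XML,
        // restricted to the categories the client is allowed to download.
        public static List<string> GetMissingSongIds(
            string serverXmlPath, string clientXmlPath, ISet<string> allowedCategories)
        {
            var localIds = new HashSet<string>(
                XDocument.Load(clientXmlPath).Descendants("song")
                         .Select(s => (string)s.Attribute("id")));

            return XDocument.Load(serverXmlPath).Descendants("song")
                .Where(s => allowedCategories.Contains((string)s.Attribute("category")))
                .Select(s => (string)s.Attribute("id"))
                .Where(id => !localIds.Contains(id))
                .ToList();
        }
    }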
Long story short: the instancing mode on the service is PerCall for throughput reasons and to keep memory consumption low, and both of my operations use the request-reply pattern, which means that acknowledgement messages are sent back to the client with each response from the service. So if something goes wrong with the connection, or the client can't reach the service, I can catch the CommunicationObjectFaultedException on the client, reconstruct the proxy and retry. Given that, do you think there's a need for reliable sessions in my service implementation? What problems could arise if I don't have reliable sessions in the operations just described?
What problems could arise if I don't have reliable sessions in the operations just described?
I am aware of only a few problems being solved by reliable sessions, while they put a lot of stress on the server.
I would personally go for BasicHttpBinding (for better interoperability) without reliable sessions.
UPDATE
In order to understand Reliable Sessions, have a read of this and this.
If you are a bank sending money to and from other banks, it makes sense to use Reliable Sessions; this will ensure the message is received by the final party involved. But in most cases, you would not need it.
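As an illustration of the pattern the question already describes (catch the fault, rebuild the proxy, retry) over BasicHttpBinding with no reliable session, here is a minimal sketch; the IMusicService contract and the endpoint address are placeholders:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IMusicService
    {
        [OperationContract]
        string[] GetAllowedCategories(string clientId);
    }

    static class MusicServiceCaller
    {
        public static string[] GetCategoriesWithRetry(string clientId, int maxAttempts = 3)
        {
            for (int attempt = 1; ; attempt++)
            {
                var factory = new ChannelFactory<IMusicService>(
                    new BasicHttpBinding(),                                   // no reliable session
                    new EndpointAddress("http://example.com/MusicService"));  // placeholder address
                try
                {
                    IMusicService proxy = factory.CreateChannel();
                    string[] result = proxy.GetAllowedCategories(clientId);
                    factory.Close();
                    return result;
                }
                catch (CommunicationException) when (attempt < maxAttempts)
                {
                    factory.Abort();   // discard the faulted proxy and retry with a fresh one
                }
            }
        }
    }

If the retry budget is exhausted, the last CommunicationException simply propagates to the caller, which is usually what you want in a request-reply client.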

Tools to monitor and debug SaaS Services

What tools will come in handy to debug and monitor SaaS services built on WCF in a production environment?
FYI - No access to the actual server whatsoever. No remoting in, and no access to the file system.
There are dozens of 'dotcom monitors' (e.g. site24x7.com), but they can only monitor parameters that are publicly available, like site uptime, response times, etc.
If you want to monitor memory usage and other parameters known only from the 'inside', then you have two choices: either install some monitoring agent on the server (in most cases this would be a pain), or send 'signals' from your code to some external event handling and notification service. I recommend AlertGrid (http://alert-grid.com) for the latter purpose; it is very flexible and extremely easy to integrate.
AlertGrid doesn't require installation or access to the file system; it just gathers the data you send and lets you build notification rules. Examples:
you can send some parameter like memory usage and build a rule 'if memory_usage > threshold -> send SMS to admin' (see the sketch below)
you can send data related to your application. If you have an application processing orders, you can send the number of processed orders in the signal and build notification rules around that
if you have some logic triggered periodically (cron, a Windows service) you can send a signal each time your logic executes, to check that it is running on schedule
(I am a developer on AlertGrid's team; in case of any questions, please feel free to ask.)
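As a rough illustration of the "send signals from your code" idea (the endpoint URL, API key and payload shape below are entirely hypothetical placeholders, not AlertGrid's actual API):

    using System.Diagnostics;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    static class MonitoringSignals
    {
        private static readonly HttpClient Http = new HttpClient();

        // Report the process's current memory usage to an external monitoring/notification service.
        public static async Task SendMemoryUsageSignalAsync()
        {
            long workingSetBytes = Process.GetCurrentProcess().WorkingSet64;
            string payload = "{\"apiKey\":\"<your-key>\",\"signal\":\"memory_usage\",\"value\":" +
                             workingSetBytes + "}";

            // Placeholder endpoint; the real service's URL and schema will differ.
            await Http.PostAsync("https://monitoring.example.com/api/signals",
                                 new StringContent(payload, Encoding.UTF8, "application/json"));
        }
    }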
What exactly do you want to monitor? If you only care about availability then good old ping might be enough :)