Persistent Connection to Web Server (Like AJAX on Web) - objective-c

I want to create a program that talks to a CometD server so that data can be pushed to the app.
I have done this on the web side using AJAX, but I am a little unsure of the best way to do it with Cocoa.
I can make a standard connection using NSURLRequest and NSURLConnection, but how do I keep that connection alive so I can send data when needed and receive the pushed data when it arrives?
Am I even going about this the correct way?
Thanks in advance

In terms of push notifications, if the HTTP server does not close the connection, the NSURLConnection will stay open and you will keep getting data. Note that if you are designing something like that, you must use the asynchronous NSURLConnection methods, as a synchronous connection will not return until the server closes the connection.
As for sending more data, NSURLConnection is really not designed for that. If you want to push more data over a single HTTP request after you have sent it (which seems like a pretty bad idea to me), you are going to have to roll your own HTTP stack or find some open-source component you can use.
Note that NSURLConnection will use keep-alive and other features as it deems appropriate, so if you start multiple logical connections to the same host in your app, they may end up sharing the same keep-alive connection on the wire.
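For illustration, here is a minimal sketch of that asynchronous approach, assuming a hypothetical push endpoint at example.com; the delegate's connection:didReceiveData: is invoked each time the server pushes another chunk, for as long as the server leaves the connection open.

    #import <Foundation/Foundation.h>

    @interface PushListener : NSObject <NSURLConnectionDataDelegate>
    @end

    @implementation PushListener

    - (void)start {
        // Hypothetical CometD-style endpoint that keeps the response open.
        NSURL *url = [NSURL URLWithString:@"http://example.com/push"];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
        request.timeoutInterval = 300; // generous idle timeout between pushes
        // Asynchronous: starts immediately and reports back via the delegate.
        [NSURLConnection connectionWithRequest:request delegate:self];
    }

    // Called every time the server pushes another chunk of data.
    - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
        NSString *chunk = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
        NSLog(@"Pushed: %@", chunk);
    }

    // Called only when the server finally closes the connection.
    - (void)connectionDidFinishLoading:(NSURLConnection *)connection {
        NSLog(@"Server closed the connection");
    }

    - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
        NSLog(@"Connection failed: %@", error);
    }

    @end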

Related

How to handle the application if connection breaks in between a web service call

In several interviews I have been asked about handling connections, web service calls, and server responses, and even now I am not clear about many things. Could you please help me get a better idea about the following scenarios?
What is the advantage of using NSURLSessionDataTask instead of NSURLConnection? My understanding is that data loss will not happen with NSURLSessionDataTask even if the connection breaks, but it will with NSURLConnection. How does that work?
If the connection breaks after sending the request to the server, or while connecting to the server, how can we handle it on our end with NSURLConnection and NSURLSessionDataTask? My idea is to use the Reachability classes and check when the connection comes back online.
The data we are sending got updated on the server side, but we never receive the response from the server. What can we do on our side to handle this situation? Is increasing timeoutInterval the only option?
Please help me with these scenarios. Thank you very much in advance!
That's multiple questions, really, but I'll try to answer them all briefly.
Most failure handling is the same between NSURLConnection and NSURLSession. The main advantages of the latter are support for background downloads and cancelling groups of related requests.
That said, if you're doing a large download that you think might fail, NSURLSession does provide download tasks that let you resume the download if your network connection fails, similar to what NSURLDownload used to do on OS X (never available on iOS). This only helps for downloading large files, though, not for large uploads (which require significant server-side support to resume) or other requests.
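For example, the resume data can be pulled out of the failure and handed back to the session later; the URL below is hypothetical, and in a real app you would typically kick off the resume once reachability reports that the host is back.

    // Start a download and keep the resume data if it fails partway through.
    NSURLSession *session = [NSURLSession sessionWithConfiguration:
                             [NSURLSessionConfiguration defaultSessionConfiguration]];
    NSURL *url = [NSURL URLWithString:@"https://example.com/large-file.zip"];

    NSURLSessionDownloadTask *task =
        [session downloadTaskWithURL:url
                   completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
        NSData *resumeData = error.userInfo[NSURLSessionDownloadTaskResumeData];
        if (resumeData) {
            // Pick up where the download left off instead of starting over.
            NSURLSessionDownloadTask *resumed =
                [session downloadTaskWithResumeData:resumeData
                                  completionHandler:^(NSURL *loc, NSURLResponse *resp, NSError *err) {
                NSLog(@"Resumed download finished at %@", loc);
            }];
            [resumed resume];
        }
    }];
    [task resume];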
Your intuition is correct. When a connection fails, create a reachability object monitoring that particular hostname so you know when it would be a good time to try again, and then retry the request.
You might also display some sort of advisory UI to say that you have no Internet connection. (By advisory, I mean something that the user doesn't have to click on and that does not impact offline use of the app any more than necessary; look at the Facebook app for a great example.)
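A bare-bones sketch of the reachability part, using SCNetworkReachability directly (the hostname is hypothetical); Apple's Reachability sample class wraps essentially the same calls.

    #import <Foundation/Foundation.h>
    #import <SystemConfiguration/SystemConfiguration.h>

    // Invoked whenever the reachability of the watched host changes.
    static void HostReachabilityChanged(SCNetworkReachabilityRef target,
                                        SCNetworkReachabilityFlags flags,
                                        void *info) {
        if (flags & kSCNetworkReachabilityFlagsReachable) {
            // Good moment to retry the request that failed earlier.
            NSLog(@"Host is reachable again - retrying");
        }
    }

    void WatchHost(void) {
        SCNetworkReachabilityRef reachability =
            SCNetworkReachabilityCreateWithName(NULL, "api.example.com");
        SCNetworkReachabilityContext context = {0, NULL, NULL, NULL, NULL};
        SCNetworkReachabilitySetCallback(reachability, HostReachabilityChanged, &context);
        SCNetworkReachabilityScheduleWithRunLoop(reachability,
                                                 CFRunLoopGetMain(),
                                                 kCFRunLoopDefaultMode);
    }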
Provide a unique identifier when you make the request, and store that on the server along with the server's response until the client acknowledges receipt of the response (or purge it anyway after some reasonable number of days). When the upload finishes, the server gives you back its response if it can.
If something goes wrong, the client asks the server to resend the response associated with that unique identifier. Once your client has the data, it acknowledges receipt and the server deletes the response. If you ask the server for the response and it doesn't have one, then the upload didn't really complete.
With some additional work, this approach can make it possible to support long-running uploads more reliably. If an upload fails, ask the server how much data it got for that identifier, then tell the server that you're going to upload new data starting at the next byte. On the server side, overwrite the old data starting at that byte (just in case some data was still being written when you asked for the length).
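Sketched very roughly below: the X-Request-ID header and the /responses/{id} recovery endpoint are invented purely to illustrate the idea, not part of any real API.

    // Tag the upload with a client-generated identifier so the response can be
    // fetched again if the first attempt never delivers it.
    NSString *requestID = [[NSUUID UUID] UUIDString];
    NSData *payload = [@"example payload" dataUsingEncoding:NSUTF8StringEncoding];

    NSMutableURLRequest *upload = [NSMutableURLRequest requestWithURL:
        [NSURL URLWithString:@"https://example.com/upload"]];
    upload.HTTPMethod = @"POST";
    [upload setValue:requestID forHTTPHeaderField:@"X-Request-ID"];
    upload.HTTPBody = payload;

    [[[NSURLSession sharedSession] dataTaskWithRequest:upload
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error) {
            // The upload may still have reached the server; ask for the stored
            // response for this identifier instead of blindly re-sending.
            NSURL *recovery = [NSURL URLWithString:[NSString stringWithFormat:
                @"https://example.com/responses/%@", requestID]];
            [[[NSURLSession sharedSession] dataTaskWithURL:recovery
                    completionHandler:^(NSData *d, NSURLResponse *r, NSError *e) {
                // A 404 here means the upload never completed, so retry it.
            }] resume];
        }
    }] resume];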
Hope that helps.

Async WCF and Protocol Behaviors

FYI: This will be my first real foray into Async/Await; for too long I've been settling for the familiar territory of BackgroundWorker. It's time to move on.
I wish to build a WCF service, self-hosted in a Windows service running on a remote machine in the same LAN, that does this:
Accepts a request for a single .ZIP archive
Creates the archive and packages several files
Returns the archive as its response to the request
I have to support archives as large as 10GB. Needless to say, this scenario isn't covered by basic WCF designs; we must take additional steps to meet the requirement. We must eliminate timeouts while the archive is building and memory errors while it's being sent. Both of these occur under basic WCF designs, depending on the size of the file returned.
My plan is to proceed using task-based asynchronous WCF calls and streaming mode.
I have two concerns:
Is this the proper approach to the problem?
Microsoft has done a nice job at abstracting all of this, but what of the underlying protocols? What goes on 'under the hood?' Does the server keep the connection alive while the archive is building (could be several minutes) or instead does it close the connection and initiate a new one once the operation is complete, thereby requiring me to properly route the request through the client machine firewall?
For #2, clearly I'm hoping for the former (keep-alive). But after some searching I'm not easily finding an answer. Perhaps you know.
You need streaming for big payloads. That is the right approach. This has nothing at all to do with asynchronous IO. The two are independent. The client cannot even tell that the server is async internally.
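For reference, a minimal sketch of what that streamed setup might look like; the contract, names, and limits below are illustrative, not taken from the service in question.

    using System;
    using System.IO;
    using System.ServiceModel;

    [ServiceContract]
    public interface IArchiveService
    {
        // The server writes the finished .zip into the returned stream.
        [OperationContract]
        Stream BuildArchive(string archiveName);
    }

    public static class ArchiveBinding
    {
        public static NetTcpBinding Create()
        {
            return new NetTcpBinding
            {
                // Only the response (the big archive) is streamed.
                TransferMode = TransferMode.StreamedResponse,
                MaxReceivedMessageSize = long.MaxValue,
                // Generous timeouts so a multi-minute build plus transfer survives.
                SendTimeout = TimeSpan.FromHours(2),
                ReceiveTimeout = TimeSpan.FromHours(2)
            };
        }
    }

With streaming, the archive is written to and read from the wire in chunks, so neither side has to hold 10GB in memory, and the long build time simply shows up as a delay before the first bytes of the response arrive.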
I'll add my standard answers for whether to use async IO or not:
https://stackoverflow.com/a/25087273/122718 Why does the EF 6 tutorial use asynchronous calls?
https://stackoverflow.com/a/12796711/122718 Should we switch to use async I/O by default?
Each request runs over a single connection that is kept alive. This goes both for streaming large amounts of data and for long initial delays. I'm not sure why you are concerned about routing; does your router kill such connections? If so, that's a problem.
Regarding keep-alive, nothing needs to go over the wire to maintain it. TCP sessions can stay open indefinitely without any wire traffic.

Managing WCF Duplex Callback connections for Silverlight frontend

Perhaps I'm going about this the wrong way, but here's my current "setup".
I have a Silverlight client that uses Caliburn.Micro and a MEF container with a "LoadCatalog" class, to keep everything loosely coupled in an MVVM way.
I have a "common" dll where all the interfaces are kept.
All my views and viewmodels are separate projects, that only have a reference to the common dll.
The viewmodels use WCF (regular) to communicate to the backend. The frontend itself has a duplex connection to the backend.
Now here's where the question comes to mind. Whenever the backend thinks it's time to have a new screen appear at the frontend, it uses the callback channel to tell the frontend to load the next screen.
Does this seem like a good pattern to use? Or should I leave the management of what screen to load when to the frontend? I think it's nice to have this in the backend, but perhaps this is some kind of anti-pattern I'm not aware of, hence the question.
Now, for argument's sake, let's say I want to keep this in the backend.
What would be the best way to go about managing the collection of callback channels on the backend? If I enable SessionMode.Required on all the regular WCF endpoints, as well as the duplex channel, does this persist state together over multiple endpoints (regular+duplex)? Or will this persist state only within a single endpoint?
My guess (from the tests I have been able to do so far) is that I need to add some logic, for example providing the frontend with a GUID as soon as the callback connection is made, and then using that GUID in the regular endpoint connections so the backend knows which "client" it is.
And would I ever be able to reliably collect all the channels and detect their current state if I kept a collection of the callback channels I receive? I can intercept the callback channel now (just one instance at the moment, no collection or anything, so a single user) and use it to tell the frontend what to do. But sometimes when the client stops abruptly (in other words, when an error occurs) and I start the client again, it seems like the previous (faulted?) connection is still being reused, without success, so the communication flow stops after connecting to the duplex endpoint.
Does this make any sense?
Hope there's someone with some experience in the matter who can shed some light on this for me. I'm no total newbie, but with regard to multiple connections and keeping them separated, I might need some pointers in the right direction.
Thanks!
Huron.
I managed to get this up and running.
When I create the duplex connection from the frontend to the backend I return a unique Guid. From now on I use this guid for all communication I do with the backend.
This makes the backend "recognize" the client.
In the backend I have a list of connections (grabbed the callback channel and stored it together with the Guid).
Just had to make sure to lock the list object whenever I iterate it or when I add/remove items from it, since it will be used from multiple threads by design.
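A rough sketch of how such a registry might look on the backend; ICallback and the method names below are placeholders, not the actual code from this project.

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;

    public interface ICallback
    {
        [OperationContract(IsOneWay = true)]
        void ShowScreen(string screenName);
    }

    public static class CallbackRegistry
    {
        private static readonly object Gate = new object();
        private static readonly Dictionary<Guid, ICallback> Clients =
            new Dictionary<Guid, ICallback>();

        // Called from the duplex endpoint when a client connects; the returned
        // Guid is what the client passes on every regular endpoint call.
        public static Guid Register()
        {
            var id = Guid.NewGuid();
            var callback = OperationContext.Current.GetCallbackChannel<ICallback>();
            lock (Gate) { Clients[id] = callback; }
            return id;
        }

        // Called from any endpoint that knows the client's Guid.
        public static void PushScreen(Guid id, string screenName)
        {
            ICallback callback;
            lock (Gate) { if (!Clients.TryGetValue(id, out callback)) return; }
            try { callback.ShowScreen(screenName); }
            catch (CommunicationException)
            {
                // Faulted channel (client crashed or disconnected): drop it so a
                // reconnecting client gets a fresh registration instead.
                lock (Gate) { Clients.Remove(id); }
            }
        }
    }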
The pattern to take control from the backend seems to work great so far.

WCF Server Push connectivity test. Ping()?

Using techniques as hinted at in:
http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.callbackcontract.aspx
I am implementing a ServerPush setup for my API to get realtime notifications from a server of events (no polling). Basically, the Server has a RegisterMe() and UnregisterMe() method and the client has a callback method called Announcement(string message) that, through the CallbackContract mechanisms in WCF, the server can call. This seems to work well.
Unfortunately, in this setup, if the Server were to crash or is otherwise unavailable, the Client won't know since it is only listening for messages. Silence on the line could mean no Announcements or it could mean that the server is not available.
Since my goal is to reduce polling rather than to improve immediacy, I don't mind adding a void Ping() method on the Server alongside RegisterMe() and UnregisterMe() that merely exists to test connectivity to the server. Periodically calling this method would, I believe, ensure that we're still connected (and also that no Announcements have been dropped by the transport, since this is TCP).
But is the Ping() method necessary, or is this connectivity test otherwise available as part of WCF by default - something like serverProxy.IsStillConnected()? As I understand it, the channel's State would only return Faulted or Closed AFTER a failed Ping(), not instead of it.
From a broader perspective, is this callback approach solid? This is not for HTTP or AJAX - the number of connected clients will be few (tens of clients, max). Are there serious problems with this approach? Since this seems to be a mild risk, how can I keep a slow or malicious client from blocking the server by not processing its callback queue fast enough? Is there a kind of timeout specific to the callback that I can set without affecting other operations?
Your approach sounds reasonable, here are some links that may or may not help (they are not quite exactly related):
Detecting Client Death in WCF Duplex Contracts
http://tomasz.janczuk.org/2009/08/performance-of-http-polling-duplex.html
Having some health check built into your application protocol makes sense.
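As a sketch, the contract shape might look like this; RegisterMe, UnregisterMe, and Announcement come from the question, while the interface names and the idea of treating a failed Ping() as "connection lost" are just illustrative.

    using System.ServiceModel;

    [ServiceContract(CallbackContract = typeof(IAnnouncementCallback))]
    public interface INotificationService
    {
        [OperationContract] void RegisterMe();
        [OperationContract] void UnregisterMe();

        // Cheap round trip: if this throws (TimeoutException or
        // CommunicationException), the client knows the channel is gone
        // and can re-register.
        [OperationContract] void Ping();
    }

    public interface IAnnouncementCallback
    {
        [OperationContract(IsOneWay = true)]
        void Announcement(string message);
    }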
If you are worried about malicious clients, then add authorization.
The second link I shared above has a sample pub/sub server; you might be able to use this code. A couple of things to watch out for: consider pushing notifications via async calls or on a separate thread, and set the sendTimeout on the TCP binding.
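Roughly, those two safeguards might look like this, assuming the IAnnouncementCallback contract sketched above; the timeout value is arbitrary.

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.Threading.Tasks;

    public static class Announcer
    {
        // A bounded SendTimeout makes a stuck callback channel fail fast
        // instead of hanging a server thread indefinitely.
        public static NetTcpBinding CreateBinding()
        {
            return new NetTcpBinding { SendTimeout = TimeSpan.FromSeconds(30) };
        }

        // Push each announcement on its own task so one slow or dead client
        // cannot hold up delivery to the rest.
        public static void Broadcast(IEnumerable<IAnnouncementCallback> clients, string message)
        {
            foreach (var client in clients)
            {
                var c = client;
                Task.Run(() =>
                {
                    try { c.Announcement(message); }
                    catch (CommunicationException) { /* unregister the dead client */ }
                    catch (TimeoutException) { /* slow client: same treatment */ }
                });
            }
        }
    }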
HTH
I wrote a WCF application and encountered a similar problem. My server checked that clients had not "pulled the plug" by periodically sending them a ping. The actual send method (asynchronous, this being a server) had a timeout of 30 seconds. The client simply checked that it received the data every 30 seconds, while the server would catch an exception if the timeout was reached.
Authorisation was required to connect to the server (using the built-in feature of WCF that forces the connecting party to call a particular method first), so from a malicious-client perspective you could easily add code to check for and ban their account if they do something suspicious, while disconnecting users who do not authenticate.
As the server I wrote was asynchronous, there wasn't any way to really block it. I guess that addresses your last point, as the asynchronous send method fires off the ping (and any other sending of data) and returns immediately. In the SendEnd method it would catch the timeout exception (sometimes multiple for the client) and disconnect them, without any blocking or freezing of the server.
Hope that helps.
You could use a publisher / subscriber service similar to the one suggested by Juval:
http://msdn.microsoft.com/en-us/magazine/cc163537.aspx
This would allow you to persist the subscribers if losing the server is a typical scenario. The publish method in this example also calls each subscriber on a separate thread, so a few dead subscribers will not block the others.

Best approach for WCF client

I have a client application that uses a WCF service to insert some data into a backend database. The client application is going to call the service on a per-event basis (it could be every hour or every second).
I'm wondering what's the best way of calling that service.
Should I create communication channel and keep it open all the time, or should I close channel after each call and create it again?
The first question is whether your server needs to maintain any state about the client directly (i.e. are you doing session-like transactions?) If you are, you will need to be able to manage how the server holds the information between communications.
My initial feeling is that if there is no need to leave a connection open, then close it after each call and recreate a new connection on demand. This avoids issues where a connection can be left in a faulted state between calls. The overhead of creating and destroying connections is minimal, and it will (probably) save you a lot of debugging time when something goes wrong.
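A sketch of that close-after-each-call pattern; the contract, endpoint name, and operation below are placeholders, and the ChannelFactory is cached because it is the expensive part while individual channels are cheap.

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IDataService
    {
        [OperationContract]
        void Insert(string payload);
    }

    public class DataClient
    {
        // Creating the factory is costly; creating channels from it is not.
        private static readonly ChannelFactory<IDataService> Factory =
            new ChannelFactory<IDataService>("DataServiceEndpoint"); // endpoint name from config

        public void Send(string payload)
        {
            var channel = Factory.CreateChannel();
            try
            {
                channel.Insert(payload);
                ((IClientChannel)channel).Close();
            }
            catch (CommunicationException)
            {
                ((IClientChannel)channel).Abort(); // never Close a faulted channel
                throw;
            }
            catch (TimeoutException)
            {
                ((IClientChannel)channel).Abort();
                throw;
            }
        }
    }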
I would think you probably want to implement a keep-alive pattern, with a configurable duration that tells your underlying mechanism to close the connection once there has been zero communication activity for longer than that duration.
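If you go that route, a generic wrapper along these lines (purely a sketch, with an arbitrary idle timeout) could reuse one channel across bursts of calls and close it automatically after a configurable quiet period.

    using System;
    using System.ServiceModel;
    using System.Threading;

    public class IdleClosingClient<TChannel> where TChannel : class
    {
        private readonly ChannelFactory<TChannel> _factory;
        private readonly TimeSpan _idleTimeout;
        private readonly Timer _idleTimer;
        private readonly object _gate = new object();
        private TChannel _channel;

        public IdleClosingClient(ChannelFactory<TChannel> factory, TimeSpan idleTimeout)
        {
            _factory = factory;
            _idleTimeout = idleTimeout;
            _idleTimer = new Timer(_ => CloseChannel());
        }

        // Runs one call against a live channel, then restarts the idle countdown.
        public void Call(Action<TChannel> action)
        {
            lock (_gate)
            {
                if (_channel == null ||
                    ((IClientChannel)_channel).State == CommunicationState.Faulted)
                {
                    _channel = _factory.CreateChannel();
                }
                action(_channel);
                _idleTimer.Change(_idleTimeout, Timeout.InfiniteTimeSpan);
            }
        }

        private void CloseChannel()
        {
            lock (_gate)
            {
                if (_channel == null) return;
                try { ((IClientChannel)_channel).Close(); }
                catch (CommunicationException) { ((IClientChannel)_channel).Abort(); }
                _channel = null;
            }
        }
    }

Calls would then look something like client.Call(svc => svc.Insert(data)), with the channel closed automatically after, say, a minute of inactivity.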