How to handle the application if the connection breaks in the middle of a web service call - objective-c

In several interviews I have been asked about handling connections, web service calls, and server responses, and even now I am not clear about many things. Could you please help me get a better idea of the following scenarios?
What is the advantage of using NSURLSessionDataTask instead of NSURLConnection? My understanding is that with NSURLSessionDataTask, data loss will not happen even if the connection breaks, but with NSURLConnection it will. How does that work?
If the connection breaks after sending the request to a server, or while connecting to the server, how can we handle this in our code for NSURLConnection and NSURLSessionDataTask? My idea is to use the Reachability classes and check when the network comes back online.
The data we send gets updated on the server side, but we never receive the response from the server. What can we do on our side to handle this situation? Is increasing the timeoutInterval the only thing we can do?
Please help me with these scenarios. Thank you very much in advance!!

That's multiple questions, really, but I'll try to answer them all briefly.
Most failure handling is the same between NSURLConnection and NSURLSession. The main advantages of the latter are support for background downloads and cancelling groups of related requests.
That said, if you're doing a large download that you think might fail, NSURLSession does provide download tasks that let you resume the download if your network connection fails, similar to what NSURLDownload used to do on OS X (never available on iOS). This only helps for downloading large files, though, not for large uploads (which require significant server-side support to resume) or other requests.
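As a minimal sketch of that resumable-download case (assuming a class with hypothetical downloadTask and resumeData properties; the URL is a placeholder), the flow looks roughly like this:
    // Start a large download with NSURLSession.
    - (void)startDownload {
        NSURL *url = [NSURL URLWithString:@"https://example.com/large-file.zip"]; // placeholder
        self.downloadTask = [[NSURLSession sharedSession] downloadTaskWithURL:url
            completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
                // On success, move the temporary file at `location` somewhere permanent.
            }];
        [self.downloadTask resume];
    }

    // When the connection looks dead, cancel but keep the partial data.
    - (void)pauseDownload {
        [self.downloadTask cancelByProducingResumeData:^(NSData *resumeData) {
            self.resumeData = resumeData;
        }];
    }

    // When the network comes back, pick up where the previous attempt stopped.
    - (void)resumeDownload {
        if (!self.resumeData) { return; }
        self.downloadTask = [[NSURLSession sharedSession]
            downloadTaskWithResumeData:self.resumeData
                     completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
                // Same handling as a fresh download.
            }];
        [self.downloadTask resume];
        self.resumeData = nil;
    }
Note that the server still has to support HTTP range requests (and the resource must not have changed in the meantime) for the resume to actually pick up where it left off rather than starting over.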
Your intuition is correct. When a connection fails, create a reachability object monitoring that particular hostname to see when it would be a good time to try the request again. Then, try the request again.
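For example, a minimal sketch using SCNetworkReachability directly (the hostname is a placeholder; in practice most apps wrap this in one of the common Reachability helper classes):
    #import <SystemConfiguration/SystemConfiguration.h>

    // Called whenever the reachability flags for the monitored host change.
    static void HostReachabilityChanged(SCNetworkReachabilityRef target,
                                        SCNetworkReachabilityFlags flags,
                                        void *info) {
        if (flags & kSCNetworkReachabilityFlagsReachable) {
            NSLog(@"Host is reachable again - retry the failed request now");
        }
    }

    - (void)watchHostAfterFailure {
        // Monitor the specific host the failed request was talking to (placeholder name).
        SCNetworkReachabilityRef reachability =
            SCNetworkReachabilityCreateWithName(kCFAllocatorDefault, "api.example.com");
        SCNetworkReachabilitySetCallback(reachability, HostReachabilityChanged, NULL);
        SCNetworkReachabilityScheduleWithRunLoop(reachability,
                                                 CFRunLoopGetMain(),
                                                 kCFRunLoopDefaultMode);
    }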
You might also display some sort of advisory UI to say that you have no Internet connection. (By advisory, I mean something that the user doesn't have to click on and that does not impact offline use of the app any more than necessary; look at the Facebook app for a great example.)
Provide a unique identifier when you make the request, and store that on the server along with the server's response until the client acknowledges receipt of the response (or purge it anyway after some reasonable number of days). When the upload finishes, the server gives you back its response if it can.
If something goes wrong, the client asks the server to resend the response associated with that unique identifier. Once your client has the data, it acknowledges receipt and the server deletes the response. If you ask the server for the response and it doesn't have one, then the upload didn't really complete.
With some additional work, this approach can make it possible to support long-running uploads more reliably. If an upload fails, ask the server how much data it got for that identifier, then tell the server that you're going to upload new data starting at the next byte. On the server side, overwrite the old data starting at that byte (just in case some data was still being written when you asked for the length).
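A minimal client-side sketch of the unique-identifier idea (the URL, header name, and recovery endpoint are assumptions; the server has to store its response under that ID as described above):
    // Tag the upload with a client-generated ID so the response can be re-fetched
    // later if the connection dies before we manage to read it.
    NSString *requestID = [[NSUUID UUID] UUIDString];
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
        [NSURL URLWithString:@"https://api.example.com/upload"]];           // placeholder URL
    request.HTTPMethod = @"POST";
    request.HTTPBody = bodyData;                                            // prepared elsewhere
    [request setValue:requestID forHTTPHeaderField:@"X-Client-Request-ID"]; // hypothetical header

    NSURLSessionDataTask *task = [[NSURLSession sharedSession]
        dataTaskWithRequest:request
          completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
              if (error != nil) {
                  // The response was lost: instead of blindly re-uploading, ask the
                  // server (e.g. GET /responses/<requestID>, a hypothetical endpoint)
                  // whether it already stored a response for this ID, and only
                  // re-upload if it did not.
              }
          }];
    [task resume];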
Hope that helps.

Related

Can I send an API response before successful persistence of data?

I am currently developing a Microservice that is interacting with other microservices.
The problem now is that those interactions are really time-consuming. I have already implemented concurrent calls via Uni and use caching where useful. I still have some calls that need a few seconds to respond, so I thought of another thing I could do to improve performance:
Is it possible to send a response before the successful persistence of data? I send requests to the other microservices, where they have to persist the results of my methods. Can I already send the user the result in a first response, and make a second response once the persistence process has succeeded?
With that, the front-end could already begin working even though my API is not 100% finished.
I saw that there is a possible status code 207, but it seems to be intended rather for cases where someone wants to split large files into streams. Is there another possibility? Thanks in advance.
"Is it possible to send a response before the successful persistence of data? Can I already send the user the result in a first response, and make a second response once the persistence process has succeeded? With that, the front-end could already begin working even though my API is not 100% finished."
You can and should, but it is a philosophy change in your API, and you may have to consider some edge cases and techniques to deal with them.
In the case of a long-running API call, you can issue an "ack" response, a traditional 200 one; the answer would just mean the operation is asynchronous and will complete in the future, something like { id:49584958, apicall:"create", status:"queued", result:true }
Then you can
poll your API with the returned ID to see whether the operation is still ongoing, has succeeded, or has failed (see the sketch after this list)
have an SSE channel (real-time server-sent events) where your server can issue status messages as pending operations finish
maybe, using persistent connections and keep-alives, or flushing the response part-way through, you can achieve what you describe, i.e. something like a segmented response. I am not familiar with that approach, as I normally go for the suggestions above.
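To make the polling option concrete, here is a minimal client-side sketch (written in Objective-C to match the rest of this page; the status URL and the JSON field names follow the hypothetical ack payload above):
    // Poll a hypothetical /status/<id> endpoint until the queued operation finishes.
    - (void)pollStatusForOperationID:(NSNumber *)operationID {
        NSString *urlString = [NSString stringWithFormat:
            @"https://api.example.com/status/%@", operationID];             // placeholder URL
        NSURLSessionDataTask *task = [[NSURLSession sharedSession]
            dataTaskWithURL:[NSURL URLWithString:urlString]
          completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
              if (error != nil || data == nil) { return; }
              NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data
                                                                   options:0
                                                                     error:NULL];
              if ([json[@"status"] isEqualToString:@"queued"]) {
                  // Still pending: poll again after a short delay.
                  dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)),
                                 dispatch_get_main_queue(), ^{
                      [self pollStatusForOperationID:operationID];
                  });
              } else {
                  // "succeeded" or "failed": update the UI / unblock dependent calls.
              }
          }];
        [task resume];
    }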
But in any case, the same edge cases apply: for example, what happens if a user then issues API calls that depend on the success of an ongoing (or not even started) previous command, such as asking for information about something that is still being persisted?
You will have to deal with these situations with mechanisms like:
Reject related operations until the pending call is resolved server-side: the API could return, e.g., a BUSY error informing the client that operations are still ongoing when it tries to, for example, delete something that is still being created.
Queue all operations so the server executes them sequentially.
Allow some simultaneous operations if you find they will not collide (e.g. creating two unrelated items).

About losing HTTP Requests

I have a server to which my client sends an HTTP GET request with some values. The server, on its end, simply stores these values to a database.
Now, I am finding that sometimes these values never show up in the database. One of the following could have happened:
The client never sent it
The server never received it
The server failed in writing to the database
My strongest suspicion is reason 2, but I am unable to explain it completely. Since this is an HTTP request (which means there is TCP underneath), reliable delivery of the GET request should be guaranteed, right? Is it possible that even though I send a GET request to the server, it was never received by the server? If yes, what is TCP doing there?
Or, can I confidently assert that if the server is up and running and everything sent to the server is written to the database, then the absence of the details of the GET request in the database means the client never sent it?
Not sure if the details will help, but I am running a Tomcat server and I am just sending a name-value pair through the GET request.
There are a few things you seem to be missing. First of all, yes, if TCP finishes successfully, you pretty much have a guarantee that your message (i.e. the TCP payload) has reached the other side: TCP ensures that it will take care of lost packets and the order in which packets arrive. However, this is not universally failproof, as there are still things beyond the powers of TCP (think of a physical disconnect from cutting through an Ethernet cable). There is also no assertion regarding the syntactical correctness of the protocol "above"; any checks beyond delivering a bit-perfect copy are simply not TCP's concern.
So, there is a chance that the requests issued by your client are faulty, or that they are indeed correct but not parsed correctly by your server. The former strikes me as more likely than the latter, as Tomcat is a very mature piece of software. I think it would help tremendously if you recorded and analysed some of your generated traffic with, e.g., Wireshark.
You do not really mention what database you have in use, but some of them sacrifice ACID compliance in favour of increased write speeds. The nature of those databases means you can never be really sure whether something actually got written to disk or is still residing in some buffer in memory. Should you happen to use such a DB, this would be another line of investigation.
Programmatically, I advise you to make the following checks when dealing with HTTP traffic (a minimal client-side sketch follows the list):
Did writing to the socket finish without error?
Could a response be read from the socket?
Does the response carry a code in the 2xx range (indicating a successful operation)?
If any of these fail, you should really log something.
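From the client's point of view (sketched here in Objective-C to match the rest of this page; the URL is a placeholder), those three checks might look like this. With NSURLSession, both a failed write and a failed read surface through the error argument:
    // Check: did the request go out, did a response come back, and was it 2xx?
    NSURL *url = [NSURL URLWithString:@"https://example.com/store?name=foo&value=bar"]; // placeholder
    NSURLSessionDataTask *task = [[NSURLSession sharedSession]
        dataTaskWithURL:url
      completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
          if (error != nil) {
              // Writing the request or reading the response failed - log it.
              NSLog(@"Request failed: %@", error);
              return;
          }
          NSInteger code = [(NSHTTPURLResponse *)response statusCode];
          if (code < 200 || code >= 300) {
              // The server answered, but not with a success code - log that too.
              NSLog(@"Server returned status %ld", (long)code);
          }
      }];
    [task resume];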
On a related note, what you are doing does not call for the GET method but for POST, as you are changing application state. Consider that a nice-to-have ;)
Without knowing the specifics, you can break it down into two parts: the HTTP request and the DB write. The client will receive a 200 OK response from the server when its GET request has been acknowledged. I've written code under Tomcat to connect to a MySQL DB using a DAO; in case of a failure, an exception would be thrown and logged. Whichever method you're using, you'll want to figure out how failures are logged.

Why is sending and receiving JSON data from the server so slow on iPhone?

I'm making a mobile client for a web site. Information exchange between my app and the server is in JSON (searching for users and data on the server, sending messages, conversation threading, etc.), but all these features are too slow. I tap the "send" button and then wait several seconds before the message is sent; the same goes for searching, authorization, etc. So I have the following questions:
1. Why is there such a performance overhead?
2. Could the problem be on the server side, in the JSON parser, or somewhere else?
3. How can I fix/optimize this? Any solutions, advice, etc. will be helpful!
I would use Xcode to debug the app to see whether the majority of time is spent loading the data from the server or parsing the JSON once the data is received.
If it is the first, try loading the data from a PC over the same wireless connection and see if it is slow on that too. If so, clearly your server side code needs optimising.
If it is the second and the parsing is slow, you may want to look into using JSONKit instead of the native JSON parser as testing shows it is faster. You may also want to review the structure of your JSON.
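One quick way to separate the two is to time each phase, e.g. (a rough sketch; the URL is a placeholder):
    // Rough sketch: measure network time and JSON parsing time separately.
    NSURL *url = [NSURL URLWithString:@"https://example.com/api/messages"];   // placeholder
    NSURLRequest *request = [NSURLRequest requestWithURL:url];
    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    [NSURLConnection sendAsynchronousRequest:request
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
        CFAbsoluteTime loaded = CFAbsoluteTimeGetCurrent();
        if (data == nil) {
            NSLog(@"Load failed: %@", error);
            return;
        }
        NSError *parseError = nil;
        id json = [NSJSONSerialization JSONObjectWithData:data options:0 error:&parseError];
        CFAbsoluteTime parsed = CFAbsoluteTimeGetCurrent();
        NSLog(@"Network: %.3f s, parsing: %.3f s (parsed a %@)",
              loaded - start, parsed - loaded, [json class]);
    }];
If most of the time shows up in the first number, look at the network or the server; if it is in the second, look at the parser and the size and shape of your JSON.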
One thing I have noticed, however, is that connections are slower on my iPad than on other machines. I've noticed this when comparing apps I've developed running in the simulator versus on the device on the same network, and when conducting speed tests. As for why this happens, I am not sure - some form of additional overhead in iOS, perhaps.
I can save you some time - it has nothing to do with JSON. It has to do with how your app's requests are handled in general. It obviously needs optimization on the server.
EDIT:
I suppose it could also be that you might be experiencing high-latency on your phone, but again, that has nothing to do with your app.
Debug it using a regular browser and Chrome dev tools (in the Network tab) - you'll see that the requests take a long time even on a desktop, at which point you'll have to start fishing around in the server-side code to see what's making it slow (hint: unoptimized database queries are a big bottleneck... but then again, so is crappy hardware).
Sorry that I couldn't be of more help, but without seeing the entire setup of the server and the code that's going slow (not the client requests, but the server code), that's the best I can do.
Best of luck.

How to write a middle-tier http API endpoint that can stream results as they arrive to the client?

The scenario is this - I have a frontend web-server that I'm writing in node.js. I have an as-yet-unwritten middle-tier internal-API layer written in, well, anything. The internal-API is the only thing allowed to talk to the data-store (which happens to be a relational database).
Disclaimer: I'm a node.js beginner.
node.js wants to do data-access asynchronously - that makes calls like Database.query.all inefficient, since the response callback wouldn't start until the whole list has been assembled. Documentation I've read suggests that instead, it'd be better to stream results one at a time to the client.
I would like to know how to write the frontend and middle-tier http internal-API such that I can take advantage of node.js' asynchronicity, here.
I guess the question is "how do I stream structured data over http"? I guess that's the feature of the internal API that I'm asking for support for.
Should I:
Get the frontend to ask for a list of IDs, then issue one request each to the backend? Sounds crude and chatty, plus I don't see a guarantee that the requests will return in the order that I want, so I'd have to wait 'til I had everything back at the frontend anyway..?
Get the frontend to make a series of requests against the internal API for pages of data, and treat each chunk as a stream-segment...?
Fetch only enough data for the first screen's worth, then request for subsequent chunks, writing each one to the end of the list as it arrives?
something cleverer!?
(Note: please don't say "get rid of the middle-tier so you can talk to the database directly" - that's not an option)
I am not sure exactly what you mean by "streaming"; from the ideas you give, it could be interpreted either as some HTTP server-push or long-polling technique, or as simply making subsequent XHR requests.
Since you're using node, I recommend Socket.io, which allows you to really push data to the browser whenever you want.
If you choose to go with XHRs, simply tell the browser what to request next.
If that doesn't suit you and you want to use server push or long polling, response.write() seems the way to go. But you will probably run into problems with request timeouts and such.

Persistent Connection to Web Server (Like AJAX on Web)

I want to create a program that talks to a Cometd server so that data can be pushed to the app.
I have done this on the web side using AJAX, but I am a little unsure of the best way to do this with Cocoa.
I can make a standard connection using NSURLRequest and NSURLConnection, but how do I keep this connection alive so I can send data when needed and receive the pushed info when needed?
Am I even going about this the correct way?
Thanks in advance
In terms of push notifications, if the HTTP server does not close the connection, then the NSURLConnection will stay open and you will keep getting data. Note that if you are designing something like that, you must use the asynchronous NSURLConnection methods, as a synchronous connection will not return until the server closes the connection.
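A minimal sketch of that asynchronous, delegate-based setup (the Cometd URL, the connection property, and the chunk handler are placeholders); connection:didReceiveData: keeps firing for as long as the server holds the connection open and pushes data:
    // Open a long-lived connection and handle data as the server pushes it.
    - (void)openPushConnection {
        NSURL *url = [NSURL URLWithString:@"https://example.com/cometd"];   // placeholder
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
        request.timeoutInterval = 300; // generous timeout for a long-lived connection
        self.connection = [[NSURLConnection alloc] initWithRequest:request delegate:self];
    }

    // Called each time the server pushes another chunk over the open connection.
    - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
        [self handlePushedChunk:data]; // hypothetical handler for each pushed chunk
    }

    // Called if the server or the network drops the connection; reconnect here.
    - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
        NSLog(@"Push connection dropped: %@", error);
    }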
As for sending more data, it is really not designed to do that. If you want to push more data in a single HTTP request after you have sent it (which seems like a pretty bad idea to me), you are going to have to roll your own HTTP stack or find some open-source component you can use.
Note that NSURLConnection will use keep-alive and other things as it deems appropriate, so if you start multiple logical connections to the same host in your app, they may end up on the wire using the same keep-alive connection, etc.