I have configured a WCF service to transfer data in streamed transfer mode. I think I have set up the configuration properly, because I am able to transfer files above 100 MB, and that is more than I need.
Now I am calling my transfer service three times to get three different files, none larger than 2 MB. The problem is that as soon as I make the call for the third file, my program freezes and I never get a response, forcing me to close the program.
I don't think this is a file size issue: I have tested with 20 MB files and the first two reach the client just fine, but I get no response from the third call.
Is this a configuration issue which may limit the service calls to just two?
Best regards
HALF-SOLVED
Well, first of all, I could not find out why the client cannot reach the server after two successful requests; it just hangs, spectacularly.
Since I know I am able to transfer up to 500 MB through the service, I now send the data to the client as a zipped file and then call 7z.exe (7-Zip) to unzip my files.
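(For completeness, the unzip step is nothing fancy, just a shelled-out call to 7-Zip, roughly like this; paths are placeholders:)

```csharp
using System.Diagnostics;

class UnzipStep
{
    // Extract the downloaded archive with 7-Zip (paths are placeholders).
    static void Extract(string archivePath, string targetDir)
    {
        var unzip = Process.Start("7z.exe", $"x \"{archivePath}\" -o\"{targetDir}\" -y");
        unzip.WaitForExit();
    }
}
```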
This is a workaround, not a real fix: the problem still exists, and I think there is a proper way to solve it. I will post the answer as soon as I find it; in the meantime, my users can keep using my system.
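For anyone hitting the same wall, the two things I still plan to try (untested, so treat this as a guess rather than the answer) are disposing the returned stream after every call and raising the client's default HTTP connection limit, which is two connections per host:

```csharp
using System.IO;
using System.Net;

class StreamedDownload
{
    // Guesses, not a verified fix: .NET clients allow only two concurrent HTTP
    // connections per host by default, and a streamed response keeps its connection
    // open until the returned Stream is disposed. FileServiceClient and GetFile
    // are placeholders for the generated proxy and its operation.
    static void Download(FileServiceClient client, string id, string targetPath)
    {
        ServicePointManager.DefaultConnectionLimit = 10;    // raise the 2-per-host cap (set once at startup)

        using (Stream source = client.GetFile(id))          // streamed WCF call
        using (FileStream target = File.Create(targetPath))
        {
            source.CopyTo(target);                          // drain the stream and dispose it,
        }                                                   // releasing the underlying connection
    }
}
```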
Related
In several interviews I have been asked about handling connections, web service calls, server responses and so on. Even now I am not clear about many things. Could you please help me get a better idea about the following scenarios?
What is the advantage of using NSURLSessionDataTask instead of NSURLConnection? My understanding is that with NSURLSessionDataTask data is not lost even if the connection breaks, whereas with NSURLConnection it is. But how does that work?
If the connection breaks after the request has been sent to the server, or while connecting to the server, how can we handle this in our code with NSURLConnection and with NSURLSessionDataTask? My idea is to use the Reachability classes and check when the network comes back online.
The data we sent was updated on the server side, but we never received the response. What can we do on our side to handle this situation? Is increasing timeoutInterval the only option?
Please help me with these scenarios. Thank you very much in advance!!
That's multiple questions, really, but I'll try to answer them all briefly.
Most failure handling is the same between NSURLConnection and NSURLSession. The main advantages of the latter are support for background downloads and cancelling groups of related requests.
That said, if you're doing a large download that you think might fail, NSURLSession does provide download tasks that let you resume the download if your network connection fails, similar to what NSURLDownload used to do on OS X (never available on iOS). This only helps for downloading large files, though, not for large uploads (which require significant server-side support to resume) or other requests.
Your intuition is correct. When a connection fails, create a reachability object monitoring that particular hostname to see when it would be a good time to try the request again. Then, try the request again.
You might also display some sort of advisory UI to say that you have no Internet connection. (By advisory, I mean something that the user doesn't have to click on and that does not impact offline use of the app any more than necessary; look at the Facebook app for a great example.)
Provide a unique identifier when you make the request, and store that on the server along with the server's response until the client acknowledges receipt of the response (or purge it anyway after some reasonable number of days). When the upload finishes, the server gives you back its response if it can.
If something goes wrong, the client asks the server to resend the response associated with that unique identifier. Once your client has the data, it acknowledges receipt and the server deletes the response. If you ask the server for the response and it doesn't have one, then the upload didn't really complete.
With some additional work, this approach can make it possible to support long-running uploads more reliably. If an upload fails, ask the server how much data it got for that identifier, then tell the server that you're going to upload new data starting at the next byte. On the server side, overwrite the old data starting at that byte (just in case some data was still being written when you asked for the length).
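Purely as an illustration of that bookkeeping (the server language is not specified here, so the C# below is only for concreteness, and every name in it is made up), the core of the idea is a store keyed by the client-supplied request identifier:

```csharp
using System.Collections.Concurrent;

// Illustrative sketch: responses are kept, keyed by the client-supplied request id,
// until the client acknowledges receipt (plus a periodic purge after a few days).
public class PendingResponseStore
{
    private readonly ConcurrentDictionary<string, byte[]> pending =
        new ConcurrentDictionary<string, byte[]>();

    // Called when the upload finishes and the server has produced its response.
    public void Save(string requestId, byte[] response) => pending[requestId] = response;

    // The client retries with the same id after a dropped connection.
    // Returning null means the original upload never really completed.
    public byte[] TryResend(string requestId) =>
        pending.TryGetValue(requestId, out var body) ? body : null;

    // The client confirms receipt, so the stored response can be forgotten.
    public void Acknowledge(string requestId) => pending.TryRemove(requestId, out _);
}
```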
Hope that helps.
I am new to Laravel and API development, and I am facing a problem. The workflow of my API is: a user sends POST data to the API, and the API processes that data into the database. There is one step where PHP spends about 30 minutes inserting data into two different tables.
The problem is that, as far as I know, I can only send the JSON response back to the user after that process is complete, which means the user has to wait 30 minutes.
Is there a way to run the 30-minute process in the background and send the JSON response immediately, as soon as the process has started?
I have looked into queues, but the web host I will be using won't give me access to the server itself to install anything; it only gives me space for my files.
I am confused about how to achieve this, so that the user does not have to wait long for a response.
I will really appreciate any help.
Thanks,
You can use queues without installing anything on the server. All of your configuration goes in the config/queue.php file.
You can use one of these queue drivers (each requires the corresponding Composer package):
Amazon SQS: aws/aws-sdk-php ~3.0
Beanstalkd: pda/pheanstalk ~3.0
Redis: predis/predis ~1.0
Read more here: https://laravel.com/docs/5.2/queues#introduction
I have a text file with about 100,000 identifier records.
I must read every record; for each one I make a request to a web service, receive the result, and write that result to another file.
I'm torn between two solutions:
- Read the whole identifier file into a list of identifiers, iterate over the list, and call the web service for each one.
- Read the file one line at a time, calling the web service for each line as it is read.
Which solution do you think is better? Which will make the program run faster?
Thanks to all.
As Dukeling says, using different threads to read the file, send the requests, and write to the output file can speed up the program compared with the single-threaded solutions you propose.
I recommend starting with asynchronous calls to your web service: you make the call but don't wait for the response (you handle responses in a callback). When you make many calls to the web service in parallel (and you do want speed), this frees up some I/O threads on your machine and can sometimes improve the rate of processed requests.
Then you can have a thread that reads from the file, starts an asynchronous call, and repeats. In the callback you write the result to the output file. At this level you should add logic that ensures the responses are written in the right order.
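Here is a minimal sketch of that pipeline, written in C# with async/await since you did not say which language you are using; IMyService and GetResultAsync are placeholders for your actual web-service call:

```csharp
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Placeholder for the real web-service client.
public interface IMyService
{
    Task<string> GetResultAsync(string id);
}

public static class IdentifierProcessor
{
    public static async Task ProcessAsync(string inputPath, string outputPath, IMyService client)
    {
        string[] ids = File.ReadAllLines(inputPath);
        var throttle = new SemaphoreSlim(20);                 // cap the number of in-flight calls

        // Start the calls; each one waits for a free slot before hitting the service.
        Task<string>[] calls = ids.Select(async id =>
        {
            await throttle.WaitAsync();
            try { return await client.GetResultAsync(id); }
            finally { throttle.Release(); }
        }).ToArray();

        // Await in submission order so the output file preserves the input order.
        using (var writer = new StreamWriter(outputPath))
        {
            foreach (var call in calls)
            {
                writer.WriteLine(await call);
            }
        }
    }
}
```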
On the other hand, calling the web service for each record may be too chatty.
I would suggest an implementation similar to paging: load a batch of records, send them to the operation in one request, and receive the responses in bulk. Take care not to fail the whole batch because of one bad record, add logic for resending only the failed part, and so on.
Basically I need to transfer a large file between a WCF service and a Java client.
Can someone give me directions, please?
I need to create a WCF service that reads blob content (actually file content stored in a database column) and passes it to a Java web application (the client of the WCF service).
File sizes may vary from 1 KB to 20 MB.
So far I have researched the options below, but I am still not able to decide which one to go with, which are feasible and which are not.
Could someone guide me, please?
Pass the file content as byte[]:
I understand this increases the amount of data sent to the client, because the content is Base64-encoded and embedded in the SOAP message itself, which makes communication slower and hurts performance.
This definitely works, though; I am just not sure whether it is advisable to go with this approach.
Share a network drive/FTP folder accessible to both the client and the WCF service:
With this option, the file needed by the client is first stored there by the WCF service, and the client then uses Java I/O or FTP to read it.
This looks good from a data size/bandwidth point of view, but it adds extra processing on both the service and client sides (storing and then reading via the shared/FTP folder).
Streaming:
I am not sure this one is feasible with a Java client. My understanding is that streaming is supported for non-.NET clients, but I am not sure how to go about it.
I understand that for streaming I need to use basicHttpBinding, but do I need a DataContract or a MessageContract, or will either work? And what needs to be done on the Java client side? I am not sure about that either (a rough sketch of my current understanding is at the end of this question).
Use MTOM for passing large data in SOAP requests:
This actually looks like support designed specifically for large data transfer in web service calls, but I have to investigate it further; as of now I don't know much about it. Does anyone have suggestions on this?
I understand this question is a bit lengthy, but I wanted to list all four options I have looked at, along with my concerns and findings for each, so that you can recommend one of them (or perhaps a new option) and, knowing what I have already tried, direct me more effectively.
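For reference, my current rough understanding of the contract for the streaming option looks like this on the WCF side (all names are placeholders and I have not verified it against a Java client yet; the binding would be basicHttpBinding with transferMode set to StreamedResponse):

```csharp
using System.IO;
using System.ServiceModel;

// Sketch of a streamed download contract: the Stream must be the only body member,
// so any metadata (file name, length) has to travel in message headers.
[MessageContract]
public class FileRequest
{
    [MessageHeader]
    public string FileId;
}

[MessageContract]
public class FileResponse
{
    [MessageHeader]
    public string FileName;

    [MessageHeader]
    public long Length;

    [MessageBodyMember]
    public Stream Content;
}

[ServiceContract]
public interface IFileTransferService
{
    [OperationContract]
    FileResponse GetFile(FileRequest request);
}
```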
I am in the same position as yourself and I can state from experience that option 1 is a poor choice for anything more than a couple of MB.
In my own system, upload times increase exponentially, with 25 MB files taking in excess of 30 minutes to upload.
I've run some timings, and the bulk of this is in the transfer of the file from the .NET client to the Java web service. Our web service is a facade for a set of third-party services; using the built-in client provided by the third party (not viable in our business context) is significantly faster, at less than 5 minutes for a 25 MB file. Upload to our client application is also quick.
We have tried MTOM and, unless we implemented it incorrectly, didn't see huge improvements (under 10% speed increase).
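If the original poster wants to try MTOM on the WCF side, it is only an encoding switch on the binding. A minimal sketch, assuming a self-hosted endpoint (the service types and address are placeholders):

```csharp
using System.ServiceModel;

class MtomHost
{
    static void Main()
    {
        var binding = new BasicHttpBinding
        {
            MessageEncoding = WSMessageEncoding.Mtom,      // binary goes as attachments instead of Base64 in the body
            MaxReceivedMessageSize = 50 * 1024 * 1024      // allow payloads up to ~50 MB
        };

        var host = new ServiceHost(typeof(FileService));   // FileService/IFileService are placeholders
        host.AddServiceEndpoint(typeof(IFileService), binding, "http://localhost:8080/files");
        host.Open();
    }
}
```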
The next port of call will be option 2. File transfers are relatively quick, so by uploading the file directly to one of the web service hosts I'm hoping to speed things up dramatically; if I get some meaningful results, I will add them to this post.
On a WCF REST service I am dealing with streams. In a service method I upload a stream inside a data contract, and that works fine. On the service side I process the stream, after which its position is at the end. I then need to set its position back to 0 so I can save the stream, but that throws the exception:
Specified method is not supported.
Does this mean I can't process a stream more than once? If so, I will need a workaround :/ The only solution that comes to mind is sending the stream twice so I can process each copy separately, but that is not good since I would have to upload it twice.
Any help would be appreciated.
Funny that I found my own solution :) First I saved the stream to disk, then read it back from that path for any further processing over that stream. Interestingly, finding the solution didn't require more detailed technical information, just a change in the logical approach.
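For anyone who wants the shape of it, here is a minimal sketch of that approach (method and path names are placeholders): the incoming request stream is forward-only, so I persist it once and every later pass works on the saved, seekable copy.

```csharp
using System;
using System.IO;

class StreamWorkaround
{
    static void HandleUpload(Stream incoming)
    {
        string path = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".bin");

        using (FileStream file = File.Create(path))
        {
            incoming.CopyTo(file);        // the one and only pass over the request stream
        }

        using (FileStream saved = File.OpenRead(path))
        {
            Process(saved);               // further passes use the saved copy,
            saved.Position = 0;           // which is seekable, so Position can be reset
            Process(saved);
        }
    }

    static void Process(Stream s) { /* placeholder for the real processing */ }
}
```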