WCF methods with large result sets - visually tracking transfer progress

I have a WCF service that has to return data sets that can be as large as 10 MB or more, and I want to give the user some visual feedback on progress. Is there a way to track the download progress?
My client is Silverlight 3 and ultimately I would like to be able to bind a progress bar to this; any ideas?
EDIT: After the bounty expired, SO automatically selected the answer with the most upvotes as the correct answer, when this is not the case.

There is an example of this on CodeProject; see:
http://www.codeproject.com/KB/WCF/WCF_FileTransfer_Progress.aspx

If you have one giant WCF call, then you only have two states: everything or nothing. Also, WCF has a maximum message size, so returning a large dataset risks going over this limit.
In order to solve these problems in my projects, I split the one big request into many smaller requests. I then check how many responses I have vs. original requests to get an indication of progress.
Edit: added better explanation.
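A rough sketch of how that fan-out-and-count approach could look from a Silverlight client follows. MyServiceClient, GetPageAsync and StorePage are hypothetical names standing in for the generated proxy and your own accumulation logic:

int totalPages = 20;                  // assume the page count is known up front
int completed = 0;
var client = new MyServiceClient();   // hypothetical generated proxy
client.GetPageCompleted += (s, e) =>
{
    completed++;
    // Progress is simply "responses received / requests issued".
    progressBar.Value = 100.0 * completed / totalPages;
    StorePage(e.Result);              // hypothetical: accumulate the partial results
};
for (int page = 0; page < totalPages; page++)
    client.GetPageAsync(page);        // fire off all the small requests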

The CodeProject article may be tricky to get working with Silverlight since Silverlight only has access to the BasicHttpBinding -- although it looks like BasicHttpBinding has a TransferMode="Streamed" so perhaps it is possible -- I don't know.
If you can get it to return a Stream, that seems like it would be the best approach.
Still, I thought that I would put forward a random "other" approach.
Perhaps you could serialize the data into a file and use the WebClient to download it. So basically, you would have a WS.GetData() which would save a file on the server and return its filename -- then the Silverlight app would use WebClient to download it (which has a DownloadProgressChanged event).
I know it's not what you're looking for -- just an idea...
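For what it's worth, a minimal sketch of that WebClient idea, assuming fileUrl is the address built from the filename returned by the hypothetical WS.GetData() call:

var web = new WebClient();
web.DownloadProgressChanged += (s, e) =>
{
    // ProgressPercentage is 0-100, which maps straight onto a ProgressBar.
    progressBar.Value = e.ProgressPercentage;
};
web.OpenReadCompleted += (s, e) =>
{
    using (var stream = e.Result)
    {
        // Deserialize the data set from the downloaded stream here.
    }
};
web.OpenReadAsync(new Uri(fileUrl));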

EDIT: I answered this thinking you wanted a silverlight uploader, but it actually looks like you want a silverlight downloader. You can do the same thing I suggested for the uploader except use HTTP GET, or Binary WCF, or Sockets.
I have written a Silverlight 2 uploader with a progress bar, and I modeled it after this one. It uses HTTP POST to send the file to the server one piece at a time. The tricky part is that the bigger your POST, the faster the file uploads, but your progress bar only gets updated once per POST. So I wrote an algorithm that dynamically tries to find the biggest POST size that takes less than a second (sketched below).
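The sizing logic is roughly this (a simplified sketch, not the actual uploader code; UploadChunk is a hypothetical helper that performs one HTTP POST, and Stopwatch comes from System.Diagnostics):

int chunkSize = 16 * 1024;                    // start with 16 KB per POST
int offset = 0;
while (offset < buffer.Length)
{
    var timer = Stopwatch.StartNew();
    UploadChunk(buffer, offset, chunkSize);   // hypothetical helper: one HTTP POST
    timer.Stop();

    offset += chunkSize;
    progressBar.Value = 100.0 * Math.Min(offset, buffer.Length) / buffer.Length;

    // Aim for about one second per POST: grow the chunk while round trips
    // are fast, shrink it when they get slow.
    if (timer.ElapsedMilliseconds < 1000)
        chunkSize = Math.Min(chunkSize * 2, 1024 * 1024);
    else
        chunkSize = Math.Max(chunkSize / 2, 4 * 1024);
}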
If you want to use WCF instead of HTTP POST, that's probably better because Silverlight 3 now supports binary message encoding:
<customBinding>
  <binding name="MyBinaryBinding" maxBufferSize="2147483647"
           maxReceivedMessageSize="2147483647">
    <binaryMessageEncoding />
    <httpTransport />
  </binding>
</customBinding>
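For completeness, the endpoint then just references that binding configuration (the contract name here is a placeholder):

<endpoint address=""
          binding="customBinding"
          bindingConfiguration="MyBinaryBinding"
          contract="MyNamespace.IMyService" />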
OR you could write a sockets implementation -- Silverlight does support this, but it can be a little tricky to set up and requires your server to have a port open in the range 4502-4534, plus port 943 open for a policy file.

Related

How do I design a REST call that is just a data transformation?

I am designing my first REST API.
Suppose I have a (SOAP) web service that takes MyData1 and returns MyData2.
It is a pure function with no side effects, for example:
MyData2 myData2 = transform(myData1);
transform() does not change the state of the server. My question is: what REST call do I use? MyData1 can be large, so I will need to put it in the body of the request, which seems to require POST. However, POST seems to be used only to change server state and not to return anything, which is not what transform() does. So POST might not be correct? Is there a specific REST technique for pure functions that take and return something, or should I just use POST, read the response body, and not worry about it?
I think POST is the way to go here, because of the sheer fact that you need to pass data in the body. The GET method is used when you need to retrieve information (in the form of an entity), identified by the Request-URI. In short, that means that when processing a GET request, a server is only required to examine the Request-URI and Host header field, and nothing else.
See the pertinent section of the HTTP specification for details.
It is okay to use POST
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.”
It's not a great answer, but it's the right answer. The real issue here is that HTTP, which is a protocol for the transfer of documents over a network, isn't a great fit for document transformation.
If you imagine this idea on the web, how would it work? Well, you'd click through a bunch of links to get to some web form, and that web form would allow you to specify the source data (including perhaps attaching a file); submitting the form would send everything to the server, and you'd get the transformed representation back as the response.
But - because of the payload, you would end up using POST, which means that general purpose components wouldn't have the data available to tell them that the request was safe.
You could look into the WebDAV specifications to see if SEARCH or REPORT is a satisfactory fit -- every time I've looked into them myself I've decided against using them (no, I don't want an HTTP file server).
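For what it's worth, here is one way the transform could be exposed over POST using, say, WCF's web programming model; MyData1 and MyData2 are the question's types, while the URI template and JSON formats are arbitrary choices:

[ServiceContract]
public interface ITransformService
{
    // POST the source document in the request body and get the transformed
    // document back in the response body; no server state is changed.
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "transform",
               RequestFormat = WebMessageFormat.Json,
               ResponseFormat = WebMessageFormat.Json)]
    MyData2 Transform(MyData1 input);
}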

How to reduce round trips to the database in a search engine using REST

I have thousands of records in an MS SQL Server database table. To search them quickly from a web page, I created a WCF REST service that returns a list of records fetched from the database by keywords, converted into JSON and displayed in a DIV just below an HTML textbox (like the Google search textbox).
I used a server-side cache object to avoid hitting the database to some extent.
But I am forced to hit the REST GET URL on every text change.
Any suggestions to make it faster?
There can be a way to reduce your REST calls. There are client-side caching techniques that allow you to cache the AJAX responses, so that the next time the same request is repeated the results are served from the cache. But you have to be very careful using such techniques, as they may end up giving wrong results and behaviour.
See this answer. It is similar to your question, but the discussion is really interesting and will give you insight into a client-side cache implementation to reduce AJAX round trips.
As you're using REST, you're making an HTTP request to your service, so you can take advantage of the ASP.NET Output Cache.
The call will still hit the server, but the server will answer it from the cache without running your code.
You do it like this:
[AspNetCacheProfile("CachePolicyName")]
[WebGet(UriTemplate = "{term}")]
public string GetData(string term)
{
    // your code
}
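The cache profile named in the attribute is then defined under system.web in web.config; for example (duration is in seconds, and varyByParam should match the template parameter):

<system.web>
  <caching>
    <outputCacheSettings>
      <outputCacheProfiles>
        <add name="CachePolicyName" duration="60" varyByParam="term" />
      </outputCacheProfiles>
    </outputCacheSettings>
  </caching>
</system.web>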
If required, you also need to enable ASP.NET compatibility in your configuration file:
<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
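The service class itself also has to opt in to ASP.NET compatibility; a minimal sketch, with the class name being a placeholder:

[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class SearchService : ISearchService
{
    // service implementation
}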
See more here: https://msdn.microsoft.com/en-us/library/vstudio/ee230443%28v=vs.100%29.aspx
And here: http://blogs.msdn.com/b/endpoint/archive/2010/01/28/integrating-asp-net-output-caching-with-wcf-webhttp-services.aspx
Hope it helps.

Sending a file from a Java client to a server using a WCF method?

I want to build a WCF web service so that the client and the server can transfer files between each other. Do you know how I can achieve this? I think I should turn the file into a byte array, but I have no idea how to do that. The file is also quite big, so I must turn on streamed responses.
It sounds like you're on the right track. A quick search of the interwebz yielded this link: http://www.codeproject.com/Articles/166763/WCF-Streaming-Upload-Download-Files-Over-HTTP
Your question title indicates that you want to send a file from a Java client to a WCF endpoint, but the contents of your question indicate that this should be a bidirectional capability. If that is the case, then you'll need to implement a service endpoint on your client as well. As far as that is concerned, I cannot be of much help, but there are resources out there, like this SO question: In-process SOAP service server for Java
As far as practical implementation, I would think that using these two links you should be able to produce some code for your server and client.
As far as reading all the bytes of a file, in C# you can use File.ReadAllBytes. It should work as in the following code:
using System.IO;

// Read the contents of the indicated file
string fileName = "/path/to/some/file";
// Store the binary contents in a byte array
byte[] buffer = File.ReadAllBytes(fileName);
// Do something with those bytes!
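Since the file is big, a streamed contract is the usual WCF shape for this; a rough sketch, with all the names being assumptions:

[ServiceContract]
public interface IFileTransfer
{
    // The request body is the raw file stream.
    [OperationContract]
    void UploadFile(Stream fileStream);

    // The response body is the raw file stream.
    [OperationContract]
    Stream DownloadFile(string fileName);
}

The binding would then need transferMode="Streamed" (for example on basicHttpBinding) so that the file is not buffered in memory on either side.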
Be sure to use the search function in the future.

WCF Fault - missing detail element

We have a custom exception handling behaviour (implementing IErrorHandler) in our solution which essentially uses Automapper to convert exceptions to faults.
This has been working well since day one. However, we have just noticed while browsing ServiceTraceViewer (looking at the server logs, not the client) on our shared development server that any faults returned from our services omit the detail element.
Running exactly the same code and configuration on my development machine, the detail element is correctly populated. As I say, the configuration files (behaviours, bindings) are identical on both machines. Both configurations do specify includeExceptionDetailInFaults = true.
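For reference, that setting lives on the serviceDebug element of the service behaviour, i.e. both machines had something like:

<behaviors>
  <serviceBehaviors>
    <behavior>
      <serviceDebug includeExceptionDetailInFaults="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>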
I've also added a bunch of log statements that seem to indicate that the same code path is followed on both machines with the same values for various things like fault code, fault reason etc.
My dev machine is 2008R2 standard (64bit). The server in question is also 2008R2 Standard (64 bit).
I can post extracts of the code if required, but in the first instance: is there anything environmental that could account for what we're seeing?
Extract from problem file:
<s:Body u:Id="_1">
  <s:Fault>
    <s:Code>
      <s:Value>s:Sender</s:Value>
    </s:Code>
    <s:Reason>
      <s:Text xml:lang="en-NZ">An error occured during the request to the ...</s:Text>
    </s:Reason>
  </s:Fault>
</s:Body>
Not 100% sure about etiquette here. This is an answer, I guess, to my specific brand of stupidity. Maybe somebody else will be as stupid, and then the answer applies to them too...
I was sure I had compared everything (I stated "exactly the same code / configuration"). But I only gave the behaviour configuration file a quick visual check. After another developer approached me, I realised that the local files were NOT the same as the server files. Doh!
In fact, the server files had one extra line, added by a post-build step, which registered another custom behaviour implementing IErrorHandler in addition to the IErrorHandler behaviour we already use for logging etc.
I guess I'm going to open another question now asking how to have multiple behaviours implement the same interface without polluting each other's functionality (like returning faults).

WCF Paged Results & Data Export

I've walked into a project that is using a WCF service for the data tier. Currently, when data is needed for a grid, all rows are returned, the results are bound to the grid, and the dataset is stuffed into a session variable for paging/sorting/rebinding. We've already hit a max message size problem, so I'm thinking it's time to convert from fetch-and-cache to fetching only the current page.
At face value this seems easy enough, but there's a small catch. The user is allowed to export the entire result set at any point. This means that fetching the current page is fine for grid viewing purposes, but when they want to do an export, I still need to make a call for all the data.
This puts me back into the max message size issue. What is the recommended approach for this type of setup?
We are currently using the wsHttpBinding...
Thanks for any assistance.
I think the recommended approach for large files is to use WCF streaming. I'm not sure of the exact details for your scenario, but you could take a look at this as a starting point:
http://msdn.microsoft.com/en-us/library/ms789010.aspx
I would probably do something like this in your case:
1. Create a service with a "paged" GetData() method, where you specify the page index and the page size as additional parameters (see the sketch after this list). This gives you a nice clean interface for "regular" use, and it should not hit the maxMessageSize limits.
2. Create a second service (or method) that sends all the data; ideally, you could bundle it up into a ZIP file or something on the server before sending it. If that ZIP file is still too large, you might want to check out WCF streaming for handling large files, as Andy already pointed out.
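A sketch of what that split interface could look like (names and types are assumptions, and the streamed export method would need a streaming-capable binding such as basicHttpBinding rather than wsHttpBinding):

[ServiceContract]
public interface IGridDataService
{
    // One page at a time for grid binding -- small, fast messages.
    [OperationContract]
    List<GridRow> GetData(int pageIndex, int pageSize);

    // The full result set for export, returned as a stream (e.g. a ZIP)
    // so the message size limits do not have to be raised.
    [OperationContract]
    Stream ExportAll();
}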
The maxMessageSize limit is in place for a good reason: to avoid denial-of-service attacks where a WCF service gets flooded with large messages and brought to its knees. If you can, always try to keep that in mind and don't just jack up maxMessageSize to 2 GB -- it might come back to bite you :-)