I've been working on this for quite a few hours now and have not found any solutions to what I am experiencing.
I am serializing a list of 3000 DTOs via protobuf with basicHttpBinding. I can see that WCF is indeed sending the node with the encoded data along. The problem is that this response is about 1 MB in size and takes 30 seconds to complete once I step through the last method in the call. If I do a .Take(100) on the list, the response takes about 1-2 seconds, and .Take(1) is instant.
I tried manually serializing the 3000-record list to a memory stream with protobuf and it was nearly instantaneous, which leads me to believe it has something to do with the transferring of data.
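Roughly what that manual test looked like (a simplified sketch; MyDto here is just a stand-in for the real DTO type):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using ProtoBuf;

[ProtoContract]
public class MyDto
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
}

public static class SerializationTimer
{
    public static void Time(List<MyDto> dtos)
    {
        var sw = Stopwatch.StartNew();
        using (var ms = new MemoryStream())
        {
            // protobuf-net serialization only -- no WCF, no transport
            Serializer.Serialize(ms, dtos);
            sw.Stop();
            Console.WriteLine("{0} bytes in {1} ms", ms.Length, sw.ElapsedMilliseconds);
        }
    }
}
```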
This is reproducible on any machine the service and site are run on. Any ideas?
While prototyping to compare WCF to some other technologies, we came across a similar problem. We thought we had configured WCF to use protobuf-net for serialization, but it wasn't working correctly. The most head-frying thing for me was that there was no error; WCF just silently defaulted to the DataContractSerializer. The DataContractSerializer output is encoded as either binary or text XML depending on the WCF binding used. For basicHttpBinding it is text XML, which of course results in a large and slow response.
I suggest you verify that WCF is really serializing with protobuf-net. You should be able to see whether this is happening by peeking inside the packets with Wireshark (or a similar tool).
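One way to make the wiring explicit (and obvious when it's missing) is protobuf-net's ProtoBehavior operation attribute; a rough sketch, assuming the ProtoBuf.ServiceModel types that ship with protobuf-net and the same kind of [ProtoContract] DTO as in the question:

```csharp
using System.Collections.Generic;
using System.ServiceModel;
using ProtoBuf.ServiceModel; // ships with protobuf-net

[ServiceContract]
public interface IDtoService
{
    // ProtoBehavior swaps the DataContractSerializer out for protobuf-net on
    // this operation; without it (or an equivalent endpoint behavior in config)
    // WCF silently falls back to XML, which is exactly the symptom described.
    [OperationContract]
    [ProtoBehavior]
    List<MyDto> GetDtos();
}
```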
Hope this helps <333
Basically I need to transfer a large file between a WCF service and a Java client; can someone give directions, please?
Basically I need to create a WCF service that reads blob content (actually file content stored in a DB column) and passes it to a Java web application (acting as a client to the WCF service).
File size may vary from 1 KB to 20 MB.
So far I have researched/checked the options below, but I am still not able to decide which one I should go with, and which are feasible and which are not. Could someone guide me, please?
Pass file content as byte[]:
I understand this will increase the data size passed to the client, since the content is Base64-encoded and embedded in the SOAP message itself (Base64 turns every 3 bytes into 4 characters, roughly a 33% increase), which makes communication slower and causes performance issues.
This definitely works, but I am not sure whether it is advisable to go with this approach.
Share a network drive/FTP folder accessible to both the client and the WCF service app:
With this approach, the file needed by the client is first stored there by the WCF service, and then the client uses Java I/O or FTP to read it.
This looks good from a data-size/bandwidth point of view, but it adds extra processing on both the service and client side (storing to and reading from the network share/FTP folder).
Streaming:
I am not sure this one is feasible with a Java client; my understanding is that streaming is supported for non-.NET clients, but I am not sure how to go about it.
I understand that for streaming I need to use basicHttpBinding, but do I need to use a DataContract or a MessageContract, or will either work? And what needs to be done on the Java client side? I am not sure about that either (there is a sketch at the end of this question).
Using the MTOM approach for passing large data in SOAP requests:
This actually looks like support specifically designed for large data transfer in web service calls, but I have to investigate it further; as of now I don't know much about it. Does anyone have suggestions on this?
I understand the question is a bit lengthy, but I had to lay out all 4 options I have looked at and my concerns/findings with each, so that you can suggest one of these (or perhaps a new option) and also see what I have already tried, which should make it easier to point me in the right direction.
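For option 3 (and, with one property changed, option 4), here is a rough sketch of what I think the service side might look like. All the names (IFileService, FileDownloadRequest, and so on) are illustrative and the binding values are just examples; basicHttpBinding keeps the standard SOAP 1.1 envelope a Java client can consume:

```csharp
using System.IO;
using System.ServiceModel;

// With streaming, WCF requires the streamed message body to be a single
// Stream member, so a MessageContract (headers + stream body) is the usual shape.
[MessageContract]
public class FileDownloadRequest
{
    [MessageBodyMember]
    public string FileId { get; set; }
}

[MessageContract]
public class FileDownloadResponse
{
    [MessageHeader]
    public string FileName { get; set; }

    [MessageHeader]
    public long Length { get; set; }

    [MessageBodyMember]
    public Stream Content { get; set; }
}

[ServiceContract]
public interface IFileService
{
    [OperationContract]
    FileDownloadResponse GetFile(FileDownloadRequest request);
}

public static class FileServiceBinding
{
    public static BasicHttpBinding Create()
    {
        return new BasicHttpBinding
        {
            // Option 3: stream the response instead of buffering it.
            TransferMode = TransferMode.StreamedResponse,
            MaxReceivedMessageSize = 64L * 1024 * 1024, // example cap for ~20 MB files

            // Option 4: switching this to WSMessageEncoding.Mtom sends the binary
            // as an MTOM attachment instead of Base64 inside the SOAP body.
            MessageEncoding = WSMessageEncoding.Text
        };
    }
}
```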
I am in the same position as yourself and I can state from experience that option 1 is a poor choice for anything more than a couple of MB.
In my own system, upload times increase dramatically with file size, with 25 MB files taking in excess of 30 minutes to upload.
I've run some timings and the bulk of this is in the transfer of the file from the .NET client to the Java web service. Our web service is a facade for a set of 3rd-party services; using the built-in client provided by the 3rd party (not viable in the business context) is significantly faster: less than 5 minutes for a 25 MB file. Upload to our client application is also quick.
We have tried MTOM and, unless we implemented it incorrectly, didn't see huge improvements (under 10% speed increase).
The next port of call will be option 2. File transfers are relatively quick, so by uploading the file directly to one of the web service hosts I'm hoping to speed things up dramatically. If I get some meaningful results, I will add them to this post.
I need to stream large content with WCF and deferred execution (using NetTcpBinding); in other words, return a list of persons (it could be anything) from a database through a WCF service without consuming all the memory on the server side.
I have tried the solution in this post: Streaming large content with WCF and deferred execution
Using BasicHttpBinding it works like a charm, but when using NetTcpBinding... well... it doesn't work.
Can anyone help me with this?
Thanks!
Project here: WCF Streaming IEnumerable
Here is guidance on transferring large data with streaming.
http://msdn.microsoft.com/en-us/library/ms733742.aspx
At the bottom of the page is a link to some samples, one of which shows using TCP.
http://msdn.microsoft.com/en-us/library/ms789010.aspx
If this doesn't help, then perhaps you could provide more detail on how it is failing (error message)?
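If it helps as a starting point before digging into the samples: the usual net.tcp gotcha is that streaming has to be switched on in the binding on both ends (the default TransferMode is Buffered), and streamed transfer does not combine with reliable sessions or message-level security. A rough sketch of a self-hosted endpoint; the contract here is just a placeholder for whatever shape the linked post returns:

```csharp
using System;
using System.IO;
using System.ServiceModel;
using System.Text;

[ServiceContract]
public interface IPersonService
{
    // Placeholder shape: ultimately WCF is handed a Stream to pull from.
    [OperationContract]
    Stream GetPeople();
}

public class PersonService : IPersonService
{
    public Stream GetPeople()
    {
        // Placeholder body; the real implementation would wrap the deferred
        // IEnumerable<Person> coming from the database.
        return new MemoryStream(Encoding.UTF8.GetBytes("placeholder"));
    }
}

public static class TcpStreamingHost
{
    public static void Main()
    {
        var binding = new NetTcpBinding
        {
            TransferMode = TransferMode.Streamed,          // default is Buffered
            MaxReceivedMessageSize = 1024L * 1024 * 1024,  // illustrative cap
            SendTimeout = TimeSpan.FromMinutes(10)
        };
        binding.ReliableSession.Enabled = false;           // incompatible with streaming

        using (var host = new ServiceHost(typeof(PersonService)))
        {
            host.AddServiceEndpoint(typeof(IPersonService), binding,
                "net.tcp://localhost:9000/persons");
            host.Open();
            Console.WriteLine("Listening... press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```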
I have a WCF service hosted on Windows Server 2008/IIS exposing a netTcpBinding endpoint using the DataContractSerializer. This service is consumed by a Windows Forms application, which is suffering from various performance issues. In an effort to reduce the payload transferred from server to client over our corporate network, I decided to integrate the protobuf-net (version r580) serialization engine into some of my service's operations using the ProtoBehavior attribute.
Prior to the integration of protobuf-net, the cumulative size of the serialized server responses was approximately 18 MB. Afterwards, it was 1.6 MB, as verified with WCF trace logs on both client and server; unfortunately, this didn't result in decreased loading times in the client application.
After digging into it further, I found that the bytes received by the client over the network, pre-protobuf vs. post-protobuf, only differed by about 1 MB, as reported by a network traffic tool. How can this be? How can payloads differing by roughly 16 MB in their serialized form (comprising several messages) represent only a 1 MB difference when transmitted over the network? Could the resulting TCP stream be bloated if the underlying protobuf stream is assembled a certain way?
Additionally, I should note that the 1.6 MB protobuf-net serialized payload comprises several response messages, one of which is approximately 1.25 MB by itself; could this be the issue? Should I work on breaking that into smaller responses? If so, what's the threshold?
I'd appreciate any input regarding this, as it's been puzzling me for a couple of weeks now. I've spent hours poring over posts relating to protobuf-net, and while it's delivering on its promise of a compact serialization format, I haven't been able to realize the benefits in practice.
Thanks in advance.
I have run into the exception below a few times in the past and each time I just change the configuration to allow a bigger object graph.
"Maximum number of items that can be serialized or deserialized in an object graph is '65536'. Change the object graph or increase the MaxItemsInObjectGraph quota."
However, I was speaking to a colleague and he said that WCF should not be used to send large amounts of data; instead, the data should be sent in bite-sized pieces.
So what is the general consensus about large amounts of data being returned?
In my experience, using synchronous web service operations to transmit large data sets or files leads to many different problems.
Firstly, you have performance-related issues: serialization time at the service boundary. Then you have availability issues: incoming requests can time out waiting for a response, or may be rejected because there is no dispatcher thread free to service the request.
It is much better to delegate large data transfer and processing to some offline asynchronous process.
For example, in your situation, you send a request and the service returns a URI to the eventual resource you want. You may have to wait for the resource to become available, but you can code your consumer appropriately.
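As a sketch of the contract shape I mean (all names here are illustrative):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IReportService
{
    // Kick off the expensive work and return immediately with the URI
    // where the result will eventually be available.
    [OperationContract]
    Uri RequestReport(string reportId);

    // The consumer polls (or is otherwise notified) until the resource is
    // ready, then fetches it from the URI outside of this service call.
    [OperationContract]
    bool IsReady(Uri resource);
}
```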
I haven't got any concrete examples but this article seems to point to WCF being used for large data sets, and I am aware of people using it for images.
Personally, I have always had to increase this property for any real world data.
I'm looking for a way of implementing a file transfer service over HTTPS which uses chunking to cope with intermittent connectivity loss and to reduce the large timeouts required when using streaming. Because the client may be behind firewalls, the Chunking Channel sample on MSDN isn't suitable.
There is an old discussion about this on the Microsoft Forums but not a complete answer, or at least not one that I have the know-how to implement.
There is a sample of a resumable download service here: http://cid-8d29fb569d8d732f.skydrive.live.com/self.aspx/.Public/WCF/Resume%5E_Download%5E_WCF%5E_1%20%5E52%5E6.zip
This sample uses a custom WCF binding. It looks like it works by getting one segment of the file at a time, with the possibility of fetching any remaining segments when the system is back online.
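In outline, the pattern is simple even without the custom binding; something along these lines (a sketch of the idea, not the sample's actual API):

```csharp
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IChunkedDownload
{
    // Returns up to maxBytes of the file starting at offset; an empty
    // array signals that the end of the file has been reached.
    [OperationContract]
    byte[] GetChunk(string fileId, long offset, int maxBytes);
}

public static class ResumableClient
{
    // Client loop: progress is simply the length of the local file, so a
    // dropped connection just means calling Download again to resume.
    public static void Download(IChunkedDownload proxy, string fileId, string localPath)
    {
        const int chunkSize = 256 * 1024;
        using (var fs = new FileStream(localPath, FileMode.OpenOrCreate, FileAccess.Write))
        {
            long offset = fs.Length;              // resume from what we already have
            fs.Seek(offset, SeekOrigin.Begin);
            while (true)
            {
                byte[] chunk = proxy.GetChunk(fileId, offset, chunkSize);
                if (chunk == null || chunk.Length == 0) break;
                fs.Write(chunk, 0, chunk.Length);
                offset += chunk.Length;
            }
        }
    }
}
```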