WCF Service method that uses byte[]

I have a WCF service written in C# that is hosted as a Windows service.
The key method, widely used by most clients, has this signature:
public string storeDocument(byte[] document)
The byte[] is passed to a few shared methods before it gets stored in the database.
How do I clean up the memory?
This method is called by many clients, and we recently noticed that the service's memory usage on the server is 60 to 100 MB and that CPU usage sometimes goes up to 80%.
Is there any way I can make sure it doesn't use that much memory?
Please help.

WCF also supports streaming. If you are moving large chunks of data, that may be a better solution. See http://msdn.microsoft.com/en-us/library/ms733742.aspx
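For illustration, here is a minimal sketch of what a streamed variant of that operation could look like. The names and the 64 KB buffer are illustrative, and the binding values are assumptions to adapt rather than drop-in settings:

using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IDocumentService
{
    // With transferMode="Streamed", WCF hands the body over as a Stream,
    // so the whole document is never held in memory as a single byte[].
    [OperationContract]
    void StoreDocument(Stream document);
}

public class DocumentService : IDocumentService
{
    public void StoreDocument(Stream document)
    {
        var buffer = new byte[64 * 1024];
        int read;
        while ((read = document.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Write each chunk to the database here instead of
            // accumulating the full document first.
        }
    }
}

And the corresponding binding in app.config:

<basicHttpBinding>
  <binding name="streamed" transferMode="Streamed"
           maxReceivedMessageSize="104857600" />
</basicHttpBinding>

Note that a streamed operation is limited to a single Stream body (or a MessageContract), so the string return value of the original signature may need to move into a response header or a separate call.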

Related

Async WCF and Protocol Behaviors

FYI: This will be my first real foray into Async/Await; for too long I've been settling for the familiar territory of BackgroundWorker. It's time to move on.
I wish to build a WCF service, self-hosted in a Windows service running on a remote machine in the same LAN, that does this:
Accepts a request for a single .ZIP archive
Creates the archive and packages several files
Returns the archive as its response to the request
I have to support archives as large as 10GB. Needless to say, this scenario isn't covered by basic WCF designs; we must take additional steps to meet the requirement. We must eliminate timeouts while the archive is building and memory errors while it's being sent. Both of these occur under basic WCF designs, depending on the size of the file returned.
My plan is to proceed using task-based asynchronous WCF calls and streaming mode.
I have two concerns:
Is this the proper approach to the problem?
Microsoft has done a nice job at abstracting all of this, but what of the underlying protocols? What goes on 'under the hood?' Does the server keep the connection alive while the archive is building (could be several minutes) or instead does it close the connection and initiate a new one once the operation is complete, thereby requiring me to properly route the request through the client machine firewall?
For #2, clearly I'm hoping for the former (keep-alive). But after some searching I'm not easily finding an answer. Perhaps you know.
You need streaming for big payloads. That is the right approach. This has nothing at all to do with asynchronous IO. The two are independent. The client cannot even tell that the server is async internally.
I'll add my standard answers for whether to use async IO or not:
https://stackoverflow.com/a/25087273/122718 Why does the EF 6 tutorial use asychronous calls?
https://stackoverflow.com/a/12796711/122718 Should we switch to use async I/O by default?
Each request runs over a single connection that is kept alive. This holds both for streaming large amounts of data and for long initial delays. I'm not sure why you are concerned about routing; does your router kill such connections? If so, that is the problem to fix.
Regarding keep-alive, there is nothing going over the wire to do that. TCP sessions can stay open indefinitely without any wire traffic.
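To make the combination concrete, here is a hedged sketch of a task-based, streamed download for the ZIP scenario. BuildArchiveAsync is a hypothetical helper, and you would still need to raise sendTimeout/receiveTimeout on the binding to survive a multi-minute build:

using System;
using System.IO;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IArchiveService
{
    // Task-based on the server, streamed on the wire; the client sees one
    // ordinary request/response over a single connection that stays open
    // while the archive is built and then while it is transferred.
    [OperationContract]
    Task<Stream> GetArchiveAsync(string archiveName);
}

public class ArchiveService : IArchiveService
{
    public async Task<Stream> GetArchiveAsync(string archiveName)
    {
        // Hypothetical helper that packages the files into a .zip on disk
        // without blocking a dispatcher thread.
        string zipPath = await BuildArchiveAsync(archiveName);

        // WCF pulls from this stream as the client reads, so a 10 GB
        // archive is never buffered in memory.
        return File.OpenRead(zipPath);
    }

    private Task<string> BuildArchiveAsync(string archiveName)
    {
        throw new NotImplementedException("build the archive here");
    }
}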

Need to transfer large file content from WCF service to Java client (Java web app)

Basically I need to transfer a large file between a WCF service and a Java client;
can someone give me directions, please?
I need to create a WCF service that reads blob content (a file stored in a database column) and passes it to a Java web application (the client of the WCF service).
File size may vary from 1 KB to 20 MB.
I have already researched the options below but am still not able to decide which one to go with, which is feasible and which is not.
Could someone guide me, please?
1. Pass the file content as byte[]:
I understand this increases the data size sent to the client, because the data is Base64-encoded and embedded in the SOAP message itself, which makes communication slower and hurts performance. (Base64 uses 4 bytes on the wire for every 3 bytes of content, so a 20 MB file becomes roughly 27 MB inside the SOAP message.)
This works for sure, but I am not sure it is advisable to go with this approach.
2. Share a network drive/FTP folder accessible to both the client and the WCF service:
The file needed by the client would first be stored there by the WCF service, and the client would then read it using Java I/O or FTP.
This looks good from a data-size/bandwidth point of view, but it adds extra processing on both the service and the client side (storing and then reading via the network share or FTP folder).
3. Streaming:
I am not sure this is feasible with a Java client. My understanding is that streaming is supported for non-.NET clients, but I am not sure how to go about it.
I understand that for streaming I need to use basicHttpBinding, but do I need a DataContract or a MessageContract, or will either work? And what has to be done on the Java client side? (See the sketch after the answer below.)
4. Use MTOM for passing large data in SOAP requests:
This looks like it was specifically designed to solve large data transfer in web service calls, but I have to investigate it further; as of now I don't have much of an idea about it. Does anyone have suggestions on this?
I understand the question is a bit lengthy, but I had to lay out all four options I have tried and my concerns/findings with each, so that you can suggest one of them (or perhaps a new option) and, knowing what I have already tried, direct me more effectively.
I am in the same position as yourself and I can state from experience that option 1 is a poor choice for anything more than a couple of MB.
In my own system, upload times increase exponentially, with 25 MB files taking in excess of 30 minutes to upload.
I've run some timings, and the bulk of this is in the transfer of the file from the .NET client to the Java web service. Our web service is a facade for a set of third-party services; using the built-in client provided by the third party (not viable in the business context) is significantly faster, at less than 5 minutes for a 25 MB file. Upload to our client application is also quick.
We have tried MTOM and, unless we implemented it incorrectly, didn't see huge improvements (under a 10% speed increase).
Next port of call will be option 2: file transfers are relatively quick, so by uploading the file directly to one of the web service hosts I'm hoping to speed things up dramatically. If I get some meaningful results, I will add them to my post.
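For the streaming option (3), here is a rough sketch of what the service side could look like. It combines basicHttpBinding, MTOM encoding, and a MessageContract, since a streamed message is limited to a single body member; all names and sizes are illustrative, and the assumption that the Java side can consume this via JAX-WS (which supports MTOM) should be verified against your stack:

using System.IO;
using System.ServiceModel;

[MessageContract]
public class FileRequest
{
    [MessageBodyMember]
    public string FileName { get; set; }
}

[MessageContract]
public class FileResponse
{
    // Metadata travels in a SOAP header so the body can be the raw stream.
    [MessageHeader]
    public string FileName { get; set; }

    [MessageBodyMember]
    public Stream Content { get; set; }
}

[ServiceContract]
public interface IFileTransferService
{
    [OperationContract]
    FileResponse GetFile(FileRequest request);
}

With a binding along these lines:

<basicHttpBinding>
  <binding name="fileTransfer"
           transferMode="StreamedResponse"
           messageEncoding="Mtom"
           maxReceivedMessageSize="20971520" />
</basicHttpBinding>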

Cancelling WCF calls with large data?

I'm about to implement a FileService using WCF. It should be able to upload files by providing the file content itself and the file name. The current ServiceContract looks like the following:
[ServiceContract]
public interface IFileService
{
    [OperationContract]
    [FaultContract(typeof(FaultException))]
    byte[] LoadFile(string relativeFileNamePath);

    [OperationContract]
    [FaultContract(typeof(FaultException))]
    void SaveFile(byte[] content, string relativeFileNamePath);
}
It works fine at the moment, but I want to reduce the network payload of my application when using this FileService. I need to provide many files as soon as the user opens a specific section of my application, but I might be able to cancel some of those transfers as the user navigates further through the application. As many of my files are somewhere between 50 and 300 MB, it takes quite a few seconds to transfer them (the application might run on very slow networks, where it might take up to a minute).
To clarify, and to outline the difference from all those other WCF questions: the specific problem is that moving the data between client <-> server is the bottleneck, not the performance of the service itself. Is changing the interface to a streamed WCF service reasonable?
It is good practice to use a stream if the file size is above a certain amount. In the enterprise application we are writing at my work, we stream anything bigger than 16 KB and buffer anything smaller; our file service is specifically designed to handle this logic.
When the transfer mode of your service is set to Buffered, data is buffered on both the client and the service while it is transmitted. This means that if you are sending a 300 MB file, all 300 MB is buffered on both ends before the call completes, which will definitely create bottlenecks. For performance reasons, buffering should be reserved for small files that buffer quickly; otherwise, a stream is the best way to go.
If the majority or all of your files are large, I'd switch to using a stream.
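One point worth spelling out: with a streamed contract, the server only sends data as fast as the client reads it, so a client can effectively cancel a transfer by abandoning the read loop. A hedged sketch, assuming LoadFile is changed to return a Stream and that the cancellation flag comes from your navigation logic:

using System;
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IStreamedFileService
{
    [OperationContract]
    Stream LoadFile(string relativeFileNamePath);
}

public static class FileDownloader
{
    public static void Download(IStreamedFileService client, string path,
                                Func<bool> cancelled)
    {
        using (Stream download = client.LoadFile(path))
        {
            var buffer = new byte[64 * 1024];
            int read;
            while ((read = download.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Abandoning the loop stops pulling data, so the rest of a
                // 300 MB file is never sent over the slow link.
                if (cancelled()) break;
                // ... hand the chunk to the application here ...
            }
        }
    }
}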

Reducing the memory consumption of WCF services

I have an ASP.NET and C# application using a WCF service hosted in IIS,
and the memory consumption of the WCF service has been increasing over time.
Can anyone guide me in making the WCF service consume less memory?
When memory consumption keeps rising, your service is probably leaking memory. Although a small rise is expected during the first 100 or so calls to the web service, at some point it should stabilize around a specific usage under regular load. You will have to check your service code for anything that could cause this leak. (For example, don't rely solely on automatic garbage collection; assign null to variables you won't use anymore so they can be freed sooner.)
For one thing, you can make the WCF service a per-call instance, which means it will create a service instance for each request and tear it down afterwards, as sketched below.
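A minimal sketch of that setting; the service and contract names are placeholders:

using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string DoWork(string input);
}

// PerCall: a fresh instance per request, torn down (and eligible for
// garbage collection) as soon as the call completes, so per-instance
// state cannot pile up across calls.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class MyService : IMyService
{
    public string DoWork(string input)
    {
        return input; // any instance state dies with this call
    }
}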

Concurrent WCF calls via shared channel

I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled.
But they are not being called concurrently.
If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I want to avoid that cost, since it is functionally unnecessary for my scenario: I have no session state, nor do I need to re-authenticate the caller each time. I understand that creating the channel factory is far more expensive than creating the channels, but it is still a cost I'd like to avoid if possible.
I found this article on MSDN that states:
    While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete.
Firstly, I'm not sending large messages (just a lot of small ones, since I'm doing load testing), but I am still seeing the blocking behavior. Secondly, this documentation is rather open-ended and unhelpful: it says they "might not" support writing more than one message, but it doesn't explain the scenarios under which they would support concurrent messages.
Can anyone shed some light on this?
Addendum: I am also considering creating a pool of channels that the web server uses to fulfill requests. But again, I see no reason why my existing approach should block and I'd rather avoid the complexity if possible.
After much ado, this all came down to the fact that I wasn't calling Open explicitly on the channel before using it. Apparently an implicit Open can preclude concurrency in some scenarios.
You can cache the WCF proxy but still create a channel for each service call: this ensures concurrency, is not very expensive compared to building the proxy from scratch, and does not require re-authentication on each call. This is explained on Wenlong Dong's blog - "Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices" (a much better source of WCF information and guidance than MSDN).
Just for completeness: Here is a blog entry explaining the observed behavior of request serialization when not opening the channel explicitly:
http://blogs.msdn.com/b/wenlong/archive/2007/10/26/best-practice-always-open-wcf-client-proxy-explicitly-when-it-is-shared.aspx
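Putting the two answers together, a hedged sketch of the pattern: cache the expensive ChannelFactory once, create a cheap channel per call, and open it explicitly so concurrent requests are not serialized. The contract and endpoint name are placeholders:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IAppService
{
    [OperationContract]
    string Process(string payload);
}

public static class AppTierClient
{
    // The factory is the expensive part; build it once and reuse it.
    private static readonly ChannelFactory<IAppService> Factory =
        new ChannelFactory<IAppService>("appTierEndpoint");

    public static TResult Call<TResult>(Func<IAppService, TResult> operation)
    {
        IAppService channel = Factory.CreateChannel();
        var clientChannel = (IClientChannel)channel;
        try
        {
            // Explicit Open avoids the implicit-Open path, which can
            // serialize concurrent calls on a shared channel.
            clientChannel.Open();
            return operation(channel);
        }
        finally
        {
            if (clientChannel.State == CommunicationState.Faulted)
                clientChannel.Abort();
            else
                clientChannel.Close();
        }
    }
}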