My WCF service transfers large files (~200 MB), and I see memory usage growing constantly (1 GB, 2 GB ... 8 GB). Here is what the Visual Studio memory snapshot shows:
Object                      Number   Size (Bytes)
BufferManagerOutputStream   3        266 668 448
...
And here is the service configuration:
BasicHttpBinding httpb = new BasicHttpBinding();
httpb.MaxReceivedMessageSize = int.MaxValue;
httpb.MaxBufferPoolSize = 0;
Do you have any ideas for investigation and resolution?
For large file transfers between the server and the client, use MTOM encoding to reduce the overhead. You could also implement the IDisposable interface on your service class to manage your memory.
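A minimal sketch of the IDisposable suggestion above (the contract and class names here are hypothetical): WCF calls Dispose on the service instance when it is released, which is a convenient place to drop references to large buffers.

```csharp
using System;
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface ILargeFileService
{
    [OperationContract]
    byte[] LoadFile(string path);
}

// Hypothetical implementation: WCF calls Dispose when the service instance is
// released, letting us drop references to large buffers so the GC can reclaim
// them instead of keeping them rooted through instance fields.
public class LargeFileService : ILargeFileService, IDisposable
{
    private byte[] _buffer; // large per-call state we want released promptly

    public byte[] LoadFile(string path)
    {
        _buffer = File.ReadAllBytes(path);
        return _buffer;
    }

    public void Dispose()
    {
        _buffer = null; // release the reference as soon as the instance is done
    }
}
```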
I don't think the memory leak is caused by the WCF I/O transfer itself. I suggest using the VS2017 Diagnostic Tools to check which data objects are increasing.
You could refer to the following documents:
https://learn.microsoft.com/en-us/visualstudio/profiling/memory-usage?view=vs-2017
https://blogs.msdn.microsoft.com/visualstudio/2016/02/15/analyze-cpu-memory-while-debugging/
Related
I have added a service reference to my web.config file, but I am not sure about the transferMode property inside the binding tag.
In basicHttpBinding, which is the best transferMode for a SOAP/XML response?
Basically there are four transfer modes. If you narrow those down to two, buffered and streamed, here are the criteria:
If you are transferring large files, mostly binary files, try using streamed. This mode streams the data to the client instead of sending one big chunk of data, which helps your application be more efficient in terms of memory consumption. Some of the advanced functionalities of WCF are not available with this transfer mode.
By default, buffered is selected. This is suitable for normal messages of relatively small or medium size. The whole request or response is buffered in memory and then flushed to the client or server.
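As a sketch, the two modes are selected through the TransferMode property of the binding (in web.config this is the transferMode attribute on the binding element); the size and timeout values below are illustrative, not recommendations:

```csharp
using System;
using System.ServiceModel;

// Sketch: streamed transfer on BasicHttpBinding. In Streamed mode the message
// body is not buffered whole; MaxReceivedMessageSize still bounds its length.
var binding = new BasicHttpBinding
{
    TransferMode = TransferMode.Streamed,        // default is TransferMode.Buffered
    MaxReceivedMessageSize = 500L * 1024 * 1024, // allow bodies up to ~500 MB
    SendTimeout = TimeSpan.FromMinutes(10)       // slow networks need longer timeouts
};
```

For a typical small SOAP/XML response, the default buffered mode is the simpler choice; streamed mode pays off when bodies are large.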
There is another method that requires a custom channel that sends messages in multiple chunks: the chunking channel.
I'm about to implement a FileService using WCF. It should be able to upload files by providing the file content itself and the file name. The current ServiceContract looks like the following:
[ServiceContract]
public interface IFileService
{
    [OperationContract]
    [FaultContract(typeof(FaultException))]
    byte[] LoadFile(string relativeFileNamePath);

    [OperationContract]
    [FaultContract(typeof(FaultException))]
    void SaveFile(byte[] content, string relativeFileNamePath);
}
It works fine at the moment, but I want to reduce the network payload of my application when using this FileService. I need to provide many files as soon as the user opens a specific section of my application, but I might be able to cancel some of them as the user navigates further through the application. As many of my files are somewhere between 50 and 300 MB, it takes quite a few seconds to transfer them (the application might run on very slow networks, so it might take up to a minute).
To clarify and to outline the difference to all those other WCF questions: The specific problem is that providing the data between client <-> server is the bottleneck, not the performance of the service itself. Is changing the interface to a streamed WCF service reasonable?
It is good practice to use a stream if the file size is above a certain threshold. On the enterprise application we are writing at my work, if a file is bigger than 16 KB we stream it; if it is smaller, we buffer. Our file service is specifically designed to handle this logic.
When the transfer mode of your service is set to buffered, it will buffer on the client as well as on the service when you are transmitting your data. This means if you are sending a 300 MB file, all 300 MB will be buffered on both ends before the call completes. This will definitely create bottlenecks. For performance reasons, buffering should only be used when you have small files that buffer quickly. Otherwise, a stream is the best way to go.
If the majority or all of your files are larger files I'd switch to using a stream.
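A hedged sketch of what a streamed variant of such a contract could look like (the names are illustrative): a streamed operation may carry at most one body member, and it must be a Stream, so metadata such as the file name travels in a message header via a MessageContract.

```csharp
using System.IO;
using System.ServiceModel;

// Sketch of a streamed file contract. The stream must be the sole body member,
// so the relative path is moved into a message header.
[MessageContract]
public class FileUploadMessage
{
    [MessageHeader]
    public string RelativeFileNamePath { get; set; }

    [MessageBodyMember]
    public Stream Content { get; set; }
}

[ServiceContract]
public interface IStreamedFileService
{
    [OperationContract]
    Stream LoadFile(string relativeFileNamePath);

    [OperationContract]
    void SaveFile(FileUploadMessage message);
}
```

The binding then needs TransferMode.Streamed (or StreamedRequest/StreamedResponse if only one direction carries large data).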
I have a WCF service written in C# that is hosted as a Windows service.
The key method, widely used by most clients, has the following signature:
public string storeDocument(byte[] document)
The byte[] is passed to few shared methods before it gets stored in the database.
How do I clean up the memory?
This method is called by many clients and is widely used, and we recently noticed that the memory usage of this service on the server is 60 to 100 MB, with CPU usage sometimes going up to 80%.
I would like to know whether there is any way I can make sure it doesn't use that much memory.
Please help.
WCF also supports streaming. If you transfer large chunks of data, that may be a better solution. See http://msdn.microsoft.com/en-us/library/ms733742.aspx
I have a WCF service that uses the HTTP protocol. When a particularly large query hits the system, it creates a large Byte[] that is rooted, up through the buffers, by the HttpChannelListener and eventually by the ServiceHost itself. It stays there even after the WCF transaction completes. This in turn causes Large Object Heap fragmentation, which eventually causes the application to throw an OOM exception.
Here's the path to the Byte[]:
ServiceHost.channelDispatchers.items._items[0].listener.innerChannelListener.typedListener.bufferManager.innerBufferManager.bufferPools[13].pool.globalPool.items._array[0]
The system uses buffered WCF communication for transactions to ensure that it's reliable.
Is there anything I can do to prevent these large objects from staying in memory?
You need to tune the MaxBufferPoolSize and MaxBufferSize properties of your WCF configuration. You may need to experiment to find the values that best suit the nature of your application; it depends on your message size, number of concurrent requests, etc.
You may also set MaxBufferPoolSize to 0 to disallow pooling of buffers. The data is still buffered, but the buffers are not pooled. Be sure this is really what you want, because buffer pooling does have its advantages in reducing memory allocations.
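As a sketch of the tuning described above (the values are illustrative, not recommendations): note that for BasicHttpBinding in buffered mode, MaxBufferSize must equal MaxReceivedMessageSize.

```csharp
using System.ServiceModel;

// Sketch: tuning buffer settings on a BasicHttpBinding. MaxBufferPoolSize caps
// the memory the pool retains between calls; setting it to 0 disables pooling
// entirely, so buffers are allocated per call and become collectible afterwards.
var binding = new BasicHttpBinding
{
    MaxBufferSize = 1 * 1024 * 1024,          // 1 MB per buffer (buffered mode)
    MaxReceivedMessageSize = 1 * 1024 * 1024, // must match MaxBufferSize when buffered
    MaxBufferPoolSize = 512 * 1024            // cap pooled memory at 512 KB
};
```

Keeping MaxBufferSize small relative to the pool cap prevents one oversized request from pinning a huge pooled buffer on the Large Object Heap.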
This is some explanation of what those settings actually mean, and why they are needed.
I have an ASP.NET/C# application using a WCF service hosted in IIS,
and the memory consumption of the WCF service increases over time.
Can anyone guide me in making the WCF service consume less memory?
When memory consumption keeps rising, your service is probably leaking memory. Although a small rise is expected during the first 100 or so calls to the web service, with regular usage it should at some point stabilize around a specific level. You will have to check your service code for anything that could cause this leak. (For example, don't rely entirely on automatic garbage collection; assign null to large object references you won't use anymore so they can be reclaimed sooner.)
Well, for one thing, you can make the WCF service a per-call instance, which means it will create a service instance for each request and then tear it down afterwards.
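A minimal sketch of the per-call setting (the contract and class names are hypothetical):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    string StoreDocument(byte[] document);
}

// Sketch: with InstanceContextMode.PerCall, WCF constructs a new service
// instance for every request and releases it (calling Dispose, if implemented)
// when the request completes, so per-request state does not accumulate.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class DocumentService : IDocumentService
{
    public string StoreDocument(byte[] document)
    {
        // ... persist the document to the database ...
        return "stored";
    }
}
```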