My ASP.NET MVC3 application uses Ninject to instantiate service instances through a wrapper. The controller's constructor takes an IMyService parameter, and the action methods call myService.SomeRoutine(). The WCF service is accessed over SSL using wsHttpBinding.
I have a search routine that can return so many results that it exceeds the maximum I have configured in WCF (Maximum number of items that can be serialized or deserialized in an object graph). When this happens, the application pools for both the service and the client grow noticeably and remain bloated well past the end of the request.
I know that I can restrict the number of results or use DTOs to reduce the amount of data being transmitted. That said, I want to fix what appears to be a memory leak.
Using CLR Profiler, I see that the bulk of the heap is used by the following:
System.Runtime.IOThreadTimer.TimerManager
System.Runtime.IOThreadTimer.TimerGroup
System.Runtime.IOThreadTimer.TimerQueue
System.ServiceModel.Security.SecuritySessionServerSettings
System.ServiceModel.Channels.SecurityChannelListener
System.ServiceModel.Channels.HttpsChannelListener
System.ServiceModel.Channels.TextMessageEncoderFactory
System.ServiceModel.Channels.TextMessageEncoderFactory.TextMessageEncoder
System.Runtime.SynchronizedPool
System.Runtime.SynchronizedPool.Entry[]
...TextMessageEncoderFactory.TextMessageEncoder.TextBufferedMessageWriter
System.Runtime.SynchronizedPool.GlobalPool
System.ServiceModel.Channels.BufferManagerOutputStream
System.Byte[][]
System.Byte[] (92%)
In addition, if I modify the search routine to return an empty list (while the NHibernate stuff still goes on in the background - verified via logging), the application pool sizes remain unchanged. If the search routine returns significant results without an exception, the application pool sizes remain unchanged. I believe the leak occurs when the list of objects is serialized and results in an exception.
I upgraded to the newest Ninject and I used log4net to verify that the service client was closed or aborted depending on its state (and the state was never faulted). The only thing I found interesting was that the service wrapper was being finalized and not explicitly disposed.
I'm having difficulty troubleshooting this to find out why my application pools aren't releasing memory in this scenario. What else should I be looking at?
UPDATE: Here's the binding...
<wsHttpBinding>
  <binding name="wsMyBinding" closeTimeout="00:01:00" openTimeout="00:01:00"
           receiveTimeout="00:02:00" sendTimeout="00:02:00" bypassProxyOnLocal="false"
           transactionFlow="false" hostNameComparisonMode="StrongWildcard"
           maxBufferPoolSize="999999" maxReceivedMessageSize="99999999"
           messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="false"
           allowCookies="false">
    <readerQuotas maxDepth="90" maxStringContentLength="99999"
                  maxArrayLength="99999999" maxBytesPerRead="99999"
                  maxNameTableCharCount="16384" />
    <reliableSession enabled="false" />
    <security mode="TransportWithMessageCredential">
      <message clientCredentialType="UserName" />
    </security>
  </binding>
</wsHttpBinding>
UPDATE #2: Here is the Ninject binding, but the error message is more interesting. My wrapper wasn't setting MaxItemsInObjectGraph properly, so it used the default. Once I set this, the leak went away. It seems that both the client and the service keep the serialized/deserialized data in memory when the service sends the serialized data to the client and the client rejects it for exceeding MaxItemsInObjectGraph.
Ninject Binding:
Bind<IMyService>().ToMethod(x =>
    new ServiceWrapper<IMyService>("MyServiceEndpoint").Channel)
    .InRequestScope();
Error Message:
The InnerException message was 'Maximum number of items that can be
serialized or deserialized in an object graph is '65536'
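For reference, here is roughly how the quota can be raised on the client channel. This is only a hedged sketch, not the actual ServiceWrapper code: it assumes a ChannelFactory-based wrapper and uses the standard DataContractSerializerOperationBehavior to set MaxItemsInObjectGraph (whose default is the 65536 seen in the error) on every operation.
using System.ServiceModel;
using System.ServiceModel.Description;

public static class ObjectGraphQuota
{
    // Builds a channel for the named endpoint with a raised MaxItemsInObjectGraph.
    public static IMyService CreateChannel(string endpointName)
    {
        var factory = new ChannelFactory<IMyService>(endpointName);

        foreach (OperationDescription operation in factory.Endpoint.Contract.Operations)
        {
            // Find (or add) the serializer behavior and raise the object graph limit.
            var serializerBehavior =
                operation.Behaviors.Find<DataContractSerializerOperationBehavior>();
            if (serializerBehavior == null)
            {
                serializerBehavior = new DataContractSerializerOperationBehavior(operation);
                operation.Behaviors.Add(serializerBehavior);
            }
            serializerBehavior.MaxItemsInObjectGraph = int.MaxValue;
        }

        return factory.CreateChannel();
    }
}
The same limit also has to be raised on the service side (for example via the dataContractSerializer behavior) so that large replies can be serialized at all.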
This doesn't actually fix the underlying memory leak (it just stops the error from being triggered), so I am still curious as to what could have been causing it, if anyone has any ideas.
How are you handling your proxy client creation and disposal?
I've found the most common cause of WCF-related memory leaks is mishandling WCF proxy clients.
I suggest, at the very least, wrapping your clients in a using block, something like this:
using (var client = new WhateverProxyClient())
{
// your code goes here
}
This ensures that the client is properly closed and disposed of, freeing memory.
This method is a bit controversial (Dispose can throw if the channel is faulted), but it should remove the possibility of leaking memory from client creation.
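If that concern applies to you, the usual alternative is the close-or-abort pattern. The sketch below reuses the same placeholder proxy type and is only one way to write it:
var client = new WhateverProxyClient();
try
{
    // your code goes here

    client.Close();    // normal shutdown when the channel is healthy
}
catch (CommunicationException)
{
    client.Abort();    // a faulted channel cannot be closed, only aborted
    throw;
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}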
Take a look here for more on this topic.
I'm using NetMessagingBinding on an IIS-hosted WCF service to consume messages published to a Windows Server Service Bus Topic.
From my understanding there is no limit on message size for Topics in Windows Server Service Bus, but nevertheless I'm getting an error when deserializing a message from the subscription:
System.ServiceModel.Dispatcher.NetDispatcherFaultException: (...)
The maximum string content length quota (8192) has been exceeded while reading XML data.
This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.'.
Please see InnerException for more details. ---> System.Runtime.Serialization.SerializationException: There was an error deserializing the object of type [Type].
The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. ---> System.Xml.XmlException:
The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.
The way I see it, there is no configuration I can change in WCF's web.config to raise the maximum string content length. The only property that could be related is MaxBufferPoolSize, but it is not exposed through the web.config.
The binding configuration used is:
<bindings>
<netMessagingBinding>
<binding name="messagingBinding"
closeTimeout="00:03:00" openTimeout="00:03:00"
receiveTimeout="00:03:00" sendTimeout="00:03:00"
prefetchCount="-1" sessionIdleTimeout="00:01:00">
<transportSettings batchFlushInterval="00:00:01" />
</binding>
</netMessagingBinding>
</bindings>
Thanks in advance,
Joao Carlos de Sousa
This issue can also be solved by using a custom binding that uses the netMessagingTransport. That way the readerQuotas node can be used to define the reader quotas.
<customBinding>
  <binding name="sbBindingConfiguration" sendTimeout="00:01:00" receiveTimeout="00:01:00" openTimeout="00:01:00">
    <binaryMessageEncoding>
      <readerQuotas maxDepth="100000000" maxStringContentLength="100000000"
                    maxArrayLength="100000000" maxBytesPerRead="100000000" maxNameTableCharCount="100000000"/>
    </binaryMessageEncoding>
    <netMessagingTransport manualAddressing="false" maxBufferPoolSize="100000" maxReceivedMessageSize="100000000">
      <transportSettings batchFlushInterval="00:00:00"/>
    </netMessagingTransport>
  </binding>
</customBinding>
Please refer to this post for more details on how to use the custom binding.
From the error, it seems like this is a WCF-level error and not a Service Bus one. Have you tried raising the maximum message size? This thread has info on it, but basically, you need to set up something like the following in your binding's configuration in the web.config:
<binding name="yourBinding"
maxReceivedMessageSize="10000000"
maxBufferSize="10000000"
maxBufferPoolSize="10000000">
<readerQuotas maxDepth="32"
maxArrayLength="100000000"
maxStringContentLength="100000000"/>
</binding>
NetMessagingBinding currently does not allow one to change MaxStringContentLength through the XML configuration.
A solution that worked for me was to create a message formatter behaviour extension by implementing the IDispatchMessageFormatter interface.
The extension can then be used either by:
creating an attribute that can be used in code to identify which operation contracts will use the message formatter behaviour
public class MessageFormatterExtensionBehaviorAttribute : Attribute, IOperationBehavior
{
    (...)

    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
    {
        dispatchOperation.Formatter = new MessageFormatterExtension();
    }

    (...)
}
creating a configuration element that exposes the custom behaviour (a sketch of this route follows below).
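For the second route, the following is a minimal sketch of what such an extension might look like; it is an assumption about the shape of the solution rather than code from the original post. Since WCF configuration only supports service and endpoint behaviors, the formatter is applied here through an endpoint behavior, which a BehaviorExtensionElement then exposes to config.
using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Configuration;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class MessageFormatterEndpointBehavior : IEndpointBehavior
{
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    {
        // Replace the formatter on every dispatch operation of this endpoint.
        foreach (DispatchOperation operation in endpointDispatcher.DispatchRuntime.Operations)
        {
            operation.Formatter = new MessageFormatterExtension();
        }
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

// Registered under <system.serviceModel><extensions><behaviorExtensions> and then
// referenced from an <endpointBehaviors> section in web.config.
public class MessageFormatterBehaviorExtensionElement : BehaviorExtensionElement
{
    public override Type BehaviorType
    {
        get { return typeof(MessageFormatterEndpointBehavior); }
    }

    protected override object CreateBehavior()
    {
        return new MessageFormatterEndpointBehavior();
    }
}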
I have a use case in my ASP.NET MVC app in which I need to save a collection of about 15k records (from a CSV file upload). I'm putting it through CSLA business objects in order to validate the uploaded data with business rules.
I'm making use of the WCF DataPortal. When save is called I get this error after about 30s to 45s:
System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at [my dataportal host address]/WcfPortal.svc that could accept the message.
I have determined that if I break down the collection into smaller chunks, and call save on each of those chunks, the use case completes without a problem.
I have configured my service to use the max values as follows, as recommended in Rocky's book, and increased the sendTimeout based on other guidance:
<binding name="wsHttpBinding_IWcfPortal" maxReceivedMessageSize="2147483647" sendTimeout="05:00:00">
<readerQuotas maxBytesPerRead="2147483647" maxArrayLength="2147483647" maxStringContentLength="2147483647" maxNameTableCharCount="2147483647" maxDepth="2147483647"/>
</binding>
Now I KNOW for a fact that my data does not exceed the 2147483647 size limit. Besides, if it did, I would expect a more meaningful error message indicating this (like I did when the size limits were at their defaults).
I have turned on WCF logging/tracing, which reveals nothing. This error seems to be some communication-level error that gets hit before the WCF stack comes into the picture.
Can anyone advise why I would be getting this error when trying to save this large collection?
As WCF has changed over the years, some other limits have been added that you can change. The latest info on WCF configuration for the data portal is available in two places:
The data portal FAQ page
The Using CSLA 4: Data Portal Configuration ebook
I am facing a problem when using WCF to fetch a large amount of data, and I do not want to keep increasing maxReceivedMessageSize (currently 65536). Is there an alternative, or can I achieve this using streaming? If yes, then how?
Please suggest.
Yes, you can stream data in WCF, but WCF has some limitations when working in Streamed mode. So if you don't mind handling it yourself, you might consider implementing a method that returns chunks of data and calling it multiple times (see the sketch at the end of this answer).
Otherwise, you can enable Streamed mode in configuration like this:
<basicHttpBinding>
  <binding name="HttpStreaming" maxReceivedMessageSize="67108864"
           transferMode="Streamed"/>
</basicHttpBinding>
<!-- an example customBinding using Http and streaming -->
<customBinding>
  <binding name="Soap12">
    <textMessageEncoding messageVersion="Soap12WSAddressing10" />
    <httpTransport transferMode="Streamed" maxReceivedMessageSize="67108864"/>
  </binding>
</customBinding>
Then return a Stream object from your contract method. This way the data is transferred as the stream is read.
[ServiceContract]
public interface IRemoteFileService
{
    [OperationContract]
    Stream OpenFile(string serverPath);
}
If your data is already in a stream, as when you transfer a file, you just open the stream and return it. Otherwise you can use a MemoryStream and DataContractSerializer to serialize almost any object tree.
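As a rough illustration of both options, here is a hedged sketch of a service-side implementation; the class name and the SerializeToStream helper are assumptions rather than part of the original answer:
using System.IO;
using System.Runtime.Serialization;

public class RemoteFileService : IRemoteFileService
{
    public Stream OpenFile(string serverPath)
    {
        // The data already lives in a file: open the stream and return it.
        // WCF reads from it as the response is streamed to the client.
        return File.OpenRead(serverPath);
    }

    // Hypothetical helper: serialize an arbitrary object graph into a MemoryStream
    // so it can be returned from a Stream-typed operation.
    private static Stream SerializeToStream<T>(T graph)
    {
        var stream = new MemoryStream();
        var serializer = new DataContractSerializer(typeof(T));
        serializer.WriteObject(stream, graph);
        stream.Position = 0;   // rewind so the data is streamed from the beginning
        return stream;
    }
}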
For details, check this and this.
While this sounds simple, there are complications and limitations in Streamed mode. If you just need a simple way to bypass the size limits for a big object transfer, consider sending the object partially over multiple calls.
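For that chunked alternative, a minimal sketch of what the contract might look like follows; the names and paging shape are assumptions:
using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IChunkedDataService
{
    // The client calls this repeatedly, advancing 'offset' by 'count' until fewer
    // than 'count' items come back. MyRecord is a placeholder data contract.
    [OperationContract]
    List<MyRecord> GetDataChunk(int offset, int count);
}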
I am new to WCF; my task is to create and maintain sessions in WCF.
I have a requirement in my project: I need a WCF service that is session-enabled. More than one client will contact the service, and the service has to deliver the information each client wants.
For example: the service will hold a DOM object (here DOM means a database object that holds, say, Employee information). Each client will ask for different information from the DOM object, and our service has to deliver it. Our service should not go to the database each time a client calls, so we need to implement session management in the WCF service.
It would be a great help if someone could provide some ideas, suggestions, or sample code for implementing this...
Thanks...
First I'll point out that it is usually a very bad idea to use sessions with WCF. Having too many sessions open will consume lots of resources (e.g. memory and database connections). You mentioned that you are also storing database objects in the session - this is also likely to end up hurting you, as most databases only allow a limited number of sessions.
All that said, if you really need to use sessions, there is some info for configuring it on MSDN.
You can configure your binding to use sessions as follows:
<wsHttpBinding>
  <binding name="wsHttpBinding">
    <reliableSession enabled="true" />
  </binding>
</wsHttpBinding>
You can then mark your ServiceContract with SessionMode=SessionMode.Required:
[ServiceContract(Namespace="http://Microsoft.ServiceModel.Samples",
SessionMode=SessionMode.Required)]
public interface IMyService
{
...
}
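To make the session useful for caching, the service implementation can also be marked per-session so that instance fields survive across calls from the same client. The sketch below is only an assumption about how that might look; the Employee type and GetEmployee operation are hypothetical and not from the question:
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class MyService : IMyService
{
    // Loaded on first use and kept for the lifetime of this client's session,
    // so the database is only hit once per session.
    private List<Employee> _employees;

    public Employee GetEmployee(int id)
    {
        if (_employees == null)
        {
            _employees = LoadEmployeesFromDatabase();
        }
        return _employees.FirstOrDefault(e => e.Id == id);
    }

    private List<Employee> LoadEmployeesFromDatabase()
    {
        // Placeholder for the real data access code.
        return new List<Employee>();
    }
}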
I have a WCF REST Service which accepts a JSON string
One of the parameters is a large string of numbers
This causes the following error - which is visible by tracing and using SVC Trace Viewer
There was an error deserializing the object of type CarConfiguration. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.
Now I've read all sorts of articles advising how to rectify this
All of them recommend increasing various config settings on the server and client
e.g.
Error Serializing String in WebService call
http://bloggingabout.net/blogs/ramon/archive/2008/08/20/wcf-and-large-messages.aspx
http://social.msdn.microsoft.com/Forums/en/wcf/thread/f570823a-8581-45ba-8b0b-ab0c7d7fcae1
So my config file looks like this
<webHttpBinding>
  <binding name="webBinding" maxBufferSize="5242880" maxReceivedMessageSize="5242880" >
    <readerQuotas maxDepth="5242880" maxStringContentLength="5242880" maxArrayLength="5242880" maxBytesPerRead="5242880" maxNameTableCharCount="5242880"/>
  </binding>
</webHttpBinding>
...
...
...
<endpoint
address="/"
binding="webHttpBinding"
bindingConfiguration="webBinding"
My problem is that I can change this on the server, but there are no WCF config settings on the client, as it's a REST service and I'm just making an HTTP request using the WebClient object.
Any ideas?
So it turns out you need a fully qualified URL in the endpoint address, not a relative one.
Error calling a WCF REST service using JSON. length quota (8192) exceeded
That error wouldn't be happening on the client, since reader quotas are a WCF-only thing and WebClient/HttpWebRequest don't do deserialization themselves or enforce any other kind of quotas.
So I'd say that it's likely you're putting the configuration in the wrong place and it's not getting picked up.
Either that or... you're not using one of the WCF DataContract Serializers manually on the client side, are you?
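To illustrate the distinction being asked about, here is a hedged example; everything except CarConfiguration (taken from the error message above) is an assumption. A plain WebClient download never touches reader quotas, but manually deserializing the response with WCF's JSON reader does, and it defaults to the same 8192 limit unless you raise it:
using System.IO;
using System.Net;
using System.Runtime.Serialization.Json;
using System.Xml;

public static class ClientSideQuotaCheck
{
    public static CarConfiguration Fetch(string url)
    {
        byte[] response;
        using (var webClient = new WebClient())
        {
            // Downloading with WebClient alone never involves XmlDictionaryReaderQuotas;
            // the quota only matters once a WCF reader/serializer parses the payload.
            response = webClient.DownloadData(url);
        }

        // Creating a WCF JSON reader does use the quotas, with 8192 as the default.
        var quotas = new XmlDictionaryReaderQuotas { MaxStringContentLength = 5242880 };
        using (var reader = JsonReaderWriterFactory.CreateJsonReader(new MemoryStream(response), quotas))
        {
            var serializer = new DataContractJsonSerializer(typeof(CarConfiguration));
            return (CarConfiguration)serializer.ReadObject(reader);
        }
    }
}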