I am facing a problem when using WCF to fetch a large amount of data, and I do not want to keep increasing maxReceivedMessageSize="65536". Is there an alternative, or can I achieve this using streaming? If yes, then how?
Please suggest.
Yes, you can stream data in WCF, but WCF has some limitations when working in Streamed mode. If you don't mind handling it yourself, you might consider implementing a method that returns chunks of data and calling it multiple times, as sketched below.
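For example, a chunked contract might look something like this (just a sketch; the contract and method names are illustrative, not from the question):

using System.ServiceModel;

[ServiceContract]
public interface IChunkedDataService
{
    // The client first asks how many chunks there are for a given chunk size...
    [OperationContract]
    int GetChunkCount(int chunkSize);

    // ...then pulls them one at a time; each chunk stays under the default quotas.
    [OperationContract]
    byte[] GetChunk(int chunkIndex, int chunkSize);
}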
Otherwise, you can enable Streamed mode in configuration like this:
<basicHttpBinding>
  <binding name="HttpStreaming" maxReceivedMessageSize="67108864"
           transferMode="Streamed"/>
</basicHttpBinding>
<!-- an example customBinding using Http and streaming -->
<customBinding>
  <binding name="Soap12">
    <textMessageEncoding messageVersion="Soap12WSAddressing10" />
    <httpTransport transferMode="Streamed" maxReceivedMessageSize="67108864"/>
  </binding>
</customBinding>
Then return a Stream object from your contract method. This way the data is transferred as the receiver reads the stream.
[ServiceContract]
public interface IRemoteFileService
{
    [OperationContract]
    Stream OpenFile(string serverPath);
}
If your data is already in a stream, as when you transfer a file, you just open the stream and return it. Otherwise, you can use a MemoryStream and a DataContractSerializer to serialize almost any object tree.
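As a rough sketch of the MemoryStream approach (the method and helper names here are illustrative):

using System.IO;
using System.Runtime.Serialization;

public Stream GetLargeObject()
{
    var data = LoadObjectTree(); // hypothetical: builds the object graph to send
    var stream = new MemoryStream();
    var serializer = new DataContractSerializer(data.GetType());
    serializer.WriteObject(stream, data);
    stream.Position = 0; // rewind so WCF reads from the beginning
    return stream;
}

The Position reset matters: without it, WCF starts reading at the end of the stream and the client receives nothing.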
For details, check this and this.
While this sounds simple, there are complications and limitations in Streamed mode. If you just need a simple way to bypass the size limits for a big object transfer, consider sending the object in parts over multiple calls.
I'm using NetMessagingBinding on an IIS-hosted WCF service to consume messages published on a Windows Server Service Bus Topic.
From my understanding there is no limit on message size on Topics for Windows Server Service Bus, but nevertheless I'm getting an error deserializing a message from the subscription:
System.ServiceModel.Dispatcher.NetDispatcherFaultException: (...)
The maximum string content length quota (8192) has been exceeded while reading XML data.
This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.'.
Please see InnerException for more details. ---> System.Runtime.Serialization.SerializationException: There was an error deserializing the object of type [Type].
The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. ---> System.Xml.XmlException:
The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.
The way I see it, there is no setting I can change in WCF's web.config to raise the maximum string content length. The only property that seems related is MaxBufferPoolSize, but it is not exposed through the web.config.
The binding configuration used is:
<bindings>
  <netMessagingBinding>
    <binding name="messagingBinding"
             closeTimeout="00:03:00" openTimeout="00:03:00"
             receiveTimeout="00:03:00" sendTimeout="00:03:00"
             prefetchCount="-1" sessionIdleTimeout="00:01:00">
      <transportSettings batchFlushInterval="00:00:01" />
    </binding>
  </netMessagingBinding>
</bindings>
This issue can also be solved by using a custom binding with the netMessagingTransport. That way, the readerQuotas node can be used to define the reader quotas.
<customBinding>
  <binding name="sbBindingConfiguration" sendTimeout="00:01:00" receiveTimeout="00:01:00" openTimeout="00:01:00">
    <binaryMessageEncoding>
      <readerQuotas maxDepth="100000000" maxStringContentLength="100000000"
                    maxArrayLength="100000000" maxBytesPerRead="100000000" maxNameTableCharCount="100000000"/>
    </binaryMessageEncoding>
    <netMessagingTransport manualAddressing="false" maxBufferPoolSize="100000" maxReceivedMessageSize="100000000">
      <transportSettings batchFlushInterval="00:00:00"/>
    </netMessagingTransport>
  </binding>
</customBinding>
Please refer to this post for more details on how to use the custom binding.
From the error, it seems this is a WCF-level error and not a Service Bus one. Have you tried raising the maximum message size? This thread has info on it, but basically you need to set up something like the following in your binding's configuration in the web.config:
<binding name="yourBinding"
maxReceivedMessageSize="10000000"
maxBufferSize="10000000"
maxBufferPoolSize="10000000">
<readerQuotas maxDepth="32"
maxArrayLength="100000000"
maxStringContentLength="100000000"/>
</binding>
NetMessagingBinding currently does not allow one to change MaxStringContentLength through the XML configuration.
A solution that worked for me was to create a message formatter behaviour extension by implementing the IDispatchMessageFormatter interface.
The extension can then be used either by:
creating an attribute that can be used in code to identify which operation contracts will use the message formatter behaviour
public class MessageFormatterExtensionBehaviorAttribute : Attribute, IOperationBehavior
{
    (...)
    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
    {
        dispatchOperation.Formatter = new MessageFormatterExtension();
    }
    (...)
}
creating a configuration element that exposes the custom behaviour.
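For reference, a minimal shape for such a formatter could look like the sketch below (not the poster's actual code). One common pattern is to wrap the formatter WCF would otherwise install, in which case ApplyDispatchBehavior above would pass the existing dispatchOperation.Formatter into the constructor and the extension would only override the quota-sensitive path:

using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class MessageFormatterExtension : IDispatchMessageFormatter
{
    // The formatter WCF would have used, captured in ApplyDispatchBehavior.
    private readonly IDispatchMessageFormatter inner;

    public MessageFormatterExtension(IDispatchMessageFormatter inner)
    {
        this.inner = inner;
    }

    public void DeserializeRequest(Message message, object[] parameters)
    {
        // Custom reading with relaxed XmlDictionaryReaderQuotas would go here;
        // this sketch simply delegates to the default formatter.
        inner.DeserializeRequest(message, parameters);
    }

    public Message SerializeReply(MessageVersion messageVersion, object[] parameters, object result)
    {
        return inner.SerializeReply(messageVersion, parameters, result);
    }
}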
I have a use case in my ASP.NET MVC app in which I need to save a collection of about 15k records (from a CSV file upload). I'm putting it through CSLA business objects in order to validate the uploaded data with business rules.
I'm making use of the WCF DataPortal. When save is called I get this error after about 30s to 45s:
System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at [my dataportal host address]/WcfPortal.svc that could accept the message.
I have determined that if I break the collection down into smaller chunks and call save on each of those chunks, the use case completes without a problem.
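Roughly, the chunked workaround looks like this (a sketch with hypothetical names, not my exact code):

using System.Linq;

// Split the imported records into batches and save each batch through the
// data portal separately, so each call stays within the message size limits.
const int batchSize = 1000;
for (int offset = 0; offset < importedRecords.Count; offset += batchSize)
{
    var batch = MyRecordList.NewList(); // hypothetical CSLA list factory
    foreach (var record in importedRecords.Skip(offset).Take(batchSize))
        batch.Add(record);
    batch = batch.Save(); // CSLA returns the post-save instance
}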
I have configured my service to use the max values as follows (as recommended in Rocky's book), and increased the sendTimeout based on other guidance:
<binding name="wsHttpBinding_IWcfPortal" maxReceivedMessageSize="2147483647" sendTimeout="05:00:00">
<readerQuotas maxBytesPerRead="2147483647" maxArrayLength="2147483647" maxStringContentLength="2147483647" maxNameTableCharCount="2147483647" maxDepth="2147483647"/>
</binding>
Now I KNOW for a fact that my data does not exceed the 2147483647 size limit. Besides, if it did, I would expect a more meaningful error message indicating this (as I got when the size limits were at their defaults).
I have turned on WCF logging/tracing, which reveals nothing. This error seems to be some communication-level error that gets hit before the WCF stack comes into the picture.
Can anyone advise why I would be getting this error when trying to save this large collection?
As WCF has changed over the years, other configurable limits have been added. The latest info on WCF configuration for the data portal is available in two places:
The data portal FAQ page
The Using CSLA 4: Data Portal Configuration ebook
My ASP.NET MVC3 application uses Ninject to instantiate service instances through a wrapper. The controller's constructor has an IMyService parameter and the action methods call myService.SomeRoutine(). The service (WCF) is accessed over SSL with a wsHttpBinding.
I have a search routine that can return so many results that it exceeds the maximum I have configured in WCF (Maximum number of items that can be serialized or deserialized in an object graph). When this happens, the application pools for both the service and the client grow noticeably and remain bloated well past the end of the request.
I know that I can restrict the number of results or use DTOs to reduce the amount of data being transmitted. That said, I want to fix what appears to be a memory leak.
Using CLR Profiler, I see that the bulk of the heap is used by the following:
System.RunTime.IOThreadTimer.TimerManager
System.RunTime.IOThreadTimer.TimerGroup
System.RunTime.IOThreadTimer.TimerQueue
System.ServiceModel.Security.SecuritySessionServerSettings
System.ServiceModel.Channels.SecurityChannelListener
System.ServiceModel.Channels.HttpsChannelListener
System.ServiceModel.Channels.TextMessageEncoderFactory
System.ServiceModel.Channels.TextMessageEncoderFactory.TextMessageEncoder
System.Runtime.SynchronizedPool
System.Runtime.SynchronizedPool.Entry[]
...TextMessageEncoderFactory.TextMessageEncoder.TextBufferedMessageWriter
System.Runtime.SynchronizedPool.GlobalPool
System.ServiceModel.Channels.BufferManagerOutputStream
System.Byte[][]
System.Byte[] (92%)
In addition, if I modify the search routine to return an empty list (while the NHibernate stuff still goes on in the background - verified via logging), the application pool sizes remain unchanged. If the search routine returns significant results without an exception, the application pool sizes remain unchanged. I believe the leak occurs when the list of objects is serialized and results in an exception.
I upgraded to the newest Ninject and I used log4net to verify that the service client was closed or aborted depending on its state (and the state was never faulted). The only thing I found interesting was that the service wrapper was being finalized and not explicitly disposed.
I'm having difficulty troubleshooting this to find out why my application pools aren't releasing memory in this scenario. What else should I be looking at?
UPDATE: Here's the binding...
<wsHttpBinding>
<binding name="wsMyBinding" closeTimeout="00:01:00" openTimeout="00:01:00"
receiveTimeout="00:02:00" sendTimeout="00:02:00" bypassProxyOnLocal="false"
transactionFlow="false" hostNameComparisonMode="StrongWildcard"
maxBufferPoolSize="999999" maxReceivedMessageSize="99999999"
messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="false"
allowCookies="false">
<readerQuotas maxDepth="90" maxStringContentLength="99999"
maxArrayLength="99999999" maxBytesPerRead="99999"
maxNameTableCharCount="16384" />
<reliableSession enabled="false" />
<security mode="TransportWithMessageCredential">
<message clientCredentialType="UserName" />
</security>
</binding>
</wsHttpBinding>
UPDATE #2: Here is the Ninject binding, but more interesting is the error message. My wrapper wasn't setting MaxItemsInObjectGraph properly, so it used the default. Once I set it, the leak went away. It seems the client and service keep the serialized/deserialized data in memory when the service sends serialized data to the client and the client rejects it because it exceeds MaxItemsInObjectGraph.
Ninject Binding:
Bind<IMyService>().ToMethod(x =>
new ServiceWrapper<IMyService>("MyServiceEndpoint")
.Channel).InRequestScope();
Error Message:
The InnerException message was 'Maximum number of items that can be
serialized or deserialized in an object graph is '65536'
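For anyone who needs to set this from code rather than config, the quota lives on the standard DataContractSerializerOperationBehavior of each operation. A sketch ("MyServiceEndpoint" being the endpoint name used above):

using System.ServiceModel;
using System.ServiceModel.Description;

var factory = new ChannelFactory<IMyService>("MyServiceEndpoint");
foreach (OperationDescription op in factory.Endpoint.Contract.Operations)
{
    // The DataContractSerializer behavior owns the object-graph quota.
    var behavior = op.Behaviors.Find<DataContractSerializerOperationBehavior>();
    if (behavior != null)
        behavior.MaxItemsInObjectGraph = int.MaxValue; // raise from the 65536 default
}
IMyService channel = factory.CreateChannel();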
This doesn't actually fix the memory leak, so I am still curious as to what has been causing it, if anyone has any ideas.
How are you handling your proxy client creation and disposal?
I've found the most common cause of WCF-related memory leaks is mishandling WCF proxy clients.
I suggest, at the very least, wrapping your clients in a using block, something like this:
using (var client = new WhateverProxyClient())
{
// your code goes here
}
This ensures that the client is properly closed and disposed of, freeing memory.
This method is a bit controversial, though, because Dispose calls Close, which can itself throw if the channel is faulted. Even so, it should remove the possibility of leaking memory from client creation.
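A common alternative that avoids the problem is the close-or-abort pattern, sketched here with the same placeholder proxy type:

// CommunicationException lives in System.ServiceModel.
var client = new WhateverProxyClient();
try
{
    // your calls go here
    client.Close(); // normal path: close the channel gracefully
}
catch (CommunicationException)
{
    client.Abort(); // channel is unusable; Abort instead of Close
    throw;
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}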
Take a look here for more on this topic.
I have a WCF REST Service which accepts a JSON string
One of the parameters is a large string of numbers
This causes the following error, which is visible through tracing and the Service Trace Viewer (SvcTraceViewer.exe):
There was an error deserializing the object of type CarConfiguration. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.
Now, I've read all sorts of articles advising how to rectify this. All of them recommend increasing various config settings on the server and client, e.g.:
Error Serializing String in WebService call
http://bloggingabout.net/blogs/ramon/archive/2008/08/20/wcf-and-large-messages.aspx
http://social.msdn.microsoft.com/Forums/en/wcf/thread/f570823a-8581-45ba-8b0b-ab0c7d7fcae1
So my config file looks like this
<webHttpBinding>
<binding name="webBinding" maxBufferSize="5242880" maxReceivedMessageSize="5242880" >
<readerQuotas maxDepth="5242880" maxStringContentLength="5242880" maxArrayLength="5242880" maxBytesPerRead="5242880" maxNameTableCharCount="5242880"/>
</binding>
</webHttpBinding>
...
<endpoint address="/"
          binding="webHttpBinding"
          bindingConfiguration="webBinding" />
My problem is that I can change this on the server, but there are no WCF config settings on the client, as it's a REST service and I'm just making an HTTP request using the WebClient object.
Any ideas?
So it turns out you need a fully qualified URL in the endpoint address, not a relative one.
Error calling a WCF REST service using JSON. length quota (8192) exceeded
That error wouldn't be happening on the client, since reader quotas are a WCF-only thing and WebClient/HttpWebRequest don't do deserialization themselves or enforce any other kind of quotas.
So I'd say that it's likely you're putting the configuration in the wrong place and it's not getting picked up.
Either that or... you're not using one of the WCF DataContract Serializers manually on the client side, are you?
We have an application where we wish to expose a large number of database entities and some business logic. Each entity will require the ability to Read, Add, and Update; at this point we do not expect to allow deletion.
The software we build is used in a wide range of businesses, some of which are multi-tenanted bureau operations; some of our clients also use this approach to keep separate databases for financial reasons.
We wish to minimize the number of endpoints that need to be maintained. At the moment only 3 tables are exposed as WCF interfaces, each with 6 attached methods. This is manageable, but if an operation has 50 databases that suddenly becomes 150 endpoints; worse, if we have 50 tables exposed, that becomes 2,500 endpoints.
Does anyone have a suggestion on how we could design our system so that we still have a simple entity model such as Job.Add(var1) or IList<Job> jobs = Job.GetSelected("sql type read"), without all these endpoints?
WCF Data Services allows you to expose your data in a RESTful manner using the Open Data Protocol (OData). This was formerly called ADO.NET Data Services, and before that, Astoria. Any IQueryable collection can be exposed. The way shown in most of the examples is to use the Entity Framework; however, there are examples showing usage with NHibernate and other data access technologies. OData is a self-describing API based on Atom-Pub with some custom extensions. With a minimal amount of code you can expose your entire database in a well-defined format. That's the easy part.
In order to implement multi-tenancy, you can create query interceptors in the WCF Data Services application. The number of interceptors and the complexity of the code you write will depend upon your security model and requirements. Looking at something like T4 templates or CodeSmith to generate the interceptor methods from your database schema may be a way to avoid lots of repetitive manual coding.
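As a rough sketch, a query interceptor that scopes an entity set to the current tenant might look like this (the entity, context, and helper names are hypothetical):

using System;
using System.Data.Services;
using System.Linq.Expressions;

public class JobDataService : DataService<MyEntities> // MyEntities: hypothetical EF context
{
    // Runs for every query against the Jobs entity set and appends this filter.
    [QueryInterceptor("Jobs")]
    public Expression<Func<Job, bool>> OnQueryJobs()
    {
        var tenantId = GetCurrentTenantId(); // hypothetical: resolved from the caller's credentials
        return job => job.TenantId == tenantId;
    }

    private int GetCurrentTenantId() { /* look up the authenticated tenant */ return 0; }
}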
The link I provided has a lot of information and tutorials on WCF Data Services and would be a good place to start to see if it would meet your needs. I have been looking at WCF Data Services for a similar problem (multi-tenancy) and would love to hear how you eventually implement your solution.
It seems like you could pass the "identity" to every query and take it into account. This would mean that every record in your "Job" table would need a reference to the owning "identity", but that should not be much of a problem.
Just make sure that every query validates the "identity", and you should be OK.
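A minimal sketch of that idea (the types and property names are hypothetical):

using System.Collections.Generic;
using System.Linq;

public class JobService
{
    private readonly IQueryable<Job> jobs; // backing store, e.g. an ORM query root

    public JobService(IQueryable<Job> jobs) { this.jobs = jobs; }

    // Every read is scoped to the caller's identity; writes validate it the same way.
    public IList<Job> GetJobs(int ownerIdentityId)
    {
        return jobs.Where(j => j.OwnerId == ownerIdentityId).ToList();
    }
}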
If I understand your question correctly, you still need unique endpoints, but you can have a single service behavior and binding that all your endpoints reference.
Create a default endpoint:
<behaviors>
<serviceBehaviors>
<behavior name="MyService.DefaultBehavior">
<serviceMetadata httpGetEnabled="true" />
<serviceDebug includeExceptionDetailInFaults="true" />
</behavior>
</serviceBehaviors>
</behaviors>
Set your default binding:
<bindings>
<wsHttpBinding>
<binding name="DefaultBinding">
<security mode="None">
<transport clientCredentialType="None"/>
</security>
</binding>
</wsHttpBinding>
</bindings>
Have all service reference point to the default behavior and binding:
<service behaviorConfiguration="MyService.DefaultBehavior"
name="MyService.Customer">
<endpoint address="" binding="wsHttpBinding" bindingConfiguration="DefaultBinding"
contract="MyService.ICustomer">
<identity>
<dns value="localhost" />
</identity>
</endpoint>
<endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>
Each time you add a service, it's a simple config entry.
With Apache, you can use a fairly simple set of URL rewriting rules to map an arbitrary set of DB tables and their corresponding endpoints to a single endpoint with a parameter.
For example, to map $ROOT/table_name/column_name to $ROOT/index.php?tn=table_name&cn=column_name, you could add a rule like this to $ROOT/.htaccess:
RewriteRule ^([a-zA-Z0-9_]+)/([a-zA-Z0-9_]+)/?$ index.php?tn=$1&cn=$2 [QSA,L]
Then you only need to maintain $ROOT/index.php (which of course can generate the appropriate HTTP status codes for nonexistent tables and/or columns).
Providing Multi-Tenancy, Without A Bazillion End Points
One way is to go with a REST-style WCF service that can use usernames/passwords to distinguish which client you are working with, and thus be able to select internally which DB to connect to. WCF gives you the UriTemplate, which allows you to map parts of the URL to the parameters in your web methods:
HTTP GET Request: http://www.mysite.com/table1/(row Id)
HTTP PUT Request: http://www.mysite.com/table1/(row Id)/(field1)/(field2)
HTTP POST Request: http://www.mysite.com/table1/(row Id)/(field1)/(field2)
HTTP DELETE Request: http://www.mysite.com/table1/(row Id)
You can add other Uri Templates for more tasks as well, such as the following:
HTTP GET Request: http://www.mysite.com/table1/recentitems/(number of most recent items)
HTTP GET Request: http://www.mysite.com/table1/cancelPendingOrders/(user Id)
Who's Using My Service?
By requiring clients to supply a username and password, you can map each client to a specific DB. And by using the UriTemplate pattern /{tableName}/{operation}/{params...}, you could then use code in your web service to execute the DB procedures given the table, operation, and params.
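A contract along those lines might look like the following sketch (operation and parameter names are illustrative):

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Web;

[ServiceContract]
public interface ITableService
{
    // GET /{tableName}/{id}: the URL segments bind to the string parameters.
    [OperationContract]
    [WebGet(UriTemplate = "/{tableName}/{id}")]
    Message GetRow(string tableName, string id);

    // DELETE /{tableName}/{id}
    [OperationContract]
    [WebInvoke(Method = "DELETE", UriTemplate = "/{tableName}/{id}")]
    void DeleteRow(string tableName, string id);
}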
Wrapping It Up
Your web configuration wouldn't even need to be altered much. The following article series is a great place to learn about REST-style web services, which I believe fits what you need: http://www.robbagby.com/rest/rest-in-wcf-blog-series-index/