I have a WCF service handling a very large number of requests (thousands per second). Each request contains objects, which are built by the DataContractSerializer during deserialization. My service processes the messages, and the objects are eventually cleaned up by the .NET garbage collector.
The problem is that garbage collections are causing problems for my service (requests occasionally take 100+ milliseconds longer than they should). I need to minimize them, so I am looking for a way to use object pooling. In other words, I want the DataContractSerializer to obtain an object from my object pool (instead of getting one via FormatterServices.GetUninitializedObject), and then, when I am done processing the message, I would release it back to the pool for cleaning and reuse, thereby avoiding thousands of memory allocations a second.
I've seen this is possible with protobuf-net (see "Using protobuf-net, is it possible to deserialize a message without allocating memory?"), and in fact I'm using protobuf elsewhere, but for this particular situation that is not an option.
The DataContractSerializer is sealed and cannot be subclassed, so unfortunately you are not able to remove its call to FormatterServices.GetUninitializedObject.
What you will have to do instead is create your own serializer inheriting from XmlObjectSerializer so that you can fully control instance creation.
The next step is to create a DataContractSerializerOperationBehavior and override the CreateSerializer methods to return your customized serializer.
The last thing to do is remove the default DataContractSerializerOperationBehavior from the endpoint and replace it with the custom one that creates your custom serializer. Carlos Figueira has a post on his blog showing exactly how to do this (go to the section called "Real world scenario: using a new serializer").
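Putting those pieces together, here is a minimal sketch of the plumbing involved. All type names are illustrative; the serializer simply delegates to an inner DataContractSerializer, and the pooled-instance population inside ReadObject is exactly the part you would have to implement yourself:

    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel.Description;
    using System.Xml;

    // A serializer that owns instance creation. Here it delegates everything
    // to an inner DataContractSerializer; ReadObject is the hook where you
    // would pull an instance from your pool and populate it.
    public class PooledSerializer : XmlObjectSerializer
    {
        private readonly DataContractSerializer inner;

        public PooledSerializer(Type type)
        {
            this.inner = new DataContractSerializer(type);
        }

        public override bool IsStartObject(XmlDictionaryReader reader)
        {
            return inner.IsStartObject(reader);
        }

        public override object ReadObject(XmlDictionaryReader reader, bool verifyObjectName)
        {
            // The hook: instead of delegating (which allocates a new object),
            // take an instance from your pool and populate it from the reader.
            return inner.ReadObject(reader, verifyObjectName);
        }

        public override void WriteStartObject(XmlDictionaryWriter writer, object graph)
        {
            inner.WriteStartObject(writer, graph);
        }

        public override void WriteObjectContent(XmlDictionaryWriter writer, object graph)
        {
            inner.WriteObjectContent(writer, graph);
        }

        public override void WriteEndObject(XmlDictionaryWriter writer)
        {
            inner.WriteEndObject(writer);
        }
    }

    // The behavior that hands WCF the custom serializer.
    public class PooledSerializerOperationBehavior : DataContractSerializerOperationBehavior
    {
        public PooledSerializerOperationBehavior(OperationDescription operation)
            : base(operation)
        {
        }

        public override XmlObjectSerializer CreateSerializer(
            Type type, string name, string ns, IList<Type> knownTypes)
        {
            return new PooledSerializer(type);
        }

        public override XmlObjectSerializer CreateSerializer(
            Type type, XmlDictionaryString name, XmlDictionaryString ns, IList<Type> knownTypes)
        {
            return new PooledSerializer(type);
        }
    }

The swap itself then runs over the endpoint's operations (here 'endpoint' is assumed to be the ServiceEndpoint, e.g. inside a custom endpoint behavior or before opening the host):

    foreach (OperationDescription od in endpoint.Contract.Operations)
    {
        od.Behaviors.Remove<DataContractSerializerOperationBehavior>();
        od.Behaviors.Add(new PooledSerializerOperationBehavior(od));
    }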
I have searched a lot about these questions, here and in a lot of other places, but I'm not getting everything I want to know!
From a WebApi project point of view, when are InTransientScope objects created? The Ninject docs state that such objects are created whenever requested, but in a WebApi project that handles HTTP requests, the instance is created at request start time, so in this regard is it the same as InRequestScope?
In a WebApi project, is it okay to use InTransientScope objects knowing that they will never be tracked by Ninject? If Ninject never keeps track of transient objects, then what is the purpose of this scope, and what happens to such objects after they have been used?
If I declare an object with InRequestScope and that object doesn't implement the IDisposable interface, what happens to it after the web request has completed? Will it be treated the same way as an InTransientScope object?
Are different scopes to be used for WebApi controllers, repositories (which use an InRequestScope session that is created separately), and application services?
There are two purposes for scopes:
Only allow one object to be created per scope
(optionally) dispose of the object once the scope ends.
As said, the disposal is optional. If an object doesn't implement the IDisposable interface, it is not disposed. There are plenty of use cases for that.
InTransientScope is the default scope - the one being used if you don't specify another one. It means that every time a type A is requested from the kernel, one activation takes place and the result is returned. The activation logic is specified by the binding part that follows immediately after the Bind part (To<...>, ToMethod(...), ...).
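A small illustration of those bindings (a console-style sketch; the type names are placeholders):

    using Ninject;

    public interface IFoo { }
    public class Foo : IFoo { }

    public class Demo
    {
        public static void Main()
        {
            var kernel = new StandardKernel();

            // Default, i.e. transient: each Get<IFoo>() triggers one
            // activation (construct a new Foo) and returns an untracked result.
            kernel.Bind<IFoo>().To<Foo>(); // same as appending .InTransientScope()

            // The activation logic can also be a method:
            // kernel.Bind<IFoo>().ToMethod(ctx => new Foo());

            var foo1 = kernel.Get<IFoo>();
            var foo2 = kernel.Get<IFoo>(); // a different instance each time

            // In a web project you would instead write, for request scoping
            // (requires the Ninject.Web.Common extension):
            // kernel.Bind<IFoo>().To<Foo>().InRequestScope();
        }
    }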
However, this is not necessarily at the time the web request starts and the controller is instantiated. For example, you can use factories or service location (e.g. ResolutionRoot.Get<Foo>()) to create more objects after the controller has been created. To answer your questions in short:
When: when a request takes place, or whenever your code asks for a type from Ninject, either directly (IResolutionRoot.Get(...)) or through a factory. As InTransientScope objects are not being tracked, they will not be disposed; however, if they are not disposable and the entire request code requests only one IFoo, then practically there is no discernible difference (apart from the slight performance hit due to tracking InRequestScope()-ed objects).
As long as you don't need to make sure that instances are shared and/or disposed, this is completely fine. Once they are no longer being used, they will be garbage-collected like any object you would new up yourself.
When the scope ends, Ninject will remove the weak reference to the non-IDisposable object. The object itself will not be touched - just as when bound InTransientScope().
That depends on your specific requirements and implementation details. Generally, one needs to make sure that long-scoped objects don't depend on short-scoped objects. For example, a singleton service should not depend on a request-scoped object. As a base rule, everything should be InTransientScope() unless there's a specific reason why it should not be. The reason will dictate which scope to use.
I am reading Juval Löwy's Programming WCF Services. It mentions:
In WCF, we have a context in which we have the instance. By default, the lifetime of the context is the same as that of the instance it hosts, but we can have separate lifetimes for both.
WCF also allows a context to exist without an associated instance at all.
Why would we release the instance and keep the context empty?
Coincidentally I recently read the chapter you're probably referring to. In his book Löwy explains why this may be useful. First he writes:
WCF also allows a context to exist without an associated instance at all. I call this instance management technique context deactivation.
After mentioning this is typically done using an OperationBehavior with a specific ReleaseInstanceMode, he goes on and hints on when you could use this:
You typically apply instance deactivation on some but not all service methods, or with different values on different methods. [...] The reason you typically apply it sporadically is that if you were to apply it uniformly, you would end up with a per-call-like service, in which case you might as well have configured the service as per-call.
So you can use this "deactivation" to ensure that only certain methods in a sessionful service behave as if they were part of a per-call service. The MSDN documentation for ReleaseInstanceMode explains this as well, in a different wording:
Use the ReleaseInstanceMode property to specify when WCF recycles a service object in the course of executing a method. The default behavior is to recycle a service object according to the InstanceContextMode value. Setting the ReleaseInstanceMode property changes that default behavior. In transaction scenarios, the ReleaseInstanceMode property is often used to ensure that old data associated with the service object is cleaned up prior to processing a method call.
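For example, applying it to a single method of a sessionful service (the contract and names below are made up for illustration):

    using System.ServiceModel;

    [ServiceContract(SessionMode = SessionMode.Required)]
    public interface IOrderService
    {
        [OperationContract]
        void AddItem(string item);

        [OperationContract]
        void SubmitOrder();
    }

    public class OrderService : IOrderService
    {
        public void AddItem(string item) { /* uses per-session state */ }

        // After this call completes, WCF releases (and disposes) the current
        // instance while keeping the context alive; the next call on the same
        // session is served by a fresh instance.
        [OperationBehavior(ReleaseInstanceMode = ReleaseInstanceMode.AfterCall)]
        public void SubmitOrder() { /* ... */ }
    }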
Disconnecting the context from the instance makes sense when:
Instance creation is customized/extended. For example, if you are creating your service instances with a custom IInstanceProvider backed by a dependency injection library (for example Unity), you might want a Unity lifetime manager to handle the service instance lifetime (a sketch follows below).
Some but not all operations result in an expensive service instance being invalidated. For example: the service object loads a large, expensive resource as part of its creation. If the service operations called by the client modify or invalidate the resource, it needs to be disposed and reloaded for the next caller. If the operations don't invalidate the resource, the service instance can be reused by the next caller. (In almost all cases there's a better way to solve this problem, but WCF allows it.)
I’m sure there are additional use cases.
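For the first bullet, here is a minimal sketch of what such a custom IInstanceProvider could look like; the container interface and service type are hypothetical stand-ins for whatever your DI library provides:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;

    // Hypothetical container abstraction (e.g. wrapping a Unity container).
    public interface ILifetimeContainer
    {
        object Resolve(Type serviceType);
        void Release(object instance);
    }

    public class ContainerInstanceProvider : IInstanceProvider
    {
        private readonly ILifetimeContainer container;
        private readonly Type serviceType;

        public ContainerInstanceProvider(ILifetimeContainer container, Type serviceType)
        {
            this.container = container;
            this.serviceType = serviceType;
        }

        public object GetInstance(InstanceContext instanceContext)
        {
            return GetInstance(instanceContext, null);
        }

        public object GetInstance(InstanceContext instanceContext, Message message)
        {
            // The container, not the WCF context, now owns the lifetime.
            return container.Resolve(serviceType);
        }

        public void ReleaseInstance(InstanceContext instanceContext, object instance)
        {
            // Called when WCF releases the instance (e.g. via context
            // deactivation); the container decides whether to dispose or reuse.
            container.Release(instance);
        }
    }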
I'm in the process of designing my first "proper" WCF service and I'm trying to get my head around how to best handle versioning of the service.
In older ASMX web services, I would create a MethodNameRequest and a MethodNameResponse object for each web service method.
A request object would really just be a POCO wrapper around what would typically be in the method parameters. A response object might typically inherit from a base response object that has information about any errors.
Reading about WCF and how IExtensibleDataObject, FaultContractAttribute and namespacing work, it seems that I can revert to using standard parameters (string, int, etc.) in my method signatures, and if the service is versioned, then ServiceContract inheritance can provide this versioning.
I've been looking into http://msdn.microsoft.com/en-us/library/ms731060.aspx and the articles linked from it, but I was just looking for a bit of clarification.
Am I correct in thinking that dispensing with the Request/Response objects is the better approach for WCF versioning?
EDIT: I just found this article, which suggests using explicit request/response objects: http://www.dasblonde.net/2006/01/05/VersioningWCFServiceContracts.aspx
I don't agree that dispensing with Request/Response objects is the way to go.
There are obvious benefits of coding with messages:
you can reuse them, which avoids passing 5 ints and 3 strings to a number of different methods.
the properties are named and so can be reliably understood, whereas a parameter that is passed by value through multiple tiers could be confused, and so on.
they can be proper objects rather than just data containers, if you choose - containing private methods, etc.
But you are really asking about versioning. Don't forget that you can version the messages within your service contracts. The classes in your assembly can have the same name provided they are in different .NET namespaces (e.g. v1.Request and v2.Request), and they can both implement a required interface or inherit from some base object.
They also need to be versioned for your service consumers, which can be done with XML namespaces; I've typically put the service contracts (the operations) in a namespace like http://myapp.mydomain/v1 and the messages (the request and response objects) in http://myapp.mydomain/v1/messages.
One gotcha with this approach is that if you have an operation, call it Submit, in the http://myapp.mydomain/v1 namespace, then by convention/default the SOAP objects SubmitRequest and SubmitResponse will also exist in the same namespace (I don't remember what the run-time exception is, but it confused me for a while). The resolution is to put the message objects in another namespace, as I've described above.
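As a concrete sketch of that layout (the service and member names are invented for illustration):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [ServiceContract(Namespace = "http://myapp.mydomain/v1")]
    public interface IOrderService
    {
        [OperationContract]
        SubmitResponse Submit(SubmitRequest request);
    }

    // The messages live in their own XML namespace so they don't collide with
    // the SubmitRequest/SubmitResponse names the operation generates by default.
    [DataContract(Namespace = "http://myapp.mydomain/v1/messages")]
    public class SubmitRequest
    {
        [DataMember]
        public string OrderId { get; set; }
    }

    [DataContract(Namespace = "http://myapp.mydomain/v1/messages")]
    public class SubmitResponse
    {
        [DataMember]
        public bool Succeeded { get; set; }
    }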
See "Versioning WCF Services: Part I" and "Versioning WCF Services: Part II".
We are using a WCF Data Service to broker our data server side, and give third parties easy OData access to our data. The server side of things has been relatively easy. The client side, on the other hand, is giving us fits.
We are converting from regular Entity Framework to Data Services, and we've created an assembly which contains the generated client objects that talk to the data service (via a Service Reference). Those classes are partial, so we've added some logic and extended properties to them. This all works great.
The issue we are having is that we need to process our objects at save time, because they need to do some advanced serialization before they are sent over the wire. The DataServiceContext class contains two events: WritingEntity and ReadingEntity. The ReadingEntity event actually happens at the correct time for us (after the object has been deserialized). The WritingEntity event happens at the WRONG time for us (after the object has already been serialized).
Is there any way to catch an object before it's written to the request, so that we can call a method on the entity that is about to be written?
Obviously we could just loop through the Entities list, looking for any entity that is not in the Unchanged or Deleted state, and call the appropriate method there... but this would require adding special code every time I want to call SaveChanges on the context. This may be what we need to do, but it would be nice if there were a way to catch the entities before they are written to XML for sending to the service.
Currently there's no hook in the DataServiceContext to do what you want. The closest I can think of is the approach you suggested: walking all the entities and finding those which were modified. You could do this in your own SaveChanges-like method on the context class (which is also partial).
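A minimal sketch of that idea, assuming the generated context is called MyDataContext and your partial entity classes implement a hypothetical PrepareForSave hook:

    using System.Data.Services.Client;

    // Hypothetical hook implemented by your partial entity classes.
    public interface IPreparableForSave
    {
        void PrepareForSave();
    }

    public partial class MyDataContext // the generated DataServiceContext
    {
        public DataServiceResponse SaveChangesWithProcessing()
        {
            foreach (EntityDescriptor descriptor in this.Entities)
            {
                if (descriptor.State != EntityStates.Unchanged &&
                    descriptor.State != EntityStates.Deleted)
                {
                    var entity = descriptor.Entity as IPreparableForSave;
                    if (entity != null)
                    {
                        entity.PrepareForSave(); // your advanced serialization
                    }
                }
            }
            return this.SaveChanges();
        }
    }

Calling SaveChangesWithProcessing() everywhere instead of SaveChanges() at least keeps the special-casing in one place.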
I miss the .NET Remoting days, when I could just send an object over the wire and it would work on both sides of the middle layer without much work. Here's why:
I've been given an assignment. I'm building a logic/data abstraction layer (stupid PCI compliance) so that we can move our database servers off of the corporate network into a protected network. I did a project like this once under .NET 2.0 with Remoting: I built the object on the middleware layer and sent it to the client, and the client had my .NET object to work with. But WCF requires serialization to send stuff up and down the pipe, and serialization takes away my fancy methods that do incredible things with the fields I have in place.
I've come up with two different strategies to get around this: (1) move the methods from the class itself to a static utility class, or (2) "deserialize" the data on the client side and rebuild the native object with data from the serialized object, for example:
nativeObject.Name = serializedObject.Name; // repeated for every field coming off the wire
The flaw of the second method is that I have to re-serialize the object before I can send it back to the middleware layer:
serializedObject.Name = nativeObject.Name; // and everything copied back before the round trip
Both methods work, but they make writing objects take much longer than it should because of the whole serialization mess the middle layer is causing. I would go back to .NET Remoting, but the architect says he wants this abstraction layer done in WCF because (my words, not his) it's new and sexy.
So how does one go about working with native .NET objects on both sides of a WCF connection... without writing 1,000 lines of glue code?
You can generate a proxy and tell it to use a specific set of classes instead of creating new ones. I believe this is done using the /r parameter of svcutil.exe. If you're using the IDE (VS2008), you can do this when adding a service reference: click Advanced, and make sure "Reuse types in referenced assemblies" is selected (which I think is the default).
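For example, from the command line (the service URL and assembly name here are placeholders):

    svcutil.exe /reference:MyContracts.dll /out:ServiceProxy.cs http://myserver/MyService.svc?wsdl

With /reference (short form /r), types found in MyContracts.dll are reused in the generated proxy instead of being regenerated, so both sides of the wire share the same classes.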