I am reading Juval Löwy's Programming WCF Services. It mentions:
In WCF, we have a context in which we have the instance. By default, the lifetime of the context is the same as that of the instance it hosts. We can have separate lifetimes for both.
WCF also allows a context to exist without an associated instance at all.
Why would we release the instance and keep the context empty?
Coincidentally I recently read the chapter you're probably referring to. In his book Löwy explains why this may be useful. First he writes:
WCF also allows a context to exist without an associated instance at all. I call this instance management technique context deactivation.
After mentioning that this is typically done using an OperationBehavior with a specific ReleaseInstanceMode, he goes on to hint at when you could use this:
You typically apply instance deactivation on some but not all service methods, or with different values on different methods. [...] The reason you typically apply it sporadically is that if you were to apply it uniformly, you would end up with a per-call-like service, in which case you might as well have configured the service as per-call.
So you can use this "deactivation" to ensure that only certain methods in a sessionful service behave as if they were part of a per-call service. The MSDN documentation on ReleaseInstanceMode also explains this, in different wording:
Use the ReleaseInstanceMode property to specify when WCF recycles a service object in the course of executing a method. The default behavior is to recycle a service object according to the InstanceContextMode value. Setting the ReleaseInstanceMode property changes that default behavior. In transaction scenarios, the ReleaseInstanceMode property is often used to ensure that old data associated with the service object is cleaned up prior to processing a method call.
Disconnecting the context from the instance makes sense when:
Instance creation is customized or extended. For example, if you are creating your service instance with a custom IInstanceProvider backed by a dependency injection library (for example, Unity), you might want a Unity lifetime manager to handle the service instance's lifetime.
Some but not all operations invalidate an expensive service instance. For example: the service object loads a large, expensive resource as part of its creation. If the service operations called by the client modify or invalidate the resource, it needs to be disposed and reloaded for the next caller. If the operations don't invalidate the resource, the service instance can be reused by the next caller. (In almost all cases there's a better way to solve this problem, but WCF allows it.)
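A minimal sketch of that second scenario, assuming a hypothetical ExpensiveResource type: the read operation leaves the instance alive, while the invalidating operation tells WCF to release the instance (and its resource) after the call.

using System;
using System.ServiceModel;

// Hypothetical stand-in for a resource that is costly to load.
public class ExpensiveResource
{
    public static ExpensiveResource Load() => new ExpensiveResource();
    public string Snapshot() => "resource state";
    public void Invalidate() { /* the cached state is no longer usable */ }
}

[ServiceContract]
public interface IResourceService
{
    [OperationContract]
    string Read();

    [OperationContract]
    void Rebuild();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class ResourceService : IResourceService
{
    // Loaded once per service instance and reused across calls.
    private readonly ExpensiveResource resource = ExpensiveResource.Load();

    // Does not invalidate the resource, so the instance survives for the next call.
    public string Read() => resource.Snapshot();

    // This operation invalidates the resource, so WCF is told to discard the
    // instance after the call; the next call gets a fresh instance and resource.
    [OperationBehavior(ReleaseInstanceMode = ReleaseInstanceMode.AfterCall)]
    public void Rebuild() => resource.Invalidate();
}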
I’m sure there are additional use cases.
I have searched a lot about these questions, here and in a lot of other places, but I'm not getting everything I want to know!
From a Web API project's point of view, when are InTransientScope objects created? The Ninject docs state that such objects are created whenever they are requested, but in a Web API project that handles HTTP requests the instance is created at request start time, so in this regard is it the same as InRequestScope?
In a Web API project, is it okay to use InTransientScope objects knowing that they will never be tracked by Ninject? If Ninject never tracks transient objects, then what is the purpose of this scope, and what happens to such objects after they have been used?
If I declare an object with InRequestScope and that object doesn't implement the IDisposable interface, what happens to such an object after the web request has completed? Will it be treated the same way as an InTransientScope object?
Should different scopes be used for Web API controllers, repositories (that use an InRequestScope session created separately), and application services?
There are two purposes for scopes:
Only allow one object to be created per scope
(optionally) dispose of the object once the scope ends.
As said, the disposal is optional. If the object doesn't implement the IDisposable interface, it is not disposed. There are plenty of use cases for that.
The InTransientScope is the default scope - the one used if you don't specify another. It means that every time a type A is requested from the kernel, one activation takes place and the result is returned. The activation logic is specified by the binding part that follows immediately after the Bind part (To<...>, ToMethod(...), ...).
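For illustration, a minimal sketch of such bindings (the IFoo/Foo and IBar/Bar types are made up):

using Ninject;
using Ninject.Web.Common; // provides InRequestScope() for web projects

public interface IFoo { }
public class Foo : IFoo { }

public interface IBar { }
public class Bar : IBar { }

public static class Bindings
{
    public static void Configure(IKernel kernel)
    {
        // Transient (the default): a new Foo is activated every time IFoo is
        // requested from the kernel; nothing is tracked afterwards.
        kernel.Bind<IFoo>().To<Foo>().InTransientScope();

        // Request-scoped: one Bar per web request, tracked by Ninject and
        // disposed at the end of the request if it implements IDisposable.
        kernel.Bind<IBar>().To<Bar>().InRequestScope();
    }
}

With the transient binding, two kernel.Get<IFoo>() calls within the same request return two different instances; with the request-scoped binding they would return the same one.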
However, this is not necessarily at the time the web request starts and the controller is instantiated. For example, you can use factories or service location (e.g. ResolutionRoot.Get<Foo>()) to create more objects after the controller has been created. To answer your questions in short:
When: when a request takes place, or whenever your code asks Ninject for a type, either directly (IResolutionRoot.Get(..)) or through a factory. As InTransientScope objects are not tracked, they will not be disposed; however, if they are not disposable and the entire request code requests only one IFoo, then there is practically no discernible difference (apart from the slight performance hit due to tracking InRequestScope()-ed objects).
As long as you don't need to make sure that instances are shared and/or disposed, this is completely fine. Once they are no longer used, they will be garbage-collected like any object you would new up yourself.
When the scope ends, Ninject will remove the weak reference to the non-IDisposable object. The object itself will not be touched - just as when it is bound InTransientScope().
That depends on your specific requirements and implementation details. Generally, one needs to make sure that long-scoped objects don't depend on short-scoped objects. For example, a singleton service should not depend on a request-scoped object. As a base rule, everything should be InTransientScope() unless there's a specific reason why it should not be. The reason will dictate which scope to use...
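A small sketch of that pitfall (the CacheService/IUnitOfWork names are made up):

using Ninject;
using Ninject.Web.Common;

public interface IUnitOfWork { }
public class UnitOfWork : IUnitOfWork { }

public class CacheService
{
    private readonly IUnitOfWork unitOfWork;

    // The singleton is constructed once, so it captures the IUnitOfWork from
    // the very first request and keeps it long after that request has ended.
    public CacheService(IUnitOfWork unitOfWork)
    {
        this.unitOfWork = unitOfWork;
    }
}

public static class BadBindings
{
    public static void Configure(IKernel kernel)
    {
        // Short-lived dependency...
        kernel.Bind<IUnitOfWork>().To<UnitOfWork>().InRequestScope();

        // ...injected into a long-lived consumer: a "captive dependency".
        kernel.Bind<CacheService>().ToSelf().InSingletonScope();
    }
}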
I've been trying several different ways to get a simple set of transactions to work for a simple WCF client/server situation. My WCF server has a class-level declaration of the Entity Framework context for my database access, several methods that modify data, and a method to SaveChanges. I'm using Oracle Data Access (ODP.NET).
For instance, I want to make one call from the client that modifies data and then a separate call that saves the changes in the WCF service. It doesn't work. Basically, everything executes fine, but when the second call to save the changes is made, the WCF service no longer has the original context, and therefore no changes are saved (and, consequently, the changes made by the previous call are automatically rolled back).
I'm using a TransactionScope around both operations in my client and calling Complete() when done. My WCF service's OperationContracts use [TransactionFlow(TransactionFlowOption.Mandatory)], and the method implementations use [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]. Finally, my web config is configured with a wsHttpBinding that has the transactionFlow property set to true.
I'm having no luck. No matter what I try, when I hit the service for the follow-up save, the EF context has already been renewed.
This has nothing to do with transactions. A transaction works on a transactional resource, but without calling SaveChanges in the first request there was no transactional resource active: the EF context is not part of the transaction - the database is, and the database is affected only when you call SaveChanges. To make this work you don't need distributed transactions. You need a sessionful service, and you need to store the EF context in the service instance. If the client uses the same client proxy instance to communicate with the service for all requests, the communication will be handled by the same service instance = the same EF context instance, which will remember changes from previous calls.
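A minimal sketch of that sessionful approach, assuming a hypothetical DbContext-style MyEntities context with an Order entity:

using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IOrderService
{
    [OperationContract]
    void UpdateOrderStatus(int orderId, string status);

    [OperationContract]
    void Save();
}

// One service instance per client proxy, so the same EF context sees all calls.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class OrderService : IOrderService
{
    private readonly MyEntities context = new MyEntities();

    public void UpdateOrderStatus(int orderId, string status)
    {
        // The change is tracked by the context but not yet persisted.
        var order = context.Orders.Find(orderId);
        order.Status = status;
    }

    public void Save()
    {
        // Only now is the database (the transactional resource) touched.
        context.SaveChanges();
    }
}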
IMHO this is very bad architecture. Simply don't use it. Instead, expose specialized methods on the WCF service that make the changes and save them. If you need to execute these methods in a transaction with other transactional resources, use a real distributed transaction.
This might be a reason: since you are making the update in a different context, that context doesn't know that the object was updated. You have to tell the context that the object is modified and then call SaveChanges(). See if that helps.
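With EF's DbContext API (EF 4.1 and later), that usually looks something like this (MyEntities and Order are the hypothetical context and entity from above):

using System.Data.Entity;

public void SaveDetachedOrder(MyEntities context, Order order)
{
    // Attach the detached entity to this context and mark it as modified
    // so that SaveChanges() issues an UPDATE for it.
    context.Orders.Attach(order);
    context.Entry(order).State = EntityState.Modified;
    context.SaveChanges();
}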
I have a WCF service application (actually, it uses WCF Web API preview 5) that intercepts each request and extracts several header values passed from the client. The idea is that the 'interceptor' will extract these values and set up a ClientContext object that is then globally available within the application for the duration of the request. The server is stateless, so the context is per-call.
My problem is that the application uses IoC (Unity) for dependency injection, so there is no use of singletons, etc. Any class that needs to use the context receives it via DI.
So, how do I 'dynamically' create a new context object for each request and make sure that it is used by the container for the duration of that request? I also need to be sure that it is completely thread-safe in that each request is truly using the correct instance.
UPDATE
So I realize as I look into the suggestions below that part of my problem is encapsulation. The idea is that the interface used for the context (IClientContext) contains only read-only properties so that the rest of the application code doesn't have the ability to make changes. (And in a team development environment, if the code allows it, someone will inevitably do it.)
As a result, in my message handler that intercepts the request, I can get an instance of the type implementing the interface from the container but I can't make use of it. I still want to only expose a read-only interface to all other code but need a way to set the property values. Any ideas?
I'm considering implementing two interfaces: one that provides read-only access and one that allows me to initialize the instance. Or casting the resolved object to a type that allows me to set the values. Unfortunately, neither approach is foolproof, but unless someone has a better idea, it might be the best I can do.
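For what it's worth, a sketch of the two-interface idea (the property names here are invented):

// Read-only view handed out to application code.
public interface IClientContext
{
    string UserId { get; }
    string CorrelationId { get; }
}

// Writable view used only by the message handler that intercepts the request.
public interface IClientContextWriter : IClientContext
{
    void Initialize(string userId, string correlationId);
}

public class ClientContext : IClientContextWriter
{
    public string UserId { get; private set; }
    public string CorrelationId { get; private set; }

    public void Initialize(string userId, string correlationId)
    {
        UserId = userId;
        CorrelationId = correlationId;
    }
}

Registering both interfaces against the same per-request instance lets the handler resolve IClientContextWriter while everything else sees only IClientContext. It is still not foolproof (a cast defeats it), but it keeps the honest code honest.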
Read Andrew Oakley's blog post on WCF-specific lifetime managers. He creates a UnityOperationContextLifetimeManager:
we came up with the idea to build a Unity lifetime manager tied to WCF's OperationContext. That way, our container objects would live only for the lifetime of the request...
Configure your context class with that lifetime manager and then just resolve it. It should give you an "operation singleton".
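A minimal sketch, assuming the UnityOperationContextLifetimeManager from that blog post and an IClientContext/ClientContext pair like the one above:

using Microsoft.Practices.Unity;

var container = new UnityContainer();

// One ClientContext per WCF operation, courtesy of the custom lifetime manager.
container.RegisterType<IClientContext, ClientContext>(
    new UnityOperationContextLifetimeManager());

// Resolved anywhere during the same operation, this returns the same instance.
var context = container.Resolve<IClientContext>();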
Sounds like you need a Unity LifetimeManager. See this SO question or this MSDN article.
I have recently come across the term Instance Deactivation.
a) What is that?
b) Why do we need it?
c) In what context will it be useful?
I am looking for a simple answer that is easily understandable, if possible with some pseudocode.
Thanks
When a WCF method is invoked, it is passed to a service instance.
Instance Deactivation simply refers to the moment when the WCF system disposes of this instance.
In a Per-Call Service, instance deactivation will occur after each method call.
In a Per-Session Service, instance deactivation will occur when the client calls Close on the proxy, or when the transport-session's inactivity timeout is reached.
In a Singleton Service, instance deactivation will occur when the Service Host is closed.
You can also configure individual service methods to trigger instance deactivation:
[OperationBehavior(ReleaseInstanceMode = ReleaseInstanceMode.AfterCall)]
public void MyMethodWhichTriggersAnAutomaticRelease()
{
// ...
}
As well as this, you can manually trigger a service instance release:
public void MyMethodWhichTriggersAManualRelease()
{
OperationContext.Current.InstanceContext.ReleaseServiceInstance();
}
Juval Löwy has this to say on whether you should manually override the standard instance deactivation mechanisms:
Instance deactivation is an optimization technique, and like all optimization techniques, you should avoid it in the general case. Consider using instance deactivation only after failing to meet both your performance and scalability goals and when careful examination and profiling has proven beyond a doubt that using instance deactivation will improve the situation.
Basically I take it to mean that the instance of the class that the service operation is being called on is not torn down. If you have per-call activation, then a new instance of the service class will be created each time you call an operation on the service. After the method ends, that instance of the class will be disposed.
If you want to improve performance, say at the expense of scalability, then you would not deactivate the instance, and would thus choose a different instance activation scheme.
This MSDN article, Discover Mighty Instance Management Techniques For Developing WCF Apps, together with the link in @SteveCav's answer, provides good references.
I have been trying to get up to speed on Named Pipes this week. The task I am trying to solve with them is this: I have an existing Windows service that is acting as a device driver, funneling data from an external device into a database. Now I have to modify this service and add an optional user front end (on the same machine, using a form of IPC) that can monitor the data as it passes between the device and the DB, as well as send some commands back to the service.
My initial ideas for the IPC were either named pipes or memory-mapped files. So far I have been working through the named pipe idea using the WCF Tutorial: Basic Interprocess Communication. My idea is to set up the Windows service with an additional thread that hosts the WCF named pipe service and use that as a conduit to the internals of my driver.
I have the sample code working; however, I cannot get my head around two issues that I am hoping someone here can help me with:
In the tutorial the ServiceHost is instantiated with a typeof(StringReverser) rather than by referencing a concrete instance. Thus there seems to be no mechanism for the server to interact with the service itself (between the host.Open() and host.Close() lines). Is it possible to create a link between the server and the class that actually implements the service, and pass information between them? If so, how?
If I run a single instance of the server and then run multiple instances of the client, it seems that each client gets a separate instance of the service class. I tried adding some state information to the class implementing the service, and it was only retained within the instance of the named pipe. This is possibly related to the first question, but is there any way to force the named pipes to use the same instance of the class that is implementing the service?
Finally, any thoughts on MMF vs Named Pipes?
Edit - About the solution
As per Tomasr's answer, the solution lies in using the correct constructor to supply a concrete singleton instance that implements the service (ServiceHost Constructor (Object, Uri[])). What I did not appreciate at the time was his reference to ensuring the service class was thread safe. Naively just changing the constructor caused a crash in the server, and that ultimately led me down the path of understanding InstanceContextMode, via this blog entry: Instancecontextmode And Concurrencymode. Setting the correct context nicely finished off the solution.
For (1) and (2) the answer is simple: You can ask WCF to use a singleton instance of your service to handle all requests. Mostly all you need to do is use the alternate ServiceHost constructor that takes an Object instance instead of a type.
Notice, however, that you'll be responsible for making your service class thread safe.
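A minimal sketch of that overload, using the StringReverser service from the tutorial (the endpoint address and name here are made up):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IStringReverser
{
    [OperationContract]
    string ReverseString(string value);
}

// A single instance shared by all clients; ConcurrencyMode.Multiple means the
// class itself must handle its own thread safety.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class StringReverser : IStringReverser
{
    public string ReverseString(string value)
    {
        var chars = value.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}

public static class Program
{
    public static void Main()
    {
        var instance = new StringReverser();

        // The Object overload hosts this exact instance instead of letting WCF
        // construct instances from a Type per call or per session.
        using (var host = new ServiceHost(instance, new Uri("net.pipe://localhost")))
        {
            host.AddServiceEndpoint(typeof(IStringReverser),
                new NetNamedPipeBinding(), "PipeReverse");
            host.Open();

            // Between Open() and Close(), the server can interact with
            // `instance` directly - it is the same object serving all clients.
            Console.WriteLine("Host running; press Enter to exit.");
            Console.ReadLine();
        }
    }
}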
As for (3), it really depends a lot on what you need to do: your performance needs, how many clients you expect at the same time, the amount of data you'll be moving, how long it needs to be available, and so on.