I have been trying to get up to speed on named pipes this week. The task I am trying to solve with them: I have an existing Windows service acting as a device driver that funnels data from an external device into a database. Now I have to modify this service and add an optional user front end (on the same machine, using some form of IPC) that can monitor the data as it passes between the device and the DB, as well as send some commands back to the service.
My initial ideas for the IPC were either named pipes or memory mapped files. So far I have been working through the named pipe idea using the WCF Tutorial Basic Interprocess Communication. My idea is to set the Windows service up with an additional thread that implements the WCF named pipe service and use that as a conduit to the internals of my driver.
I have the sample code working; however, I cannot get my head around two issues that I am hoping someone here can help me with:
In the tutorial the ServiceHost is instantiated with typeof(StringReverser) rather than by referencing a concrete class. Thus there seems to be no mechanism for the server to interact with the service itself (between the host.Open() and host.Close() lines). Is it possible to create a link between the server and the class that actually implements the service, and pass information between them? If so, how?
If I run a single instance of the server and then run multiple instances of the client, it seems that each client gets a separate instance of the service class. I tried adding some state information to the class implementing the service and it was only retained within that instance of the named pipe. This is possibly related to the first question, but is there any way to force the named pipes to use the same instance of the class that is implementing the service?
Finally, any thoughts on MMF vs Named Pipes?
Edit - About the solution
As per Tomasr's answer, the solution lies in using the correct constructor in order to supply a concrete singleton class that implements the service (ServiceHost Constructor (Object, Uri[])). What I did not appreciate at the time was his reference to ensuring the service class is thread safe. Naively just changing the constructor caused a crash in the server, and that ultimately led me down the path of understanding InstanceContextMode from this blog entry: InstanceContextMode and ConcurrencyMode. Setting the correct context nicely finished off the solution.
For (1) and (2) the answer is simple: You can ask WCF to use a singleton instance of your service to handle all requests. Mostly all you need to do is use the alternate ServiceHost constructor that takes an Object instance instead of a type.
Notice, however, that you'll be responsible for making your service class thread safe.
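For illustration, here is a minimal sketch using the StringReverser contract from the tutorial you linked (the contract, hosting code and addresses are assumptions; adapt them to your driver):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IStringReverser
{
    [OperationContract]
    string ReverseString(string value);
}

// InstanceContextMode.Single makes WCF route every client through this one
// instance; ConcurrencyMode.Multiple means you handle thread safety yourself.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class StringReverser : IStringReverser
{
    private readonly object _sync = new object();

    // State shared between the driver thread and all pipe clients.
    public string LastRequest { get; private set; }

    public string ReverseString(string value)
    {
        lock (_sync)
        {
            LastRequest = value;
            char[] chars = value.ToCharArray();
            Array.Reverse(chars);
            return new string(chars);
        }
    }
}

public static class PipeServer
{
    public static void Run()
    {
        // Pass the instance (not typeof(StringReverser)) so the hosting code
        // keeps a reference it can use between Open() and Close().
        var instance = new StringReverser();
        using (var host = new ServiceHost(instance,
                   new Uri("net.pipe://localhost/PipeReverse")))
        {
            host.AddServiceEndpoint(typeof(IStringReverser),
                                    new NetNamedPipeBinding(), "");
            host.Open();

            // Your driver thread can now read instance.LastRequest (or call
            // any other members you expose) while the host is open.
            Console.ReadLine();
        }
    }
}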
As for (3), it really depends a lot on what you need to do: your performance needs, how many clients you expect at the same time, the amount of data you'll be moving and for how long it needs to be available, etc.
Related
The project I'm currently working on includes a server that receives C# scripts (partial code) from clients, wraps each one to create a complete class, compiles it, and then loads it into a separate AppDomain for execution.
A task (a currently running script) can send feedback to the user at any point of its execution, as defined in the script by the user. The task might also wait for a response from the user (currently I'm assuming this happens only right after feedback has been sent), and the user might, at any moment, decide to kill a task.
The server is implemented as a Windows Service hosting a WCF Service Library.
As I don't want to overcomplicate the client by making it communicate directly with the dynamically created AppDomains, the (partial) solution I considered after some research was hosting a second WCF service with a named pipe binding, which the dynamic AppDomains would use as a relay between themselves and the client-facing WCF service.
My issue is that now I can't think of a clean way to have the two WCF services interact.
My ideas are:
Having them maintain direct references to each other:
Seeing as both of the services are normally singletons, it shouldn't be hard to do.
But that would be a pain to maintain in the case one of them fails and needs to be restarted. (I'm still new to WCF so I have no idea how common that is, but it's still an issue to consider. I think.)
Introducing some sort of a "message queue" (or two, one for each direction) with properties that can be set and subscribed to. Thus when one service sets a property an event will be triggered in the second. But that feels somewhat hacky to me, even though I can't really think of any clear issues.
I could really use some expert input on what I'm trying to accomplish, be it opinions on my thoughts or new ideas. Even if that involves rethinking the architecture. This project is still in an early enough stage to afford some rework, as long as there is enough reason to do that of course.
Since I've put lots of effort (read: 2 minutes in Paint) into preparing a quick (read: useless) schema of the system, I'll link it here since I don't have the reputation to post images:
Link to schema
Edit:
As I now have the reputation thanks to an upvote:
Still, after rereading my question, I feel that perhaps I have been looking at this issue from too narrow a perspective by thinking of the services as something more special than ordinary classes. The more I think about it, the more I feel that the observer pattern is probably the best approach to take.
Just for the record, and to avoid leaving my (silly) question unanswered, I've realised that I was looking at this too narrowly by trying to find a solution specific to WCF services.
And finally I ended up using a variation of the observer pattern (based on the IObservable<T> interface).
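For completeness, this is roughly the shape it took; the TaskFeedback type and class names below are placeholders rather than my actual code:

using System;
using System.Collections.Generic;

// Hypothetical payload flowing from the AppDomain-facing relay to the
// client-facing service.
public class TaskFeedback
{
    public Guid TaskId { get; set; }
    public string Message { get; set; }
}

// The relay publishes feedback without knowing who is listening.
public class RelayService : IObservable<TaskFeedback>
{
    private readonly List<IObserver<TaskFeedback>> _observers =
        new List<IObserver<TaskFeedback>>();

    public IDisposable Subscribe(IObserver<TaskFeedback> observer)
    {
        _observers.Add(observer);
        return new Unsubscriber(_observers, observer);
    }

    public void Publish(TaskFeedback feedback)
    {
        foreach (var observer in _observers.ToArray())
            observer.OnNext(feedback);
    }

    private sealed class Unsubscriber : IDisposable
    {
        private readonly List<IObserver<TaskFeedback>> _observers;
        private readonly IObserver<TaskFeedback> _observer;

        public Unsubscriber(List<IObserver<TaskFeedback>> observers,
                            IObserver<TaskFeedback> observer)
        {
            _observers = observers;
            _observer = observer;
        }

        public void Dispose()
        {
            _observers.Remove(_observer);
        }
    }
}

// The client-facing service subscribes and forwards feedback to its clients.
public class ClientFacingService : IObserver<TaskFeedback>
{
    public void OnNext(TaskFeedback value) { /* push to the connected client */ }
    public void OnError(Exception error) { }
    public void OnCompleted() { }
}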
I came across the same issue. The way I handled duplex communication between the two services is as follows:
For each process (AppDomain Separated Task), create a pair of WCF services. Both services have their instancing set to PerSession (no need for a singleton, which may cause problems in the long run, like disconnects). This means that, for each process (AppDomain Separated Task), the client will be communicating with two distinct service instances, i.e. a service pair (Service1 and Service2).
We want duplex communication between these two services, meaning that each can communicate with the other and pass data (in the form of a DataContract class object).
For this:
1- Declare the two services (e.g. in a separate class library) and host them (self-hosting or otherwise).
2- Create your DataContract class and add any properties, collections, enums, etc. as you like. Both services must have a get-set property of this class.
3- In the same class library (where the Service1 and Service2 classes reside), create another class. This class will act as a depository for the service pair instances. It has a static List in order to register the service pair instances (you can identify each service with a GUID); a rough sketch of it follows this list.
4- Set up the client proxy using svcutil.exe (or in code). When the client makes a service request, a service instance (i.e. Service1) will be created by WCF. In Service1, create or launch the process (AppDomain Separated Task) as Client2, and in its constructor create the Service2 proxy in code.
5- Initialize the Service2 instance (i.e. by making a call to Service2) and register the service pair in the depository's static list (so that it can be retrieved later for duplex communication). Now we have both service instances, and both of them are registered as a pair in the static list.
6- Start communication between the two services by making a call from the Client1 proxy.
7- In the Service1 call method, retrieve the service pair from the static list. Deep copy (DeepClone) the DataContract class object from Service1 to Service2 using the get-set property mentioned in (2). (Note that you can use one of the many deep clone libraries from NuGet, like DeepCloner.)
8- Make a callback from Service2. Client2 now has the same DataContract property values as Client1.
9- Repeat steps 6-8 with the Client2 proxy for Service2-to-Service1 communication.
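To make steps 3, 5 and 7 a bit more concrete, here is a rough sketch of the depository; Service1, Service2 and the data contract below are placeholder stand-ins for your own types:

using System;
using System.Collections.Generic;
using System.Linq;

// Placeholders for the two PerSession service implementations (step 1) and
// the shared data contract (step 2).
public class MyDataContract { /* shared properties, collections, enums... */ }
public class Service1 { public MyDataContract Data { get; set; } }
public class Service2 { public MyDataContract Data { get; set; } }

// Holds both ends of one pairing so either service can find its peer.
public class ServicePair
{
    public Guid PairId { get; set; }
    public Service1 First { get; set; }
    public Service2 Second { get; set; }
}

// Step 3: a static registry shared by every service instance in the library.
public static class ServicePairDepository
{
    private static readonly List<ServicePair> Pairs = new List<ServicePair>();
    private static readonly object Sync = new object();

    // Step 5: called once both instances of a pair exist.
    public static void Register(ServicePair pair)
    {
        lock (Sync) { Pairs.Add(pair); }
    }

    // Step 7: look the pair up again when a call arrives on either service.
    public static ServicePair Find(Guid pairId)
    {
        lock (Sync) { return Pairs.FirstOrDefault(p => p.PairId == pairId); }
    }
}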
I have a WCF service application (actually, it uses WCF Web API preview 5) that intercepts each request and extracts several header values passed from the client. The idea is that the 'interceptor' will extract these values and set up a ClientContext object that is then globally available within the application for the duration of the request. The server is stateless, so the context is per-call.
My problem is that the application uses IoC (Unity) for dependency injection, so there is no use of singletons, etc. Any class that needs to use the context receives it via DI.
So, how do I 'dynamically' create a new context object for each request and make sure that it is used by the container for the duration of that request? I also need to be sure that it is completely thread-safe in that each request is truly using the correct instance.
UPDATE
So I realize as I look into the suggestions below that part of my problem is encapsulation. The idea is that the interface used for the context (IClientContext) contains only read-only properties so that the rest of the application code doesn't have the ability to make changes. (And in a team development environment, if the code allows it, someone will inevitably do it.)
As a result, in my message handler that intercepts the request, I can get an instance of the type implementing the interface from the container but I can't make use of it. I still want to only expose a read-only interface to all other code but need a way to set the property values. Any ideas?
I'm considering implementing two interfaces, one that provides read-only access and one that allows me to initialize the instance, or casting the resolved object to a type that allows me to set the values. Unfortunately, this isn't foolproof either, but unless someone has a better idea, it might be the best I can do.
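To illustrate the two-interface idea (just a sketch; the header-derived properties are hypothetical):

// The read-only view that the rest of the application code sees.
public interface IClientContext
{
    string UserId { get; }
    string Culture { get; }
}

// Only the message handler that intercepts the request needs to know about
// this second interface.
public interface IClientContextInitializer : IClientContext
{
    void Initialize(string userId, string culture);
}

public class ClientContext : IClientContextInitializer
{
    public string UserId { get; private set; }
    public string Culture { get; private set; }

    public void Initialize(string userId, string culture)
    {
        UserId = userId;
        Culture = culture;
    }
}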
Read Andrew Oakley's blog post on WCF-specific lifetime managers. He creates a UnityOperationContextLifetimeManager:
we came up with the idea to build a Unity lifetime manager tied to WCF's OperationContext. That way, our container objects would live only for the lifetime of the request...
Configure your context class with that lifetime manager and then just resolve it. It should give you an "operation singleton".
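A rough sketch of that registration, assuming the UnityOperationContextLifetimeManager from that post and the IClientContext/ClientContext pair from your question, with the classic Unity RegisterType overload that takes a lifetime manager:

using Microsoft.Practices.Unity;

public static class ContainerConfig
{
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();

        // One ClientContext per WCF operation: every Resolve within the same
        // request returns the same instance; different requests get new ones.
        container.RegisterType<IClientContext, ClientContext>(
            new UnityOperationContextLifetimeManager());

        return container;
    }
}

// In the message handler that intercepts the request:
//   var context = (ClientContext)container.Resolve<IClientContext>();
//   ...populate it from the incoming headers...
// Everywhere else, constructor injection of IClientContext receives that same
// instance for the duration of the operation.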
Sounds like you need a Unity LifetimeManager. See this SO question or this MSDN article.
As the title implies, I am trying to understand why, in WCF, people sometimes choose to "generate proxies" versus using a ChannelFactory to manually create new channel instances. I have seen examples of each, but haven't really found any explanations of WHY you would go for one versus the other.
To be honest, I have only ever worked with channels and the ChannelFactory<T> in code I have inherited, i.e.:
IChannelFactory<IDuplexSessionChannel> channelFactory =
    binding.BuildChannelFactory<IDuplexSessionChannel>();
_duplexSessionChannel = channelFactory.CreateChannel(endpointAddress);
So why would I "generate a proxy"? What are the benefits and drawbacks?
The main difference is this:
generating a proxy only requires you to know the URL where the service resides. By generating the proxy, everything else (the service contract and the data contracts involved) will be determined by inspecting the metadata of the service
in order to directly create a ChannelFactory<T>, you must have direct access to the assembly that contains that service contract T for which you're generating a channel factory. This only ever works if you basically control both ends of the channel and you can share the assembly that contains those service contracts. Typically, with a third-party service, this won't be the case - with your own services, yes.
The second important point is this:
creating a generated proxy basically does the two steps that you would do - create a ChannelFactory<T>, and from that, create the actual channel - in a single constructor. You have no control over these two steps.
doing your own channel creation is beneficial, since the creation of the ChannelFactory<T> is the expensive step - so you could cache your channel factory instance somewhere. Creating and re-creating the actual channel from the factory is a much less involved step which you can do more frequently (see the sketch at the end of this answer).
So if you do control both ends of the communication, service and client, you do have the option to share the service contracts in a separate assembly, and thus you have more options.
With most third-party services, you just simply don't have that option.
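To illustrate the caching point (a sketch only; IMyService and the address are made-up placeholders):

using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Echo(string text);
}

public static class MyServiceChannels
{
    // Building the factory is the expensive step, so do it once and reuse it.
    private static readonly ChannelFactory<IMyService> Factory =
        new ChannelFactory<IMyService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8080/MyService"));

    // Channels are cheap to create; grab a fresh one per unit of work.
    public static IMyService CreateChannel()
    {
        return Factory.CreateChannel();
    }
}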
Using a proxy is simpler and easier to understand. You get to deal in terms of simple things - classes and methods on those classes - instead of complex, network-related things like channels.
OTOH, this is not made easier by the design flaw in WCF that prevents the same simple use of a WCF proxy that we could do with ASMX proxies:
using (var client = new MyServiceClient())
{
}
If you use this pattern with WCF, you can lose the original exception when the block is exited due to an exception. client.Dispose() can throw an exception, which will overwrite the exception originally being thrown. A more complex pattern is required.
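One common shape for that more complex pattern is a small helper that closes the channel on success and aborts it on failure, so a faulted Close() never masks the real exception (a sketch, not the only way to do it):

using System;
using System.ServiceModel;

public static class WcfClientHelper
{
    // Executes 'work' against a freshly created channel.
    public static void Use<TChannel>(ChannelFactory<TChannel> factory,
                                     Action<TChannel> work)
    {
        TChannel channel = factory.CreateChannel();
        var clientChannel = (IClientChannel)channel;
        try
        {
            work(channel);
            clientChannel.Close();
        }
        catch
        {
            // Abort instead of Close so the original exception propagates.
            clientChannel.Abort();
            throw;
        }
    }
}

// Usage:
//   WcfClientHelper.Use(myFactory, client => client.DoWork());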
This may help you:
When to use a proxy?
If you have a service that you know is going to be used by several applications or is generic enough to be used in several places, you’ll want to use the proxy classes.
When to use ChannelFactory?
The ChannelFactory class is used to construct a channel between the client and the service without the need for a proxy. In some cases, you may have a service that is tightly bound to the client application. In such a case, you can reference the interface DLL directly and use ChannelFactory to call your methods using that.
You can also refer to the following link to understand the difference between ChannelFactory and a proxy class:
http://ashishkhandelwal.arkutil.com/wcf/channelfactory-over-proxy-class-in-wcf/
The main advantage of ChannelFactory is that you can create the proxy dynamically at runtime, on the fly. With SvcUtil (Add Service Reference in VS) you create the proxy at design time, so its implementation is more static.
I have a WCF server that is a library assembly. (I am writing it so I can mock the level below it.) It is called via a client helper class that is in a different assembly. As the data that is transferred is complex and the server has to send callbacks to the clients, I wish to test the WCF code in isolation.
(I am only interested in the TCP channel or named pipe channel.)
I do not wish to mock WCF, as the risk I am trying to control is my usage of WCF.
Is there an easy way to:
Load my WCF server into a different app domain
(I could load the WCF server into the main app domain, but then it is harder to prove that the objects were serialized correctly rather than just having pointers moved about.)
Set up all the WCF config so the client class can call it (most likely named pipes or TCP)
And use it in some NUnit tests
I would rather not have my unit tests depend on config files.
I expect (hope) that there are some utility classes for setting up WCF unit tests, to which I can just pass the type of my server class and which will give me back a client factory that connects to the server.
Am I going about this the wrong way, e.g. is there a better way of testing my communication layer and usage of WCF?
It is by far the easiest approach if you spin up the service in-proc, because then you don't need to write a lot of complex synchronization code to determine when the service is running and when it isn't.
Don't worry about pointers being passed around - they won't (unless you choose the new in-proc binding in WCF 4). It's the binding that determines how and if objects are serialized. Named pipes are excellent for this purpose.
I always spin up a new ServiceHost in each test case inside a using statement, which effectively guarantees that the host is running before calls are being made to it, and that it is properly closed after each test. This last part is important because it ensures test independence.
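As an illustration, here is a minimal NUnit test in that style over a named pipe; ICalculator and CalculatorService are stand-ins for your real contract and implementation:

using System;
using System.ServiceModel;
using NUnit.Framework;

[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

public class CalculatorService : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorServiceTests
{
    private const string Address = "net.pipe://localhost/CalculatorServiceTests";

    [Test]
    public void Add_ReturnsSum_OverNamedPipe()
    {
        // The host lives only for this test; disposing it closes the host.
        using (var host = new ServiceHost(typeof(CalculatorService), new Uri(Address)))
        {
            host.AddServiceEndpoint(typeof(ICalculator), new NetNamedPipeBinding(), "");
            host.Open();

            var factory = new ChannelFactory<ICalculator>(
                new NetNamedPipeBinding(), new EndpointAddress(Address));
            ICalculator client = factory.CreateChannel();

            Assert.AreEqual(5, client.Add(2, 3));

            ((IClientChannel)client).Close();
            factory.Close();
        }
    }
}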
You may also want to look at a series of blog posts I wrote about a very similar subject.
You can use SOA Cleaner for testing your WCF. Take a look at http://xyrow.com (no installation is needed). It's not unit testing, but it can be very helpful; you can have it run on your build, as it supports the command line too.
I have recently been involved in developing a WCF service which acts as a kind of multicast relay (i.e. it accepts some incoming data, does some processing and then sends it off to multiple other external services). This service (which I will refer to as "my service") is fed data by a second internal service.
This data is going to be relayed from my service as XML held in a string. Therefore my service could simply accept a string as a parameter to a method request, but this is not ideal as we lose type safety.
The second service has a class that encapsulates all of the information which my service requires to be processed, and eventually relayed to the external services.
The second service exposes this class in its data contract. Ideally, in order to maintain type safety, and without requiring lots of changes to the second service's implementation, I should accept this type of class as an argument to my service operation.
What would be the best way for me to say in my data contract that I require this type of class without duplicating code? Could I add a service reference to the second service, and then use the proxy class which is created in my data contract?
I just can't get my head around this, even though it seems like a trivial problem!
Cheers for any help!
If you are trying to avoid duplication of classes, put your class declaration in its own assembly and share that DLL between all parties in the WCF service. When you create your service reference, you can choose which assemblies are shared (assuming you use the VS GUI service utility).
The use of a proxy class might be a good avenue as well. If you expose your main data class as a data contract and then create a proxy of that, the proxy will include a version of the exposed class that can be used by your other services.
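For example, the shared contracts assembly might look like this (names are placeholders):

// Shared.Contracts.dll - referenced by the second service, "my service",
// and any other party that needs the type.
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class RelayMessage
{
    [DataMember] public string PayloadXml { get; set; }
    [DataMember] public string Destination { get; set; }
}

// "My service" can now accept the shared type directly, keeping type safety
// without duplicating the class.
[ServiceContract]
public interface IRelayService
{
    [OperationContract]
    void Relay(RelayMessage message);
}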