PRISM and WCF - Do they play nice?

Ok, this is a more general "ugly critters in the corner" question. I am planning to start a project on WCF and PRISM. I have been playing around with PRISM for some time now and must say I like it: a solid foundation for applications with nice possibilities to grow.
Now I want to incorporate WCF and build a distributed application, with one part on a server and two on the clients. They could even run on the same machine or not, depending on the scenario.
My idea is now to take the event concept from PRISM and extend it "over the wire" using WCF and callbacks, as described in the WCF AlarmClock Callback Example.
I created a small picture to illustrate the idea (mainly for me); perhaps this makes things a little clearer:
The grey arrows stand for "using lib". "WCF-Event-Base" means normal PRISM events, where the Publish method is called "over the wire".
There are a few questions which come to mind:
Are there any existing known examples for such things?
What will be the best way to "raise" events over the wire?
Any possible problems with this concept (the ugly critters mentioned earlier)?
Regarding the second question, I currently think about raising the events using a string (the type of the concrete event I want to raise) and the payload as argument. Something like public void RaiseEvent(string eventType, object eventPayload){} The payload needs to be serializable; perhaps I will even include a hash check. (Meaning if I raise e.g. an event with a picture as argument 10 times, I only transfer the picture once; afterwards the hash lets the server use its buffer when publishing)...
Ok, I think you get the idea. This "thing" should behave like a giant single application, using a kind of WCF_EventAggregator instead of the normal PRISM IEventAggregator. (wow, while writing I just got the idea to "simply" extend the IEventAggregator, have to think about this)...
Why do I write this? Well, for feedback mainly, and to sort my thoughts. So comments welcome, perhaps anything I should be "careful" about?
Chris
[EDITS]
Client distribution
There can be any number of clients, and the server should not need to know about them in advance. The server itself can be a client to itself, raising strongly typed PRISM events in other parts of the source code.
The main difference between a "client" and a "server" is the actual implementation of the WCF_PRISM connector, see next chapter...
Client Event raising (PRISM feature)
In PRISM, to raise simple events you do NOT even need a reference to a service interface. The IEventAggregator can be obtained via dependency injection and provides an instance of the desired event (e.g. WeatherChangedEvent). This event can be raised by simply calling eventInstance.Publish(23), because the event is implemented as public class WeatherChangedEvent : CompositePresentationEvent<int>.
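To make the pattern concrete, here is a minimal sketch using the Prism v4 API (WeatherPublisher is a made-up example class):

public class WeatherChangedEvent : CompositePresentationEvent<int> { }

public class WeatherPublisher
{
    private readonly IEventAggregator _aggregator;

    public WeatherPublisher(IEventAggregator aggregator)
    {
        // IEventAggregator arrives via dependency injection.
        _aggregator = aggregator;
    }

    public void PublishTemperature(int temperature)
    {
        // GetEvent<T> returns the aggregator-managed singleton instance of the event.
        _aggregator.GetEvent<WeatherChangedEvent>().Publish(temperature);
    }
}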
WCF - PRISM Connector
Subscribing to events is as simple as raising them. Every module can subscribe to events using the same technique, obtaining a reference and using Subscribe to attach to the event.
Here is where the "magic" should happen. The clients will include a PRISM module responsible for connecting PRISM events to "WCF message sends". It will basically subscribe to all available events in the solution (they are all defined in the infrastructure module anyway) and send out a WCF message whenever an event is raised.
The difference between a SERVER and a CLIENT is the implementation of this module. There needs to be a slight difference because of two things.
The WCF setup settings
The flow of events to prevent an infinite loop
The event flow will be (for example):
1. Client obtains a reference to WeatherChangedEvent
2. wChanged.Publish(27) --> normal PRISM event raising
3. The WCF_PRISM module is subscribed to the event and sends it to the server
4. The server internally gets its instance of WeatherChangedEvent and publishes it
5. The server calls back to all clients, raising their WeatherChangedEvent
Open Points
The obvious point is preventing a loop. If the server raised the event in ALL clients, the clients would call back to the server, raising the event again, and so on... So there needs to be a distinction between a locally caused event (which I have to send to the server) and a "server caused event" (which I do not have to send to the server).
Also, if a client initiated the event itself, it does not need to be called back by the server, because the event has already been raised (in the client itself, step 2).
All this special behaviour will be encapsulated in the WCF event raiser module, invisible to the rest of the app. I still have to think about "how to know if an event was already published"; perhaps a GUID or something like that would be a good idea.
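A rough sketch of what that loop prevention could look like inside the connector (everything here is hypothetical - IEventRelayService, the GUID tagging, and the re-entrancy flag are my assumptions, not existing PRISM or WCF features):

public class WcfPrismEventConnector
{
    private readonly IEventAggregator _aggregator;
    private readonly IEventRelayService _serverProxy;              // hypothetical WCF duplex proxy
    private readonly HashSet<Guid> _locallyRaised = new HashSet<Guid>();
    private bool _publishingRemote;                                // true while republishing a server event

    public WcfPrismEventConnector(IEventAggregator aggregator, IEventRelayService serverProxy)
    {
        _aggregator = aggregator;
        _serverProxy = serverProxy;
        // Forward locally published events to the server.
        _aggregator.GetEvent<WeatherChangedEvent>().Subscribe(OnLocalWeatherChanged);
    }

    private void OnLocalWeatherChanged(int temperature)
    {
        if (_publishingRemote)
            return;                                                // came from the server, don't echo it back
        var id = Guid.NewGuid();
        _locallyRaised.Add(id);                                    // remember that this client initiated it
        _serverProxy.RaiseEvent("WeatherChangedEvent", id, temperature);
    }

    // Invoked by the WCF callback when the server broadcasts an event to all clients.
    public void OnServerEvent(string eventType, Guid id, object payload)
    {
        if (_locallyRaised.Remove(id))
            return;                                                // already raised here (step 2), skip
        _publishingRemote = true;
        try { _aggregator.GetEvent<WeatherChangedEvent>().Publish((int)payload); }
        finally { _publishingRemote = false; }
    }
}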
And now the second big question, what I was aiming at when talking about "strings" earlier. I do not want to write a new service interface definition every time I add an event. Most events in PRISM are defined by one line; especially during development, I do not want to update the WCF_Event_Raising_Module each time I add an event.
I thought about sending the events directly when calling WCF, e.g. using a function with a signature like:
public void RaiseEvent(EventBase e, object[] args)
The problem is, I do not really know if I can serialize PRISM events that easily. They all derive from EventBase, but I have to check this... For that reason, I had the idea to use the type (as a string), because I know the server shares the infrastructure module and can obtain its own instance of the event (no need to send the event over the wire, only the arg).
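If the shared-infrastructure route works out, the server side of RaiseEvent could close the generic GetEvent over the resolved type via reflection. A sketch, assuming the event type name is assembly-qualified and the payload type matches (GetEvent and Publish are the real PRISM members; everything else is made up):

public void RaiseEvent(string eventTypeName, object eventPayload)
{
    // Resolve the event type from the shared infrastructure assembly.
    Type eventType = Type.GetType(eventTypeName, true);

    // IEventAggregator.GetEvent<T>() is generic, so close it over the resolved type.
    MethodInfo getEvent = typeof(IEventAggregator)
        .GetMethod("GetEvent")
        .MakeGenericMethod(eventType);
    object eventInstance = getEvent.Invoke(_aggregator, null);

    // CompositePresentationEvent<T>.Publish(T) - only the serializable payload
    // ever crossed the wire; the event instance is the server's own.
    eventInstance.GetType().GetMethod("Publish").Invoke(eventInstance, new[] { eventPayload });
}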
So far till here; I will keep the question open for more feedback. The main new "insight" I just got: I have to think about the recursion / infinite loop problem.
Btw. if anybody is completely confused by all this event talk, give PRISM a try. You will love it, even if you only use DI and Events (RegionManager e.g. is not my favorite)
Chris
[END EDIT 1]

This is a very interesting approach. I would say only two things here:
You are really asking for trouble if you use strings and object parameters. Strongly typed EventAggregator events (inheriting from CompositePresentationEvent) are the way to go here. The maintainability will go way up if you do this.
Your model for your WCF -> EventAggregator bridge should consider everything to and from the EventAggregator as an "event" and everything to/from the WCF services as "messages". What you should really consider is that you are essentially translating an EventAggregator event into a message, rather than asking the question "how do I raise WCF events".
I think what you are doing is feasible. Looking at your implementation I really like how you are thinking about it.
Slight Alternative (w/ strong typing)
I wanted to throw a little something out there and see what you thought about it... maybe it will influence your design slightly. Specifically this is meant to address my first point above and go even further with the strong-typing.
Have you considered having EventAggregator-backed implementations of your service interface? Let's say in your example you have an IWeatherService WCF service that you are working with. Currently, as I understand it, your usage will look something like this:
Client uses the WCF Event Client library and calls RaiseEvent("ChangeWeather", Weather.Sunny);
The WCF Event Client library translates this into the appropriate call to the WCF service waiting to receive this message, using the IWeatherService channel interface to do so. Probably with a big nasty switch statement based on the name of the method call.
Why not modify this slightly? Make IWeatherService a shared contract among all of the servers and clients. The servers will have the actual implementation, obviously, but the clients will have EventAggregator-backed implementations that go to a central broker that queues and sends messages to servers.
Write an EventAggregator-backed implementation of the IWeatherService that raises events to be received by a central message broker and throw that implementation in your container for clients to use.
public class ClientWeatherService : IWeatherService
{
    private readonly IEventAggregator _aggregator;

    public ClientWeatherService(IEventAggregator aggregator)
    {
        _aggregator = aggregator;
    }

    public void ChangeWeather(Weather weather)
    {
        // Instead of calling the real service, publish a local event for the broker.
        ChangeWeatherEvent cwEvent = _aggregator.GetEvent<ChangeWeatherEvent>();
        cwEvent.Publish(weather);
    }
}
From there, instead of using your "WCF Event Client Library" directly, they use the IWeatherService directly, not knowing that it doesn't call the actual service.
public class MyWeatherViewModel : ViewModel
{
    private readonly IWeatherService _weatherService;

    public MyWeatherViewModel(IWeatherService weatherService)
    {
        _weatherService = weatherService;
    }
}
Then, you'd have some event handler setup to make the WCF calls to the real service, but now you have the benefit of strong-typing from the clients.
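That handler could be quite small. A sketch (the endpoint name is made up; the important detail is that the broker uses the real WCF channel, not the EventAggregator-backed implementation from the container, or it would just talk to itself):

public class WeatherServiceBroker
{
    private readonly IWeatherService _channel;

    public WeatherServiceBroker(IEventAggregator aggregator)
    {
        // The real WCF channel to the server.
        _channel = new ChannelFactory<IWeatherService>("WeatherEndpoint").CreateChannel();

        // Forward every locally published ChangeWeatherEvent to the real service.
        aggregator.GetEvent<ChangeWeatherEvent>().Subscribe(OnChangeWeather);
    }

    private void OnChangeWeather(Weather weather)
    {
        _channel.ChangeWeather(weather);
    }
}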
Just a thought.
I really like this type of question. I wish more people would ask this kind of thing on Stackoverflow. Gets the brain moving in the morning :)

It seems like a complicated approach to the problem.
Are you raising the event from the client application, or raising the events from the service using the callback contract? Or both?
I would approach this with a simple service class in the client. It can implement the callback contract, and for each callback method it can just raise a PRISM event locally to any subscribers in the client. If you need to raise events that are handled by the service, then the service class can subscribe to those events and call the WCF service.
All you really need is a class that abstracts the details of the WCF service away from the client and exposes its interface through PRISM events.
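Such a class might look like this (a sketch only; IWeatherServiceCallback, the endpoint name, and the event types are assumptions borrowed from the weather example above):

public class WeatherServiceGateway : IWeatherServiceCallback   // the WCF callback contract
{
    private readonly IEventAggregator _aggregator;
    private readonly IWeatherService _proxy;

    public WeatherServiceGateway(IEventAggregator aggregator)
    {
        _aggregator = aggregator;

        // Duplex channel: this instance receives the service's callbacks.
        _proxy = new DuplexChannelFactory<IWeatherService>(
            new InstanceContext(this), "WeatherEndpoint").CreateChannel();

        // Events that the service must handle are subscribed to here and forwarded out.
        _aggregator.GetEvent<ChangeWeatherEvent>().Subscribe(w => _proxy.ChangeWeather(w));
    }

    // Callback method: translate the WCF callback into a local PRISM event.
    public void OnWeatherChanged(Weather weather)
    {
        _aggregator.GetEvent<WeatherChangedEvent>().Publish(weather);
    }
}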
I personally wouldn't want to modify / extend the infrastructure component and create a dependency on the concrete WCF service.

Related

What are the differences between BackgroundServices and SingletonServices?

I have a service which should begin when the server starts, and continue running for the entirety of the server lifetime. I would like to be able to manage the service (querying, modifying runtime options, etc.) with a web frontend. While researching the best way to accomplish this, I came across two options: a scoped service with a singleton lifetime, and a BackgroundService/IHostedService. What are the differences between the two options, and when should one be used over the other?
Neither of those is actually a thing. The closest is the concept of a singleton and hosted services. A hosted service is a class that implements IHostedService, and it pretty much fits the bill of what you're looking for in that it will start at app startup and stop at app shutdown. ASP.NET Core 3.0 added a BackgroundService class, which is just an implementation of IHostedService with a lot of the cruft of defining what happens at start/stop/etc. covered. In practice, it usually makes more sense to inherit from BackgroundService, but you can also just implement IHostedService directly yourself.
"Singleton" is just a lifetime. All hosted services are registered with a singleton lifetime, but just because something is a singleton, doesn't mean it does anything special. You could, for example, register some random class as a singleton, and whenever it is injected, you'll always get the same instance. However, it will not do anything at startup or shutdown on its own.
Long and short, there are no differing options here. You're looking for a hosted service. That said, it only solves part of what you're looking for, in that it will "run" while the app is running. However, you can't really connect to it, or interact with it directly. It's not like a Web Api or something; it isn't exposed for HTTP requests, for example.
To "manage" it, you would have to expose some sort of API that would then interact with the service through code. For example, the docs provide an example of a queued background service that processes things added to the queue. However, to queue something, you would need to do something like create an API endpoint, inject the queue, and then use code to add a new item to the queue. Then, the actual hosted service would eventually pop that task from the queue and work on it.

Communication between two WCF service libraries on the same Windows Service host

The project I'm currently working on includes a server that receives C# scripts (partial code) from clients, wraps each script to create a complete class, compiles it, then loads it into a separate AppDomain for execution.
A task (a currently running script) can send feedback to the user at any point of its execution, as defined in the script by the user. Possibly the task might also wait for a response from the user (currently assuming this happens only right after having sent feedback). And the user might, at any moment, decide to kill a task.
The server is implemented as a Windows Service hosting a WCF Service Library.
As I don't want to overcomplicate the client to make it communicate directly with the dynamically created AppDomains, the (partial) solution that I considered after some research was hosting a second WCF service with named pipe binding to make the dynamic AppDomains use it as a relay between them and the client facing WCF service.
My issue is that now I can't think of a clean way to have the two WCF services interact.
My ideas are:
Having them maintain direct references to each other:
Seeing as normally both of the services are singletons, it shouldn't be hard to do.
But that would be a pain to maintain in the case one of them fails and needs to be restarted. (I'm still new to WCF so I have no idea how common that is, but it's still an issue to consider. I think.)
Introducing some sort of a "message queue" (or two, one for each direction) with properties that can be set and subscribed to. Thus when one service sets a property an event will be triggered in the second. But that feels somewhat hacky to me, even though I can't really think of any clear issues.
I could really use some expert input on what I'm trying to accomplish, be it opinions on my thoughts or new ideas. Even if that involves rethinking the architecture. This project is still in an early enough stage to afford some rework, as long as there is enough reason to do that of course.
Since I've put lots of effort (read: 2 minutes in Paint) into preparing a quick (read: useless) schema of the system, I'll link it here since I don't have the reputation to post images:
Link to schema
Edit:
As I now have the reputation thanks to an upvote:
Still, after rereading my question, I feel that perhaps I have been looking at this issue from too narrow a perspective by thinking of the services as something more special than ordinary classes. The more I think about it, the more I feel that the observer pattern is probably the best approach to take.
Just for the record, and to avoid leaving my (silly) question unanswered: I realised that I was looking at this too narrowly by trying to find a solution specific to WCF services.
And finally I ended up using a variation of the observer pattern (based on the IObservable<T> interface).
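For completeness, the shape of that variation was roughly the following (a simplified sketch; TaskMessage stands in for whatever notification type the scripts produce):

public class TaskMessage
{
    public string Text;
}

// The relay exposes a stream of task notifications that both the
// client-facing service and the AppDomain-hosted tasks can observe.
public class TaskNotifier : IObservable<TaskMessage>
{
    private readonly List<IObserver<TaskMessage>> _observers = new List<IObserver<TaskMessage>>();

    public IDisposable Subscribe(IObserver<TaskMessage> observer)
    {
        _observers.Add(observer);
        return new Unsubscriber(_observers, observer);
    }

    public void Publish(TaskMessage message)
    {
        foreach (var observer in _observers.ToArray())
            observer.OnNext(message);
    }

    private class Unsubscriber : IDisposable
    {
        private readonly List<IObserver<TaskMessage>> _observers;
        private readonly IObserver<TaskMessage> _observer;

        public Unsubscriber(List<IObserver<TaskMessage>> observers, IObserver<TaskMessage> observer)
        {
            _observers = observers;
            _observer = observer;
        }

        public void Dispose() => _observers.Remove(_observer);
    }
}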
I came across the same issue. The way I handled duplex communication between the two services is as follows:
For each process (AppDomain-separated task), create a pair of WCF services. Both services have their instancing set to PerSession (no need for a singleton, which may cause problems in the long run, like disconnects). This means the client will be communicating, for each process (AppDomain-separated task), with two distinct service instances, i.e. a service pair (Service1 and Service2).
We want duplex communication between these two services, which means that both can communicate with the other and pass data (in the form of a DataContract class object).
For this:
1- Declare two services (i.e. in a separate class library) and host them (self-hosted or otherwise).
2- Create your DataContract class and add any property, collection, enum etc. as you like. Both services must have a get-set property for this class.
3- In the same class library (where the Service1 and Service2 classes reside), create another class. This class will act as a depository for the service pair instances (see the sketch after these steps). It has a static list in order to register the service pair instances (you can identify each pair with a GUID).
4- We set up the client proxy using svcUtil.exe (or by code). When the client makes a service request, a service instance (i.e. Service1) will be created by WCF. At Service1, create or launch the process (AppDomain-separated task) as client2, and in its constructor create the Service2 proxy by code.
5- Initialize the Service2 instance (i.e. by a call to Service2) and register the service pair instances in the static list of the depository (so that they can be retrieved later for duplex communication). Now we have both service instances, and both of them are registered as a pair in the static list.
6- Start communication between both services by making a call from Client1 proxy.
7- In the Service1 call method, retrieve the service pair from the static list. Deep copy (DeepClone) the DataContract class object from Service1 to Service2 using the get-set property mentioned at (2). (Note that you can use one of the many deep-clone libraries from NuGet, like DeepCloner.)
8- Make a callback from Service2. Client2 now has the identical DataContract class property values as Client1.
9- Repeat steps 6-8 for Client2 proxy for Service2-Service1 communication.
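The depository from step 3 could be as simple as this (a sketch; a GUID-keyed dictionary stands in for the static list, and ServicePair is a made-up holder class):

public static class ServicePairDepository
{
    private class ServicePair
    {
        public Service1 First;
        public Service2 Second;
    }

    private static readonly Dictionary<Guid, ServicePair> Pairs = new Dictionary<Guid, ServicePair>();
    private static readonly object SyncRoot = new object();

    // Called during step 5, once both service instances exist.
    public static void Register(Guid pairId, Service1 first, Service2 second)
    {
        lock (SyncRoot)
            Pairs[pairId] = new ServicePair { First = first, Second = second };
    }

    // Called during step 7 to find the partner for the duplex call.
    public static Service2 GetPartner(Guid pairId)
    {
        lock (SyncRoot)
            return Pairs[pairId].Second;
    }
}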

Need some advice for a web service API?

My company has a product that I feel can benefit from a web service API. We are using MSMQ to route messages back and forth through the backend system. Currently we are building an ASP.NET application that communicates with a web service (WCF) that, in turn, talks to MSMQ for us. Later on down the road, we may have other client applications (not necessarily written in .NET). The message going into MSMQ is an object that has a property made up of an array of strings. There is also a property that contains the command (a string) that will be routed through the system. Personally, I am not a huge fan of this, but I was told it is for scalability and every system can use strings.
My thought, regarding the web services was to model some objects based on our data that can be passed into and out of the web services so they are easily consumed by the client. Initially, I was passing the message object, mentioned above, with the array of strings in it. I was finding that I was creating objects on the client to represent that data, making the client responsible for creating those objects. I feel the web service layer should really be handling this. That is how I have always worked with services. I did this so it was easier for me to move data around the client.
It was recommended to our group that we should maintain the “single entry point” into the system by offering an object that contains commands and have one web service to take care of everything. So, the web service would have one method in it, let’s call it MakeRequest, and it would return an object (either serialized XML or JSON). The suggestion was to have a base object that may contain some sort of list of commands that other objects can inherit from. Any other object may have its own command structure, but still inherit base commands. What is passed back from the service is not clear right now, but it could be that “message object” with an object attached to it representing the data. I don’t know.
My recommendation was to model our objects after our actual data and create services for the types of data we are working with. We would create a base service interface that would house any common methods used for all services. So for example, GetById, GetByName, GetAll, Save, etc. Anything specific to a given service would be implemented for that specific implementation. So a User service may have a method GetUserByUsernameAndPassword, but since it implements the base interface it would also contain the “base” methods. We would have several methods in a service that would return the type of object expected, based on the service being called. We could house everything in one service, but I still would like to get something back that is more usable. I feel this approach leaves the client out of making decisions about what commands to be passed. When I connect to a User service and call the method GetById(int id) I would expect to get back a User object.
I had the luxury of working with MS when I started developing WCF services. So, I have a good foundation and understanding of the technology, but I am not the one designing it this time.
So, I am not opposed to the “single entry point” idea, but any thoughts about why either approach is more scalable than the other would be appreciated. I have never worked with such a systematic approach to a service layer before. Maybe I need to get over that?
I think there are merits to both approaches.
Typically, if you are writing an API that is going to be consumed by a completely separate group of developers (perhaps in another company), then you want the API to be as self-explanatory and discoverable as possible. Having specific web service methods that return specific objects is much easier to work with from the consumer's perspective.
However, many companies use web services as one of many layers to their applications. In this case, it may reduce maintenance to have a generic API. I've seen some clever mechanisms that require no changes whatsoever to the service in order to add another column to a table that is returned from the database.
My personal preference is for the specific API. I think that the specific methods are much easier to work with - and are largely self-documenting. The specific operation needs to be executed at some point, so why not expose it for what it is? You'd get laughed at if you wrote:
public void MyApiMethod(string operationToPerform, params object[] args)
{
    switch (operationToPerform)
    {
        case "InsertCustomer":
            InsertCustomer(args);
            break;
        case "UpdateCustomer":
            UpdateCustomer(args);
            break;
        ...
        case "Juggle5BallsAtOnce":
            Juggle5BallsAtOnce(args);
            break;
    }
}
So why do that with a Web Service? It'd be much better to have:
public void InsertCustomer(Customer customer)
{
    ...
}

public void UpdateCustomer(Customer customer)
{
    ...
}

...

public void Juggle5BallsAtOnce(bool useApplesAndEatThemConcurrently)
{
    ...
}

Accessing the ServiceModel layer directly

I'm new to WCF, so apologies if I'm missing the boat completely.
It seems like WCF provides plenty of functionality for using the "Channel" layer by itself. For example, to create a server, you can create a channel listener from a binding and call WaitForRequest, Reply, etc. These methods all deal with Message objects, so it is up to you to do something with the message.
My question has to do with what happens once we've already got a message. Suppose I have an object that implements a service, described by a ServiceContract, and a Message object which I know represents a call to a particular operation. What I'd really like to do is something like:
Message requestMessage = GetMessageSomehow();
OperationDescription oc = GetContractForMessage();
Message replyMessage = Invoke(myService, oc, requestMessage);
At the very least, if I could somehow access the IOperationInvoker and IDispatchMessageFormatter objects that get created for a type, it would be pretty simple to chain them together to get the functionality I'm looking for.
In my particular case, I need to implement some simple Soap 1.1 and 1.2 services (with no WS-Addressing). I already have HttpListenerRequest/Response objects, and can route based off of either the SOAPAction or ContentType header.
I think having this functionality would also be pretty useful for unit testing. For example, I need to implement services against existing clients. It would be nice to have unit tests where I could test that the attributes on the service class are correct (i.e. that the message I know I will be getting is properly translated into a call on my service interface).
Any suggestions?
Serialization/Deserialization from that Message instance to actual parameters for a call is usually done by an IDispatchMessageFormatter / IClientMessageFormatter.
On the server side, an IDispatchMessageFormatter is injected into the DispatchRuntime by a custom operation behavior that the data contract serializer (or other serializer) inserts.
But... if you're not using ServiceHost, there's no DispatchRuntime. Basically, if you want all of this, you're going to have to do all the hard work yourself :)
That said, if you can get an OperationDescription object, you should be able to instantiate a DataContractSerializerOperationBehavior, but you won't be able to get an IDispatchMessageFormatter out of it... you can get an XmlObjectSerializer, though, which might, or might not, be useful for you.
Notice that an IOperationInvoker wouldn't help all that much, since it presumes you've already done message serialization/deserialization, so it's not really all that useful (and the rest of the functionality is fairly simple for basic use cases if you want to roll it yourself).

wcf - transfer context into the headers

I am using WCF 4 and trying to transparently transfer context information between client and server.
I was looking at behaviors and was able to pass things around. My problem is how to flow the context received in the incoming headers to the other services that might be called by a service.
In the service behavior I intercept the message and read the headers, but I don't know where to put that data so it is accessible to the next service call that the current service might make.
What I am looking for is something like:
public void DoWork()
{
    var someId = MyContext.SomeId;
    // do something with it here and call another service
    using (var proxy = GetProxy<IAnotherService>())
        proxy.CallSomeOtherMethodThatShouldGetAccessTo_MyContextualObject();
}
If I store the headers in thread-local storage I might have problems due to thread agility (not sure this happens outside ASP.NET, e.g. in custom service hosts). How would you implement MyContext in the code above?
I chose the MyContext instead of accessing the headers directly because the initiator of the service call might not be a service in which case the MyContext is backed by HttpContext for example for storage.
"In the service behavior I intercept the message and read the headers but don't know where to put that data to be accessible to the next service call."
Typically, you don't have any state between calls. Each call is totally autonomous, each call gets a brand new instance of your service class created from scratch. That's the recommended best practice.
If you need to pass that piece of information (language, settings, whatever) to a second, third, fourth call, do so by passing it in their headers, too. Do not start to put state into the WCF server side! WCF services should always be totally autonomous and not retain any state, if at ever possible.
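Concretely, flowing a value through headers on the next outgoing call can be done with OperationContextScope. A sketch (the header name and namespace URI are made up; GetProxy is the helper from the question's own snippet):

// Inside the service operation: read the value from the incoming headers.
string someId = OperationContext.Current.IncomingMessageHeaders
    .GetHeader<string>("SomeId", "http://myapp/context");

// Forward it on the next outgoing call.
using (var proxy = GetProxy<IAnotherService>())
using (new OperationContextScope((IContextChannel)proxy))
{
    var header = MessageHeader.CreateHeader("SomeId", "http://myapp/context", someId);
    OperationContext.Current.OutgoingMessageHeaders.Add(header);
    proxy.CallSomeOtherMethodThatShouldGetAccessTo_MyContextualObject();
}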
UPDATE: OK, after your comments: what might be of interest to you is the new RoutingService base class that will ship with WCF 4. It allows scenarios like the one you describe - getting a message from the outside and forwarding it to another service somewhere in the background. Google for "WCF4 RoutingService" - you should find a number of articles. I couldn't find anything specific about headers, but I guess those would be transparently transported along.
There's also a two-part article series Building a WCF Router Part 1 (and part 2 here) in MSDN Magazine that accomplishes more or less the same in WCF 3.5 - again, not sure about headers, but maybe that could give you an idea.