WCF Data Service whose data source is another WCF Data Service

Does someone know if it is possible to use one WCF Data Service as the data source of another WCF Data Service? If so, how?

So the short answer is yes. I have actually consumed one WCF service in another (an HTTP binding coming in to a service on one computer, and that service then had a named-pipe binding to communicate with multiple desktop apps, with some data transformation in the middle). That would not be an issue at all: you would set up a proxy/client just like you would in a desktop client and handle everything in your new service as if it were just passing information along. You could even create a shared library for the DataContracts and such.
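For illustration, a minimal sketch of such a pass-through service might look like this (IOrdersService, Order, GetOrder and the endpoint name are made-up placeholders, not anything from the question):

using System.ServiceModel;

// A service that simply relays calls to an upstream WCF service,
// reusing one ChannelFactory because factory creation is expensive.
public class RelayOrdersService : IOrdersService
{
    private static readonly ChannelFactory<IOrdersService> Factory =
        new ChannelFactory<IOrdersService>("UpstreamOrdersEndpoint");

    public Order GetOrder(int id)
    {
        IOrdersService upstream = Factory.CreateChannel();
        try
        {
            // Any data transformation would happen here before returning.
            return upstream.GetOrder(id);
        }
        finally
        {
            ((IClientChannel)upstream).Close();
        }
    }
}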
HOWEVER, I would not suggest the leapfrog method in your implementation. Depending on how many customers you are potentially opening the door to, you may be introducing a bottleneck if you have a singleton service, or overload your existing service if the new one makes many connections. Since you have a SQL Server, why would you not have a WCF service on your (public) web/app server that connects to it and provides the data you need? I'm only thinking this because your situation can become exponentially complicated when you start trying to pass credentials for authentication and authorization between the two, depending on your security settings. Another thing to consider is the complexity of debugging the new service, the old one, and a client at the same time, as if it wasn't a pain just to do server and client. Since you are opening it to a public-facing port, there are different things to set up, and debugging everything on the same machine is not the same as on a public-facing application server.
Sorry if this goes against what you were hoping to hear. I'm just saying that it is possible, but not suggested (at least by me) in your particular case.

Related

Load balancing a room-based pub/sub application on Azure

I've got a working Silverlight/WCF application that I need to start thinking about scaling. An obvious target for scaling, of course, is Azure.
The key architectural feature of the application is that 2-10 Silverlight clients will join a given "room" (using a duplex Net.TCP connection), and any of those clients can then send a message (for instance, a chat message), which then needs to be pushed in real-time to every other client connected to the same room, using the underlying duplex WCF connection.
Right now, the way the WCF service works is basically to keep in-memory a list of sessions and the rooms that they're associated with, so that when a message from one session comes in, it can automatically send the message to every other session in the room.
This works fine for a single WCF server instance, but it gets complicated if you need to scale it so that multiple WCF instances are in play. If you use network-layer load balancing, of course, you would typically find that only some of the members of your room are on the same server you're on, which means that when it comes time to push out messages to all those members, only some of them would actually get notified.
Apart from Azure, I had been thinking that I would handle it via some sort of application-layer load balancing. For instance, the web server that each client downloads the Silverlight application from might do a primitive round-robin sort of load-balancing, i.e., "OK, everyone in room x, you use WCF instance 1. Everyone in room y, you use WCF instance 2." That sort of thing.
So I have two questions:
(1) Is there any other, better way to architect this, so as to be able to use network-layer load balancing rather than needing to make the application aware of the underlying infrastructure?
(2) If I have to do the application-layer load balancing, what's the best way to handle this in Azure? Do I have to use IaaS (full VMs), or is there a way to do this using PaaS (worker roles)? My understanding is that it's not possible to independently address worker roles, which would make a role-based approach difficult, if not impossible.
SignalR, powered by the Azure Service Bus, may work for you.
http://vasters.com/clemensv/2012/02/13/SignalR+Powered+By+Service+Bus.aspx
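As a rough sketch (names are illustrative, and this uses the later SignalR 2.x hub API rather than the version in the linked post), a SignalR hub maps naturally onto the room model; the Service Bus backplane then fans messages out across multiple server instances:

using Microsoft.AspNet.SignalR;
using System.Threading.Tasks;

// Each client joins a SignalR group per "room"; broadcasts go to the group.
public class RoomHub : Hub
{
    public Task JoinRoom(string room)
    {
        return Groups.Add(Context.ConnectionId, room);
    }

    public void SendToRoom(string room, string message)
    {
        // With the Service Bus backplane configured, this reaches clients
        // connected to any server instance, not just this one.
        Clients.Group(room).receiveMessage(message);
    }
}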

Verify WCF interface is the same between client and server applications

We've got a Windows service that is connected to various client applications via a duplex WCF channel. The client and server applications are installed on different machines, in different locations, potentially at widely different times, and by different people. In addition, the client can be pointed at a different machine running the same Windows service at startup.
Going forward, we know that the interface between the client and the server applications will likely evolve. The application in the field will be administered by local IT personnel, and we have no real control over what version of either of these applications will be installed when or where, or which will be connecting to the other. Since these are installed at various physical locations and by different people, there's a high likelihood that either the client or the server application will be out of date compared to the other.
Since we can't control what versions of the applications in the field are trying to connect to each other, I'd like to be able to verify that the contracts between the client application and the server application are compatible.
Some things I'm looking for (may not be able to realistically get them all):
I don't think I care if the server's interface is newer or older, as long as the server's interface is a super-set of the client's
I want to use something other than an "interface version number". Any developer-kept version number will eventually be forgotten about or missed.
I'd like to use a computed interface comparison if that's possible
How can I do this? Any ideas on how to go about this would be greatly appreciated.
Seems like this is a case of designing your service for versioning. WCF has very good versioning capabilities and extension points. There are good MSDN articles on versioning the service contract and, more specifically, the data contracts; for backward and "forward" compatible versioning, look at the guidance on using the IExtensibleDataObject interface.
If the server's endpoint has metadata publishing enabled, you can programmatically inspect an endpoint's interface by using the MetadataResolver class. This class lets you retrieve the metadata from the server endpoint, and in your case, you would be interested in the ContractDescription which contains the list of all operations. You could then compare the list of operations to your client proxy's endpoint operations.
Of course, the comparison of the operation lists would still need to be implemented; you could simply compare the operation names and fail if one of the client's operations is not found among the server's operations. This would not necessarily cover all incompatibilities, e.g. request/response schema changes.
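A rough, untested sketch of that comparison (the contract type IMyService and the MEX address are placeholders) might look like:

using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;
using System.ServiceModel.Description;

// Pull the server's metadata and check that every client operation exists there.
var serverEndpoints = MetadataResolver.Resolve(
    typeof(IMyService),
    new EndpointAddress("http://server:8000/MyService/mex"));

var serverOperations = new HashSet<string>(
    serverEndpoints.SelectMany(ep => ep.Contract.Operations).Select(op => op.Name));

var clientOperations = ContractDescription.GetContract(typeof(IMyService))
    .Operations.Select(op => op.Name);

// Compatible (by this crude measure) if the server's operations are a superset.
bool contractsCompatible = clientOperations.All(serverOperations.Contains);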
I have not tried implementing any of this by the way, so it's more of a theoretical view of your problem. If you don't want to fiddle with the framework, you could implement a custom operation that would return the list of operation names. This would be of minimal effort but is less standards-compliant.

WCF DAL COMPONENT

I have a DAL that is replicated across multiple apps (I know it's a bad design, but ignore that for now). What I want to do is this:
Create a WCF DAL component that will be accessed by all the desktop apps. Could anyone share their thoughts on the following?
1) I am intending to use TCP binding.
2) What will be the overhead in terms of performance (since one DAL component will be consumed by multiple apps)?
3) Since TCP binding can only be hosted on IIS 7.0, this will be another overhead in terms of hardware and software (or is it possible to have an HTTP binding on top and TCP beneath it, so that I can use IIS 5 or 6)?
4) Can I have multiple endpoints for multiple apps, and is that good from a performance point of view? It would help us create different threads for different client apps, and we could have different contracts in the future as well, so that one application is unaffected by changes in the DAL.
5) What instancing mode is preferred in this case (we are expecting traffic of 100 concurrent users per day)? The DAL already handles this using the singleton design pattern.
Let me know your thoughts on all of the points mentioned above; any further insight you could give me would be great.
Thanks in advance...
Let me answer a few:
1) netTcpBinding is a great binding - very fast, very good in performance - definitely go with that!
3) Either host in IIS 7.0, or self-host: write a little Windows NT Service and handle the hosting yourself. That gives you more control, and the ability to manually start and stop your DAL service (see the sketch below). I wouldn't even bother trying to get NetTcp working on IIS 5/6 with some kind of trick/hack - waste of time.
4) Multiple endpoints of the same binding are neither useful, nor do they help with performance.
5) I would always use "Per-Call". Each service request gets its own instance of the service, the call is handled, and then you're done. That makes programming the WCF service implementation a snap - if you go singleton, to have any performance at all, you need to worry about multi-threaded and thread-safe programming - a mess, really. Don't do it. NO, just don't do it.
A DAL should always be stateless and should follow the "open the database connection as late as possible, do the work, and close the connection as soon as possible" pattern, which is a perfect fit for the per-call instance mode. When your service request comes in, the connection is opened (those are pooled in a connection pool in ADO.NET anyway, on the server side), the work is done, and the connection is closed again.
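On the self-hosting point (3), a bare-bones sketch of hosting the DAL inside a Windows service could look like the following (DataAccessService, IDataAccessService and the address are placeholder names, not your actual types):

using System.ServiceModel;
using System.ServiceProcess;

// A minimal Windows service that self-hosts the DAL over netTcpBinding.
public class DalHostService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        _host = new ServiceHost(typeof(DataAccessService));
        _host.AddServiceEndpoint(typeof(IDataAccessService),
            new NetTcpBinding(), "net.tcp://localhost:9000/dal");
        _host.Open();
    }

    protected override void OnStop()
    {
        if (_host != null)
        {
            _host.Close();
        }
    }
}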

WCF performance improvements

I am developing a WPF application that talks to a server via WCF services over the internet. After profiling the application I noticed a lot of time is being taking up by creating the appropriate WCF client proxy and making the call to the server.
The code on the server is optimised and doesn't take any time to run, yet I am still seeing a 1.5 second delay from when a service is invoked to when it returns to the client.
A few points to give a bit of background:
I am using the ASP.Net membership for security
I will eventually hook into the same server side code through a website
I would eventually like to have offline support in the application
I really need to nail the performance early, though; if the app takes a couple of seconds to come back, that is too long for what I am trying to do.
Can anyone suggest performance tips that will help me please?
The client side proxy in WCF is basically made up of two parts. If you control both ends of the communication - e.g. if you write both the server and the client side - you can optimize this by doing the following steps:
isolate all service and data contracts into their own separate assembly
reference that assembly on both the server side (to implement your service), as well as the client side
Doing so, you don't need to create a "generic" client-side proxy by using Add Service Reference, but instead, you can take that process apart into two separate steps:
first step is to create a ChannelFactory<T> using your service contract, e.g.
ChannelFactory<IMyService> factory = new ChannelFactory<IMyService>();
Because you need access to the service contract on the client side, you need to separate those contracts out into their own assembly that the client can reference. Creating the channel factory is the expensive part - you want to hang on to that channel factory and put it into a shared, cached container of sorts (your main form or something).
the second step is to create the actual channel (the "proxy") from the channel factory:
IMyService proxy = factory.CreateChannel();
This operation is much less resource-intensive and can be performed before every service call without causing much wasted time.
So with a few basic steps, you should be able to significantly simplify and speed up the construction of your service client proxies.
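Put together, a minimal sketch of that pattern (the contract IMyService and the endpoint configuration name are placeholders) could look like this:

using System.ServiceModel;

// Cache one ChannelFactory per contract; create a cheap channel per call.
public static class MyServiceProxyFactory
{
    // Expensive: built once and reused for the lifetime of the app.
    private static readonly ChannelFactory<IMyService> Factory =
        new ChannelFactory<IMyService>("MyServiceEndpoint");

    // Cheap: call this before each service call.
    public static IMyService CreateProxy()
    {
        return Factory.CreateChannel();
    }
}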

SOA and WCF design questions: Is this an unusual system design?

I have found myself responsible for carrying on the development of a system which I did not originally design, and I can't ask the original designers why certain design decisions were taken, as they are no longer here. I am a junior developer when it comes to design issues, so I didn't really know what to ask when I started on the project, which was my first SOA/WCF project.
The system has 7 WCF services, and will grow to 9, each self-hosted in a separate console app/Windows service. All of them are single instance and single threaded. All services have the same OperationContract: they expose a Register() and Send() method. When client services want to connect to another service, they first call Register(), then if successful they do all the rest of their communication with Send(). We have a DataContract that has an enum MessageType and a Content property which can contain other DataContract "payloads." What the service does with the message is determined by the enum MessageType... everything comes through the Send() method and then gets routed to a switch statement... I suspect this is unusual.
Register() and Send() are actually OneWay and Async... ALL results from services are returned to client services by a WCF CallbackContract. I believe that the reason for using CallbackContracts is to facilitate the publish-subscribe model we are using. The problem is that not all of our communication fits publish-subscribe, and using CallbackContracts means we have to include source details in returned result messages so clients can work out what the returned results were originally for... again, clients have switch statements to work out what to do with messages arriving from services, based on the MessageType (and other embedded details).
In terms of topology: the services form "nodes" in a graph. Each service has a hardcoded list of other services it must connect to when it starts, and won't allow client services to Register with it until it has made all of the connections it needs. As an example, we have a LoggingService and a DataAccessService. The DataAccessService is a client of the LoggingService, and so the DataAccessService will attempt to Register with the LoggingService when it starts. Until it can successfully Register, the DataAccessService will not allow any clients to Register with it. The result is that when the system is fired up as a whole, the services start up in a cascading manner. I don't see this as an issue, but is this unusual?
To make matters more complex, one of the system's requirements is that services or "nodes" do not need to be directly registered with one another in order to send messages to one another, but can communicate via indirect links. For example, say we have 3 services A, B and C connected in a chain: A can send a message to C via B... using 2 hops.
I was actually tasked with this and wrote the routing system; it was fun, but the lead left before I could ask why it was really needed. As far as I can see, there is no reason why services cannot just connect directly to the other services they need. What's more, I had to write a reliability system on top of everything, as the requirement was to have reliable messaging across nodes in the system, whereas with simple point-to-point links WCF reliability does the job.
Prior to this project I had only worked on WinForms desktop apps for 3 years, so I didn't know any better. My suspicion is that things are overcomplicated in this project. I guess to summarise, my questions are:
1) Is this idea of a graph topology with messages hopping over indirect links unusual? Why not just connect services directly to the services that they need to access (which in reality is what we do anyway... I don't think we have any messages hopping)?
2) Is exposing just 2 methods in the OperationContract and using a MessageType enum to determine what the message is for/what to do with it unusual? Shouldn't a WCF service expose lots of methods with specific purposes instead, with the client choosing which methods it wants to call?
3) Is doing all communication back to a client via CallbackContracts unusual? Surely sync or async request-response is simpler.
4) Is the idea of a service not allowing client services to connect to it (Register) until it has connected to all of its services (to which it is a client) a sound design? I think this is the only design aspect I agree with; I mean, the DataAccessService should not accept clients until it has a connection with the logging service.
I have so many WCF questions; more will come in later threads. Thanks in advance.
Well, the whole thing seems a bit odd, agreed.
All of them are single instance and single threaded.
That's definitely going to come back and cause massive performance headaches - guaranteed. I don't understand why anyone would want to write a singleton WCF service to begin with (except for a few edge cases, where it does make sense), and if you do have a singleton WCF service, to get any decent performance, it must be multi-threaded (which is tricky programming, and is why I almost always advise against it).
All services have the same OperationContract: they expose a Register() and Send() method.
That's rather odd, too. So anyone calling will first .Register(), and then call .Send() with different parameters several times?? Funny design, really... The SOA assumption is that you design your services to model a set of functionality you want to expose to the outside world, e.g. your CustomerService might have methods like GetCustomerByID, GetAllCustomersByCountry, etc. - depending on what you need.
Having just a single Send() method with parameters which define what is being done seems a bit.... unusual and not very intuitive / clear.
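To make the contrast concrete, here is a rough sketch (MessageType, Customer and the other names are placeholders standing in for the real contracts, not anything from the actual system) of the generic contract the question describes next to the more conventional, intention-revealing style:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// The generic style described in the question: one envelope, a switch on MessageType.
[DataContract]
public class NodeMessage
{
    [DataMember] public MessageType Type { get; set; }   // enum of message kinds
    [DataMember] public object Content { get; set; }     // nested DataContract "payload"
}

[ServiceContract]
public interface INodeService
{
    [OperationContract(IsOneWay = true)]
    void Register();

    [OperationContract(IsOneWay = true)]
    void Send(NodeMessage message);   // what happens depends on message.Type
}

// The more conventional style: each operation says what it does.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    Customer GetCustomerByID(int id);

    [OperationContract]
    IList<Customer> GetAllCustomersByCountry(string country);
}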
Is this idea of a graph topology with messages hopping over indirect links unusual?
Not necessarily. It can make sense to expose just a single interface to the outside world, and then use some internal backend services to do the actual work. .NET 4 will actually introduce a RoutingService in WCF which makes these kinds of scenarios easier. I don't think this is a big no-no.
Is doing all communication back to a client via CallbackContracts unusual?
Yes, unusual, fragile, messy - if you can ever do without it - go for it. If you have mostly simple calls, like GetCustomerByID - make those a standard Request/Response call - the client requests something (by supplying a Customer ID) and gets back a Customer object as a return value. Much much simpler!
If you do have long-running service calls that might take minutes or more to complete, then you might consider one-way calls which just deposit a request into a queue, with that request getting handled later on. Typically, here, you can either deposit the answer into a response queue which the client then checks, or you can have two additional service methods: one which gives you the status of a request (is it done yet?) and a second which retrieves the result(s) of that request.
Hope that helps to get you started!
All services have the same OperationContract: they expose a Register() and Send() method.
Your design seems unusual in some parts, especially in exposing only two operations. I haven't worked with WCF; we use Java. But based on my understanding, the whole purpose of web services is to expose operations that your partners can utilise.
Having only two operations looks like an odd design to me. You generally expose your API using WSDL; in this case the WSDL would add nothing of value to the partners unless you have a lot of documentation. Generally, the operation name should be self-explanatory. Right now your system cannot be used by partners without internal knowledge.
Is doing all communication back to a client via CallbackContracts unusual. Surely sync or asyc request-response is simpler.
Agreed. Async should only be used for long-running processes; async adds the overhead of correlation.