I would like to intercept WCF messages on the client side. I cannot use a MessageInspector for this, because I want to implement a client-side WCF cache: if the request has been cached before, the response should come from the cache, otherwise the request is forwarded to the service.
As I am using netTcpBinding and netNamedPipeBinding, the "simple" way of implementing IRequestChannel is not possible; I need to implement IDuplexSessionChannel. So I am looking for a working sample of how to intercept and replace messages.
But why is this important?
In theory, WCF services, like all other calls that may go over a network, should have coarse-grained interfaces. The reason is obvious: WCF has tons of features to secure the connection, enable reliable messaging, ensure authentication, enable transactions, and so on. This obviously does not come for free.
In practice we often encounter "exceptions" to those rules: services that are called a thousand times within one service method, and other violations of best practices. Of course, the best way to deal with this situation would be to redesign the services. Unfortunately, that rarely happens (you name the reasons...).
That is where caching comes into play. There are, basically, two ways of doing this:
Implementing a solution that requires rewriting parts of your applications. One way of doing this is to write a proxy-caller, e.g. using generics (a minimal sketch follows this list).
Implementing a "transparent" solution, that works with all WCF-services, without any modifications
For obvious reasons the second solution seems more promising. Again, there are two alternatives:
Writing a server-side caching solution, using WCF behaviors and IOperationInvoker. This is pretty straightforward to accomplish, and the web gives you some good samples of how to do it (a rough sketch follows this list). Such a solution is acceptable if the service to be cached is expensive in its methods, e.g. loading lots of information from a database, so that looking up the result in the cache is much faster than performing all the necessary calculations and IO operations. However, the WCF call is still there, with all the overhead that comes with it. The advantage is that you only need to define the behavior once in your service, and all clients of this service will benefit from the cache.
Writing a client-side caching solution that avoids the WCF call entirely if the response is already in the cache. This, of course, avoids all the WCF overhead, but requires defining the (endpoint) behavior in all clients that access the services to be cached (e.g. any master data service or other "slowly changing dimension" services).
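To make the server-side option concrete, a rough sketch of a caching invoker is shown below. The attribute name, the cache key strategy and the missing expiration policy are my own simplifications, not a production recipe; the invoker is attached per operation via an attribute that doubles as an IOperationBehavior.

    using System;
    using System.Collections.Generic;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;

    [AttributeUsage(AttributeTargets.Method)]
    public class CacheResultAttribute : Attribute, IOperationBehavior
    {
        public void ApplyDispatchBehavior(OperationDescription operationDescription,
                                          DispatchOperation dispatchOperation)
        {
            // Wrap the invoker WCF would have used anyway.
            dispatchOperation.Invoker = new CachingInvoker(dispatchOperation.Invoker);
        }

        public void AddBindingParameters(OperationDescription operationDescription,
                                         BindingParameterCollection bindingParameters) { }
        public void ApplyClientBehavior(OperationDescription operationDescription,
                                        ClientOperation clientOperation) { }
        public void Validate(OperationDescription operationDescription) { }
    }

    public class CachingInvoker : IOperationInvoker
    {
        private readonly IOperationInvoker _inner;
        private readonly Dictionary<string, object> _cache = new Dictionary<string, object>();
        private readonly object _sync = new object();

        public CachingInvoker(IOperationInvoker inner) { _inner = inner; }

        public object Invoke(object instance, object[] inputs, out object[] outputs)
        {
            // Naive key: concatenated input values. A real implementation needs a
            // robust key and an expiration policy.
            string key = string.Join("|", Array.ConvertAll(inputs, i => i == null ? "" : i.ToString()));
            lock (_sync)
            {
                object cached;
                if (_cache.TryGetValue(key, out cached))
                {
                    outputs = new object[0];   // only valid for operations without out/ref parameters
                    return cached;
                }
            }
            object result = _inner.Invoke(instance, inputs, out outputs);
            lock (_sync) { _cache[key] = result; }
            return result;
        }

        public object[] AllocateInputs() { return _inner.AllocateInputs(); }
        public bool IsSynchronous { get { return true; } }

        public IAsyncResult InvokeBegin(object instance, object[] inputs,
                                        AsyncCallback callback, object state)
        { throw new NotSupportedException(); }
        public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
        { throw new NotSupportedException(); }
    }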
The second solution is much more complicated, as you need a channel factory (and/or a listener) and an implementation of the channel itself. The channel could be an IRequestChannel or an IDuplexSessionChannel. Again, you will find a working solution for the first type on the web, but that, naturally, will not work for netTcpBinding or netNamedPipeBinding, which use IDuplexSessionChannel. That is why I am looking for a sample that illustrates how to do it right.
Just to give an impression of the benefits: for one solution with a long-running service method (approx. 150,000 calls to other services within that method), the execution times are:
netTcpBinding, no caching: 65 minutes
netNamedPipeBinding, no caching: 40 minutes
netNamedPipeBinding, server-side-caching: 27 minutes
netNamedPipeBinding, client-side-caching: 19 minutes
The number of calls drops from 150,000 to about 40,000 in that scenario. However, my solution for client-side caching will not work well for duplex channels and other special communication types. Therefore, I am looking for a sample.
Any help would be appreciated.
Related
What is a best practice for designing WCF services with regard to using more or fewer operations in a single service?
Taking into consideration that a service must be generic and business oriented, I have encountered some SOAP services at work that have too many XML elements per operation in their contracts and too many operations in a single service.
From my point of view, without testing, I think the number of operations within a service will not have any impact on performance in the middleware, since a response is built specifically for each operation, containing only the XML elements concerning that operation.
Or are there any issues with having too many operations within a SOAP service?
There is one issue: trying to do a metadata exchange or proxy generation against a service with many methods (probably in the thousands). Since it will try to process the entire thing at once, it could time out, or even hit an OutOfMemoryException.
I don't think it will impact performance much, but the important thing is that methods must be logically grouped into different services. A service with a large number of methods usually means they are not logically factored.
I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled.
But they are not being called concurrently.
If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I want to avoid that cost since it is functionally unnecessary for my scenario. I have no session state, and nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible.
I found this article on MSDN that states:
"While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete."
Firstly, I'm not sending large messages (just a lot of small ones since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation. It says they "might not" support writing more than one message but doesn't explain the scenarios under which they would support concurrent messages.
Can anyone shed some light on this?
Addendum: I am also considering creating a pool of channels that the web server uses to fulfill requests. But again, I see no reason why my existing approach should block and I'd rather avoid the complexity if possible.
After much ado, this all came down to the fact that I wasn't calling Open explicitly on the channel before using it. Apparently an implicit Open can preclude concurrency in some scenarios.
You can cache the WCF proxy (i.e. its channel factory), but still create a channel for each service call - this will ensure concurrency, is not very expensive compared with creating the factory from scratch, and re-authentication for each call will not be necessary. This is explained on Wenlong Dong's blog - "Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices" (a much better source of WCF information and guidance than MSDN).
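Roughly, the pattern looks like this (contract, operation and endpoint names are placeholders): the factory is created once, a fresh channel is created per call, and the channel is opened explicitly before use.

    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        void DoWork();
    }

    public static class MyServiceCaller
    {
        // Created once and cached - creating the factory is the expensive part.
        private static readonly ChannelFactory<IMyService> Factory =
            new ChannelFactory<IMyService>("MyServiceEndpoint");   // endpoint name from config

        public static void CallService()
        {
            IMyService channel = Factory.CreateChannel();
            var client = (IClientChannel)channel;
            client.Open();              // explicit Open avoids the implicit-open request serialization
            try
            {
                channel.DoWork();       // placeholder operation
                client.Close();
            }
            catch
            {
                client.Abort();         // never Close a faulted channel
                throw;
            }
        }
    }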
Just for completeness: Here is a blog entry explaining the observed behavior of request serialization when not opening the channel explicitly:
http://blogs.msdn.com/b/wenlong/archive/2007/10/26/best-practice-always-open-wcf-client-proxy-explicitly-when-it-is-shared.aspx
I'm currently playing around a little with WCF, and I've stumbled on a question where I'm not sure if I'm on the right track.
Let's assume a simple setup that looks like this: client -> service1 -> service2.
The communication is tcp-based.
What I'm not sure about is whether it makes sense for service1 to cache the client proxy for service2. That way I might get multi-threaded access to that proxy, and I would have to deal with it.
I'd like to take advantage of the TCP session to get better performance, but I'm not sure whether this "architecture" is supported by WCF/the network/whatever at all. The problem I see is that all the communication goes over the same channel if I'm not using locks or some other synchronization.
I guess the better idea is to cache the proxy in a threadstatic variable.
But before I do that, I wanted to confirm that it's really not a good idea to have only one proxy instance.
tia
Martin
If you don't know that you have a performance problem, then why worry about caching? You're opening yourself to the risk of improperly implementing multithreading code, and without any clear, measurable benefit.
Have you measured performance yet, or profiled the application to see where it's spending its time? If not, then when you do, you may well find that the overhead of multiple TCP sessions is not where your performance problems lie. You may wish you had the time to optimize some other part of your application, but you will have spent that time optimizing something that didn't need to be optimized.
I am already using such a structure. I have one service that collaborates with some other services to realise the implementation. Of course, in my case the client calls a one-way method of the first service. I am getting very good benefit from it. I have also configured it to limit the number of concurrent calls in some cases.
Yes, that architecture is supported by WCF. I deal with applications every day that use similar structures, using NetTCPBinding.
The biggest thing to worry about is the ConcurrencyMode of the various services involved, and making sure that they do not block unnecessarily. It is very easy to get into a scenario where you will be guaranteed timeouts, or at the least have poor performance due to multiple, synchronous calls across service boundaries. Even OneWay calls are not guaranteed to immediately return.
Be careful with [ThreadStatic]: .NET can switch the executing thread, so the variable can end up null.
For the session... perhaps you could use session-enabled calls:
http://msdn.microsoft.com/en-us/library/ms733040.aspx
But I would not recommend using it if you do not have a performance issue. I would use the normal way, or if service1 is just for forwarding, you could get that functionality easily with the routing features in 4.0:
http://www.sdn.nl/SDN/Artikelen/tabid/58/view/View/ArticleID/2979/Whats-New-in-WCF-40.aspx
Regards
Firstly, make sure you know about the behaviour of ThreadStatic in ASP.NET applications:
http://piers7.blogspot.com/2005/11/threadstatic-callcontext-and_02.html
The same thread that starts your request may not be the same thread that finishes it. Basically, the only safe place for per-request storage in ASP.NET applications is HttpContext. The next obvious approach would be to create a wrapper client that manages your WCF client proxy and ensures each IO request is thread-safe using locks.
My personal preference, though, would be to use a pool of proxy clients: whenever you need one, pop it off the pool, and when you're finished with it, put it back.
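A rough sketch of such a pool, assuming .NET 4's ConcurrentBag (on earlier versions a locked Queue would do); all names here are made up:

    using System.Collections.Concurrent;
    using System.ServiceModel;

    // Channels are created from one cached factory, handed out per request
    // and returned to the pool afterwards.
    public class ProxyPool<TChannel> where TChannel : class
    {
        private readonly ChannelFactory<TChannel> _factory;
        private readonly ConcurrentBag<TChannel> _pool = new ConcurrentBag<TChannel>();

        public ProxyPool(string endpointConfigurationName)
        {
            _factory = new ChannelFactory<TChannel>(endpointConfigurationName);
        }

        public TChannel Take()
        {
            TChannel channel;
            while (_pool.TryTake(out channel))
            {
                if (((IClientChannel)channel).State == CommunicationState.Opened)
                    return channel;
                ((IClientChannel)channel).Abort();   // discard channels that faulted while pooled
            }
            channel = _factory.CreateChannel();
            ((IClientChannel)channel).Open();        // open explicitly before first use
            return channel;
        }

        public void Return(TChannel channel)
        {
            var client = (IClientChannel)channel;
            if (client.State == CommunicationState.Opened)
                _pool.Add(channel);                  // reusable
            else
                client.Abort();                      // faulted channels are not reusable
        }
    }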
I have found myself responsible for carrying on the development of a system which I did not originally design and can't ask the original designers why certain design decisions were taken, as they are no longer here. I am a junior developer on design issues so didn't really know what to ask when I started on the project which was my first SOA / WCF project.
The system has 7 WCF services, which will grow to 9, each self-hosted in a separate console app/Windows service. All of them are single instance and single threaded. All services have the same OperationContract: they expose a Register() and Send() method. When client services want to connect to another service, they first call Register(), then if successful they do all the rest of their communication with Send(). We have a DataContract that has an enum MessageType and a Content property which can contain other DataContract "payloads." What the service does with the message is determined by the enum MessageType... everything comes through the Send() method and then gets routed to a switch statement... I suspect this is unusual.
Register() and Send() are actually OneWay and async... ALL results from services are returned to client services by a WCF CallbackContract. I believe that the reason for using CallbackContracts is to facilitate the Publish-Subscribe model we are using. The problem is that not all of our communication fits publish-subscribe, and using CallbackContracts means we have to include source details in returned result messages so clients can work out what the returned results were originally for... again, clients have switch statements to work out what to do with messages arriving from services, based on the MessageType (and other embedded details).
In terms of topology: the services form "nodes" in a graph. Each service has a hardcoded list of other services it must connect to when it starts, and won't allow client services to "Register" with it until it has made all of the connections it needs. As an example, we have a LoggingService and a DataAccessService. The DataAccessService is a client of the LoggingService, and so the DataAccessService will attempt to Register with the LoggingService when it starts. Until it can successfully Register, the DataAccessService will not allow any clients to Register with it. The result is that when the system is fired up as a whole, the services start up in a cascading manner. I don't see this as an issue, but is this unusual?
To make matters more complex, one of the systems requirements is that services or "nodes" do not need to be directly registered with one another in order to send messages to one another, but can communicate via indirect links. For example, say we have 3 services A, B and C connected in a chain, A can send a message to C via B...using 2 hops.
I was actually tasked with this and wrote the routing system; it was fun, but the lead left before I could ask why it was really needed. As far as I can see, there is no reason why services cannot just connect directly to the other services they need. What's more, I had to write a reliability system on top of everything, as the requirement was to have reliable messaging across nodes in the system, whereas with simple point-to-point links WCF reliability does the job.
Prior to this project I had only worked on WinForms desktop apps for 3 years, so I didn't know any better. My suspicion is that things are overcomplicated in this project. To summarise, my questions are:
1) Is this idea of a graph topology with messages hopping over indirect links unusual? Why not just connect services directly to the services that they need to access (which in reality is what we do anyway... I don't think we have any messages hopping)?
2) Is exposing just 2 methods in the OperationContract and using a MessageType enum to determine what the message is for / what to do with it unusual? Shouldn't a WCF service expose lots of methods with specific purposes instead, with the client choosing which methods it wants to call?
3) Is doing all communication back to a client via CallbackContracts unusual? Surely sync or async request-response is simpler.
4) Is the idea of a service not allowing client services to connect to it (Register) until it has connected to all of its own services (to which it is a client) a sound design? I think this is the only design aspect I agree with; I mean, the DataAccessService should not accept clients until it has a connection to the LoggingService.
I have so many WCF questions, more will come in later threads. Thanks in advance.
Well, the whole thing seems a bit odd, agreed.
"All of them are single instance and single threaded."
That's definitely going to come back and cause massive performance headaches - guaranteed. I don't understand why anyone would want to write a singleton WCF service to begin with (except for a few edge cases, where it does make sense), and if you do have a singleton WCF service, to get any decent performance, it must be multi-threaded (which is tricky programming, and is why I almost always advise against it).
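Just to illustrate what "multi-threaded" means here - a hedged sketch, with made-up names loosely echoing the Register() from the question. A singleton that allows concurrent calls has to do its own locking:

    using System.Collections.Generic;
    using System.ServiceModel;

    [ServiceContract]
    public interface IRegistrationService
    {
        [OperationContract]
        void Register(string clientId);
    }

    // A singleton that actually allows concurrent calls. With ConcurrencyMode.Multiple
    // the service author is responsible for all synchronization - that is the
    // "tricky programming" referred to above.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class RegistrationService : IRegistrationService
    {
        private readonly HashSet<string> _clients = new HashSet<string>();
        private readonly object _sync = new object();

        public void Register(string clientId)
        {
            lock (_sync)              // manual locking is now your problem
            {
                _clients.Add(clientId);
            }
        }
    }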
"All services have the same OperationContract: they expose a Register() and Send() method."
That's rather odd, too. So anyone calling will first .Register(), and then call .Send() with different parameters several times?? Funny design, really.... The SOA assumption is that you design your services to be the model of a set of functionality you want to expose to the outside world, e.g. your CustomerService might have methods like GetCustomerByID, GetAllCustomersByCountry, etc. - depending on what you need.
Having just a single Send() method with parameters which define what is being done seems a bit.... unusual and not very intuitive / clear.
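For contrast, a conventional contract might look something like this (purely illustrative names):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Sketch only - the point is that each operation has a self-explanatory name
    // and typed parameters/return values, rather than one generic Send().
    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        Customer GetCustomerByID(int customerId);

        [OperationContract]
        Customer[] GetAllCustomersByCountry(string countryCode);
    }

    [DataContract]
    public class Customer
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
        [DataMember] public string Country { get; set; }
    }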
"Is this idea of a graph topology with messages hopping over indirect links unusual?"
Not necessarily. It can make sense to expose just a single interface to the outside world, and then use some internal backend services to do the actual work. .NET 4 will actually introduce a RoutingService in WCF which makes these kinds of scenarios easier. I don't think this is a big no-no.
"Is doing all communication back to a client via CallbackContracts unusual?"
Yes, unusual, fragile, messy - if you can ever do without it - go for it. If you have mostly simple calls, like GetCustomerByID - make those a standard Request/Response call - the client requests something (by supplying a Customer ID) and gets back a Customer object as a return value. Much much simpler!
If you do have long-running service calls that might take minutes or more to complete, then you might consider One-Way calls which just deposit a request into a queue, and that request gets handled later on. Typically, here, you can either deposit the answer into a response queue which the client then checks, or you can have two additional service methods: one which gives you the status of a request (is it done yet?) and a second to retrieve the result(s) of that request.
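Sketched out, such a contract might look roughly like this (all names invented):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ILongRunningService
    {
        // Fire-and-forget: the request is queued and processed later.
        [OperationContract(IsOneWay = true)]
        void SubmitRequest(Guid requestId, string payload);

        // Polled by the client: "is it done yet?"
        [OperationContract]
        bool IsRequestComplete(Guid requestId);

        // Fetch the result once IsRequestComplete returns true.
        [OperationContract]
        string GetRequestResult(Guid requestId);
    }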
Hope that helps to get you started !
All services have the same OperationContract: they expose a Register() and Send() method.
Your design seems unusual in some parts, especially exposing only two operations. I haven't worked with WCF; we use Java. But based on my understanding, the whole purpose of web services is to expose operations that your partners can utilise.
Having only two operations looks like an odd design to me. You generally expose your API using WSDL. In this case the WSDL would add nothing of value for the partners unless you have a lot of documentation. Generally, the operation name should be self-explanatory. Right now your system cannot be used by partners without internal knowledge.
Is doing all communication back to a client via CallbackContracts unusual? Surely sync or async request-response is simpler.
Agree with you. Async should only be used for long running processes. Async adds the overhead of correlation.
We can use polling to find out about updates from some source, for example clients connected to a web server. WCF provides a nifty feature in the form of duplex contracts, with which I can maintain a connection to a client and make invocations on that connection at will.
Some peeps in the office were discussing the merits of both solutions, and I wanted to get feedback on when each strategy is best used.
I would use an event-based mechanism instead of polling. In WCF, you can do this easily by following the Publish-Subscribe framework that Juval Lowy provides at his website, IDesign.net.
Depends partly on how many users you have.
Say you have 1,000,000 users: you will have problems maintaining that many sessions.
But if your system can respond to 1,000 poll requests a second, then each client can poll every 1,000 seconds.
I think Shiraz nailed this one, but I wanted to say two more things.
I've had trouble with Duplex contracts. You have to have all of your ducks in a row with regards to the callback channel... you have to check it to make sure it's open, etc. The IDesign.net stuff would be a minimum amount of plumbing code you'll have to include.
If it makes sense for your solution (this is only appropriate in certain situations), the MSMQ binding allows a client to send data to a service in an async manner (like Duplex), but the service isn't "polling" for messages... it gets notified when one enters the queue through some under-the-covers plumbing.
This sort of forces you to turn the communication around (client becomes server, server becomes client), but if the majority of the communication is one-way, this would provide a lot of benefits. The other advantage here is obviously the queued communication - the server can be down and not miss any messages... it'll pick 'em up when it comes back online.
Something to think about.
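For reference, a queued one-way setup might look roughly like this (contract name and queue address are invented, the queue itself has to exist, and security/assurances are relaxed only to keep the sketch minimal):

    using System.ServiceModel;

    // Operations on a queued (netMsmqBinding) endpoint must be one-way -
    // there is no reply channel on a queue.
    [ServiceContract]
    public interface IOrderSubmission
    {
        [OperationContract(IsOneWay = true)]
        void SubmitOrder(string orderXml);
    }

    public static class OrderSubmissionClient
    {
        public static void Send(string orderXml)
        {
            var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
            binding.ExactlyOnce = false;   // so a non-transactional queue suffices for this sketch
            var address = new EndpointAddress("net.msmq://localhost/private/orders");
            var factory = new ChannelFactory<IOrderSubmission>(binding, address);

            IOrderSubmission channel = factory.CreateChannel();
            ((IClientChannel)channel).Open();
            channel.SubmitOrder(orderXml); // returns as soon as the message is queued
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }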