Can having more WCF methods in a service decrease performance? - wcf

What is the best practice for designing WCF services with regard to the number of operations exposed by a single service?
Taking into consideration that a service should be generic and business-oriented, I have encountered some SOAP services at work that have too many XML elements per operation in their contracts and too many operations in a single service.
From my point of view, without testing, I think the number of operations within a service will not have any impact on performance in the middleware, since a response is built specifically for each operation and contains only the XML elements concerning that operation.
Or are there any issues with having too many operations within a SOAP service?

There is an issue, and that is when trying to do a metadata exchange or proxy creation against a service with many methods (probably in the thousands). Since it will be trying to process the entire thing at once, it could time out, or even hit an OutOfMemoryException.

I don't think it will impact performance much, but the important thing is that methods must be logically grouped into different services. A service with a large number of methods usually means it is not logically factored.
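To illustrate the factoring point, here is a hypothetical sketch (all contract and type names are invented): rather than one catch-all contract, split the operations into smaller contracts grouped by business concern.

```csharp
using System.ServiceModel;
using System.Runtime.Serialization;

[DataContract] public class Customer { [DataMember] public int Id { get; set; } }
[DataContract] public class Order { [DataMember] public int Id { get; set; } }

// Before: one catch-all contract mixing unrelated concerns.
[ServiceContract]
public interface IEverythingService
{
    [OperationContract] Customer GetCustomer(int id);
    [OperationContract] void PlaceOrder(Order order);
    // ...hundreds more operations...
}

// After: smaller contracts, each centered on one business area. A
// metadata exchange or proxy generation now only has to deal with
// one small contract at a time.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract] Customer GetCustomer(int id);
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract] void PlaceOrder(Order order);
}
```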

Implementing IDuplexSessionChannel for Message Interception & Replacement

I would like to intercept WCF messages on the client side. I cannot use a MessageInspector for this, because I would like to implement a client-side WCF cache. If the request has been cached before, the response should come from the cache; otherwise the request is forwarded to the service.
As I am using netTcpBinding and netNamedPipeBinding, the "simple" way of implementing IRequestChannel is not possible. I need to implement IDuplexSessionChannel instead. Now I am looking for a working sample of how to intercept and replace messages.
But why is this important?
In theory, WCF services, like all other calls that may go over a network, should have coarse-grained interfaces. The reason is obvious: WCF has tons of features to secure the connection, enable reliable messaging, ensure authentication, enable transactions (and so on), and this does not come for free.
In practice we often encounter "exceptions" to those rules: services that are called a thousand times within one service method, and other violations of best practices. Of course, the best way to deal with this situation would be to redesign the services. Unfortunately, that rarely happens (you name the reasons...).
That is where caching comes into play. There are, basically, two ways of doing this:
1. Implementing a solution that requires rewriting parts of your applications. One way of doing this is to write a proxy-caller (e.g. using generics), as sketched below.
2. Implementing a "transparent" solution that works with all WCF services, without any modifications.
For obvious reasons the second solution seems more promising. Again, there are two alternatives:
1. Writing a server-side caching solution, using WCF behaviors and IOperationInvoker. This is pretty straightforward to accomplish, and the web gives you some good samples of how to do it. Such a solution is acceptable if the service to be cached is pretty expensive in its methods, e.g. loading lots of information from a database, so that looking up the result in your cache is much faster than performing all the necessary calculations and IO operations. However, the WCF call is still there, with all the overhead that comes with it. The advantage is that you only need to define the behavior once in your service, and all clients of this service will benefit from the cache. A sketch of such an invoker follows below.
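For illustration, a hedged sketch of such a caching invoker. The wiring is omitted (an IOperationBehavior would replace DispatchOperation.Invoker with this class), and out/ref parameters, faults and cache expiry are deliberately ignored:

```csharp
using System;
using System.Collections.Concurrent;
using System.ServiceModel.Dispatcher;

// Wraps the original invoker and short-circuits the service call
// when the same inputs have been seen before.
public class CachingOperationInvoker : IOperationInvoker
{
    private readonly IOperationInvoker _inner;
    private readonly ConcurrentDictionary<string, object> _cache =
        new ConcurrentDictionary<string, object>();

    public CachingOperationInvoker(IOperationInvoker inner) { _inner = inner; }

    public object[] AllocateInputs() { return _inner.AllocateInputs(); }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        string key = string.Join("|", inputs);  // naive key from the inputs
        if (_cache.TryGetValue(key, out object hit))
        {
            outputs = new object[0];            // out/ref params unsupported here
            return hit;
        }
        object result = _inner.Invoke(instance, inputs, out outputs);
        _cache.TryAdd(key, result);
        return result;
    }

    public bool IsSynchronous { get { return _inner.IsSynchronous; } }

    // The async path simply delegates; a full solution would cache here too.
    public IAsyncResult InvokeBegin(object instance, object[] inputs,
        AsyncCallback callback, object state)
        => _inner.InvokeBegin(instance, inputs, callback, state);

    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
        => _inner.InvokeEnd(instance, out outputs, result);
}
```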
2. Writing a client-side caching solution that prevents the WCF call entirely if the response is already in the cache. This, of course, avoids all the WCF overhead, but requires defining the (endpoint) behavior in every client that accesses the services to be cached (e.g. any master-data service or other "slowly changing dimension" services).
The second solution is much more complicated, as you need a channel factory (and/or a listener) and an implementation of the channel itself. The channel could be an IRequestChannel or an IDuplexSessionChannel. Again, you will find a working solution for the first type on the web, but that, naturally, will not work for netTcpBinding or netNamedPipeBinding, which use IDuplexSessionChannel. That is why I am looking for a sample that illustrates how to do it right.
Just to give an impression of the benefits, here are the execution times for one solution with a long-running service method (approx. 150,000 calls to other services within that method):
netTcpBinding, no caching: 65 minutes
netNamedPipeBinding, no caching: 40 minutes
netNamedPipeBinding, server-side-caching: 27 minutes
netNamedPipeBinding, client-side-caching: 19 minutes
The number of calls drops from 150,000 to about 40,000 in that scenario. However, my solution for client-side caching will not work well for duplex channels and other special communication types. Therefore, I am looking for a sample.
Any help would be appreciated.

WCF best practises in regards to MaxItemsInObjectGraph

I have run into the exception below a few times in the past and each time I just change the configuration to allow a bigger object graph.
"Maximum number of items that can be serialized or deserialized in an object graph is '65536'. Change the object graph or increase the MaxItemsInObjectGraph quota."
However, I was speaking to a colleague and he said that WCF should not be used to send large amounts of data; instead, the data should be bite-sized.
So what is the general consensus about large amounts of data being returned?
In my experience using synchronous web service operations to transmit large data sets or files leads to many different problems.
Firstly, you have performance-related issues: serialization time at the service boundary. Then you have availability issues: incoming requests can time out waiting for a response, or may be rejected because there is no dispatcher thread free to service the request.
It is much better to delegate large data transfer and processing to some offline asynchronous process.
For example, in your situation, you send a request and the service returns a URI to the eventual resource you want. You may have to wait for the resource to become available, but you can code your consumer appropriately.
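As a hypothetical sketch of that pattern (all names are invented), the contract might look something like this: the first call only enqueues the work and returns a handle, and the client then polls and reads the result in small pieces.

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IReportService
{
    // Kicks off the long-running work and returns immediately with a
    // ticket (or URI) identifying the eventual resource.
    [OperationContract]
    string BeginReport(string reportName);

    // Cheap status check; the client polls until the resource is ready.
    [OperationContract]
    bool IsReady(string ticket);

    // Small, paged reads instead of one huge serialized object graph.
    [OperationContract]
    byte[] ReadChunk(string ticket, long offset, int count);
}
```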
I haven't got any concrete examples but this article seems to point to WCF being used for large data sets, and I am aware of people using it for images.
Personally, I have always had to increase this property for any real world data.
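For what it's worth, the quota can be raised in code as well as in config; here is a sketch for the client side (the helper name is mine). The service side has the same quota, settable via ServiceBehaviorAttribute or the <dataContractSerializer> element.

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;

public static class ObjectGraphQuota
{
    // Raises MaxItemsInObjectGraph on every operation of a client-side
    // ChannelFactory before the channel is created.
    public static void Raise<T>(ChannelFactory<T> factory, int maxItems)
    {
        foreach (OperationDescription op in factory.Endpoint.Contract.Operations)
        {
            var serializerBehavior =
                op.Behaviors.Find<DataContractSerializerOperationBehavior>();
            if (serializerBehavior == null)
            {
                serializerBehavior = new DataContractSerializerOperationBehavior(op);
                op.Behaviors.Add(serializerBehavior);
            }
            serializerBehavior.MaxItemsInObjectGraph = maxItems;
        }
    }
}
```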

Concurrent WCF calls via shared channel

I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled.
But they are not being called concurrently.
If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I want to avoid that cost, since it is functionally unnecessary for my scenario. I have no session state, nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible.
I found this article on MSDN that states:
"While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete."
Firstly, I'm not sending large messages (just a lot of small ones, since I'm doing load testing), but I am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation: it says they "might not" support writing more than one message, but doesn't explain the scenarios under which they would support concurrent messages.
Can anyone shed some light on this?
Addendum: I am also considering creating a pool of channels that the web server uses to fulfill requests. But again, I see no reason why my existing approach should block and I'd rather avoid the complexity if possible.
After much ado, this all came down to the fact that I wasn't calling Open explicitly on the channel before using it. Apparently an implicit Open can preclude concurrency in some scenarios.
You can cache the ChannelFactory but still create a channel for each service call - this will ensure concurrency, is not very expensive in comparison to creating the factory from scratch, and re-authentication for each call will not be necessary. This is explained on Wenlong Dong's blog - "Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices" (a much better source of WCF information and guidance than MSDN).
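A minimal sketch of that pattern, combined with the explicit Open fix from the accepted answer (the contract, class and endpoint names here are placeholders):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IAppService
{
    [OperationContract] int Ping(int value);
}

// Cache the expensive ChannelFactory once; create a cheap channel per
// call and Open it explicitly so concurrent calls are not serialized
// by the implicit Open.
public class AppTierClient
{
    private static readonly ChannelFactory<IAppService> Factory =
        new ChannelFactory<IAppService>("appTierEndpoint"); // endpoint config name

    public TResult Call<TResult>(Func<IAppService, TResult> action)
    {
        IAppService channel = Factory.CreateChannel();
        var client = (IClientChannel)channel;
        try
        {
            client.Open();      // explicit Open, per the blog entry below
            TResult result = action(channel);
            client.Close();
            return result;
        }
        catch
        {
            client.Abort();     // never Close a faulted channel
            throw;
        }
    }
}
```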
Just for completeness: Here is a blog entry explaining the observed behavior of request serialization when not opening the channel explicitly:
http://blogs.msdn.com/b/wenlong/archive/2007/10/26/best-practice-always-open-wcf-client-proxy-explicitly-when-it-is-shared.aspx

Concurrent access to WCF client proxy

I'm currently playing around a little with WCF, and in doing so I stumbled on a question where I'm not sure if I'm on the right track.
Let's assume a simple setup that looks like this: client -> service1 -> service2.
The communication is tcp-based.
What I'm not sure about is whether it makes sense for service1 to cache the client proxy for service2. That means I might get multi-threaded access to that proxy, and I would have to deal with it.
I'd like to take advantage of the TCP session to get better performance, but I'm not sure if this "architecture" is supported by WCF/the network/whatever at all. The problem I see is that all the communication goes over the same channel if I'm not using locks or some other synchronization.
I guess the better idea is to cache the proxy in a threadstatic variable.
But before I do that, I wanted to confirm that it's really not a good idea to have only one proxy instance.
tia
Martin
If you don't know that you have a performance problem, then why worry about caching? You're opening yourself to the risk of improperly implementing multithreading code, and without any clear, measurable benefit.
Have you measured performance yet, or profiled the application to see where it's spending its time? If not, then when you do, you may well find that the overhead of multiple TCP sessions is not where your performance problems lie. You may wish you had the time to optimize some other part of your application, but you will have spent that time optimizing something that didn't need to be optimized.
I am already using such a structure. I have one service that collaborates with some other services to realize the implementation. Of course, in my case the client calls a one-way method on the first service. I am getting very good benefit from it. I have also configured it to limit the number of concurrent calls in some cases.
Yes, that architecture is supported by WCF. I deal with applications every day that use similar structures, using NetTCPBinding.
The biggest thing to worry about is the ConcurrencyMode of the various services involved, and making sure that they do not block unnecessarily. It is very easy to get into a scenario where you are guaranteed timeouts, or at the least get poor performance due to multiple synchronous calls across service boundaries. Even OneWay calls are not guaranteed to return immediately.
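For reference, a sketch of the sort of service configuration that avoids instance-level blocking (the names are placeholders; this is per-call and multi-threaded, so any shared state would need its own locking):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderProcessingService
{
    // One-way: the caller does not wait for a response, though even
    // one-way calls can block if the service's queues are full.
    [OperationContract(IsOneWay = true)]
    void Process(int orderId);
}

[ServiceBehavior(
    InstanceContextMode = InstanceContextMode.PerCall,
    ConcurrencyMode = ConcurrencyMode.Multiple)]
public class OrderProcessingService : IOrderProcessingService
{
    public void Process(int orderId)
    {
        // Stateless work only; nothing here calls another service
        // synchronously, which is the main timeout trap to avoid.
    }
}
```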
Be careful with [ThreadStatic]: .NET can switch the thread under you, so the variable can come back null.
For sessions... perhaps you could use session-enabled calls:
http://msdn.microsoft.com/en-us/library/ms733040.aspx
But I would not recommend it if you do not have any performance issue. I would use the normal way, or if service1 is just for forwarding, you could get that functionality easily with the routing support in WCF 4.0:
http://www.sdn.nl/SDN/Artikelen/tabid/58/view/View/ArticleID/2979/Whats-New-in-WCF-40.aspx
Regards
Firstly, make sure you know about the behaviour of ThreadStatic in ASP.NET applications:
http://piers7.blogspot.com/2005/11/threadstatic-callcontext-and_02.html
The same thread that started your request may not be the same thread that finishes it. Basically, the only safe place for thread-local storage in ASP.NET applications is inside HttpContext. The next obvious approach would be to create a wrapper client that manages your WCF client proxy and makes each IO request thread-safe using locks, as in the sketch below.
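A minimal sketch of that lock-based wrapper (the contract and class names are invented):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract] string GetOrder(int id);
}

// Serializes every I/O request on the shared proxy through one lock,
// so two threads never write to the channel at the same time.
public class SafeOrderClient
{
    private readonly object _gate = new object();
    private readonly IOrderService _proxy;

    public SafeOrderClient(IOrderService proxy) { _proxy = proxy; }

    public string GetOrder(int id)
    {
        lock (_gate)   // at most one in-flight call per proxy
        {
            return _proxy.GetOrder(id);
        }
    }
}
```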
Although my personal preference would be to use a pool of proxy clients: whenever you need one, pop it off the pool queue, and when you're finished with it, put it back on. Something along these lines:
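(A hedged sketch; all names are invented, faulted channels are discarded rather than pooled, and pool sizing/back-pressure are ignored.)

```csharp
using System;
using System.Collections.Concurrent;
using System.ServiceModel;

[ServiceContract]
public interface IBackendService
{
    [OperationContract] string Fetch(int id);
}

public class ProxyPool : IDisposable
{
    private readonly ChannelFactory<IBackendService> _factory;
    private readonly ConcurrentBag<IBackendService> _pool =
        new ConcurrentBag<IBackendService>();

    public ProxyPool(string endpointName)
    {
        _factory = new ChannelFactory<IBackendService>(endpointName);
    }

    public IBackendService Take()
    {
        if (_pool.TryTake(out IBackendService channel)) return channel;
        IBackendService fresh = _factory.CreateChannel();
        ((IClientChannel)fresh).Open();   // open explicitly before first use
        return fresh;
    }

    public void Return(IBackendService channel)
    {
        var client = (IClientChannel)channel;
        if (client.State == CommunicationState.Opened) _pool.Add(channel);
        else client.Abort();              // never pool faulted/closed channels
    }

    public void Dispose()
    {
        while (_pool.TryTake(out IBackendService c)) ((IClientChannel)c).Abort();
        _factory.Close();
    }
}
```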

How well will WCF scale to a large number of client users?

Does anyone have any experience with how well web services built with Microsoft's WCF will scale to a large number of users?
The level I'm thinking of is in the region of 1000+ client users connecting to a collection of WCF services providing the business logic for our application, and these talking to a database - similar to a traditional 3-tier architecture.
Are there any particular gotchas that have slowed down performance, or any design lessons learnt that have enabled this level of scalability?
To ensure your WCF application can scale to the desired level I think you might need to tweak your thinking about the stats your services have to meet.
You mention servicing "1000+ client users" but to gauge if your services can perform at that level you'll also need to have some estimated usage figures, which will help you calculate some simpler stats such as the number of requests per second your app needs to handle.
Having just finished working on a WCF project, we managed to get 400 requests per second on our test hardware. Combined with our expected usage pattern of each user making 300 requests a day, that indicated we could handle an average of roughly 100,000 users a day (400 req/s × 86,400 s ≈ 34.5 million requests a day, divided by 300 requests per user, assuming a flat usage graph across the day).
In addition, since it's fairly common to make the WCF service code stateless, it's pretty easy to scale out the actual WCF code by adding additional boxes, which means the overall performance of your system is much more likely to be limited by your business logic and persistence layer than it is by WCF.
WCF configuration default limits, concurrency and scalability
Probably the four biggest things to look at first (besides just having good service code) are:
Bindings - some bindings, and the protocols they run on, are simply faster than others; TCP is going to be faster than any of the HTTP bindings
Instance mode - this determines how service instances are allocated to incoming callers
One-way vs. two-way operations - if a response isn't needed back at the client, make the operation one-way
Throttling - the maximum numbers of sessions, concurrent calls and instances (see the sketch below)
They did design WCF to be secure by default, so the defaults are very limiting.
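As a hedged example of tuning that last item in code (the numbers are placeholders to size against your own load tests, not recommendations; the same settings live under <serviceThrottling> in config):

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;

public static class ThrottleSetup
{
    // Raises the throttling limits on a ServiceHost before Open().
    public static void Apply(ServiceHost host)
    {
        var throttle =
            host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }
        throttle.MaxConcurrentCalls = 256;      // placeholder values
        throttle.MaxConcurrentSessions = 256;
        throttle.MaxConcurrentInstances = 512;
    }
}
```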