Concurrent WCF calls via shared channel

I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled.
But they are not being called concurrently.
If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I want to avoid that cost since it is functionally unnecessary for my scenario. I have no session state, nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible.
I found this article on MSDN that states:
"While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete."
Firstly, I'm not sending large messages (just a lot of small ones since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation. It says they "might not" support writing more than one message but doesn't explain the scenarios under which they would support concurrent messages.
Can anyone shed some light on this?
Addendum: I am also considering creating a pool of channels that the web server uses to fulfill requests. But again, I see no reason why my existing approach should block and I'd rather avoid the complexity if possible.

After much ado, this all came down to the fact that I wasn't calling Open explicitly on the channel before using it. Apparently an implicit Open can preclude concurrency in some scenarios.
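A minimal sketch of what that change looks like; IMyService and the endpoint name are placeholders, not anything from the original code:

ChannelFactory<IMyService> factory = new ChannelFactory<IMyService>("MyServiceEndpoint");
IMyService channel = factory.CreateChannel();

// Open the channel explicitly before sharing it across threads. Without this,
// the first call performs an implicit Open, and calls made through the shared
// proxy end up being serialized.
((IClientChannel)channel).Open();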

You can cache the WCF proxy, but still create a channel for each service call - this will ensure concurrency, is not very expensive in comparison to creating a channel from scratch, and re-authentication for each call will not be necessary. This is explained on Wenlong Dong's blog - "Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices" (a much better source of WCF information and guidance than MSDN).
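A rough sketch of that pattern, assuming a contract IMyService with a DoWork operation and an endpoint named "MyServiceEndpoint" (all placeholder names):

using System.ServiceModel;

// Created once and cached; the factory is the expensive piece.
static readonly ChannelFactory<IMyService> Factory =
    new ChannelFactory<IMyService>("MyServiceEndpoint");

public string CallService(string input)
{
    // Channels created from a cached factory are cheap.
    IMyService channel = Factory.CreateChannel();
    ((IClientChannel)channel).Open();
    try
    {
        return channel.DoWork(input);
    }
    finally
    {
        IClientChannel client = (IClientChannel)channel;
        if (client.State == CommunicationState.Opened)
            client.Close();
        else
            client.Abort();   // a faulted channel must be aborted, not closed
    }
}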

Just for completeness: Here is a blog entry explaining the observed behavior of request serialization when not opening the channel explicitly:
http://blogs.msdn.com/b/wenlong/archive/2007/10/26/best-practice-always-open-wcf-client-proxy-explicitly-when-it-is-shared.aspx

Related

Async WCF and Protocol Behaviors

FYI: This will be my first real foray into Async/Await; for too long I've been settling for the familiar territory of BackgroundWorker. It's time to move on.
I wish to build a WCF service, self-hosted in a Windows service running on a remote machine in the same LAN, that does this:
Accepts a request for a single .ZIP archive
Creates the archive and packages several files
Returns the archive as its response to the request
I have to support archives as large as 10GB. Needless to say, this scenario isn't covered by basic WCF designs; we must take additional steps to meet the requirement. We must eliminate timeouts while the archive is building and memory errors while it's being sent. Both of these occur under basic WCF designs, depending on the size of the file returned.
My plan is to proceed using task-based asynchronous WCF calls and streaming mode.
I have two concerns:
Is this the proper approach to the problem?
Microsoft has done a nice job at abstracting all of this, but what of the underlying protocols? What goes on 'under the hood?' Does the server keep the connection alive while the archive is building (could be several minutes) or instead does it close the connection and initiate a new one once the operation is complete, thereby requiring me to properly route the request through the client machine firewall?
For #2, clearly I'm hoping for the former (keep-alive). But after some searching I'm not easily finding an answer. Perhaps you know.
You need streaming for big payloads. That is the right approach. This has nothing at all to do with asynchronous IO. The two are independent. The client cannot even tell that the server is async internally.
I'll add my standard answers for whether to use async IO or not:
https://stackoverflow.com/a/25087273/122718 Why does the EF 6 tutorial use asynchronous calls?
https://stackoverflow.com/a/12796711/122718 Should we switch to use async I/O by default?
Each request runs over a single connection that is kept alive. This holds both for streaming large amounts of data and for long initial delays. I'm not sure why you are concerned about routing. Does your router kill such connections? If so, that's a problem.
Regarding keep alive, there is nothing going over the wire to do that. TCP sessions can stay open indefinitely without any kind of wire traffic.
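For what it's worth, a streamed transfer is set up roughly as sketched below. The contract, the names and the timeout values are only illustrative assumptions, not something your service has to use:

using System;
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IArchiveService
{
    // Returning a Stream, together with TransferMode.Streamed on the binding,
    // lets WCF send the archive in chunks instead of buffering 10 GB in memory.
    [OperationContract]
    Stream GetArchive(string archiveName);
}

public static class ArchiveBindingFactory
{
    public static NetTcpBinding Create()
    {
        return new NetTcpBinding
        {
            TransferMode = TransferMode.Streamed,
            MaxReceivedMessageSize = long.MaxValue,   // allow very large responses
            SendTimeout = TimeSpan.FromHours(1),      // cover long archive-build times
            ReceiveTimeout = TimeSpan.FromHours(1)
        };
    }
}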

Why is caching WCF channels a bad thing?

I've been reading a lot of WCF articles online and it seems like most people cache the ChannelFactory objects but not the channels themselves. It appears that most people are afraid to use channel caching because they don't want to handle the network faults that could render the cached channel unusable. But that could easily be fixed by catching the CommunicationException on the method, recreating the channel, and replaying the method using Reflection.
Then there are people who think channel caching is bad because all communication will go through a single channel. See the following article.
http://social.msdn.microsoft.com/Forums/is/wcf/thread/9cbdf92a-a749-40ce-9ebe-3f2622cd78ee
Is this necessarily a bad thing? Can you not share channels across threads? Will performance suffer because multiple method calls made to this single channel will get processed serially?
I haven't found evidence that sharing channels will degrade performance. What I did find is that using a cached channel is about 5 times faster than using a non-cached channel, even if it means having to use Reflection to make the method calls on the cached channels.
The other advantage is not having to surround all your WCF calls with try/catch/finally statements to call Close(), Abort(), or Dispose() on the channel when you are done with it. To me it seems like WCF took a step in the wrong direction by forcing developers to have to manage WCF channel resources. In .NET Remoting, you created the proxy using the Activator class and you didn't have to do anything to it to clean it up. The .NET Framework handled all of that for you.
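For reference, the per-call boilerplate I mean looks roughly like this (IMyService, DoWork and the factory variable are placeholders):

IMyService channel = factory.CreateChannel();
try
{
    channel.DoWork();
    ((IClientChannel)channel).Close();
}
catch (CommunicationException)
{
    ((IClientChannel)channel).Abort();   // a faulted channel must be aborted, not closed
    throw;
}
catch (TimeoutException)
{
    ((IClientChannel)channel).Abort();
    throw;
}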
2 main reasons:
A ChannelFactory is expensive to create and it is thread safe => perfect candidate for caching.
A Channel generated by a channel factory is not expensive to create but it is not thread safe (well in reality it is thread safe but concurrent calls will be blocked and executed sequentially) => don't cache it in a multithreaded environment.
Here's a nice article which goes into further details.

WCF: Are asynch calls more secure?

In the project I'm currently working we're using WCF.
Company policy forces us to use async calls, and the reason given is security.
I've asked why this is so much more secure but I don't get clear answers.
Can someone explain why this would be more secure?
They are not. The same security (authentication, encryption) mechanisms and considerations apply whether a call blocks until it gets a response or it uses a callback.
The only way someone might be confused into thinking that async calls are more "safe/secure" is if they think that unhandled WCF exceptions will not bring down the main thread when they are asynchronous, since they will be raised inside the callback.
In this case, I would advise extreme caution when approaching the owner of this policy to avoid career-limiting consequences. Some people can get emotionally attached to their policies.
There is no reason why an async call would be more secure than a sync call. I think you should talk to the owner of the policy about this.
No, they are not more or less secure than synchronous calls. The only difference is that the client waits for a response on synchronous calls, whereas on async calls it is notified of the response.
Are they coming from the angle that synchronous calls leave the connection open longer or something?
Just exposing a WCF operation using an async signature (BeginBlah/EndBlah) doesn't actually affect the exposed operation at all. When you view the metadata, an operation like
[OperationContract(AsyncPattern = true)]
IAsyncResult BeginSomething(AsyncCallback callback, object state);
void EndSomething(IAsyncResult result);
...actually still ends up being represented as an operation called 'Something'. And this is actually one of the nice things about WCF: the client and server can differ in whether they choose to implement/consume an operation synchronously.
So if you are generating WCF proxies (e.g. through Add Service Reference) then you will get synchronous versions of each operation, whether they are implemented asynchronously or not, unless you tick the little checkbox to generate the async overloads. And when you do, you get async versions of operations that might only be declared synchronously on the server.
All WCF is doing, on both the client and server, is giving you a choice about your threading model: do you want WCF to wait for the result, or are you going to signal it that you've finished? How the actual transport connection is managed is - to the best of my knowledge - totally unaffected. E.g. for a NetTcpBinding the socket still stays open for the duration of the call, either way.
So, to get to the point, I really struggle to imagine how this could possibly make any difference to the security envelope of a WCF service. If a service is exposed using an async pattern, and is genuinely implemented in an async way (async for outbound IO, or queues work via the thread pool or something) then there's probably an argument that it would be harder to DOS the service (by exhausting the pool of WCF IO threads), but that'd be about it.
See Synchronous and Asynchronous Operations on MSDN.
NB: If you are sharing the contract interface between the client and server then obviously the synchronicity of the two ends matches (because they are both using the same interface type), but that's just a limitation of using a shared interface. If you made another equivalent interface, differing only by the async pattern, you could still create a ChannelFactory against it just fine.
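To illustrate with hypothetical names: the two client-side contracts below describe the same wire-level operation, and either one could be handed to a ChannelFactory against the same endpoint.

using System;
using System.ServiceModel;

[ServiceContract(Name = "MyService")]
public interface IMyServiceSync
{
    [OperationContract]
    int Something(string input);
}

[ServiceContract(Name = "MyService")]
public interface IMyServiceAsync
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginSomething(string input, AsyncCallback callback, object state);

    // The End method carries no [OperationContract]; the Begin/End pair is
    // exposed in metadata as a single operation named "Something".
    int EndSomething(IAsyncResult result);
}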
I agree with the other answers - definitely not more secure.
Fire up Fiddler and watch a synchronous request vs. an asynchronous request. You'll basically see the same type of traffic (although the sync may send and receive more data since it's probably a postback). But you can intercept both of those requests, manipulate them, and resend them and cause havoc on your server.
Fiddler's a great tool, by the way. It's an eye-opener in terms of what kind of data and how much data you're sending to the server.

Concurrent access to WCF client proxy

I'm currently playing around a little with WCF, and while doing so I stumbled on a question where I'm not sure if I'm on the right track.
Let's assume a simple setup that looks like this: client -> service1 -> service2.
The communication is tcp-based.
What I'm not sure about is whether it makes sense for service1 to cache the client proxy for service2. That means I might get multi-threaded access to that proxy, and I have to deal with it.
I'd like to take advantage of the TCP session to get better performance, but I'm not sure if this "architecture" is supported by WCF/the network/whatever at all. The problem I see is that all the communication goes over the same channel unless I use locks or some other synchronization.
I guess the better idea is to cache the proxy in a threadstatic variable.
But before I do that, I wanted to confirm that it's really not a good idea to have only one proxy instance.
tia
Martin
If you don't know that you have a performance problem, then why worry about caching? You're opening yourself to the risk of improperly implementing multithreading code, and without any clear, measurable benefit.
Have you measured performance yet, or profiled the application to see where it's spending its time? If not, then when you do, you may well find that the overhead of multiple TCP sessions is not where your performance problems lie. You may wish you had the time to optimize some other part of your application, but you will have spent that time optimizing something that didn't need to be optimized.
I am already using such a structure. I have one service that collaborates with some other services to realise the implementation. In my case the client calls a one-way method on the first service. I am getting very good benefit. Of course, I have also configured it to limit the number of concurrent calls in some cases.
Yes, that architecture is supported by WCF. I deal with applications every day that use similar structures, using NetTCPBinding.
The biggest thing to worry about is the ConcurrencyMode of the various services involved, and making sure that they do not block unnecessarily. It is very easy to get into a scenario where you will be guaranteed timeouts, or at the least have poor performance due to multiple, synchronous calls across service boundaries. Even OneWay calls are not guaranteed to immediately return.
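For example, a hedged sketch of the behavior settings involved (the class and interface names are placeholders):

using System.ServiceModel;

// Allow concurrent calls into the intermediate service so that a blocking
// call to the downstream service does not serialize all incoming requests.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class Service1 : IService1
{
    // ...
}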
Be careful with ThreadStatic: .NET can switch the thread out from under you, so the variable can turn out to be null.
For the session, perhaps you could use session-enabled calls:
http://msdn.microsoft.com/en-us/library/ms733040.aspx
But I would not recommend it if you do not have any performance issue. I would use the normal way, or, if service1 is just forwarding, you could get that functionality easily with WCF 4.0:
http://www.sdn.nl/SDN/Artikelen/tabid/58/view/View/ArticleID/2979/Whats-New-in-WCF-40.aspx
Regards
Firstly, make sure you know about the behaviour of ThreadStatic in ASP.NET applications:
http://piers7.blogspot.com/2005/11/threadstatic-callcontext-and_02.html
The same thread that started your request may not be the same thread that finishes it. Basically the only safe place to store thread-local data in ASP.NET applications is inside HttpContext. The next obvious approach would be to create a wrapper client that manages your WCF client proxy and ensures each IO request is thread safe using locks.
Although my personal preference would be to use a pool of proxy clients: whenever you need one, pop it off the pool queue, and when you're finished with it, put it back on.
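A minimal sketch of that pooling idea; IService2 and the endpoint name are placeholders, and a real pool would also cap its size:

using System.Collections.Concurrent;
using System.ServiceModel;

public class Service2ProxyPool
{
    private readonly ChannelFactory<IService2> _factory =
        new ChannelFactory<IService2>("Service2Endpoint");
    private readonly ConcurrentQueue<IService2> _pool = new ConcurrentQueue<IService2>();

    public IService2 Take()
    {
        IService2 proxy;
        while (_pool.TryDequeue(out proxy))
        {
            if (((IClientChannel)proxy).State == CommunicationState.Opened)
                return proxy;
            ((IClientChannel)proxy).Abort();   // discard faulted or closed proxies
        }
        proxy = _factory.CreateChannel();
        ((IClientChannel)proxy).Open();
        return proxy;
    }

    public void Return(IService2 proxy)
    {
        if (((IClientChannel)proxy).State == CommunicationState.Opened)
            _pool.Enqueue(proxy);
        else
            ((IClientChannel)proxy).Abort();
    }
}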

To poll or not to poll (in a web services context)

We can use polling to find out about updates from some source - for example, clients connected to a web server. WCF provides a nifty feature in the form of duplex contracts, with which I can maintain a connection to a client and make invocations on that connection at will.
Some peeps in the office were discussing the merits of both solutions, and I wanted to get feedback on when each strategy is best used.
I would use an event-based mechanism instead of polling. In WCF, you can do this easily by following the Publish-Subscribe framework that Juval Lowy provides at his website, IDesign.net.
Depends partly on how many users you have.
Say you have 1,000,000 users; you will have problems maintaining that many sessions.
But if your system can respond to 1,000 poll requests a second, then 1,000,000 clients can each poll every 1,000 seconds (roughly every 17 minutes).
I think Shiraz nailed this one, but I wanted to say two more things.
1. I've had trouble with Duplex contracts. You have to have all of your ducks in a row with regards to the callback channel... you have to check it to make sure it's open, etc. The IDesign.net stuff would be a minimum amount of plumbing code you'll have to include.
2. If it makes sense for your solution (this is only appropriate in certain situations), the MSMQ binding allows a client to send data to a service in an async manner (like Duplex), but the service isn't "polling" for messages... it gets notified when one enters the queue through some under-the-covers plumbing. This sort of forces you to turn the communication around (client becomes server, server becomes client), but if the majority of the communication is one-way, this would provide a lot of benefits. The other advantage here is obviously the queued communication - the server can be down and not miss any messages... it'll pick 'em up when it comes back online.
Something to think about.
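On the first point, the kind of check I mean looks roughly like this inside a service operation (ICallback and Notify are placeholder names for your callback contract and its operation):

// Grab the callback channel and make sure it is still usable before calling
// back into the client.
ICallback callback = OperationContext.Current.GetCallbackChannel<ICallback>();
ICommunicationObject channel = (ICommunicationObject)callback;

if (channel.State == CommunicationState.Opened)
{
    callback.Notify("update");
}
else
{
    // The client has gone away; remove it from the subscriber list instead of
    // letting the call fault.
}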