In the project I'm currently working on, we're using WCF.
Company policy forces us to use async calls, and the stated reason is security.
I've asked why this is supposed to be so much more secure, but I don't get clear answers.
Can someone explain why this would be more secure?
They are not. The same security (authentication, encryption) mechanisms and considerations apply whether a call blocks until it gets a response or it uses a callback.
The only way someone might be confused into thinking that async calls are more "safe/secure" is if they think that unhandled WCF exceptions will not bring down the main thread when the calls are asynchronous, since the exceptions are raised inside the callback.
In this case, I would advise extreme caution when approaching the owner of this policy, to avoid career-limiting consequences. Some people can get emotionally attached to their policies.
There is no reason why an async call would be more secure than a sync call. I think you should talk to the owner of the policy about it.
No, they are not more or less secure than synchronous calls. The only difference is that with synchronous calls the client waits for the response, whereas with async calls it is notified when the response arrives.
Are they coming from the angle that synchronous calls leave the connection open longer or something?
Just exposing a WCF operation using an async signature (BeginBlah/EndBlah) doesn't actually affect the exposed operation at all. When you view the metadata, an operation like
[OperationContract(AsyncPattern = true)]
IAsyncResult BeginSomething(AsyncCallback callback, object state);
void EndSomething(IAsyncResult result);
...actually still ends up being represented as an operation called 'Something'. And this is one of the nice things about WCF: the client and server can differ in whether they choose to implement/consume an operation synchronously or asynchronously.
So if you are generating WCF proxies (e.g. through Add Service Reference), then you will get synchronous versions of each operation whether or not they are implemented asynchronously, unless you tick the little checkbox to generate the async overloads. And when you do, you then get async versions of operations that might only be declared synchronously on the server.
All WCF is doing, on both the client and the server, is giving you a choice about your threading model: do you want WCF to wait for the result, or are you going to signal it when you've finished? How the actual transport connection is managed is - to the best of my knowledge - totally unaffected. E.g. for a NetTcpBinding the socket still stays open for the duration of the call, either way.
So, to get to the point, I really struggle to imagine how this could possibly make any difference to the security envelope of a WCF service. If a service is exposed using an async pattern, and is genuinely implemented in an async way (async for outbound IO, or queues work via the thread pool or something) then there's probably an argument that it would be harder to DOS the service (by exhausting the pool of WCF IO threads), but that'd be about it.
See Synchronous and Asynchronous Operations on MSDN.
NB: If you are sharing the contract interface between the client and server then obviously the synchronicity of the two ends matches (because they are both using the same interface type), but that's just a limitation of using a shared interface. If you made another equivalent interface, differing only by the async pattern, you could still create a ChannelFactory against it just fine.
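For illustration, here's a minimal sketch of that idea (the contract, operation names, and address are all made up; the important part is that the ServiceContract Name/Namespace and the operation name match on both sides, so both interfaces describe the same wire-level operation 'Add'):

using System;
using System.ServiceModel;

// Synchronous view of the contract (e.g. what the server implements).
[ServiceContract(Name = "CalculatorService", Namespace = "http://example.org/calc")]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

// Asynchronous view of the same wire-level contract, usable on the client.
[ServiceContract(Name = "CalculatorService", Namespace = "http://example.org/calc")]
public interface ICalculatorAsync
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginAdd(int a, int b, AsyncCallback callback, object state);

    int EndAdd(IAsyncResult result);
}

class Demo
{
    static void Main()
    {
        // Assumes a service implementing the contract is listening at this (made-up) address.
        // The client consumes the operation asynchronously even if the server
        // implements it synchronously - both forms map to the operation "Add".
        var factory = new ChannelFactory<ICalculatorAsync>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8000/calc"));

        ICalculatorAsync proxy = factory.CreateChannel();
        proxy.BeginAdd(2, 3, ar => Console.WriteLine(proxy.EndAdd(ar)), null);

        Console.ReadLine();   // keep the process alive until the callback runs
    }
}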
I agree with the other answers - definitely not more secure.
Fire up Fiddler and watch a synchronous request vs. an asynchronous request. You'll basically see the same type of traffic (although the sync may send and receive more data since it's probably a postback). But you can intercept both of those requests, manipulate them, and resend them and cause havoc on your server.
Fiddler's a great tool, by the way. It's an eye-opener in terms of what kind of data and how much data you're sending to the server.
I have a web application that uses the jQuery autocomplete plugin, which essentially sends, via AJAX, a request containing text that has been typed into a textbox to our web server. Once the web server receives this request, it is handed off to RabbitMQ.
I know that we do get benefits from using messaging, but it seems like using it for blocking RPC calls is a misuse, and that something like WCF is far more appropriate in this instance. Is this the case, or is it considered acceptable architecture?
It's possible to perform synchronous RPC requests with RabbitMQ. It's explained very well here, drawbacks included! So it's considered an acceptable architecture: discouraged, but acceptable whenever a synchronous response is mandatory.
A possible counter-effect is that by adding RabbitMQ in the middle, you will add some latency to the solution.
However, you have the possibility of gaining in terms of reliability, flexibility, scalability, and so on.
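For reference, the blocking RPC pattern boils down to a reply queue plus a correlation id, roughly like this (a sketch using the RabbitMQ.Client .NET library; queue names are made up, the worker side is assumed to have declared the request queue, and exact API details vary a little between client versions):

using System;
using System.Collections.Concurrent;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

static class AutocompleteRpcClient
{
    public static string Lookup(string typedText)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Server-named exclusive queue to receive the single reply on.
            string replyQueue = channel.QueueDeclare().QueueName;
            string correlationId = Guid.NewGuid().ToString();

            var props = channel.CreateBasicProperties();
            props.ReplyTo = replyQueue;
            props.CorrelationId = correlationId;

            var replies = new BlockingCollection<string>();
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                if (ea.BasicProperties.CorrelationId == correlationId)
                    replies.Add(Encoding.UTF8.GetString(ea.Body));
            };
            channel.BasicConsume(replyQueue, true, consumer);

            // Publish the request, then block until the matching reply arrives.
            channel.BasicPublish("", "autocomplete_rpc", props,
                                 Encoding.UTF8.GetBytes(typedText));
            return replies.Take();
        }
    }
}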
What benefit would you get from it? And in fairness, if you put the message in a queue, how is it synchronous? Unless the same process that placed the message in the queue is the one removing it, but that is pretty much useless, no?
Now, if all you want to do is place the message in the queue and process it later on, that's grand.
Also, the fact that you add WCF to the mixture is IMHO a symptom that something is perhaps not clear enough. You could use WCF as an API gateway and use it to write the message to the queue, so this is not really about WCF or queues, but more about sync vs. async.
The way you are putting your ideas together does not look right to me.
I've been reading a lot of WCF articles online and it seems like most people cache the ChannelFactory objects but not the channels themselves. It appears that most people are afraid to use channel caching because they don't want to handle the network faults that could render the cached channel unusable. But that could easily be fixed by catching the CommunicationException on the method call, recreating the channel, and replaying the method using reflection.
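The kind of thing I have in mind looks roughly like this (a simplified sketch):

using System;
using System.Reflection;
using System.ServiceModel;

public class CachedChannelInvoker<T> where T : class
{
    private readonly ChannelFactory<T> _factory;
    private T _channel;

    public CachedChannelInvoker(ChannelFactory<T> factory)
    {
        _factory = factory;
        _channel = _factory.CreateChannel();
    }

    // Invoke an operation on the cached channel; if the channel has faulted,
    // recreate it and replay the call once.
    public object Invoke(string operationName, params object[] args)
    {
        MethodInfo method = typeof(T).GetMethod(operationName);
        try
        {
            return method.Invoke(_channel, args);
        }
        catch (TargetInvocationException ex)
        {
            if (!(ex.InnerException is CommunicationException))
                throw;

            ((IClientChannel)_channel).Abort();    // clean up the dead channel
            _channel = _factory.CreateChannel();   // recreate it
            return method.Invoke(_channel, args);  // replay the call
        }
    }
}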
Then there are people who think it's bad to do channel caching because all communication will go through a single channel. See the following article:
http://social.msdn.microsoft.com/Forums/is/wcf/thread/9cbdf92a-a749-40ce-9ebe-3f2622cd78ee
Is this necessarily a bad thing? Can you not share channels across threads? Will performance suffer because multiple method calls made to this single channel will get processed serially?
I haven't found evidence that sharing channels will degrade performance. What I did find is that using a cached channel is about 5 times faster than using a non-cached channel, even if it means having to use reflection to make the method calls on the cached channels.
The other advantage is not having to surround all your WCF calls with try/catch/finally statements to call Close(), Abort(), or Dispose() on the channel when you are done with it. To me it seems like WCF took a step in the wrong direction by forcing developers to manage WCF channel resources. In .NET Remoting, you created the proxy using the Activator class and you didn't have to do anything to clean it up; the .NET Framework handled all of that for you.
2 main reasons:
A ChannelFactory is expensive to create and it is thread-safe => a perfect candidate for caching.
A channel generated by a channel factory is cheap to create but not safe to share (well, in reality it is thread-safe, but concurrent calls on it will be blocked and executed sequentially) => don't cache it in a multithreaded environment.
Here's a nice article which goes into further details.
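In practice that usually means something like this (a sketch; IMyService, its operation, and the endpoint configuration name are placeholders):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string GetData(int value);
}

public static class MyServiceClient
{
    // Expensive and thread-safe: create once and cache.
    private static readonly ChannelFactory<IMyService> Factory =
        new ChannelFactory<IMyService>("MyServiceEndpoint");

    public static TResult Call<TResult>(Func<IMyService, TResult> operation)
    {
        // Cheap, but not to be shared across concurrent calls: one per call.
        IMyService channel = Factory.CreateChannel();
        try
        {
            TResult result = operation(channel);
            ((IClientChannel)channel).Close();
            return result;
        }
        catch
        {
            ((IClientChannel)channel).Abort();
            throw;
        }
    }
}

// Usage: var data = MyServiceClient.Call(svc => svc.GetData(42));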
I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled.
But they are not being called concurrently.
If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I want to avoid that cost since it is functionally unnecessary for my scenario. I have no session state, and nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible.
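Concretely, the application-tier services are configured roughly like this (IAppTierService and its operation are placeholders for the real contract):

using System.ServiceModel;

[ServiceContract]
public interface IAppTierService
{
    [OperationContract]
    string DoWork(string input);
}

// Stateless, with concurrent calls allowed.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class AppTierService : IAppTierService
{
    public string DoWork(string input)
    {
        return input;   // stand-in for the real work
    }
}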
I found this article on MSDN that states:
While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete.
Firstly, I'm not sending large messages (just a lot of small ones since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation. It says they "might not" support writing more than one message but doesn't explain the scenarios under which they would support concurrent messages.
Can anyone shed some light on this?
Addendum: I am also considering creating a pool of channels that the web server uses to fulfill requests. But again, I see no reason why my existing approach should block and I'd rather avoid the complexity if possible.
After much ado, this all came down to the fact that I wasn't calling Open explicitly on the channel before using it. Apparently an implicit Open can preclude concurrency in some scenarios.
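In other words, instead of letting the first call open the channel implicitly, I now do something like this (a sketch; the endpoint name and the IAppTierService contract are the same placeholders as above):

using System.ServiceModel;

static class ChannelHelper
{
    static readonly ChannelFactory<IAppTierService> Factory =
        new ChannelFactory<IAppTierService>("AppTierEndpoint");

    public static IAppTierService CreateOpenedChannel()
    {
        IAppTierService channel = Factory.CreateChannel();
        // Open explicitly up front; relying on the implicit Open triggered by
        // the first call was what serialized my requests.
        ((IClientChannel)channel).Open();
        return channel;
    }
}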
You can cache the WCF proxy, but still create a channel for each service call - this will ensure concurrency, is not very expensive in comparison to creating a channel from scratch, and re-authentication for each call will not be necessary. This is explained on Wenlong Dong's blog - "Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices" (a much better source of WCF information and guidance than MSDN).
Just for completeness: Here is a blog entry explaining the observed behavior of request serialization when not opening the channel explicitly:
http://blogs.msdn.com/b/wenlong/archive/2007/10/26/best-practice-always-open-wcf-client-proxy-explicitly-when-it-is-shared.aspx
Using techniques as hinted at in:
http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.callbackcontract.aspx
I am implementing a ServerPush setup for my API to get realtime notifications from a server of events (no polling). Basically, the Server has a RegisterMe() and UnregisterMe() method and the client has a callback method called Announcement(string message) that, through the CallbackContract mechanisms in WCF, the server can call. This seems to work well.
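For context, the contracts look roughly like this (simplified):

using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IAnnouncementCallback))]
public interface IAnnouncementService
{
    [OperationContract]
    void RegisterMe();

    [OperationContract]
    void UnregisterMe();

    // The Ping() discussed below would be added here.
}

public interface IAnnouncementCallback
{
    [OperationContract(IsOneWay = true)]
    void Announcement(string message);
}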
Unfortunately, in this setup, if the Server were to crash or is otherwise unavailable, the Client won't know since it is only listening for messages. Silence on the line could mean no Announcements or it could mean that the server is not available.
Since my goal is to reduce polling rather than to gain immediacy, I don't mind adding a void Ping() method on the Server alongside RegisterMe() and UnregisterMe() that merely exists to test connectivity to the server. Periodically calling this method would, I believe, ensure that we're still connected (and also that no Announcements have been dropped by the transport, since this is TCP).
But is the Ping() method necessary, or is this connectivity test otherwise available as part of WCF by default - like serverProxy.IsStillConnected() or something? As I understand it, the channel's State would only return Faulted or Closed AFTER a failed Ping(), not instead of it.
2) From a broader perspective, is this callback approach solid? This is not for HTTP or AJAX - the number of connected clients will be small (tens of clients, max). Are there serious problems with this approach? As this seems to be a mild risk, how can I limit a slow/malicious client from blocking the server by not processing its callback queue fast enough? Is there a kind of timeout specific to the callback that I can set without affecting other operations?
Your approach sounds reasonable; here are some links that may or may not help (they are not exactly related):
Detecting Client Death in WCF Duplex Contracts
http://tomasz.janczuk.org/2009/08/performance-of-http-polling-duplex.html
Having some health check built into your application protocol makes sense.
If you are worried about malicious clients, then add authorization.
The second link I shared above has a sample pub/sub server; you might be able to use that code. A couple of things to watch out for: consider pushing notifications via async calls or on a separate thread, and set the sendTimeout on the TCP binding.
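For example, something along these lines (a sketch; the timeout value is arbitrary, and IAnnouncementCallback is the callback contract from the question):

using System;
using System.ServiceModel;
using System.Threading;

static class PushHelpers
{
    // Bound how long a single send to a subscriber may take.
    public static NetTcpBinding CreateBinding()
    {
        var binding = new NetTcpBinding();
        binding.SendTimeout = TimeSpan.FromSeconds(5);   // tune for your traffic
        return binding;
    }

    // Push each notification off the caller's thread so one slow subscriber
    // does not hold up the others.
    public static void Push(IAnnouncementCallback subscriber, string message)
    {
        ThreadPool.QueueUserWorkItem(_ => subscriber.Announcement(message));
    }
}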
HTH
I wrote a WCF application and encountered a similar problem. My server checked that clients had not 'pulled the plug' by periodically sending a ping to them. The actual send method (asynchronous, since it was a server) had a timeout of 30 seconds. The client simply checked that it received the data every 30 seconds, while the server would catch an exception if the timeout was reached.
Authorisation was required to connect to the server (using the built-in WCF feature that forces the connecting client to call a particular method first), so from a malicious-client perspective you could easily add code to check and ban an account if it does something suspicious, while disconnecting users who do not authenticate.
As the server I wrote was asynchronous, there wasn't any way to really block it. I guess that addresses your last point, as the asynchronous send method fires off the ping (and any other sending of data) and returns immediately. In the SendEnd method it would catch the timeout exception (sometimes several for the same client) and disconnect them, without any blocking or freezing of the server.
Hope that helps.
You could use a publisher / subscriber service similar to the one suggested by Juval:
http://msdn.microsoft.com/en-us/magazine/cc163537.aspx
This would allow you to persist the subscribers if losing the server is a typical scenario. The publish method in this example also calls each subscriber on a separate thread, so a few dead subscribers will not block the others.
A few methods in my WCF service are quite time-consuming - generating reports and sending e-mails.
According to the current requirement, the client application should just submit the request and not wait for the whole process to complete. This will allow the user to continue doing other operations in the client application instead of waiting for the whole process to finish.
I am in doubt over which way to go:
AsyncPattern = true OR
IsOneWay=true
Please guide.
It can be both.
Generally I see no reason for a WCF operation not to be asynchronous, other than the developer being lazy.
You should not compare them, because they are not comparable.
In short, AsyncPattern=true performs asynchronous invocation, regardless of whether or not you're returning a value.
IsOneWay works only with void methods (no return value and no faults back to the client); the call can still block the calling thread until the transport has accepted the message, but it does not wait for the operation's result.
I know this is an old post, but IMO in your scenario you should be using IsOneWay, on the basis that you don't care what the server result is. If you do need to eventually notify the client (e.g. of completion or failure of the server job), then you might also need to look at changing the interface to use SessionMode.Required and a duplex binding.
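For the fire-and-forget part, the contract just ends up looking something like this (a sketch; the request types and operation names are placeholders):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ReportRequest { /* report parameters */ }

[DataContract]
public class EmailRequest { /* message details */ }

[ServiceContract]
public interface IReportService
{
    // One-way: the client returns as soon as the message has been handed to
    // the transport; it gets no result and no fault back.
    [OperationContract(IsOneWay = true)]
    void GenerateReport(ReportRequest request);

    [OperationContract(IsOneWay = true)]
    void SendEmail(EmailRequest request);
}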
Even if you did want to use asynchronous 2-way communication because your client DID care about the result, there are different concepts:
AsyncPattern=true on the server - you would do this in order to free up server resources, e.g. if the underlying resource (SSRS for reporting, a mail API, etc.) supported asynchronous operations. But this would benefit the server, not the client.
On the client, you can always generate your service reference proxy with "Generate Asynchronous Operations" ticked - in which case your client won't block and the callback will be used when the operation is complete.
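On the client that ends up looking roughly like this (the proxy and type names are made up - they depend on what Add Service Reference generates for your service):

using System;

class ClientDemo
{
    static void Main()
    {
        // ReportServiceClient / ReportRequest stand in for the generated proxy types.
        var client = new ReportServiceClient();

        client.GenerateReportCompleted += (sender, e) =>
        {
            if (e.Error != null)
                Console.WriteLine("Report failed: " + e.Error.Message);
            else
                Console.WriteLine("Report submitted.");
        };

        client.GenerateReportAsync(new ReportRequest());   // returns immediately; the UI stays responsive

        Console.ReadLine();   // keep the demo process alive for the callback
    }
}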