WCF: DuplexSessionChannel, Asynchronous Operations and an Exception

Today I have a WCF question, though it probably pertains to other networking models in .NET as well.
I have a WCF service that exposes a Send(Message) OperationContract marked with IsOneWay = true. This service also has a callback channel to return Messages to the client.
Anyway, I am trying (successfully) to call this Send method from my client asynchronously. On a DuplexSessionChannel I am calling BeginSend(Message, OnSendComplete, null), and I have an OnSendComplete(IAsyncResult) method that calls EndSend(asyncResult) on the DuplexSessionChannel.
The service has a CallbackContract and uses the same BeginSend()/EndSend() pattern for sending back to the client, called on the callback channel I get from OperationContext.Current.GetCallbackChannel.
The client, on its DuplexSessionChannel, calls BeginReceive()/EndReceive() when receiving messages back from the service's callback channel.
Even though things are working, I don't understand what the End<Operation>() methods actually do, and this is what I need to have explained to me.
I ask because I am getting an occasional exception in a call to EndSend() on the service (sending back to the client) complaining that a collection has been modified (I know what this exception means, but not why it is happening or where exactly...). I am using PollingDuplexHttpBinding with a Silverlight client.
I am not a WCF expert, but don't hold back on the details; I need the knowledge. I have seen this sort of Begin/End pattern before around other async operations during my career, but never really understood what was going on.
Thanks in advance.

It sounds like your question is just about the Begin/End APM (async programming model). Briefly, the APM takes a sync method like
R Foo(A a); // R is some result type, A is some argument type
and breaks it into async BeginFoo and EndFoo methods. The main advantage appears when the operation is doing some truly asynchronous system work (e.g. talking to the network) that may be long-running, at least compared to other functions; talking to the network may take hundreds of milliseconds or more. The pattern gives you a way to tell the system to start the operation and then call you back when the result of the operation is ready. The advantage of the pattern is that you don't have to have a managed thread blocked while the call is pending (which means, for example, that you can have thousands of pending network reads/writes without needing thousands of threads; hurray, threads are expensive).
So given that, 'BeginFoo' is how you say 'start the method with these arguments', and then when you get called back (as notification that the result is ready), 'EndFoo' is how you get the result. In the general case, if 'Foo' might throw a particular exception, then this exception might come out of either the 'Begin' call or the 'End' call and you have to be prepared to handle it in both places.
In the case of something like Send() (which maybe returns void? I forget) it's a little annoying/weird because, since it's one-way, you kind of just want to 'fire and forget'. But exceptions can still happen (e.g. I tried to send but someone unplugged my network cable), and given the Begin/End APM, such an exception might come out of the EndSend call. In effect, the exception is a kind of 'result' of calling Send, so calling EndSend gives the system a way to throw an exception at you to say something went wrong after you called BeginSend.
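For illustration, here's a minimal sketch (not your actual code; the already-opened channel and the message are assumed to exist) of where an exception surfaces in the End half of the APM:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

static class ApmSendSketch
{
    // Sketch only: 'channel' is assumed to be an already-opened IDuplexSessionChannel.
    public static void SendAsync(IDuplexSessionChannel channel, Message message)
    {
        channel.BeginSend(message, ar =>
        {
            try
            {
                // EndSend completes the operation; any failure that happened after
                // BeginSend returned (broken connection, faulted channel, ...) is
                // thrown here, inside the callback.
                channel.EndSend(ar);
            }
            catch (CommunicationException ex)
            {
                Console.Error.WriteLine("One-way send failed: " + ex.Message);
            }
        }, null);
    }
}

So even for a one-way operation, EndSend is the place where the system hands you the outcome (including failures) of the work you kicked off with BeginSend.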

Related

.NET Core API for SPA and async

I am creating a new .NET Core 2.2 API for use with a JavaScript client. Some of Microsoft's examples have controllers with all async methods and some don't. Should the methods on my API be async? I will be using IIS, if that is a factor. One example method will involve calling another API and returning the result, while another will be doing a database request using Entity Framework.
It is best practice to use async for your controller methods, especially if your services are doing things like accessing a database. Whether or not your controller methods are async doesn't matter to IIS; the .NET Core runtime will be invoking them. Both will work, but you should always try to use async when possible.
First, you need to understand what async does. Simply put, it allows the thread handling the request to be returned to the pool to field other requests if the thread enters a wait state. This is almost invariably caused by I/O operations, such as querying a database, writing/reading a file, etc. CPU-bound work such as calculations requires active use of the thread and therefore cannot be handled asynchronously. A side benefit of async is the ability to "cancel" work. If the client closes the connection prematurely, this fires a cancellation token which can be used by supporting asynchronous methods to cancel work in progress. For example, assuming you passed the cancellation token into a call to something like ToListAsync() and the client closes the connection, EF will see this and subsequently cancel the query. It's actually a little more complex than that, but you get the idea.
Therefore, you need to simply evaluate whether async is beneficial in a particular scenario. If you're going to be doing I/O and/or want to be able to cancel work in progress, then go async. Otherwise, you can stick with sync, if you like.
That said, while there's a slight performance cost to async, it's usually negligible, and the benefits it provides in terms of scalability are generally worth the trade-off. As such, it's pretty much preferred to just always go async. Additionally, if you're doing anything async, then your action should also be async. For example, everything EF Core does is async. The "sync" methods (ToList rather than ToListAsync) merely block on the async methods. As such, if you're doing a query via EF, use async. The sync methods are only there to support certain limited scenarios where there's no choice but to process sync, and in such cases, you should run in a separate thread (Task.Run) to prevent deadlocks.
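For illustration, here is a minimal sketch assuming a hypothetical AppDbContext/Product model (the names are invented): an async action that passes the request's cancellation token through to EF Core, so an abandoned request can cancel the query.

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical context, registered in Startup via services.AddDbContext<AppDbContext>(...).
public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Product> Products { get; set; }
}

[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    private readonly AppDbContext _db;

    public ProductsController(AppDbContext db) => _db = db;

    // MVC binds the CancellationToken parameter to HttpContext.RequestAborted, so if
    // the client disconnects, EF Core can cancel the in-flight query.
    [HttpGet]
    public async Task<List<Product>> Get(CancellationToken cancellationToken)
    {
        return await _db.Products.ToListAsync(cancellationToken);
    }
}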
UPDATE
I should also mention that things are a little murky with actions and particularly Razor Page handlers. There's an entire request pipeline, of which an action/handler is just one part. Having a "sync" action does not preclude doing something async in your view, or in some policy handler, view component, etc. The action itself only needs to be async if it itself is doing some sort of asynchronous work.
Razor Page handlers, in particular, will often be sync, because very little processing typically happens in the handler itself; it's all in subordinate processes.
Async is a very important concept to understand, and Microsoft focuses on it a great deal, but sometimes we don't realise its importance. Every time you are not using async you are blocking the calling thread.
Why Use Async
Even if your API controller performs a single operation (let's say a DB fetch) you should be using async. The reason is that your server has a limited number of threads to handle client requests. Let's assume your application can handle 20 requests: if you are not using async, you are blocking a handler thread on an operation (the DB call) that could complete without it (async). In turn your request queue grows, because your threads are busy waiting and unable to pick up new requests, and at some stage your application will stop responding. If you use async, the thread is free to handle more client requests while the operation runs in the background.
More Resources
I would definitely recommend watching this very informative official video from Microsoft on performance issues.
https://www.youtube.com/watch?v=_5T4sZHbfoQ

Is the WebSocket returned from HttpContext.WebSockets.AcceptWebSocketAsync thread-safe?

In ASP.NET Core v2, is the WebSocket returned by HttpContext.WebSockets.AcceptWebSocketAsync thread-safe?
More specifically, can I call ReceiveAsync in parallel with a thread that calls SendAsync?
I'd like to be able to have a message loop receiving messages such as the close event, while at the same time being able to send messages in response to server-side events (that is, not in response to received events).
I haven't found any documentation specifying what implementation AcceptWebSocketAsync returns, but in practice it appears to consistently return a ManagedWebSocket instance.
I haven't found any API documentation for ManagedWebSocket. Fortunately the source code has been published and it contains this helpful note:
Thread-safety:
It's acceptable to call ReceiveAsync and SendAsync in parallel. One of each may run concurrently.
It's acceptable to have a pending ReceiveAsync while CloseOutputAsync or CloseAsync is called.
Attempting to invoke any other operations in parallel may corrupt the instance. Attempting to invoke a send operation while another is in progress or a receive operation while another is in progress will result in an exception.
— (source1)
(source2)
tl;dr: not thread-safe in general, but the receive-and-send-in-parallel scenario is supported
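To illustrate the supported scenario, here's a rough sketch assuming at most one receive and at most one send are ever in flight at a time (the buffer size, message handling and helper names are simplified placeholders):

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

static class WebSocketLoops
{
    // One receive loop: at most one ReceiveAsync pending at any time.
    public static async Task ReceiveLoopAsync(WebSocket socket, CancellationToken ct)
    {
        var buffer = new byte[4096];
        while (socket.State == WebSocketState.Open)
        {
            var result = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), ct);
            if (result.MessageType == WebSocketMessageType.Close)
            {
                await socket.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, "bye", ct);
                break;
            }
            // Handle result.Count bytes of buffer here...
        }
    }

    // Serialize sends so that at most one SendAsync is in flight, which is what the
    // ManagedWebSocket note requires; this can run concurrently with the receive loop above.
    private static readonly SemaphoreSlim SendLock = new SemaphoreSlim(1, 1);

    public static async Task SendTextAsync(WebSocket socket, string text, CancellationToken ct)
    {
        var bytes = Encoding.UTF8.GetBytes(text);
        await SendLock.WaitAsync(ct);
        try
        {
            await socket.SendAsync(new ArraySegment<byte>(bytes), WebSocketMessageType.Text, true, ct);
        }
        finally
        {
            SendLock.Release();
        }
    }
}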

How to handle asynchronous errors in Go?

I am working on my first real Go project, a messaging API. I use channels to pass messages and other data between user goroutines and library goroutines that use a thread-unsafe, event-based C protocol library. For details, see https://github.com/apache/qpid-proton/blob/master/proton-c/bindings/go/README.md
My question is in 2 related parts:
1. What are common idioms for handling errors across channels?
If the goroutine at one end blows up, how do I ensure the other end unblocks, gets an error value, and doesn't get blocked again later?
For readers: I can close the channel, but no error info. I could pass a struct { data, error } or use a second channel. Pros & cons? Other ideas?
For writers: I can't close without a panic so I guess I need a second channel. Is this idiomatic?
select {
case sendChan <- data:
    sentOk()
case err := <-errChan:
    oops(err)
}
I also can't write after close so I need to store the error somewhere and check before trying to write. Any other approaches?
2. Exposing channels in APIs.
I need channels to pass error info: should I make those channels public fields or hide them in methods?
There is a tradeoff, and I don't have the experience to evaluate it:
Exposing channels lets users select directly, but it requires them to correctly implement the error handling patterns (check for errors before write, select for error as well as write). This seems complex and error-prone, but maybe that's because I'm not seasoned in Go.
Hiding channels in a method simplifies and enforces correct use of the library. But now an async user must create their own goroutine and channel(s). They may just duplicate what the library does already, which is silly. Also there is an extra goroutine and channel on the path. Maybe that's not a big deal, but the data channel is the critical path for my library and I think it has to be hidden along with the error channel.
I could do both: expose the channels for power users and provide a simple method wrapper for people with simple needs. That's more to support but worth it if neither alone can fit all cases.
The standard net.Conn uses blocking methods, not channels, and I wrote goroutines to pump data to my C event-loop channel, so I know it can be done, but I did not find it trivial. net.Conn is wrapping system calls, not channels, underneath, so "exposing the channels" is not an option. Do any of the standard libraries export channels with error handling? (time.After doesn't count; there are no errors.)
Thanks a lot!
Alan
Your question is a bit on the broad side but I'll try to give some guidance based on my experience writing highly concurrent code...
Personally I think making the channel a property of the object that gets initialized in a nice helpful NewMyObject() *MyObject method is a good design pattern. It means code using the object doesn't have to do boilerplate setup every time it wants to call some asynchronous method the type offers.
For readers: I can close the channel, but no error info. I could pass a struct { data, error } or use a second channel. Pros & cons? Other ideas?
Let the reader signal that it's time to return by closing the abort channel. The reader should simply use the temp, err := <-FromChannel paradigm and move on with execution if the data or error channel has closed. This should prevent 'send on closed channel' panics from the workers, since they will close their channel and return. When err != nil the reader will know to move on.
For writers: I can't close without a panic so I guess I need a second channel. Is this idiomatic?
Yes. Sadly I was quite pissed off with the uni-directional behavior of channels and thought it should be abstracted. Regardless, it's not. In my code I would not define this on the object that does work asynchronously. The paradigm I prefer is to use the closing signal (since a send on a channel is not one-to-many; only one goroutine will read it). Instead, I allocate the abort channel in the calling code, and if things need to shut down you close the abort channel and all the goroutines doing asynchronous work who are listening on that channel do their cleanup and return. You should also use a WaitGroup so you can wait for the goroutines to return before moving on.
So my basic summary:
1) let the caller of asynchronous methods signal when it's time to stop, not the other way around; a WaitGroup is better used to coordinate their returns
2) use a sync.WaitGroup in the calling code to know when your goroutines are finished so you can move on
3) allocate your error channel in the calling code and take advantage of the one-to-many signal produced by closing a channel; if you send on a channel you allocate in the caller, only a single instance will read from it, and if you put one on each instance you have to iterate a collection of instances to send to each one.
4) if you have a type that provides async methods that do work in the background, set up the channels to read off of in its initializer, document the async methods saying where to listen for data, and provide an example of a non-blocking select that passes an abort channel into the async method and listens on the method's data and error channels. If you need to kill a single routine you can accomplish this by closing one of the channels it owns, rather than killing them all by closing the caller's abort channel.
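Putting those pieces together, here is a rough sketch of the pattern I mean (the worker, result and abort names are made up; it is not taken from your library):

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// result pairs data with an error so a single channel can carry both.
type result struct {
	data string
	err  error
}

// worker produces results until it hits an error or the caller closes abort.
func worker(id int, out chan<- result, abort <-chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; ; i++ {
		if i == 3 { // pretend something went wrong
			select {
			case out <- result{err: errors.New("worker blew up")}:
			case <-abort:
			}
			return
		}
		select {
		case out <- result{data: fmt.Sprintf("worker %d item %d", id, i)}:
		case <-abort: // the caller asked everyone to stop
			return
		}
		time.Sleep(10 * time.Millisecond)
	}
}

func main() {
	out := make(chan result)
	abort := make(chan struct{})
	var wg sync.WaitGroup

	wg.Add(1)
	go worker(1, out, abort, &wg)

	// The reader consumes until an error arrives, then tells everyone to stop.
	for r := range out {
		if r.err != nil {
			fmt.Println("error:", r.err)
			break
		}
		fmt.Println("got:", r.data)
	}
	close(abort) // one-to-many stop signal owned by the caller
	wg.Wait()    // wait for the workers to clean up and return
}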
Hopefully that all makes sense.

blocked requests in io_service

I have implemented a client/server program using the boost::asio library.
In my implementation there are times when io_service.run() blocks indefinitely. If I pass another request to the io_service, the blocked call resumes and executes normally.
Is there any way to see what the pending requests inside the io_service queue are?
I have not used a work object to block the run call!
There are no official ways to query the io_service to find all pending requests. However, there are a few techniques to debug the problem:
Boost 1.47 introduced handler tracking. Simply define BOOST_ASIO_ENABLE_HANDLER_TRACKING and Boost.Asio will write debug output, including timestamps, an identifier, and the operation type, to the standard error stream.
Attach a debugger and dig through the layers to find and examine the operation queues. This answer covers both understanding handler tracking and using a debugger to examine an operation queue for the epoll_reactor.
Finally, if you believe it is a bug, then it may be worth updating to the latest version or checking the revision history for relevant changes. Regardless, describing the problem in more detail may allow others to help identify the source of the problem and potential solutions.
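For example, a minimal sketch of turning handler tracking on (the define must appear before any Boost.Asio header is included, and Boost 1.47 or later is assumed; the timer exists only to generate some tracked operations):

// Define this before including any Boost.Asio header (or pass it on the
// compiler command line, e.g. -DBOOST_ASIO_ENABLE_HANDLER_TRACKING).
#define BOOST_ASIO_ENABLE_HANDLER_TRACKING
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;

    boost::asio::deadline_timer timer(io, boost::posix_time::seconds(1));
    timer.async_wait([](const boost::system::error_code& ec)
    {
        std::cout << "timer fired: " << ec.message() << std::endl;
    });

    // While this runs, Asio writes lines like "@asio|<timestamp>|<id>|..." to
    // stderr describing every operation as it is created, invoked and destroyed.
    io.run();
}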
Now, I spent a few hours reading and experimenting (I need more boost::asio functionality for work as well) and it turns out: kind of.
But it is not as straightforward or readable as one might hope.
Under the hood (well, under the outermost hood) io_service has a bunch of other services registered, which do the work that the async_ operations of their respective areas require.
These are the "Services" described in the reference.
Now sadly, the services stay registered whether there is work to do or not. For example, if your io_service has a UDP socket, it will still have all the corresponding services, even if the socket itself is inactive.
But you can ask your io_service which services it has. Let's say you want to know whether your io_service, called m_io_service, has a UDP datagram_socket_service. Then you can call something like:
if (boost::asio::has_service<boost::asio::datagram_socket_service<boost::asio::ip::udp> >(m_io_service))
{
    // Whatever
}
That does not help a lot, because it will be true whether the socket is active or not. But once you know that you have that service, you can get a reference to it using use_service instead of has_service, with the same elegant amount of <>.
And now you can inspect the service to see what it is up to. Sadly, it will not tell you what the outstanding handlers' names are (probably partly because it does not know them), but if it is a socket, you can get its implementation_type and with that check whether it currently is_open, or find the local_endpoint as well as the remote_endpoint.
In the case of a deadline_timer_service you can, among other things, find out when it expires_at.
See the reference for more information on what each service is and is not willing to tell you.
http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/reference.html
This information should then hopefully allow you to determine which async_ operation did not return.
And if not, at the very least you can cancel any unexpectedly active services.

WCF: Are async calls more secure?

In the project I'm currently working on, we're using WCF.
Company policy forces us to use async calls, and the reason is supposedly security.
I've asked why this is so much more secure but I don't get clear answers.
Can someone explain why this would be so much more secure?
They are not. The same security (authentication, encryption) mechanisms and considerations apply whether a call blocks until it gets a response or it uses a callback.
The only way someone might be confused into thinking that async calls are more "safe/secure" is if they think that unhandled WCF exceptions will not bring down the main thread when the calls are asynchronous, since the exceptions will be raised inside the callback.
In this case, I would advise extreme caution when approaching the owner of this policy, to avoid career-limiting consequences. Some people can get emotionally attached to their policies.
There is no reason why an async call would be more secure than a sync call. I think you should talk to the owner of the policy about this.
No they are not more or less secure than synchronous calls. The only difference is the client waits for a response on synchronous calls, whereas on async it is notified of a response.
Are they coming from the angle that synchronous calls leave the connection open longer or something?
Just exposing a WCF operation using an async signature (BeginBlah/EndBlah) doesn't actually affect the exposed operation at all. When you view the metadata, an operation like
[OperationContract(AsyncPattern = true)]
IAsyncResult BeginSomething(AsyncCallback callback, object state);
void EndSomething(IAsyncResult result);
...actually still ends up being represented as an operation called 'Something'. And actually this is one of the nice things about WCF: the client and server can differ in whether they choose to implement/consume an operation synchronously.
So if you are generating WCF proxies (e.g. through Add Service Reference) then you will get synchronous versions of each operation, whether they are implemented asynchronously or not, unless you tick the little checkbox to generate the async overloads. And when you do, you then get async versions of operations that might only be declared synchronously on the server.
All WCF is doing is, on both the client and the server, giving you a choice about your threading model: do you want WCF to wait for the result, or are you going to signal it when you've finished? How the actual transport connection is managed is, to the best of my knowledge, totally unaffected. E.g. for a NetTcpBinding the socket still stays open for the duration of the call, either way.
So, to get to the point, I really struggle to imagine how this could possibly make any difference to the security envelope of a WCF service. If a service is exposed using an async pattern, and is genuinely implemented in an async way (async for outbound IO, or queues work via the thread pool or something) then there's probably an argument that it would be harder to DOS the service (by exhausting the pool of WCF IO threads), but that'd be about it.
See Synchronous and Asynchronous Operations on MSDN.
NB: If you are sharing the contract interface between the client and server then obviously the synchronicity of the two ends matches (because they are both using the same interface type), but that's just a limitation of using a shared interface. If you made another equivalent interface, differing only by the async pattern, you could still create a ChannelFactory against it just fine.
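To illustrate that last point, here's a hedged sketch (the interface and operation names are invented) of two contracts that describe the same wire operation, one declared synchronously and one with the async pattern:

using System;
using System.ServiceModel;

// Both interfaces describe a single wire operation called "Something".
[ServiceContract(Name = "ISomethingService")]
public interface ISomethingServiceSync
{
    [OperationContract]
    int Something(string input);
}

[ServiceContract(Name = "ISomethingService")] // same contract name, so the metadata lines up
public interface ISomethingServiceAsync
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginSomething(string input, AsyncCallback callback, object state);

    int EndSomething(IAsyncResult result);
}

// A client could bind either shape against the same endpoint, e.g.:
// var factory = new ChannelFactory<ISomethingServiceAsync>(binding, address);
// var proxy = factory.CreateChannel();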
I agree with the other answers - definitely not more secure.
Fire up Fiddler and watch a synchronous request vs. an asynchronous request. You'll basically see the same type of traffic (although the sync may send and receive more data since it's probably a postback). But you can intercept both of those requests, manipulate them, and resend them and cause havoc on your server.
Fiddler's a great tool, by the way. It's an eye-opener in terms of what kind of data and how much data you're sending to the server.