I am writing a gRPC service and am trying to use the asynchronous methods with help from Asio. The service calls into a C++ library that has synchronous methods. The code in that library uses interfaces that must be implemented by the user of the library. These interfaces contain synchronous methods.
I wish to implement these interfaces using asynchronous gRPC calls to other services. My challenge is that I cannot see how to implement an adaptor between the synchronous and the asynchronous world. Is this at all possible in C++?
In (my) theory I want this coroutine adaptor to send the gRPC request, after which the thread should continue executing other coroutines - rather than blocking - while waiting for the gRPC reply. When the reply is received, the synchronous method call returns to the library. This way I would be able to implement my gRPC service with only one thread, and I would not have to worry about multi-threading issues.
When using co_await in a method, the return type reflects the async nature of the method, so I cannot use co_await (directly) when implementing a synchronous interface. Instead I can post a lambda containing the co_await, but then I have to do a blocking wait on a future (or similar) and my single-threaded service is deadlocked. I have been thinking of using co_yield to make a kind of generator, since it seems that the consumer of such generators can be synchronous.
Best regards
Related
Basic question: How does Spring Reactor's WebClient achieve non-blocking behaviour compared to RestTemplate? Doesn't it have to block somewhere after it has dispatched the request to the external service (for example)? HTTP is synchronous by nature, right? So the calling application has to wait for the response?
How does the thread know in which context to react to the response from the service?
There are several separate questions here.
How are I/O operations managed?
What's the threading model behind this runtime?
How does the application deal with the request/response model behind HTTP?
In the case of WebClient and Project Reactor, the Netty event loop is used to queue/dispatch/process events. Each read/write operation is done in a non-blocking manner, meaning that no thread sits waiting for an I/O operation to complete. In this model, concurrency is not achieved through thread pools; instead, a small number of threads process units of work which should never block.
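As a toy illustration of the event-loop model described above (not Netty's actual implementation), here is a single-threaded loop in C++ that drains queued units of work; a completion is just another unit of work posted back to the loop, so no task ever parks the thread:

```cpp
#include <deque>
#include <functional>

// Toy single-threaded event loop: "units of work" are queued as callbacks
// and drained by one thread. Each callback must return quickly; a follow-up
// step (e.g. handling a response) is posted as a new callback.
class EventLoop {
public:
    void post(std::function<void()> task) { tasks.push_back(std::move(task)); }

    void run() {
        while (!tasks.empty()) {
            auto task = std::move(tasks.front());
            tasks.pop_front();
            task();  // never blocks; may post further tasks
        }
    }

private:
    std::deque<std::function<void()>> tasks;
};
```

A real event loop such as Netty's additionally waits in a selector (epoll/kqueue) for readiness on many sockets at once; the key point is that it never blocks on any single request.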
From a pure HTTP standpoint (i.e. if you were capturing the HTTP packets on the network), you'd see no big difference between a RestTemplate and a WebClient call. The HTTP transport itself doesn't support the backpressure concept. So the client still has to wait for the response - the difference here is that the application using that WebClient won't waste resources on waiting for that operation to complete - it will use them to process other events.
For more information on that, please check out the reactive programming introduction in the Reactor reference documentation and this talk given by Rossen Stoyanchev that explains things well if you're used to the typical Servlet container model.
Say your ReadProcessor needs to insert records using JDBC, or you need to integrate with a SOAP layer via a JAXWS call.
What is the best way to handle synchronous calls on Lagom's asynchronous (by design) platform?
In contrast to Vert.x, which provides dedicated facilities for handling blocking calls, Lagom does not seem to offer such an integrated feature.
According to their documentation (using JDBC as the example), one has to create one's own handling mechanisms, which internally create threads to run on.
So the answer is "do it yourself": create executors, runnables/callables and work with Futures to build your own non-blocking wrapper around your blocking calls.
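Lagom itself is Java/Scala, but the "do it yourself" pattern translates directly. In this hedged C++ sketch, blockingQuery is a hypothetical stand-in for any blocking dependency (JDBC insert, JAXWS call); it is offloaded to its own thread so the caller's event-loop thread can keep processing other work:

```cpp
#include <future>
#include <string>

// Hypothetical blocking call - stands in for any synchronous dependency
// (a JDBC-style query, a SOAP call) that you cannot change.
std::string blockingQuery(const std::string& sql) {
    return "result-for:" + sql;
}

// "Do it yourself" wrapper: run the blocking call on a separate thread via
// std::async, returning a future instead of parking the caller's thread.
std::future<std::string> queryAsync(std::string sql) {
    return std::async(std::launch::async,
                      [sql = std::move(sql)] { return blockingQuery(sql); });
}
```

In a real service you would use a dedicated, bounded executor rather than std::async, so that blocking work cannot exhaust threads; the shape of the wrapper stays the same.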
I have a WCF client I generated using SVCUTIL with the /async flag.
The server is synchronous, but I only use the Begin/End methods in my client.
Also, I added the attribute UseSynchronizationContext=false in the CallbackBehavior.
My question is: How does WCF work with threads in this mode?
Or better phrased: does WCF use the ThreadPool class to acquire new threads for the callbacks when I call simultaneous functions? Or does it have some custom implementation?
I've Googled the subject for hours and didn't find anything close to an answer.
EDIT: I see I've been a little unclear here - I'm not asking about the server app, I'm asking about the client app: how does it manage the threads on which it returns the callbacks when I set the UseSynchronizationContext flag to false?
Your service's threading is unaffected by how the client calls it. When you use the proxy's Begin/End methods, the proxy uses a different client thread to make the service call so that your application code does not block.
With .NET 4.5, task-based asynchronous calls are now preferred.
See Synchronous and Asynchronous Operations for an overview of the different patterns.
I want to implement a WCF service that responds immediately to the caller, but queues up an asynchronous job to be handled later. What is the best way to go about doing this? I've read the MSDN article on how to implement an asynchronous service operation, but that solution seems to still require the task to finish before responding to the caller.
There are many ways to accomplish this, depending on what you want to do and what technologies you are using (e.g. unless you are using Silverlight, you may not need to have your app call the service asynchronously). The most straightforward way to achieve your goal would be to have your service method start up a thread to perform the bulk of the processing and return immediately.
Another would be to record some kind of request (e.g. create an entry in a datastore) and return. Another process (e.g. a Windows service) could then pick up the request and perform the processing.
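The first suggestion - start background work and return immediately - can be sketched as a minimal in-process job queue. This C++ sketch is illustrative only (the original context is WCF/C#): enqueue returns at once while a single worker thread runs the job later; a real service would add error handling, persistence, and bounded capacity.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal background job queue: the "service method" calls enqueue and
// returns immediately; one worker thread drains the queue afterwards.
class JobQueue {
public:
    JobQueue() : worker([this] { drain(); }) {}

    ~JobQueue() {
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
        worker.join();  // finishes remaining jobs before shutting down
    }

    void enqueue(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m); jobs.push(std::move(job)); }
        cv.notify_one();
    }

private:
    void drain() {
        std::unique_lock<std::mutex> lk(m);
        for (;;) {
            cv.wait(lk, [this] { return done || !jobs.empty(); });
            if (jobs.empty() && done) return;
            auto job = std::move(jobs.front());
            jobs.pop();
            lk.unlock();
            job();  // runs after the caller has already returned
            lk.lock();
        }
    }

    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> jobs;
    bool done = false;
    std::thread worker;  // declared last so other members exist when it starts
};
```

The datastore variant from the second suggestion has the same shape, except the "queue" is durable and the worker lives in a separate process, which survives a crash of the service.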
Any WCF service can be made asynchronous -
One of the nice things about WCF is you can write a service synchronously. When you add a ServiceReference in the client, you have the option of generating asynchronous methods.
This will automatically make the service call asynchronous. The service will return when it's done, but the client will get two methods - BeginXXX and EndXXX - as well as XXXAsync plus an XXXCompleted event, either of which allows for completely asynchronous operation.
I understand that it isn't possible/sensible to use threads in RubyCocoa. However it is possible to use asynchronous Cocoa methods to avoid blocking user interface events.
I've successfully used a method on NSURLConnection to send an HTTP request and receive the response without blocking the user interface. But I'm wondering what other asynchronous Cocoa methods like this are available?
Also is it possible/sensible within a RubyCocoa application to use Ruby to spawn separate processes (as opposed to threads)? I suppose one issue would be how to wait for the process to complete, but perhaps this could be done by polling via NSTimer events?
Check out this client; it's written in Ruby and works pretty well.
httpclient