I am using JerseyClient to make async calls to an HTTP server and am directly creating Futures to store the responses. These can even be batch calls, in which case I create a list of futures.
This is working perfectly for now, but I am concerned about CPU utilization and thread count, since I am not creating a thread pool via ExecutorService, nor am I using FutureTask<> to create the futures.
A small code snippet of how I construct each future:
Future<Response> response = requestBuilder.async().get();
Are these concerns valid? Is it okay to continue with this approach, or would it not scale?
Another concern: get() might never be called on some of these futures. Would that leave threads running that are never cleaned up, because neither get() nor cancel() is ever invoked for the futures running on them?
It is always better to control the number of threads an application creates, so when calling REST APIs through the Jersey async API you should limit the number of threads the client spawns.
The following code caps the number of threads the Jersey client creates for async requests:
import org.glassfish.jersey.client.ClientConfig;
import org.glassfish.jersey.client.ClientProperties;

ClientConfig cc = new ClientConfig();
cc.property(ClientProperties.ASYNC_THREADPOOL_SIZE, 10); // at most 10 async worker threads
Client client = ClientBuilder.newClient(cc); // javax.ws.rs.client.*
Regarding get() on a Future instance: get() is just a blocking call that makes the current thread wait for the future to complete, in this case for the response to arrive. It does not affect the execution of the worker threads, so there should be no issue if you never call get() or cancel() on a response Future.
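Putting both points together, a minimal sketch of a batch call under the bounded pool (the URLs, the batch size, and the timeout value are assumptions, not from the original question):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import javax.ws.rs.core.Response;

// 100 queued requests still use at most the 10 worker threads configured above.
List<Future<Response>> futures = new ArrayList<>();
for (int i = 0; i < 100; i++) {
    futures.add(client.target("http://example.com/api/" + i).request().async().get());
}
for (Future<Response> f : futures) {
    Response r = f.get(5, TimeUnit.SECONDS); // blocks only the calling thread, not the workers
    r.close();
}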
I know that Express allows asynchronous functions in routes and middleware, but is this correct? I read documentation that seemed to say asynchronous routes or middleware should not be assigned. As of today, does Express support asynchronous functions? Do they block the execution process, or do they not?
For example, if I register an asynchronous route and several requests hit that route at the same time, are they resolved in parallel (see the reconstructed sketch after this question)? Or, when assigning asynchronous routes, will the requests be resolved one after the other?
This is what I mean by "blocking the execution process": if one request fails, are the other requests left pending? Or am I misunderstanding?
I hope you can help me.
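(The code examples from the original question were not preserved. Judging from the answer below, the route presumably looked something like this; getDBInfo is the name used in the answer, and everything else is a reconstruction:)

app.get('/data', async (req, res) => {
  const info = await getDBInfo(); // asynchronous DB call returning a promise
  res.send(info);
});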
You can use async functions just fine with Express. Whether or not they block has nothing to do with their being async; it has everything to do with what the code in the function does. If it starts an asynchronous operation and then returns, it won't block. But if it executes a bunch of time-consuming synchronous code before it returns, it will block.
If getDBInfo() is asynchronous and returns a promise that resolves when it completes, then your examples will have the three database operations in flight at the same time. Whether or not they actually run truly in parallel depends entirely upon your database implementation, but the code you show here allows them to run in parallel if the database implements that.
The single JavaScript thread will run the first call to getDBInfo(): the DB request is started and immediately returns a promise. Execution then hits the await and suspends the containing function, allowing the event loop to start processing the second request, which does the same. When that one hits its await, the event loop can process the third request, which does likewise. Sometime later, one of the DB calls completes (it could be any of the three), resolving its promise, which resumes the suspended function so it can send its response. Then, one after another, the other two DB calls finish and send their responses.
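To make the distinction concrete, a hypothetical contrast (slowSyncHash stands in for any CPU-heavy synchronous function; it is not from the original thread):

// Non-blocking: starts the DB request, suspends at the await, and frees
// the event loop for the next incoming request.
app.get('/data', async (req, res) => {
  res.send(await getDBInfo());
});

// Still blocking, despite being async: heavy synchronous work before the
// first await ties up the single JavaScript thread.
app.get('/hash', async (req, res) => {
  const digest = slowSyncHash(req.query.input); // synchronous CPU work
  res.send(digest);
});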
I have a class which listens to events coming from a socket at a very fast pace. I would like to feed these events into a coroutine Channel. The following code is used:
class MyClass(channel: Channel<String>) : ... {
    ...
    override fun onMessageReceived(message: String) {
        MyScope.launch {
            channel.send(message)
        }
    }
}
This does not work: sometimes the events come in so fast that they end up being posted out of order, because each launch spawns a new coroutine and everything happens in parallel. How can I ensure the sends happen in order?
I tried newSingleThreadContext, which did work, but it is considered experimental and carries a note saying it will eventually be removed. I am looking for a solution that is more correct and complete.
Instead of launching the sends in parallel, you should use a Channel with a capacity of Channel.UNLIMITED and have onMessageReceived call offer instead of send.
This is a lot cheaper than launching a new job for each send, and the channel will preserve the order.
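A minimal sketch of that approach, reusing the names from the question (note that on kotlinx.coroutines 1.5+ offer has been deprecated in favor of trySend):

import kotlinx.coroutines.channels.Channel

// Create the channel with unlimited capacity so offer never fails
// because the buffer is full.
val channel = Channel<String>(Channel.UNLIMITED)

class MyClass(private val channel: Channel<String>) {
    // Called on the socket thread. offer is non-suspending, so each message
    // is enqueued immediately, in exactly the order the callbacks arrive.
    fun onMessageReceived(message: String) {
        channel.offer(message) // trySend(message) on kotlinx.coroutines 1.5+
    }
}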
I'm using Spring WebFlux (with spring-reactor-netty) 2.1.0.RC1 and Lettuce 5.1.1.RELEASE.
When I invoke any Redis operation using the Reactive Lettuce API the execution always switches to the same individual thread (lettuce-nioEventLoop-4-1).
That is leading to poor performance since all the execution is getting bottlenecked in that single thread.
I know I could use publishOn every time I call Redis to switch to another thread, but that is error prone and still not optimal.
Is there any way to improve that? I see that Lettuce provides the ClientResources class to customize the Thread allocation but I could not find any way to integrate that with Spring webflux.
Besides, wouldn't the current behaviour be dangerous for a careless developer? Maybe the defaults should be tuned a little. I suppose the ideal scenario would be if Lettuce could just reuse the same event loop from webflux.
I'm adding this single-file Spring Boot snippet that can be used to reproduce what I'm describing:
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.reactive.RedisReactiveCommands;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import reactor.core.publisher.Mono;

@SpringBootApplication
public class ReactiveApplication {
    public static void main(String[] args) {
        SpringApplication.run(ReactiveApplication.class, args);
    }
}

@Controller
class TestController {
    private final RedisReactiveCommands<String, String> redis =
            RedisClient.create("redis://localhost:6379").connect().reactive();

    @RequestMapping("/test")
    public Mono<Void> test() {
        return redis.exists("key")
                .doOnSubscribe(subscription -> System.out.println("\nonSubscribe called on thread " + Thread.currentThread().getName()))
                .doOnNext(aLong -> System.out.println("onNext called on thread " + Thread.currentThread().getName()))
                .then();
    }
}
If I keep calling the /test endpoint I get the following output:
onSubscribe called on thread reactor-http-nio-2
onNext called on thread lettuce-nioEventLoop-4-1
onSubscribe called on thread reactor-http-nio-3
onNext called on thread lettuce-nioEventLoop-4-1
onSubscribe called on thread reactor-http-nio-4
onNext called on thread lettuce-nioEventLoop-4-1
That's an excellent question!
TL;DR
Lettuce always publishes using the I/O thread that is bound to the netty channel. This may or may not be suitable for your workload.
The Longer Read
Redis is single-threaded, so it makes sense to keep a single TCP connection. Netty's threading model is that all I/O work is handled by the EventLoop thread bound to the channel. Because of this arrangement, you receive all reactive signals on the same thread. It makes sense to benchmark the impact using various reactive sequences with various options.
A different usage scheme (e.g. pooled connections) directly changes the observed behavior, since pooling uses multiple connections and notifications therefore arrive on different threads.
Another alternative could be to provide an ExecutorService just for response signals (data, error, completion). In some scenarios, the cost of the context switch is negligible because it removes congestion from the I/O thread. In other scenarios, the context-switching cost may be more noticeable.
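From the application side, one way to apply that idea (a minimal sketch, not an official integration; a drop-in variant of the test() method from the question) is to move downstream processing off the I/O thread with publishOn right after the Redis call:

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

@RequestMapping("/test")
public Mono<Void> test() {
    return redis.exists("key")
            // Downstream signals now run on Reactor's parallel scheduler,
            // freeing the lettuce-nioEventLoop thread as soon as the
            // response arrives.
            .publishOn(Schedulers.parallel())
            .doOnNext(aLong -> System.out.println("onNext called on thread " + Thread.currentThread().getName()))
            .then();
}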
You can already observe the same behavior with WebFlux: every incoming connection is a new connection, so it is handled by a different inbound EventLoop thread. Reusing the same EventLoop thread for outbound notifications (the one that was used for inbound notifications) happens quite late, when the HTTP response is written to the channel.
This duality of responsibilities (completing a command, performing I/O) can tip toward a computation-heavy workload that drags performance away from I/O.
Additional resources:
Investigate on response thread switching #905.
I can't understand why to use async and await when fetching data from a server.
A network request from a client to a server, possibly over a long distance and a slow internet connection, can take an eternity on CPU time scales.
If it weren't async, the UI would block until the request is completed.
With async execution the UI thread is free to update a progress bar or render other stuff while the framework or operating system stack is busy on another thread sending and receiving the request your code made.
Most other calls that reach out to the operating system for files or other resources are async for the same reason. Not all of them are as slow as requests to a remote server, but often you can't know in advance whether a call will be fast enough to avoid hurting your frame rate and causing visible disruption or jank in the UI.
await makes the code after that statement execute only once the awaited async operation has completed. async/await exists to make asynchronous code look more like synchronous code, so it is easier to write and reason about.
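A minimal JavaScript sketch of the pattern (loadUser and renderUser are illustrative names, not from the original question):

async function loadUser() {
  // The request is handed off to the platform; this function suspends at
  // each await while the UI thread stays free to render and handle input.
  const response = await fetch('https://example.com/api/user');
  const user = await response.json();
  renderUser(user); // runs only once the response has fully arrived
}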
Async helps a lot with scalability and responsiveness.
Using synchronous requests blocks the client until a response has been received. As you increase concurrent users, you basically need a thread per user, which creates a lot of idle time and wasted computation. One request gets one response, in the order received.
Using asynchronous requests allows the client to send requests and handle responses in whatever order they are ready, rather than strictly one after another. This lets your threads work smarter.
Here's a pretty simple and solid resource from Mozilla:
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests#Asynchronous_request
I am curious to know how multiple async NSURLConnection connections are handled internally. I know they use an internal background thread, but say I create two async NSURLConnections concurrently in code: will that create two internal threads to run them in parallel, or will the second connection wait for the first to complete? In brief, please confirm how multiple async NSURLConnections achieve concurrency.
I guess they run in parallel. You can have a look at the WWDC session videos about network programming.
An Apple engineer said that handling URL requests one by one is expensive and that running them in parallel is much more reasonable. The reason is that most of the time spent processing a request goes to latency, not to logic processing on the device or server, so handling requests in parallel efficiently cuts the time wasted on latency.
So I guess async NSURLConnections are not processed one by one, because that would contradict this basic principle.
Besides, I have tried downloading images asynchronously using NSURLConnection, sending out a few requests at once, like:
for (i = 1 to 4) {
    send request i
}
The responses also do not come back in sequence.
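In real code, that loop might look something like this (a minimal Objective-C sketch; the URLs are illustrative):

for (NSInteger i = 1; i <= 4; i++) {
    NSURL *url = [NSURL URLWithString:
        [NSString stringWithFormat:@"https://example.com/image%ld.png", (long)i]];
    NSURLRequest *request = [NSURLRequest requestWithURL:url];
    // The four connections run concurrently; completion handlers fire in
    // whatever order the responses come back, not the order requests were sent.
    [NSURLConnection sendAsynchronousRequest:request
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
        NSLog(@"finished request %ld", (long)i);
    }];
}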
Each async NSURLConnection runs on its own thread after you start the connection (an async NSURLConnection has to be created and started on the main thread!), and its delegate and data-delegate methods are called on the main thread.
Another option is to use an NSOperationQueue and execute the requests as NSOperations. Please refer to http://www.icodeblog.com/2012/10/19/tutorial-asynchronous-http-client-using-nsoperationqueue/ for more detail.
Thanks,
Jim