Swapping between Schedulers - spring-webflux

I have a blocking workload that I want to execute on the bounded elastic scheduler. After this work is done, a lot of work follows that could be executed on the parallel scheduler, but by default it will continue to run on the thread from the bounded elastic scheduler.
When is it "correct" to drop the previous scheduler you set earlier in the chain? Is there ever a reason to do so if it's not strictly necessary, because of thread starvation, for example?
I can switch the scheduler of a chain by "breaking" the existing chain with operators such as flatMap, then, or switchIfEmpty, and probably a few more. Example:
public Mono<Void> multipleSchedulers() {
    return blockingWorkload()
            .doOnSuccess(result -> log.info("Thread {}", Thread.currentThread().getName())) // Thread boundedElastic-1
            .subscribeOn(Schedulers.boundedElastic())
            .flatMap(result -> Mono.just(result)
                    .subscribeOn(Schedulers.parallel()))
            .doOnSuccess(result -> log.info("Thread {}", Thread.currentThread().getName())); // Thread parallel-1
}

Generally, it is not bad practice to switch execution to another scheduler partway through your reactive chain.
To switch execution to another scheduler in the middle of your chain, you can use the publishOn() operator. Any subsequent operator calls will then run on a worker from the scheduler supplied to that publishOn().
Take a look at section 4.5.1 of the Reactor reference documentation.
However, you should know clearly why you are doing it. Is there any reason for this?
If you want to run some long computational process (some CPU-bound work), then it is recommended to execute it on Schedulers.parallel()
For making blocking calls it is recommended to use Schedulers.boundedElastic()
However, usually for making blocking calls we use subscribeOn(Schedulers.boundedElastic()) on the "blocking publisher" directly.
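To illustrate both points, here is a minimal sketch; blockingWorkload() and log are placeholders carried over from the question, not a definitive implementation:
public Mono<Void> switchWithPublishOn() {
    return blockingWorkload()
            // the blocking source is subscribed on a boundedElastic thread
            .subscribeOn(Schedulers.boundedElastic())
            // every operator below this line runs on a parallel worker
            .publishOn(Schedulers.parallel())
            .doOnSuccess(result -> log.info("Thread {}", Thread.currentThread().getName())) // parallel-N
            .then();
}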
Wrapping blocking calls in reactor

Related

Should parallel API call use Schedulers.parallel() or Schedulers.boundedElastic()

To be honest, I have no idea how schedulers work in Reactor. I have read a few of the docs and this is what I found.
Schedulers.parallel() is good for CPU-intensive but short-lived tasks. It can execute N such tasks in parallel (by default N == number of CPUs).
Schedulers.elastic() and Schedulers.boundedElastic() are good for more long-lived tasks (e.g. blocking IO tasks). The elastic one spawns threads on demand without a limit, while the more recently introduced boundedElastic does the same with a ceiling on the number of created threads.
So in my API calls there's a task where I have to poll a request over and over again until its state is ready.
Flux.just(order1, order2)
    .parallel(4)
    .runOn(Schedulers.parallel())
    .flatMap { order -> createOrder(order) }
    .flatMap { orderId ->
        pollConfirmOrderStatus(orderId)
            .retryWhen(notReady)
    }
    .sequential() // merge the parallel rails back into a single Flux
    .collectList()
    .subscribe()
As you can see I use Schedulers.parallel() and it works fine, but I'm concerned about blocking CPU usage since my server doesn't have that many CPU cores. The pollConfirmOrderStatus call takes about 1-2 minutes, so I'm not sure whether it would block other processes on my server from accessing the CPU. So should I use Schedulers.parallel() or Schedulers.boundedElastic() here?
If your method pollConfirmOrderStatus() doesn't block the parallel Scheduler's threads, it should be fine. Otherwise you might be blocking all the available threads in the parallel Scheduler, which might end up in a deadlock if your state never becomes ready.
The post linked below explains that the parallel scheduler is reserved for non-blocking calls, and that you can use BlockHound to spot blocking calls made from threads intended to be non-blocking.
https://spring.io/blog/2019/03/28/reactor-debugging-experience
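If pollConfirmOrderStatus() does block internally, a common alternative is to wrap the blocking call and subscribe it on the bounded elastic scheduler, so the parallel workers stay free for CPU-bound work. Here is a minimal Java sketch of that idea; pollOrderStatusBlocking() and the retry policy are illustrative assumptions, not code from the question:
import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import reactor.util.retry.Retry;

public class OrderPollingSketch {

    public static void main(String[] args) {
        Flux.just("order-1", "order-2")
            .flatMap(orderId ->
                // wrap the blocking poll lazily so nothing blocks at assembly time
                Mono.fromCallable(() -> pollOrderStatusBlocking(orderId))
                    // run the blocking call on an elastic thread, not a parallel worker
                    .subscribeOn(Schedulers.boundedElastic())
                    // retry until ready; this policy is just an assumption
                    .retryWhen(Retry.fixedDelay(60, Duration.ofSeconds(2))))
            .collectList()
            .block();
    }

    // Placeholder for the long-running blocking status poll from the question;
    // imagine it throws while the order is not yet confirmed, triggering the retry.
    private static String pollOrderStatusBlocking(String orderId) {
        return orderId + ":CONFIRMED";
    }
}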

Can AsyncEventListener.process wait on value-returning async Callable(s)?

I want to know whether it is safe for the boolean processEvents(final List<AsyncEvent> events) method of a serial AsyncEventListener, which needs to use the Geode/GemFire API, to wait on a value-returning async task (e.g., a Callable<Boolean>), so that processEvents returns true only if the work done in the async task was successful.
Obviously, the point is that I don't want the AsyncEventListener.processEvents method to return true (indicating success) if in fact the processing of those events didn't actually succeed. It's just that the processing of the events is happening in another thread.
I do understand that it would be bad for the async task(s) to take very long and tie up the thread on which the AsyncEventListener is processing.
But, other than throughput, is there a danger in doing this kind of thing? Is there a better approach?
You can do whatever you want within your AsyncEventListener; the actual thread invoking the callback is independent of the thread executing the actual cache operation, so clients won't notice the "slowness" in your listener.
That said, it's in your best interest to make sure the implementation is able to "keep up" with the rate at which events are added to the queue, otherwise they will start to pile up and the whole cluster might suffer.
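For illustration, here is a minimal Java sketch of a listener that waits on a value-returning task; the executor, the 30-second timeout, and doTheRealWork() are assumptions for the example, not part of the Geode API:
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.geode.cache.asyncqueue.AsyncEvent;
import org.apache.geode.cache.asyncqueue.AsyncEventListener;

public class BlockingForwardingListener implements AsyncEventListener {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    @Override
    public boolean processEvents(List<AsyncEvent> events) {
        Future<Boolean> result = executor.submit(() -> doTheRealWork(events));
        try {
            // Block the listener thread until the async task reports success or failure.
            return result.get(30, TimeUnit.SECONDS); // timeout value is an assumption
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false; // false => the batch stays queued and is retried
        } catch (ExecutionException | TimeoutException e) {
            return false;
        }
    }

    @Override
    public void close() {
        executor.shutdown();
    }

    // Placeholder for the actual event processing done on another thread.
    private boolean doTheRealWork(List<AsyncEvent> events) {
        return true;
    }
}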

Reactive Redis (Lettuce) always publishing to single thread

I'm using Spring WebFlux (with spring-reactor-netty) 2.1.0.RC1 and Lettuce 5.1.1.RELEASE.
When I invoke any Redis operation using the Reactive Lettuce API the execution always switches to the same individual thread (lettuce-nioEventLoop-4-1).
That is leading to poor performance since all the execution is getting bottlenecked in that single thread.
I know I could use publishOn every time I call Redis to switch to another thread, but that is error prone and still not optimal.
Is there any way to improve that? I see that Lettuce provides the ClientResources class to customize the Thread allocation but I could not find any way to integrate that with Spring webflux.
Besides, wouldn't the current behaviour be dangerous for a careless developer? Maybe the defaults should be tuned a little. I suppose the ideal scenario would be if Lettuce could just reuse the same event loop from webflux.
I'm adding this spring boot single class snippet that can be used to reproduce what I'm describing:
@SpringBootApplication
public class ReactiveApplication {

    public static void main(String[] args) {
        SpringApplication.run(ReactiveApplication.class, args);
    }
}

@Controller
class TestController {

    private final RedisReactiveCommands<String, String> redis =
            RedisClient.create("redis://localhost:6379").connect().reactive();

    @RequestMapping("/test")
    public Mono<Void> test() {
        return redis.exists("key")
                .doOnSubscribe(subscription -> System.out.println("\nonSubscribe called on thread " + Thread.currentThread().getName()))
                .doOnNext(aLong -> System.out.println("onNext called on thread " + Thread.currentThread().getName()))
                .then();
    }
}
If I keep calling the /test endpoint I get the following output:
onSubscribe called on thread reactor-http-nio-2
onNext called on thread lettuce-nioEventLoop-4-1
onSubscribe called on thread reactor-http-nio-3
onNext called on thread lettuce-nioEventLoop-4-1
onSubscribe called on thread reactor-http-nio-4
onNext called on thread lettuce-nioEventLoop-4-1
That's an excellent question!
The TL;DR:
Lettuce always publishes using the I/O thread that is bound to the netty channel. This may or may not be suitable for your workload.
The Longer Read
Redis is single-threaded, so it makes sense to keep a single TCP connection. Netty's threading model is that all I/O work is handled by the EventLoop thread that is bound to the channel. Because of this arrangement, you receive all reactive signals on the same thread. It makes sense to benchmark the impact using various reactive sequences with various options.
A different usage scheme (i.e. using pooled connections) directly changes the observed results, as pooling uses different connections and so notifications are received on different threads.
Another alternative could be to provide an ExecutorService just for response signals (data, error, completion). In some scenarios, the cost of context switching can be neglected because it removes congestion from the I/O thread. In other scenarios, the context-switching cost might be more noticeable.
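As a sketch of that idea, building on the controller from the question (the class name here is illustrative), a publishOn() right after the Redis call hands all downstream signals off to another scheduler, keeping the Lettuce I/O thread free:
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.reactive.RedisReactiveCommands;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

class RedisHopExample {

    private final RedisReactiveCommands<String, String> redis =
            RedisClient.create("redis://localhost:6379").connect().reactive();

    public Mono<Void> test() {
        return redis.exists("key")
                // downstream signals now run on parallel workers, not lettuce-nioEventLoop
                .publishOn(Schedulers.parallel())
                .doOnNext(count -> System.out.println(
                        "onNext called on thread " + Thread.currentThread().getName())) // parallel-N
                .then();
    }
}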
You can already observe the same behavior with WebFlux: every incoming connection is a new connection, and so it is handled by a different inbound EventLoop thread. Reusing the same EventLoop thread for outbound notification (the one that was used for inbound notifications) happens quite late, when the HTTP response is written to the channel.
This duality of responsibilities (completing a command, performing I/O) can gravitate towards a more computation-heavy workload, which drags performance away from I/O.
Additional resources:
Investigate on response thread switching #905.

How to ACK celery tasks with parallel code in reactor?

I have a celery task that, when called, simply ignites the execution of some parallel code inside a twisted reactor. Here's some sample (not runnable) code to illustrate:
def run_task_in_reactor():
    # this takes a while to run
    do_something()
    do_something_more()

@celery.task
def run_task():
    print "Started reactor"
    reactor.callFromThread(run_task_in_reactor)
(For the sake of simplicity, please assume that the reactor is already running when the task is received by the worker; I used the @worker_process_init.connect signal to start my reactor in another thread as soon as the worker comes up.)
When I call run_task.delay(), the task finishes pretty quickly (since it does not wait for run_task_in_reactor() to finish, it only schedules its execution in the reactor). And when run_task_in_reactor() finally runs, do_something() or do_something_more() can throw an exception, which will go unnoticed.
Using pika to consume from my queue, I can use an ACK inside do_something_more() to make the worker notify the correct completion of the task, for instance. However, inside Celery this does not seem to be possible (or, at least, I don't know how to accomplish the same effect).
Also, I cannot remove the reactor, since it is a requirement of some third-party code I'm using. Other ways to achieve the same result are appreciated as well.
Use twisted.internet.threads.blockingCallFromThread(reactor, f) instead. It blocks the calling thread until f has finished running inside the reactor and re-raises any exception in the caller, so the Celery task only completes successfully (and is only acknowledged) once the reactor-side work has actually succeeded.

Alternatives to dispatch_get_current_queue() for completion blocks in iOS 6?

I have a method that accepts a block and a completion block. The first block should run in the background, while the completion block should run on whatever queue the method was called from.
For the latter I always used dispatch_get_current_queue(), but it seems like it's deprecated in iOS 6 or higher. What should I use instead?
The pattern of "run on whatever queue the caller was on" is appealing, but ultimately not a great idea. That queue could be a low priority queue, the main queue, or some other queue with odd properties.
My favorite approach to this is to say "the completion block runs on an implementation defined queue with these properties: x, y, z", and let the block dispatch to a particular queue if the caller wants more control than that. A typical set of properties to specify would be something like "serial, non-reentrant, and async with respect to any other application-visible queue".
** EDIT **
Catfish_Man put an example in the comments below, I'm just adding it to his answer.
- (void)aMethodWithCompletionBlock:(dispatch_block_t)completionHandler
{
    dispatch_async(self.workQueue, ^{
        [self doSomeWork];
        dispatch_async(self.callbackQueue, completionHandler);
    });
}
This is fundamentally the wrong approach for the API you are describing. If an API accepts a block and a completion block to run, the following facts need to be true:
The "block to run" should be run on an internal queue, e.g. a queue which is private to the API and hence entirely under that API's control. The only exception to this is if the API specifically declares that the block will be run on the main queue or one of the global concurrent queues.
The completion block should always be expressed as a tuple (queue, block) unless the same assumptions as for #1 hold true, e.g. the completion block will be run on a known global queue. The completion block should furthermore be dispatched async on the passed-in queue.
These are not just stylistic points, they're entirely necessary if your API is to be safe from deadlocks or other edge-case behavior that WILL otherwise hang you from the nearest tree someday. :-)
The other answers are great, but for me the answer is structural. I have a method like this on a singleton:
- (void)dispatchOnHighPriorityNonMainQueue:(simplest_block)block forceAsync:(BOOL)forceAsync {
    if (forceAsync || [NSThread isMainThread])
        dispatch_async_on_high_priority_queue(block);
    else
        block();
}
which has two dependencies, which are:
static void dispatch_async_on_high_priority_queue(dispatch_block_t block) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), block);
}
and
typedef void (^simplest_block)(void); // also could use dispatch_block_t
That way I centralize my calls to dispatch on the other thread.
You should be careful about your use of dispatch_get_current_queue in the first place. From the header file:
Recommended for debugging and logging purposes only: The code must not make any assumptions about the queue returned, unless it is one of the global queues or a queue the code has itself created. The code must not assume that synchronous execution onto a queue is safe from deadlock if that queue is not the one returned by dispatch_get_current_queue().
You could do either one of two things:
Keep a reference to the queue you originally posted on (if you created it via dispatch_queue_create), and use that from then on.
Use system defined queues via dispatch_get_global_queue, and keep a track of which one you're using.
Effectively, where you previously relied on the system to keep track of the queue you are on, you now have to do it yourself.
Apple has deprecated dispatch_get_current_queue(), but left a hole in another place, so we are still able to get the current dispatch queue:
if let currentDispatch = OperationQueue.current?.underlyingQueue {
    print(currentDispatch)
    // Do stuff
}
This works for the main queue, at least.
Note that the underlyingQueue property is available since iOS 8.
If you need to run the completion block on the original queue, you may also use OperationQueue directly, that is, without GCD.
For those who still need queue comparison, you can compare queues by their label or by queue-specific data.
Check this https://stackoverflow.com/a/23220741/1531141
This is a me too answer. So I will talk about our use case.
We have a services layer and the UI layer (among other layers). The services layer runs tasks in the background (data manipulation tasks, CoreData tasks, network calls, etc.). The services layer has a couple of operation queues to satisfy the needs of the UI layer.
The UI layer relies on the services layer to do its work and then run a success completion block. This block can have UIKit code in it. A simple use case is to get all messages from the server and reload the collection view.
Here we guarantee that the blocks passed into the services layer are dispatched on the queue on which the service was invoked. Since dispatch_get_current_queue is a deprecated method, we use NSOperationQueue.currentQueue to get the caller's current queue. An important note on this property:
Calling this method from outside the context of a running operation typically results in nil being returned.
Since we always invoke our services on a known queue (Our custom queues and Main queue) this works well for us. We do have cases where serviceA can call serviceB which can call serviceC. Since we control where the first service call is being made from, we know the rest of the services will follow the same rules.
So NSOperationQueue.currentQueue will always return one of our Queues or the MainQueue.