gRPC future vs blocking stub in client - Kotlin

I'm writing a client application in Kotlin where I need to call a gRPC service that has both a future-based and a blocking stub.
If I choose to go with the future option, I will at some point need to make something like a CompletableFuture.get() call, which means I will be blocking anyway.
So, what's the difference between one and the other for this case? Or is it the same?

I will at some point need to make something like a CompletableFuture.get() call, which means I will be blocking anyway
Not necessarily. With a ListenableFuture, CompletableFuture, or other completion-hookable mechanisms, you don't have to block at all. You could register an action to run on completion, or you could wrap them in suspending calls to work with coroutines, etc.
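For instance, the gRPC future stub hands you a Guava ListenableFuture, so you can attach a callback instead of calling get(). A minimal Kotlin sketch, where UserReply is a hypothetical stand-in for your generated response message:

import com.google.common.util.concurrent.FutureCallback
import com.google.common.util.concurrent.Futures
import com.google.common.util.concurrent.ListenableFuture
import com.google.common.util.concurrent.MoreExecutors

// UserReply is a hypothetical response type; pass in the future returned by the generated future stub
fun handleReplyAsync(future: ListenableFuture<UserReply>) {
    Futures.addCallback(future, object : FutureCallback<UserReply> {
        override fun onSuccess(result: UserReply?) {
            // runs when the RPC completes; no thread was blocked waiting for it
            println("got reply: $result")
        }

        override fun onFailure(t: Throwable) {
            // runs if the RPC fails
            t.printStackTrace()
        }
    }, MoreExecutors.directExecutor())
}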
Also, even if you did call get() and block, it still allows you to block later than the start of the call, and do other things between the initial call and the get(), which would be concurrent with it:
val future = callSomethingWithFuture()
doStuffConcurrently()
val result = future.get()
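And if you are on coroutines, you don't need get() at all: the kotlinx-coroutines-guava artifact provides an await() extension for ListenableFuture, so the call suspends rather than blocks. A sketch, with hypothetical generated stub/message types:

import kotlinx.coroutines.guava.await

// requires org.jetbrains.kotlinx:kotlinx-coroutines-guava on the classpath;
// UserServiceFutureStub, UserRequest and UserReply are hypothetical generated gRPC classes
suspend fun fetchUser(stub: UserServiceFutureStub, request: UserRequest): UserReply =
    stub.getUser(request).await() // suspends the coroutine instead of blocking the thread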

Related

Proper logging in reactive application - WebFlux

Lately I have been thinking about the proper use of loggers in our applications.
For example, I have a controller which returns a stream of users, but in the log I can see that the "Fetch Users" message is logged by a different thread than the one running the processing pipeline. Is that a good approach?
@Slf4j
class AwesomeController {
    @GetMapping(path = "/users")
    public Flux<User> getUsers() {
        log.info("Fetch users..");
        return Flux.just(...).subscribeOn(Schedulers.newParallel("my-custom"));
    }
}
In this case two threads are used, which from my perspective is not a good option, but I can't find any good practices for loggers in reactive applications. I think the approach below is better, because the allocation happens on the processing thread rather than on the Spring WebFlux thread, which the logger could potentially block.
@GetMapping(path = "/users")
public Flux<User> getUsers() {
    return Flux.defer(() -> {
        return Mono.fromCallable(() -> {
            log.info("Fetch users..");
            .....
        });
    }).subscribeOn(Schedulers.newParallel("my-custom"));
}
The normal thing to do would be to configure the logger as asynchronous (this usually has to be explicit, as per the comments, but all modern logging frameworks support it) and then just include it "normally" (either as a separate line as you have there, or in a side-effect method such as doOnNext() if you want it halfway through the reactive chain).
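For instance, a side-effect operator keeps the logging inside the chain, so it runs on whatever thread is executing the pipeline. A rough Kotlin sketch, where UserService, User and the findAll() call are hypothetical stand-ins:

import org.slf4j.LoggerFactory
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController
import reactor.core.publisher.Flux

@RestController
class UsersController(private val userService: UserService) { // UserService is hypothetical

    private val log = LoggerFactory.getLogger(UsersController::class.java)

    @GetMapping("/users")
    fun getUsers(): Flux<User> =
        userService.findAll()
            // side effect inside the chain: runs per emitted element, on the pipeline's thread
            .doOnNext { user -> log.info("Fetched user: {}", user) }
}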
If you want to be sure that the logger's call isn't blocking, then use BlockHound to make sure (this is never a bad idea anyway.) But in any case, I can't see a use case for your second example there - that makes the code rather difficult to follow with no real advantage.
One final thing to watch out for - remember that if you include the logging statement separately as you have above, rather than as part of the reactive chain, then it'll execute at call time, when the method is invoked, rather than at subscription time. That may not matter in scenarios like this where the two happen near simultaneously, but it would be rather confusing if (for example) you're returning a publisher which may be subscribed to multiple times - in that case, you'd only ever see the "Fetch users..." statement once, which isn't obvious when glancing through the code.
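To make the call-time vs. subscription-time distinction concrete, here is a small sketch reusing the same hypothetical userService and log as above: the first statement runs once when the controller method is invoked, while doOnSubscribe runs every time the returned Flux is subscribed to.

fun getUsers(): Flux<User> {
    log.info("Assembling the users pipeline")        // runs once, at call time
    return userService.findAll()
        .doOnSubscribe { log.info("Fetch users..") } // runs once per subscription
}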

What is the use of add_callback_threadsafe() method in pika?

From the description in the pika documentation, I can't quite work out what the add_callback_threadsafe() method does. It says, "Requests a call to the given function as soon as possible in the context of this connection’s thread". Specifically, which event does this callback get associated with? Since add_callback_threadsafe() doesn't receive any "event" argument, how does pika know when to invoke that callback?
Also, in the official example, why do we even need to build the partial function and register the callback in the do_work() method? Could we not just call the ack_message() method after the time.sleep() is over?
The reason why the method exists:
How to add multiprocessing to consumer with pika (RabbitMQ) in python
If you use the same rabbit connection/channel with multiple threads/processes then you'll be prone to crashes. If you want to avoid that, you need to use that method.
Could we not just call the ack_message() method after the time.sleep() is over?
No, you would be calling ack_message from a different thread than the main one; that's not thread-safe and is prone to crashes. You need to call ack_message from a thread-safe context, i.e., via add_callback_threadsafe().

Should I use synchronous Mono::map or asynchronous Mono::flatMap?

The Project Reactor documentation says that Mono::flatMap is asynchronous, as shown below.
So, I can write all my methods to return Mono publishers like this.
public Mono<String> myMethod(String name) {
    return Mono.just("hello " + name);
}
and use it with Mono::flatMap like this:
Mono.just("name").flatMap(this::myMethod);
Does this make the execution of my method asynchronous? Does this make my code more reactive, better and faster than just using Mono::map? Is the overhead prohibitive for doing this for all my methods?
public final <R> Mono<R> flatMap(Function<? super T,? extends Mono<? extends R>> transformer)
Transform the item emitted by this Mono asynchronously, returning the value emitted by another Mono (possibly changing the value type).
Does this make the execution of my method asynchronous?
Let's go to the definition of Asynchronous for this:
Asynchronous programming is a means of parallel programming in which a unit of work runs separately from the main application thread and notifies the calling thread of its completion, failure or progress.
Here your unit of work is happening on the same thread, unless you do a subscribeOn with a Scheduler. So this isn't async.
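A quick sketch of that point, using a Kotlin rendering of the question's myMethod: without a Scheduler everything runs on the subscribing thread, while subscribeOn moves the whole subscription (and thus the work) onto a scheduler thread.

import reactor.core.publisher.Mono
import reactor.core.scheduler.Schedulers

fun myMethod(name: String): Mono<String> = Mono.just("hello $name")

// Runs entirely on the thread that subscribes:
val sameThread: Mono<String> = Mono.just("name").flatMap { myMethod(it) }

// The subscription, and therefore the work, happens on a boundedElastic thread:
val otherThread: Mono<String> = Mono.just("name")
    .flatMap { myMethod(it) }
    .subscribeOn(Schedulers.boundedElastic())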
Does this make my code more reactive, better and faster than just using Mono::map?
No way. In this case the publisher Mono.just("hello " + name) immediately notifies the subscriber that it is done, and the thread on which the processing was going on immediately picks up that event from the event loop and starts processing the response.
Rather, this might cause a few more operations internally than a map, which simply transforms the element.
Thus, ideally, you should use flatMap when you have an I/O operation (like a DB call) or a network call, which might take some time; that time can be spent doing some other task if all the threads are busy.
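In other words: map for pure, in-memory transformations; flatMap when the mapper itself returns a Publisher, typically wrapping I/O. A hedged Kotlin sketch, where userRepository.findByName returning Mono<User> is a hypothetical reactive database call:

import reactor.core.publisher.Mono

// Pure transformation: map is enough
val greeting: Mono<String> = Mono.just("name").map { "hello $it" }

// The mapper returns a Mono (I/O), so flatMap is the right tool;
// userRepository and User are hypothetical
val user: Mono<User> = Mono.just("name").flatMap { userRepository.findByName(it) }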

How to test handle_cast in a GenServer properly?

I have a GenServer which is responsible for contacting an external resource. The result of calling the external resource is not important, and even occasional failures are acceptable, so using handle_cast from other parts of the code seems appropriate. I do have an interface-like module for that external resource, and I'm using one GenServer to access the resource. So far so good.
But when I tried to write tests for this GenServer, I couldn't figure out how to test the handle_cast. I have interface functions for the GenServer, and I tried to test those, but they always return :ok unless the GenServer isn't running, so I couldn't really test anything that way.
I changed the code a bit. I abstracted the code in handle_cast into another function, and created a similar handle_call callback. Then I could test handle_call easily, but that was kind of a hack.
I would like to know how people generally test async code, like that. Was my approach correct, or acceptable? If not, what to do then?
The trick is to remember that a GenServer process handles messages one by one sequentially. This means we can make sure the process received and handled a message, by making sure it handled a message we sent later. This in turn means we can change any async operation into a synchronous one, by following it with a synchronisation message, for example some call.
The test scenario would look like this:
Issue asynchronous request.
Issue synchronous request and wait for the result.
Assert on the effects of the asynchronous request.
If the server doesn't have any suitable function for synchronisation, you can consider using :sys.get_state/2 - a call meant for debugging purposes, that is properly handled by all special processes (including GenServer) and is, what's probably the most important thing, synchronous. I'd consider it perfectly valid to use it for testing.
You can read more about other useful functions from the :sys module in GenServer documentation on debugging.
A cast request is of the form:
Module:handle_cast(Request, State) -> Result
Types:
  Request = term()
  State = term()
  Result = {noreply,NewState} |
           {noreply,NewState,Timeout} |
           {noreply,NewState,hibernate} |
           {stop,Reason,NewState}
  NewState = term()
  Timeout = int()>=0 | infinity
  Reason = term()
so it is quite easy to perform a unit test by just calling it directly (no need to even start a server), providing a Request and a State, and asserting the returned Result. Of course it may also have some side effects (like writing to an ets table, modifying the process dictionary...) so you will need to initialize those resources before, and check the effect after the assert.
For example:
test_add() ->
    {noreply,15} = my_counter:handle_cast({add,5},10).

How to simulate an uncompleted Netty ChannelFuture

I'm using Netty to write a client application that sends UDP messages to a server. In short I'm using this piece of code to write the stream to the channel:
ChannelFuture future = channel.write(request, remoteInetSocketAddress);
future.awaitUninterruptibly(timeout);
if (!future.isDone()) {
    // abort logic
}
Everything works fine, but one thing: I'm unable to test the abort logic, as I cannot make the write fail - i.e. even if the server is down, the future completes successfully. The write operation usually takes about 1 ms, so setting a very small timeout doesn't help much.
I know the preferred way would be to use an async model instead of the await() call; however, for my scenario I need it to be synchronous and I need to be sure it gets finished at some point.
Does anyone know how could I simulate an uncompleted future?
Many thanks in advance!
MM
Depending on how your code is written, you could use a mocking framework such as Mockito. If that is not possible, you can also use a "connected" UDP socket, i.e. a datagram socket that is connected to a remote address. If you send to a bogus server you should get a PortUnreachableException or something similar.
Netty has a class FailedFuture that can be used for this purpose.
You can, for example, mock your class in tests to simulate the following:
ChannelFuture future;
if (ALWAYS_FAIL) {
    future = channel.newFailedFuture(new Exception("I failed"));
} else {
    future = channel.write(request, remoteInetSocketAddress);
}