Pipeline Redis commands with Reactive Lettuce

I'm using Spring Boot WebFlux + Project Reactor + Lettuce for connecting to and querying Redis in a non-blocking way.
I have configured a ReactiveRedisTemplate with a LettuceConnectionFactory. The Spring documentation states that the only way to use a pipeline with ReactiveRedisTemplate is the execute(<RedisCallback>) method. On the non-reactive RedisTemplate, I see that there is an executePipelined(<RedisCallback>) method which opens/closes a pipeline around the callback. But the ReactiveRedisTemplate.execute method uses a LettuceReactiveRedisConnection, and neither Spring's ReactiveRedisConnection nor Lettuce's reactive API has any reference to a pipeline.
So my question is: is it possible to pipeline your commands when using Spring ReactiveRedisTemplate + a reactive Lettuce connection?
I also noticed that using ReactiveRedisTemplate.execute with a RedisCallback that issues multiple Redis commands runs more slowly than just calling the commands individually.
Sample code for pipelining with ReactiveRedisTemplate:
reactiveRedisTemplate.execute(connection -> keys.flatMap(key ->
        connection.hashCommands()
                .hGetAll(ByteBuffer.wrap(key.getBytes()))))
    .map(Map.Entry::getValue)
    .map(ByteUtils::getBytes)
    .map(b -> {
        try {
            return mapper.readValue(b, Value.class);
        } catch (IOException e1) {
            return null;
        }
    })
    .collectList();
Code without pipeline:
keys.flatMap(key -> reactiveRedisTemplate.opsForHash().entries(key))
    .map(Map.Entry::getValue)
    .cast(Value.class)
    .collectList();
Thanks!

I don't think it's possible using ReactiveRedisTemplate or the reactive API of Lettuce. Indeed, when you construct your reactive chain, parts of it are evaluated lazily.
getAsyncValue(a).flatMap(value -> doSomething(value)).subscribe()
For instance, in this sample doSomething will be triggered only when getAsyncValue returns a value.
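A minimal sketch of that laziness with plain Reactor (the method names are illustrative, not from Lettuce):
Mono<String> chain = Mono.fromSupplier(() -> {
    System.out.println("getAsyncValue executed");
    return "value";
}).flatMap(value -> Mono.fromSupplier(() -> {
    System.out.println("doSomething executed");
    return value + " processed";
}));

System.out.println("chain assembled, nothing executed yet"); // prints first
chain.subscribe(System.out::println); // only now do both suppliers run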
Now take your RedisCallback sample and assume we had a flushAll method on the connection object. Where/when would you call it?
tpl.execute(connection -> {
    Flux<Map.Entry<ByteBuffer, ByteBuffer>> results = keys.flatMap(key ->
            connection.hashCommands()
                    .hGetAll(ByteBuffer.wrap(key.getBytes())));
    connection.flushAll(); // the hypothetical flush method
    return results;
})
Written like this, none of the commands would be flushed to the server, because at the point where flushAll is called, no hGetAll command has been triggered yet.
Now let's look at all the signal callbacks we have:
doOnNext
doOnError
doOnCancel
doFirst
doOnSubscribe
doOnRequest
doOnTerminate
doAfterTerminate
doFinally
doOnComplete
doOnError and doOnCancel won't help us, but we could think about using doFinally, doOnTerminate or doAfterTerminate:
tpl.execute(connection -> keys.flatMap(key -> connection.hashCommands()
        .hGetAll(ByteBuffer.wrap(key.getBytes())))
    .doFinally(s -> connection.flushAll()))
But hGetAll won't finish until the commands are flushed to the server, so doFinally will never be called, and so we will never get the chance to flush...
The only workaround I can think of is to use the async API of Lettuce directly. There is a sample in the documentation showing how to do it.
Your code could look like this (not tested):
// client is a RedisClient from Lettuce
StatefulRedisConnection<String, String> connection = client.connect();
RedisAsyncCommands<String, String> command = connection.async();
command.setAutoFlushCommands(false); // queue commands locally instead of writing them out

keys.map(command::hgetall)                  // issue HGETALL per key; nothing is sent yet
    .collectList()                          // wait until every command has been queued
    .doOnNext(f -> command.flushCommands()) // write the whole batch to the server at once
    .flatMapMany(f -> Flux.fromIterable(f).flatMap(Mono::fromCompletionStage))
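One caveat with this approach: as far as I know, the auto-flush setting applies to the whole connection, so if that connection is shared with other parts of the application, their commands will stop being flushed too. It's safer to use a dedicated connection for manually flushed batches and to call flushCommands() once per batch.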

Related

Axonframework, how to use MessageDispatchInterceptor with reactive repository

I have read the set-based consistency validation blog post and I want to validate through a dispatch interceptor. I followed the example, but I use a reactive repository and it doesn't really work for me. I have tried both with and without block(): with block() it throws an error, but without block() it doesn't execute anything. Here is my code.
class SubnetCommandInterceptor : MessageDispatchInterceptor<CommandMessage<*>> {

    @Autowired
    private lateinit var privateNetworkRepository: PrivateNetworkRepository

    override fun handle(messages: List<CommandMessage<*>?>): BiFunction<Int, CommandMessage<*>, CommandMessage<*>> {
        return BiFunction<Int, CommandMessage<*>, CommandMessage<*>> { index: Int?, command: CommandMessage<*> ->
            if (CreateSubnetCommand::class.simpleName == command.payloadType.simpleName) {
                val interceptCommand = command.payload as CreateSubnetCommand
                privateNetworkRepository
                    .findById(interceptCommand.privateNetworkId)
                    // ..some validation logic here, e.g.
                    // .filter { network -> network.isSubnetOverlap() }
                    .switchIfEmpty(Mono.error(IllegalArgumentException("Requested subnet is overlap with the previous subnet.")))
                    // .block() also doesn't work here, it throws:
                    // block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-
            }
            command
        }
    }
}
Subscribing to a reactive repository inside a message dispatch interceptor is not really recommended and might lead to weird behavior, as the underlying ThreadLocal (used by Axon) is not adapted for use in reactive programming.
Instead, check out Axon's Reactive Extension and reactive interceptors section.
For example, here is what you might do:
reactiveCommandGateway.registerDispatchInterceptor(
        cmdMono -> cmdMono.flatMap(cmd -> privateNetworkRepository
                .findById(cmd.privateNetworkId)
                .switchIfEmpty(Mono.error(
                        new IllegalArgumentException("Requested subnet is overlap with the previous subnet.")))
                .then(cmdMono)));

Wait for Multiple Spring WebClient Mono Responses

I am trying to call an external service in a micro-service application to get all responses in parallel and combine them before starting the other computation. I know I can call block() on each Mono object, but that will defeat the purpose of using the reactive API. Is it possible to fire all requests in parallel and combine them at one point?
Sample code is below. In this case "Done" prints before the actual responses come back. I also know that the subscribe call is non-blocking.
I want "Done" to be printed after all responses have been collected, so I need some kind of blocking; however, I do not want to block each and every request.
final List<Mono<String>> responseOne = new ArrayList<>();
IntStream.range(0, 10).forEach(i -> {
    Mono<String> responseMono =
            WebClient.create("https://jsonplaceholder.typicode.com/posts")
                    .post()
                    .retrieve()
                    .bodyToMono(String.class);
    System.out.println("create mono response lazy initialization");
    responseOne.add(responseMono);
});
Flux.merge(responseOne).collectList().subscribe(res -> {
    System.out.println(res);
});
System.out.println("Done");
Based on the suggestion, I came up with this, which seems to work for me.
StopWatch watch = new StopWatch();
watch.start();
final List<Mono<String>> responseOne = new ArrayList<>();
IntStream.range(0, 10).forEach(i -> {
    Mono<String> responseMono =
            WebClient.create("https://jsonplaceholder.typicode.com/posts")
                    .post()
                    .retrieve()
                    .bodyToMono(String.class);
    System.out.println("create mono response lazy initialization");
    responseOne.add(responseMono);
});
CompletableFuture<List<String>> futureCount = new CompletableFuture<>();
List<String> res = new ArrayList<>();
Mono.zip(responseOne, Arrays::asList)
        .flatMapIterable(objects -> objects) // make a flux of the objects
        .doOnComplete(() -> futureCount.complete(res)) // complete the future once the flux above is done
        .subscribe(responseString -> res.add((String) responseString));
watch.stop();
List<String> response = futureCount.get();
System.out.println(response);
// do the rest of the computation
System.out.println(watch.getLastTaskTimeMillis());
If you want your calls to run in parallel, it is a good idea to use Mono.zip.
Now, you want "Done" to be printed after the collection of all the responses, so you can modify your code as below:
final List<Mono<String>> responseMonos = IntStream.range(0, 10)
        .mapToObj(index -> WebClient.create("https://jsonplaceholder.typicode.com/posts")
                .post()
                .retrieve()
                .bodyToMono(String.class))
        .collect(Collectors.toList()); // create an iterable of monos of network calls
Mono.zip(responseMonos, Arrays::asList) // make parallel network calls and collect the results into a list
        .flatMapIterable(objects -> objects) // make a flux of the objects
        .doOnComplete(() -> System.out.println("Done")) // will be printed on completion of the flux created above
        .subscribe(responseString -> System.out.println("responseString = " + responseString)); // subscribe and start emitting values from the flux
It's also not a good idea to call subscribe or block explicitly in your reactive code.
is it possible to fire up all requests in parallel and combine them at one point.
That's exactly what your code is doing already. If you don't believe me, stick .delayElement(Duration.ofSeconds(2)) after your bodyToMono() call. You'll see that your list prints out after just over 2 seconds, rather than 20 (which is what it would take if the calls executed sequentially, 10 times over).
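Concretely, that experiment is just the question's snippet with one extra operator (nothing else changes):
Mono<String> responseMono =
        WebClient.create("https://jsonplaceholder.typicode.com/posts")
                .post()
                .retrieve()
                .bodyToMono(String.class)
                .delayElement(Duration.ofSeconds(2)); // artificial 2-second delay per response
// Flux.merge(responseOne) subscribes to all 10 monos eagerly, so the
// collected list still arrives after ~2 seconds, not ~20.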
The combining part is happening in your Flux.merge().collectList() call.
In this case "Done" prints before actual response comes up.
That's to be expected, as your last System.out.println() call is executing outside of the reactive callback chain. If you want "Done" to print after your list is printed (the res argument of the consumer passed to your subscribe() call), then you'll need to put it inside that consumer, not outside it.
If you're interfacing with an imperative API, and you therefore need to block on the list, you can just do:
List<String> list = Flux.merge(responseOne).collectList().block();
...which will still execute the calls in parallel (so you still gain some advantage), but then block until all of them are complete and combined into a list. (If you're only using Reactor for this kind of usage, however, it's debatable whether it's worthwhile.)

Cache the result of a Mono from a WebClient call in a Spring WebFlux web application

I am looking to cache a Mono (only if it is successful) which is the result of a WebClient call.
From reading the Project Reactor addons docs, I don't feel that CacheMono is a good fit, as it caches errors as well, which I do not want.
So instead of using CacheMono I am doing the below:
Cache<MyRequestObject, Mono<MyResponseObject>> myCaffeineCache =
        Caffeine.newBuilder()
                .maximumSize(100)
                .expireAfterWrite(Duration.ofSeconds(60))
                .build();

MyRequestObject myRequestObject = ...;
Mono<MyResponseObject> myResponseObject = myCaffeineCache.get(myRequestObject,
        requestAsKey -> WebClient.create()
                .post()
                .uri("http://www.example.com")
                .syncBody(requestAsKey)
                .retrieve()
                .bodyToMono(MyResponseObject.class)
                .cache()
                .doOnError(t -> myCaffeineCache.invalidate(requestAsKey)));
Here I am calling cache() on the Mono and then adding it to the Caffeine cache.
Any errors will enter doOnError to invalidate the cache.
Is this a valid approach to caching a Mono WebClient response?
This is one of the very few use cases where you'd actually be allowed to call non-reactive libraries, wrap them with reactive types, and have processing done in side-effect operators like doOnXYZ, because:
Caffeine is an in-memory cache, so as far as I know there's no I/O involved
Caches often don't offer strong guarantees about caching values (it's very much "fire and forget")
You can then in this case query the cache to see if a cached version is there (wrap it and return right away), and cache a successful real response in a doOn operator, like this:
public class MyService {

    private WebClient client;

    private Cache<MyRequestObject, MyResponseObject> myCaffeineCache;

    public MyService() {
        this.client = WebClient.create();
        this.myCaffeineCache = Caffeine.newBuilder()
                .maximumSize(100)
                .expireAfterWrite(Duration.ofSeconds(60))
                .build();
    }

    public Mono<MyResponseObject> fetchResponse(MyRequestObject request) {
        MyResponseObject cachedVersion = this.myCaffeineCache.getIfPresent(request);
        if (cachedVersion != null) {
            return Mono.just(cachedVersion);
        } else {
            return this.client.post()
                    .uri("http://www.example.com")
                    .syncBody(request)
                    .retrieve()
                    .bodyToMono(MyResponseObject.class)
                    .doOnNext(response -> this.myCaffeineCache.put(request, response));
        }
    }
}
Note that I wouldn't cache reactive types here, since there's no I/O involved nor backpressure once the value is returned by the cache. On the contrary, it's making things more difficult with subscription and other reactive streams constraints.
Also you're right about the cache operator since it isn't about caching the value per se, but more about replaying what happened to other subscribers. I believe that cache and replay operators are actually synonyms for Flux.
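As a quick illustration of that replay behavior (a self-contained sketch, not tied to WebClient):
Mono<Long> timestamp = Mono.fromSupplier(System::nanoTime).cache();
timestamp.subscribe(t -> System.out.println("first:  " + t)); // the supplier runs once here
timestamp.subscribe(t -> System.out.println("second: " + t)); // same value replayed; the supplier does not run again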
Actually, you don't have to save errors with CacheMono.
private Cache<MyRequestObject, MyResponseObject> myCaffeineCache;
...
Mono<MyResponseObject> myResponseObject =
        CacheMono.lookup(key -> Mono.justOrEmpty(myCaffeineCache.getIfPresent(key))
                        .map(Signal::next), myRequestObject)
                .onCacheMissResume(() -> /* your WebClient or other Mono here */)
                .andWriteWith((key, signal) -> Mono.fromRunnable(() ->
                        Optional.ofNullable(signal.get())
                                .ifPresent(value -> myCaffeineCache.put(key, value))));
When you switch to an external cache, this may be useful. Don't forget to use reactive clients for external caches.
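For reference: CacheMono lives in the reactor-extra addon (the io.projectreactor.addons:reactor-extra artifact, if I remember the coordinates correctly), so that dependency needs to be on the classpath for this snippet.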

WebFlux: Only one item arriving at the backend

On the backend I'm doing:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public void saveProducts(@Valid @RequestBody Flux<Product> products) {
    products.subscribe(product -> log.info("product: " + product.toString()));
}
And on the frontend I'm calling this using:
this.targetWebClient
        .post()
        .uri(productUri)
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .contentType(MediaType.APPLICATION_STREAM_JSON)
        .body(this.sourceWebClient
                .get()
                .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
                        .queryParam("date", date)
                        .build())
                .accept(MediaType.APPLICATION_STREAM_JSON)
                .retrieve()
                .bodyToFlux(Product.class), Product.class)
        .exchange()
        .subscribe();
What happens now is that I have 472 products which need to get saved, but only one of them is actually saved. The stream closes after the first and I can't find out why.
If I do:
...
.retrieve()
.bodyToMono(Void.class);
instead, the request isn't even arriving at the backend.
I also tried a fixed amount of elements:
.body(Flux.just(new Product("123"), new Product("321")...
And with that also only the first arrived.
EDIT
I changed the code:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    products.subscribe(product -> this.service.saveProduct(product));
    return Mono.empty();
}
and:
this.targetWebClient
        .post()
        .uri(productUri)
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .contentType(MediaType.APPLICATION_STREAM_JSON)
        .body(this.sourceWebClient
                .get()
                .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
                        .queryParam("date", date)
                        .build())
                .accept(MediaType.APPLICATION_STREAM_JSON)
                .retrieve()
                .bodyToFlux(Product.class), Product.class)
        .exchange()
        .block();
That led to the behaviour that one product was saved twice (because the backend endpoint was called twice), but again only a single item arrived. We also got an error on the frontend side:
IOException: Connection reset by peer
Same for:
...
.retrieve()
.bodyToMono(Void.class)
.subscribe();
Doing the following:
this.targetWebClient
        .post()
        .uri(productUri)
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .contentType(MediaType.APPLICATION_STREAM_JSON)
        .body(this.sourceWebClient
                .get()
                .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
                        .queryParam("date", date)
                        .build())
                .accept(MediaType.APPLICATION_STREAM_JSON)
                .retrieve()
                .bodyToFlux(Product.class), Product.class)
        .retrieve();
leads to the behaviour that the backend again isn't called at all.
The Reactor documentation does say that nothing happens until you subscribe, but it doesn't mean you should subscribe in your Spring WebFlux code.
Here are a few rules you should follow in Spring WebFlux:
If you need to do something in a reactive fashion, the return type of your method should be Mono or Flux
Within a method returning a reactive type, you should never call block, subscribe, toIterable, or any other method that doesn't return a reactive type itself
You should never do I/O-related work in side-effect doOnXYZ operators, as they're not meant for that and this will cause issues at runtime
In your case, your backend should use a reactive repository to save your data and should look like:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    return productRepository.saveAll(products).then();
}
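Here productRepository is assumed to be a Spring Data reactive repository; a minimal sketch of what that interface could look like (the id type is a guess):
public interface ProductRepository extends ReactiveCrudRepository<Product, String> {
}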
In this case, the Mono<Void> return type means that your controller won't return anything as a response body but will still signal when it's done processing the request. This might explain why you're seeing that behavior: by the time your controller is done processing the request, not all products have been saved to the database.
Also, remember the rules noted above. Depending on where your targetWebClient is used, calling .subscribe() on it might not be the solution. If it's a test method that returns void, you might want to call block on it and get the result to run assertions on it. If this is a component method, then you should probably return a Publisher type as the return value.
EDIT:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    products.subscribe(product -> this.service.saveProduct(product));
    return Mono.empty();
}
Doing this isn't right:
calling subscribe decouples the processing of the request/response from that saveProduct operation. It's like starting that processing in a different executor.
returning Mono.empty() signals to Spring WebFlux that you're done with the request processing right away. So Spring WebFlux will close and clean up the request/response resources; but your saveProduct process is still running and won't be able to read from the request, since Spring WebFlux closed and cleaned it up.
As suggested in the comments, you can wrap blocking operations with Reactor (even though it's not advised and you may encounter performance issues) and make sure that you're connecting all the operations in a single reactive pipeline.
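For example, a sketch of that wrapping for the saveProduct case from the edit above (assuming service.saveProduct is a blocking call; the scheduler choice here is my assumption):
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    return products
            .flatMap(product -> Mono.fromRunnable(() -> this.service.saveProduct(product))
                    .subscribeOn(Schedulers.elastic())) // run each blocking save off the event loop
            .then(); // signal completion only after every product has been saved
}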

How to wrap a Flux with a blocking operation in the subscribe?

In the documentation it is written that you should wrap blocking code into a Mono: http://projectreactor.io/docs/core/release/reference/#faq.wrap-blocking
But it is not written how to actually do it.
I have the following code:
@PostMapping(path = "some-path", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> doSomething(@Valid @RequestBody Flux<Something> something) {
    something.subscribe(thing -> {
        // some blocking operation
    });
    // how to return Mono<Void> here?
}
The first problem I have here is that I need to return something, but I can't.
If I returned a Mono.empty, for example, the request would be closed before the work of the flux is done.
The second problem is: how do I actually wrap the blocking code like it is suggested in the documentation?
Mono blockingWrapper = Mono.fromCallable(() -> {
    return /* make a remote synchronous call */
});
blockingWrapper = blockingWrapper.subscribeOn(Schedulers.elastic());
You should not call subscribe within a controller handler; instead, just build a reactive pipeline and return it. Ultimately, the HTTP client will request data (through the Spring WebFlux engine) and that's what subscribes to and requests data from the pipeline.
Subscribing manually will decouple the request processing from that other operation, which will 1) remove any guarantee about the order of operations and 2) break the processing if that other operation is using HTTP resources (such as the request body).
In this case, the source is not blocking; only the transform operation is. So we'd better use publishOn to signal that the rest of the chain should be executed on a specific Scheduler. If the operation here is I/O-bound, then Schedulers.elastic() is the best choice; if it's CPU-bound, then Schedulers.parallel() is better. Here's an example:
@PostMapping(path = "/some-path", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> doSomething(@Valid @RequestBody Flux<Something> something) {
    return something.collectList()
            .publishOn(Schedulers.elastic())
            .map(things -> processThings(things))
            .then();
}

public ProcessingResult processThings(List<Something> things) {
    //...
}
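Conversely, if the source itself were the blocking part (as in the documentation snippet quoted in the question), subscribeOn would be the tool; a minimal sketch, where fetchSomethingBlocking() stands in for any synchronous call:
Mono<Something> wrapped = Mono.fromCallable(() -> fetchSomethingBlocking()) // hypothetical blocking call
        .subscribeOn(Schedulers.elastic()); // the subscription, and thus the call, runs on the elastic scheduler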
For more information on that topic, check out the Scheduler section in the Reactor docs. If your application tends to do a lot of things like this, you're losing a lot of the benefits of reactive streams, and you might consider switching to a Servlet-based model where you can configure thread pools accordingly.