On the backend I'm doing:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public void saveProducts(@Valid @RequestBody Flux<Product> products) {
    products.subscribe(product -> log.info("product: " + product.toString()));
}
And on the frontend I'm calling this using:
this.targetWebClient
    .post()
    .uri(productUri)
    .accept(MediaType.APPLICATION_STREAM_JSON)
    .contentType(MediaType.APPLICATION_STREAM_JSON)
    .body(this.sourceWebClient
        .get()
        .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
            .queryParam("date", date)
            .build())
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .retrieve()
        .bodyToFlux(Product.class), Product.class)
    .exchange()
    .subscribe();
What happens now is that I have 472 products which need to get saved, but only one of them is actually saved. The stream closes after the first one and I can't find out why.
If I do:
...
.retrieve()
.bodyToMono(Void.class);
instead, the request isn't even arriving at the backend.
I also tried a fixed number of elements:
.body(Flux.just(new Product("123"), new Product("321")...
And with that, too, only the first one arrived.
EDIT
I changed the code:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    products.subscribe(product -> this.service.saveProduct(product));
    return Mono.empty();
}
and:
this.targetWebClient
    .post()
    .uri(productUri)
    .accept(MediaType.APPLICATION_STREAM_JSON)
    .contentType(MediaType.APPLICATION_STREAM_JSON)
    .body(this.sourceWebClient
        .get()
        .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
            .queryParam("date", date)
            .build())
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .retrieve()
        .bodyToFlux(Product.class), Product.class)
    .exchange()
    .block();
That led to the behaviour that one product was saved twice (because the backend endpoint was called twice), but again only a single item. We also got an error on the frontend side:
IOException: Connection reset by peer
Same for:
...
.retrieve()
.bodyToMono(Void.class)
.subscribe();
Doing the following:
this.targetWebClient
    .post()
    .uri(productUri)
    .accept(MediaType.APPLICATION_STREAM_JSON)
    .contentType(MediaType.APPLICATION_STREAM_JSON)
    .body(this.sourceWebClient
        .get()
        .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
            .queryParam("date", date)
            .build())
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .retrieve()
        .bodyToFlux(Product.class), Product.class)
    .retrieve();
leads to the behaviour that the backend again isn't called at all.
The Reactor documentation does say that nothing happens until you subscribe, but that doesn't mean you should subscribe in your Spring WebFlux code.
Here are a few rules you should follow in Spring WebFlux:
If you need to do something in a reactive fashion, the return type of your method should be Mono or Flux
Within a method returning a reactive type, you should never call block, subscribe, toIterable, or any other method that doesn't return a reactive type itself
You should never do I/O-related work in side-effect operators (doOnXYZ), as they're not meant for that and this will cause issues at runtime
In your case, your backend should use a reactive repository to save your data and should look like:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    return productRepository.saveAll(products).then();
}
In this case, the Mono<Void> return type means that your controller won't return anything as a response body, but it will still signal when it's done processing the request. This might explain the behavior you're seeing: by the time the controller is done processing the request, not all products have been saved to the database.
Also, remember the rules noted above. Depending on where your targetWebClient is used, calling .subscribe(); on it might not be the solution. If it's a test method that returns void, you might want to call block and assert on the result. If this is a component method, then you should probably return a Publisher type as the return value.
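For illustration, a minimal sketch of that last option (not from the original post; the method name and the type of the date parameter are assumptions): the component method only builds the pipeline and returns it, so the caller, and ultimately the framework, does the subscribing.
public Mono<Void> pushProducts(String date) {
    return this.targetWebClient
        .post()
        .uri(productUri)
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .contentType(MediaType.APPLICATION_STREAM_JSON)
        .body(this.sourceWebClient
            .get()
            .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
                .queryParam("date", date)
                .build())
            .accept(MediaType.APPLICATION_STREAM_JSON)
            .retrieve()
            .bodyToFlux(Product.class), Product.class)
        .retrieve()
        .bodyToMono(Void.class); // no subscribe/block here; whoever calls this decides when to subscribe
}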
EDIT:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    products.subscribe(product -> this.service.saveProduct(product));
    return Mono.empty();
}
Doing this isn't right:
calling subscribe decouples the processing of the request/response from that saveProduct operation. It's like starting that processing in a different executor.
returning Mono.empty() signals Spring WebFlux that you're done right away with the request processing. So Spring WebFlux will close and clean the request/response resources; but your saveProduct process is still running and won't be able to read from the request since Spring WebFlux closed and cleaned it.
As suggested in the comments, you can wrap blocking operations with Reactor (even though it's not advised and you may encounter performance issues) and make sure that you're connecting all the operations in a single reactive pipeline.
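For example, a rough sketch of what that could look like, assuming this.service.saveProduct is a blocking call (the Mono.fromRunnable wrapping and the elastic scheduler are my assumptions, not the poster's code):
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    return products
        // wrap each blocking save and run it on a scheduler meant for blocking work
        .flatMap(product -> Mono.fromRunnable(() -> this.service.saveProduct(product))
            .subscribeOn(Schedulers.elastic()))
        // complete the request only once every product has been processed
        .then();
}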
Related
I'm actually confused about assembly time and subscription time. I know Monos are lazy and don't get executed until they're subscribed to. Below is a method.
public Mono<UserbaseEntityResponse> getUserbaseDetailsForEntityId(String id) {
    GroupRequest request = ImmutableGroupRequest
        .builder()
        .cloudId(id)
        .build();

    Mono<List<GroupResponse>> response = ussClient.getGroups(request);
    List<UserbaseEntityResponse.GroupPrincipal> groups = new CopyOnWriteArrayList<>();

    response.flatMapIterable(elem -> elem)
        .toIterable().iterator().forEachRemaining(
            groupResponse -> {
                groupResponse.getResources().iterator().forEachRemaining(
                    resource -> {
                        groups.add(ImmutableGroupPrincipal
                            .builder()
                            .groupId(resource.getId())
                            .name(resource.getDisplayName())
                            .addAllUsers(convertMemebersToUsers(resource))
                            .build());
                    }
                );
            }
        );

    log.debug("Response Object - " + groups.toString());

    ImmutableUserbaseEntityResponse res = ImmutableUserbaseEntityResponse
        .builder()
        .userbaseId(id)
        .addAllGroups(groups)
        .build();

    Flux<UserbaseEntityResponse.GroupPrincipal> f = Flux.fromIterable(res.getGroups())
        .parallel()
        .runOn(Schedulers.parallel())
        .doOnNext(groupPrincipal -> getResourcesForGroup((ImmutableGroupPrincipal) groupPrincipal, res.getUserbaseId()))
        .sequential();

    return Mono.just(res);
}
The line Mono<List<GroupResponse>> response = ussClient.getGroups(request); gets executed without me calling subscribe; however, the code below will not get executed unless I call subscribe on it.
Flux<UserbaseEntityResponse.GroupPrincipal> f = Flux.fromIterable(res.getGroups())
    .parallel()
    .runOn(Schedulers.parallel())
    .doOnNext(groupPrincipal -> getResourcesForGroup((ImmutableGroupPrincipal) groupPrincipal, res.getUserbaseId()))
    .sequential();
Can I get some more input on assembly time vs subscription?
"Nothing happens until you subscribe" isn't quite true in all cases. There's three scenarios in which a publisher (Mono or Flux) will be executed:
You subscribe;
You block;
The publisher is "hot".
Note that the above scenarios all apply to an entire reactive chain - i.e. if I subscribe to a publisher, everything upstream (everything that publisher depends on) also executes. That's why frameworks can, and should, call subscribe when they need to, causing the reactive chain defined in a controller to execute.
In your case it's actually the second of these - you're blocking, which is essentially a "subscribe and wait for the result(s)". Usually the methods that block are clearly labelled, but again that's not always the case - in your case it's the toIterable() method on Flux doing the blocking:
Transform this Flux into a lazy Iterable blocking on Iterator.next() calls.
But ah, you say, I'm not calling Iterator.next() - what gives?!
Well, implicitly you are by calling forEachRemaining():
The default implementation behaves as if:
while (hasNext())
action.accept(next());
...and as per the above rule, since ussClient.getGroups(request) is upstream of this blocking call, it gets executed.
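To make the assembly time vs. subscription time distinction concrete, here's a tiny standalone sketch (not taken from the question):
Flux<String> flux = Flux.defer(() -> {
    System.out.println("source executed");
    return Flux.just("a", "b");
});

// assembly time: the pipeline above is only described, nothing has been printed yet

flux.subscribe(System.out::println);            // subscription: prints "source executed", then "a", "b"
flux.toIterable().forEach(System.out::println); // toIterable() subscribes and blocks, so the source runs again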
I'm trying to execute a list of requests using WebClient, then filter them to find the first one that succeeded (if any) and return that, or fall back to a default response if none succeeded.
The problem I'm facing is that when I call .collectList() on a Flux<ServerResponse>, the list is always empty. I would have expected the list to contain N ServerResponse instances, based on the number of requests I issued earlier.
public Mono<ServerResponse> retry(ServerRequest request) {
    return Flux.fromIterable(request.headers().header(SEQUENCE_HEADER_NAME))
        .map(URI::create)
        // Build a "list" of responses
        .flatMap(uri -> webClientBuilder.baseUrl(uri.toString()).build()
            .method(Objects.requireNonNull(request.method()))
            .headers(headers -> request.headers().asHttpHeaders().forEach((key, values) -> {
                if (!SEQUENCE_HEADER_NAME.equals(key)) {
                    headers.addAll(key, values);
                }
            }))
            .body(BodyInserters.fromDataBuffers(request.body(BodyExtractors.toDataBuffers())))
            .exchange()
            .flatMap(clientResponse -> ServerResponse.status(clientResponse.statusCode())
                .headers(headers -> headers.addAll(clientResponse.headers().asHttpHeaders()))
                .body(BodyInserters.fromDataBuffers(clientResponse.body(BodyExtractors.toDataBuffers()))))
        )
        // "Wait" for all of them to complete so we can filter
        .collectList()
        .flatMap(clientResponses -> {
            List<ServerResponse> filteredResponses = clientResponses.stream()
                .filter(response -> response.statusCode().is2xxSuccessful())
                .collect(Collectors.toList());

            if (filteredResponses.isEmpty()) {
                log.error("No request succeeded; defaulting to {}", HttpStatus.BAD_REQUEST.toString());
                return ServerResponse.badRequest().build();
            }

            if (filteredResponses.size() > 1) {
                log.error("Multiple requests succeeded; defaulting to {}", HttpStatus.BAD_REQUEST.toString());
                return ServerResponse.badRequest().build();
            }

            return Mono.just(filteredResponses.get(0));
        });
}
Any ideas why .collectList() always returns an empty list?
Well, it seems to me you have a confused requirement, in that you want the first Mono that responds, but you are trying to put that functionality into a Flux, which is meant to process all items in the flow efficiently. Mono in WebFlux is meant to create a flow that performs a series of transformations on the item in the flow efficiently. Nothing in your requirement of testing a bunch of URIs for the first one that succeeds is what WebFlux is good for, so I have to question why you'd try to force that into the framework.
You might argue that a Flux is giving you better asynchronous processing, but I don't think that's the case when it's a bunch of WebClient calls. WebClient is still HTTP under the hood, and so each item in the flow stops and starts around WebClient. If you want to do HTTP asynchronously you should use a thread pool and Callable.
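A rough sketch of the thread-pool approach suggested here, with all names hypothetical and exception handling omitted (callEndpoint stands in for whatever synchronous HTTP call you would make, and Response for its result type):
ExecutorService pool = Executors.newFixedThreadPool(uris.size());
List<Callable<Response>> calls = uris.stream()
    .map(uri -> (Callable<Response>) () -> callEndpoint(uri)) // callEndpoint is a placeholder
    .collect(Collectors.toList());

List<Future<Response>> futures = pool.invokeAll(calls); // waits until every call has finished
// inspect the futures, keep the first 2xx response, or fall back to a default one
pool.shutdown();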
I am trying to call an external service in a microservice application to get all responses in parallel and combine them before starting the other computation. I know I can use a block() call on each Mono object, but that would defeat the purpose of using the reactive API. Is it possible to fire up all requests in parallel and combine them at one point?
Sample code is below. In this case "Done" prints before the actual responses come back. I also know that the subscribe call is non-blocking.
I want "Done" to be printed after all responses have been collected, so I need some kind of blocking; however, I do not want to block each and every request.
final List<Mono<String>> responseOne = new ArrayList<>();
IntStream.range(0, 10).forEach(i -> {
    Mono<String> responseMono =
        WebClient.create("https://jsonplaceholder.typicode.com/posts")
            .post()
            .retrieve()
            .bodyToMono(String.class);
    System.out.println("create mono response lazy initialization");
    responseOne.add(responseMono);
});

Flux.merge(responseOne).collectList().subscribe(res -> {
    System.out.println(res);
});
System.out.println("Done");
Based on the suggestion, I came up with this which seems to work for me.
StopWatch watch = new StopWatch();
watch.start();

final List<Mono<String>> responseOne = new ArrayList<>();
IntStream.range(0, 10).forEach(i -> {
    Mono<String> responseMono =
        WebClient.create("https://jsonplaceholder.typicode.com/posts")
            .post()
            .retrieve()
            .bodyToMono(String.class);
    System.out.println("create mono response lazy initialization");
    responseOne.add(responseMono);
});

CompletableFuture<List<String>> futureCount = new CompletableFuture<>();
List<String> res = new ArrayList<>();
Mono.zip(responseOne, Arrays::asList)
    .flatMapIterable(objects -> objects) // make flux of objects
    .doOnComplete(() -> {
        futureCount.complete(res);
    }) // will be called on completion of the flux created above
    .subscribe(responseString -> {
        res.add((String) responseString);
    });

watch.stop();
List<String> response = futureCount.get();
System.out.println(response);
// do rest of the computation
System.out.println(watch.getLastTaskTimeMillis());
If you want your calls to be parallel, it is a good idea to use Mono.zip.
Now, you want "Done" to be printed after all the responses have been collected, so you can modify your code as below:
final List<Mono<String>> responseMonos = IntStream.range(0, 10).mapToObj(
    index -> WebClient.create("https://jsonplaceholder.typicode.com/posts").post().retrieve()
        .bodyToMono(String.class)).collect(Collectors.toList()); // create iterable of mono of network calls

Mono.zip(responseMonos, Arrays::asList) // make parallel network calls and collect them into a list
    .flatMapIterable(objects -> objects) // make flux of objects
    .doOnComplete(() -> System.out.println("Done")) // will be printed on completion of the flux created above
    .subscribe(responseString -> System.out.println("responseString = " + responseString)); // subscribe and start emitting values from flux
It's also not a good idea to call subscribe or block explicitly in your reactive code.
is it possible to fire up all requests in parallel and combine them at one point.
That's exactly what your code is doing already. If you don't believe me, stick .delayElement(Duration.ofSeconds(2)) after your bodyToMono() call. You'll see that your list prints out after just over 2 seconds, rather than 20 (which is what it would be if executing sequentially 10 times.)
The combining part is happening in your Flux.merge().collectList() call.
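For instance, the artificial delay mentioned above could be added like this (a minimal tweak to the question's code, the two-second delay is purely for demonstration):
Mono<String> responseMono =
    WebClient.create("https://jsonplaceholder.typicode.com/posts")
        .post()
        .retrieve()
        .bodyToMono(String.class)
        .delayElement(Duration.ofSeconds(2)); // with 10 parallel calls, the list still arrives after ~2s, not ~20s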
In this case "Done" prints before actual response comes up.
That's to be expected, as your last System.out.println() call is executing outside of the reactive callback chain. If you want "Done" to print after your list is printed (the res variable in the consumer passed to your subscribe() call), then you'll need to put it inside that consumer, not outside it.
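In other words, something like this (a sketch based on the question's code):
Flux.merge(responseOne).collectList().subscribe(res -> {
    System.out.println(res);
    System.out.println("Done"); // now printed only after the collected list
});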
If you're interfacing with an imperative API, and you therefore need to block on the list, you can just do:
List<String> list = Flux.merge(responseOne).collectList().block();
...which will still execute the calls in parallel (so still gain you some advantage), but then block until all of them are complete and combined into a list. (If you're just using reactor for this type of usage however, it's debatable if it's worthwhile.)
I am looking to cache a Mono (only if it is successful) which is the result of a WebClient call.
From reading the project reactor addons docs I don't feel that CacheMono is a good fit as it caches the errors as well which I do not want.
So instead of using CacheMono I am doing the below:
Cache<MyRequestObject, Mono<MyResponseObject>> myCaffeineCache =
    Caffeine.newBuilder()
        .maximumSize(100)
        .expireAfterWrite(Duration.ofSeconds(60))
        .build();

MyRequestObject myRequestObject = ...;
Mono<MyResponseObject> myResponseObject = myCaffeineCache.get(myRequestObject,
    requestAsKey -> WebClient.create()
        .post()
        .uri("http://www.example.com")
        .syncBody(requestAsKey)
        .retrieve()
        .bodyToMono(MyResponseObject.class)
        .cache()
        .doOnError(t -> myCaffeineCache.invalidate(requestAsKey)));
Here I am calling cache on the Mono and then adding it to the caffeine cache.
Any errors will enter doOnError to invalidate the cache.
Is this a valid approach to caching a Mono WebClient response?
This is one of the very few use cases where you'd actually be allowed to call non-reactive libraries and wrap them with reactive types, and have processing done in side-effect operators like doOnXYZ, because:
Caffeine is an in-memory cache, so as far as I know there's no I/O involved
Caches often don't offer strong guarantees about caching values (it's very much "fire and forget")
You can then, in this case, query the cache to see if a cached version is there (wrap it and return it right away), and cache a successful real response in a doOnXYZ operator, like this:
public class MyService {

    private WebClient client;

    private Cache<MyRequestObject, MyResponseObject> myCaffeineCache;

    public MyService() {
        this.client = WebClient.create();
        this.myCaffeineCache = Caffeine.newBuilder().maximumSize(100)
            .expireAfterWrite(Duration.ofSeconds(60)).build();
    }

    public Mono<MyResponseObject> fetchResponse(MyRequestObject request) {
        // the cache is keyed by the request object itself
        MyResponseObject cachedVersion = this.myCaffeineCache.getIfPresent(request);
        if (cachedVersion != null) {
            return Mono.just(cachedVersion);
        } else {
            return this.client.post()
                .uri("http://www.example.com")
                .syncBody(request)
                .retrieve()
                .bodyToMono(MyResponseObject.class)
                // cache only successful responses; errors never reach doOnNext
                .doOnNext(response -> this.myCaffeineCache.put(request, response));
        }
    }
}
Note that I wouldn't cache reactive types here, since there's no I/O involved nor backpressure once the value is returned by the cache. On the contrary, it's making things more difficult with subscription and other reactive streams constraints.
Also you're right about the cache operator since it isn't about caching the value per se, but more about replaying what happened to other subscribers. I believe that cache and replay operators are actually synonyms for Flux.
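A small sketch of that replay behaviour, using a made-up remote call:
Mono<String> remote = Mono.fromCallable(() -> {
    System.out.println("remote call");
    return "response";
}).cache();

remote.subscribe(System.out::println); // prints "remote call", then "response"
remote.subscribe(System.out::println); // prints only "response" - the cached signal is replayed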
Actually, you don't have to cache errors with CacheMono.
private Cache<MyRequestObject, MyResponseObject> myCaffeineCache;
...
Mono<MyResponseObject> myResponseObject =
    CacheMono.lookup(key -> Mono.justOrEmpty(myCaffeineCache.getIfPresent(key))
            .map(Signal::next), myRequestObject)
        .onCacheMissResume(() -> /* Your web client or other Mono here */)
        .andWriteWith((key, signal) -> Mono.fromRunnable(() ->
            Optional.ofNullable(signal.get())
                .ifPresent(value -> myCaffeineCache.put(key, value))));
When you switch to an external cache, this may be useful. Don't forget to use reactive clients for external caches.
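For example, with a reactive external cache the lookup and write steps become Monos themselves. A hedged sketch assuming Spring Data's ReactiveRedisTemplate (this exact wiring is an assumption, not part of the answer, and fetchFromRemote is a placeholder for your WebClient call):
private ReactiveRedisTemplate<MyRequestObject, MyResponseObject> redisTemplate; // hypothetical, e.g. injected by Spring
...
Mono<MyResponseObject> myResponseObject =
    CacheMono.lookup(key -> redisTemplate.opsForValue().get(key)
            .map(Signal::next), myRequestObject) // empty Mono = cache miss
        .onCacheMissResume(() -> fetchFromRemote(myRequestObject)) // placeholder for the real call
        .andWriteWith((key, signal) -> Mono.justOrEmpty(signal.get())
            .flatMap(value -> redisTemplate.opsForValue().set(key, value))
            .then());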
In the documentation it is written that you should wrap blocking code into a Mono: http://projectreactor.io/docs/core/release/reference/#faq.wrap-blocking
But it is not written how to actually do it.
I have the following code:
@PostMapping(path = "some-path", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> doeSomething(@Valid @RequestBody Flux<Something> something) {
    something.subscribe(something -> {
        // some blocking operation
    });
    // how to return Mono<Void> here?
}
The first problem I have here is that I need to return something but I can't.
If I returned a Mono.empty(), for example, the request would be closed before the work of the Flux is done.
The second problem is: how do I actually wrap the blocking code, as suggested in the documentation:
Mono blockingWrapper = Mono.fromCallable(() -> {
return /* make a remote synchronous call */
});
blockingWrapper = blockingWrapper.subscribeOn(Schedulers.elastic());
You should not call subscribe within a controller handler; instead, just build a reactive pipeline and return it. Ultimately, the HTTP client will request data (through the Spring WebFlux engine) and that's what subscribes and requests data from the pipeline.
Subscribing manually will decouple the request processing from that other operation, which will 1) remove any guarantee about the order of operations and 2) break the processing if that other operation is using HTTP resources (such as the request body).
In this case, the source is not blocking; only the transform operation is. So we'd better use publishOn to signal that the rest of the chain should be executed on a specific Scheduler. If the operation here is I/O bound, then Schedulers.elastic() is the best choice; if it's CPU-bound, then Schedulers.parallel() is better. Here's an example:
@PostMapping(path = "/some-path", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> doSomething(@Valid @RequestBody Flux<Something> something) {
    return something.collectList()
        .publishOn(Schedulers.elastic())
        .map(things -> {
            return processThings(things);
        })
        .then();
}

public ProcessingResult processThings(List<Something> things) {
    //...
}
For more information on that topic, check out the Scheduler section in the reactor docs. If your application tends to do a lot of things like this, you're losing a lot of the benefits of reactive streams and you might consider switching to a Servlet-based model where you can configure thread pools accordingly.