The documentation says that you should wrap blocking code in a Mono: http://projectreactor.io/docs/core/release/reference/#faq.wrap-blocking
But it does not explain how to actually do it.
I have the following code:
@PostMapping(path = "some-path", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> doSomething(@Valid @RequestBody Flux<Something> something) {
    something.subscribe(item -> {
        // some blocking operation
    });
    // how to return Mono<Void> here?
}
The first problem I have here is that I need to return something, but I can't.
If I returned Mono.empty(), for example, the request would be closed before the work of the Flux is done.
The second problem is: how do I actually wrap the blocking code as the documentation suggests?
Mono blockingWrapper = Mono.fromCallable(() -> {
    return /* make a remote synchronous call */
});
blockingWrapper = blockingWrapper.subscribeOn(Schedulers.elastic());
You should not call subscribe within a controller handler; instead, build a reactive pipeline and return it. Ultimately, the HTTP client will request data (through the Spring WebFlux engine), and that's what subscribes to and requests data from the pipeline.
Subscribing manually will decouple the request processing from that other operation, which will 1) remove any guarantee about the order of operations and 2) break the processing if that other operation is using HTTP resources (such as the request body).
In this case, the source is not blocking; only the transform operation is. So it's better to use publishOn to signal that the rest of the chain should be executed on a specific Scheduler. If the operation here is I/O-bound, then Schedulers.elastic() is the best choice; if it's CPU-bound, then Schedulers.parallel() is better. Here's an example:
@PostMapping(path = "/some-path", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> doSomething(@Valid @RequestBody Flux<Something> something) {
    return something.collectList()
            .publishOn(Schedulers.elastic())
            .map(things -> processThings(things))
            .then();
}

public ProcessingResult processThings(List<Something> things) {
    //...
}
For more information on that topic, check out the Scheduler section in the reactor docs. If your application tends to do a lot of things like this, you're losing a lot of the benefits of reactive streams and you might consider switching to a Servlet-based model where you can configure thread pools accordingly.
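For completeness: when the source itself is the blocking part (as in the documentation snippet quoted in the question), subscribeOn is the right tool, since it moves the subscription, and with it the blocking call, onto the given Scheduler. A minimal sketch, where callBlockingService() is a placeholder for any synchronous call (on recent Reactor versions, Schedulers.boundedElastic() replaces the deprecated elastic()):

Mono<String> blockingWrapper = Mono
        .fromCallable(() -> callBlockingService()) // placeholder for a blocking call
        .subscribeOn(Schedulers.elastic());        // subscription, and thus the call, runs on the elastic scheduler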
I am working on a WebFlux application using coroutines. Its main purpose is to be a "backend for frontend" for a mobile app. Most requests are handled by fetching and merging data from microservices. I am currently adding a database to this service. I thought I understood this concept in coroutines. I tried adding this code:
suspend fun fetchData(): String {
    return withContext(Dispatchers.IO) {
        Thread.sleep(10000) // fetch data from database
        ""
    }
}
I was surprised that this code, used in one of the endpoints, slowed down every endpoint's response time. Even endpoints unrelated to this part of the code were affected. My guess is that I am using the wrong thread pool for this operation. I also tried the Reactor approach, but got the same result:
suspend fun fetchData(): String {
    return Mono.fromCallable {
        Thread.sleep(10000) // fetch data from database
        ""
    }.subscribeOn(Schedulers.boundedElastic()).awaitSingle()
}
What am I doing wrong? Why does the main thread seem to get blocked?
I'm actually confused about assembly time and subscription time. I know Monos are lazy and do not get executed until they are subscribed to. Below is a method.
public Mono<UserbaseEntityResponse> getUserbaseDetailsForEntityId(String id) {
    GroupRequest request = ImmutableGroupRequest
            .builder()
            .cloudId(id)
            .build();

    Mono<List<GroupResponse>> response = ussClient.getGroups(request);

    List<UserbaseEntityResponse.GroupPrincipal> groups = new CopyOnWriteArrayList<>();
    response.flatMapIterable(elem -> elem)
            .toIterable().iterator().forEachRemaining(
                groupResponse -> {
                    groupResponse.getResources().iterator().forEachRemaining(
                        resource -> {
                            groups.add(ImmutableGroupPrincipal
                                    .builder()
                                    .groupId(resource.getId())
                                    .name(resource.getDisplayName())
                                    .addAllUsers(convertMembersToUsers(resource))
                                    .build());
                        }
                    );
                }
            );
    log.debug("Response Object - " + groups.toString());

    ImmutableUserbaseEntityResponse res = ImmutableUserbaseEntityResponse
            .builder()
            .userbaseId(id)
            .addAllGroups(groups)
            .build();

    Flux<UserbaseEntityResponse.GroupPrincipal> f = Flux.fromIterable(res.getGroups())
            .parallel()
            .runOn(Schedulers.parallel())
            .doOnNext(groupPrincipal -> getResourcesForGroup((ImmutableGroupPrincipal) groupPrincipal, res.getUserbaseId()))
            .sequential();

    return Mono.just(res);
}
This gets executed without calling subscribe: Mono<List<GroupResponse>> response = ussClient.getGroups(request); however, the following will not get executed unless I call subscribe on it.
Flux<UserbaseEntityResponse.GroupPrincipal> f = Flux.fromIterable(res.getGroups())
        .parallel()
        .runOn(Schedulers.parallel())
        .doOnNext(groupPrincipal -> getResourcesForGroup((ImmutableGroupPrincipal) groupPrincipal, res.getUserbaseId()))
        .sequential();
Can I get some more input on assembly time vs. subscription time?
"Nothing happens until you subscribe" isn't quite true in all cases. There's three scenarios in which a publisher (Mono or Flux) will be executed:
You subscribe;
You block;
The publisher is "hot".
Note that the above scenarios all apply to an entire reactive chain - i.e. if I subscribe to a publisher, everything upstream (everything that publisher depends on) also executes. That's why frameworks can, and should, call subscribe when they need to, causing the reactive chain defined in a controller to execute.
In your case it's actually the second of these - you're blocking, which is essentially a "subscribe and wait for the result(s)". Usually the methods that block are clearly labelled, but that's not always the case - here it's the toIterable() method on Flux doing the blocking:
Transform this Flux into a lazy Iterable blocking on Iterator.next() calls.
But ah, you say, I'm not calling Iterator.next() - what gives?!
Well, implicitly you are by calling forEachRemaining():
The default implementation behaves as if:
while (hasNext())
action.accept(next());
...and as per the above rule, since ussClient.getGroups(request) is upstream of this blocking call, it gets executed.
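As an aside, the blocking iteration from the question can be rewritten as a single pipeline that only executes once the framework subscribes. A rough sketch reusing the question's names (ussClient, ImmutableGroupPrincipal, and friends), assuming getResources() returns an Iterable, and leaving out the parallel getResourcesForGroup step for brevity:

public Mono<UserbaseEntityResponse> getUserbaseDetailsForEntityId(String id) {
    GroupRequest request = ImmutableGroupRequest.builder().cloudId(id).build();

    return ussClient.getGroups(request)
            .flatMapIterable(elem -> elem)                // Mono<List<GroupResponse>> -> Flux<GroupResponse>
            .flatMapIterable(GroupResponse::getResources) // Flux<GroupResponse> -> Flux of resources
            .map(resource -> ImmutableGroupPrincipal.builder()
                    .groupId(resource.getId())
                    .name(resource.getDisplayName())
                    .addAllUsers(convertMembersToUsers(resource))
                    .build())
            .collectList()
            .map(groups -> ImmutableUserbaseEntityResponse.builder()
                    .userbaseId(id)
                    .addAllGroups(groups)
                    .build());
}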
I am looking to cache a Mono (only if it is successful) which is the result of a WebClient call.
From reading the project reactor addons docs, I don't feel that CacheMono is a good fit, as it caches errors as well, which I do not want.
So instead of using CacheMono I am doing the below:
Cache<MyRequestObject, Mono<MyResponseObject>> myCaffeineCache =
        Caffeine.newBuilder()
                .maximumSize(100)
                .expireAfterWrite(Duration.ofSeconds(60))
                .build();

MyRequestObject myRequestObject = ...;
Mono<MyResponseObject> myResponseObject = myCaffeineCache.get(myRequestObject,
        requestAsKey -> WebClient.create()
                .post()
                .uri("http://www.example.com")
                .syncBody(requestAsKey)
                .retrieve()
                .bodyToMono(MyResponseObject.class)
                .cache()
                .doOnError(t -> myCaffeineCache.invalidate(requestAsKey)));
Here I am calling cache() on the Mono and then adding it to the Caffeine cache.
Any errors will enter doOnError to invalidate the cache.
Is this a valid approach to caching a Mono WebClient response?
This is one of the very few use cases where you'd actually be allowed to call non-reactive libraries, wrap them with reactive types, and have processing done in side-effect operators like doOnXYZ, because:
Caffeine is an in-memory cache, so as far as I know there's no I/O involved
Caches often don't offer strong guarantees about caching values (it's very much "fire and forget")
In this case you can query the cache to see if a cached version is there (wrap it and return right away), and cache a successful real response in a doOn operator, like this:
public class MyService {

    private WebClient client;

    private Cache<MyRequestObject, MyResponseObject> myCaffeineCache;

    public MyService() {
        this.client = WebClient.create();
        this.myCaffeineCache = Caffeine.newBuilder()
                .maximumSize(100)
                .expireAfterWrite(Duration.ofSeconds(60))
                .build();
    }

    public Mono<MyResponseObject> fetchResponse(MyRequestObject request) {
        MyResponseObject cachedVersion = this.myCaffeineCache.getIfPresent(request);
        if (cachedVersion != null) {
            return Mono.just(cachedVersion);
        } else {
            return this.client.post()
                    .uri("http://www.example.com")
                    .syncBody(request)
                    .retrieve()
                    .bodyToMono(MyResponseObject.class)
                    .doOnNext(response -> this.myCaffeineCache.put(request, response));
        }
    }
}
Note that I wouldn't cache reactive types here, since there's no I/O involved nor backpressure once the value is returned by the cache. On the contrary, it would make things more difficult with subscription and other reactive streams constraints.
Also, you're right about the cache operator: it isn't about caching the value per se, but more about replaying what happened to other subscribers. I believe the cache and replay operators are actually synonyms for Flux.
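To make that replay semantics concrete, here is a minimal, self-contained sketch: the callable runs once on the first subscription, and the second subscriber replays the cached value instead of re-triggering it:

Mono<Long> cached = Mono.fromCallable(System::nanoTime).cache();
cached.subscribe(t -> System.out.println("first:  " + t)); // triggers the callable
cached.subscribe(t -> System.out.println("second: " + t)); // replays the same value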
Actually, you don't have to save errors with CacheMono.
private Cache<MyRequestObject, MyResponseObject> myCaffeineCache;
...
Mono<MyResponseObject> myResponseObject =
        CacheMono.lookup(key -> Mono.justOrEmpty(myCaffeineCache.getIfPresent(key))
                        .map(Signal::next), myRequestObject)
                .onCacheMissResume(() -> /* Your web client or other Mono here */)
                .andWriteWith((key, signal) -> Mono.fromRunnable(() ->
                        Optional.ofNullable(signal.get())
                                .ifPresent(value -> myCaffeineCache.put(key, value))));
This may be useful when you switch to an external cache. Don't forget to use reactive clients for external caches.
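For illustration, with an external store such as Redis the lookup itself becomes a Mono. A rough sketch using Spring Data Redis's reactive template; the template wiring, callBackend(), and toCacheKey() are assumptions for the example, not part of the question:

// Sketch only: assumes a ReactiveRedisTemplate<String, MyResponseObject> bean is available,
// that toCacheKey() derives a stable string key (hypothetical), and that callBackend()
// stands in for the WebClient call.
public Mono<MyResponseObject> fetchResponse(MyRequestObject request) {
    String key = request.toCacheKey();
    return redisTemplate.opsForValue().get(key)
            .switchIfEmpty(callBackend(request)
                    .flatMap(response -> redisTemplate.opsForValue()
                            .set(key, response)      // write-through on a cache miss
                            .thenReturn(response)));
}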
On the backend I'm doing:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public void saveProducts(@Valid @RequestBody Flux<Product> products) {
    products.subscribe(product -> log.info("product: " + product.toString()));
}
And on the frontend I'm calling this using:
this.targetWebClient
        .post()
        .uri(productUri)
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .contentType(MediaType.APPLICATION_STREAM_JSON)
        .body(this.sourceWebClient
                .get()
                .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
                        .queryParam("date", date)
                        .build())
                .accept(MediaType.APPLICATION_STREAM_JSON)
                .retrieve()
                .bodyToFlux(Product.class), Product.class)
        .exchange()
        .subscribe();
What happens now is that I have 472 products which need to be saved, but only one of them is actually saved. The stream closes after the first one, and I can't find out why.
If I do:
...
.retrieve()
.bodyToMono(Void.class);
instead, the request isn't even arriving at the backend.
I also tried a fixed number of elements:
.body(Flux.just(new Product("123"), new Product("321")...
And with that, also only the first one arrived.
EDIT
I changed the code:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    products.subscribe(product -> this.service.saveProduct(product));
    return Mono.empty();
}
and:
this.targetWebClient
        .post()
        .uri(productUri)
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .contentType(MediaType.APPLICATION_STREAM_JSON)
        .body(this.sourceWebClient
                .get()
                .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
                        .queryParam("date", date)
                        .build())
                .accept(MediaType.APPLICATION_STREAM_JSON)
                .retrieve()
                .bodyToFlux(Product.class), Product.class)
        .exchange()
        .block();
That led to the behaviour that one product was saved twice (because the backend endpoint was called twice), but again only one item. We also got an error on the frontend side:
IOException: Connection reset by peer
Same for:
...
.retrieve()
.bodyToMono(Void.class)
.subscribe();
Doing the following:
this.targetWebClient
        .post()
        .uri(productUri)
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .contentType(MediaType.APPLICATION_STREAM_JSON)
        .body(this.sourceWebClient
                .get()
                .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
                        .queryParam("date", date)
                        .build())
                .accept(MediaType.APPLICATION_STREAM_JSON)
                .retrieve()
                .bodyToFlux(Product.class), Product.class)
        .retrieve();
This leads to the behaviour that the backend, again, isn't called at all.
The Reactor documentation does say that nothing happens until you subscribe, but that doesn't mean you should subscribe in your Spring WebFlux code.
Here are a few rules you should follow in Spring WebFlux:
If you need to do something in a reactive fashion, the return type of your method should be Mono or Flux
Within a method returning a reactive type, you should never call block, subscribe, toIterable, or any other method that doesn't return a reactive type itself
You should never do I/O-related work in side-effect operators such as doOnXYZ, as they're not meant for that, and this will cause issues at runtime
In your case, your backend should use a reactive repository to save your data and should look like:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    return productRepository.saveAll(products).then();
}
In this case, the Mono<Void> return type means that your controller won't return anything as a response body, but will still signal when it's done processing the request. This might explain the behavior you're seeing - by the time the controller is done processing the request, not all products have been saved to the database.
Also, remember the rules noted above. Depending on where your targetWebClient is used, calling .subscribe() on it might not be the solution. If it's a test method that returns void, you might want to call block on it and get the result to make assertions on it. If this is a component method, then you should probably return a Publisher type as the return value.
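For example, if the frontend call lives in a component, returning the pipeline instead of subscribing could look like the sketch below (copyProducts() is a hypothetical wrapper; productUri, sourceWebClient, and targetWebClient come from the question):

// Sketch: return the pipeline; the caller (or the framework) subscribes.
public Mono<Void> copyProducts(LocalDate date) {
    Flux<Product> products = this.sourceWebClient
            .get()
            .uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
                    .queryParam("date", date)
                    .build())
            .accept(MediaType.APPLICATION_STREAM_JSON)
            .retrieve()
            .bodyToFlux(Product.class);

    return this.targetWebClient
            .post()
            .uri(productUri)
            .contentType(MediaType.APPLICATION_STREAM_JSON)
            .body(products, Product.class)
            .retrieve()
            .bodyToMono(Void.class);
}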
EDIT:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    products.subscribe(product -> this.service.saveProduct(product));
    return Mono.empty();
}
Doing this isn't right:
Calling subscribe decouples the processing of the request/response from the saveProduct operation. It's like starting that processing in a different executor.
Returning Mono.empty() signals to Spring WebFlux that you're done with the request processing right away. Spring WebFlux will then close and clean up the request/response resources, but your saveProduct process is still running and won't be able to read from the request, since Spring WebFlux closed and cleaned it.
As suggested in the comments, you can wrap blocking operations with Reactor (even though it's not advised and you may encounter performance issues) and make sure that you're connecting all the operations in a single reactive pipeline.
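A rough sketch of such a single pipeline, assuming service.saveProduct(product) is the blocking call from the edit (each save is hopped onto the elastic scheduler so the event loop stays free):

@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
    return products
            .flatMap(product -> Mono.fromRunnable(() -> this.service.saveProduct(product))
                    .subscribeOn(Schedulers.elastic())) // keep the blocking save off the event loop
            .then(); // completes only after every product has been processed
}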
I'm trying to find an elegant way to retry an operation when a WCF channel is in a faulted state. I've tried using the Policy Injection Application Block to reconnect and retry the operation when a faulted-state exception occurs on the first call, but the PolicyInjection.Wrap method doesn't seem to like wrapping TransparentProxy objects (the proxy returned from ChannelFactory.CreateChannel).
Is there any other mechanism I could try, or how could I get the PIAB solution working correctly? Any links, examples, etc. would be greatly appreciated.
Here is the code I was using that was failing:
var channelFactory = new ChannelFactory<IService>(endpointConfigurationName);
var proxy = channelFactory.CreateChannel(...);
proxy = PolicyInjection.Wrap<IService>(proxy);
Thank you.
I would rather use callback functions, something like this:
private SomeServiceClient proxy;

// This method invokes a service method and recreates the proxy if it's in a faulted state
private void TryInvoke(Action<SomeServiceClient> action)
{
    try
    {
        action(this.proxy);
    }
    catch (FaultException)
    {
        if (this.proxy.State == CommunicationState.Faulted)
        {
            this.proxy.Abort();
            this.proxy = new SomeServiceClient();
            // Probably, there is a better way than recursion
            TryInvoke(action);
        }
        else
        {
            throw; // not a faulted channel; let the caller handle it
        }
    }
}

// Any real method
private void Connect(Action<UserModel> callback)
{
    TryInvoke(sc => callback(sc.Connect()));
}
And in your code you should call
ServiceProxy.Instance.Connect(user => MessageBox.Show(user.Name));
instead of
var user = ServiceProxy.Instance.Connect();
MessageBox.Show(user.Name);
Although my code uses the proxy-class approach, you can write similar code with Channels.
Thank you so much for your reply. What I ended up doing was creating a decorator class that implemented the interface of my service and wrapped the transparent proxy generated by the ChannelFactory. I was then able to use the Policy Injection Application Block to create a layer on top of this that injected code into each operation call: it would try the operation, and if a CommunicationObjectFaultedException occurred, it would abort the channel, recreate it, and retry the operation. It's working great now. The only downside is that the wrapper class has to implement every service operation, but this was the only way I could get the PIAB working, and since everything is behind interfaces, it will be easy to change if I find a better way in the future.