Webflux WebClient retry and Spring Cloud Circuit Breaker Resilience4J Retry pattern walk into a bar - spring-webflux

Would like to ask a question about two technologies.
We started with an application that has to call a third party's REST API, so we used the WebFlux WebClient in our Spring Boot WebFlux project. So far so good; the app ran successfully for quite a while.
Then the third-party app (not ours) started to become flaky and would sometimes fail our requests, so we had to implement some kind of retry logic. After implementing the retry logic, such as WebClient retries, the business flow is working fine again.
We mainly took the logic from the framework directly. For instance, a talk from @simon-baslé, Cancel, Retry and Timeouts, at the recent SpringOne gave many working examples.
.retryWhen(Retry.backoff(5, Duration.ofMillis(10)).maxBackoff(Duration.ofSeconds(1)).jitter(0.4)).timeout(Duration.ofSeconds(5))
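For reference, a minimal sketch of how that spec is wired onto one of our WebClient calls (the endpoint, types and values here are illustrative, not our real code):
import java.time.Duration;

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

// Illustrative only: exponential backoff with jitter, capped at 5 attempts,
// plus an overall timeout on the decorated call.
Mono<String> response = WebClient.create("https://third-party.example.com")
        .get()
        .uri("/resource")
        .retrieve()
        .bodyToMono(String.class)
        .retryWhen(Retry.backoff(5, Duration.ofMillis(10))
                .maxBackoff(Duration.ofSeconds(1))
                .jitter(0.4))
        .timeout(Duration.ofSeconds(5));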
On the other hand, lately more and more apps are moving towards the Circuit Breaker pattern. The Spring Cloud Circuit Breaker project, backed by Resilience4J, is a popular implementation covering patterns such as Circuit Breaker, Bulkhead, and of course Retry.
Hence my question: is there a benefit to using or combining both in terms of retry?
Any gain in having the two together? Any drawbacks?
Or is only one of the two enough, and if so, which one, and why?
Thank you

We (the Resilience4j team) have implemented custom Spring Reactor operators for CircuitBreaker, Retry and Timeout. Internally, Retry and Timeout use operators from Spring Reactor, but Resilience4j adds functionality on top of them:
External configuration of Retry, Timeout and CircuitBreaker via config files
Spring Cloud Config support to dynamically adjust the configuration
Metrics, metrics, metrics ;)
Please see https://resilience4j.readme.io/docs/examples-1 and https://resilience4j.readme.io/docs/getting-started-3
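For illustration, a minimal sketch of the operator style (the WebClient call and the instance names are assumptions on my side; with our Spring Boot starter the CircuitBreaker and Retry instances would normally come from the configured registries rather than being created inline):
import java.time.Duration;

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.reactor.circuitbreaker.operator.CircuitBreakerOperator;
import io.github.resilience4j.reactor.retry.RetryOperator;
import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

// Sketch: decorating a WebClient call with Resilience4j's Reactor operators.
CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("backend");
Retry retry = Retry.of("backend", RetryConfig.custom()
        .maxAttempts(5)
        .waitDuration(Duration.ofMillis(10))
        .build());

Mono<String> response = WebClient.create("https://third-party.example.com")
        .get()
        .uri("/resource")
        .retrieve()
        .bodyToMono(String.class)
        .transformDeferred(RetryOperator.of(retry))
        .transformDeferred(CircuitBreakerOperator.of(circuitBreaker));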
You can even use annotations to make it simpler:
@CircuitBreaker(name = BACKEND)
@RateLimiter(name = BACKEND)
@Retry(name = BACKEND)
@TimeLimiter(name = BACKEND, fallbackMethod = "fallback")
public Mono<String> method(String param1) {
    return ...
}

private Mono<String> fallback(String param1, TimeoutException ex) {
    return ...;
}
Please be aware that we are providing our own Spring Boot starter. I'm NOT talking about the Spring Cloud CircuitBreaker project.

Related

Cross cutting concerns with Vertx

I need to handle cross-cutting concerns throughout my microservice built using Vert.x. A while back we used Spring AOP, but the tech stack has changed to Vert.x and I am trying to find out whether something similar is even possible. I would like suggestions on all the possible options in this case.
AOP is just another way to generate proxy classes. In Vert.x, you would usually achieve the same goal using Handlers and Context.
So let's say you'd like to have logging around your endpoints. You can do it as follows:
Router filterRouter = Router.router(vertx);
filterRouter.get().handler(ctx -> {
    System.out.println("Before");
    ctx.next();
    System.out.println("After");
});
filterRouter.mountSubRouter("/", router);
If you don't want to proceed with the filter chain, you just won't invoke ctx.next() in your handler.
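For example, continuing the snippet above, a sketch of a filter that short-circuits the chain by ending the response instead of calling ctx.next() (the header name and status code are illustrative):
filterRouter.get().handler(ctx -> {
    // Made-up authentication check: reject the request and stop the chain.
    if (ctx.request().getHeader("X-Auth-Token") == null) {
        ctx.response().setStatusCode(401).end("Unauthorized");
    } else {
        ctx.next();
    }
});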

WebFlux Controllers Returning Flux and Backpressure

In Spring WebFlux I have a controller similar to this:
@RestController
@RequestMapping("/data")
public class DataController {
    @GetMapping(produces = MediaType.APPLICATION_JSON_VALUE)
    public Flux<Data> getData() {
        return <data from database using reactive driver>
    }
}
What exactly is subscribing to the publisher?
What (if anything) is providing backpressure?
For context I'm trying to evaluate if there are advantages to using Spring WebFlux in this specific situation over Spring MVC.
Note: I am not a developer of the Spring framework, so any comments are welcome.
What exactly is subscribing to the publisher?
It is a long-living subscription to the port (the server initialization itself). To that end, ReactorHttpServer.class has the method:
@Override
protected void startInternal() {
    DisposableServer server = this.reactorServer.handle(this.reactorHandler).bind().block();
    setPort(((InetSocketAddress) server.address()).getPort());
    this.serverRef.set(server);
}
The Subscriber is the bind method, which (as far as I can see) does request(Long.MAX_VALUE), so no back pressure management here.
The important part for request handling is the method handle(this.reactorHandler). The reactorHandler is an instance of ReactorHttpHandlerAdapter. Further up the stack (within the apply method of ReactorHttpHandlerAdapter) is the DispatcherHandler.class. The java doc of this class starts with " Central dispatcher for HTTP request handlers/controllers. Dispatches to registered handlers for processing a request, providing convenient mapping facilities.". It has the central method:
@Override
public Mono<Void> handle(ServerWebExchange exchange) {
    if (this.handlerMappings == null) {
        return createNotFoundError();
    }
    return Flux.fromIterable(this.handlerMappings)
            .concatMap(mapping -> mapping.getHandler(exchange))
            .next()
            .switchIfEmpty(createNotFoundError())
            .flatMap(handler -> invokeHandler(exchange, handler))
            .flatMap(result -> handleResult(exchange, result));
}
Here, the actual request processing happens. The response is written within handleResult. How the result is written now depends on the actual server implementation.
For the default server, i.e. Reactor Netty, it will be a ReactorServerHttpResponse.class. Here you can see the method writeWithInternal. This one takes the publisher result of the handler method and writes it to the underlying HTTP connection:
@Override
protected Mono<Void> writeWithInternal(Publisher<? extends DataBuffer> publisher) {
    return this.response.send(toByteBufs(publisher)).then();
}
One implementation of NettyOutbound.send( ... ) is reactor.netty.channel.ChannelOperations. For your specific case of a Flux return, this implementation manages the NIO within MonoSendMany.class. This class does subscribe( ... ) with a SendManyInner.class, which performs backpressure management by implementing Subscriber and requesting 128 elements in onSubscribe. I guess Netty internally uses TCP ACK to signal successful transmission.
So,
What (if anything) is providing backpressure?
... yes, backpressure is provided, e.g. by SendManyInner.class; however, other implementations exist as well.
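If you want to see that bounded demand from your own code, here is a small standalone sketch (outside WebFlux) using doOnRequest; adding the same operator to a Flux returned by a controller lets you observe the server's request(n) calls:
import reactor.core.publisher.Flux;

// Standalone sketch: doOnRequest logs the demand signalled from downstream.
// Under Reactor Netty you would see bounded request(n) calls (e.g. 128)
// rather than request(Long.MAX_VALUE).
Flux.range(1, 1000)
        .doOnRequest(n -> System.out.println("requested: " + n))
        .limitRate(128) // emulate a consumer with bounded demand
        .blockLast();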
For context I'm trying to evaluate if there are advantages to using Spring WebFlux in this specific situation over Spring MVC.
I think it is definitely worth evaluating. For performance, however, I guess the result will depend on the number of concurrent requests and maybe also on the type of your Data class. Generally speaking, WebFlux is usually the preferred choice for high-throughput, low-latency situations, and we generally see better hardware utilization in our environments. Assuming you take your data from a database, you will probably get the best results with a database driver that supports reactive access as well. Besides performance, backpressure management is always a good reason to have a look at WebFlux. Since we adopted WebFlux, our data platform has not had stability problems anymore (not to claim there are no other ways to have a stable system, but here many issues are solved out of the box).
As a side note: I recommend having a closer look at Schedulers; we recently gained 30% CPU time by choosing the right one for slow DB accesses.
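A minimal sketch of what that can look like (blockingRepository and findById are made-up names for a blocking DAO call): moving slow blocking work onto the boundedElastic scheduler so it does not tie up event-loop threads.
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

// Sketch: run the slow, blocking call on boundedElastic instead of the
// Netty event loop; the repository and id are hypothetical.
Mono<Data> data = Mono.fromCallable(() -> blockingRepository.findById(id))
        .subscribeOn(Schedulers.boundedElastic());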
EDIT:
In https://docs.spring.io/spring/docs/current/spring-framework-reference/web-reactive.html#webflux-fn-handler-functions the reference documentation explicitly says:
ServerRequest and ServerResponse are immutable interfaces that offer JDK 8-friendly access to the HTTP request and response. Both request and response provide Reactive Streams back pressure against the body streams.
What exactly is subscribing to the publisher?
The framework (so Spring, in this case.)
In general, you shouldn't subscribe in your own application - the framework should be subscribing to your publisher when necessary. In the context of Spring, that's whenever a relevant request hits that controller.
What (if anything) is providing backpressure?
In this case, it's only restricted by the speed of the connection (I believe Webflux will look at the underlying TCP layer) and then request data as required. Whether your upstream flux listens to that backpressure though is another story - it may do, or it may just flood the consumer with as much data as it can.
For context I'm trying to evaluate if there are advantages to using Spring WebFlux in this specific situation over Spring MVC.
The main advantage is being able to hold huge numbers of connections open with only a few threads - so no overhead of context switching. (That's not the sole advantage, but most of the advantages generally boil down to that point.) Usually, this is only an advantage worth considering if you need to hold in the region of thousands of connections open at once.
The main disadvantage is the fact that reactive code looks very different from standard Java code, and is usually necessarily more complex as a result. Debugging is also harder - vanilla stack traces become all but useless, for instance (though there are tools & techniques to make this easier).
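One example of such a technique (one option among several; checkpoint() and the reactor-tools agent are lighter-weight alternatives) is Reactor's global debug hook:
import reactor.core.publisher.Hooks;

// Enables assembly-time tracing so errors carry a "traceback" of the operator
// chain that produced them; it adds measurable overhead, so it is usually
// enabled only during development.
Hooks.onOperatorDebug();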

How are ClientRequest & ClientHttpRequest connected in Spring WebFlux

When we use the WebClient API from spring-webflux, it internally uses the ClientRequest class.
But we also have ClientHttpRequest in the spring-web module.
Why do we have two different classes that sound and are very similar? Can someone explain the differences between these two classes?
org.springframework.web.reactive.function.client.ClientRequest is meant to be the class that Spring developers can use with the WebClient. It has advanced features like a request attributes map, a logPrefix for logging purposes, static builders, etc. It also uses higher-level concepts like ExchangeStrategies.
On the other hand, org.springframework.http.client.reactive.ClientHttpRequest is the base abstraction for HTTP client requests, at the raw HTTP level. This is used to implement the various adaptation layers for HTTP clients (Reactor Netty, Jetty).
So unless you're doing low-level stuff, you shouldn't need to use ClientHttpRequest directly in your application.
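To make the distinction concrete, here is a small sketch (the header name is illustrative) of the one place application code commonly touches ClientRequest - an ExchangeFilterFunction registered on the WebClient:
import org.springframework.web.reactive.function.client.ClientRequest;
import org.springframework.web.reactive.function.client.ExchangeFilterFunction;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

// Sketch: mutating the ClientRequest inside a WebClient filter; the
// ClientHttpRequest underneath is created later by the HTTP connector.
WebClient client = WebClient.builder()
        .filter(ExchangeFilterFunction.ofRequestProcessor(request ->
                Mono.just(ClientRequest.from(request)
                        .header("X-Request-Source", "demo")
                        .build())))
        .build();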

WebClient instrumentation in spring sleuth

I'm wondering whether sleuth has reactive WebClient instrumentation supported.
I didn't find it in the documentation:
Instruments common ingress and egress points from Spring applications (servlet filter, async endpoints, rest template, scheduled actions, message channels, Zuul filters, and Feign client).
My case:
I may use WebClient in either a WebFilter or my REST resource to produce a Mono.
And I want:
a sub-span automatically created as a child of the root span
trace info propagated via headers
If the instrumentation is not supported at the moment, am I supposed to manually get the span from the context and do it myself, like this:
OpenTracing instrumentation on reactive WebClient
Thanks
Leon
Even though this is an old question, this may help others...
WebClient instrumentation will only work if a new instance is created via Spring as a bean. Check the Spring Cloud Sleuth reference guide.
You have to register WebClient as a bean so that the tracing instrumentation gets applied. If you create a WebClient instance with a new keyword, the instrumentation does NOT work.
If you go to Sleuth's documentation for the Finchley release train and search for WebClient, you'll find it - https://cloud.spring.io/spring-cloud-static/Finchley.RC2/single/spring-cloud.html#__literal_webclient_literal . In other words, we do support it out of the box.
UPDATE:
New link - https://docs.spring.io/spring-cloud-sleuth/docs/current/reference/html/integrations.html#sleuth-http-client-webclient-integration
Let me paste the contents:
3.2.2. WebClient
This feature is available for all tracer implementations.
We inject a ExchangeFilterFunction implementation that creates a span
and, through on-success and on-error callbacks, takes care of closing
client-side spans.
To block this feature, set spring.sleuth.web.client.enabled to false.
You have to register WebClient as a bean so that the tracing
instrumentation gets applied. If you create a WebClient instance with
a new keyword, the instrumentation does NOT work.
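For completeness, a minimal sketch of such a bean registration (the base URL is illustrative):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
class WebClientConfiguration {

    // Building from the Spring-managed builder and exposing the client as a
    // bean lets Sleuth apply its tracing ExchangeFilterFunction; constructing
    // the client manually elsewhere bypasses the instrumentation.
    @Bean
    WebClient tracedWebClient(WebClient.Builder builder) {
        return builder.baseUrl("http://downstream-service").build();
    }
}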

ServiceStack and NHibernate Unit Of Work Pattern

Long story as brief as possible...
I have an existing application that I'm trying to bring ServiceStack into to create our new API. The app is currently an MVC3 app and uses the Unit of Work pattern, with attributes on MVC routes that create/finalize a transaction where the attribute is applied.
I'm trying to accomplish something similar using ServiceStack.
This gist
shows the relevant ServiceStack configuration settings. What I am curious about is the global request/response filters - these will create a new unit of work for each request and close it before sending the response to the client (there is a check in there so that if an error occurs writing to the DB, we return an appropriate response to the client, and not a false "success" message).
My questions are:
1. Is this a good idea or not, or is there a better way to do this with ServiceStack?
2. In the MVC site we only create a new unit of work on an action that will add/update/delete data - should we do something similar here, or is it fine to create a transaction only to retrieve data?
As mentioned in ServiceStack's IOC wiki, the Funq IOC registers dependencies as singletons by default. So to register it with RequestScope, you need to specify it, as done here:
container.RegisterAutoWiredAs<NHibernateUnitOfWork, IUnitOfWork>()
    .ReusedWithin(ReuseScope.Request);
Although the following is likely not what you want, as it registers as a singleton, i.e. the same instance is returned for every request:
container.Register<ISession>((c) => {
    var uow = (INHibernateUnitOfWork) c.Resolve<IUnitOfWork>();
    return uow.Session;
});
You probably want to make it one of these:
.ReusedWithin(ReuseScope.Request); //per request
.ReusedWithin(ReuseScope.None); //Executed each time it's injected
Using a RequestScope also works for Global Request/Response filters which will get the same instance as used in the Service.
1) Whether you are using ServiceStack, MVC, WCF, Nancy, or any other web framework, the most common method to use is the session-per-request pattern. In web terms, this means creating a new unit of work at the beginning of the request and disposing of the unit of work at the end of the request. Almost all web frameworks have hooks for these events.
Resources:
https://stackoverflow.com/a/13206256/670028
https://stackoverflow.com/search?q=servicestack+session+per+request
2) You should always interact with NHibernate within a transaction.
Please see any of the following for an explanation of why:
http://ayende.com/blog/3775/nh-prof-alerts-use-of-implicit-transactions-is-discouraged
http://www.hibernatingrhinos.com/products/nhprof/learn/alert/DoNotUseImplicitTransactions
Note that when switching to using transactions with reads, be sure to make yourself aware of NULL behavior: http://www.zvolkov.com/clog/2009/07/09/why-nhibernate-updates-db-on-commit-of-read-only-transaction/#comments