I'm currently working with Spring AMQP 1.3.6.RELEASE and Spring Retry 1.1.2.RELEASE. According to section 3.3.1 of the Spring AMQP documentation, one can add retry capabilities by passing in a RetryTemplate.
Are there any existing capabilities to provide a RecoveryCallback<T> implementation? I was reviewing the RabbitTemplate.java implementation and I couldn't find any.
The use case I'm considering is that if a *Send() execution fails because the broker is down, I'd like to run my own custom recovery logic.
I understand that I could wrap the convertAndSend() call in my own RetryTemplate implementation and implement a try { ... } catch (AmqpException e) { ... } but I did not want to go down that road if Spring AMQP provided a cleaner implementation.
You are correct: there is no such ability right now.
Feel free to raise a JIRA issue and we'll address it soon.
Thanks.
And I think your workaround is the right way to go: just use your own RetryTemplate instance and invoke the raw RabbitTemplate.convertAndSend from an inline doWithRetry implementation.
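For reference, here is a minimal sketch of that workaround, assuming an existing RabbitTemplate bean; the queue name, retry policy and recovery logic are placeholders, and the RecoveryCallback is where your custom logic would go once the retries are exhausted:

import org.springframework.amqp.AmqpException;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.retry.RecoveryCallback;
import org.springframework.retry.RetryCallback;
import org.springframework.retry.RetryContext;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public class RetryingSender {

    private final RabbitTemplate rabbitTemplate;
    private final RetryTemplate retryTemplate;

    public RetryingSender(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
        this.retryTemplate = new RetryTemplate();
        this.retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
        this.retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy());
    }

    public void send(final Object payload) {
        this.retryTemplate.execute(
                new RetryCallback<Void, AmqpException>() {

                    @Override
                    public Void doWithRetry(RetryContext context) throws AmqpException {
                        rabbitTemplate.convertAndSend("my.queue", payload);
                        return null;
                    }

                },
                new RecoveryCallback<Void>() {

                    @Override
                    public Void recover(RetryContext context) {
                        // Retries exhausted (e.g. broker still down): custom fallback logic goes here.
                        return null;
                    }

                });
    }
}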
I'm trying to configure a microservice with Sleuth and ActiveMQ.
When starting a request I can properly see appName, traceId and spanId in the producer's logs, but after dequeuing the message in the listener I only find appName, without traceId and spanId.
How can I get these fields filled?
Right now I'm working with spring.sleuth.messaging.jms.enabled=false to avoid this exception at startup:
Bean named 'connectionFactory' is expected to be of type 'org.apache.activemq.ActiveMQConnectionFactory' but was actually of type 'org.springframework.cloud.sleuth.instrument.messaging.LazyConnectionFactory'
My dependencies:
org.springframework.boot:spring-boot-starter-activemq 2.5.1
org.springframework.cloud:spring-cloud-sleuth 3.0.3
Thank you all!
My understanding is that the properties you're looking for are set on the JMS message when the message is sent and then retrieved from the message when it is consumed. Since you're setting spring.sleuth.messaging.jms.enabled=false you're disabling this functionality. See the documentation which states:
We instrument the JmsTemplate so that tracing headers get injected into the message. We also support @JmsListener annotated methods on the consumer side.
To block this feature, set spring.sleuth.messaging.jms.enabled to false.
You'll need to find an alternate solution for the connection factory problem if you want to use Sleuth with Spring JMS. If you're injecting org.apache.activemq.ActiveMQConnectionFactory somewhere then you should almost certainly be using javax.jms.ConnectionFactory instead. Using the concrete type is bad for portability and use-cases like this where wrapper implementations are used dynamically.
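As a hedged illustration of that last point, depending on the javax.jms.ConnectionFactory interface lets Sleuth's wrapping factory be injected without a type mismatch. The bean below is only a sketch; Spring Boot's auto-configured JmsTemplate already works this way, so you would only need something like it if you build the template yourself:

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class JmsConfig {

    // Depend on the javax.jms.ConnectionFactory interface rather than the concrete
    // org.apache.activemq.ActiveMQConnectionFactory, so that Sleuth's
    // LazyConnectionFactory wrapper can be supplied here without a type clash.
    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        return new JmsTemplate(connectionFactory);
    }
}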
I have a reactive application using Spring WebFlux. I have used Sleuth annotations like @NewSpan to create spans, but I am getting a warning like:
CglibAopProxy|Unable to proxy interface-implementing method
[public final void reactor.core.publisher.Flux.subscribe(org.reactivestreams.Subscriber)]
because it is marked as final: Consider using interface-based JDK proxies instead!
I know Flux.subscribe is a final method so proxies are not generated correctly, but I can still see those spans on Zipkin.
I need to know what the implications of this warning are, and how I can avoid it.
I would like to ask a question about two technologies.
We started with an application that has to call a third party's REST API, hence we used the WebFlux WebClient in our Spring Boot WebFlux project. So far so good; we had a successful app for a while.
Then the third-party app (not ours) became flaky and sometimes failed our requests, so we had to implement some kind of retry logic. After implementing WebClient retries, the business flow is working fine again.
We mainly took the logic from the framework directly. For instance, a talk by @simon-baslé, Cancel, Retry and Timeouts, at the recent SpringOne gave many working examples, such as:
.retryWhen(Retry.backoff(5, Duration.ofMillis(10)).maxBackoff(Duration.ofSeconds(1)).jitter(0.4)).timeout(Duration.ofSeconds(5))
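For reference, a fuller sketch of how that fragment might sit in a WebClient call; the URL, response type and retry/timeout values here are illustrative placeholders, not our real configuration:

import java.time.Duration;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

public class ThirdPartyClient {

    private final WebClient webClient = WebClient.create("https://third-party.example.com");

    public Mono<String> fetch() {
        return this.webClient.get()
                .uri("/resource")
                .retrieve()
                .bodyToMono(String.class)
                // up to 5 retries with exponential backoff between 10 ms and 1 s, plus 40% jitter
                .retryWhen(Retry.backoff(5, Duration.ofMillis(10))
                        .maxBackoff(Duration.ofSeconds(1))
                        .jitter(0.4))
                // give up if the whole call, retries included, takes longer than 5 s
                .timeout(Duration.ofSeconds(5));
    }
}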
On the other hand, lately, more and more apps are moving towards the Circuit Breaker pattern. The Spring Cloud Circuit Breaker project, backed by Resilience4j, is a popular implementation providing patterns such as Circuit Breaker, Bulkhead, and of course Retry.
Hence my question: is there a benefit to using or combining both in terms of retry?
Any gain in terms of having the two together? Any drawbacks?
Or only one of the two is enough, in which case, which one please? And why?
Thank you
We (the Resilience4j team) have implemented custom Spring Reactor operators for CircuitBreaker, Retry and Timeout. Internally, Retry and Timeout use operators from Spring Reactor, but Resilience4j adds functionality on top of them:
External configuration of Retry, Timeout and CircuitBreaker via config files
Spring Cloud Config support to dynamically adjust the configuration
Metrics, metrics, metrics ;)
Please see https://resilience4j.readme.io/docs/examples-1 and https://resilience4j.readme.io/docs/getting-started-3
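If you prefer plain operators over annotations, the resilience4j-reactor module can decorate a Mono or Flux directly. The following is only a minimal sketch: the instance names and default configurations are illustrative, and in a real application the CircuitBreaker and Retry instances usually come from the registries configured by the Spring Boot starter:

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.reactor.circuitbreaker.operator.CircuitBreakerOperator;
import io.github.resilience4j.reactor.retry.RetryOperator;
import io.github.resilience4j.retry.Retry;
import reactor.core.publisher.Mono;

public class BackendCalls {

    private final CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("backend");
    private final Retry retry = Retry.ofDefaults("backend");

    public Mono<String> decoratedCall(Mono<String> upstream) {
        return upstream
                // each attempt is guarded by the circuit breaker...
                .transformDeferred(CircuitBreakerOperator.of(circuitBreaker))
                // ...and the retry wraps around it
                .transformDeferred(RetryOperator.of(retry));
    }
}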
You can even use annotations to make it simpler:
@CircuitBreaker(name = BACKEND)
@RateLimiter(name = BACKEND)
@Retry(name = BACKEND)
@TimeLimiter(name = BACKEND, fallbackMethod = "fallback")
public Mono<String> method(String param1) {
    return ...;
}

private Mono<String> fallback(String param1, TimeoutException ex) {
    return ...;
}
Please be aware that we are providing our own Spring Boot starter. I'm NOT talking about the Spring Cloud CircuitBreaker project.
In Spring WebFlux I have a controller similar to this:
@RestController
@RequestMapping("/data")
public class DataController {

    @GetMapping(produces = MediaType.APPLICATION_JSON_VALUE)
    public Flux<Data> getData() {
        return <data from database using reactive driver>
    }
}
What exactly is subscribing to the publisher?
What (if anything) is providing backpressure?
For context I'm trying to evaluate if there are advantages to using Spring WebFlux in this specific situation over Spring MVC.
Note: I am not a developer of the Spring Framework, so any comments are welcome.
What exactly is subscribing to the publisher?
It is a long-lived subscription to the port (the server initialisation itself). Accordingly, ReactorHttpServer.class has the method:
@Override
protected void startInternal() {
    DisposableServer server = this.reactorServer.handle(this.reactorHandler).bind().block();
    setPort(((InetSocketAddress) server.address()).getPort());
    this.serverRef.set(server);
}
The Subscriber is the bind method, which (as far as I can see) does request(Long.MAX_VALUE), so no back pressure management here.
The important part for request handling is the method handle(this.reactorHandler). The reactorHandler is an instance of ReactorHttpHandlerAdapter. Further up the stack (within the apply method of ReactorHttpHandlerAdapter) is the DispatcherHandler.class. The java doc of this class starts with " Central dispatcher for HTTP request handlers/controllers. Dispatches to registered handlers for processing a request, providing convenient mapping facilities.". It has the central method:
@Override
public Mono<Void> handle(ServerWebExchange exchange) {
    if (this.handlerMappings == null) {
        return createNotFoundError();
    }
    return Flux.fromIterable(this.handlerMappings)
            .concatMap(mapping -> mapping.getHandler(exchange))
            .next()
            .switchIfEmpty(createNotFoundError())
            .flatMap(handler -> invokeHandler(exchange, handler))
            .flatMap(result -> handleResult(exchange, result));
}
This is where the actual request processing happens. The response is written within handleResult. How the result is written then depends on the actual server implementation.
For the default server, i.e. Reactor Netty it will be a ReactorServerHttpResponse.class. Here you can see the method writeWithInternal. This one takes the publisher result of the handler method and writes it to the underlying HTTP connection:
@Override
protected Mono<Void> writeWithInternal(Publisher<? extends DataBuffer> publisher) {
    return this.response.send(toByteBufs(publisher)).then();
}
One implementation of NettyOutbound.send(...) is reactor.netty.channel.ChannelOperations. For your specific case of returning a Flux, this implementation manages the NIO within MonoSendMany.class. That class subscribes with a SendManyInner.class, which does backpressure management by implementing Subscriber and requesting 128 elements in onSubscribe. I guess Netty internally uses TCP ACK to signal successful transmission.
So,
What (if anything) is providing backpressure?
... yes, backpressure is provided, e.g. by SendManyInner.class; however, other implementations exist as well.
For context I'm trying to evaluate if there are advantages to using Spring WebFlux in this specific situation over Spring MVC.
I think it is definitely worth evaluating. For performance, however, I guess the result will depend on the number of concurrent requests and maybe also on the type of your Data class. Generally speaking, WebFlux is usually the preferred choice for high-throughput, low-latency situations, and we generally see better hardware utilization in our environments. Assuming you take your data from a database, you will probably get the best results with a database driver that also supports reactive access. Besides performance, backpressure management is always a good reason to have a look at WebFlux. Since we adopted WebFlux, our data platform has not had stability problems anymore (not to claim there are no other ways to have a stable system, but here many issues are solved out of the box).
As a side note: I recommend having a closer look at Schedulers; we recently gained 30% CPU time by choosing the right one for slow DB accesses.
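For illustration only, a minimal sketch of that kind of tuning, assuming a blocking legacy DAO stands in for the slow DB access (the repository type here is hypothetical):

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class SlowRepositoryAdapter {

    // Hypothetical blocking DAO, standing in for any slow JDBC or legacy call.
    interface SlowBlockingRepository {
        String findById(String id);
    }

    private final SlowBlockingRepository repository;

    public SlowRepositoryAdapter(SlowBlockingRepository repository) {
        this.repository = repository;
    }

    // Wrap the blocking call and move it off the event loop onto boundedElastic,
    // the scheduler intended for blocking or long-running work.
    public Mono<String> findData(String id) {
        return Mono.fromCallable(() -> repository.findById(id))
                .subscribeOn(Schedulers.boundedElastic());
    }
}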
EDIT:
In https://docs.spring.io/spring/docs/current/spring-framework-reference/web-reactive.html#webflux-fn-handler-functions the reference documentation explicitly says:
ServerRequest and ServerResponse are immutable interfaces that offer JDK 8-friendly access to the HTTP request and response. Both request and response provide Reactive Streams back pressure against the body streams.
What exactly is subscribing to the publisher?
The framework (so Spring, in this case).
In general, you shouldn't subscribe in your own application - the framework should be subscribing to your publisher when necessary. In the context of Spring, that's whenever a relevant request hits that controller.
What (if anything) is providing backpressure?
In this case, it's restricted only by the speed of the connection (I believe WebFlux looks at the underlying TCP layer), and data is then requested as required. Whether your upstream Flux listens to that backpressure is another story, though - it may, or it may just flood the consumer with as much data as it can.
For context I'm trying to evaluate if there are advantages to using Spring WebFlux in this specific situation over Spring MVC.
The main advantage is being able to hold huge numbers of connections open with only a few threads - so no overhead of context switching. (That's not the sole advantage, but most of the advantages generally boil down to that point.) Usually, this is only an advantage worth considering if you need to hold in the region of thousands of connections open at once.
The main disadvantage is the fact that reactive code looks very different from standard Java code, and is usually necessarily more complex as a result. Debugging is also harder - vanilla stack traces become all but useless, for instance (though there are tools & techniques to make this easier).
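For example, two such techniques are Reactor's checkpoint() operator and Hooks.onOperatorDebug(); a minimal, self-contained sketch (the pipeline is contrived purely to trigger an error):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Hooks;

public class DebuggingExample {

    public static void main(String[] args) {
        // Global operator debugging: captures assembly information for richer stack traces
        // (it has a runtime cost, so it is usually enabled in development only).
        Hooks.onOperatorDebug();

        Flux.range(1, 5)
                .map(i -> 10 / (i - 3))             // fails when i == 3
                .checkpoint("after risky division") // lightweight marker that shows up in the error
                .subscribe(
                        System.out::println,
                        error -> error.printStackTrace());
    }
}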
We have defined Spring beans in mule-config.xml. Certain public methods in this bean class need to be executed periodically. We attempted to use Spring Quartz and the Spring task scheduler (adding beans in mule-config.xml), but the method is never triggered, so it does not execute on a schedule. Even using the @Scheduled annotation does not work. Is there any workaround for this? Is there a known issue with the Spring scheduler and Mule? Kindly help.
Thanks
If you want to use the @Scheduled annotation, take a look at this recent answer on the subject for a workaround.
Otherwise, Spring Quartz should work fine too. What have you tried? Share your config and specify the Mule version you're using. I'll review my answer accordingly.
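Independently of Mule specifics, one common reason @Scheduled methods never fire is that annotation-driven scheduling is not enabled in the Spring context at all. The sketch below shows the plain Spring way of enabling it (the Java-config form of <task:annotation-driven/>); whether the Spring context that Mule loads honours it depends on your setup and Mule version, so treat it as an assumption to verify:

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling
class SchedulingConfig {
    // Enables processing of @Scheduled annotations in this Spring context;
    // the XML equivalent is <task:annotation-driven/> in mule-config.xml.
}

@Component
class PeriodicJob {

    // Placeholder method and interval: runs every 30 seconds once scheduling is enabled.
    @Scheduled(fixedRate = 30000)
    public void runPeriodically() {
        // call the public method of your existing bean here
    }
}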