Add custom baggage to the current span and access it from the log MDC - spring-cloud-sleuth

I'm trying to add additional baggage to the existing span on an HTTP server: I want to add a path variable to the span so it can be accessed from the log MDC and propagated on the wire to the next server I call via HTTP or Kafka.
My setup: Spring Cloud Sleuth Hoxton.SR5 and Spring Boot 2.2.5.
I tried adding the following setup and configuration:
spring:
  sleuth:
    propagation-keys: context-id, context-type
    log:
      slf4j:
        whitelisted-mdc-keys: context-id, context-type
and added an HTTP interceptor:
public class HttpContextInterceptor implements HandlerInterceptor {

    private final Tracer tracer;
    private final HttpContextSupplier httpContextSupplier;

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        if (httpContextSupplier != null) {
            addContext(request, handler);
        }
        return true;
    }

    private void addContext(HttpServletRequest request, Object handler) {
        final Context context = httpContextSupplier.getContext(request);
        if (!StringUtils.isEmpty(context.getContextId())) {
            ExtraFieldPropagation.set(tracer.currentSpan().context(), TracingHeadersConsts.HEADER_CONTEXT_ID, context.getContextId());
        }
        if (!StringUtils.isEmpty(context.getContextType())) {
            ExtraFieldPropagation.set(tracer.currentSpan().context(), TracingHeadersConsts.HEADER_CONTEXT_TYPE, context.getContextType());
        }
    }
}
and an HTTP filter to affect the current span (according to the Spring docs):
public class TracingFilter extends OncePerRequestFilter {

    private final Tracer tracer;

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
        try (Tracer.SpanInScope ws = tracer.withSpanInScope(tracer.currentSpan())) {
            filterChain.doFilter(request, response);
        }
    }
}
The problem is that the logs don't contain my custom context-id and context-type, although I see them in the span context.
What am I missing?

A similar question: Spring cloud sleuth adding tag, with an answer here: https://stackoverflow.com/a/66554834
For some context: This is from the Spring Docs.
In order to automatically set the baggage values to Slf4j’s MDC, you have to set the spring.sleuth.baggage.correlation-fields property with a list of allowed local or remote keys. E.g. spring.sleuth.baggage.correlation-fields=country-code will set the value of the country-code baggage into MDC.
Note that the extra field is propagated and added to MDC starting with the next downstream trace context. To immediately add the extra field to MDC in the current trace context, configure the field to flush on update.
// configuration
@Bean
BaggageField countryCodeField() {
    return BaggageField.create("country-code");
}

@Bean
ScopeDecorator mdcScopeDecorator() {
    return MDCScopeDecorator.newBuilder()
        .clear()
        .add(SingleCorrelationField.newBuilder(countryCodeField())
            .flushOnUpdate()
            .build())
        .build();
}
// service
@Autowired
BaggageField countryCodeField;

countryCodeField.updateValue("new-value");

A way to flush the MDC in the current span is also described in the official Sleuth 2.0 -> 3.0 migration guide:
@Configuration
class BusinessProcessBaggageConfiguration {

    BaggageField BUSINESS_PROCESS = BaggageField.create("bp");

    /** {@link BaggageField#updateValue(TraceContext, String)} now flushes to MDC */
    @Bean
    CorrelationScopeCustomizer flushBusinessProcessToMDCOnUpdate() {
        return b -> b.add(
            SingleCorrelationField.newBuilder(BUSINESS_PROCESS).flushOnUpdate().build()
        );
    }
}
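Applying the same pattern to the two fields from the question, a minimal sketch might look like this (my own adaptation, assuming Sleuth 3.x, where BaggageField and CorrelationScopeCustomizer are available; on Hoxton the property-based setup above is the rough equivalent):
@Configuration
class ContextBaggageConfiguration {

    // field names taken from the question's configuration
    private static final BaggageField CONTEXT_ID = BaggageField.create("context-id");
    private static final BaggageField CONTEXT_TYPE = BaggageField.create("context-type");

    // flush both fields to the MDC on update, so the current span's logs
    // already carry them (not only the next downstream trace context)
    @Bean
    CorrelationScopeCustomizer contextFieldsCorrelation() {
        return b -> b
                .add(SingleCorrelationField.newBuilder(CONTEXT_ID).flushOnUpdate().build())
                .add(SingleCorrelationField.newBuilder(CONTEXT_TYPE).flushOnUpdate().build());
    }
}
In the interceptor, the ExtraFieldPropagation.set(...) calls would then be replaced by, e.g., CONTEXT_ID.updateValue(context.getContextId()).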

Related

Spring Integration testing a Files.inboundAdapter flow

I have this flow that I am trying to test, but nothing works as expected. The flow itself works well, but testing seems a bit tricky.
This is my flow:
@Configuration
@RequiredArgsConstructor
public class FileInboundFlow {

    private final ThreadPoolTaskExecutor threadPoolTaskExecutor;
    private String filePath;

    @Bean
    public IntegrationFlow fileReaderFlow() {
        return IntegrationFlows.from(Files.inboundAdapter(new File(this.filePath))
                        .filterFunction(...)
                        .preventDuplicates(false),
                endpointConfigurer -> endpointConfigurer.poller(
                        Pollers.fixedDelay(500)
                                .taskExecutor(this.threadPoolTaskExecutor)
                                .maxMessagesPerPoll(15)))
                .transform(new UnZipTransformer())
                .enrichHeaders(this::headersEnricher)
                .transform(Message.class, this::modifyMessagePayload)
                .route(Map.class, this::channelsRouter)
                .get();
    }

    private String channelsRouter(Map<String, File> payload) {
        boolean isZip = payload.values()
                .stream()
                .anyMatch(file -> isZipFile(file));
        return isZip ? ZIP_CHANNEL : XML_CHANNEL; // ZIP_CHANNEL and XML_CHANNEL are PublishSubscribeChannel
    }

    @Bean
    public SubscribableChannel xmlChannel() {
        var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
        channel.setBeanName(XML_CHANNEL);
        return channel;
    }

    @Bean
    public SubscribableChannel zipChannel() {
        var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
        channel.setBeanName(ZIP_CHANNEL);
        return channel;
    }

    // There is a @ServiceActivator on each channel
    @ServiceActivator(inputChannel = XML_CHANNEL)
    public void handleXml(Message<Map<String, File>> message) {
        ...
    }

    @ServiceActivator(inputChannel = ZIP_CHANNEL)
    public void handleZip(Message<Map<String, File>> message) {
        ...
    }

    // Plus a @Transformer on the XML_CHANNEL
    @Transformer(inputChannel = XML_CHANNEL, outputChannel = BUS_CHANNEL)
    private List<BusData> xmlFileToIngestionMessagePayload(Map<String, File> xmlFilesByName) {
        return xmlFilesByName.values()
                .stream()
                .map(...)
                .collect(Collectors.toList());
    }
}
I would like to test multiple cases; the first one is checking the message payload published on each channel after the end of fileReaderFlow.
So I defined this test class:
@SpringBootTest
@SpringIntegrationTest
@ExtendWith(SpringExtension.class)
class FileInboundFlowTest {

    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @TempDir
    static Path localWorkDir;

    @BeforeEach
    void setUp() {
        copyFileToTheFlowDir(); // here I copy a file to trigger the flow
    }

    @Test
    void checkXmlChannelPayloadTest() throws InterruptedException {
        Thread.sleep(1000); // waiting for the flow execution
        PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class); // I extract the channel to listen to the message sent to it.
        xmlChannel.subscribe(message -> {
            assertThat(message.getPayload()).isInstanceOf(Map.class); // This is never executed
        });
    }
}
As expected, that test does not work, because the assertThat(message.getPayload()).isInstanceOf(Map.class); is never executed.
After reading the documentation I didn't find any hint to help me solve that issue. Any help would be appreciated! Thanks a lot.
First of all, that channel.setBeanName(XML_CHANNEL); does not affect the target bean. You do this in the bean creation phase, and the dependency injection container knows nothing about this setting: it simply does not consult it. If you really want to dictate an XML_CHANNEL bean name, you'd better look into the @Bean(name) attribute.
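A minimal sketch of that (assuming XML_CHANNEL is a String constant) could be:
@Bean(name = XML_CHANNEL)
public SubscribableChannel xmlChannel() {
    return new PublishSubscribeChannel(this.threadPoolTaskExecutor);
}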
The problem in the test is that you are missing the async nature of the flow. That Files.inboundAdapter() works on a fully different thread and emits messages outside of your test method. So, even if you could subscribe to the channel in time, before any message is emitted to it, that wouldn't mean your test works correctly: the assertThat() would be performed on a different thread, so there would be no real JUnit report for your test method's context.
So, what I'd suggest to do is (a sketch follows the list):
Have the Files.inboundAdapter() stopped at the beginning of the test, before any setup you'd like to do in the test. Or at least don't place files into that filePath, so the channel adapter doesn't emit messages.
Take the channel from the application context and, if you wish, subscribe to it or use a ChannelInterceptor.
Have an async barrier, e.g. a CountDownLatch, to pass to that subscriber.
Start the channel adapter or put a file into the dir for scanning.
Wait for the async barrier before verifying some value or state.
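Putting those steps together, a sketch of such a test (my own illustration, reusing names from the question and assuming the usual java.util.concurrent and AssertJ imports) might look like:
@SpringBootTest
@SpringIntegrationTest
class FileInboundFlowTest {

    @Autowired
    private ApplicationContext applicationContext;

    @Test
    void checkXmlChannelPayloadTest() throws Exception {
        PublishSubscribeChannel xmlChannel = applicationContext.getBean(XML_CHANNEL, PublishSubscribeChannel.class);
        CountDownLatch latch = new CountDownLatch(1);
        AtomicReference<Message<?>> received = new AtomicReference<>();
        // subscribe before any file lands in the polled directory
        xmlChannel.subscribe(message -> {
            received.set(message);
            latch.countDown();
        });
        copyFileToTheFlowDir(); // now trigger the flow
        // wait on the async barrier instead of sleeping
        assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
        // the assertion runs on the test thread, so JUnit sees a failure
        assertThat(received.get().getPayload()).isInstanceOf(Map.class);
    }
}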

spring.rabbitmq.listener.simple.retry.enabled=true is ignored if I manually configure the DirectMessageListenerContainer

I'm trying to activate a dead letter queue on RabbitMQ with the properties
spring.rabbitmq.listener.simple.retry.enabled=true
spring.rabbitmq.listener.simple.retry.max-attempts=10
It works fine when I use the annotation:
public class SimpleConsumer {

    @RabbitListener(queues = "messages.queue")
    public void handleMessage(String message) {
        throw new RuntimeException();
    }
}
but if I configure the MessageListenerContainer manually, it doesn't work.
Below are my configurations:
@Bean
SimpleMessageListenerContainer directMessageListenerContainer(
        ConnectionFactory connectionFactory,
        Queue simpleQueue,
        MessageConverter jsonMessageConverter,
        SimpleConsumer simpleConsumer) {
    return new SimpleMessageListenerContainer(connectionFactory) {{
        setQueues(simpleQueue);
        setMessageListener(new MessageListenerAdapter(simpleConsumer, jsonMessageConverter));
        // setDefaultRequeueRejected(false);
    }};
}
If I set setDefaultRequeueRejected to true, it retries the consumer an infinite number of times when it throws an exception.
If I set setDefaultRequeueRejected to false, it tries the consumer once and then uses the dead letter consumer.
What does @RabbitListener(queues = "messages.queue") do under the hood to use the spring.rabbitmq.listener configurations?
My code is on GitHub:
https://github.com/crakdelpol/dead-letter-spike.git
See branch "retry-by-configuration".
It adds a retry interceptor to the container's advice chain. See the documentation.
Spring Retry provides a couple of AOP interceptors and a great deal of flexibility to specify the parameters of the retry (number of attempts, exception types, backoff algorithm, and others). Spring AMQP also provides some convenience factory beans for creating Spring Retry interceptors in a convenient form for AMQP use cases, with strongly typed callback interfaces that you can use to implement custom recovery logic. See the Javadoc and properties of StatefulRetryOperationsInterceptor and StatelessRetryOperationsInterceptor for more detail.
...
@Bean
public StatefulRetryOperationsInterceptor interceptor() {
    return RetryInterceptorBuilder.stateful()
            .maxAttempts(5)
            .backOffOptions(1000, 2.0, 10000) // initialInterval, multiplier, maxInterval
            .build();
}
Then add the interceptor to the container adviceChain.
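For a manually configured container like the one in the question, that wiring might look like this (a sketch, assuming the interceptor() bean above):
@Bean
SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setAdviceChain(interceptor()); // the retry interceptor wraps each listener invocation
    return container;
}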
EDIT
See the documentation I pointed you to; you need to add the recoverer to the interceptor:
The MessageRecoverer is called when all retries have been exhausted. The RejectAndDontRequeueRecoverer does exactly that. The default MessageRecoverer consumes the errant message and emits a WARN message.
Here is a complete example:
@SpringBootApplication
public class So67433138Application {

    public static void main(String[] args) {
        SpringApplication.run(So67433138Application.class, args);
    }

    @Bean
    Queue queue() {
        return QueueBuilder.durable("so67433138")
                .deadLetterExchange("")
                .deadLetterRoutingKey("so67433138.dlq")
                .build();
    }

    @Bean
    Queue dlq() {
        return new Queue("so67433138.dlq");
    }

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory cf) {
        SimpleMessageListenerContainer smlc = new SimpleMessageListenerContainer(cf);
        smlc.setQueueNames("so67433138");
        smlc.setAdviceChain(RetryInterceptorBuilder.stateless()
                .maxAttempts(5)
                .backOffOptions(1_000, 2.0, 10_000)
                .recoverer(new RejectAndDontRequeueRecoverer())
                .build());
        smlc.setMessageListener(msg -> {
            System.out.println(new String(msg.getBody()));
            throw new RuntimeException("test");
        });
        return smlc;
    }

    @RabbitListener(queues = "so67433138.dlq")
    void dlq(String in) {
        System.out.println("From DLQ: " + in);
    }
}
test
test
test
test
test
2021-05-12 11:19:42.034 WARN 70667 ---[ container-1] o.s.a.r.r.RejectAndDontRequeueRecoverer : Retries exhausted for message ...
...
From DLQ: test

Webflux security authorisation test with bearer token (JWT) and custom claim

I have a Spring Boot (2.3.6.RELEASE) service that acts as a resource server; it has been implemented using WebFlux, and client JWTs are provided by a third-party identity server.
I am attempting to test the security of the endpoints using JUnit 5 and @SpringBootTest. (For the record, security appears to work as required during manual testing.)
I am mutating the WebTestClient to include a JWT with an appropriate claim (myClaim); however, in my custom ReactiveAuthorizationManager there is no bearer token in the request's headers, so with nothing to decode and no claim to validate, the request fails authorisation, as it should.
My test setup is thus:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test")
class ControllerTest {

    @Autowired
    private ApplicationContext applicationContext;

    private WebTestClient webTestClient;

    @BeforeEach
    void init() {
        webTestClient = WebTestClient
                .bindToApplicationContext(applicationContext)
                .apply(springSecurity())
                .configureClient()
                .build();
    }

    @Test
    void willAllowAccessForJwtWithValidClaim() {
        webTestClient.mutateWith(mockJwt().jwt(jwt -> jwt.claim("myClaim", "{myValue}")))
                .get()
                .uri("/securedEndpoint")
                .exchange()
                .expectStatus()
                .isOk();
    }
}
I have been attempting to follow this guide.
I have tried the client with and without .filter(basicAuthentication()), just in case :)
It appears to me that the mockJwt() isn't being put into the request's Authorization header field.
I also think that the ReactiveJwtDecoder being injected into my ReactiveAuthorizationManager will attempt to decode the test JWT against the identity provider, which will fail.
I could mock the ReactiveAuthorizationManager or the ReactiveJwtDecoder.
Is there anything I am missing?
Perhaps there is a way to create "test" JWTs using the identity service's JWK set URI?
Additional detail: the ReactiveAuthorizationManager and security config are below.
public class MyReactiveAuthorizationManager implements ReactiveAuthorizationManager<AuthorizationContext> {

    private static final AuthorizationDecision UNAUTHORISED = new AuthorizationDecision(false);

    private final ReactiveJwtDecoder jwtDecoder;

    public MyReactiveAuthorizationManager(final ReactiveJwtDecoder jwtDecoder) {
        this.jwtDecoder = jwtDecoder;
    }

    @Override
    public Mono<AuthorizationDecision> check(final Mono<Authentication> authentication, final AuthorizationContext context) {
        final ServerWebExchange exchange = context.getExchange();
        if (null == exchange) {
            return Mono.just(UNAUTHORISED);
        }
        final List<String> authorisationHeaders = exchange.getRequest().getHeaders().getOrEmpty(HttpHeaders.AUTHORIZATION);
        if (authorisationHeaders.isEmpty()) {
            return Mono.just(UNAUTHORISED);
        }
        final String bearer = authorisationHeaders.get(0);
        return jwtDecoder.decode(bearer.replace("Bearer ", ""))
                .flatMap(jwt -> determineAuthorisation(jwt.getClaimAsStringList("myClaim")));
    }

    private Mono<AuthorizationDecision> determineAuthorisation(final List<String> claimValues) {
        if (Objects.isNull(claimValues)) {
            return Mono.just(UNAUTHORISED);
        } else {
            return Mono.just(new AuthorizationDecision(!Collections.disjoint(claimValues, List.of("myValues"))));
        }
    }
}
@EnableWebFluxSecurity
public class JwtSecurityConfig {

    @Bean
    public SecurityWebFilterChain configure(final ServerHttpSecurity http,
                                            final ReactiveAuthorizationManager reactiveAuthorizationManager) {
        http
            .csrf().disable()
            .logout().disable()
            .authorizeExchange().pathMatchers("/securedEndpoint").access(reactiveAuthorizationManager)
            .anyExchange().permitAll()
            .and()
            .oauth2ResourceServer()
            .jwt();
        return http.build();
    }
}
Loosely speaking, it turns out that what I am actually doing is using a custom claim as an "authority", that is, saying "myClaim" must contain a value of "x" to allow access to a given path.
This is a little different to the claim being a simple custom claim, i.e. an additional bit of data (a user's preferred colour scheme, perhaps) in the token.
With that in mind I realised that the behaviour I was observing under testing was probably correct, so instead of implementing a ReactiveAuthorizationManager I chose to configure a ReactiveJwtAuthenticationConverter:
@Bean
public ReactiveJwtAuthenticationConverter jwtAuthenticationConverter() {
    final JwtGrantedAuthoritiesConverter converter = new JwtGrantedAuthoritiesConverter();
    converter.setAuthorityPrefix(""); // 1
    converter.setAuthoritiesClaimName("myClaim");
    final Converter<Jwt, Flux<GrantedAuthority>> rxConverter = new ReactiveJwtGrantedAuthoritiesConverterAdapter(converter);
    final ReactiveJwtAuthenticationConverter jwtAuthenticationConverter = new ReactiveJwtAuthenticationConverter();
    jwtAuthenticationConverter.setJwtGrantedAuthoritiesConverter(rxConverter);
    return jwtAuthenticationConverter;
}
(Comment 1: the JwtGrantedAuthoritiesConverter prepends "SCOPE_" to the claim value by default; this can be controlled using setAuthorityPrefix, see the Javadoc.)
This required a tweak to the SecurityWebFilterChain configuration:
http
    .csrf().disable()
    .logout().disable()
    .authorizeExchange().pathMatchers("/securedEndpoint").hasAnyAuthority("myValue")
    .anyExchange().permitAll()
    .and()
    .oauth2ResourceServer()
    .jwt(jwt -> jwt.jwtAuthenticationConverter(jwtAuthenticationConverter));
Tests
@SpringBootTest
class ControllerTest {

    private WebTestClient webTestClient;

    @Autowired
    public void setUp(final ApplicationContext applicationContext) {
        webTestClient = WebTestClient
                .bindToApplicationContext(applicationContext) // 2
                .apply(springSecurity()) // 3
                .configureClient()
                .build();
    }

    @Test
    void myTest() {
        webTestClient
                .mutateWith(mockJwt().authorities(new SimpleGrantedAuthority("myValue"))) // 4
                .get()
                .uri("/securedEndpoint")
                .exchange()
                .expectStatus()
                .isOk();
    }
}
To make the tests work, it appears that the WebTestClient needs to bind to the application context (at comment 2).
Ideally I would have preferred to have the WebTestClient bind to the server; however, apply(springSecurity()) (at comment 3) doesn't return an appropriate type for apply when using bindToServer.
There are a number of different ways to "mock" the JWT when testing; the one used here is at comment 4. For alternatives, see the Spring docs here; a sketch of one alternative follows.
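For instance, since the converter above derives authorities from "myClaim", one could also (an assumption on my part, combining the question's converter with the mutator's Converter overload, not something from the original answer) set the claim itself and let the same JwtGrantedAuthoritiesConverter do the mapping:
JwtGrantedAuthoritiesConverter converter = new JwtGrantedAuthoritiesConverter();
converter.setAuthorityPrefix("");
converter.setAuthoritiesClaimName("myClaim");

webTestClient
        .mutateWith(mockJwt()
                .jwt(jwt -> jwt.claim("myClaim", List.of("myValue")))
                .authorities(converter)) // hypothetical wiring, derives authorities from the claim
        .get()
        .uri("/securedEndpoint")
        .exchange()
        .expectStatus()
        .isOk();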
I hope this helps somebody else in the future; security and OAuth2 can be confusing :)
Thanks go to @Toerktumlare for pointing me in the direction of useful documentation.

stop polling files when rabbitmq is down: spring integration

I'm working on a project where we poll files from an SFTP server and stream them out as objects on the RabbitMQ queue. When RabbitMQ is down, the flow still polls and deletes the file from the server, so the file is lost if it is sent to the queue while RabbitMQ is down. I'm using ExpressionEvaluatingRequestHandlerAdvice to remove the file on successful transformation. My code looks like this:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpProperties.getSftpHost());
    factory.setPort(sftpProperties.getSftpPort());
    factory.setUser(sftpProperties.getSftpPathUser());
    factory.setPassword(sftpProperties.getSftpPathPassword());
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
@InboundChannelAdapter(channel = TransformerChannel.TRANSFORMER_OUTPUT, autoStartup = "false",
        poller = @Poller(value = "customPoller"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(sftpRemoteFileTemplate,
            null);
    messageSource.setRemoteDirectory(sftpProperties.getSftpDirPath());
    messageSource.setFilter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
            "streaming"));
    messageSource.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
    return messageSource;
}

@Bean
@Transformer(inputChannel = TransformerChannel.TRANSFORMER_OUTPUT,
        outputChannel = SFTPOutputChannel.SFTP_OUTPUT,
        adviceChain = "deleteAdvice")
public org.springframework.integration.transformer.Transformer transformer() {
    return new SFTPTransformerService("UTF-8");
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpressionString(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
    advice.setPropagateEvaluationFailures(false);
    return advice;
}
I don't want the files to be removed/polled from the remote SFTP server when the RabbitMQ server is down. How can I achieve this?
UPDATE
Apologies for not mentioning that I'm using the Spring Cloud Stream Rabbit binder. Here is the transformer service:
public class SFTPTransformerService extends StreamTransformer {

    public SFTPTransformerService(String charset) {
        super(charset);
    }

    @Override
    protected Object doTransform(Message<?> message) throws Exception {
        String fileName = message.getHeaders().get("file_remoteFile", String.class);
        Object fileContents = super.doTransform(message);
        return new customFileDTO(fileName, (String) fileContents);
    }
}
UPDATE 2
I added a TransactionSynchronizationFactory on the customPoller as suggested. Now it doesn't poll the file when the Rabbit server is down, but when the server is up it keeps polling the same file over and over again, and I cannot figure out why. I guess I cannot use PollerSpec because I'm on version 4.3.2.
@Bean(name = "customPoller")
public PollerMetadata pollerMetadataDTX(StartStopTrigger startStopTrigger,
        CustomTriggerAdvice customTriggerAdvice) {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setAdviceChain(Collections.singletonList(customTriggerAdvice));
    pollerMetadata.setTrigger(startStopTrigger);
    pollerMetadata.setMaxMessagesPerPoll(Long.valueOf(sftpProperties.getMaxMessagePoll()));
    ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
            new ExpressionEvaluatingTransactionSynchronizationProcessor();
    syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
    syncProcessor.setBeforeCommitChannel(
            applicationContext.getBean(TransformerChannel.TRANSFORMER_OUTPUT, MessageChannel.class));
    syncProcessor.setAfterCommitChannel(
            applicationContext.getBean(SFTPOutputChannel.SFTP_OUTPUT, MessageChannel.class));
    syncProcessor.setAfterCommitExpression(new SpelExpressionParser().parseExpression(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])"));
    DefaultTransactionSynchronizationFactory defaultTransactionSynchronizationFactory =
            new DefaultTransactionSynchronizationFactory(syncProcessor);
    pollerMetadata.setTransactionSynchronizationFactory(defaultTransactionSynchronizationFactory);
    return pollerMetadata;
}
I don't know if you need this info, but my CustomTriggerAdvice and StartStopTrigger look like this:
@Component
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {

    @Autowired private StartStopTrigger startStopTrigger;

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            if (startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        } else {
            if (!startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        }
        return result;
    }
}

public class StartStopTrigger implements Trigger {

    private PeriodicTrigger startTrigger;
    private boolean start;

    public StartStopTrigger(PeriodicTrigger startTrigger, boolean start) {
        this.startTrigger = startTrigger;
        this.start = start;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        if (!start) {
            return null;
        }
        start = true;
        return startTrigger.nextExecutionTime(triggerContext);
    }

    public void stop() {
        start = false;
    }

    public void start() {
        start = true;
    }

    public boolean getStart() {
        return this.start;
    }
}
Well, it would be great to see your SFTPTransformerService, to determine how it is possible for the onSuccessExpression to be evaluated when there should be an exception in the case of a down broker.
You also should not only throw an exception and skip the delete, but consider adding a RequestHandlerRetryAdvice to re-send the file to RabbitMQ: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice. A sketch of such an advice follows.
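A minimal sketch of that advice (my own illustration, not from the original answer), to be referenced from the transformer's adviceChain ahead of deleteAdvice:
@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy()); // back off between re-send attempts
    advice.setRetryTemplate(retryTemplate);
    return advice;
}

// e.g. @Transformer(..., adviceChain = {"retryAdvice", "deleteAdvice"})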
UPDATE
So, well, since Gary guessed that you use Spring Cloud Stream to send messages to the Rabbit Binder after your internal process (very sad that you didn't share that information originally), you need to take a look at the Binder error handling on the matter: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_retry_with_the_rabbitmq_binder
And it is true that the ExpressionEvaluatingRequestHandlerAdvice is applied only to the SFTPTransformerService and nothing more. The downstream error (in the Binder) is not included in this process.
UPDATE 2
Yeah... I think Gary is right, and we have no choice but to configure a TransactionSynchronizationFactory on the customPoller level instead of that ExpressionEvaluatingRequestHandlerAdvice.
The DefaultTransactionSynchronizationFactory can be configured with the ExpressionEvaluatingTransactionSynchronizationProcessor, which has a similar goal to the mentioned ExpressionEvaluatingRequestHandlerAdvice, but on the transaction level, which will include your whole process, starting with the SFTP channel adapter and ending on the Rabbit Binder level with the send-to-AMQP attempts.
See the Reference Manual for more information: https://docs.spring.io/spring-integration/reference/html/transactions.html#transaction-synchronization
The point of the ExpressionEvaluatingRequestHandlerAdvice (and any AbstractRequestHandlerAdvice) is that it has a boundary only around the handleRequestMessage() method, therefore only around the component on which it is declared. A sketch of the poller-level transaction wiring follows.
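For the synchronization to actually fire, the poll itself must run inside a transaction; since there is no real transactional resource here, Spring Integration's PseudoTransactionManager can supply that boundary. A sketch under that assumption (not from the original answer):
@Bean(name = "customPoller")
public PollerMetadata transactionalPoller(TransactionSynchronizationFactory syncFactory) {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setTransactionSynchronizationFactory(syncFactory);
    // wrap every poll in a "pseudo" transaction so the after-commit /
    // after-rollback synchronizations actually run
    TransactionInterceptor txInterceptor = new TransactionInterceptor(
            new PseudoTransactionManager(), new MatchAlwaysTransactionAttributeSource());
    pollerMetadata.setAdviceChain(Collections.singletonList(txInterceptor));
    return pollerMetadata;
}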

How to implement a Restlet JAX-RS handler which is a thin proxy to a RESTful API, possibly implemented in the same java process?

We have two RESTful APIs - one is internal and another one is public, the two being implemented by different jars. The public API sort of wraps the internal one, performing the following steps:
1. Do some work
2. Call internal API
3. Do some work
4. Return the response to the user
It may happen (though not necessarily) that the two jars run in the same Java process.
We are using Restlet with the JAX-RS extension.
Here is an example of a simple public API implementation, which just forwards to the internal API:
@PUT
@Path("abc")
public MyResult method1(@Context UriInfo uriInfo, InputStream body) throws Exception {
    String url = uriInfo.getAbsolutePath().toString().replace("/api/", "/internalapi/");
    RestletClientResponse<MyResult> reply = WebClient.put(url, body, MyResult.class);
    RestletUtils.addResponseHeaders(reply.responseHeaders);
    return reply.returnObject;
}
Where WebClient.put is:
public class WebClient {

    public static <T> RestletClientResponse<T> put(String url, Object body, Class<T> returnType) throws Exception {
        Response restletResponse = Response.getCurrent();
        ClientResource resource = new ClientResource(url);
        Representation reply = null;
        try {
            Client timeoutClient = new Client(Protocol.HTTP);
            timeoutClient.setConnectTimeout(30000);
            resource.setNext(timeoutClient);
            reply = resource.put(body, MediaType.APPLICATION_JSON);
            T result = new JacksonConverter().toObject(new JacksonRepresentation<T>(reply, returnType), returnType, resource);
            Status status = resource.getStatus();
            return new RestletClientResponse<T>(result, (Form) resource.getResponseAttributes().get(HeaderConstants.ATTRIBUTE_HEADERS), status);
        } finally {
            if (reply != null) {
                reply.release();
            }
            resource.release();
            Response.setCurrent(restletResponse);
        }
    }
}
and RestletClientResponse<T> is:
public class RestletClientResponse<T> {

    public T returnObject = null;
    public Form responseHeaders = null;
    public Status status = null;

    public RestletClientResponse(T returnObject, Form responseHeaders, Status status) {
        this.returnObject = returnObject;
        this.responseHeaders = responseHeaders;
        this.status = status;
    }
}
and RestletUtils.addResponseHeaders is:
public class RestletUtils {

    public static void addResponseHeader(String key, Object value) {
        Form responseHeaders = (Form) org.restlet.Response.getCurrent().getAttributes().get(HeaderConstants.ATTRIBUTE_HEADERS);
        if (responseHeaders == null) {
            responseHeaders = new Form();
            org.restlet.Response.getCurrent().getAttributes().put(HeaderConstants.ATTRIBUTE_HEADERS, responseHeaders);
        }
        responseHeaders.add(key, value.toString());
    }

    public static void addResponseHeaders(Form responseHeaders) {
        for (String headerKey : responseHeaders.getNames()) {
            RestletUtils.addResponseHeader(headerKey, responseHeaders.getValues(headerKey));
        }
    }
}
The problem is that if the two jars run in the same Java process, an exception thrown from the internal API is not routed to the JAX-RS exception mapper of the internal API: the exception propagates up to the public API and is translated into an Internal Server Error (500).
Which means I am doing it wrong. So, my question is: how do I invoke the internal RESTful API from within the public API implementation, given the constraint that both the client and the server may run in the same Java process?
Surely there are other problems, but I have a feeling that fixing the one I have just described is going to fix the others as well.
The problem has nothing to do with the fact that both the internal and public JARs are in the same JVM. They are perfectly separated by the WebClient.put() method, which issues a fresh HTTP request. So, an exception in the internal API doesn't propagate to the public API.
The internal server error in the public API is caused by the post-processing mechanism, which interprets the output of the internal API and crashes for some reason. Don't blame the internal API; it is perfectly isolated and can't cause any trouble (even though it's in the same JVM).