How to handle errors when RabbitMQ exchange doesn't exist (and messages are sent through a messaging gateway interface)

I'd like to know what is the canonical way to handle errors in the following situation (code is a minimal working example):
Messages are sent through a messaging gateway which defines its defaultRequestChannel and a @Gateway method:
@MessagingGateway(name = MY_GATEWAY, defaultRequestChannel = INPUT_CHANNEL)
public interface MyGateway
{
    @Gateway
    public void sendMessage(String message);
}
Messages are read from the channel and sent through an AMQP outbound adapter:
@Bean
public IntegrationFlow apiMutuaInputFlow()
{
    return IntegrationFlows
            .from(INPUT_CHANNEL)
            .handle(Amqp.outboundAdapter(rabbitConfig.myTemplate()))
            .get();
}
The RabbitMQ configuration is skeletal:
@Configuration
public class RabbitMqConfiguration
{
    @Autowired
    private ConnectionFactory rabbitConnectionFactory;

    @Bean
    public RabbitTemplate myTemplate()
    {
        RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
        r.setExchange(INPUT_QUEUE_NAME);
        r.setConnectionFactory(rabbitConnectionFactory);
        return r;
    }
}
I generally include a bean to define the RabbitMQ configuration I'm relying upon (exchange, queues and bindings), and it actually works fine. But while testing for failure scenarios, I found a situation I don't know how to properly handle using Spring Integration. The steps are:
Remove the beans that configure RabbitMQ
Run the flow against an unconfigured, vanilla RabbitMQ instance.
What I would expect is:
The message cannot be delivered because the exchange cannot be found.
Either I find some way to get an exception from the messaging gateway on the caller thread,
or I find some way to otherwise intercept this error.
What I find:
The message cannot be delivered because the exchange cannot be found, and indeed this error message is logged every time the @Gateway method is called.
2020-02-11 08:18:40.746 ERROR 42778 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'my.exchange' in vhost '/', class-id=60, method-id=40)
The gateway is not failing, nor have I found a way to configure it to do so (e.g.: adding throws clauses to the interface methods, configuring a transactional channel, setting wait-for-confirm and a confirm-timeout).
I haven't found a way to otherwise catch that CachingConnectionFactory error (e.g.: configuring a transactional channel).
I haven't found a way to catch an error message on another channel (specified on the gateway's errorChannel), or in Spring Integration's default errorChannel.
I understand such a failure may not be propagated upstream by the messaging gateway, whose job is isolating callers from the messaging API, but I definitely expect such an error to be interceptable.
Could you point me in the right direction?
Thank you.

RabbitMQ is inherently async, which is one reason that it performs so well.
You can, however, block the caller by enabling confirms and returns and setting this option:
/**
 * Set to true if you want to block the calling thread until a publisher confirm has
 * been received. Requires a template configured for returns. If a confirm is not
 * received within the confirm timeout or a negative acknowledgment or returned
 * message is received, an exception will be thrown. Does not apply to the gateway
 * since it blocks awaiting the reply.
 * @param waitForConfirm true to block until the confirmation or timeout is received.
 * @since 5.2
 * @see #setConfirmTimeout(long)
 * @see #setMultiSend(boolean)
 */
public void setWaitForConfirm(boolean waitForConfirm) {
    this.waitForConfirm = waitForConfirm;
}
(With the DSL .waitForConfirm(true)).
This also requires a confirm correlation expression. Here's an example from one of the test cases:
@Bean
public IntegrationFlow flow(RabbitTemplate template) {
    return f -> f.handle(Amqp.outboundAdapter(template)
            .exchangeName("")
            .routingKeyFunction(msg -> msg.getHeaders().get("rk", String.class))
            .confirmCorrelationFunction(msg -> msg)
            .waitForConfirm(true));
}

@Bean
public CachingConnectionFactory cf() {
    CachingConnectionFactory ccf = new CachingConnectionFactory(
            RabbitAvailableCondition.getBrokerRunning().getConnectionFactory());
    ccf.setPublisherConfirmType(CachingConnectionFactory.ConfirmType.CORRELATED);
    ccf.setPublisherReturns(true);
    return ccf;
}

@Bean
public RabbitTemplate template(ConnectionFactory cf) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(cf);
    rabbitTemplate.setMandatory(true); // for returns
    rabbitTemplate.setReceiveTimeout(10_000);
    return rabbitTemplate;
}
Bear in mind this will slow things down considerably (similar to using transactions), so you may want to reconsider whether you want to do this on every send (unless performance is not an issue).
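With returns and confirms enabled and .waitForConfirm(true) set, the failed publish surfaces on the sending thread and therefore propagates out of the gateway call. A minimal sketch of what the caller side could look like, assuming the MyGateway interface from the question; the MessageSender class is illustrative, and the failure is caught broadly as org.springframework.messaging.MessagingException:
import org.springframework.messaging.MessagingException;
import org.springframework.stereotype.Service;

@Service
public class MessageSender {

    private final MyGateway myGateway;

    public MessageSender(MyGateway myGateway) {
        this.myGateway = myGateway;
    }

    public void send(String payload) {
        try {
            // Blocks until the broker confirms (or returns) the message, because the
            // outbound adapter is configured with waitForConfirm(true).
            myGateway.sendMessage(payload);
        }
        catch (MessagingException e) {
            // A negative ack, a returned message, or a confirm timeout ends up here;
            // handle or rethrow as the application requires.
        }
    }
}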

Related

Webflux, with Websocket how to prevent subscribing twice of reactive redis messaging operation

I have a websocket implementation using redis messaging operation on webflux. And what it does is it listens to topic and returns the values via websocket endpoint.
The problem I have is that each time a user sends a message via websocket to the endpoint, a brand new redis subscription seems to be made, resulting in an accumulation of subscribers on the redis message topic, and the websocket responses are multiplied by the number of redis topic subscriptions as well (for example, if the user sends 3 messages, the redis topic subscriptions increase to three and the websocket connection responds three times).
I would like to know if there is a way to reuse the same subscription to the messaging topic, so as to prevent multiple redis topic subscriptions.
The code I use is as follows:
Websocket Handler
public class SendingMessageHandler implements WebSocketHandler {

    private final Gson gson = new Gson();
    private final MessagingService messagingService;

    public SendingMessageHandler(MessagingService messagingService) {
        this.messagingService = messagingService;
    }

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        Flux<WebSocketMessage> stringFlux = session.receive()
                .map(WebSocketMessage::getPayloadAsText)
                .flatMap(inputData ->
                        messagingService.playGame(inputData)
                                .map(data ->
                                        session.textMessage(gson.toJson(data))
                                )
                );
        return session.send(stringFlux);
    }
}
Message Handling service
public class MessagingService {

    private final ReactiveRedisOperations<String, GamePubSub> reactiveRedisOperations;

    public MessagingService(ReactiveRedisOperations<String, GamePubSub> reactiveRedisOperations) {
        this.reactiveRedisOperations = reactiveRedisOperations;
    }

    public Flux<Object> playGame(UserInput userInput) {
        return reactiveRedisOperations.listenTo("TOPIC_NAME");
    }
}
Thank you in advance.
Instead of using ReactiveRedisOperations, MessageListener is the way to go here. You can register a listener once, and use the following as the listener.
data -> session.textMessage(gson.toJson(data))
The registration should happen only once, at the beginning of the connection. You can override void afterConnectionEstablished(WebSocketSession session) of SendingMessageHandler to accomplish this. That way a new subscription is created per Websocket connection rather than per message.
Also, don't forget to override afterConnectionClosed, unsubscribe from the redis topic, and clean up the listener within it.
Instructions on how to use MessageListener.
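A minimal sketch of the one-listener-per-connection idea, assuming a RedisMessageListenerContainer bean and the "TOPIC_NAME" channel from the question. Note that the reactive WebSocketHandler used above has no afterConnectionEstablished/afterConnectionClosed callbacks, so this sketch ties registration and cleanup to the handle() lifecycle instead; the sink-based bridging is illustrative, not from the original answer:
import org.springframework.data.redis.connection.MessageListener;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;
import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.WebSocketSession;
import reactor.core.publisher.Mono;
import reactor.core.publisher.Sinks;

public class SendingMessageHandler implements WebSocketHandler {

    private final RedisMessageListenerContainer listenerContainer;

    public SendingMessageHandler(RedisMessageListenerContainer listenerContainer) {
        this.listenerContainer = listenerContainer;
    }

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        // Bridge redis messages into a sink that feeds this websocket session.
        Sinks.Many<String> sink = Sinks.many().unicast().onBackpressureBuffer();
        MessageListener listener = (message, pattern) ->
                sink.tryEmitNext(new String(message.getBody()));

        // One redis subscription per websocket connection, registered up front...
        listenerContainer.addMessageListener(listener, new ChannelTopic("TOPIC_NAME"));

        return session.send(sink.asFlux().map(session::textMessage))
                // ...and removed when the connection terminates, so subscriptions
                // do not accumulate.
                .doFinally(signal -> listenerContainer.removeMessageListener(listener));
    }
}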

Spring cloud stream dlq processing with spring cloud function for rabbitmq

I have read the spring cloud stream binder reference document, which mentions DLQ processing using @RabbitListener. https://docs.spring.io/spring-cloud-stream-binder-rabbit/docs/3.0.10.RELEASE/reference/html/spring-cloud-stream-binder-rabbit.html#rabbit-dlq-processing
Can we achieve the same via a Spring Cloud Function, like we do for consumers?
Like
@Bean
public Consumer<Message> dlqprocess(DLQProcess dlqprocess) {
    return t -> dlqprocess.process(t);
}
I am not sure whether we can do this or not. If this is possible, what other configuration do we have to do?
If your aim is to requeue failed messages, the function can just throw exceptions, as described in the docs.
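A minimal sketch of that approach, assuming the DLQProcess type from the question exposes a boolean-returning process method (that return type is hypothetical) and that the binding's retry/requeue or autoBindDlq settings are configured as described in the docs:
@Bean
public Consumer<Message<String>> dlqprocess(DLQProcess dlqProcess) {
    return message -> {
        boolean handled = dlqProcess.process(message); // hypothetical boolean result
        if (!handled) {
            // Throwing from the function is enough: the binder rejects the delivery
            // after any configured retries, and the message is then requeued or
            // dead-lettered according to requeueRejected / autoBindDlq.
            throw new IllegalStateException("message could not be processed");
        }
    };
}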
Furthermore, if you need more fine-grained control over sent and requeued messages, you can use StreamBridge. Here you need to explicitly define the DLQ binding in the configuration file:
spring.cloud.stream.bindings.myDlq-out-0.destination=DLX
spring.cloud.stream.rabbit.bindings.myDlq-out-0.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.myDlq-out-0.producer.routingKeyExpression='myDestination.myGroup'
spring.cloud.stream.source=myDlq
Finally, the function controls whether to send and requeue the message:
@Bean
public Consumer<Message> process(StreamBridge streamBridge) {
    return t -> {
        // ....
        if (republish) streamBridge.send("myDlq-out-0", t);
        if (sendToDestination) streamBridge.send("myDestination-out-0", t);
        // ....
    };
}

Spring AMQP RabbitMQ RPC - Handle response exceptions

I am trying to use an RPC AMQP RabbitMQ queue to send and receive messages. The problem is that I have set a setReplyTimeout value, and when that timeout is exceeded I get an "org.springframework.amqp.AmqpRejectAndDontRequeueException: Reply received after timeout". I have a DLQ set up on the incoming queue, but it appears that the exception occurs when Spring tries to return the message on the reply queue that is automatically created. So how can I handle exceptions when sending messages back to a producer? Ideally I would want any message that hits an exception while being sent to a producer to go to a DLQ.
I am using
@RabbitListener(queues = QueueConfig.QUEUE_ALL, containerFactory = "containerFactoryQueueAll")
It requires a SimpleRabbitListenerContainerFactory, which does not have setQueues. Also, rabbitTemplate does not have a rabbitTemplate.setReplyQueue.
Thanks,
Brian
Instead of using the default built-in reply listener container with the direct reply-to pseudo queue, use a Reply Listener Container with a named queue that is configured to route undeliverable messages to a DLQ.
The RabbitTemplate is configured as the container's listener:
@Bean
public RabbitTemplate amqpTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory());
    rabbitTemplate.setMessageConverter(msgConv());
    rabbitTemplate.setReplyQueue(replyQueue());
    rabbitTemplate.setReplyTimeout(60000);
    rabbitTemplate.setUseDirectReplyToContainer(false);
    return rabbitTemplate;
}

@Bean
public SimpleMessageListenerContainer replyListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory());
    container.setQueues(replyQueue());
    container.setMessageListener(amqpTemplate());
    return container;
}

@Bean
public Queue replyQueue() {
    return new Queue("my.reply.queue");
}
Note that the documentation needs to be updated, but you also need
rabbitTemplate.setUseDirectReplyToContainer(false);
IMPORTANT
If you have multiple instances of the client, each needs its own reply queue.
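The replyQueue() bean above does not itself show the dead-letter wiring. A minimal sketch of how it could be declared instead, so that rejected or undeliverable replies are routed to a DLQ; the "my.dlx" exchange, the DLQ name, and the extra beans are illustrative, not from the original answer:
@Bean
public Queue replyQueue() {
    // Rejected or expired replies are re-routed via the dead-letter exchange.
    return QueueBuilder.durable("my.reply.queue")
            .withArgument("x-dead-letter-exchange", "my.dlx")
            .withArgument("x-dead-letter-routing-key", "my.reply.queue.dlq")
            .build();
}

@Bean
public Queue replyDlq() {
    return new Queue("my.reply.queue.dlq");
}

@Bean
public DirectExchange dlx() {
    return new DirectExchange("my.dlx");
}

@Bean
public Binding replyDlqBinding() {
    return BindingBuilder.bind(replyDlq()).to(dlx()).with("my.reply.queue.dlq");
}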

spring cloud stream unable to parse message posted to RabbitMq using Spring RestTemplate

I have an issue getting the message into a spring-cloud-stream spring-boot app.
I am using rabbitMq as the message engine.
The message producer is a non-spring-boot app, which sends a message using Spring RestTemplate.
Queue Name: "audit.logging.rest"
The consumer application is set up to listen to that queue. This app is a spring-boot app (spring-cloud-stream).
Below is the consumer code
application.yml
cloud:
  stream:
    bindings:
      restChannel:
        binder: rabbit
        destination: audit.logging
        group: rest
AuditServiceApplication.java
@SpringBootApplication
public class AuditServiceApplication {

    @Bean
    public ByteArrayMessageConverter byteArrayMessageConverter() {
        return new ByteArrayMessageConverter();
    }

    @Input
    @StreamListener(AuditChannelProperties.REST_CHANNEL)
    public void receive(AuditTestLogger logger) {
        ...
    }
}
AuditTestLogger.java
public class AuditTestLogger {

    private String applicationName;

    public String getApplicationName() {
        return applicationName;
    }

    public void setApplicationName(String applicationName) {
        this.applicationName = applicationName;
    }
}
Below is the request being sent from the producer App in JSON format.
{"applicationName" : "AppOne" }
Found a couple of issues:
Issue 1:
What I noticed is that the method below gets triggered only when the method parameter is declared as Object, as spring-cloud-stream is not able to parse the message into the Java POJO.
@Input
@StreamListener(AuditChannelProperties.REST_CHANNEL)
public void receive(AuditTestLogger logger) {
Issue 2:
When I changed the method to receive an Object, I see the object is of type RMQTextMessage, which cannot be parsed. However, I can see the actual posted message within it, under the text property.
I also wrote a ByteArrayMessageConverter, which didn't help either.
Is there any way to tell spring cloud stream to extract the message from the RMQTextMessage using a MessageConverter and get the actual message out of it?
Thanks in advance.
RMQTextMessage? Looks like it is a part of rabbitmq-jms-client.
In the case of the RabbitMQ Binder you should rely only on Spring AMQP.
Now let's figure out what your producer application is doing.
Since you get RMQTextMessage as the value in the @StreamListener method, that tells me that the sender really does use rabbitmq-jms-client for producing, and therefore the real AMQP message in the queue has that RMQTextMessage as a wrapper around the real payload.
Why not use Spring AMQP there as well?
It's a late reply, but I had the exact same problem and solved it by sending and receiving the messages in application/json format. Use this in the spring cloud stream config:
content-type: application/json
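For the restChannel binding from the question, that property would sit alongside the other binding settings, roughly as sketched below (assuming the same binding name as above):
cloud:
  stream:
    bindings:
      restChannel:
        binder: rabbit
        destination: audit.logging
        group: rest
        content-type: application/json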

Java Spring RabbitMq consumer

I am trying to create a RabbitMq consumer in the Java Spring framework. I need to implement the RabbitMq RPC model, so basically the consumer shall receive a message from the RabbitMq broker, process it, and send the result back to the associated reply queue.
Can somebody please point me to neat sample code which implements this requirement in Spring?
Thanks in advance.
Consider using the Spring AMQP Project.
See the documentation about async consumers. You just need to implement a POJO method and use a MessageListenerAdapter (which is inserted by default when using XML configuration) - if your POJO method returns a result, the framework will automatically send the reply to the replyTo in the inbound message, which can be a simple queue name, or exchange/routingKey.
<rabbit:listener-container connection-factory="rabbitConnectionFactory">
    <rabbit:listener queues="some.queue" ref="somePojo" method="handle"/>
</rabbit:listener-container>

public class SomePojo {

    public String handle(String in) {
        return in.toUpperCase();
    }
}
Or, you can use the annotation @RabbitListener in your POJO - again, see the documentation.
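A minimal sketch of the annotation-based equivalent, using the same queue name as the XML example above; since the method returns a value, the framework sends the reply to the inbound message's replyTo:
@Component
public class SomePojo {

    // The returned value is sent back to the replyTo of the inbound message.
    @RabbitListener(queues = "some.queue")
    public String handle(String in) {
        return in.toUpperCase();
    }
}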
Thanks Gary, it worked for me. I used the @RabbitListener annotation.
Strangely, it only works when I provide the queue alone; specifying a binding of exchange, routing key and queue doesn't work. Not sure what the issue is here.
Here is the client code snippet in python.
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='myQueue', durable='true')
channel.basic_publish(exchange='myExchange',
                      routing_key='rpc_queue',
                      body='Hello World!')
print " [x] Sent 'Hello World!'"
connection.close()
Here is spring consumer code.
@RabbitListener(
    bindings = @QueueBinding(
        value = @Queue(value = "myQueue", durable = "true"),
        exchange = @Exchange(value = "myExchange"),
        key = "rpc_queue")
)
public void processOrder(Message message) {
    String messageBody = new String(message.getBody());
    System.out.println("Received : " + messageBody);
}
Not sure what's going wrong with this binding.