Spring AMQP RabbitMQ RPC - Handle response exceptions

I am trying to use an RPC AMQP RabbitMQ queue to send and receive messages. The problem is that I have set a setReplyTimeout value, and when that timeout is exceeded I get an "org.springframework.amqp.AmqpRejectAndDontRequeueException: Reply received after timeout". I have a DLQ set up on the incoming queue, but it appears that the exception occurs when Spring tries to return the message on the reply queue that is automatically created. So how can I handle exceptions when sending messages back to a producer? Ideally I would want any message that hits an exception while being sent back to the producer to go to a DLQ.
I am using
@RabbitListener(queues = QueueConfig.QUEUE_ALL, containerFactory = "containerFactoryQueueAll")
It requires a SimpleRabbitListenerContainerFactory, which does not have setQueues. Also, rabbitTemplate does not have a setReplyQueue method.
Thanks,
Brian

Instead of using the default built-in reply listener container with the direct reply-to pseudo queue, use a Reply Listener Container with a named queue that is configured to route undeliverable messages to a DLQ.
The RabbitTemplate is configured as the container's listener:
@Bean
public RabbitTemplate amqpTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory());
    rabbitTemplate.setMessageConverter(msgConv());
    rabbitTemplate.setReplyAddress(replyQueue().getName());
    rabbitTemplate.setReplyTimeout(60000);
    rabbitTemplate.setUseDirectReplyToContainer(false);
    return rabbitTemplate;
}
@Bean
public SimpleMessageListenerContainer replyListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory());
    container.setQueues(replyQueue());
    container.setMessageListener(amqpTemplate());
    return container;
}

@Bean
public Queue replyQueue() {
    return new Queue("my.reply.queue");
}
Note that the documentation needs to be updated, but you also need
rabbitTemplate.setUseDirectReplyToContainer(false);
IMPORTANT
If you have multiple instances of the client, each needs its own reply queue.

Related

How to handle errors when RabbitMQ exchange doesn't exist (and messages are sent through a messaging gateway interface)

I'd like to know what is the canonical way to handle errors in the following situation (code is a minimal working example):
Messages are sent through a messaging gateway which defines its defaultRequestChannel and a @Gateway method:
@MessagingGateway(name = MY_GATEWAY, defaultRequestChannel = INPUT_CHANNEL)
public interface MyGateway
{
    @Gateway
    public void sendMessage(String message);
}
Messages are read from the channel and sent through an AMQP outbound adapter:
@Bean
public IntegrationFlow apiMutuaInputFlow()
{
    return IntegrationFlows
        .from(INPUT_CHANNEL)
        .handle(Amqp.outboundAdapter(rabbitConfig.myTemplate()))
        .get();
}
The RabbitMQ configuration is skeletal:
@Configuration
public class RabbitMqConfiguration
{
    @Autowired
    private ConnectionFactory rabbitConnectionFactory;

    @Bean
    public RabbitTemplate myTemplate()
    {
        RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
        r.setExchange(INPUT_QUEUE_NAME);
        r.setConnectionFactory(rabbitConnectionFactory);
        return r;
    }
}
I generally include a bean to define the RabbitMQ configuration I'm relying upon (exchange, queues and bindings), and it actually works fine. But while testing for failure scenarios, I found a situation I don't know how to properly handle using Spring Integration. The steps are:
Remove the beans that configure RabbitMQ
Run the flow against an unconfigured, vanilla RabbitMQ instance.
What I would expect is:
The message cannot be delivered because the exchange cannot be found.
Either I find some way to get an exception from the messaging gateway on the caller thread, or I find some way to otherwise intercept this error.
What I find:
The message cannot be delivered because the exchange cannot be found, and indeed this error message is logged every time the #Gateway method is called.
2020-02-11 08:18:40.746 ERROR 42778 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'my.exchange' in vhost '/', class-id=60, method-id=40)
The gateway is not failing, nor have I found a way to configure it to do so (e.g.: adding throws clauses to the interface methods, configuring a transactional channel, setting wait-for-confirm and a confirm-timeout).
I haven't found a way to otherwise catch that CachingConnectionFactory error (e.g.: configuring a transactional channel).
I haven't found a way to catch an error message on another channel (specified on the gateway's errorChannel), or in Spring Integration's default errorChannel.
I understand such a failure may not be propagated upstream by the messaging gateway, whose job is isolating callers from the messaging API, but I definitely expect such an error to be interceptable.
Could you point me in the right direction?
Thank you.
RabbitMQ is inherently async, which is one reason that it performs so well.
You can, however, block the caller by enabling confirms and returns and setting this option:
/**
 * Set to true if you want to block the calling thread until a publisher confirm has
 * been received. Requires a template configured for returns. If a confirm is not
 * received within the confirm timeout or a negative acknowledgment or returned
 * message is received, an exception will be thrown. Does not apply to the gateway
 * since it blocks awaiting the reply.
 * @param waitForConfirm true to block until the confirmation or timeout is received.
 * @since 5.2
 * @see #setConfirmTimeout(long)
 * @see #setMultiSend(boolean)
 */
public void setWaitForConfirm(boolean waitForConfirm) {
    this.waitForConfirm = waitForConfirm;
}
(With the DSL .waitForConfirm(true)).
This also requires a confirm correlation expression. Here's an example from one of the test cases:
@Bean
public IntegrationFlow flow(RabbitTemplate template) {
    return f -> f.handle(Amqp.outboundAdapter(template)
            .exchangeName("")
            .routingKeyFunction(msg -> msg.getHeaders().get("rk", String.class))
            .confirmCorrelationFunction(msg -> msg)
            .waitForConfirm(true));
}

@Bean
public CachingConnectionFactory cf() {
    CachingConnectionFactory ccf = new CachingConnectionFactory(
            RabbitAvailableCondition.getBrokerRunning().getConnectionFactory());
    ccf.setPublisherConfirmType(CachingConnectionFactory.ConfirmType.CORRELATED);
    ccf.setPublisherReturns(true);
    return ccf;
}

@Bean
public RabbitTemplate template(ConnectionFactory cf) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(cf);
    rabbitTemplate.setMandatory(true); // for returns
    rabbitTemplate.setReceiveTimeout(10_000);
    return rabbitTemplate;
}
Bear in mind this will slow things down considerably (similar to using transactions), so you may want to reconsider whether you want to do this on every send (unless performance is not an issue).
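With confirms and returns enabled this way, a failed publish surfaces as an exception on the calling thread, so the gateway call from the question can simply be wrapped in a try/catch. A rough sketch, assuming an injected MyGateway instance named myGateway and an SLF4J LOGGER; the exact exception type the gateway throws can vary, so the broad MessagingException is caught here:

try {
    // Blocks until a publisher confirm, a returned (unroutable) message, or the confirm timeout
    myGateway.sendMessage("some payload");
}
catch (MessagingException e) {
    // e.g. missing exchange/route (returned message), negative ack, or confirm timeout
    LOGGER.error("Publish failed", e);
}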

Consume DirectExchange messages using routing key and exchange in spring boot

I am trying to consume messages from an existing setup that uses a direct exchange (created with an exchange name and routing key). I have only the exchange name and routing key, not the queue name. There is support for this in plain Java, but I could not find how to do it in Spring Boot.
@RabbitListener
@RabbitHandler
public void consumeMessage(Object message) {
    LOGGER.debug("Message Consumed.... : {}", message.toString());
}
How can I consume messages using the routing key and exchange name rather than the queue name, since @RabbitListener asks for a queue?
Consumers consume from queues, not exchanges. You must bind a queue to the exchange with the routing key.
EDIT
There are several ways to automatically declare a queue on the broker.
@RabbitListener(bindings =
        @QueueBinding(exchange = @Exchange("myExchange"),
                key = "myRk", value = @Queue("")))
public void listen(String in) {
    System.out.println(in);
}
This will bind an anonymous queue (auto-delete) which will be deleted when the application is stopped.
@RabbitListener(bindings =
        @QueueBinding(exchange = @Exchange("myExchange"),
                key = "myRk", value = @Queue("foo")))
public void listen(String in) {
    System.out.println(in);
}
This will bind a permanent queue foo to the exchange with the routing key.
You can also simply declare @Bean definitions for the queue, exchange, and binding, as in the sketch below.
See Configuring the broker.
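For reference, a minimal sketch of that @Bean approach, reusing the foo queue, myExchange exchange, and myRk routing key from the snippets above:

@Bean
public Queue fooQueue() {
    return new Queue("foo");
}

@Bean
public DirectExchange myExchange() {
    return new DirectExchange("myExchange");
}

@Bean
public Binding fooBinding() {
    // Bind the queue to the exchange with the same routing key the producer uses
    return BindingBuilder.bind(fooQueue()).to(myExchange()).with("myRk");
}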

Default queue binding in RabbitAdmin

Today, our application has a vhost, a ConnectionFactory, and a RabbitAdmin, followed by multiple queue and exchange declarations. Now we have a requirement for a new vhost, and hence a new ConnectionFactory and RabbitAdmin.
After creating the new vhost, the problem I'm facing is that all existing queues and exchanges are getting created in both vhosts. To address this issue, I used the declared-by="RabbitAdminName" property, which can be used at both the queue and the exchange level. As my application has many queues and exchanges, I prefer not to disturb all the existing config by adding declared-by to each queue definition.
Is there a way (a global config to change the default behavior) to tell Rabbit that only the intended new queue goes to the new vhost/ConnectionFactory/RabbitAdmin and not the already existing queues? Any help is highly appreciated (I'm looking for the XML way of declaring this).
By default, all queues, exchanges, and bindings are declared by all
RabbitAdmin instances (that have auto-startup="true") in the
application context.
Reference: spring.io
There is currently no global setting for this; you have to configure each queue, etc., and set the declared-by property to limit the declaration to an explicit admin.
So you would need to do this for your old queues so that those queues are declared only on the old vhost.
We could add a flag to the admin to exclude any beans that do not explicitly request declaration by this admin.
Please open a new feature issue.
I solved it as follows:
@Bean
public RabbitAdmin admin() {
    RabbitAdmin rabbitAdmin = new RabbitAdmin(cf1());
    rabbitAdmin.afterPropertiesSet();
    return rabbitAdmin;
}

@Bean
public RabbitAdmin admin2() {
    RabbitAdmin rabbitAdmin = new RabbitAdmin(cf2());
    rabbitAdmin.afterPropertiesSet();
    return rabbitAdmin;
}

@Bean
public Queue queue() {
    Queue queue = new Queue("foo");
    queue.setAdminsThatShouldDeclare(admin());
    return queue;
}

@Bean
public Exchange exchange() {
    DirectExchange exchange = new DirectExchange("bar");
    exchange.setAdminsThatShouldDeclare(admin());
    return exchange;
}

@Bean
public Binding binding() {
    Binding binding = new Binding("foo", DestinationType.QUEUE, exchange().getName(), "foo", null);
    binding.setAdminsThatShouldDeclare(admin());
    return binding;
}
Reference: https://docs.spring.io/spring-amqp/docs/1.4.5.RELEASE/reference/html/amqp.html#conditional-declaration

Competing consumers on Apache Camel RabbitMQ endpoint

I have four exact replicas of a service that among other things catch messages from a certain queue using Apache Camel RabbitMQ endpoints. Each route looks like this:
//Start Process from RabbitMQ queue
from("rabbitmq://" +
System.getenv("ADVERTISE_ADDRESS") +
"/" +
System.getenv("RABBITMQ_EXCHANGE_NAME") +
"?routingKey=" +
System.getenv("RABBITMQ_ROUTING_KEY") +
"&autoAck=true")
.process(exchange -> exchange.getIn().setBody(exchange.getIn().getBody()))
.unmarshal().json(JsonLibrary.Jackson, TwitterBean.class)
.transform().method(ResponseTransformer.class, "transformtwitterBean")
.marshal().json(JsonLibrary.Jackson)
.setHeader(Exchange.HTTP_METHOD, constant("POST"))
.setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
.to("http4://" + System.getenv("ADVERTISE_ADDRESS") + ":" + System.getenv("CAMUNDA_PORT") + "/rest/process-definition/key/MainProcess/start")
.log("Response: ${body}");
Right now each endpoint processes the message, even though the "concurrent consumers" option is one by default.
I assumed that maybe my messages weren't acknowledged, so I set the autoAck option to true.
This didn't help; how can I make these services competing consumers?
EDIT:
A code snippet from the configuration of my publisher app:
@Configuration
public class RabbitMqConfig {

    @Bean
    Queue queue() {
        return new Queue(System.getenv("RABBITMQ_QUEUE_NAME"), true);
    }

    @Bean
    DirectExchange exchange() {
        return new DirectExchange(System.getenv("RABBITMQ_EXCHANGE_NAME"), true, true);
    }

    @Bean
    Binding binding(Queue queue, DirectExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(System.getenv("RABBITMQ_ROUTING_KEY"));
    }

    @Bean
    public MessageConverter jsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    public AmqpTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setMessageConverter(jsonMessageConverter());
        return rabbitTemplate;
    }
}
The issue you have is that you're not naming your queue on the service side.
Based on the Apache Camel RabbitMQ documentation, this means that a random name is generated for the queue.
So:
you have a publisher that sends a message to an exchange
then each of your services creates a queue with a random name and binds it to the exchange
Each service having its own queue, bound to the same exchange, will get the same messages.
To avoid this you need to provide a queue name, so that each service connects to the same queue and shares message consumption with the other service instances, as sketched below.
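For example, a sketch of the consuming endpoint from the question with an explicit queue name added via the queue URI option, reusing the RABBITMQ_QUEUE_NAME environment variable from the publisher configuration (the rest of the route is unchanged):

from("rabbitmq://" + System.getenv("ADVERTISE_ADDRESS")
        + "/" + System.getenv("RABBITMQ_EXCHANGE_NAME")
        + "?routingKey=" + System.getenv("RABBITMQ_ROUTING_KEY")
        + "&queue=" + System.getenv("RABBITMQ_QUEUE_NAME") // all replicas consume from this one named queue
        + "&autoAck=true")
    .log("Received: ${body}");

With all four replicas reading from the same named queue, the broker dispatches each message to only one of them.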
Sounds like you don't have a Queue, but a Topic. See here for a comparison.
The message broker is responsible for giving a queue message to only one consumer, no matter how many of them are present.

Delay message to send to listener using Spring AMQP

I have a requirement to send a message to a MessageListener after a certain duration, so is there any way to achieve this using Spring AMQP?
E.g.: the producer produces a message and it goes to a RabbitMQ queue, where the listener on that queue receives it immediately. I want to delay the message so that it is received on the consumer side only after some configured duration, say 1000 ms.
RabbitMQ provides the Delayed Message Exchange feature for this purpose.
Starting with version 1.6, Spring AMQP also provides a high-level API for it: http://docs.spring.io/spring-amqp/reference/html/_reference.html#delayed-message-exchange:
<rabbit:topic-exchange name="topic" delayed="true" />
MessageProperties properties = new MessageProperties();
properties.setDelay(15000);
template.send(exchange, routingKey,
MessageBuilder.withBody("foo".getBytes()).andProperties(properties).build());
UPDATE
Before Spring AMQP 1.6, you have to do it like this:
@Bean
CustomExchange delayExchange() {
    Map<String, Object> args = new HashMap<String, Object>();
    args.put("x-delayed-type", "direct");
    return new CustomExchange("my-exchange", "x-delayed-message", true, false, args);
}
...
MessageProperties properties = new MessageProperties();
properties.setHeader("x-delay", 15000);
template.send(exchange, routingKey,
        MessageBuilder.withBody("foo".getBytes()).andProperties(properties).build());
Also see this question and its answer: Scheduled/Delay messaging in Spring AMQP RabbitMq
If you use Spring Boot, it can look like this:
@Bean
Queue queue() {
    return QueueBuilder.durable(queueName)
            .withArgument("x-dead-letter-exchange", dlx)
            .withArgument("x-dead-letter-routing-key", dlq)
            .build();
}

@Bean
TopicExchange exchange() {
    return (TopicExchange) ExchangeBuilder.topicExchange(topicExchangeName)
            .delayed()
            .build();
}

@Bean
Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with(queueName);
}