Spring AMQP - Detect rejected messages?

Is there a way in Spring AMQP to detect when a consumer has rejected a message?
My application declares the exchange, but the consumer declares the queue. I know that the consumer can set a dead letter exchange, but I want to remove this responsibility from the consumer. I need to somehow be notified that a consumer has rejected a message.

"I need to know"
I assume you mean the code that sent the message.
No; there is no such mechanism; the producer and consumer are independent.
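If the real goal is just to take the dead-letter setup out of the consumer's hands, one possible workaround (an assumption on my part, not a rejection notification, which doesn't exist) is to attach the dead letter exchange via a broker policy and have the producing application listen on the dead-letter queue. A minimal sketch, where the names my.dlx / my.dlq and the policy are placeholders:

```java
// Assumed broker-side policy (run once by an operator), so the consumer's
// queue declaration never has to mention a DLX:
//   rabbitmqctl set_policy dlx-policy ".*" '{"dead-letter-exchange":"my.dlx"}' --apply-to queues
import org.springframework.amqp.core.*;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DeadLetterConfig {

    @Bean
    FanoutExchange deadLetterExchange() {
        return new FanoutExchange("my.dlx");
    }

    @Bean
    Queue deadLetterQueue() {
        return QueueBuilder.durable("my.dlq").build();
    }

    @Bean
    Binding deadLetterBinding() {
        return BindingBuilder.bind(deadLetterQueue()).to(deadLetterExchange());
    }

    // Messages the consumer rejects with requeue=false are dead-lettered by
    // the policy and end up here, where the producing application can react.
    @RabbitListener(queues = "my.dlq")
    public void onRejected(Message failed) {
        // inspect the x-death header, log it, or reprocess locally
    }
}
```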

Related

Rabbit MQ - can a message be persisted until all subscribed consumers have received it?

I'm having a little trouble figuring out whether Rabbit MQ can publish a message to a single queue with multiple subscribers, where the message will not get deleted until all subscribers to that queue have gotten the message.
The closest I can find is https://www.rabbitmq.com/tutorials/amqp-concepts.html, where it states:
AMQP 0-9-1 has a built-in feature called message acknowledgements (sometimes referred to as acks) that consumers use to confirm message delivery and/or processing. If an application crashes (the AMQP broker notices this when the connection is closed), if an acknowledgement for a message was expected but not received by the AMQP broker, the message is re-queued (and possibly immediately delivered to another consumer, if any exists).
Does this mean if the queue has more than one subscriber, it will wait until the message is consumed by all subscribers?
You should use multiple queues bound to the same exchange, using the same binding. Then, when a message matches the binding, it will be delivered to all queues, which presumably each have a consumer.
If you have multiple consumers on a single queue, RabbitMQ will round-robin deliveries among those consumers (which is not what you want).
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
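For what it's worth, here is a minimal sketch of that layout with the plain Java client; the exchange, queue, and routing-key names are made up for illustration:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class OneQueuePerSubscriber {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            // One exchange, one queue per subscriber, all bound with the same key.
            ch.exchangeDeclare("orders", "direct", true);
            ch.queueDeclare("orders.billing", true, false, false, null);
            ch.queueDeclare("orders.shipping", true, false, false, null);
            ch.queueBind("orders.billing", "orders", "order.created");
            ch.queueBind("orders.shipping", "orders", "order.created");

            // A matching message is copied to every bound queue, so each
            // subscriber gets its own copy and acknowledges it independently.
            ch.basicPublish("orders", "order.created", null, "order #1".getBytes());
        }
    }
}
```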

RabbitMQ: Publishing message when consumer is down and later consumer can't consume message without named queue

I have a producer and a consumer, with multiple instances of the consumer running. When the producer publishes a message, my intention is for every instance to consume it. So I am using a direct exchange: the producer publishes a message to the direct exchange with a routing key, and the consumers listen on that routing key with exclusive queues. This works fine when the consumers are up and the producer publishes a message. But when the consumers are down and the producer publishes a message, the consumers do not receive it once they come back up.
I googled the issue; one suggestion was to use a named queue. But if I use a named queue, messages will be consumed round-robin, which does not meet my requirement that every consumer receive the same message.
Is there any other solution?
I'd appreciate your help.
There are two solutions to your issue.
Using named queues is one of them.
Set your exchange to fanout mode and bind your named queues to it. That way, when a publisher sends a message to your exchange, it is dispatched to all the queues bound to it.
You can then have one or more consumers per queue (allowing you to scale), but you will have to define one named queue per consumer. When a consumer disconnects, its queue still receives messages, and when it comes back it can consume them.
You should be able to do what you want that way.
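A hedged sketch of that fanout / named-queue-per-consumer layout with Spring AMQP; the exchange and queue names (and the idea of suffixing the queue with an instance id) are illustrative assumptions:

```java
import org.springframework.amqp.core.*;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BroadcastConfig {

    @Bean
    FanoutExchange broadcast() {
        return new FanoutExchange("events.fanout", true, false); // durable, not auto-delete
    }

    // Each consumer instance declares its own DURABLE, NAMED queue
    // (e.g. "events.instance-1", "events.instance-2") instead of an exclusive
    // one, so the queue keeps accumulating messages while the instance is down.
    @Bean
    Queue instanceQueue() {
        return QueueBuilder.durable("events.instance-1").build();
    }

    @Bean
    Binding instanceBinding() {
        return BindingBuilder.bind(instanceQueue()).to(broadcast());
    }

    @RabbitListener(queues = "events.instance-1")
    public void handle(String event) {
        // messages published while this instance was offline arrive here on restart
    }
}
```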
The other option is more for your personal knowledge, since you said you want to use RabbitMQ: in that particular case you could use Kafka instead, and your consumer could then, after reconnecting, resume from the offset it had reached when it disconnected.
Please update me if it doesn't work :)

Fanout exchanges are basically load balancers right?

I have been learning AMQP using RabbitMQ and I came across this concept called fanout exchanges. From the illustration diagram, all I could see is that it's some kind of load balancer. Could anyone please explain what its actual purpose is?
I assume you mean that only one queue will get a message once it arrives at a fanout exchange. From that point of view:
No, I don't think it's a load balancer (I admit the terminology can be confusing).
In RabbitMQ there are different types of exchanges, and fanout is indeed only one of them. The basic model of RabbitMQ lets you bind as many queues as you want to the same exchange. All the queues bound to the exchange will get the message (RabbitMQ simply replicates it), so an exchange can't act as a load balancer.
The only difference between the exchange types is the algorithm used to match the routing key. The routing key is like the "to" field on a regular envelope. When a message arrives at an exchange, the exchange checks the routing key against the queue bindings and, depending on its type, "finds" which queues the message should be routed to.
When a queue is bound to an exchange, it always supplies such a binding. It's as if the queue tells the exchange: "hey, all messages addressed to John Smith (that's the routing key), please pass them to me." Every message that arrives carries a "to" field in its envelope, so the exchange checks whether the message is addressed to John Smith and, if so, routes it to that queue.
It's possible that many queues are interested in messages addressed to John Smith; in that case the message is replicated. As for the fanout exchange, it simply ignores the routing key and sends the message to all the bound queues.
Now, there is another abstraction called a consumer. Many consumers can be connected to a single queue.
The trick is that only one consumer gets any given message for processing.
So if you want a load balancer, use a single queue bound to your exchange (which can of course be a fanout exchange) and connect many consumers to that queue. RabbitMQ will deliver each message to one consumer (internally it round-robins among the consumers); if that consumer can't handle it, the message will be re-queued and RabbitMQ will attempt to deliver it to another consumer.
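A rough illustration of that last point with the plain Java client; the queue name is arbitrary. Two consumers share one queue, so deliveries alternate between them:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class WorkQueueConsumers {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();

        // Two consumers on the SAME queue: RabbitMQ alternates deliveries
        // between them, which is the load-balancing setup described above.
        for (int i = 1; i <= 2; i++) {
            final int worker = i;
            Channel ch = conn.createChannel();
            ch.queueDeclare("work", true, false, false, null);
            ch.basicConsume("work", false, (consumerTag, delivery) -> {
                System.out.println("worker " + worker + " got: "
                        + new String(delivery.getBody()));
                // ack on success; a basicNack with requeue=true would instead
                // send the message back for another consumer to pick up
                ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            }, consumerTag -> { });
        }
    }
}
```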

Returning NACKed requests in RabbitMQ work queues

I'm trying to implement a work queue architecture using RabbitMQ. I have a single sender application and multiple consumers.
I use manual ack on the consumers, so in case of failure in handling a request, it will be re-queued for another consumer to handle.
I was wondering what would happen if all the consumers return nack on a specific request. Is there a way to recognize this behavior and mark the request as 'dead' so it's rerouted to the dead letter exchange? In such a case, I'd like to have a separate consumer open on the queue bound to the dead letter exchange and receive all the messages that failed to be handled by any consumer (for logging purposes or executing this request's task locally, without distributed consumers).
Another question I had. When requeueing the request upon receiving NACK from a consumer, will it try to send this request to other consumers or will it try to send to the first available, even if it's the one that already nacked the request?
Thanks
No, there is no such feature in RabbitMQ. You can handle exceptions and, for specific exceptions, publish the message to a dead-letter queue yourself; or, if you know the maximum time a message should live, configure a TTL on the queue (expired messages are then dead-lettered if a dead letter exchange is configured).
If you nack a message, it will go to the next AVAILABLE consumer.
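A hedged sketch of the workaround hinted at above, with the plain Java client: declare the work queue with a dead-letter exchange, requeue on the first failure, and reject without requeue once the message has already been redelivered so it lands in the dead-letter queue. All names, and the single-retry heuristic based on the redelivered flag, are assumptions for illustration:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.Map;

public class DeadLetteringWorker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();

        // The work queue dead-letters to "work.dlx"; an optional "x-message-ttl"
        // argument would also dead-letter messages that sit in the queue too long.
        Map<String, Object> queueArgs = Map.of("x-dead-letter-exchange", "work.dlx");
        ch.exchangeDeclare("work.dlx", "fanout", true);
        ch.queueDeclare("work.failed", true, false, false, null);
        ch.queueBind("work.failed", "work.dlx", "");
        ch.queueDeclare("work", true, false, false, queueArgs);

        ch.basicConsume("work", false, (tag, delivery) -> {
            try {
                process(delivery.getBody());
                ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            } catch (Exception e) {
                // First failure: requeue for another consumer.
                // Already redelivered once: reject without requeue, which routes
                // the message to work.dlx and therefore into work.failed.
                boolean requeue = !delivery.getEnvelope().isRedeliver();
                ch.basicNack(delivery.getEnvelope().getDeliveryTag(), false, requeue);
            }
        }, tag -> { });
    }

    static void process(byte[] body) { /* application logic */ }
}
```

A separate consumer on "work.failed" can then log or locally execute the requests that no worker managed to handle.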

RabbitMQ - Does one consumer block the other consumers of the same queue?

I'm in a phase of learning RabbitMQ/AMQP from the RabbitMQ documentation. Something that is not clear to me that I wanted to ask those who have hands-on experience.
I want to have multiple consumers listening to the same queue in order to balance the work load. What I need is pretty much close to the "Work Queues" example in the RabbitMQ tutorial.
I want the consumer to acknowledge the message explicitly after it finishes handling it, so that the message is preserved and handed to another consumer in case of a crash. Handling a message may take a while.
My question is whether AMQP postpones delivery of the next message until the previous one has been ack'ed. If so, how do I achieve load balancing between multiple workers and guarantee no messages get lost?
No, the other consumers don't get blocked. Other messages will get delivered even if they have unacknowledged but delivered predecessors. If a channel closes while holding unacknowledged messages, those messages get returned to the queue.
See RabbitMQ Broker Semantics
Messages can be returned to the queue using AMQP methods that feature a requeue parameter (basic.recover, basic.reject and basic.nack), or due to a channel closing while holding unacknowledged messages.
EDIT In response to your comment:
Time to dive a little deeper into the AMQP specification then perhaps:
3.1.4 Message Queues
A message queue is a named FIFO buffer that holds messages on behalf of a set of consumer applications. Applications can freely create, share, use, and destroy message queues, within the limits of their authority. Note that in the presence of multiple readers from a queue, or client transactions, or use of priority fields, or use of message selectors, or implementation-specific delivery optimisations the queue MAY NOT exhibit true FIFO characteristics. The only way to guarantee FIFO is to have just one consumer connected to a queue. The queue may be described as “weak-FIFO” in these cases. [...]
3.1.8 Acknowledgements
An acknowledgement is a formal signal from the client application to a message queue that it has successfully processed a message. [...]
So an acknowledgement confirms processing, not receipt. The broker holds on to unacknowledged messages so that it can redeliver them, but it is free to deliver more messages to consumers even if the preceding messages have not yet been acknowledged. The consumers will not be blocked.
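In practice you also bound how many unacknowledged messages each worker holds with basic.qos (prefetch). A small sketch with the plain Java client; the queue name and prefetch value are arbitrary:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PrefetchWorker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();

        ch.queueDeclare("tasks", true, false, false, null);

        // At most one unacknowledged message per consumer: a slow worker does
        // not pile up deliveries, while other consumers on the same queue keep
        // receiving messages, so nobody is blocked.
        ch.basicQos(1);

        ch.basicConsume("tasks", false, (tag, delivery) -> {
            handle(delivery.getBody());            // may take a while
            ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, tag -> { });
    }

    static void handle(byte[] body) { /* long-running work */ }
}
```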