1. What is meant by "messages in bulk" in basicNack?
The AMQP 0-9-1 specification defines the basic.reject method that allows clients to reject individual, delivered messages, instructing the broker to either discard them or requeue them. Unfortunately, basic.reject provides no support for negatively acknowledging messages in bulk.
2. Is basicNack with multiple=false the same as basicReject?
channel.basicNack(deliveryTag, false, true);
channel.basicReject(deliveryTag, true);
Yes; it is effectively the same.
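For illustration, a minimal sketch with the RabbitMQ Java client (channel and deliveryTag are assumed to come from an open consumer):

// Single-message reject with requeue; basic.reject has no bulk form.
channel.basicReject(deliveryTag, true);

// Equivalent single-message negative ack: multiple=false, requeue=true.
channel.basicNack(deliveryTag, false, true);

// "In bulk": multiple=true negatively acknowledges this delivery and every
// still-unacknowledged delivery with a lower delivery tag on this channel.
channel.basicNack(deliveryTag, true, true);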
Related
Is it possible to avoid sending the same messages (data) to the certain queue using RabbitMQ configuration?
Or should I do this programmatically?
RabbitMQ does not offer message de-duplication out of the box.
There is a plugin which offers a certain level of de-duplication.
You can also implement de-duplication yourself on either the producer or the consumer side by checking whether the sent/received message has already been seen, using a cache.
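As a rough consumer-side sketch with the RabbitMQ Java client, assuming producers set a unique message-id property and using an in-memory set as the cache (a shared store such as Redis would be more typical; channel is an open Channel, and "orders" and process() are placeholders):

Set<String> seen = ConcurrentHashMap.newKeySet(); // stand-in for a shared de-duplication cache

DeliverCallback onDeliver = (consumerTag, delivery) -> {
    long tag = delivery.getEnvelope().getDeliveryTag();
    String messageId = delivery.getProperties().getMessageId();
    if (messageId != null && !seen.add(messageId)) {
        channel.basicAck(tag, false);   // duplicate: acknowledge and drop
        return;
    }
    process(delivery.getBody());        // placeholder for real handling
    channel.basicAck(tag, false);
};
channel.basicConsume("orders", false, onDeliver, consumerTag -> { });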
I am using RabbitMQ as an MQ broker. Is it possible to get a notification that a certain message has been acknowledged by all queues? That is, if it was sent to 5 queues, we get a notification after the acknowledgment of the last/5th consumer.
I know you can introduce reply-to queues, but that's not what I am looking for. I don't want to force the consumer to send an acknowledgment message to some queue after acknowledgment.
Is it also possible to continue this follow-up after a broker and/or publisher restart?
No, it is not possible as you state it.
You cannot, from the publisher side, know whether a message has been ACK'd at the consumer side, and in most patterns it's not really something you'd want anyway.
You can, however, use Publisher Confirms. These would inform the publisher that the message has been routed to all the bound queues.
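A minimal sketch of Publisher Confirms with the RabbitMQ Java client; the exchange name, routing key and payload are placeholders:

Channel channel = connection.createChannel();
channel.confirmSelect();   // put the channel into confirm mode

channel.basicPublish("events", "order.created", null, payload);

// Blocks until the broker confirms it has taken responsibility for the
// message (i.e. it was routed to all bound queues), or throws on a nack/timeout.
channel.waitForConfirmsOrDie(5_000);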
There are several mechanisms for data safety on both the publisher and consumer side. You would normally trust that the broker does not miss messages in between, the same way you trust that a database will hold the records over time.
If nevertheless your workflow requires that your publisher side is informed about the completion of a complex distributed task, and you really can't get away with fire and forget, then you will need to implement that response yourself, normally by means of an additional message.
I'm using NServiceBus 4.x with RabbitMQ 3.2.x as my transport.
I made the assumption that by using RabbitMQ as my transport I would be given the competing consumer model as an option. I understand that NServiceBus employs the "Fanout" exchange type for all exchanges and does not support round robin at this time. However, is there a way to configure NServiceBus to take advantage of the levels of indirection via exchanges and channels that RabbitMQ offers?
I have several consumers I would like to compete for messages from a given queue. What I am observing is subscribers blocking access to further message retrieval from the queue until the message is consumed. So having more than one consumer at this point does me no good other than redundancy.
After reading some documentation on RabbitMQ I'm assuming that it's normal to block until the Ack receipt is sent from the subscriber. But I had assumed that subscriber #2 would have free access to the queue to fetch another message.
There is mention of increasing the prefetch count on the RabbitMQ channel.
Example:
channel.BasicQos(0, prefetchcount, false);
I don't see anywhere that I can change this setting via configuration in NServiceBus. Furthermore, as I read what prefetch does, I'm really not sure this is what I'm looking for.
Is it possible to use RabbitMQ without a distributor-type pattern like the one used with MSMQ? Or should I move to MassTransit or Rebus?
Put prefetchcount=2 in your connection string. Any value above 1 tells the broker to allow that many unacknowledged messages to be outstanding at once. You will need to fiddle with this setting to find the optimum for your scenario.
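For example, in app.config (assuming the conventional NServiceBus/Transport connection string name; host and value are placeholders):

<connectionStrings>
  <add name="NServiceBus/Transport" connectionString="host=localhost;prefetchcount=2" />
</connectionStrings>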
Is there any way in RabbitMQ to have multiple consumers get the same message from the same queue?
I need to send the same message to anyone who's listening but also ensure that someone deals with it. Basically, I need the fanout functionality of an exchange combined with the basic.ack functionality of a queue. Is there any way to accomplish this in a scalable way?
If you are trying to ensure that the message is properly processed, acknowledgement already provides this capability. If your consumer is unable to process the message and does not provide an ack, it will be requeued and processed again by the next available consumer. Implementing multiple competing consumers on the same queue will give you round-robin delivery, allowing the other consumers a chance for success.
How scalable this will be depends on how long it takes to process each message compared to the incoming rate, queue durability, prefetch and how many competing consumers you have on the queue.
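As a sketch with the RabbitMQ Java client (channel is an open Channel; "tasks" and handle() are placeholders), competing consumers acknowledge on success and negatively acknowledge with requeue on failure:

DeliverCallback worker = (consumerTag, delivery) -> {
    long tag = delivery.getEnvelope().getDeliveryTag();
    try {
        handle(delivery.getBody());            // placeholder for real processing
        channel.basicAck(tag, false);          // success: message is removed from the queue
    } catch (Exception e) {
        channel.basicNack(tag, false, true);   // failure: requeue for another consumer
    }
};
// Registering several consumers on the same queue (here, or in separate
// processes) gives round-robin, competing-consumer delivery.
channel.basicConsume("tasks", false, worker, consumerTag -> { });
channel.basicConsume("tasks", false, worker, consumerTag -> { });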
I'm in a phase of learning RabbitMQ/AMQP from the RabbitMQ documentation. One thing is not clear to me, so I wanted to ask those who have hands-on experience.
I want to have multiple consumers listening to the same queue in order to balance the work load. What I need is pretty much close to the "Work Queues" example in the RabbitMQ tutorial.
I want the consumer to acknowledge the message explicitly after it finishes handling it, so that the message is preserved and delegated to another consumer in case of a crash. Handling a message may take a while.
My question is whether AMQP postpones processing of the next message until the previous message is ack'ed. If so, how do I achieve load balancing between multiple workers and guarantee no messages get lost?
No, the other consumers don't get blocked. Other messages will get delivered even if they have unacknowledged but delivered predecessors. If a channel closes while holding unacknowledged messages, those messages get returned to the queue.
See RabbitMQ Broker Semantics
Messages can be returned to the queue using AMQP methods that feature a requeue parameter (basic.recover, basic.reject and basic.nack), or due to a channel closing while holding unacknowledged messages.
EDIT In response to your comment:
Time to dive a little deeper into the AMQP specification then perhaps:
3.1.4 Message Queues
A message queue is a named FIFO buffer that holds messages on behalf of a set of consumer applications. Applications can freely create, share, use, and destroy message queues, within the limits of their authority.
Note that in the presence of multiple readers from a queue, or client transactions, or use of priority fields, or use of message selectors, or implementation-specific delivery optimisations the queue MAY NOT exhibit true FIFO characteristics. The only way to guarantee FIFO is to have just one consumer connected to a queue. The queue may be described as “weak-FIFO” in these cases. [...]
3.1.8 Acknowledgements
An acknowledgement is a formal signal from the client application to a message queue that it has successfully processed a message. [...]
So acknowledgement confirms processing, not receipt. The broker will hold on to the message until it has been acknowledged, so that it can redeliver it if necessary. But it is free to deliver more messages to consumers even if the preceding messages have not yet been acknowledged. The consumers will not be blocked.
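In Work Queue terms, a worker that acknowledges only after finishing might look roughly like this with the RabbitMQ Java client (channel is an open Channel; "work-queue" and doWork() are placeholders):

channel.basicQos(1);   // at most one unacknowledged delivery per worker (fair dispatch)

DeliverCallback worker = (consumerTag, delivery) -> {
    doWork(delivery.getBody());   // long-running processing
    // Ack only after the work is done; if this worker's channel closes first,
    // the broker returns the delivery to the queue for another worker.
    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
};
channel.basicConsume("work-queue", false, worker, consumerTag -> { });

Other workers consuming from the same queue keep receiving messages while this one is busy; nothing blocks on the outstanding ack.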