Is there an easy way to implement something like "locking" to prevent race conditions in a RabbitMQ queue when using ack?
I have the following problem: I have a couple of clients consuming a queue with manual acknowledgements. Whenever a client gets a message, it acknowledges it and processes it. However, if the processing fails for some reason, I'd like the message to be returned to the queue.
Simply process the message first and only then acknowledge it.
If processing fails, requeue the message with basic.nack (or basic.reject) and the requeue flag set to true.
QueueingConsumer consumer = new QueueingConsumer(channel);
boolean autoAck = false;
channel.basicConsume("hello", autoAck, consumer);
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
// do your processing here
// note: the second argument of basicAck is "multiple", not "requeue"
boolean multiple = false;
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), multiple);
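If processing can fail, you can wrap it in a try/catch and requeue on failure. A minimal sketch of that idea (the processing step itself is left as a placeholder):

try {
    // do your processing here
    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
} catch (Exception e) {
    // multiple = false, requeue = true: put just this message back on the queue
    channel.basicNack(delivery.getEnvelope().getDeliveryTag(), false, true);
}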
I am trying to code a simple consumer using librabbitmq. It works, but when I execute amqp_basic_consume, it consumes the entire queue.
What I want is for it to get a single message, process it and repeat.
I tried using a basic_qos to have the consumer prefetch 1 at a time, but that seems to have no effect at all.
The basic setup and loop:
// set qos of 1 message at a time
if (!amqp_basic_qos(conn, channel, 0, 1, 0)) {
    die_on_amqp_error(amqp_get_rpc_reply(conn), "basic.qos");
}

// Consuming the message
amqp_basic_consume(conn, channel, queue, amqp_empty_bytes, no_local, no_ack, exclusive, amqp_empty_table);

while (run) {
    amqp_rpc_reply_t result;
    amqp_envelope_t envelope;

    amqp_maybe_release_buffers(conn);
    result = amqp_consume_message(conn, &envelope, &timeout, 0);

    if (AMQP_RESPONSE_NORMAL == result.reply_type) {
        strncpy(message, envelope.message.body.bytes, envelope.message.body.len);
        message[envelope.message.body.len] = '\0';
        printf("Received message size: %d\nbody: -%s-\n", (int) envelope.message.body.len, message);

        if (strncmp(message, "DONE", 4) == 0) {
            printf("XXXXXXXXXXXXXXXXXX Cease message received. XXXXXXXXXXXXXXXXXXXXX\n");
            run = 0;
        }

        amqp_destroy_envelope(&envelope);
    } else {
        printf("Timeout.\n");
        run = 0;
    }
}
I expect to have a filled queue that I can start processing, and if I hit ^C, the remaining messages should still be in the queue. Instead, even if I have only processed one message, the entire queue is emptied.
This is the behavior when no_ack is true: messages are pushed to the connected consumer as fast as the broker can send them, because the broker treats them as acknowledged the moment they are delivered.
You would want to change no_ack to false and then explicitly ack each message back to the broker in this case.
Alternatively, you could use a basic.get to pull messages from the broker one at a time as opposed to using a push-based consumer (there are folks out there who don't like this idea). Your use case will determine what is most appropriate, but based on the fact that you seem to have a full queue and fairly process-intensive messages, I would assume a basic.get would be just fine in this scenario. The question then would be to decide how often to poll when the queue is empty.
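In librabbitmq terms, that means passing 0 for the no_ack argument to amqp_basic_consume and calling amqp_basic_ack with the envelope's delivery tag once processing succeeds. For illustration only, here is the same manual-ack pattern sketched with the Java client (the queue name "hello" is just a placeholder):

channel.basicQos(1);                 // at most one unacknowledged message in flight per consumer
boolean autoAck = false;             // the equivalent of no_ack = 0
channel.basicConsume("hello", autoAck, new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        // process the message, then acknowledge it explicitly
        getChannel().basicAck(envelope.getDeliveryTag(), false);
    }
});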
I want to consume multiple messages from a specific queue, or from a specific exchange with a given routing key.
The scenario is as follows:
Publisher publishes message 1 to queue 1
Publisher publishes message 2 to queue 1
Publisher publishes message 3 to queue 1
Publisher publishes message 4 to queue 2
Publisher publishes message 5 to queue 2
..
Consumer consumes messages from queue 1
and gets [message 1, message 2, message 3] all at once, handling them in one callback:
listen_to(queue_name, num_of_msg_to_fetch or all, function(messages){
    // do some stuff with the returned list
});
The messages do not arrive at the same time; they are like events, and I want to collect them in a queue, package them, and send them to a third party.
I also read this post:
http://rabbitmq.1065348.n5.nabble.com/Consuming-multiple-messages-at-a-time-td27195.html
Thanks
Don't consume directly from the queue, as queues deliver to their consumers in round-robin fashion (an AMQP mandate).
Use the Shovel plugin to transfer the queue contents to a fanout exchange and consume messages right from that exchange. That way every connected consumer gets all of the messages. :)
If you want to consume multiple messages from a specific queue, you can try something like the code below.
channel.queueDeclare(QUEUE_NAME, false, false, false, null);

Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag,
                               Envelope envelope,
                               AMQP.BasicProperties properties,
                               byte[] body) throws IOException {
        String message = new String(body, "UTF-8");
        logger.info("Received Message --> " + message);
    }
};

// register the consumer so deliveries actually start arriving
channel.basicConsume(QUEUE_NAME, true, consumer);
You might need to conceptually separate the domain-message from the RMQ-message. As a producer you'd then bundle multiple domain messages into a single RMQ-message and .produce() it to RMQ. Remember that this kind of design introduces a batching window, and therefore extra latency (you can take some inspiration from Kafka, which bundles messages to optimize I/O at the cost of latency).
On the consuming side, you'd then have a consumer with a typical .handleDelivery implementation that transforms the received body for processing: byte[] -> Set[DomainMessage] -> your listener.
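A minimal sketch of that consuming side, assuming (purely for illustration) that the producer bundles domain messages as newline-delimited UTF-8 strings and that handleDomainMessages is your listener callback:

// requires java.util.Arrays and java.util.List in addition to the RabbitMQ client classes
Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        // one RMQ-message carries several domain messages
        List<String> domainMessages = Arrays.asList(new String(body, "UTF-8").split("\n"));
        handleDomainMessages(domainMessages);  // hypothetical listener callback
    }
};
channel.basicConsume(QUEUE_NAME, true, consumer);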
Up until now, my RabbitMQ consumer clients have used a prefetch value of 1. I'm looking to increase the value in order to gain performance. If I set the value to 2, will the RabbitMQ server send each consumer 2 messages at once such that I will need to parse the two messages and store the second one in a List until the first is processed and acknowledged? Or will the API handle this behind the scenes?
I'm using the Java AMQP client library:
ConnectionFactory factory = new ConnectionFactory();
...
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.basicQos(2);
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume(CONSUME_QUEUE_NAME, false, consumer);
while (!Thread.currentThread().isInterrupted()) {
    try {
        QueueingConsumer.Delivery delivery = consumer.nextDelivery();
        String m = new String(delivery.getBody(), "UTF-8");
        // Will m contain two messages? Will I have to parse each message and keep track of them within a List?
        ...
}
The API handles this behind the scenes, so there is nothing for you to worry about there.
Regarding which message goes where, RMQ simply delivers using round robin; that is, if the queue holds 1 2 3 4 5 6 and you have consumer1 and consumer2:
consumer1 will have 1 3 5
consumer2 will have 2 4 6
Should the connection to any of your consumers die, its prefetched (unacknowledged) messages will be redelivered to the remaining active consumers using the same delivery method.
This should be interesting reading and a good starting point to figure more exactly what happens:
Tutorial no.2 which I'm sure you've read
Reliability
The API internally queues messages in a blocking queue.
Setting the prefetch count to more than 1 is actually a good idea, since your worker need not wait for each and every message to arrive. It can read up to N messages (where N is the prefetch count) and start working on a message as soon as it has finished the previous one.
Also, you have the option to acknowledge multiple messages at once instead of acknowledging individually.
channel.basicAck(lastDeliveryTag, true);
The boolean true indicates that all messages up to and including the supplied lastDeliveryTag should be acknowledged.
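A minimal sketch of that batch acknowledgement, assuming a prefetch of 10 and the QueueingConsumer from the question (the batch size is arbitrary):

channel.basicQos(10);                 // let the broker push up to 10 unacknowledged messages
long lastDeliveryTag = 0;
for (int i = 0; i < 10; i++) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    // process delivery.getBody() here
    lastDeliveryTag = delivery.getEnvelope().getDeliveryTag();
}
// acknowledge the whole batch in one call (multiple = true)
channel.basicAck(lastDeliveryTag, true);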
In a Mule flow, I would like to add an Exception Handler that forwards messages to a "retry queue" when there is an exception. However, I don't want this retry logic to run automatically. Instead, I'd rather receive a notification so I can review the errors and then decide whether to retry all messages in the queue or not.
I don't want to receive a notification for every exception. I'd rather have a scheduled job that runs every 15 minutes and checks to see if there are messages in this retry queue and then only send the notification if there are.
Is there any way to determine how many messages are currently in a persistent VM queue?
Assuming you use the default VM queue persistence mechanism and that the VM connector is named vmConnector, you can do this:
final String queueName = "retryQueue";
int messageCount = 0;
final VMConnector vmConnector = (VMConnector) muleContext.getRegistry()
.lookupConnector("vmConnector");
for (final Serializable key : vmConnector.getQueueProfile().getObjectStore().allKeys())
{
final QueueKey queueKey = (QueueKey) key;
if (queueName.equals(queueKey.queueName))
{
messageCount++;
}
}
System.out.printf("Queue %s has %d pending messages%n", queueName, messageCount);
I'd like to send a message to a RabbitMQ server and then wait for a reply message (on a "reply-to" queue). Of course, I don't want to wait forever in case the application processing these messages is down - there needs to be a timeout. It sounds like a very basic task, yet I can't find a way to do this. I've now run into this problem with the Java API.
The RabbitMQ Java client library now supports a timeout argument to its QueueingConsumer.nextDelivery() method.
For instance, the RPC tutorial uses the following code:
channel.basicPublish("", requestQueueName, props, message.getBytes());
while (true) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    if (delivery.getProperties().getCorrelationId().equals(corrId)) {
        response = new String(delivery.getBody());
        break;
    }
}
Now, you can use consumer.nextDelivery(1000) to wait for maximum one second. If the timeout is reached, the method returns null.
channel.basicPublish("", requestQueueName, props, message.getBytes());
while (true) {
    // Use a timeout of 1000 milliseconds
    QueueingConsumer.Delivery delivery = consumer.nextDelivery(1000);
    // A null delivery means the timeout was reached without receiving a reply
    if (delivery == null) {
        break;
    }
    if (delivery.getProperties().getCorrelationId().equals(corrId)) {
        response = new String(delivery.getBody());
        break;
    }
}
com.rabbitmq.client.QueueingConsumer has a nextDelivery(long timeout) method, which will do what you want. However, this has been deprecated.
Writing your own timeout isn't so hard, although it may be better to have an ongoing thread and a list of in-time identifiers, rather than adding and removing consumers and associated timeout threads all the time.
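Since QueueingConsumer is deprecated, here is a minimal sketch of the do-it-yourself approach using a DefaultConsumer that hands replies to a BlockingQueue, polled with a timeout (requestQueueName, corrId and message are assumed to exist as in the tutorial code above):

// requires com.rabbitmq.client.* and java.util.concurrent.*
final BlockingQueue<String> replies = new ArrayBlockingQueue<>(1);
String replyQueue = channel.queueDeclare().getQueue();   // exclusive, auto-delete reply queue
AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
        .correlationId(corrId)
        .replyTo(replyQueue)
        .build();
channel.basicPublish("", requestQueueName, props, message.getBytes("UTF-8"));
channel.basicConsume(replyQueue, true, new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        if (corrId.equals(properties.getCorrelationId())) {
            replies.offer(new String(body, "UTF-8"));
        }
    }
});
// Wait at most one second for the reply; null means the timeout was reached
String response = replies.poll(1, TimeUnit.SECONDS);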
Edit to add: Noticed the date on this after replying!
There is a similar question. Although its answers don't use Java, maybe you can get some hints:
Wait for a single RabbitMQ message with a timeout
I approached this problem using C# by creating an object to keep track of the response to a particular message. It sets up a unique reply queue for a message, and subscribes to it. If the response is not received in a specified timeframe, a countdown timer cancels the subscription, which deletes the queue. Separately, I have methods that can be synchronous from my main thread (uses a semaphore) or asynchronous (uses a callback) to utilize this functionality.
Basically, the implementation looks like this:
//Synchronous case:
//Throws TimeoutException if timeout happens
var msg = messageClient.SendAndWait(theMessage);
//Asynchronous case
//myCallback receives an exception message if there is a timeout
messageClient.SendAndCallback(theMessage, myCallback);