RabbitMQ: retrieve multiple messages using a single synchronous call

Is there a way to receive multiple messages using a single synchronous call?
When I know that there are N messages in the queue (N could be a small value, less than 10), I should be able to do something like channel.basic_get(String queue, boolean autoAck, int numberofMsg). I don't want to make multiple requests to the server.

RabbitMQ's basic.get doesn't support retrieving multiple messages, unfortunately, as seen in the docs. The preferred way to fetch multiple messages is basic.consume, which pushes messages to the client and avoids multiple round trips. Acks are asynchronous, so your client won't be left waiting for the server to respond. basic.consume also has the benefit of allowing RabbitMQ to redeliver the message if the client disconnects, something that basic.get cannot do. This can be turned off as well, by setting no-ack to true.
Setting the basic.qos prefetch-count controls how many messages are pushed to the client at any one time. If there isn't a message waiting on the client side (which would return immediately), client libraries tend to block, with an optional timeout.
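For illustration, here is a minimal sketch of that approach with the .NET client (the Java client exposes the same basic.qos/basic.consume operations). The class, method and queue names are made up for the example, and it assumes the older byte[]-body API used elsewhere on this page:
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class PrefetchConsumerSketch
{
    // Prefetch up to 10 messages and let the broker push them to the consumer.
    public static void ConsumeWithPrefetch(IModel channel, string queueName)
    {
        channel.BasicQos(0, 10, false);                   // prefetchSize, prefetchCount (10), global
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            var body = Encoding.UTF8.GetString(ea.Body);
            Console.WriteLine("Received: {0}", body);
            channel.BasicAck(ea.DeliveryTag, false);      // ack each message; the client doesn't wait for a reply
        };
        channel.BasicConsume(queueName, false, consumer); // manual ack, so unacked messages are redelivered if the client disconnects
    }
}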

You can use QueueingConsumer, an implementation of the Consumer interface, which allows you to retrieve several messages in a single request:
QueueingConsumer queueingConsumer = new QueueingConsumer(channel);
channel.basicConsume(plugin.getQueueName(), false, queueingConsumer); // autoAck = false
for (int i = 0; i < 10; i++) {
    // wait up to 100 ms for the next delivery; returns null on timeout
    QueueingConsumer.Delivery delivery = queueingConsumer.nextDelivery(100);
    if (delivery == null) {
        break;
    }
    // process delivery.getBody(), then acknowledge it
    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}

Not an elegant solution, and it doesn't avoid making multiple calls, but you can use the MessageCount method. For example:
bool noAck = false;
var messageCount = channel.MessageCount("hello");
BasicGetResult result = null;
if (messageCount == 0)
{
    // No messages available
}
else
{
    while (messageCount > 0)
    {
        result = channel.BasicGet("hello", noAck);
        var message = Encoding.UTF8.GetString(result.Body);
        // process message .....
        messageCount = channel.MessageCount("hello");
    }
}

First, declare an instance of QueueingBasicConsumer(), which wraps the model.
From the model, execute model.BasicConsume(QueueName, false, consumer).
Then implement a loop that iterates over the messages from the queue and processes them.
The consumer.Queue.Dequeue() method waits for the next message to be received from the queue.
Then convert the byte array to a string and display it.
Model.BasicAck() releases the message from the queue so the next message can be received,
and then the consumer can start waiting for the next message to come through:
public string GetMessagesByQueue(string QueueName)
{
    var consumer = new QueueingBasicConsumer(_model);
    _model.BasicConsume(QueueName, false, consumer);
    string message = string.Empty;
    while (Enabled)
    {
        // Get next message
        var deliveryArgs = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
        // Decode the message body
        message = Encoding.Default.GetString(deliveryArgs.Body);
        _model.BasicAck(deliveryArgs.DeliveryTag, false);
    }
    return message;
}


Ensure that AMQP exchange binding exists before publishing

The System Layout
We have three systems:
An API Endpoint (Publisher and Consumer)
The RabbitMQ Server
The main application/processor (Publisher and consumer)
System 1 and 3 both use Laravel, and use PHPAMQPLIB for interaction with RabbitMQ.
The path of a message
System 1 (the API Endpoint) sends a serialized job to the RabbitMQ Server for System 3 to process. It then immediately declares a new randomly named queue, binds an exchange to that queue with a correlation ID - and starts to listen for messages.
Meanwhile, system 3 finishes the job, and once it does, responds back with details from that job to RabbitMQ, on the exchange, with the correlation ID.
The issue and what I've tried
I often find that this process fails. The job gets sent and received, and the response gets sent - but system 1 never reads this response, and I don't see it published in RabbitMQ.
I've done some extensive debugging of this without getting to a root cause. My current theory is that System 3 is so quick at returning a response, that the new queue and exchange binding hasn't even been declared yet from System 1. This means the response from System 3 has nowhere to go, and as a result vanishes. This theory is mainly based on the fact that if I set jobs to be processed at a lower frequency on System 3, the system becomes more reliable. The faster the jobs process, the more unreliable it becomes.
The question is: How can I prevent that? Or is there something else that I'm missing? I of course want these jobs to process quickly and efficiently without breaking the Request/Response-pattern.
I've logged output from both systems - both are working with the same correlation IDs, and System 3 gets an ACK upon publishing - whilst System 1 has a declared queue with no messages that eventually just times out.
Code Example 1: Publishing a Message
/**
 * Helper method to publish a message to RabbitMQ
 *
 * @param $exchange
 * @param $message
 * @param $correlation_id
 * @return bool
 */
public static function publishAMQPRouteMessage($exchange, $message, $correlation_id)
{
    try {
        $connection = new AMQPStreamConnection(
            env('RABBITMQ_HOST'),
            env('RABBITMQ_PORT'),
            env('RABBITMQ_LOGIN'),
            env('RABBITMQ_PASSWORD'),
            env('RABBITMQ_VHOST')
        );
        $channel = $connection->channel();
        $channel->set_ack_handler(function (AMQPMessage $message) {
            Log::info('[AMQPLib::publishAMQPRouteMessage()] - Message ACK');
        });
        $channel->set_nack_handler(function (AMQPMessage $message) {
            Log::error('[AMQPLib::publishAMQPRouteMessage()] - Message NACK');
        });
        $channel->confirm_select();
        $channel->exchange_declare(
            $exchange,
            'direct',
            false,
            false,
            false
        );
        $msg = new AMQPMessage($message);
        $channel->basic_publish($msg, $exchange, $correlation_id);
        $channel->wait_for_pending_acks();
        $channel->close();
        $connection->close();
        return true;
    } catch (Exception $e) {
        return false;
    }
}
Code Example 2: Waiting for a Message Response
/**
 * Helper method to fetch messages from RabbitMQ.
 *
 * @param $exchange
 * @param $correlation_id
 * @return mixed
 */
public static function readAMQPRouteMessage($exchange, $correlation_id)
{
    $connection = new AMQPStreamConnection(
        env('RABBITMQ_HOST'),
        env('RABBITMQ_PORT'),
        env('RABBITMQ_LOGIN'),
        env('RABBITMQ_PASSWORD'),
        env('RABBITMQ_VHOST')
    );
    $channel = $connection->channel();
    $channel->exchange_declare(
        $exchange,
        'direct',
        false,
        false,
        false
    );
    list($queue_name, ,) = $channel->queue_declare(
        '',
        false,
        false,
        true,
        false
    );
    $channel->queue_bind($queue_name, $exchange, $correlation_id);
    $callback = function ($msg) {
        return self::$rfcResponse = $msg->body;
    };
    $channel->basic_consume(
        $queue_name,
        '',
        false,
        true,
        false,
        false,
        $callback
    );
    if (!count($channel->callbacks)) {
        Log::error('[AMQPLib::readAMQPRouteMessage()] - No callbacks registered!');
    }
    while (self::$rfcResponse === null && count($channel->callbacks)) {
        $channel->wait();
    }
    $channel->close();
    $connection->close();
    return self::$rfcResponse;
}
Grateful for any advice you can offer!
I may be missing something, but when I read this:
System 1 (the API Endpoint) sends a serialized job to the RabbitMQ Server for System 3 to process. It then immediately declares a new randomly named queue, binds an exchange to that queue with a correlation ID - and starts to listen for messages.
My first thought was "why do you wait until the message is sent before declaring the return queue?"
In fact, we have a whole series of separate steps here:
Generating a correlation ID
Publishing a message containing that ID to an exchange for processing elsewhere
Declaring a new queue to receive responses
Binding the queue to an exchange using the correlation ID
Binding a callback to the new queue
Waiting for responses
The response cannot come until after step 2, so we want to do that as late as possible. The only step that can't come before that is step 6, but it's probably convenient to keep steps 5 and 6 close together in the code. So I would rearrange the code to:
Generating a correlation ID
Declaring a new queue to receive responses
Binding the queue to an exchange using the correlation ID
Publishing a message containing the correlation ID to an exchange for processing elsewhere
Binding a callback to the new queue
Waiting for responses
This way, however quickly the response is published, it will be picked up by the queue declared in step 2, and as soon as you bind a callback and start waiting, you will process it.
Note that there is nothing that readAMQPRouteMessage knows that publishAMQPRouteMessage doesn't, so you can freely move code between them. All you need when you want to consume from the response queue is its name, which you can either save into a variable and pass around, or generate yourself rather than letting RabbitMQ name it. For instance, you could name it after the correlation ID it is listening for, so that you can always work out what it is with simple string manipulation, e.g. "job_response.{$correlation_id}".
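To make that concrete, here is a minimal sketch of the reordered flow. It uses the .NET client purely for brevity (the same sequence applies with php-amqplib), and the exchange names, queue-naming scheme and helper name are assumptions for illustration, not part of the original code:
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class RpcSketch
{
    // Declare and bind the reply queue BEFORE publishing the request,
    // so a fast response always has somewhere to land.
    public static string SendJobAndWaitForReply(IModel channel, string requestJson)
    {
        var correlationId = Guid.NewGuid().ToString();
        var replyQueue = "job_response." + correlationId;           // name derived from the correlation ID

        // Steps 1-3: correlation ID, reply queue, binding
        channel.ExchangeDeclare("job_responses", "direct");
        channel.QueueDeclare(replyQueue, false, true, true, null);  // non-durable, exclusive, auto-delete
        channel.QueueBind(replyQueue, "job_responses", correlationId);

        // Step 4: only now publish the request
        var props = channel.CreateBasicProperties();
        props.CorrelationId = correlationId;
        channel.BasicPublish("jobs", "process", props, Encoding.UTF8.GetBytes(requestJson));

        // Steps 5-6: attach a consumer to the reply queue and wait (auto-ack, 30 s timeout)
        var consumer = new QueueingBasicConsumer(channel);
        channel.BasicConsume(replyQueue, true, consumer);
        BasicDeliverEventArgs reply;
        return consumer.Queue.Dequeue(30000, out reply)
            ? Encoding.UTF8.GetString(reply.Body)
            : null;
    }
}
The important property is simply that the reply queue and its binding exist before basic_publish is called.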

RabbitMQ: difference between envelope.message.body and envelope.message.pool

I have one producer based on Node.js (the JavaScript library I used is amqp.node), and the consumer is implemented with the C library.
From the RabbitMQ management web UI, I can see the messages are pushed into the queue and delivered to the consumer. In the consumer, amqp_consume_message returns AMQP_RESPONSE_NORMAL; however, envelope.message.body is null. How can I debug this?
Here is my code to consume messages from RabbitMQ:
amqp_rpc_reply_t reply;
amqp_envelope_t envelope;
amqp_maybe_release_buffers(m_con);
timeval m_time;
m_time.tv_sec = dwMilliseconds / 1000;
m_time.tv_usec = (dwMilliseconds % 1000) * 1000;
reply = amqp_consume_message(m_con, &envelope, &m_time, 0); // timeout of dwMilliseconds
if (AMQP_RESPONSE_NORMAL != reply.reply_type)
{
    return false;
}
bool bRet = false;
amqp_bytes_t& rTheBody = envelope.message.body;
if (rTheBody.len > 0)
{
    // copy rTheBody.bytes (rTheBody.len bytes) into a local buffer here
    bRet = true;
}
amqp_destroy_envelope(&envelope);
return bRet;
Update
After further investigation, I found that those messages are stored in envelope.message.pool.pages. What is the difference between message.body and message.pool?
Quoting this:
The pool field of the amqp_message_t object (e.g., envelope.message.pool) is a memory pool used for allocating parts of the message. It is an implementation detail and should not be used by client code directly (this implementation detail is subject to change).
The only reason that envelope.message.body.bytes should be NULL with an AMQP_RESPONSE_NORMAL return value is if a 0-length message body is received.

RabbitMQ - Non Blocking Consumer with Manual Acknowledgement

I'm just starting to learn RabbitMQ, so forgive me if my question is very basic.
My problem is actually the same as the one posted here:
RabbitMQ - Does one consumer block the other consumers of the same queue?
However, upon investigation, I found out that manual acknowledgement prevents other consumers from getting a message from the queue - a blocking state. I would like to know how I can prevent it. Below is my code snippet.
...
var message = receiver.ReadMessage();
Console.WriteLine("Received: {0}", message);
// simulate processing
System.Threading.Thread.Sleep(8000);
receiver.Acknowledge();
public string ReadMessage()
{
    bool autoAck = false;
    Consumer = new QueueingBasicConsumer(Model);
    Model.BasicConsume(QueueName, autoAck, Consumer);
    _ea = (BasicDeliverEventArgs)Consumer.Queue.Dequeue();
    return Encoding.ASCII.GetString(_ea.Body);
}
public void Acknowledge()
{
    Model.BasicAck(_ea.DeliveryTag, false);
}
I modified how I get messages from the queue, and it seems the blocking issue was fixed. Below is my code:
public string ReadOneAtTime()
{
    Consumer = new QueueingBasicConsumer(Model);
    var result = Model.BasicGet(QueueName, false);
    if (result == null) return null;
    DeliveryTag = result.DeliveryTag;
    return Encoding.ASCII.GetString(result.Body);
}
public void Reject()
{
    Model.BasicReject(DeliveryTag, true);
}
public void Acknowledge()
{
    Model.BasicAck(DeliveryTag, false);
}
Going back to my original question, I added the QOS and noticed that other consumers can now get messages. However, some are left unacknowledged and my program seems to hang. Code changes are below:
public string ReadMessage()
{
    Model.BasicQos(0, 1, false); // control prefetch
    bool autoAck = false;
    Consumer = new QueueingBasicConsumer(Model);
    Model.BasicConsume(QueueName, autoAck, Consumer);
    _ea = Consumer.Queue.Dequeue();
    return Encoding.ASCII.GetString(_ea.Body);
}
public void AckConsume()
{
    Model.BasicAck(_ea.DeliveryTag, false);
}
In Program.cs
private static void Consume(Receiver receiver)
{
    int counter = 0;
    while (true)
    {
        var message = receiver.ReadMessage();
        if (message == null)
        {
            Console.WriteLine("NO message received.");
            break;
        }
        else
        {
            counter++;
            Console.WriteLine("Received: {0}", message);
            receiver.AckConsume();
        }
    }
    Console.WriteLine("Total message received {0}", counter);
}
I appreciate any comments and suggestions. Thanks!
Well, RabbitMQ provides infrastructure in which one consumer cannot lock or block other consumers working with the same queue.
The behavior you faced can be the result of a couple of issues:
The fact that you are not using auto-ack mode on the channel leads to a situation where one consumer takes a message but hasn't yet sent an acknowledgement (basic.ack), meaning the computation is still in progress and there is a chance the consumer will fail to process the message, in which case it should be kept in the queue to prevent message loss (the total number of messages in the management console will not change). During this period (from the moment the client code gets the message until it sends an explicit acknowledgement), the message is marked as being handled by that specific client and is not available to other consumers. However, this does not prevent other consumers from taking other messages from the queue, if there are more messages to take.
IMPORTANT: to prevent message loss with manual acknowledgement, make sure
to close the channel or send a nack in case of a processing fault, to
prevent the situation where your application takes a message from the queue,
fails to process it, has it removed from the queue, and loses the message.
Another reason why other consumers can't work with the same queue is QOS - a channel parameter in which you declare how many messages should be pushed to the client's cache to improve dequeue latency (working with a local cache). Your code example lacks this part, so I am just guessing. In a case like this, the QOS can be so big that all the messages on the server are marked as belonging to one client and no other client can take any of them, exactly like with the manual ack I've already described.
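As a rough illustration of both points (the class, queue and method names here are hypothetical), a consumer along these lines holds at most one unacknowledged message at a time, acks on success, and nacks with requeue on failure, so a processing fault neither loses the message nor starves other consumers:
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class SafeConsumerSketch
{
    public static void ConsumeSafely(IModel model, string queueName)
    {
        model.BasicQos(0, 1, false);                          // at most 1 unacked message per consumer
        var consumer = new QueueingBasicConsumer(model);
        model.BasicConsume(queueName, false, consumer);       // manual acknowledgement

        while (true)
        {
            var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue(); // blocks until a message arrives
            try
            {
                var message = Encoding.UTF8.GetString(ea.Body);
                Console.WriteLine("Processing: {0}", message);
                model.BasicAck(ea.DeliveryTag, false);        // success: message leaves the queue
            }
            catch (Exception)
            {
                model.BasicNack(ea.DeliveryTag, false, true); // failure: requeue for another consumer
            }
        }
    }
}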
Hope this helps.

Custom component to read messages from queue

I'm using Mule 3.3.1
I am trying to write a component that reads all available messages from a queue, which I intend to poll using a Quartz scheduler.
Here is my code.
@Override
public Object onCall(MuleEventContext muleEventContext) throws Exception {
    MuleMessage[] messages = null;
    MuleMessage result = muleEventContext.getMessage();
    do {
        if (result == null) {
            break;
        }
        if (result instanceof MuleMessageCollection) {
            MuleMessageCollection resultsCollection = (MuleMessageCollection) result;
            System.out.println("Number of messages: " + resultsCollection.size());
            messages = resultsCollection.getMessagesAsArray();
        } else {
            messages = new MuleMessage[1];
            messages[0] = result;
        }
        result = muleEventContext.getMessage();
    } while (result != null);
    return messages;
}
Unfortunately, it loops indefinitely on the first message. Thoughts?
The onCall() method provided in the post is going to loop infinitely because muleEventContext.getMessage() always returns a MuleMessage, so the loop never terminates.
The MuleEventContext object here is not an iterator or a stream where the pointer advances to the next element after reading the current one.
In order to read all the available messages from the queue, you can have a JMS inbound endpoint that polls the queue and reads all the messages. But remember, each message on the queue will be one iteration (one message) from your JMS inbound endpoint.
If you want to gather all the queue messages as a collection of objects and then proceed, that is a different story; it cannot be done with your component code.
If you intend to gather all your messages as a collection and then start processing, try using something like Mule's collection-aggregator on your inbound.
Hope this helps.

How can a RabbitMQ Client tell when it loses connection to the server?

If I'm connected to RabbitMQ and listening for events using an EventingBasicConsumer, how can I tell if I've been disconnected from the server?
I know there is a Shutdown event, but it doesn't fire if I unplug my network cable to simulate a failure.
I've also tried the ModelShutdown event, and CallbackException on the model but none seem to work.
EDIT:
The one I marked as the answer is correct, but it was only part of the solution for me. There is also heartbeat functionality built into RabbitMQ. The server specifies it in its configuration file; it defaults to 10 minutes, but of course you can change that.
The client can also request a different interval for the heartbeat by setting the RequestedHeartbeat value on the ConnectionFactory instance.
I'm guessing that you're using the C# library? (but even so I think the others have a similar event).
You can do the following:
public class MyRabbitConsumer
{
    private IConnection connection;

    public void Connect()
    {
        connection = CreateAndOpenConnection();
        connection.ConnectionShutdown += connection_ConnectionShutdown;
    }

    public IConnection CreateAndOpenConnection() { ... }

    private void connection_ConnectionShutdown(IConnection connection, ShutdownEventArgs reason)
    {
    }
}
This is an example of it, but the marked answer is what led me to this.
var factory = new ConnectionFactory
{
    HostName = "MY_HOST_NAME",
    UserName = "USERNAME",
    Password = "PASSWORD",
    RequestedHeartbeat = 30
};
using (var connection = factory.CreateConnection())
{
    connection.ConnectionShutdown += (o, e) =>
    {
        // handle disconnect
    };
    using (var model = connection.CreateModel())
    {
        model.ExchangeDeclare(EXCHANGE_NAME, "topic");
        var queueName = model.QueueDeclare();
        model.QueueBind(queueName, EXCHANGE_NAME, "#");
        var consumer = new QueueingBasicConsumer(model);
        model.BasicConsume(queueName, true, consumer);
        while (!stop)
        {
            BasicDeliverEventArgs args;
            consumer.Queue.Dequeue(5000, out args);
            if (stop) return;
            if (args == null) continue;
            if (args.Body.Length == 0) continue;
            Task.Factory.StartNew(() =>
            {
                // Do work here on a different thread than this one
            }, TaskCreationOptions.PreferFairness);
        }
    }
}
A few things to note about this.
I'm using # for the topic. This grabs everything. Usually you want to limit by a topic.
I'm setting a variable called "stop" to determine when the process should end. You'll notice the loop runs forever until that variable is true.
The Dequeue call waits 5 seconds and then returns without data if there is no new message. This is to ensure we check that stop variable and actually quit at some point. Change the value to your liking.
When a message comes in, I spawn the handling code on a new thread. The current thread is reserved for just listening to the RabbitMQ messages, and if a handler takes too long to process, I don't want it slowing down the other messages. You may or may not need this, depending on your implementation. Be careful, however, writing the code that handles the messages. If it takes a minute to run and you're getting messages at sub-second rates, you will run out of memory, or at least run into severe performance issues.