Ensure that AMQP exchange binding exists before publishing - RabbitMQ

The System Layout
We have three systems:
1. An API endpoint (publisher and consumer)
2. The RabbitMQ server
3. The main application/processor (publisher and consumer)
Systems 1 and 3 both use Laravel, and use php-amqplib to interact with RabbitMQ.
The path of a message
System 1 (the API endpoint) sends a serialized job to the RabbitMQ server for System 3 to process. It then immediately declares a new, randomly named queue, binds that queue to an exchange using the correlation ID as the routing key, and starts listening for messages.
Meanwhile, System 3 finishes the job and, once it does, publishes the details of that job back to RabbitMQ on the exchange, using the correlation ID as the routing key.
The issue and what I've tried
I often find that this process fails. The job gets sent and received, and the response gets sent, but System 1 never reads the response, and I don't see it published anywhere in RabbitMQ.
I've done some extensive debugging of this without getting to a root cause. My current theory is that System 3 returns a response so quickly that the new queue and exchange binding haven't even been declared by System 1 yet. This means the response from System 3 has nowhere to go and, as a result, vanishes. This theory is mainly based on the fact that if I set jobs to be processed at a lower frequency on System 3, the system becomes more reliable; the faster the jobs process, the more unreliable it becomes.
The question is: how can I prevent that? Or is there something else that I'm missing? I of course want these jobs to process quickly and efficiently without breaking the request/response pattern.
I've logged output from both systems: both are working with the same correlation IDs, and System 3 gets an ACK upon publishing, whilst System 1 has a declared queue with no messages that eventually just times out.
Code Example 1: Publishing a Message
/**
 * Helper method to publish a message to RabbitMQ
 *
 * @param $exchange
 * @param $message
 * @param $correlation_id
 * @return bool
 */
public static function publishAMQPRouteMessage($exchange, $message, $correlation_id)
{
    try {
        $connection = new AMQPStreamConnection(
            env('RABBITMQ_HOST'),
            env('RABBITMQ_PORT'),
            env('RABBITMQ_LOGIN'),
            env('RABBITMQ_PASSWORD'),
            env('RABBITMQ_VHOST')
        );
        $channel = $connection->channel();
        $channel->set_ack_handler(function (AMQPMessage $message) {
            Log::info('[AMQPLib::publishAMQPRouteMessage()] - Message ACK');
        });
        $channel->set_nack_handler(function (AMQPMessage $message) {
            Log::error('[AMQPLib::publishAMQPRouteMessage()] - Message NACK');
        });
        $channel->confirm_select();
        $channel->exchange_declare(
            $exchange,
            'direct',
            false,   // passive
            false,   // durable
            false    // auto_delete
        );
        $msg = new AMQPMessage($message);
        $channel->basic_publish($msg, $exchange, $correlation_id);
        $channel->wait_for_pending_acks();
        $channel->close();
        $connection->close();
        return true;
    } catch (Exception $e) {
        return false;
    }
}
Code Example 2: Waiting for a Message Response
/**
 * Helper method to fetch messages from RabbitMQ.
 *
 * @param $exchange
 * @param $correlation_id
 * @return mixed
 */
public static function readAMQPRouteMessage($exchange, $correlation_id)
{
    $connection = new AMQPStreamConnection(
        env('RABBITMQ_HOST'),
        env('RABBITMQ_PORT'),
        env('RABBITMQ_LOGIN'),
        env('RABBITMQ_PASSWORD'),
        env('RABBITMQ_VHOST')
    );
    $channel = $connection->channel();
    $channel->exchange_declare(
        $exchange,
        'direct',
        false,   // passive
        false,   // durable
        false    // auto_delete
    );
    list($queue_name, ,) = $channel->queue_declare(
        '',      // let RabbitMQ generate a queue name
        false,   // passive
        false,   // durable
        true,    // exclusive
        false    // auto_delete
    );
    $channel->queue_bind($queue_name, $exchange, $correlation_id);
    $callback = function ($msg) {
        return self::$rfcResponse = $msg->body;
    };
    $channel->basic_consume(
        $queue_name,
        '',      // consumer tag (auto-generated)
        false,   // no_local
        true,    // no_ack (auto-acknowledge)
        false,   // exclusive
        false,   // nowait
        $callback
    );
    if (!count($channel->callbacks)) {
        Log::error('[AMQPLib::readAMQPRouteMessage()] - No callbacks registered!');
    }
    while (self::$rfcResponse === null && count($channel->callbacks)) {
        $channel->wait();
    }
    $channel->close();
    $connection->close();
    return self::$rfcResponse;
}
Grateful for any advice you can offer!

I may be missing something, but when I read this:
System 1 (the API endpoint) sends a serialized job to the RabbitMQ server for System 3 to process. It then immediately declares a new, randomly named queue, binds that queue to an exchange using the correlation ID as the routing key, and starts listening for messages.
My first thought was "why do you wait until the message is sent before declaring the return queue?"
In fact, we have a whole series of separate steps here:
1. Generating a correlation ID
2. Publishing a message containing that ID to an exchange for processing elsewhere
3. Declaring a new queue to receive responses
4. Binding the queue to an exchange using the correlation ID
5. Binding a callback to the new queue
6. Waiting for responses
The response cannot come until after step 2, so we want to do that as late as possible. The only step that can't come before that is step 6, but it's probably convenient to keep steps 5 and 6 close together in the code. So I would rearrange the code to:
1. Generating a correlation ID
2. Declaring a new queue to receive responses
3. Binding the queue to an exchange using the correlation ID
4. Publishing a message containing the correlation ID to an exchange for processing elsewhere
5. Binding a callback to the new queue
6. Waiting for responses
This way, however quickly the response is published, it will be picked up by the queue declared in step 2, and as soon as you bind a callback and start waiting, you will process it.
Note that there is nothing that readAMQPRouteMessage knows that publishAMQPRouteMessage doesn't, so you can freely move code between them. All you need when you want to consume from the response queue is its name, which you can either save into a variable and pass around, or generate yourself rather than letting RabbitMQ name it. For instance, you could name it after the correlation ID it is listening for, so that you can always work out what it is with simple string manipulation, e.g. "job_response.{$correlation_id}". A rough sketch of the rearranged flow follows.
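To make that concrete, here is a minimal sketch of the rearranged flow with php-amqplib, combining the two helpers into one request/response method. It assumes the same env-based connection settings as your code above; the method name requestAMQPRouteMessage, the separate request/response exchange arguments, the 30-second timeout and the job_response queue name are illustrative, not taken from your code, and publisher confirms are omitted for brevity:
/**
 * Illustrative sketch only: declare and bind the response queue BEFORE
 * publishing the request, so the response always has somewhere to land.
 *
 * Assumes: use PhpAmqpLib\Connection\AMQPStreamConnection;
 *          use PhpAmqpLib\Message\AMQPMessage;
 *          use PhpAmqpLib\Exception\AMQPTimeoutException;
 *
 * @param $request_exchange   exchange the job is published to
 * @param $response_exchange  exchange System 3 publishes the reply to
 * @param $message
 * @param $correlation_id
 * @return mixed              the response body, or null on timeout
 */
public static function requestAMQPRouteMessage($request_exchange, $response_exchange, $message, $correlation_id)
{
    $connection = new AMQPStreamConnection(
        env('RABBITMQ_HOST'),
        env('RABBITMQ_PORT'),
        env('RABBITMQ_LOGIN'),
        env('RABBITMQ_PASSWORD'),
        env('RABBITMQ_VHOST')
    );
    $channel = $connection->channel();
    $channel->exchange_declare($request_exchange, 'direct', false, false, false);
    $channel->exchange_declare($response_exchange, 'direct', false, false, false);

    // Steps 2 + 3: declare and bind the response queue first, named after the
    // correlation ID so any other code can reconstruct the name if needed.
    $queue_name = "job_response.{$correlation_id}";
    $channel->queue_declare($queue_name, false, false, true, false); // non-durable, exclusive
    $channel->queue_bind($queue_name, $response_exchange, $correlation_id);

    // Step 4: only now publish the request. Any response published from this
    // point on is captured by the already-bound queue, even before we consume.
    $channel->basic_publish(new AMQPMessage($message), $request_exchange, $correlation_id);

    // Steps 5 + 6: attach the callback and wait for the response.
    $response = null;
    $callback = function ($msg) use (&$response) {
        $response = $msg->body;
    };
    $channel->basic_consume($queue_name, '', false, true, false, false, $callback);
    try {
        while ($response === null) {
            $channel->wait(null, false, 30); // give up after 30 seconds
        }
    } catch (AMQPTimeoutException $e) {
        // no response arrived in time; fall through and return null
    }

    $channel->close();
    $connection->close();
    return $response;
}
If the publish and the consume happen in different requests or processes, you can split this method at the basic_publish call and rebuild the queue name from the correlation ID on the consuming side.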

Related

Getting a failure callback for a producer in RabbitMQ when back pressure kicks in

I want to capture the failed messages for my RabbitMQ producers using some callback API. I have configured RabbitMQ with [{rabbit, [{vm_memory_high_watermark, 0.001}]}] and tried pushing a lot of messages, but all the messages get accepted, a TimeoutException comes later on, and the messages never get sent to the queue. Please tell me how to capture this.
Code for sending message:
// #create-sink - producer
final Sink<ByteString, CompletionStage<Done>> amqpSink =
    AmqpSink.createSimple(
        AmqpSinkSettings.create(connectionProvider)
            .withRoutingKey(AkkaConstants.queueName)
            .withDeclaration(queueDeclaration));
// #run-sink
//final List<String> input = Arrays.asList("one", "two", "three", "four", "five");
//Source.from(input).map(ByteString::fromString).runWith(amqpSink, materializer);
String filePath = "D:\\subrata\\code\\akkaAmqpTest-master\\akkaAmqpTest-master\\logs2\\dummy.txt";
Path path = Paths.get(filePath);
// List containing 78198 individual messages
List<String> contents = Files.readAllLines(path);
System.out.println("********** file reading done ....");
int times = 5;
// Send 78198*times messages to the queue [from the console I can see 400000 messages being sent]
for (int i = 0; i < times; i++) {
    Source.from(contents).map(ByteString::fromString).runWith(amqpSink, materializer);
}
System.out.println("************* sending to queue is done");
Unfortunately, that is currently not supported out of the box. Ideally the producer would be modeled as a Flow which would send all incoming messages to the AMQP broker and emit each message together with a result of whether it was successfully sent to the broker or not. There is a ticket to track this possible improvement on the Alpakka issue tracker.

message-driven-channel-adapter drops first message after app context startup unless send is called with a delay

I have an integration test for my Spring Integration config, which consumes messages from a JMS topic with durable subscription. For testing, I am using ActiveMQ instead of Tibco EMS.
The issue I have is that I have to delay sending the first message to the endpoint using a sleep call at the beginning of our test method. Otherwise the message is dropped.
If I remove the setting for durable subscription and selector, then the first message can be sent right away without delay.
I'd like to get rid of the sleep, which is unreliable. Is there a way to check if the endpoint is completely setup before I send the message?
Below is the configuration.
Thanks for your help!
<int-jms:message-driven-channel-adapter
    id="myConsumer" connection-factory="myCachedConnectionFactory"
    destination="myTopic" channel="myChannel" error-channel="errorChannel"
    pub-sub-domain="true" subscription-durable="true"
    durable-subscription-name="testDurable"
    selector="..."
    transaction-manager="emsTransactionManager" auto-startup="false"/>
If you are using a clean embedded ActiveMQ for the test, the durability of the subscription is irrelevant until the subscription is established, so you have no choice but to wait until that happens.
You could avoid the sleep by sending a series of startup messages and only start the real test when the last one is received.
EDIT
I forgot that there is a method isRegisteredWithDestination() on the DefaultMessageListenerContainer.
Javadocs...
/**
 * Return whether at least one consumer has entered a fixed registration with the
 * target destination. This is particularly interesting for the pub-sub case where
 * it might be important to have an actual consumer registered that is guaranteed
 * not to miss any messages that are just about to be published.
 * <p>This method may be polled after a {@link #start()} call, until asynchronous
 * registration of consumers has happened which is when the method will start returning
 * {@code true} – provided that the listener container ever actually establishes
 * a fixed registration. It will then keep returning {@code true} until shutdown,
 * since the container will hold on to at least one consumer registration thereafter.
 * <p>Note that a listener container is not bound to having a fixed registration in
 * the first place. It may also keep recreating consumers for every invoker execution.
 * This particularly depends on the {@link #setCacheLevel cache level} setting:
 * only {@link #CACHE_CONSUMER} will lead to a fixed registration.
 */
We use it in some channel tests, where we get the container using reflection and then poll the method until we are subscribed to the topic.
/**
 * Blocks until the listener container has subscribed; if the container does not support
 * this test, or the caching mode is incompatible, true is returned. Otherwise blocks
 * until timeout milliseconds have passed, or the consumer has registered.
 * @see DefaultMessageListenerContainer#isRegisteredWithDestination()
 * @param timeout Timeout in milliseconds.
 * @return True if a subscriber has connected or the container/attributes does not support
 * the test. False if a valid container does not have a registered consumer within
 * timeout milliseconds.
 */
private static boolean waitUntilRegisteredWithDestination(SubscribableJmsChannel channel, long timeout) {
    AbstractMessageListenerContainer container =
        (AbstractMessageListenerContainer) new DirectFieldAccessor(channel).getPropertyValue("container");
    if (container instanceof DefaultMessageListenerContainer) {
        DefaultMessageListenerContainer listenerContainer =
            (DefaultMessageListenerContainer) container;
        if (listenerContainer.getCacheLevel() != DefaultMessageListenerContainer.CACHE_CONSUMER) {
            return true;
        }
        while (timeout > 0) {
            if (listenerContainer.isRegisteredWithDestination()) {
                return true;
            }
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) { }
            timeout -= 100;
        }
        return false;
    }
    return true;
}

RabbitMQ: difference between envelope.message.body and envelope.message.pool

I have a producer based on Node.js, using the amqp.node JavaScript library, and the consumer is implemented with the C library.
From the RabbitMQ management web UI, I can see the messages are pushed into the queue and delivered to the consumer. In the consumer, amqp_consume_message returns AMQP_RESPONSE_NORMAL; however, envelope.message.body is null. How can I debug this?
Here is my code to consume messages from RabbitMQ:
amqp_rpc_reply_t reply;
amqp_envelope_t envelope;
amqp_maybe_release_buffers(m_con);
timeval m_time;
m_time.tv_sec = dwMilliseconds / 1000;
m_time.tv_usec = (dwMilliseconds % 1000) * 1000;
reply = amqp_consume_message(m_con, &envelope, &m_time, 0); // time out 1 second
if (AMQP_RESPONSE_NORMAL != reply.reply_type)
{
    return false;
}
bool bRet = false;
amqp_bytes_t& rTheBody = envelope.message.body;
if (rTheBody.len > 0)
{
Update
After further investigation, I found that those messages are stored in envelope.message.pool.pages. What is the difference between message.body and message.pool?
Quoting this
The pool field of the amqp_message_t object (e.g., envelope.message.pool) is a memory pool used for allocating parts of the message. It is an implementation detail and should not be used by client code directly (this implementation detail is subject to change).
The only reason that the envelope.message.body.bytes should be NULL with an AMQP_RESPONSE_NORMAL return value is if a 0-length message body is received.

RabbitMQ - Non Blocking Consumer with Manual Acknowledgement

I'm just starting to learn RabbitMQ, so forgive me if my question is very basic.
My problem is actually the same as the one posted here:
RabbitMQ - Does one consumer block the other consumers of the same queue?
However, upon investigation, I found out that manual acknowledgement prevents other consumers from getting a message from the queue (a blocking state). I would like to know how I can prevent it. Below is my code snippet.
...
var message = receiver.ReadMessage();
Console.WriteLine("Received: {0}", message);
// simulate processing
System.Threading.Thread.Sleep(8000);
receiver.Acknowledge();

public string ReadMessage()
{
    bool autoAck = false;
    Consumer = new QueueingBasicConsumer(Model);
    Model.BasicConsume(QueueName, autoAck, Consumer);
    _ea = (BasicDeliverEventArgs)Consumer.Queue.Dequeue();
    return Encoding.ASCII.GetString(_ea.Body);
}

public void Acknowledge()
{
    Model.BasicAck(_ea.DeliveryTag, false);
}
I modified how I get messages from the queue, and it seems the blocking issue was fixed. Below is my code.
public string ReadOneAtTime()
{
    Consumer = new QueueingBasicConsumer(Model);
    var result = Model.BasicGet(QueueName, false);
    if (result == null) return null;
    DeliveryTag = result.DeliveryTag;
    return Encoding.ASCII.GetString(result.Body);
}

public void Reject()
{
    Model.BasicReject(DeliveryTag, true);
}

public void Acknowledge()
{
    Model.BasicAck(DeliveryTag, false);
}
Going back to my original question, I added the QoS setting and noticed that other consumers can now get messages. However, some are left unacknowledged and my program seems to hang. Code changes are below:
public string ReadMessage()
{
    Model.BasicQos(0, 1, false); // control prefetch
    bool autoAck = false;
    Consumer = new QueueingBasicConsumer(Model);
    Model.BasicConsume(QueueName, autoAck, Consumer);
    _ea = Consumer.Queue.Dequeue();
    return Encoding.ASCII.GetString(_ea.Body);
}

public void AckConsume()
{
    Model.BasicAck(_ea.DeliveryTag, false);
}
In Program.cs
private static void Consume(Receiver receiver)
{
    int counter = 0;
    while (true)
    {
        var message = receiver.ReadMessage();
        if (message == null)
        {
            Console.WriteLine("NO message received.");
            break;
        }
        else
        {
            counter++;
            Console.WriteLine("Received: {0}", message);
            receiver.AckConsume();
        }
    }
    Console.WriteLine("Total message received {0}", counter);
}
I appreciate any comments and suggestions. Thanks!
Well, RabbitMQ provides infrastructure whereby one consumer can't lock or block another consumer working with the same queue.
The behaviour you are facing can be the result of a couple of issues:
The fact that you are not using auto-ack mode on the channel leads to a situation where one consumer takes a message and hasn't yet sent an approval (basic ack), meaning that the computation is still in progress and there is a chance that the consumer will fail to process this message, in which case it should be kept in the RabbitMQ queue to prevent message loss (the total number of messages will not change in the management console). During this period (from getting the message into client code until sending the explicit acknowledgement), the message is marked as being used by that specific client and is not available to other consumers. However, this doesn't prevent other consumers from taking other messages from the queue, if there are more messages to take.
IMPORTANT: to prevent message loss with manual acknowledgement, make sure to close the channel or send a nack in case of a processing fault, to prevent the situation where your application takes the message from the queue, fails to process it, removes it from the queue, and the message is lost.
Another reason why other consumers can't work with the same queue is QoS, a channel parameter that declares how many messages should be pushed to the client's cache to improve dequeue latency (working with a local cache). Your original code example lacks this part, so I am just guessing: the QoS value could be so big that all the messages on the server are marked as belonging to one client, and no other client can take any of them, exactly like with the manual ack I've already described.
Hope this helps.

RabbitMQ: retrieve multiple messages using a single synchronous call

Is there a way to receive multiple messages using a single synchronous call?
When I know that there are N messages (N could be a small value, less than 10) in the queue, I should be able to do something like channel.basic_get(String queue, boolean autoAck, int numberofMsg). I don't want to make multiple requests to the server.
RabbitMQ's basic.get doesn't support multiple messages, unfortunately, as seen in the docs. The preferred method to retrieve multiple messages is to use basic.consume, which will push the messages to the client, avoiding multiple round trips. Acks are asynchronous, so your client won't be waiting for the server to respond. basic.consume also has the benefit of allowing RabbitMQ to redeliver the message if the client disconnects, something that basic.get cannot do. This can be turned off as well by setting no-ack to true.
Setting the basic.qos prefetch-count will set the number of messages to push to the client at any time. If there isn't a message waiting on the client side (which would return immediately), client libraries tend to block, with an optional timeout. A rough sketch of this approach is shown below.
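As a sketch of that consume-with-prefetch approach, here is roughly how it might look with php-amqplib (the library used in the question at the top of this page); the drainQueue name, the message limit and the 1-second timeout are illustrative assumptions:
// Assumes an open php-amqplib channel; stops after $max messages or when the
// queue runs dry (wait() throws AMQPTimeoutException once nothing arrives).
use PhpAmqpLib\Exception\AMQPTimeoutException;

function drainQueue($channel, $queue, $max = 10)
{
    $messages = [];
    $channel->basic_qos(0, $max, false); // prefetch up to $max messages at once
    $channel->basic_consume($queue, '', false, false, false, false,
        function ($msg) use (&$messages, $channel) {
            $messages[] = $msg->body;
            $channel->basic_ack($msg->delivery_info['delivery_tag']); // manual ack per message
        });
    try {
        while (count($messages) < $max) {
            $channel->wait(null, false, 1); // 1-second timeout per wait
        }
    } catch (AMQPTimeoutException $e) {
        // no more messages within the timeout; return what we have
    }
    return $messages;
}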
You can use QueueingConsumer, an implementation of the Consumer interface, which allows you to retrieve several messages in a single request.
QueueingConsumer queueingConsumer = new QueueingConsumer(channel);
channel.basicConsume(plugin.getQueueName(), false, queueingConsumer);
for (int i = 0; i < 10; i++) {
    QueueingConsumer.Delivery delivery = queueingConsumer.nextDelivery(100); // read timeout in ms
    if (delivery == null) {
        break;
    }
}
Not an elegant solution, and it does not avoid making multiple calls, but you can use the MessageCount method. For example:
bool noAck = false;
var messageCount = channel.MessageCount("hello");
BasicGetResult result = null;
if (messageCount == 0)
{
    // No messages available
}
else
{
    while (messageCount > 0)
    {
        result = channel.BasicGet("hello", noAck);
        var message = Encoding.UTF8.GetString(result.Body);
        // process message .....
        messageCount = channel.MessageCount("hello");
    }
}
First, declare an instance of QueueingBasicConsumer(), which wraps the model.
From the model, execute model.BasicConsume(QueueName, false, consumer).
Then implement a loop that iterates over the messages from the queue and processes them.
The consumer.Queue.Dequeue() call waits for a message to be received from the queue.
Then convert the byte array to a string and display it.
Model.BasicAck() releases the message from the queue so the next message can be received.
The consumer can then start waiting for the next message to come through:
public string GetMessagesByQueue(string QueueName)
{
    var consumer = new QueueingBasicConsumer(_model);
    _model.BasicConsume(QueueName, false, consumer);
    string message = string.Empty;
    while (Enabled)
    {
        // Get next message
        var deliveryArgs = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
        // Serialize message
        message = Encoding.Default.GetString(deliveryArgs.Body);
        _model.BasicAck(deliveryArgs.DeliveryTag, false);
    }
    return message;
}