RabbitMQ queue randomly goes empty with around 5K messages remaining

I have a queue to which a lot of messages are published (~10K). Connected to this queue are multiple consumers running the following CodeIgniter code, using the php-amqplib library:
public function processQueue()
{
    // Make connection
    $connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
    // Make channel
    $channel = $connection->channel();
    // Declare queue
    $channel->queue_declare(QUEUE_NAME, false, false, false, false);
    // PHP callable
    $callback = function ($msg) {
        // DO MESSAGE PROCESSING HERE
        $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
    };
    $channel->basic_consume(QUEUE_NAME, '', false, true, false, false, $callback);
    // Wait as long as the channel has registered callbacks (active consumers)
    while (count($channel->callbacks)) {
        $channel->wait();
    }
    // Close channel and connection
    $channel->close();
    $connection->close();
}
The queue fills up with messages, which are simultaneously consumed by multiple such consumers. I've observed that with some 5-6K messages remaining (i.e. after around 4-5K messages have been consumed), the queue suddenly goes empty, with the consumers idling and waiting for more messages. There is also a sudden drop at this point in the total number of messages shown on the RabbitMQ management web panel.
I've tried declaring the queue as durable, but the problem remains the same. What could be the issue, and how can it be resolved?

Related

Is it possible for workers to consume based on topic?

I understand that you can send messages directly to a queue using channel.sendToQueue, and this creates a tasks-and-workers situation, where only one consumer will handle each task.
I also understand that you can use channel.publish with a topic-based exchange, and messages will be routed to queues based on the routing key. To my understanding, though, this will always broadcast to all subscribers on any matching queues.
I would essentially like to use the topic-based exchange, but only have one consumer handle each task. I've been through the documentation, and I don't see a way to do this.
My use case:
I have instances of a microservice set up in multiple locations. There might be two in California, three in London, one in Singapore, etc. When a task is created, the only thing that matters is that it's handled by one of the instances in a given location.
Of course, I can create hundreds of queues named "usa-90210", "uk-ec1a", etc. It just seems like using topics would be much cleaner. It would also be more flexible, given the ability to use wildcards.
If this isn't a feature of RabbitMQ, I'm open to other thoughts or ideas as well.
UPDATE
As per istepaniuk's suggestion, I've tried creating two workers, each binding their own queue to the exchange:
// Worker A: its own exclusive queue, bound to usa.*
const connectionA = await amqplib.connect('amqp://guest:guest@127.0.0.1:5672');
const channelA = await connectionA.createChannel();
await channelA.assertExchange('test_exchange', 'topic', { durable: false });
await channelA.assertQueue('test_queue_a', { exclusive: true });
await channelA.bindQueue('test_queue_a', 'test_exchange', 'usa.*');
await channelA.consume('test_queue_a', () => { console.log('worker a'); }, { noAck: true });

// Worker B: a second exclusive queue with the same binding
const connectionB = await amqplib.connect('amqp://guest:guest@127.0.0.1:5672');
const channelB = await connectionB.createChannel();
await channelB.assertExchange('test_exchange', 'topic', { durable: false });
await channelB.assertQueue('test_queue_b', { exclusive: true });
await channelB.bindQueue('test_queue_b', 'test_exchange', 'usa.*');
await channelB.consume('test_queue_b', () => { console.log('worker b'); }, { noAck: true });

// Publisher
const pubConnection = await amqplib.connect('amqp://guest:guest@127.0.0.1:5672');
const pubChannel = await pubConnection.createChannel();
await pubChannel.assertExchange('test_exchange', 'topic', { durable: false });
pubChannel.publish('test_exchange', 'usa.90210', Buffer.from(''));
Unfortunately, both consumers are still receiving the message.
worker a
worker b
Yes.
I would essentially like to use the topic-based exchange, but only have one consumer handle each task. I've been through the documentation, and I don't see a way to do this.
Use a topic exchange and have your consumers declare and bind their own queues. What you describe is a very common scenario. It is outlined in tutorial 5, "Topics".
Additionally, you can have multiple consumers share a queue (just don't declare it exclusive). This is described in tutorial 2, "Workers".
The multiple instances of consumers can declare the same queue and bindings, the operation is idempotent. Using durable queues (as opposed to exclusive) also means that the messages will queue up if all your consumers disappear or the network fails.
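For illustration, here is a minimal sketch of that setup using the same amqplib promise API as the code above; the shared queue name usa_tasks is made up for this example. Because both workers consume from one named, non-exclusive queue, RabbitMQ delivers each message to exactly one of them:
const amqplib = require('amqplib');

async function startWorker(name) {
  const connection = await amqplib.connect('amqp://guest:guest@127.0.0.1:5672');
  const channel = await connection.createChannel();
  await channel.assertExchange('test_exchange', 'topic', { durable: false });
  // One shared, named queue (not exclusive): every worker attaches to the same queue
  await channel.assertQueue('usa_tasks', { durable: true });
  await channel.bindQueue('usa_tasks', 'test_exchange', 'usa.*');
  await channel.consume('usa_tasks', (msg) => {
    console.log(`worker ${name}: ${msg.fields.routingKey}`);
    channel.ack(msg);
  });
}

startWorker('a').catch(console.error);
startWorker('b').catch(console.error);
A message published to usa.90210 is now delivered to worker a or worker b, never both, while a separate queue bound with uk.* would still get its own copy of matching messages.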

How to use micronaut rabbitmq client with BuiltinExchangeType TOPIC

I would like to use micronaut-rabbitmq to send messages between services via topics.
Therefore I've created an exchange, queues and bindings in the ChannelInitializer like this:
channel.exchangeDeclare("registration", BuiltinExchangeType.TOPIC, true)
channel.queueDeclare("user_new", true, false, false, null)
channel.queueBind("user_new", "registration", "user.new.#")
channel.queueDeclare("user_all", true, false, false, null)
channel.queueBind("user_all", "registration", "user.#")
When I try to send a message with the routing key "user.new", it is not sent to any of the queues.
@Binding("user.new")
override fun userCreated(event: UserCreatedEvent)
I would expect it to be sent to both queues because of the topic routing keys.
If I rename the "user_new" queue to "user.new", the message is sent to that queue. But since I want the message in both queues, this is not an option.
Any help would be appreciated!
Thank you
My problem was that I didn't declare the exchange on the @RabbitClient annotation.
After changing @RabbitClient to @RabbitClient("registration"), everything works perfectly.
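Assembled from the fragments above, the working client would look roughly like this (a sketch: the interface name RegistrationClient is illustrative, and the imports are for recent micronaut-rabbitmq versions):
import io.micronaut.rabbitmq.annotation.Binding
import io.micronaut.rabbitmq.annotation.RabbitClient

@RabbitClient("registration") // publish to the "registration" exchange instead of the default exchange
interface RegistrationClient {

    @Binding("user.new") // routing key, matched by both "user.new.#" and "user.#" bindings
    fun userCreated(event: UserCreatedEvent)
}
With the exchange named on the annotation, a message published with routing key user.new is routed to both user_new and user_all, since # matches zero or more words.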

iotedge: How to requeue message that could not be processed

There are publisher and consumer custom modules running on an Edge IoT device. The publisher module keeps producing messages at a constant rate whether or not the consumer module processes them. The consumer module POSTs each message to an external service; when there is no Internet connection, it would like to requeue the message so that it is not lost and can be tried again.
I'd rather not write an infinite loop that keeps retrying; besides, if the module is restarted, the message would be lost. So I prefer to requeue the message to edgeHub/RocksDB.
Where do I find documentation on the available responses that can be provided for IoTHubMessageDispositionResult? What is the response to send if a message needs to be requeued?
if message.processed():
    return IoTHubMessageDispositionResult.ACCEPTED
else:
    return IoTHubMessageDispositionResult.??
You don't have to implement your own requeuing of messages. IoT Edge provides offline functionality, as described in this blog post and on this documentation page.
The edgeHub will locally store messages on the edge device if there is no connection to the IoT Hub. It will automatically start sending those messages (in the correct order) once the connection is established again.
You can configure how long edgeHub will buffer messages like this:
"$edgeHub": {
"properties.desired": {
"schemaVersion": "1.0",
"routes": {},
"storeAndForwardConfiguration": {
"timeToLiveSecs": 7200
}
}
}
The 7200 seconds (2 hours) is also the default if you don't configure anything.
By default, the messages will be written to a folder within the edgeHub docker container. If you want to store them somewhere else you can do so with this configuration:
"edgeHub": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": {
"HostConfig": {
"Binds": ["<HostStoragePath>:<ModuleStoragePath>"],
"PortBindings": {
"8883/tcp": [{"HostPort":"8883"}],
"443/tcp": [{"HostPort":"443"}],
"5671/tcp": [{"HostPort":"5671"}]
}
}
}
},
"env": {
"storageFolder": {
"value": "<ModuleStoragePath>"
}
},
"status": "running",
"restartPolicy": "always"
}
Replace HostStoragePath and ModuleStoragePath with the desired values. Example:
"createOptions": {
"HostConfig": {
"Binds": [
"/etc/iotedge/storage/:/iotedge/storage/"
],
...
}
}
},
"env": {
"storageFolder": {
"value": "/iotedge/storage/"
},
...
Please note that you probably have to manually give the iotedge user (or all users) access to that folder (using chmod).
Update:
If you are just looking for the available values of IoTHubMessageDispositionResult you will find the answer here:
class IoTHubMessageDispositionResult(Enum):
    ACCEPTED = 0
    REJECTED = 1
    ABANDONED = 2
Update 2:
Messages that have been ACCEPTED are removed from the message queue because they have been successfully delivered.
Messages that are ABANDONED are added to the message queue again and the module will try to send it again as defined in the retryPolicy. For more insight on the retryPolicy you can read this thread.
Messages that are REJECTED are not added to the message queue again.
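Applied to the question's callback, a minimal sketch (assuming the pre-1.0 iothub_client-style Python module the question uses; post_to_external_service is a hypothetical helper) would be:
from iothub_client import IoTHubMessageDispositionResult

def post_to_external_service(message):
    # Hypothetical helper: POST the message body to the external service,
    # returning True on success and False when offline.
    ...

def receive_message_callback(message, context):
    if post_to_external_service(message):
        # Delivered: ACCEPTED removes the message from the edgeHub queue
        return IoTHubMessageDispositionResult.ACCEPTED
    # Offline: ABANDONED puts the message back so it is retried per the retryPolicy
    return IoTHubMessageDispositionResult.ABANDONED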

How do I have RabbitMQ redeliver unacknowledged messages?

I'm following this tutorial to the letter:
https://www.rabbitmq.com/tutorials/tutorial-two-java.html.
I start the RabbitMQ server as such:
docker pull rabbitmq
docker run -d --hostname my-rabbit-host --name my-rabbit -p 5672:5672 rabbitmq:3
From the tutorial:
Using this code we can be sure that even if you kill a worker using
CTRL+C while it was processing a message, nothing will be lost. Soon
after the worker dies all unacknowledged messages will be redelivered.
I spawn two consumers, and when I CTRL+C one of them, the other running one does not receive the messages that were originally destined to the former consumer. How do I get the messages to be redelivered after CTRL+C'ing out of one of the consumers?
Edit: I'm now installing RabbitMQ via 'brew', but I'm still seeing the same issue.
brew update
brew install rabbitmq
/usr/local/sbin/rabbitmq-server &
There is no need to put a sleep or anything like that in the consumer code. On the page you linked, search for the paragraph starting with "Manual message acknowledgments" and look at the code there. The key thing is not to acknowledge the message: if the autoAck flag is set to true, the message is acknowledged as soon as it is received, no matter what your code does afterwards. So simply don't set that flag, and for testing you can also comment out the line channel.basicAck(envelope.getDeliveryTag(), false); so that no manual ACK happens either. Then, when the consumer exits, the message will still be in the queue.
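For reference, a minimal sketch of that tutorial's manual-ack consumer with the Java client (assuming channel and QUEUE_NAME are already set up; doWork is a hypothetical processing step):
boolean autoAck = false; // keep auto-ack off so unacked messages survive a consumer crash
channel.basicConsume(QUEUE_NAME, autoAck, (consumerTag, delivery) -> {
    String message = new String(delivery.getBody(), java.nio.charset.StandardCharsets.UTF_8);
    doWork(message); // hypothetical processing step
    // Ack only after the work is done; comment this out to test redelivery
    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}, consumerTag -> { });
If this consumer dies before basicAck runs, RabbitMQ redelivers the message to another consumer on the same queue.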
Strange, RabbitMQ works for me out of the box.
Step 1, I started RabbitMQ:
$ docker run -d --hostname my-rabbit-host --name my-rabbit -p 5672:5672 rabbitmq:3
$ docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED          STATUS          PORTS                                                   NAMES
e0c3257b8b49   rabbitmq:3   "docker-entrypoint.s…"   18 minutes ago   Up 14 minutes   4369/tcp, 5671/tcp, 25672/tcp, 0.0.0.0:5672->5672/tcp   my-rabbit
Step 2, I published a message (by the way, I tried this with Node.js; see the appendix below for source code):
$ node src/producer.js
Publisher: TODO 1st
Step 3, I started two consumers one after another (for testing purposes, my consumer is designed not to acknowledge messages, so RabbitMQ will never dequeue them).
Consumer 1 will receive the message, while consumer 2 won't.
Consumer 1:
$ node src/consumer.js
Consumer: TODO 1st
Consumer 2:
$ node src/consumer.js
Step 4, when I stop consumer 1 by 'Ctrl + c', Consumer 2 will immediately receive the message from RabbitMQ:
Consumer 2:
$ node src/consumer.js
Consumer: TODO 1st
Conclusion: Basically, when setting up a consumer, we need to tell RabbitMQ not to dequeue the message until its acknowledgement has been received from the consumer. As a result, if consumer 1 is stopped for any reason before it has a chance to acknowledge the message, RabbitMQ will redeliver the message to consumer 2.
Appendix
src/producer.js
var q = 'tasks'

function bail (err) {
  console.error(err)
  process.exit(1)
}

// Publisher
function publisher (conn) {
  conn.createChannel(onOpen)
  function onOpen (err, ch) {
    if (err != null) bail(err)
    ch.assertQueue(q)
    const msg = 'TODO 1st'
    ch.sendToQueue(q, Buffer.from(msg), { persistent: true })
    console.log('Publisher: ', msg)
  }
}

require('amqplib/callback_api')
  .connect('amqp://guest:guest@localhost', function (err, conn) {
    if (err != null) bail(err)
    publisher(conn)
  })
src/consumer.js
var q = 'tasks'

function bail (err) {
  console.error(err)
  process.exit(1)
}

// Consumer
function consumer (conn) {
  conn.createChannel(onOpen)
  function onOpen (err, ch) {
    if (err != null) bail(err)
    ch.assertQueue(q)
    ch.consume(q, function (msg) {
      if (msg !== null) {
        console.log('Consumer: ', msg.content.toString())
        // Commented out the line below, so RabbitMQ never dequeues a message
        // ch.ack(msg)
      }
    }, { noAck: false })
  }
}

require('amqplib/callback_api')
  .connect('amqp://guest:guest@localhost', function (err, conn) {
    if (err != null) bail(err)
    consumer(conn)
  })

RabbitMQ prefetch ignored when consumer goes down and comes back up

My basicQos is being ignored when the consumer is down and later comes back up. For instance, suppose the consumer is down and 5 messages arrive from a producer. While the consumer is not running, these messages will be stored on disk (I think!) as long as the exchange/queue is durable.
If I set the QoS with channel.basicQos(0, 3, true), my consumer receives more than 3 messages when it comes up. Why?
On the other hand, everything works properly (only 3 messages are read from the queue at a time) if the consumer is already running when the messages arrive... My code is as follows:
factory = new ConnectionFactory();
factory.setHost(mRabbitMQHost); // may get server address from file configuration
factory.setUsername(mRabbitMQUsername);
factory.setPassword(mRabbitMQPassword);
connection = factory.newConnection();
channel = connection.createChannel();
channel.exchangeDeclare("exchangeName", "direct", true); // true = durable
consumer = new QueueingConsumer(channel);
for (QGQueues queue : QGQueues.values()) {
    String queueName = queue.getQueueName();
    channel.queueDeclare(queueName, true, false, false, null);
    channel.queueBind(queueName, "exchangeName", queue.getRoutingKey());
    channel.basicConsume(queueName, false, consumer); // autoAck = false, so messages must be ACKed manually
}
channel.basicQos(0, 3, true);
Thanks!
My bet would be that you need to set the QoS before you do anything else. Once basicConsume has been called without a prefetch limit in place, the broker can push the entire stored backlog to the consumer before basicQos takes effect, which matches what you see when the consumer starts with messages already waiting.
Change your code to this order:
channel = connection.createChannel();
// set QoS immediately, before declaring queues or consuming
channel.basicQos(0, 3, true);
channel.exchangeDeclare("exchangeName", "direct", true); // true = durable
consumer = new QueueingConsumer(channel);
for (QGQueues queue : QGQueues.values()) {
    String queueName = queue.getQueueName();
    channel.queueDeclare(queueName, true, false, false, null);
    channel.queueBind(queueName, "exchangeName", queue.getRoutingKey());
    channel.basicConsume(queueName, false, consumer); // autoAck = false, so messages must be ACKed manually
}
This will ensure the prefetch limit is set before you try to consume any messages.