How do I have RabbitMQ redeliver unacknowledged messages?

I'm following this tutorial to the letter:
https://www.rabbitmq.com/tutorials/tutorial-two-java.html.
I start the RabbitMQ server as such:
docker pull rabbitmq
docker run -d --hostname my-rabbit-host --name my-rabbit -p 5672:5672 rabbitmq:3
From the tutorial:
Using this code we can be sure that even if you kill a worker using
CTRL+C while it was processing a message, nothing will be lost. Soon
after the worker dies all unacknowledged messages will be redelivered.
I spawn two consumers, and when I CTRL+C one of them, the other running one does not receive the messages that were originally destined to the former consumer. How do I get the messages to be redelivered after CTRL+C'ing out of one of the consumers?
Edit: I'm now installing RabbitMQ via 'brew', but I'm still seeing the same issue.
brew update
brew install rabbitmq
/usr/local/sbin/rabbitmq-server &

There is no need to put a sleep or anything like that in the consumer code. On the page you linked, search for the paragraph starting with "Manual message acknowledgments" and look at the code there. The key thing is not to acknowledge the message. If the autoAck flag is set to true, the message is considered acknowledged as soon as it is delivered, no matter what your code does afterwards. So simply don't set that flag, and for testing you can also comment out the line channel.basicAck(envelope.getDeliveryTag(), false); so that no manual ACK happens either. Then, when the consumer exits, the message will still be in the queue.
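As a minimal sketch of the same idea in Node.js with amqplib (the client used in the answer below; ch is assumed to be an open channel and q a declared queue):

// With { noAck: true } the broker treats every delivery as acknowledged
// the moment it is sent, so nothing is ever redelivered:
// ch.consume(q, onMessage, { noAck: true })

// With { noAck: false } the broker holds the message until ch.ack(msg);
// if the consumer dies first, the message is redelivered to another one:
ch.consume(q, function (msg) {
  console.log('Received: ', msg.content.toString())
  // ch.ack(msg) // left commented out for testing, so the message stays unacked
}, { noAck: false })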

Strange, RabbitMQ works for me out of the box.
Step 1, I started RabbitMQ:
$ docker run -d --hostname my-rabbit-host --name my-rabbit -p 5672:5672 rabbitmq:3
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e0c3257b8b49 rabbitmq:3 "docker-entrypoint.s…" 18 minutes ago Up 14 minutes 4369/tcp, 5671/tcp, 25672/tcp, 0.0.0.0:5672->5672/tcp my-rabbit
Step 2, I published a message (By the way, I tried with Node.js. See Appendix below for source code):
$ node src/producer.js
Publisher: TODO 1st
Step 3, I started two consumers one after another (my consumer is designed, for testing purposes, not to acknowledge, so RabbitMQ will never dequeue a message).
Consumer 1 will receive the message, while consumer 2 won't.
Consumer 1:
$ node src/consumer.js
Consumer: TODO 1st
Consumer 2:
$ node src/consumer.js
Step 4, when I stop consumer 1 with 'Ctrl + C', consumer 2 immediately receives the message from RabbitMQ:
Consumer 2:
$ node src/consumer.js
Consumer: TODO 1st
Conclusion: Basically, when setting up a consumer, we need to tell RabbitMQ not to dequeue the message until its acknowledgement has been received from the consumer. As a result, if consumer 1 is stopped for any reason before it has a chance to acknowledge the message, RabbitMQ will redeliver the message to consumer 2.
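amqplib also marks each delivery with a redelivered flag, so you can observe this from the consumer side. A minimal variation of the appendix consumer's callback (illustrative only, assuming ch and q as in the appendix below):

ch.consume(q, function (msg) {
  if (msg !== null) {
    // msg.fields.redelivered is true when RabbitMQ is delivering the message
    // again after a previous consumer died without acknowledging it
    var tag = msg.fields.redelivered ? '(redelivered)' : '(first delivery)'
    console.log('Consumer: ', msg.content.toString(), tag)
  }
}, { noAck: false })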
Appendix
src/producer.js
var q = 'tasks'

function bail (err) {
  console.error(err)
  process.exit(1)
}

// Publisher
function publisher (conn) {
  conn.createChannel(onOpen)
  function onOpen (err, ch) {
    if (err != null) bail(err)
    ch.assertQueue(q)
    const msg = 'TODO 1st'
    ch.sendToQueue(q, Buffer.from(msg), { persistent: true })
    console.log('Publisher: ', msg)
  }
}

require('amqplib/callback_api')
  .connect('amqp://guest:guest@localhost', function (err, conn) {
    if (err != null) bail(err)
    publisher(conn)
  })
src/consumer.js
var q = 'tasks'

function bail (err) {
  console.error(err)
  process.exit(1)
}

// Consumer
function consumer (conn) {
  conn.createChannel(onOpen)
  function onOpen (err, ch) {
    if (err != null) bail(err)
    ch.assertQueue(q)
    ch.consume(q, function (msg) {
      if (msg !== null) {
        console.log('Consumer: ', msg.content.toString())
        // Commented out the line below, so RabbitMQ never dequeues the message
        // ch.ack(msg)
      }
    }, { noAck: false })
  }
}

require('amqplib/callback_api')
  .connect('amqp://guest:guest@localhost', function (err, conn) {
    if (err != null) bail(err)
    consumer(conn)
  })

Related

RabbitMQ queue gets blank randomly with around 5K messages remaining

I have a queue to which a lot of messages are published (~10K). Connected to this queue are multiple consumers with the following code in CodeIgniter, using the php-amqplib library:
public function processQueue()
{
    // Make connection
    $connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
    // Make channel
    $channel = $connection->channel();
    // Declare queue
    $channel->queue_declare(QUEUE_NAME, false, false, false, false);
    // PHP callable
    $callback = function ($msg) {
        // DO MESSAGE PROCESSING HERE
        $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
    };
    $channel->basic_consume(AGENTS_QUEUE_PROCESSING, '', false, true, false, false, $callback);
    // While queue is empty, wait
    while (count($channel->callbacks)) {
        // Wait
        $channel->wait();
    }
    // Close channel and connection
    $channel->close();
    $connection->close();
}
The queue fills with messages, which are simultaneously consumed by multiple such consumers. I've observed that with some 5-6k messages remaining (i.e. after around 4-5k messages have been consumed), the queue suddenly goes empty, with the consumers idling and waiting for more messages. There is also a sudden drop at that point in the total number of messages shown on the RabbitMQ management web panel.
I've tried declaring the queue with the durable parameter, but the problem remains the same. What could be the issue and its resolution?

How to use redis pub-sub channel with expire?

Does Redis support pub/sub channels with an expiration TTL?
We are using the Vert.x Redis client: https://vertx.io/docs/vertx-redis-client/java/
redisClient.subscribe(channel, result -> {
    if (!result.succeeded()) {
        log.warn("subscribe failed msgId={}", msgId, result.cause());
        message.fail(500, String.format("retrieve conversation %s failure", msgId));
        return;
    }
    // redisClient.psetex(channel, 28000, ... handler -> {}); ???
});
Is this the way to go?
PS: I'm referring to the TTL of the channel reference itself.

iotedge: How to requeue message that could not be processed

There are publisher and consumer custom modules running on an Edge IoT device. The publisher module keeps producing messages at a constant rate whether or not the consumer module processes them. The consumer module POSTs each message to an external service; given that there may be no Internet connection, the consumer module would like to requeue the message so that it is not lost and can be tried again.
I would rather not write an infinite loop to keep retrying; besides, if the module were restarted, the message would be lost. So I would prefer to requeue the message to edgeHub/RocksDB.
Where do I find documentation on the available responses that can be provided for IoTHubMessageDispositionResult? What is the response to send if a message needs to be requeued?
if message.processed():
    return IoTHubMessageDispositionResult.ACCEPTED
else:
    return IoTHubMessageDispositionResult.??
You don't have to implement your own requeuing of messages. IoT Edge provides offline functionality, as described in this blog post and on this documentation page.
The edgeHub will locally store messages on the edgeDevice if there is no connection to the IotHub. It will automatically start sending those messages (in the correct order) once the connection is established again.
You can configure how long edgeHub will buffer messages like this:
"$edgeHub": {
"properties.desired": {
"schemaVersion": "1.0",
"routes": {},
"storeAndForwardConfiguration": {
"timeToLiveSecs": 7200
}
}
}
The 7200 seconds (2 hours) is also the default if you don't configure anything.
By default, the messages will be written to a folder within the edgeHub Docker container. If you want to store them somewhere else, you can do so with this configuration:
"edgeHub": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": {
"HostConfig": {
"Binds": ["<HostStoragePath>:<ModuleStoragePath>"],
"PortBindings": {
"8883/tcp": [{"HostPort":"8883"}],
"443/tcp": [{"HostPort":"443"}],
"5671/tcp": [{"HostPort":"5671"}]
}
}
}
},
"env": {
"storageFolder": {
"value": "<ModuleStoragePath>"
}
},
"status": "running",
"restartPolicy": "always"
}
Replace HostStoragePath and ModuleStoragePath with the desired values. Example:
"createOptions": {
"HostConfig": {
"Binds": [
"/etc/iotedge/storage/:/iotedge/storage/"
],
...
}
}
},
"env": {
"storageFolder": {
"value": "/iotedge/storage/"
},
...
Please note that you probably have to manually give the iotEdge user (or all users) access to that folder (using chmod).
Update:
If you are just looking for the available values of IoTHubMessageDispositionResult you will find the answer here:
class IoTHubMessageDispositionResult(Enum):
    ACCEPTED = 0
    REJECTED = 1
    ABANDONED = 2
Update 2:
Messages that have been ACCEPTED are removed from the message queue because they have been successfully delivered.
Messages that are ABANDONED are added to the message queue again, and the module will try to send them again as defined in the retryPolicy. For more insight into the retryPolicy, you can read this thread.
Messages that are REJECTED are not added to the message queue again.

Google Cloud Pub/Sub Node.js Sample: TypeError: Cannot read property 'on' of null

I'm using GCP and I want to use Cloud Pub/Sub. I got the error below when I tried the Node.js sample. Does anyone know how to fix it?
/private/tmp/pubsub/pubsubsample.js:26
subscription.on('error', onError);
^
TypeError: Cannot read property 'on' of null
at /private/tmp/pubsub/pubsubsample.js:26:15
at /private/tmp/pubsub/node_modules/gcloud/lib/pubsub/index.js:474:7
at Object.handleResp (/private/tmp/pubsub/node_modules/gcloud/lib/common/util.js:113:3)
at /private/tmp/pubsub/node_modules/gcloud/lib/common/util.js:422:12
at Request.onResponse [as _callback] (/private/tmp/pubsub/node_modules/gcloud/node_modules/retry-request/index.js:106:7)
at Request.self.callback (/private/tmp/pubsub/node_modules/gcloud/node_modules/request/request.js:198:22)
at emitTwo (events.js:87:13)
at Request.emit (events.js:172:7)
at Request.<anonymous> (/private/tmp/pubsub/node_modules/gcloud/node_modules/request/request.js:1035:10)
at emitOne (events.js:82:20)
https://github.com/GoogleCloudPlatform/gcloud-node
var gcloud = require('gcloud');

// Authenticating on a per-API-basis. You don't need to do this if you
// auth on a global basis (see Authentication section above).
var pubsub = gcloud.pubsub({
  projectId: 'xxxxx',
  keyFilename: 'xxx.json'
});

// Reference a topic that has been previously created.
var topic = pubsub.topic('info');

// Publish a message to the topic.
topic.publish({
  data: 'New message!'
}, function(err) {});

// Subscribe to the topic.
topic.subscribe('new-subscription', function(err, subscription) {
  // Register listeners to start pulling for messages.
  function onError(err) {}
  function onMessage(message) {}
  subscription.on('error', onError);
  subscription.on('message', onMessage);

  // Remove listeners to stop pulling for messages.
  subscription.removeListener('message', onMessage);
  subscription.removeListener('error', onError);
});
... I'm using PubSub now, but I'm wondering whether I can do the same thing using Google Cloud Pub/Sub.
This post may be relevant. Node.js on Google Cloud Platform Pub/Sub tutorial worker is failing with "TypeError: Cannot call method 'on' of null"
Update 1
I changed to the code below, but the same error was shown.
(error)
subscription.on('error', onError);
^
TypeError: Cannot read property 'on' of null
(code)
// Subscribe to the topic
topic.subscribe('new-subscription', function(err, subscription) {
  if (err) {
    // something went wrong, react!
    return;
  }
  // Register listeners to start pulling for messages.
  function onError(err) {}
  function onMessage(message) {}
  subscription.on('error', onError);
  subscription.on('message', onMessage);
  // Remove listeners to stop pulling for messages.
  subscription.removeListener('message', onMessage);
  subscription.removeListener('error', onError);
});
Update 2
My expectation is as follows:
execute "node pubsub.js"
see the sample message 'New message!'
When invoking topic.subscribe(), the method essentially invokes the pubsub.subscribe() method with a specific Topic instance as its context. This can be seen from the source code under Topic.prototype.subscribe.
Based on the PubSub index.js source code, PubSub.prototype.subscribe() issues an HTTP request to create a new subscription, respecting the projects.subscriptions.create API format. If the response returns a 409 ALREADY_EXISTS error and you have not set options.reuseExisting to true, the callback you provided will be invoked with the error and null as the subscription. According to the PubSub Node.js documentation for topic.subscribe, under options.reuseExisting, the default value if not specified is false and
If false, attempting to create a subscription that already exists will
fail.
To use this design more effectively, I would suggest the following:
var pubsub = require('gcloud').pubsub({
  "projectId": "project-id",
  "keyFilename": "key-file.json"
});

// This topic should already have been created
var topic = pubsub.topic("interesting-topic");

// The message that will be published
var message = {"data": "Welcome to this interesting thread"};

// Callback to throw an exception if publish was unsuccessful
// Will log published message IDs if successful
function publishedHandler(err, messageIds, responseBody) {
  if (err) {
    // Could not publish message(s)
    throw err;
  }
  console.log(messageIds);
}

// Callback to throw an exception if a subscription could not be found or created
// Will attach event listeners if it successfully gets a subscription
function subscriptionHandler(err, subscription, responseBody) {
  if (err) {
    // Could not get or create a new subscription
    throw err;
  }
  subscription.on("error", errorHandler);
  subscription.on("message", messageHandler);
}

// Publish a message to the topic
topic.publish(message, publishedHandler);

// Create or get 'sub' and subscribe it to 'interesting-topic'
topic.subscribe("sub", {"reuseExisting": true}, subscriptionHandler);
Issue 696 may not refer exactly to this issue, but it does discuss some of the semantic choices made in designing the Node.js library for Cloud Pub/Sub. It's not entirely clear whether methods like topic.publish or topic.subscribe check for the existence of a topic, check for the existence of a subscription, create a new subscription, or get an existing subscription. I would simply advise adding robust error handling.
Currently you do not check for errors when subscribing, only for errors while being subscribed. To change that, you can use something like this:
// Subscribe to the topic.
topic.subscribe('new-subscription', function(err, subscription) {
  // first check for errors
  if (err) {
    // something went wrong, react!
    return;
  }
  // rest of your code
});

logstash rabbitmq output never posts to exchange

I've got logstash running, and successfully reading in a file
rabbitmq is running, I'm watching the log, and I can see the web interface
I've configured logstash to output to a rabbitmq exchange... I think!
Here's the problem: nothing ever gets posted to the exchange, as seen in the web interface.
Any ideas?
My output config:
output {
  rabbitmq {
    codec => plain
    host => localhost
    exchange => yomtvraps
    exchange_type => direct
  }
  file { path => "/tmp/heartbeat-from-logstash.log" }
}
UPDATE: I'm watching the rabbit log with
tail -F /usr/local/var/log/rabbitmq/rabbit\#localhost.log
As it turns out, the problem was that there was no routing key set for the exchange and queue.
A working config is:
output {
  rabbitmq {
    codec => plain
    host => localhost
    exchange => yomtvraps
    exchange_type => direct
    key => yomtvraps
    # these are defaults but you never know...
    durable => true
    port => 5672
    user => "guest"
    password => "guest"
  }
}
Here's sample receiver code (using the Ruby "Bunny" gem):
require "bunny"
conn = Bunny.new(:automatically_recover => false)
conn.start
ch = conn.create_channel
q = ch.queue("yomtvraps")
exchange = ch.direct("yomtvraps", :durable => true)
begin
puts " [*] Waiting for messages. To exit press CTRL+C"
q.bind(exchange, :routing_key => "yomtvraps").subscribe(:block => true) do |delivery_info, properties, body|
puts " [x] Received #{body}"
end
rescue Interrupt => _
conn.close
exit(0)
end
Your rabbitmq output's parameters seem incomplete: the username, password, and port have not been configured.
You can configure two outputs, one to rabbitmq and the other to a file, to verify that the log is being generated and that Logstash itself is OK.
Pay attention to the versions involved (Logstash and its rabbitmq plugin); they gave me a lot of trouble in earlier trials (Logstash to another Redis server, etc.).
You can also debug rabbitmq's log.
With ps -ef | grep erl you can find the log file's path in the arguments.
Be sure that rabbitmq's web management plugin is enabled and the firewall is correctly configured, then open rabbitmq's web manager at ipaddress:15672.
Check that the exchange's type is right (in this case 'direct' may be the correct choice), that your message consumer is configured properly, and that your consumer's queue has been bound to the exchange correctly.
Try posting a message to your consumer through the web manager and ensure the consumer works well.
Monitor your queue while Logstash pushes logs into your consumer.
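As a quick cross-check from code, here is a minimal Node.js (amqplib) sketch that asserts the exchange, queue, and binding used in the working config above; the routing key is the part that was missing originally, and the consumer itself is illustrative:

var amqp = require('amqplib/callback_api')

amqp.connect('amqp://guest:guest@localhost:5672', function (err, conn) {
  if (err != null) throw err
  conn.createChannel(function (err, ch) {
    if (err != null) throw err
    // Same exchange the logstash rabbitmq output publishes to
    ch.assertExchange('yomtvraps', 'direct', { durable: true })
    ch.assertQueue('yomtvraps')
    // Without this binding (note the routing key), published messages go nowhere
    ch.bindQueue('yomtvraps', 'yomtvraps', 'yomtvraps')
    ch.consume('yomtvraps', function (msg) {
      if (msg !== null) {
        console.log(' [x] Received', msg.content.toString())
        ch.ack(msg)
      }
    }, { noAck: false })
  })
})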