How can I read and log RabbitMQ message content?

I am sending a message to RabbitMQ, and I want to read this message and log it to a file.
How can I do this?

To trace all the messages being exchanged on the RabbitMQ server, you can use the Firehose tracer.
You can activate/deactivate it with the commands:
rabbitmqctl trace_on
rabbitmqctl trace_off
Once activated, all the messages will be duplicated to the exchange amq.rabbitmq.trace.
Just bind a queue to it and consume from there. You can find a working example in our RabbitMQ Cookbook.
It should also be possible to trace the messages directly to a file by using the rabbitmq_tracing plugin, though I have never actually tried it.
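For the logging-to-file part of the question, here is a minimal consumer sketch using the pika Python client; the queue name and log file path are placeholders:

import pika

# A sketch, not production code: bind a queue to the Firehose exchange and
# append every traced message to a file.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.queue_declare(queue="firehose.log", durable=True)
# amq.rabbitmq.trace is a topic exchange; '#' matches every routing key
# ('publish.<exchange-name>' for inbound, 'deliver.<queue-name>' for outbound).
channel.queue_bind(queue="firehose.log", exchange="amq.rabbitmq.trace",
                   routing_key="#")

def on_message(ch, method, properties, body):
    with open("messages.log", "ab") as f:
        f.write(method.routing_key.encode() + b": " + body + b"\n")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="firehose.log", on_message_callback=on_message)
channel.start_consuming()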

Related

How to see RabbitMQ messages

I can't view RabbitMQ queue messages after using the get messages command.
rabbitmqadmin get queue='queue_name' -H localhost -P 15672 -u rmq -p rmq --vhost=/ count=100
The queue count shows 100 messages, but I can't use the above command again to see the messages.
I would suggest reading https://www.rabbitmq.com/getstarted.html to understand how RabbitMQ works.
The get command consumes the messages, so you can't consume them again.
If you want to consume the same messages multiple times, you can use the stream queue type.
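For reference, a stream queue is declared like a normal queue but with the x-queue-type argument; a minimal sketch with pika, where the queue name is a placeholder:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Streams are append-only: consuming does not remove messages, so the same
# messages can be read any number of times. Stream queues must be durable.
channel.queue_declare(queue="events", durable=True,
                      arguments={"x-queue-type": "stream"})
connection.close()

Note that consuming from a stream over AMQP requires manual acknowledgements and a prefetch count, and the x-stream-offset consumer argument controls where reading starts.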
When a RabbitMQ consumer consumes a message from a queue, that message is deleted from the queue. If you just want to see the messages, you can log in to the RabbitMQ Management UI and read them there, provided they are not serialized. But if you want to consume the same message multiple times for some reason, read the section on stream queues in the documentation.
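If the goal is only to look at a message without permanently removing it, another option, sketched here with pika (the queue name is a placeholder), is to fetch it without acknowledging and then requeue it:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Fetch one message without auto-ack, look at it, then put it back.
method, properties, body = channel.basic_get(queue="queue_name", auto_ack=False)
if method is not None:
    print(method.routing_key, body)
    # requeue=True returns the message to the queue for redelivery.
    channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
connection.close()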

Clear messages from a RabbitMQ queue in Mule 3

My requirement is to clear all the messages from a queue before processing the flow or publishing anything to the queue.
We are using RabbitMQ, and for some reason messages get stuck in the queue; because of that we run into issues when we count the queue's messages. So the next time, we have to clear the queue before processing.
We have multiple queues, such as slave1, slave2, and slave3, and when the API is triggered we have to clear the queues in the process section.
Kindly suggest how we can do this in Mule 3.
Mule 3 has a generic AMQP connector. It does not support administrative commands specific to a particular implementation like RabbitMQ, so you can't use the connector for this.
You could instead call the RabbitMQ REST API using the HTTP Request connector. See this previous answer for how to delete queues with curl, then implement the same request with HTTP Request: https://stackoverflow.com/a/29148299/721855
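The management HTTP API also exposes a purge endpoint (DELETE /api/queues/{vhost}/{name}/contents), which matches the requirement of clearing a queue's messages. A sketch of that call with Python's requests library, where the host, credentials, and vhost are assumptions:

import requests

# Purge each queue through the RabbitMQ management HTTP API.
# '%2F' is the URL-encoded default vhost '/'.
for queue in ("slave1", "slave2", "slave3"):
    response = requests.delete(
        "http://localhost:15672/api/queues/%2F/{0}/contents".format(queue),
        auth=("guest", "guest"))
    response.raise_for_status()  # the API answers 204 No Content on success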

How can I track incoming messages in the RabbitMQ queue via the console?

Our RabbitMQ runs under Kubernetes. I can only get a console inside the pod itself; I cannot access the RabbitMQ web interface. I want to check whether the right messages are coming into the queue from the application. How can I do this?
The only thing I found is rabbitmqctl list_queues, which only shows message counts at a single point in time.
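If Python and pika are available in the pod, a passive queue declare reports the current message count without modifying anything; a sketch, with the connection details and queue name as assumptions:

import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# passive=True only inspects the queue; nothing is created or changed.
while True:
    ok = channel.queue_declare(queue="my_queue", passive=True)
    print("messages ready:", ok.method.message_count)
    time.sleep(5)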

RabbitMQ Manual Retry

How can a manual retry work in RabbitMQ after a message has been put onto a dead-letter queue?
Does RabbitMQ provide a user interface through which you can do this? I assume here that the RabbitMQ console does not provide this capability.
The RabbitMQ management interface lets you do this crudely: go into the dead-letter queue, 'get' the message, and copy its content; then go to the queue where you want to retry the message and 'publish' it directly there.
Alternatively, you can enable the Shovel plugin, which allows you to move messages from one queue to another. The RabbitMQ Management plugin contains instructions on how to do this.
You can also write a consumer/producer using any of a number of client libraries. For Python, a popular library is pika (https://pypi.python.org/pypi/pika).
The script can consume all the messages in a queue and then publish them to another queue.
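A minimal sketch of such a script with pika, where the queue names are placeholders: it drains the dead-letter queue and republishes each message to the original queue through the default exchange.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Drain the dead-letter queue and republish each message for a retry.
while True:
    method, properties, body = channel.basic_get(queue="my_queue.dlq",
                                                 auto_ack=False)
    if method is None:
        break  # dead-letter queue is empty
    # The default exchange ("") routes directly to the queue named by the
    # routing key.
    channel.basic_publish(exchange="", routing_key="my_queue",
                          properties=properties, body=body)
    # Ack only after the republish, so a crash cannot lose the message.
    channel.basic_ack(delivery_tag=method.delivery_tag)
connection.close()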

RabbitMQ dropping messages after the first one

I'm using Celery 3.0.18 with RabbitMQ 3.0.2. I have a task sent to another application using celery.send_task. I can see the send_task call in my logs, I can see the packets leaving the worker instance, and I can see the packets reaching the RabbitMQ instance when I run tcpflow -ce -i any port 5672. However, only the first message makes it into the queue. All the messages have the same routing key; I tried recreating the exchange and bindings, and even a new RabbitMQ instance, and nothing seems to work.
This used to work fine for months, until we had to rebuild RabbitMQ from scratch after a crash in our AWS infrastructure. Strangely, I have the exact same setup working in another application, using the same broker and the same exchange, binding, and queue, and it works perfectly there. It also works when I send the messages to the same exchange using the same call from a management script run from the shell on the same instance, but it doesn't work when the messages are sent from the Celery task in the worker process.
Any ideas on what the problem might be?
Eventually, I figured out what was wrong, though it's not clear whether this is expected behavior, a Celery bug, or a RabbitMQ bug.
What happens is that, besides our application tasks, I have a custom logging handler that sends logs to a central location through RabbitMQ, also using celery.send_task. This handler sends messages to an exchange named application.logger, with routing keys like application.logger.info, application.logger.warning, etc., and bindings route some logging levels to specific queues. This exchange, its bindings, and the queues were created directly in RabbitMQ and were not defined in the Celery routes.
When the worker tried to send a message to this exchange and it didn't exist, Celery logged a 404 NOT_FOUND error. After that, tasks sent to other exchanges over the same connection were no longer delivered. They were sent by the worker instance, we could see the packets arriving, and the RabbitMQ management screen for that connection even showed data arriving from the client in kb/s, but no messages were delivered. In AMQP 0-9-1, a 404 NOT_FOUND is a channel-level error that closes the channel, which would explain why later publishes on that channel were silently dropped.
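If the missing exchange is indeed the culprit, one defensive measure is to declare it before publishing; a sketch with pika, where the exchange type and durability are assumptions based on the routing keys described above:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declaring is idempotent, but the settings must match the existing exchange,
# otherwise the broker raises a PRECONDITION_FAILED channel error.
# 'topic' and durable=True are assumptions, not confirmed by the question.
channel.exchange_declare(exchange="application.logger",
                         exchange_type="topic", durable=True)

channel.basic_publish(exchange="application.logger",
                      routing_key="application.logger.info",
                      body=b"worker started")
connection.close()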