RabbitMQ and Apache Camel 3.7.3: autoAck = false not working as expected

I am trying to consume from RabbitMQ using Apache Camel, and I have set autoAck=false to do manual acknowledgement. On failure I want to redirect messages to a dead letter queue, but they disappear from my placement queue and never show up in the deadLetter queue.
What am I doing wrong?
onException(NullPointerException.class)
    .log(LoggingLevel.ERROR, "This is error message : ${exception}")
    .to("rabbitmq:deadLetterEx?exchangeType=direct&queue=deadLetter.qu&routingKey=deadLetter&autoDelete=false&autoAck=false");

from("rabbitmq:event.ex?exchangeType=topic&queue=placement.qu&routingKey=event.placement&durable=true&autoDelete=false&autoAck=false")
    .log(LoggingLevel.ERROR, "Received from Rabbit:${body}")
    .process(this::enrichPlacement)
    .end();

Related

I have a RabbitMQ container that regularly gets "unexpected_frame" exceptions, what does that mean?

I have an application that is pushing data into RabbitMQ and then some other apps are subscribing to the different exchanges.
But recently, I keep having errors like this after a few hours:
2020-07-09 12:45:12.670 [error] <0.23578.1> Error on AMQP connection <0.23578.1> (172.18.0.5:48230 ->
172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 6:
operation basic.publish caused a connection exception unexpected_frame:
"expected content header for class 60, got non content"
2020-07-09 12:45:12.674 [info] <0.23578.1> closing AMQP connection <0.23578.1> (172.18.0.5:48230 ->
172.18.0.3:5672, vhost: '/'
On the client side, I get messages like this:
"Already closed: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer,
code=505, text='UNEXPECTED_FRAME - expected content body, got non content body frame instead',
classId=60, methodId=40"
This is on a docker container.
What could this error be about?
You are sharing a channel for concurrent publishing. AMQP channels are not safe for concurrent use, so either give each publishing thread its own channel or serialize access to the shared one, for example:
lock (ch) { ch.BasicPublish(); }
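A slightly fuller sketch of the same idea, assuming the .NET RabbitMQ.Client package (the class and member names below are illustrative, not from the original answer):

using RabbitMQ.Client;

public class SafePublisher
{
    // IModel channels are not safe for concurrent use, so publishes on a shared channel are serialized here.
    private readonly object _publishLock = new object();
    private readonly IModel _channel;

    public SafePublisher(IModel channel)
    {
        _channel = channel;
    }

    public void Publish(string exchange, string routingKey, byte[] body)
    {
        lock (_publishLock)
        {
            IBasicProperties props = _channel.CreateBasicProperties();
            props.Persistent = true; // only matters if the target queue is durable
            _channel.BasicPublish(exchange, routingKey, props, body);
        }
    }
}

An alternative that avoids the lock entirely is to open a dedicated channel per publishing thread; channels are cheap to create, unlike connections.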

RabbitMQ Ack Timeout

I'm using the RPC pattern to process my objects with RabbitMQ.
Suppose I have an object: I want the processing to finish and only then send the ack back to the RPC client.
By default the ack has a timeout of about 3 minutes, and my processing takes a long time.
How can I change this timeout for the ack of each object, or what should I do to handle long-running processing like this?
Modern versions of RabbitMQ have a delivery acknowledgement timeout:
In modern RabbitMQ versions, a timeout is enforced on consumer delivery acknowledgement. This helps detect buggy (stuck) consumers that never acknowledge deliveries. Such consumers can affect a node's on-disk data compaction and potentially drive nodes out of disk space.
If a consumer does not ack its delivery for more than the timeout value (30 minutes by default), its channel will be closed with a PRECONDITION_FAILED channel exception. The error will be logged by the node that the consumer was connected to.
The error message will be:
Channel error on connection <####> :
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
The timeout is 30 minutes (1,800,000 ms) by default (see note 1) and is configured by the consumer_timeout parameter in rabbitmq.conf.
Note 1: the default was 15 minutes (900,000 ms) before RabbitMQ 3.8.17.
If you run RabbitMQ in Docker, you can mount rabbitmq.conf into the container as a volume, create that file on the host, and set consumer_timeout in it.
For example, with docker-compose:
version: "2.4"
services:
  rabbitmq:
    image: rabbitmq:3.9.13-management-alpine
    network_mode: host
    container_name: 'your-name'
    ports:
      - 5672:5672
      - 15672:15672   # only needed if you use the management UI
    volumes:
      - /etc/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
And you need to create the rabbitmq.conf file on your server under /etc/rabbitmq/.
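The file itself can be a single setting. For example, to raise the ack timeout to one hour (the value is in milliseconds, so adjust it to your workload):

# /etc/rabbitmq/rabbitmq.conf
consumer_timeout = 3600000

After changing it, restart the container so the broker picks up the new value.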
Documentation with the available parameters: https://github.com/rabbitmq/rabbitmq-server/blob/v3.8.x/deps/rabbit/docs/rabbitmq.conf.example

Camel RabbitMQ doesn't allow setting an empty routing key when declaring a DLX

I have a Spring Boot application that uses Camel RabbitMQ to consume messages from a queue. I use a URI to declare the queue with a dead letter exchange, but I'm not supplying the deadLetterRoutingKey option because I want messages going to the DLX to keep their original routing key. When the application starts it throws the following error:
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error;
protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - invalid arg 'x-dead-letter-routing-key' for queue 'entry.paid.erp' in vhost '/': {unacceptable_type,void}, class-id=50, method-id=10)
Is it possible to configure Camel to have this behavior?
Some additional information:
Camel version: 2.19.1
Spring Boot version: 1.5.4.RELEASE
Example of the URI I'm using:
rabbitmq://server:port/my-exchange
?connectionFactory=#connectionFactory
&exchangeType=topic
&queue=my-queue
&autoAck=true
&durable=true
&autoDelete=false
&exclusive=false
&automaticRecoveryEnabled=true
&concurrentConsumers=15
&deadLetterExchange=dlx-exchange
&deadLetterExchangeType=fanout
&deadLetterQueue=dlx-queue
When I set a value for deadLetterRoutingKey the application starts with no errors.
Thanks!

MassTransit RabbitMQ Request/Response cannot create auto-delete exchange

We are trying to implement a request/response scenario where the messages will be deleted when the server (consumer) is down. We start with no exchanges or queues in the RabbitMQ installation.
There is a server which creates its own exchange/queue, and we want this to be auto-delete=true.
If the server comes up before the client, the exchange is created with the correct configuration, but when the client comes up we get this error:
RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text="PRECONDITION_FAILED - inequivalent arg 'auto_delete' for exchange 'simple_request' in vhost '****': received 'false' but current is 'true'", classId=40, methodId=10, cause=
If the client comes up first and tries to send a message, an exchange is created with the queue name we defined, but it is not auto-delete=true, which results in this error:
RabbitMQ receive transport failed: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text="PRECONDITION_FAILED - inequivalent arg 'auto_delete' for exchange 'simple_request' in vhost '****': received 'true' but current is 'false'", classId=40, methodId=10, cause=RabbitMQ receive transport failed: The supervisor is stopping, no additional scopes can be created
when the server is eventually started.
How do we implement auto-delete queues in a request response scenario?
You can update the URI in your client for the service queue to include query string parameters so that the queue is created properly.
rabbitmq://host/vhost/queue?autodelete=true&durable=false
Note that I included durable=false, but that only applies if you're using a non-durable queue; I added it for completeness.

How to get detailed log/info about RabbitMQ connection actions?

I have a Python program connecting to a RabbitMQ server. When the program starts, it connects fine. But when the RabbitMQ server restarts, my program cannot reconnect to it and only reports the error "Socket closed" (produced by kombu), which is not very informative.
I want detailed information about the connection failure. On the server side there is nothing useful in the RabbitMQ log file either; it just says "connection failed" with no reason given.
I tried the trace plugin (https://www.rabbitmq.com/firehose.html) and found there was no trace info published to the amq.rabbitmq.trace exchange when the connection failure happened. I enabled the plugin with:
rabbitmq-plugins enable rabbitmq_tracing
systemctl restart rabbitmq-server
rabbitmqctl trace_on
and then I wrote a client to get messages from the amq.rabbitmq.trace exchange:
#!/usr/bin/env python
from kombu.connection import BrokerConnection
from kombu.messaging import Exchange, Queue, Consumer, Producer


def on_message(body, message):
    # kombu calls registered callbacks with (body, message)
    print("RECEIVED MESSAGE: %r" % (body, ))
    message.ack()


def main():
    conn = BrokerConnection('amqp://admin:pass@localhost:5672//')
    channel = conn.channel()
    # declare the queue before binding it to the firehose exchange
    queue = Queue('debug', channel=channel, durable=False)
    queue.declare()
    queue.bind_to(exchange='amq.rabbitmq.trace', routing_key='publish.amq.rabbitmq.trace')
    consumer = Consumer(channel, queue)
    consumer.register_callback(on_message)
    consumer.consume()
    while True:
        conn.drain_events()


if __name__ == '__main__':
    main()
I also tried to get some debug logs from the RabbitMQ server. I reconfigured rabbitmq.config according to https://www.rabbitmq.com/configure.html and set
log_levels to
{log_levels, [{connection, info}]}
but as a result the RabbitMQ server failed to start. It seems the official doc does not apply to my version; my RabbitMQ server version is 3.3.5. However,
{log_levels, [connection,debug,info,error]}
or
{log_levels, [connection,debug]}
works, but with this there is no DEBUG info in the logs, and I don't know whether that is because the log_levels configuration is not taking effect or because no DEBUG messages are being produced at all.
I know that this answer comes massively late, but for future readers, this worked for me:
[
  {rabbit,
    [
      {log_levels, [{connection, debug}, {channel, debug}]}
    ]
  }
].
Basically, you just need to wrap the parameters you want to set in whichever module/plugin they belong to.
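For what it's worth, on newer RabbitMQ versions (3.7 and later) that use the sysctl-style rabbitmq.conf rather than the classic rabbitmq.config shown above, the rough equivalent is the per-category log level keys (this does not apply to 3.3.5):

# rabbitmq.conf (new-style format)
log.connection.level = debug
log.channel.level = debug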