I am successfully publishing to an exchange with the routing key GetToDosCommand.todo_rss; the log message is:
[12:11:59] [$20] [DEBUG] [BaseBusClient`1]: Initiating publish for message 'GetToDosCommand' on exchange 'commands' with routing key GetToDosCommand.todo_rss.
I have bound several queues to that exchange with routing keys that I expect to match, but only the catch queue receives any messages:
queue - routing key
get1  - GetToDosCommand.todo_rss
get2  - GetToDosCommand.#
catch - #
Why aren't the other queues getting the messages?
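For reference, the bindings above correspond to declarations along these lines (a minimal sketch using the RabbitMQ Java client; the host is an assumption, and note that # only acts as a wildcard on a topic exchange, so that type is assumed here):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BindQueues {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            // Wildcard bindings only match as patterns on a topic exchange
            ch.exchangeDeclare("commands", "topic", true);
            String[][] bindings = {
                {"get1", "GetToDosCommand.todo_rss"},
                {"get2", "GetToDosCommand.#"},
                {"catch", "#"}
            };
            for (String[] b : bindings) {
                ch.queueDeclare(b[0], true, false, false, null); // durable queue
                ch.queueBind(b[0], "commands", b[1]);            // queue, exchange, routing key
            }
        }
    }
}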
I am trying to consume from RabbitMQ using Apache Camel, and I have set autoAck=false to do manual acknowledgement. I then want to redirect failed messages to a dead letter queue, but the messages disappear from my placement queue without arriving in the deadLetter queue.
What am I doing wrong?
onException(NullPointerException.class)
    .log(LoggingLevel.ERROR, "This is error message : ${exception}")
    .to("rabbitmq:deadLetterEx?exchangeType=direct&queue=deadLetter.qu&routingKey=deadLetter&autoDelete=false&autoAck=false");

from("rabbitmq:event.ex?exchangeType=topic&queue=placement.qu&routingKey=event.placement&durable=true&autoDelete=false&autoAck=false")
    .log(LoggingLevel.ERROR, "Received from Rabbit: ${body}")
    .process(this::enrichPlacement)
    .end();
Can you help me with the RabbitMQ input in Logstash?
My application sends code versions to RabbitMQ, and from there they are stored in the Elastic stack.
A queue was created in RabbitMQ for the app:
name: app_version_queue
type: classic
durable: true
Logstash was then configured with this config:
input {
  rabbitmq {
    id => "rabbitmyq_id"
    # connect to rabbit
    host => "localhost"
    port => 5672
    vhost => "/"
    # INPUT - PRODUCERS
    key => "app_version_queue"
    # OUTPUT - CONSUMER
    # queue for logstash
    queue => "logstash"
    auto_delete => false
    # Exchange for logstash
    exchange => "logstash"
    exchange_type => "direct"
    durable => "true"
    # No ack will boost your perf
    ack => false
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
    index => "app_version-%{+YYYY.MM.dd}"
  }
}
It worked, but now, in the RabbitMQ console, I see this in the Queued messages table:
Ready: 914,444
Unacked: 0
Total: 914,444
And the disk space on the RabbitMQ cluster fills up completely within 3 days.
After rebooting the RabbitMQ server, all the space is freed.
UPDATED:
The reason I am doing all this is that I want to remove NiFi from the chain app => rabbit => nifi => elastic.
I want instead: app => rabbit => logstash => elastic.
Queue1 - app_version_queue: the queue my application sends messages to, which NiFi consumes and forwards to Elastic.
Queue2 - logstash: the queue I created with Logstash.
I tried to stop NiFi, but the messages are not leaving.
It sounds like what's happened is you've created the infrastructure twice:
Once manually in RabbitMQ
Once in the configuration options to LogStash
What you need is just three things:
An exchange for the application to publish messages to.
A queue for LogStash to consume messages from.
A binding between that exchange and that queue; the queue will get a copy of every message published to the exchange with a matching routing key.
What you have is all of this:
An exchange called logs (created manually) which your application publishes messages to.
A queue called app_version_queue (created manually) which nothing consumes from.
A binding (created manually) delivering copies of messages from logs into app_version_queue, which then sit there forever.
An exchange called logstash (created by LogStash) which nothing publishes messages to.
A queue called logstash (created by LogStash) which LogStash consumes messages from.
A binding (created by LogStash) from the logstash exchange to the logstash queue which doesn't do anything, because no messages are published to that exchange.
A binding (created manually) from the logs exchange to the logstash queue which is actually delivering the messages from your application.
So, for each of the three things (the exchange, the queue, and the binding) you need to:
Decide on a name
Decide if you're creating it, or letting LogStash create it
Configure everything to use the same name
For instance, you could keep the names logs and app_version_queue, and create everything manually.
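Creating everything manually could look like this (a minimal sketch with the RabbitMQ Java client; the management UI or rabbitmqadmin would do equally well):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class CreateInfrastructure {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            ch.exchangeDeclare("logs", "direct", true);                      // durable exchange
            ch.queueDeclare("app_version_queue", true, false, false, null); // durable queue
            // Copy messages published to "logs" with this routing key into the queue
            ch.queueBind("app_version_queue", "logs", "app_version_queue");
        }
    }
}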
Then your LogStash configuration would look something like this:
input {
  rabbitmq {
    id => "rabbitmyq_id"
    # connect to rabbit
    host => "localhost"
    port => 5672
    vhost => "/"
    # Consume from existing queue
    queue => "app_version_queue"
    # No ack will boost your perf
    ack => false
  }
}
On the other hand, you could create just the logs exchange, and let LogStash create the queue and binding, like this:
input {
  rabbitmq {
    id => "rabbitmyq_id"
    # connect to rabbit
    host => "localhost"
    port => 5672
    vhost => "/"
    # Create a new queue
    queue => "logstash_processing_queue"
    durable => "true"
    # Take a copy of all messages with the "app_version_queue" routing key from the existing exchange
    exchange => "logs"
    key => "app_version_queue"
    # No ack will boost your perf
    ack => false
  }
}
Or you could let LogStash create all of it, and make sure your application publishes to the right exchange:
input {
  rabbitmq {
    id => "rabbitmyq_id"
    # connect to rabbit
    host => "localhost"
    port => 5672
    vhost => "/"
    # Create a new queue
    queue => "logstash_processing_queue"
    durable => "true"
    # Create a new exchange; point your application to publish here!
    exchange => "log_exchange"
    exchange_type => "direct"
    # Take a copy of all messages with the "app_version_queue" routing key from the new exchange
    key => "app_version_queue"
    # No ack will boost your perf
    ack => false
  }
}
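On the application side, publishing "to the right exchange" would then look something like this (a sketch with the RabbitMQ Java client; the exchange and routing key names follow the example above, and the message body is made up):

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PublishVersion {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            byte[] body = "app v1.2.3".getBytes(StandardCharsets.UTF_8); // hypothetical payload
            // Publish to the exchange LogStash binds to, with the agreed routing key
            ch.basicPublish("log_exchange", "app_version_queue", null, body);
        }
    }
}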
I'd probably go with the middle option: the exchange is a part of the application's deployment requirements (it will produce errors if it can't publish there), but any number of queues might bind to it for different reasons (maybe none at all in a test environment, where you don't need ElasticSearch set up).
I'm using the RPC pattern for processing my objects with RabbitMQ.
Suppose I have an object, and I want the processing to finish before an ack is sent back to the RPC client.
The ack has a default timeout of about 3 minutes, and my processing takes a long time.
How can I change this timeout for the ack of each object, or what should I do to handle long-running processes like these?
Modern versions of RabbitMQ have a delivery acknowledgement timeout:
In modern RabbitMQ versions, a timeout is enforced on consumer delivery acknowledgement. This helps detect buggy (stuck) consumers that never acknowledge deliveries. Such consumers can affect node's on disk data compaction and potentially drive nodes out of disk space.
If a consumer does not ack its delivery for more than the timeout value (30 minutes by default), its channel will be closed with a PRECONDITION_FAILED channel exception. The error will be logged by the node that the consumer was connected to.
The error message will be:
Channel error on connection <####> :
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
The timeout is 30 minutes (1,800,000 ms) by default [1] and is configured by the consumer_timeout parameter in rabbitmq.conf.
[1] The timeout was 15 minutes (900,000 ms) before RabbitMQ 3.8.17.
If you run RabbitMQ in Docker, you can declare a volume for the rabbitmq.conf file, create that file inside the volume, and set consumer_timeout there.
For example, with docker compose:
version: "2.4"
services:
  rabbitmq:
    image: rabbitmq:3.9.13-management-alpine
    container_name: rabbitmq  # pick your own name
    ports:
      - 5672:5672
      - 15672:15672  # only needed if you use the management GUI
    volumes:
      - /etc/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
And you need to create the rabbitmq.conf file on your server under /etc/rabbitmq/.
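For illustration, the file content can be as small as this one setting (the value is in milliseconds; one hour here is just an example):

# /etc/rabbitmq/rabbitmq.conf
consumer_timeout = 3600000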
Documentation with the parameters: https://github.com/rabbitmq/rabbitmq-server/blob/v3.8.x/deps/rabbit/docs/rabbitmq.conf.example
I have a Spring Boot application that uses Camel RabbitMQ to consume messages from a queue. I use a URI to declare the queue with a dead letter exchange, but I'm not supplying the deadLetterRoutingKey option because I want messages going to the DLX to keep their original routing key. When the application starts, it throws the following error:
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error;
protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - invalid arg 'x-dead-letter-routing-key' for queue 'entry.paid.erp' in vhost '/': {unacceptable_type,void}, class-id=50, method-id=10)
Is it possible to configure Camel to have this behavior?
Some additional information:
Camel version: 2.19.1
Spring Boot version: 1.5.4.RELEASE
Example of the URI I'm using:
rabbitmq://server:port/my-exchange
?connectionFactory=#connectionFactory
&exchangeType=topic
&queue=my-queue
&autoAck=true
&durable=true
&autoDelete=false
&exclusive=false
&automaticRecoveryEnabled=true
&concurrentConsumers=15
&deadLetterExchange=dlx-exchange
&deadLetterExchangeType=fanout
&deadLetterQueue=dlx-queue
When I set a value for deadLetterRoutingKey, the application starts with no errors.
Thanks!
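For reference, at the broker level the setup you describe corresponds to declaring the queue with only the x-dead-letter-exchange argument; when x-dead-letter-routing-key is absent, RabbitMQ dead-letters messages with their original routing keys. A minimal sketch with the RabbitMQ Java client (declaring the queue yourself before Camel starts; the host and names are taken from the URI above):

import java.util.HashMap;
import java.util.Map;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DeclareQueueWithDlx {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("server"); // assumption: broker host from the URI above
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            Map<String, Object> queueArgs = new HashMap<>();
            // Only the dead letter exchange is set; with no
            // x-dead-letter-routing-key, the original routing key is kept.
            queueArgs.put("x-dead-letter-exchange", "dlx-exchange");
            ch.queueDeclare("my-queue", true, false, false, queueArgs);
        }
    }
}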
We are trying to implement a request/response scenario where the messages will be deleted when the server (consumer) is down. We start with no exchanges or queues in the RabbitMQ installation.
There is a server which creates its own exchange/queue, and we want this to be auto-delete=true.
If the server is up before the client, the exchange is created with the correct configuration. But when the client comes up, we get this error:
RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text="PRECONDITION_FAILED - inequivalent arg 'auto_delete' for exchange 'simple_request' in vhost '****': received 'false' but current is 'true'", classId=40, methodId=10, cause=
If the client is up first and tries to send a message, an exchange is created with the queue name that we have defined, but it is not auto-delete=true, which results in this error when the server is eventually started:
RabbitMQ receive transport failed: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text="PRECONDITION_FAILED - inequivalent arg 'auto_delete' for exchange 'simple_request' in vhost '****': received 'true' but current is 'false'", classId=40, methodId=10, cause=RabbitMQ receive transport failed: The supervisor is stopping, no additional scopes can be created
How do we implement auto-delete queues in a request response scenario?
You can update the URI in your client for the service queue to include query string parameters so that the queue is created properly.
rabbitmq://host/vhost/queue?autodelete=true&durable=false
Note that I included durable=false, but that's only needed if you're using a non-durable queue; I included it for completeness.