I have two queues: a main queue and a DLQ, each with its own exchange. Both exchanges are topic exchanges.
I process messages in the main queue; when a problem occurs, I redirect them to the DLX.
My problem is with the DLQ: when I manually move messages back to the main queue, they are not processed. The messages are manually forwarded through the default exchange (AMQP default), with the queue name as the routing key.
I think this is because of a different routing key and exchange.
How can this be solved?
Based on what you wrote, it sounds like you are using the RabbitMQ web manager to re-publish.
If that is the case, you can copy the message details (body, headers, ...) and re-publish them from the Exchange view of the web manager. In the Exchange view, click on the 'main' exchange name and scroll to the 'Publish message' section. Paste in the message data, set your routing key there, and click the Publish button.
If you are using an AMQP client, the process is similar. The important point is to set the exchange and routing key as desired before you publish.
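For example, a minimal pika sketch of that re-publish, assuming the main exchange is literally named main and the main queue is bound with the key main.task (both placeholders for whatever your setup uses):

import pika

# Connect to the broker (adjust host/credentials to your environment).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Re-publish the message body to the main topic exchange with the routing key
# the main queue is actually bound with, instead of the AMQP default exchange.
channel.basic_publish(
    exchange="main",          # hypothetical name of the main topic exchange
    routing_key="main.task",  # hypothetical key matching the main queue's binding
    body=b'{"payload": "message copied back from the DLQ"}',
    properties=pika.BasicProperties(content_type="application/json", delivery_mode=2),
)

connection.close()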
Using the remote procedure call pattern, I need to send an answer to a reply-queue, i.e. I need to send the message to the default exchange with the name of the queue as the routing key.
I am using the SmallRye Reactive Messaging RabbitMQ plugin on Quarkus. All channels are defined statically in the configuration files (which is fine); however, due to the way the configuration mechanism works (MicroProfile Config), I cannot use the empty string as a configuration value, and the empty string is the name of the default exchange.
It does not help to omit the name of the exchange, as by default the channel name is used.
Is there a way to send a message to the default exchange using the SmallRye RabbitMQ plugin?
Edit: I have no control over the RabbitMQ server.
You should be able to send messages to the default direct RabbitMQ exchange by setting the attributes below:
exchange.name (set to empty string)
exchange.type (set to direct)
Assuming your Reactive Messaging channel is named pets-out, here is a configuration sample:
mp.messaging.outgoing.pets-out.connector=smallrye-rabbitmq
mp.messaging.outgoing.pets-out.exchange.name=
mp.messaging.outgoing.pets-out.exchange.declare=false
mp.messaging.outgoing.pets-out.exchange.type=direct
mp.messaging.outgoing.pets-out.default-routing-key=pets
EDIT
After digging into the smallrye-reactive-messaging implementation, I found that an empty exchange name causes a fallback to the channel name as the exchange name.
Hence, there appears to be no way to send messages directly to the default RabbitMQ exchange.
An alternative, setting aside the default exchange offered out of the box, would be to:
Create a direct exchange without any bound queues and have the Outgoing message handler use a dedicated channel config bound to it:
mp.messaging.outgoing.pets-out.connector=smallrye-rabbitmq
mp.messaging.outgoing.pets-out.exchange.name=my-direct
mp.messaging.outgoing.pets-out.exchange.declare=true
mp.messaging.outgoing.pets-out.exchange.type=direct
mp.messaging.outgoing.pets-out.default-routing-key=pets
Create an alternate exchange configuration for the my-direct exchange that routes messages to the default one. This can be set on the RabbitMQ broker directly using rabbitmqctl:
rabbitmqctl set_policy AE "^my-direct$" '{"alternate-exchange":""}' --apply-to exchanges
We are sending AMQP messages to RabbitMQ and are setting the message-ttl property.
If messages expire, they are moved to the defined DLQ.
Is it possible to have expired messages moved to a separate DLQ so that they do not interfere with other messages moved to the DLQ for more serious reasons?
Yes, this is possible.
You need to set a Dead Letter Exchange on your queue, and configure the message routing key to change when messages expire. Use the x-dead-letter-routing-key argument for this.
Then bind a new queue to your DLX with the dead letter routing key you just defined.
Expired messages will then be sent by RabbitMQ to the DLX, which will route them to the queue you have explicitly defined only for expired messages.
More about this here: https://www.rabbitmq.com/dlx.html.
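As an illustration, here is a minimal pika sketch of that setup; the names dlx, main-queue, expired and expired-queue are placeholders, not anything mandated by RabbitMQ:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Dead letter exchange that will receive the expired messages.
channel.exchange_declare(exchange="dlx", exchange_type="direct", durable=True)

# Main queue: the per-message TTL is still set by the publisher as in the
# question; dead-lettered messages go to "dlx" under the routing key
# "expired" instead of their original routing key.
channel.queue_declare(
    queue="main-queue",
    durable=True,
    arguments={
        "x-dead-letter-exchange": "dlx",
        "x-dead-letter-routing-key": "expired",
    },
)

# Dedicated queue bound only for the expired messages.
channel.queue_declare(queue="expired-queue", durable=True)
channel.queue_bind(queue="expired-queue", exchange="dlx", routing_key="expired")

connection.close()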
I have two applications, appA and appB. They have aQueue and bQueue respectively, and each application has a ReceiveEndpoint for its queue. Both applications use the same RabbitMQ host.
appA sends the CreateEntityCommand command to appB's bQueue using the bus.Send method.
In appB, I have a consumer that consumes CreateEntityCommand.
** so far so good **
Question #1 :
If my appB consumer successfully creates the entity, I publish an EntityCreatedEvent. My EntityCreatedEvent consumer in appA receives it correctly, but the event is also added to bQueue_skipped. Why?
Question #2 :
Now, if my appB consumer throws an exception, my appA has to be notified. A Fault is generated in the bQueue. I would like appA to consume the Fault, but the Fault automatically ends up on bQueue. If I add a ReceiveEndpoint in appA to listen to bQueue, I get a lot of dead letters (skipped queue).
As a rule of thumb, if your messages get to the dead-letter (skipped) queue, it means that there is a binding between the message type exchange and the queue exchange, but your endpoint has no consumer for a given message type.
It usually happens, when you used to have a consumer and then removed it. MassTransit won't remove the binding for you, but it also won't know how to process messages that keep coming.
You can delete the obsolete binding in the RMQ management UI by doing the following:
Open the endpoint queue
Click on bindings, there is only one there, pointing to the endpoint exchange
Follow the link to open the endpoint exchange and see the bindings to message type exchanges
There, you can remove those bindings that you no longer need
If you have no messages in the queue, you can also just remove it and MassTransit will create everything for you, from scratch.
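If you prefer to script the cleanup rather than click through the UI, the same unbind can be done from any AMQP client. A rough pika sketch, where both exchange names are placeholders for whatever MassTransit actually created in your broker:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# MassTransit binds the message type exchange (source) to the endpoint
# exchange (destination); removing that binding stops the copies that
# end up in the skipped queue.
channel.exchange_unbind(
    destination="bQueue",                   # hypothetical endpoint exchange name
    source="Contracts:EntityCreatedEvent",  # hypothetical message type exchange name
    routing_key="",
)

connection.close()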
Using RabbitMQ as the broker, I would like to copy all the messages from one queue to another queue for test/debug purposes. What's the simplest way via the RabbitMQ web management console / CLI?
P.S. In the web console, for a given queue, I can only Move messages rather than Copy them to a new queue.
When I need to perform such tasks, I do as follows (assuming you want to copy all of the messages from your reference queue):
create a fanout exchange, or use the built-in one (amq.fanout) if it isn't bound to any queue
bind the reference queue to it
bind the "duplicate" queue to it
configure a shovel to send all the messages in the reference queue to the exchange you bound to both queues, with auto-delete set to "After initial length transferred"
But it does mean that if messages arrive at the reference queue through its normal flow, they will end up at the top of the queue, with the "copied" messages behind/mixed with them
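A small pika sketch of the first three steps (the shovel itself is then configured separately, for example from the management UI); the exchange name tmp-copy and the queue names are placeholders:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Temporary fanout exchange used only for the copy operation.
channel.exchange_declare(exchange="tmp-copy", exchange_type="fanout")

# Bind both the reference queue and the duplicate queue to it, so every
# message the shovel publishes to the exchange lands in both queues.
channel.queue_declare(queue="duplicate-queue", durable=True)
channel.queue_bind(queue="reference-queue", exchange="tmp-copy")
channel.queue_bind(queue="duplicate-queue", exchange="tmp-copy")

connection.close()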
Just create another queue bound with the same routing key if the exchange is a direct exchange.
Go to http://localhost:15672/#/queues
Create a vhost (vhost=testhost)
Create two queues in that vhost (Test1, Test2)
Create exchange Test_exchange: http://localhost:15672/#/exchanges
Bind these queues (Test1 & Test2) to Test_exchange
Install shovel
sudo rabbitmq-plugins enable rabbitmq_shovel
sudo rabbitmq-plugins enable rabbitmq_shovel_management
Add a shovel using the Admin > Shovel tab
Source URI: amqp://{user}:{pass}@localhost:5672/{vhost} (this is for the reference queue you want to copy; include the vhost if it has one)
Destination URI: amqp://user:pass@localhost:5672/Test_exchange
Queue Name: "Test_exchange"
You can now send messages to your reference queue.
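If you would rather script step 7 than use the Admin tab, a dynamic shovel can also be created through the management HTTP API. A rough sketch in Python with requests, assuming Test1 is the reference queue and reusing the names from the steps above (double-check the parameter keys against your RabbitMQ version's dynamic shovel documentation):

import json
import requests

# Shovel definition: drain the current contents of Test1 into Test_exchange,
# then stop once the initial queue length has been transferred.
shovel = {
    "value": {
        "src-uri": "amqp://user:pass@localhost:5672/testhost",
        "src-queue": "Test1",
        "dest-uri": "amqp://user:pass@localhost:5672/testhost",
        "dest-exchange": "Test_exchange",
        "src-delete-after": "queue-length",
    }
}

resp = requests.put(
    "http://localhost:15672/api/parameters/shovel/testhost/copy-test1",
    auth=("user", "pass"),
    headers={"content-type": "application/json"},
    data=json.dumps(shovel),
)
resp.raise_for_status()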
There's a commercial tool, QueueExplorer (disclaimer - I'm the author) which allows you to copy messages, among other things.
I'm using celery 3.0.18 with RabbitMQ 3.0.2. I have a task sent to another application using celery.send_task. I can see the send_task call in my logs, I can see the packets leaving the worker instance, and I can see the packets reaching the RabbitMQ instance when I run tcpflow -ce -i any port 5672, yet only the first message gets to the queue. They all have the same routing key; I have tried recreating the exchange and bindings, and even a new RabbitMQ instance, and nothing seems to work.
This used to work fine for months, until we had to rebuild RabbitMQ from scratch after a crash in our AWS infrastructure. Strangely, I have the exact same setup working in another application, using the same broker and the same exchange, binding and queue, and it works perfectly there. It also works when I send the messages to the same exchange using the same call from a management script run from the shell on the same instance, but it doesn't work when they're sent from the celery task in the worker process.
Any ideas on what the problem might be?
Eventually, I figured out what was wrong, but it's not clear whether this is expected behavior, a celery bug, or a RabbitMQ bug.
What happens is that, besides our application tasks, I have a custom logging handler that sends logs to a central location over RabbitMQ, also using celery.send_task. This logging handler sends messages to an exchange named application.logger, with routing keys like application.logger.info, application.logger.warning, etc., and there are bindings to route some logging levels to specific queues. This exchange, its bindings and the queues were created directly in RabbitMQ and are not defined in Celery routes.
When the worker tried to send a message to this exchange and it didn't exist, Celery logged a 404 NOT_FOUND error. After that, tasks sent to other exchanges over the same connection weren't delivered. They were sent by the worker instance; we could see the packets arriving, and the RabbitMQ management screen for that connection even showed data arriving from the client in kB/s, but no messages were delivered.
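One way to avoid hitting that 404 in the first place is to make sure the exchange and its bindings exist before the worker publishes to them. A hedged sketch with pika, where the queue name and the warning-level binding are only examples of the layout described above:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Recreate the logging exchange and a binding the handler expects, so that
# a send_task to it cannot trigger a 404 NOT_FOUND channel error.
channel.exchange_declare(exchange="application.logger", exchange_type="topic", durable=True)
channel.queue_declare(queue="application-logs-warning", durable=True)  # hypothetical queue name
channel.queue_bind(
    queue="application-logs-warning",
    exchange="application.logger",
    routing_key="application.logger.warning",
)

connection.close()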