How to ignore x-max-priority when using NServiceBus with RabbitMQTransport

I'm trying to figure out a way to migrate from an existing homegrown wrapper around RabbitMQ to NServiceBus.
The current obstacle is that the wrapper creates queues with the "x-max-priority" argument set to "9". NSB currently does not support priority queues, which is fine, since the wrapper never uses the priority anyway.
Unfortunately NSB can't connect to queues created by the wrapper and fails with the following error:
Unhandled exception. RabbitMQ.Client.Exceptions.OperationInterruptedException:
The AMQP operation was interrupted:
AMQP close-reason, initiated by Peer, code=406, text='PRECONDITION_FAILED - inequivalent arg 'x-max-priority' for queue 'xxxxxxx' in vhost '/':
received none but current is the value '9' of type 'signedint'', classId=50, methodId=10
Updating the wrapper would solve the issue for new queues, but recreating all the existing queues would mean deleting thousands of queues at hundreds of customer sites, so that's not really an option.
Is there a way to tell NSB to use this argument when it connects to existing queues, and simply not do anything with it?
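For reference, the 406 is simply the broker refusing to re-declare an existing queue with different arguments. A minimal sketch with the plain RabbitMQ Java client that reproduces the same failure (the broker on localhost and the queue name are assumptions):

import java.util.Map;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class InequivalentArgsDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // assumes a broker on localhost
        try (Connection conn = factory.newConnection()) {
            // What the homegrown wrapper does: declare the queue with a priority limit.
            Channel ch1 = conn.createChannel();
            Map<String, Object> queueArgs = Map.of("x-max-priority", 9);
            ch1.queueDeclare("demo.queue", true, false, false, queueArgs);

            // What NSB effectively does: re-declare the same queue without arguments.
            // The broker rejects this with 406 PRECONDITION_FAILED and closes the channel.
            Channel ch2 = conn.createChannel();
            ch2.queueDeclare("demo.queue", true, false, false, null);
        }
    }
}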

Related

Rabbitmq high availability queues without message replication

I have a RabbitMQ broker running on two nodes as a cluster. I have observed that if the node where a queue was created goes down, the queue is not available on the other node. If I try to publish a message from the other node, it fails. Even if I remove the failed node from the cluster (using the forget_cluster_node command) and try to publish a message from the other node, the behavior is the same.
I don't want to enable mirroring of the queue, for the simple reason that it would replicate the messages, which would be additional load on the inter-node network.
Is there a way in RabbitMQ to achieve this?
The behaviour you are experiencing is the default behaviour of RabbitMQ, and it is exactly what is supposed to happen. The node where you created the queue owns that queue, and if this node goes down, then any connections to it, and any queues or exchanges that live on it, will not work at all. There are two options to resolve this issue.
One option is to have a separate queue for every node, so that any node that wants to receive messages from a particular node can subscribe to that queue's exchange. This is not a very good idea, since you would need to manage a lot of things yourself.
The second option is to always declare the queue before you publish. If the queue is not available, a new queue takes its place, all the nodes subscribed to it are able to listen, and any producer node is able to post to it. This resolves the problem of a node going down or being unavailable. From the docs:
before sending we need to make sure the recipient queue exists. If we send a message to non-existing location, RabbitMQ will just drop the message. Let's create a hello queue to which the message will be delivered:
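In Java-client terms the quoted snippet amounts to something like this (a minimal sketch, assuming a broker on localhost; declaring is idempotent, so it is safe to do before every publish):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DeclareBeforePublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // assumes a broker on localhost
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // Creates the queue if it is missing; a no-op if it already exists
            // with the same properties.
            channel.queueDeclare("hello", false, false, false, null);
            channel.basicPublish("", "hello", null, "Hello World!".getBytes());
        }
    }
}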
RabbitMQ lets you import and export definitions. Definitions are JSON files which contain all broker objects (queues, exchanges, bindings, users, virtual hosts, permissions and parameters). They do not include the messages in the queues.
You can periodically export the definitions from the node that owns the queue and import them on the other node of the cluster. You have to enable the management plugin for this.
More information here: https://www.rabbitmq.com/management.html#configuration
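The export itself is a plain HTTP GET against the management API. A rough sketch, assuming the default guest/guest account and the management plugin listening on localhost:15672:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.Base64;

public class ExportDefinitions {
    public static void main(String[] args) throws Exception {
        // Basic-auth header for the default guest/guest account (an assumption).
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15672/api/definitions"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        // Saves the broker objects (queues, exchanges, bindings, ...) as JSON;
        // POSTing the same document back to /api/definitions imports it elsewhere.
        HttpClient.newHttpClient().send(request,
                HttpResponse.BodyHandlers.ofFile(Path.of("definitions.json")));
    }
}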

ActiveMQ initializer in OpenEJB/TomEE

I need to start a queue in OpenEJB in a "paused" state so no messages are processed by the consumer until some related data is available. I can programmatically pause the queue as shown here, so if there was some initializer function that is called when a queue is created I could use that method. The queue configuration documentation does not seem to support setting the paused state. Any ideas on how to configure the queue upon creation?
If you read the thread you linked, you will see that it is not a queue that gets paused but a broker.
In TomEE the broker is created from a factory through an SPI (looked up in the TomEE classloader, so in tomee/lib by default), so you can write your own factory, if that's an option, and start the broker programmatically when you are ready.
Now, I suspect you don't want to start the connectors with the container, but starting the broker itself is not an issue. Said otherwise: you don't want to be connected to any other machine through JMS, so that you don't receive anything, but it is fine if JMS is started and deployed.
In that case you can simply not configure any connector on the broker and add them when you are ready. You can find the brokers by doing:
new org.apache.openejb.resource.activemq.ActiveMQ5Factory().getBrokers()
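If it is really a single queue you want paused once the broker is up, its JMX MBean does expose pause/resume. This is a sketch under assumptions of mine (an in-VM broker named "localhost" with JMX enabled, and a queue called MY.QUEUE), not something TomEE sets up for you:

import java.lang.management.ManagementFactory;
import javax.management.JMX;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import org.apache.activemq.broker.jmx.QueueViewMBean;

public class PauseQueue {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Object name pattern used by ActiveMQ 5.8+; adjust broker and queue names.
        ObjectName name = new ObjectName(
                "org.apache.activemq:type=Broker,brokerName=localhost,"
                + "destinationType=Queue,destinationName=MY.QUEUE");
        QueueViewMBean queue = JMX.newMBeanProxy(server, name, QueueViewMBean.class);
        queue.pause();   // consumers stop being dispatched messages
        // ... later, once the related data is available:
        queue.resume();
    }
}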

RabbitMQ & Spring amqp retry without blocking consumers

I'm working with RabbitMQ and Spring AMQP, and I would prefer not to lose messages. By using an exponential back-off policy for retrying, I'm potentially blocking my consumers, which could instead be working on messages they can handle. I'd like to give failed messages several days to retry under the exponential back-off policy, but I don't want a consumer blocked for several days; I want it to keep working on the other messages.
I know we can achieve this kind of functionality with ActiveMQ (Retrying messages at some point in the future (ActiveMQ)), but I could not find a similar solution for RabbitMQ.
Is there a way to achieve this with Spring AMQP and RabbitMQ?
You can do it via a dead letter exchange. Reject the message so it is routed to the DLE/DLQ, have a separate listener container that consumes from the DLQ, and stop/start that container as needed.
Or, instead of the second container you can poll the DLQ using the RabbitTemplate receive (or receiveAndConvert) methods (on a schedule) and route the failed message(s) back to the primary queue.
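A rough sketch of the polling variant (the queue and exchange names are made up, and the schedule is arbitrary):

import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class DlqRequeuer {

    private final RabbitTemplate rabbitTemplate;

    public DlqRequeuer(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Drain whatever has accumulated in the DLQ and put it back on the
    // primary queue; receive() returns null once the queue is empty.
    @Scheduled(fixedDelay = 3600000)
    public void requeueFailedMessages() {
        Message message;
        while ((message = rabbitTemplate.receive("my.queue.dlq")) != null) {
            rabbitTemplate.send("", "my.queue", message);
        }
    }
}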

RabbitMQ dropping messages after the first one

I'm using celery 3.0.18 with RabbitMQ 3.0.2. I have a task sent to another application by using celery.send_task, and I can see the send_task call in my logs, I can see the packets leaving the worker instance, and I can see the packets reaching the RabbitMQ instance when I call tcpflow -ce -i any port 5672. However, only the first message gets to the queue. They all have the same routing key; I tried recreating the exchange and bindings, and even a new RabbitMQ instance, and nothing seems to work.
This used to work fine for months, until we had to rebuild RabbitMQ from scratch after a crash in our AWS infrastructure. Strangely, I have the exact same setup working in another application, using the same broker and the same exchange, binding and queue, and it works perfectly there. It also works when I send the messages to the same exchange using the same call from a management script, running from the shell on the same instance, but it doesn't work when they are sent from the celery task in the worker process.
Any ideas on what the problem might be?
Eventually, I figured out what was wrong, but it's not clear whether this is the expected behavior, a celery bug, or a RabbitMQ bug.
What happens is that, besides our application tasks, I have a custom logging handler used to send logs to a central location through RabbitMQ, using celery.send_task. This logging handler sends messages to an exchange named application.logger, with routing keys like application.logger.info, application.logger.warning, etc., and has bindings to route some logging levels to specific queues. This exchange, its bindings and its queues were created directly in RabbitMQ and not defined in Celery routes.
When the worker tried to send a message to this exchange and it didn't exist, Celery logged a 404 NOT_FOUND error. After that, tasks sent to other exchanges over the same connection weren't delivered. They were sent by the worker instance, we could see the packets arriving, and the RabbitMQ management screen for that connection even showed data arriving from the client in kb/s, but no messages were delivered.
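For what it's worth, the channel-level close is easy to reproduce with the plain Java client (the second exchange name below is made up):

import com.rabbitmq.client.AlreadyClosedException;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ChannelCloseDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // assumes a broker on localhost
        try (Connection conn = factory.newConnection()) {
            Channel channel = conn.createChannel();
            // Publishing is asynchronous: this call returns normally, but the
            // broker answers with 404 NOT_FOUND and closes the channel.
            channel.basicPublish("application.logger", "application.logger.info",
                    null, "lost".getBytes());
            Thread.sleep(500); // give the broker time to close the channel
            try {
                // Anything published on the dead channel afterwards goes nowhere;
                // clients that swallow this error appear to send messages that
                // never arrive.
                channel.basicPublish("other.exchange", "some.key", null,
                        "also lost".getBytes());
            } catch (AlreadyClosedException e) {
                System.out.println("Channel already closed: " + e.getMessage());
            }
        }
    }
}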

Is it possible to configure multiple queues to one shovel?

I've got a webservice that accepts messages that can be sent to a RabbitMQ cluster using whatever queue they define. This is so front-end devs can send messages via JavaScript.
I want to make the webservice more robust so that when we have network trouble, the webservice can still accept messages and then handle them when the network is back up. After some initial reading, it seems that the Shovel plugin should handle this nicely.
What I was thinking was to install a local instance of RabbitMQ on the webservice box with shovel turned on. I can then send all messages through the local RabbitMQ instance and have it push all messages to the cluster and deal with the network problems.
My problem is after looking at the documentation it seems that I have to configure every queue I want to forward to in the shovel config file. If that's the case I'm not sure this will work since we allow clients to define a queue through the webservice on the fly.
I would like the webservice to take the messages, hand them off to the local RabbitMQ instance, and have it pass the messages on to the cluster using the same queues/exchanges/etc.
Has anyone tried this, or can anyone explain how the shovel plugin works?
Have you considered sending messages to an exchange instead of a queue? Send all messages to one exchange, possibly a topic exchange if you need that kind of flexibility, and then have the consumers handle the different messages, or bind different queues to the exchange. Sending to one exchange would make configuring the shovel considerably easier.
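A sketch of what that looks like with the plain Java client (all names are made up): the webservice publishes everything to a single topic exchange, one buffer queue catches it all, and the shovel only has to be configured for that one queue:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class SingleExchangeFrontDoor {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // the local buffering instance
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // One durable topic exchange is the single front door for the webservice.
            channel.exchangeDeclare("frontend.messages", "topic", true);

            // One buffer queue catches everything ("#" matches any routing key),
            // so the shovel needs to know about this queue only.
            channel.queueDeclare("frontend.buffer", true, false, false, null);
            channel.queueBind("frontend.buffer", "frontend.messages", "#");

            // The queue names clients used to choose become routing keys instead.
            channel.basicPublish("frontend.messages", "orders.created", null,
                    "{\"id\": 42}".getBytes());
        }
    }
}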