I am new to ActiveMQ, but sometimes the queues are not being processed and keep piling up. Is it good practice to purge? Isn't there any other solution that would let me keep all my messages for reprocessing, apart from purging? I really don't want to lose the messages. Is this possible?
The correct way to deal with this is to set an expiration on messages so that after a given time the broker can discard them. Letting messages just pile up in queues without regard to their lifetime will lead you into all sorts of problems, most notably storage.
You need to develop a strategy for how long the messages should live so that the broker can start getting rid of them once they are no longer of use. If you don't do that, then purging the queue is your only option.
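For example, with the JMS client you can set a time-to-live on the producer so every message it sends carries an expiration. This is only a minimal sketch; the broker URL, the ORDERS queue name, and the 60-second TTL are placeholder assumptions:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ExpiringProducer {
    public static void main(String[] args) throws JMSException {
        // Broker URL and queue name are placeholders for illustration.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("ORDERS"));

        // Every message sent by this producer expires after 60 seconds,
        // so the broker can discard (or dead-letter) it instead of
        // letting it pile up forever.
        producer.setTimeToLive(60_000);

        producer.send(session.createTextMessage("order #42"));
        connection.close();
    }
}
```

Pick the TTL based on how long a message is actually worth reprocessing; by default ActiveMQ's dead-letter strategy moves expired messages to ActiveMQ.DLQ, which still gives you a window to inspect them.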
I would like to create a cluster for high availability and put a load balancer in front of it. In our configuration we would like to create exchanges and queues manually, so once the exchanges and queues are created, no client should make a call to redeclare them. I am using a direct exchange with a routing key, so it's possible to route the messages into different queues on different nodes. However, I have some issues with clustering and queues.
As far as I read in the RabbitMQ documentation, a queue is specific to the node it was created on. Moreover, we can have only one queue with the same name in a cluster, and it has to be alive at the time of publish/consume operations. If the node dies, then the queue on that node will be gone and messages may not be recoverable (depending on the configuration, of course). So, even if I route the same message to different queues on different nodes, I still have to figure out how to use them in order to continue consuming messages.
I wonder if it is possible to handle this failover scenario without using mirrored queues. Say I would like to switch to a new node in case of a failure and continue to consume from the same queue. Because the publisher is just using a routing key and these messages can go into more than one queue, the same is not possible for the consumers.
In short, what can I do to cope with failures in the environment described in the first paragraph? Is queue mirroring, with its performance penalty in the cluster, the best approach, or does a more practical solution exist?
Data replication (mirrored queues in RabbitMQ) is a standard approach to achieving high availability. I suggest using it. If you don't replicate your data, you will lose it.
If you are worried about performance: RabbitMQ does not scale well.
The only way I know to improve performance is to make your nodes bigger or to create a second cluster; adding nodes to a cluster does not really improve things. Also, if you are planning to use TLS, it will decrease throughput significantly as well. If you have a high-throughput requirement plus HA, I'd consider Apache Kafka.
If your use case allows you not to care about HA, then just re-declare queues/exchanges whenever your consumers/publishers connect to the broker, which is absolutely fine. When you declare a queue that already exists, nothing bad happens and the queue won't be purged; the same goes for exchanges.
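As a sketch of that declare-on-connect pattern with the RabbitMQ Java client (the host, exchange, queue, and routing-key names here are made-up examples):

```java
import com.rabbitmq.client.*;

public class DeclareOnConnect {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder host
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // Declares are idempotent: if the exchange/queue already exists
            // with the SAME properties, these calls are no-ops and nothing
            // is purged. (Different properties would fail the declare.)
            channel.exchangeDeclare("events", BuiltinExchangeType.DIRECT, true);
            channel.queueDeclare("events.audit", true, false, false, null);
            channel.queueBind("events.audit", "events", "audit");

            channel.basicPublish("events", "audit",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "hello".getBytes());
        }
    }
}
```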
Also, check out the RabbitMQ sharding plugin; maybe that will do for your use case.
I have one RabbitMQ publisher publishing to a direct exchange. There are multiple RabbitMQ consumers bound to the direct exchange with different routing keys.
A few of these consumers might take more time to process a message.
My question is: does one slow consumer affect the performance of other consumers, even though they are bound with different routing keys?
One slow consumer will have no effect on other consumers. Each consumer is independent and can work as fast or as slow as necessary for your application.
It will affect other consumers in the terrible case that said consumer's queue starts backing up badly, to the point where you hit the server memory watermark. If that happens, though, you need to review what's going on in your system for such a situation to arise.
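To illustrate the independence, here is a minimal Java-client sketch with one queue per routing key; the exchange/queue names and the prefetch of 10 are illustrative assumptions, not anything from the question:

```java
import com.rabbitmq.client.*;

public class IndependentConsumers {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder host
        Connection conn = factory.newConnection();

        // One queue per routing key: a slow handler only delays its own
        // queue, while the broker keeps delivering to the other queues.
        for (String key : new String[]{"fast", "slow"}) {
            Channel ch = conn.createChannel();
            ch.exchangeDeclare("work", BuiltinExchangeType.DIRECT, true);
            ch.queueDeclare("work." + key, true, false, false, null);
            ch.queueBind("work." + key, "work", key);
            ch.basicQos(10); // bounds the unacked backlog per consumer
            ch.basicConsume("work." + key, false, (tag, msg) -> {
                // ...process; a slow body here only stalls this queue...
                ch.basicAck(msg.getEnvelope().getDeliveryTag(), false);
            }, tag -> { });
        }
    }
}
```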
I'm using RabbitMQ as a message queue in a service-oriented architecture, where many separate web services publish messages bound for RabbitMQ queues. Those queues are in turn subscribed to by various consumers, which perform background work; a pretty vanilla use-case for RabbitMQ.
Now I'd like to change some of the queue parameters (specifically, I'd like to bind queues to a new dead-letter exchange with a certain routing key). My problem is that making this change in place on a production system is problematic for a couple reasons.
What's the best way for me to transition to these new queues without losing messages in a production system?
I've considered everything from versioning queue names to making a new vhost with the new settings to doing all the changes in place.
Here are some of the problems I'm facing:
Because RabbitMQ queue declarations are idempotent, the disparate web services have been declaring the queues before publishing to them (in case they don't already exist). Once you change the queue parameters (but maintain the same routing key), the queue declare fails and RabbitMQ closes the channel (see the sketch after this list).
I'd like not to lose messages when changing a queue (here I'm planning on subscribing an exclusive consumer that saves the messages and then republishes them to the new queue).
General coordination between disparate publishers and the consumer base (or, even better, a way to avoid needing to coordinate them).
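Here is a small Java-client sketch of the failure mode from the first bullet; the queue name and the added dead-letter argument are hypothetical:

```java
import com.rabbitmq.client.*;
import java.util.Map;

public class RedeclareFails {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder
        Connection conn = factory.newConnection();

        Channel ch1 = conn.createChannel();
        ch1.queueDeclare("jobs", true, false, false, null); // original declaration

        // Re-declaring with different arguments (here: adding a DLX) fails
        // with PRECONDITION_FAILED (406) and closes the channel, which is
        // exactly what breaks publishers that declare-before-publish.
        Channel ch2 = conn.createChannel();
        try {
            ch2.queueDeclare("jobs", true, false, false,
                    Map.of("x-dead-letter-exchange", "dlx"));
        } catch (java.io.IOException e) {
            System.out.println("declare failed, channel closed: " + e.getMessage());
        }
        conn.close();
    }
}
```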
Queue bindings can be added and removed at runtime without any impact on clients, unless clients manually modify bindings. So if your question is only about bindings, just change them via the CLI or the web management panel and skip what is written below.
It's a common problem to make backward-incompatible changes, especially in a heterogeneous environment, and especially when multiple applications attempt to declare the same entity in their own way (with their specific settings). There is no easy way to change a queue declaration at the same time in multiple applications; it highly depends on how your whole workflow is organized, how critical your apps are, what your infrastructure looks like, etc.
Fast and dirty way:
While the publishers don't deal with queue declaration and bindings (at least they should not do that), you can focus on the consumers. Wrapping queue declaration in a try-except block may be the fast and dirty choice. Also, most projects can survive a small downtime, so you can block the rabbitmq user in one shell, alter the queue as you wish (create a new one and make your consumers use it instead of the old one) and then unblock the user and let the consumers work as before (your workers are under supervisor or monit, right?). Then manually migrate messages from the old queue to the new one.
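A sketch of that try-except fallback (try/catch in Java); the queue name and the new dead-letter argument are hypothetical:

```java
import com.rabbitmq.client.*;
import java.io.IOException;
import java.util.Map;

public class TolerantDeclare {
    // Try to declare the queue with the new arguments; if it already exists
    // with the old arguments, the declare fails and kills the channel, so we
    // reopen a channel and fall back to using the queue as it exists today.
    static Channel declareOrFallBack(Connection conn, String queue) throws Exception {
        Channel ch = conn.createChannel();
        try {
            ch.queueDeclare(queue, true, false, false,
                    Map.of("x-dead-letter-exchange", "dlx")); // hypothetical new args
        } catch (IOException declareFailed) {
            ch = conn.createChannel();     // the failed channel is unusable
            ch.queueDeclarePassive(queue); // assert the queue exists, as-is
        }
        return ch;
    }
}
```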
Fast and safe solution:
It is a bit tricky and based on a hack for migrating messages from one queue to another inside a single vhost. The whole solution works inside a single vhost but requires an extra queue for every queue you want to modify. Set up a Dead Letter Exchange on the source queue and point it to route expired messages to your new target queue. Then apply a Per-Queue Message TTL to the source queue, setting x-message-ttl=0 (its minimal value; see the "No Queueing at All" note about immediate delivery). Both actions can be done via the CLI or the management panel, and both can be applied to an already-declared queue. In this way your publishers can publish messages as usual, and even the old consumers can keep working as expected at first, while in parallel new consumers can consume from the new queue, which can be pre-declared with the new args manually or in some other way.
Note that on queues with a large number of messages and a heavy message flow there is some risk of hitting flow-control limits, especially if your server is utilizing almost all of its resources.
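As an illustrative sketch of that hack via the CLI (the queue names 'src' and 'dst' are placeholders, and this assumes the new target queue 'dst' is already declared; dead-lettering through the default exchange "" routes straight to the queue named by the routing key):

```
# Expire everything immediately (message-ttl=0) and dead-letter it
# into the new queue 'dst' via the default exchange.
rabbitmqctl set_policy migrate-src '^src$' \
  '{"message-ttl":0,"dead-letter-exchange":"","dead-letter-routing-key":"dst"}' \
  --apply-to queues
```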
Much more complicated, but safer, approach (for cases when the whole message-workflow logic has changed):
Make all the necessary changes to the applications and run the new codebase in parallel with the existing one, but on a different RabbitMQ vhost (or even on a separate server; it depends on your applications' load and your hardware). Actually, it may be possible to run on the same vhost and just change the exchange and queue names, but that doesn't sound good and smells even in written form. After you set up the new apps, switch them with the old ones and run a message migration from the old queues to the new ones (or just let the old system empty the queues). It guarantees a seamless migration with minimal downtime. If you have your deployment automated, the whole process will not take too much effort.
P.S.: in any of the cases above, if you can, let the old consumers empty the queues so you don't need to migrate messages manually.
Update:
You may find the Shovel plugin very useful, especially Dynamic Shovels, for moving messages between exchanges and queues, even between different vhosts and servers. It's the fastest and safest way to migrate messages between queues/exchanges.
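For example, a dynamic shovel that drains one queue into another might look like this (queue names are placeholders; "src-delete-after": "queue-length" makes the shovel remove itself once the original backlog has been moved):

```
rabbitmqctl set_parameter shovel migrate-old-to-new \
  '{"src-uri":"amqp://","src-queue":"old-queue",
    "dest-uri":"amqp://","dest-queue":"new-queue",
    "src-delete-after":"queue-length"}'
```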
If I declare a queue with x-max-length, messages at the head of the queue will be dropped or dead-lettered once the limit is reached.
I'm wondering if, instead of dropping or dead-lettering, RabbitMQ could activate the flow-control mechanism, like the memory/disk watermarks. The reason is that I want to preserve the message order (as submitted; FIFO behaviour), and it would be much more convenient to slow down the producers.
Try to implement the queue length limit at the application level. Say, increment/decrement a Redis key and check it against a max value. It might not be as accurate as a native RabbitMQ mechanism, but it works pretty well for a separate queue/exchange without affecting other ones on the same broker.
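A minimal sketch of that counter approach using the Jedis client; the key naming, the cap of 10,000, and the Redis host are all assumptions for illustration:

```java
import redis.clients.jedis.Jedis;

public class AppLevelQueueLimit {
    private static final long MAX_LENGTH = 10_000;      // hypothetical cap
    private final Jedis redis = new Jedis("localhost"); // placeholder host

    // Publisher side: back off instead of publishing, mimicking flow
    // control for one logical queue rather than the whole broker.
    boolean tryReserveSlot(String queue) {
        long depth = redis.incr("qlen:" + queue);
        if (depth > MAX_LENGTH) {
            redis.decr("qlen:" + queue); // roll back; caller retries later
            return false;
        }
        return true;
    }

    // Consumer side: release the slot after a successful ack.
    void releaseSlot(String queue) {
        redis.decr("qlen:" + queue);
    }
}
```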
P.S. Alternatively, for some tasks RabbitMQ is not the best choice, and old-school relational databases (MySQL, PostgreSQL or whatever you like) work best, while RabbitMQ can still be used as an event bus.
There are two open issues related to this topic on the rabbitmq-server GitHub repo. I recommend expressing your interest there:
Block publishers when queue length limit is reached
Nack messages that cannot be deposited to all queues due to max length reached
When my system's data changes, I publish every single change to at least 4 different consumers (around 3,000 messages a second), so I want to use a message broker.
Most of the consumers are responsible for updating their database tables with the change.
(The DBs are different: Couch, MySQL, etc. Therefore, solutions such as using their own replication mechanisms or DB triggers are not possible.)
Questions:
1. Does anyone have experience with data replication between DBs using a message broker? Is it a good practice?
2. What do I do in case of failures?
Let's say, using RabbitMQ, the client removed 10,000 messages from the queue, acked, and threw an exception each time before handling them. Now they are lost. Is there a way to go back in the queue?
(Re-queueing them will mess up their order.)
Is using RabbitMQ a good practice? Isn't the ability to go back in the queue, as in Kafka, important for failure scenarios?
Thanks.
I don't have experience with DB replication using message brokers, but maybe this can help put you on the right track:
2. What do I do in case of failures? Let's say, using RabbitMQ, the client removed 10,000 messages from the queue, acked, and threw an exception each time before handling them. Now they are lost. Is there a way to go back in the queue?
You can use dead-lettering to avoid losing messages. I'd suggest not acking until you are sure the consumers have processed a message successfully, unless it is a long-running task. In case of failure, use basic.reject instead of basic.ack to send messages to a dead-letter queue. You have a medium throughput, so you've got to be careful with that.
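A sketch of that reject-to-DLQ wiring with the Java client; all exchange/queue names and the handler are placeholders:

```java
import com.rabbitmq.client.*;
import java.util.Map;

public class RejectToDlq {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();

        // Rejected messages from 'replication' are dead-lettered into
        // 'replication.dlq' via the 'dlx' exchange instead of being lost.
        ch.exchangeDeclare("dlx", BuiltinExchangeType.FANOUT, true);
        ch.queueDeclare("replication.dlq", true, false, false, null);
        ch.queueBind("replication.dlq", "dlx", "");
        ch.queueDeclare("replication", true, false, false,
                Map.of("x-dead-letter-exchange", "dlx"));

        ch.basicConsume("replication", false, (tag, msg) -> {
            try {
                applyChangeToDb(msg.getBody()); // your real handler goes here
                ch.basicAck(msg.getEnvelope().getDeliveryTag(), false);
            } catch (Exception e) {
                // requeue=false routes the message to the DLX instead of
                // putting it back at the head of the queue.
                ch.basicReject(msg.getEnvelope().getDeliveryTag(), false);
            }
        }, tag -> { });
    }

    static void applyChangeToDb(byte[] body) { /* placeholder */ }
}
```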
However, the order is not guaranteed. You'll need to implement a manual mechanism to recover them in the order they were published, maybe by using message headers with some sort of timestamp or ID, so you can re-process them in the correct order.