I am trying to figure out whether, in a mirrored queue that has only persistent messages, it is still possible to lose messages during the resynchronisation process.
Suppose I have a queue mirrored across two nodes (to simplify the example).
The exchange and queue are durable and all messages are marked as persistent.
The Master Queue is on Node 1
The Mirrored Queue is on Node 2
The scenario is:
1. Initially the queues are synchronised
2. Node 2 goes down
3. Node 2 recovers
4. Before Node 2 synchronises, Node 1 is lost
5. Node 2 becomes the master
At step 3, when Node 2 recovers, does it load the messages from the message store that it had persisted, or will it start with no messages and begin synchronising (by the two standard resynchronisation methods)?
In the case where a queue is mirrored, does each queue have its own message store?
If this scenario does lose messages, is there a way it can be avoided?
It seems that if this scenario occurs, the messages will be lost regardless of your configuration. To mitigate the problem, the solution would be to ensure the following (a client-side sketch follows below):
Ensure messages are persisted
Ensure queues and exchanges are durable
Ensure publisher confirms and consumer acknowledgements are used, so that a message is only treated as handled once it has been committed to the master and all the mirrored replicas
Ensure there are an appropriate number of mirrored replicas, so as to avoid getting into the situation where you don't have a synchronised queue
There will be a throughput performance hit.
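A minimal client-side sketch of those settings, assuming a Python client using pika and a hypothetical "orders" exchange and queue; adapt the connection details and the handler to your setup:

import pika

# Assumed connection parameters and names; adjust for your environment.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable exchange and queue survive broker restarts.
channel.exchange_declare(exchange="orders", exchange_type="direct", durable=True)
channel.queue_declare(queue="orders", durable=True)
channel.queue_bind(queue="orders", exchange="orders", routing_key="orders")

# Publisher confirms: basic_publish blocks until the broker confirms (or nacks).
channel.confirm_delivery()
channel.basic_publish(
    exchange="orders",
    routing_key="orders",
    body=b"payload",
    properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
)

# Consumer side: manual acknowledgements, sent only after processing succeeds.
def handle(ch, method, properties, body):
    print(body)  # replace with real processing
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="orders", on_message_callback=handle, auto_ack=False)
channel.start_consuming()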
We are currently using a service bus in Azure and, for various reasons, we are switching to RabbitMQ.
Under heavy load, and when specific tasks on backend are having problem, one of our queues can have up to 1 million messages waiting to be processed.
RabbitMQ can have a maximum of 50 000 messages per queue.
The question is: how can we design the RabbitMQ infrastructure so that it continues to work when messages are temporarily accumulating?
Note: we want to host our RabbitMQ server in a Docker image inside a Kubernetes cluster.
We imagine an exchange that would load-balance messages between queues on the nodes behind it.
But what is unclear to us is how to dynamically add new queues on demand if we detect that queues are getting full.
RabbitMQ can have a maximum of 50 000 messages per queue.
There is no such limit.
RabbitMQ can handle many more messages per queue by using quorum queues or classic queues in lazy mode.
With stream queues, RabbitMQ can handle millions of messages per second.
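As a rough sketch of what those queue types look like from a client, assuming a Python pika client and hypothetical queue names (x-queue-type and x-queue-mode are standard optional queue arguments):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Quorum queue: replicated and designed to hold large backlogs safely (3.8+).
channel.queue_declare(
    queue="tasks-quorum",
    durable=True,
    arguments={"x-queue-type": "quorum"},
)

# Classic queue in lazy mode: messages are moved to disk as early as possible.
channel.queue_declare(
    queue="tasks-lazy",
    durable=True,
    arguments={"x-queue-mode": "lazy"},
)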
we imagine an exchange that would load balance messages between queues in nodes behind.
You can do that using different bindings.
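One way to sketch that kind of load balancing, assuming the rabbitmq_consistent_hash_exchange plugin is enabled and using hypothetical names with a Python pika client:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Requires the rabbitmq_consistent_hash_exchange plugin.
channel.exchange_declare(exchange="work", exchange_type="x-consistent-hash", durable=True)

# Each binding's routing key is a weight; messages are spread across the bound
# queues according to a hash of the message routing key.
for name in ("work-1", "work-2", "work-3"):
    channel.queue_declare(queue=name, durable=True)
    channel.queue_bind(queue=name, exchange="work", routing_key="1")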
Kubernetes cluster.
I would suggest using the RabbitMQ Kubernetes Operator.
But what is unclear to us is how to dynamically add new queues on demand if we detect that queues are getting full.
There is no concept of "full" in RabbitMQ. There are limits you can impose using max-length or TTL.
A RabbitMQ queue will never be "full" (no such limitation exists in the software). A queue's practical maximum length instead depends on:
Queue settings (e.g. max-length / max-length-bytes)
Message expiration settings such as x-message-ttl
Underlying hardware and cluster setup (available RAM and disk space)
Unless you are using streams (a new feature in v3.9), you should always try to keep your queues short if possible. The entire idea of a message queue (in its classical sense) is that a message should be passed along as soon as possible.
Therefore, if you find yourself with long queues, you should instead try to match the load from your producers by adding more consumers.
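As an illustration of the queue-level settings listed above, a sketch with a Python pika client and hypothetical names (the same limits can also be applied cluster-wide via policies instead of declare-time arguments):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Cap the queue at 50 000 messages and expire messages after 60 seconds.
channel.queue_declare(
    queue="bounded",
    durable=True,
    arguments={
        "x-max-length": 50000,   # keep at most 50 000 messages
        "x-message-ttl": 60000,  # per-message TTL in milliseconds
    },
)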
I was reading through the documentation of RabbitMQ on their website and came across two terminologies which seem to be doing the same thing: "Durable Queues" and "Disk Node". As per the documentation, if I make a node a disk node, all data is stored on disk, except messages, message store indices, queue indices and other node state (I am not sure what the other node state is).
So, if I make my node a disk node, do I still need to mark my queue as durable to survive broker restarts?
Same question goes for durable exchanges as well.
Disk nodes and durable queues are two different concepts within RabbitMQ.
RabbitMQ maintains certain internal information (such as users, passwords, vhosts, ...) within specific mnesia tables. Disk nodes store these tables on disk. As the related documentation states:
This does not include messages, message store indices, queue indices and other node state.
To ensure durability/persistence of exchanges, queues or messages, you need to explicitly state it when you declare/publish them.
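For instance, with a Python pika client and hypothetical names, durability is stated when declaring and persistence is stated per published message:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durability is a property of the exchange/queue declaration...
channel.exchange_declare(exchange="events", exchange_type="fanout", durable=True)
channel.queue_declare(queue="events", durable=True)
channel.queue_bind(queue="events", exchange="events", routing_key="")

# ...and persistence is a property of each published message.
channel.basic_publish(
    exchange="events",
    routing_key="",
    body=b"payload",
    properties=pika.BasicProperties(delivery_mode=2),  # persistent
)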
I have a cluster of 3 RabbitMQ nodes and I want to keep master queues balanced across all nodes, even after node reboots. Still, master queues don't rebalance when a new node joins the cluster or when one of the nodes disconnects and reconnects.
Example: I create 100 queues on nodes A, B and C.
If node C shuts down, the master queues from C are almost equally rebalanced between nodes A and B. So at this point, nodes A and B each have approximately 50 master queues.
Now, if I reconnect node C, it'll remain with 0 master queues until new queues are created. This is problematic because I want all my nodes to produce the same amount of work.
My exchanges are durable, my queues are durable and mirrored, and my messages are persistent. I want to avoid losing messages.
I know there is a way to change the master node manually using a policy trick. But this is not satisfying, since it breaks HA (by inducing a resynchronisation of all mirrors).
You can use this command:
rabbitmq-queues rebalance <type> --vhost-pattern <pattern> --queue-pattern <pattern>
For example:
rabbitmq-queues rebalance "all" --vhost-pattern "a-vhost" --queue-pattern ".*"
One solution is to use Federated Queues.
A federated queue links to other queues (called upstream queues). It will retrieve messages from upstream queues in order to satisfy demand for messages from local consumers.
You can create a completely new cluster which is both upstream and downstream from the original cluster. You also need to ensure that both your publishers and consumers reconnect periodically (to avoid one cluster monopolising all connections, which would defeat load balancing).
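A rough sketch of wiring that up through the management HTTP API in Python, assuming the federation and management plugins are enabled on the downstream cluster, default credentials, and hypothetical host and queue names:

import requests

api = "http://downstream-host:15672/api"
auth = ("guest", "guest")

# Define the upstream cluster the downstream should pull messages from.
requests.put(
    f"{api}/parameters/federation-upstream/%2F/origin",
    auth=auth,
    json={"value": {"uri": "amqp://upstream-host"}},
).raise_for_status()

# Apply a policy so that matching queues become federated queues.
requests.put(
    f"{api}/policies/%2F/federate-queues",
    auth=auth,
    json={
        "pattern": "^my-queue$",
        "apply-to": "queues",
        "definition": {"federation-upstream-set": "all"},
    },
).raise_for_status()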
As you pointed out, there's also Simon MacMullen's trick from rabbitmq-users group.
# rabbitmqctl set_policy --apply-to queues --priority 100 my-queue '^my-queue$' '{"ha-mode":"nodes", "ha-params":["rabbit@master-node"]}'
# rabbitmqctl clear_policy my-queue
But it has the undesirable side effect of making mirrors lose synchronisation for a while. This might or might not be acceptable, depending on your requirements, so I think it's worth mentioning that it's possible.
A more advanced technique might come up in 4.x, but that is not certain at all.
I am new to RabbitMQ. I wanted to know how memory is used in case of HA.
For example, in Kafka a partition uses a specific amount of memory whether or not data is present in it, and so do its replicas. In RabbitMQ, how is memory allocated to queues? How does HA work? Do the mirrored queues occupy the same amount of memory on each replicated node?
Queues in RabbitMQ don't need a lot of resources per se, but messages will be kept in memory in most cases. When a message is sent to a queue that has mirrored queues, it is replicated to the other nodes defined by the mirroring policy. The idea of mirrored queues is to provide high availability: if the broker hosting the master queue crashes, a new master is elected among the surviving mirrors. The switch to the new node should happen quite fast, because all messages are ready to be consumed.
Simple example:
The cluster consists of 3 nodes.
The test queue was created on the node-1.rabbitmq node, and a mirroring policy was applied to replicate messages on all nodes.
Approximately 70k messages were sent to the test queue, and the screenshot from the RabbitMQ management tool showed the following:
It is clear that all nodes got messages and they are kept in memory.
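For reference, a mirroring policy like the one used above could be applied through the management HTTP API, sketched here in Python with default credentials and a hypothetical host name (rabbitmqctl set_policy achieves the same thing):

import requests

# Mirror the "test" queue to all nodes in the cluster.
requests.put(
    "http://node-1.rabbitmq:15672/api/policies/%2F/ha-all",
    auth=("guest", "guest"),
    json={
        "pattern": "^test$",
        "apply-to": "queues",
        "definition": {"ha-mode": "all"},
    },
).raise_for_status()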
Memory consumption in RabbitMQ is a tricky topic, and there are many factors which can affect it (the type of queue, the number of messages in other queues, hitting configured limits, etc.). The official documentation states:
RabbitMQ can report on its own memory use, to let you see where your system is using memory. Note that all measurements are somewhat approximate, based on values returned by the underlying Erlang virtual machine; however they should still be accurate enough to be useful.
Let us consider the scenario below.
There are 3 RabbitMQ brokers (B1, B2, B3) deployed in a clustered model. There is an exchange E with bindings which is replicated to all 3 brokers. There is a producer P and 3 consumers C1, C2, C3. I have the following questions:
Let's say a producer connects to broker B1 and creates a queue Q which is mirrored to B2. Now, when a consumer connects to broker B3, how does it get the messages in the queue?
From my understanding, the exchange and binding information is maintained in memory in each broker. If the exchange is persistent, is the exchange and binding information also persisted to disk on all brokers, in order to recover from broker crashes?
If the entire queue is maintained in memory on all the mirrored brokers, it consumes a lot of memory in each broker. In order to support a potentially large number of queues, each holding millions of messages, on each broker, is this not a constraint on scalability?
Each mirrored queue has a master node. The master node for that queue is always used for consuming. So when a consumer connects to a node which does not have the queue's storage (or is a slave node), the consumer will actually end up consuming from the master node.
Yes, assuming the node is a disc node and not a RAM node. I'm not 100% sure about the bindings, but my guess is yes. In any case, it's highly recommended to always declare all the queues, exchanges, etc. that your client needs (do this each time the client starts).
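For example, a small idempotent declaration step run at every client start-up, sketched with a Python pika client and the E/Q names from the question:

import pika

def declare_topology(channel):
    # Re-declaring with identical parameters is a no-op, so this is safe to
    # run every time the client starts.
    channel.exchange_declare(exchange="E", exchange_type="direct", durable=True)
    channel.queue_declare(queue="Q", durable=True)
    channel.queue_bind(queue="Q", exchange="E", routing_key="q")

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
declare_topology(connection.channel())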
Yes, that's the point of mirroring: adding redundancy in case something goes wrong. It does not increase performance (rather the opposite!). But in general, queues with millions of messages are not exactly a good situation, as queues should, on average, be empty.