I have two processes on two different servers connecting to RabbitMQ and consuming messages from the same queues (for active/active HA). Is it possible to ensure that a maximum total of one message in a queue is unacked at a given point in time, across two connections?
Combining the "exclusive" flag with basic.qos(1) would ensure that a maximum of one message in a queue is unacked at a given point in time, but would have only one process consuming.
Is there a way to have a consumer prefetch limit (e.g. basic.qos(1)) apply as a total across all connections while still having all connections able to consume?
It's not possible. Please see the documentation for the global flag: a prefetch limit applies either per consumer or per channel, never across connections.
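As a minimal sketch of why this doesn't help across connections (using the Python pika client; the broker address and queue name are assumptions): with global_qos=False the prefetch of 1 applies to each consumer individually, and with global_qos=True it is shared by all consumers on that one channel, but neither form coordinates between separate connections.

```python
import pika

# Assumed broker location and queue name, purely for illustration.
conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
ch = conn.channel()

# prefetch_count=1 limits unacked messages, but only within this channel:
#   global_qos=False -> limit applies to each consumer on the channel
#   global_qos=True  -> limit is shared by all consumers on the channel
# No form of basic.qos spans multiple connections.
ch.basic_qos(prefetch_count=1, global_qos=True)

def handle(channel, method, properties, body):
    print(body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="work", on_message_callback=handle)
ch.start_consuming()
```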
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
We are currently using a service bus in Azure and, for various reasons, we are switching to RabbitMQ.
Under heavy load, and when specific backend tasks are having problems, one of our queues can have up to 1 million messages waiting to be processed.
RabbitMQ can have a maximum of 50 000 messages per queue.
The question is: how can we design the RabbitMQ infrastructure so that it continues to work when messages are temporarily accumulating?
Note: we want to host our RabbitMQ server in a Docker image inside a Kubernetes cluster.
We imagine an exchange that would load balance messages between queues on the nodes behind it.
But what is unclear to us is how to dynamically add new queues on demand if we detect that queues are getting full.
RabbitMQ can have a maximum of 50 000 messages per queue.
There is no such limit.
RabbitMQ can handle far more messages than that using quorum queues or classic queues in lazy mode.
With stream queues, RabbitMQ can handle millions of messages per second.
We imagine an exchange that would load balance messages between queues on the nodes behind it.
You can do that using different bindings; see the sketch below.
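As a rough sketch of that idea (Python pika client; the exchange name, queue names, and host are assumptions), one exchange can be bound to several queues with distinct routing keys, and the publisher rotates the key to spread messages across them:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
ch = conn.channel()

# One direct exchange bound to several "shard" queues via distinct routing keys.
ch.exchange_declare(exchange="tasks", exchange_type="direct", durable=True)
for i in range(4):
    queue = f"tasks.shard.{i}"
    ch.queue_declare(queue=queue, durable=True)
    ch.queue_bind(queue=queue, exchange="tasks", routing_key=str(i))

# The publisher rotates the routing key to spread messages across the shards.
for n in range(100):
    ch.basic_publish(exchange="tasks", routing_key=str(n % 4), body=f"msg {n}".encode())

conn.close()
```

If manually managed routing keys feel too rigid, the rabbitmq_consistent_hash_exchange plugin does this kind of spreading automatically.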
Kubernetes cluster.
I would suggest using the RabbitMQ Kubernetes Operator.
But what is unclear to us is how to dynamically add new queues on demand if we detect that queues are getting full.
There is no concept of "full" in RabbitMQ. There are limits that you can set using max-length or a TTL.
A RabbitMQ queue will never be "full" (no such limitation exists in the software). A queue's maximum length rather depends on:
Queue settings (e.g. max-length/max-length-bytes; see the sketch after this list)
Message expiration settings such as x-message-ttl
Underlying hardware & cluster setup (available RAM and disk space).
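As a small sketch of the first two points (Python pika client; the queue name and the specific limits are assumptions, not recommendations):

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
ch = conn.channel()

# Without arguments like these (or an equivalent policy), a queue has no
# built-in cap; it is bounded only by RAM and disk on the hosting node.
ch.queue_declare(
    queue="backlog",
    durable=True,
    arguments={
        "x-max-length": 1_000_000,    # oldest messages dropped (or dead-lettered) beyond this
        "x-message-ttl": 86_400_000,  # per-message TTL, in milliseconds (24 h)
    },
)
conn.close()
```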
Unless you are using streams (a feature introduced in v3.9), you should always try to keep your queues short if possible. The entire idea of a message queue (in its classical sense) is that a message should be passed along as soon as possible.
Therefore, if you find yourself with long queues, you should instead try to match the load of your producers by adding more consumers.
How many queues can be created in ActiveMQ? Is there any limitation in ActiveMQ 5.14?
I have a Java application which needs to create one queue per customer. My concern is: what will be affected if I create more than 1,000 queues?
There is no arbitrary limit on the number of queues. The only limitation is the resources available to the JVM, as each new queue consumes heap memory, not just for the messages in the queue but also for the queue's own data structures.
I recommend you move forward with creating however many queues you need and if you run into trouble make careful observations and ask new questions if you need to.
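As a rough illustration (assuming ActiveMQ's STOMP connector is enabled on its default port 61613, using the Python stomp.py client, and with made-up credentials and queue naming), ActiveMQ creates destinations on demand, so "one queue per customer" needs no explicit create call:

```python
import stomp  # pip install stomp.py

conn = stomp.Connection([("localhost", 61613)])
conn.connect("admin", "admin", wait=True)  # assumed credentials

# ActiveMQ auto-creates destinations on first use, so sending to a
# per-customer queue name is all that is needed to "create" the queue.
for customer_id in range(1, 1001):
    conn.send(destination=f"/queue/customer.{customer_id}", body="hello")

conn.disconnect()
```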
In a RabbitMQ quorum queue (Raft-based) cluster of, say, 4 nodes (N1-N4):
Can I have a consumer that reads only from N1/N2? In this case, will a message produced on N3 be delivered to a consumer via N1/N2?
As per the documentation in the post below:
https://www.cloudamqp.com/blog/2019-04-03-quorum-queues-internals-a-deep-dive.html
With Raft, all reads and writes go through a leader whose job it is to replicate the writes to its followers. When a client attempts to read/write to a follower, it is told who the leader is and told to send all writes to that node. The leader will only confirm the write to the client once a quorum of nodes has confirmed they have written the data to disk. A quorum is simply a majority of nodes.
If this is the case, how can scaling be achieved if it's just the leader node that's going to do all the work?
First of all, RabbitMQ clusters should have an odd number of nodes, so that a majority can always be established in the event of a network partition.
Consumers can always read from any node in a RabbitMQ cluster. If a queue master/mirror is not running on the node to which the consumer is connected, the communication will be forwarded to another node.
How can scaling be achieved if it's just the leader node that's going to do all the work?
"scaling" is so non-specific a word that I hesitate to answer this. But I assume you're asking what happens with multiple quorum queues. The answer is that each queue has its own leader, and these leaders are distributed around the cluster.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
In this example I have a setup of 2 consumers and 2 publishers in my network. The centre is a RabbitMQ broker, as shown in the screenshot below. For fail-safety reasons, I am wondering whether RabbitMQ supports load balancing or mirroring of the server (broker) in any way. I would just like to get rid of the star topology for two reasons:
1) If one broker fails, another broker can take over immediately
2) If one broker's network throughput is not good enough, the other takes over
Solving one or the other (or even both) would be great.
My current infrastructure
Preferred infrastructure
RabbitMQ clustering (docs) can meet your first requirement. Use three nodes and be sure your applications are coded and tested to take failure scenarios into account.
I don't know of anything out-of-the-box that can meet your second requirement. You will have to implement something that uses cluster statistics or application statistics to determine when to switch to another cluster due to lower throughput.
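One possible sketch of the client side of the failover requirement (Python pika client; the three hostnames are assumptions): pika accepts a list of connection parameters and tries each node in turn, which covers the initial connection; reconnecting after a failure mid-run is still up to your application code.

```python
import pika

# Hypothetical cluster nodes; pika tries each entry in order until one succeeds.
nodes = [
    pika.ConnectionParameters(host="rabbit-1"),
    pika.ConnectionParameters(host="rabbit-2"),
    pika.ConnectionParameters(host="rabbit-3"),
]

conn = pika.BlockingConnection(nodes)
ch = conn.channel()
# ... publish/consume as usual; on a connection failure, the application
# should catch the error and reconnect (again trying all nodes).
```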
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I am just curious what the optimal way to publish and consume messages is, ignoring durability, persistence and similar things, and looking purely at the network perspective in a cluster.
If we publish a message over a connection opened to server 1 (s1), but the queue's master node is on server 2 (s2), the cluster has to move that message from s1 to s2, right?
It would be optimal to always consume from queues that are "local" to the server we are connected to, meaning that all the queues we consume from over our connection are located on that server, wouldn't it?
Is this overcomplicating things? Or would it be best to always publish to and consume from the server where the queue is located? I am dealing with somewhere around 3 billion messages daily, so I am trying to reduce latency and load as much as possible.
Yes, always publishing to and consuming from the queue master node is optimal. Your understanding of what happens when you connect to a non-master node is correct. Of course, this means you will have to make your applications aware of this information (from the HTTP API).
If you're not worried about message loss, there's little need for a cluster in this scenario.
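A minimal sketch of that HTTP API lookup (assuming the rabbitmq_management plugin on its default port 15672, default guest credentials, and a hypothetical queue named "orders"): the response's node field tells you which server hosts the queue master, so the client can open its AMQP connection to that node directly.

```python
import requests  # assumes the rabbitmq_management plugin is enabled

def queue_node(api_host, queue, vhost="%2F", auth=("guest", "guest")):
    """Return the cluster node hosting the given queue, e.g. 'rabbit@s2'."""
    url = f"http://{api_host}:15672/api/queues/{vhost}/{queue}"
    resp = requests.get(url, auth=auth, timeout=5)
    resp.raise_for_status()
    return resp.json()["node"]

# Hypothetical: ask s1's management API where the "orders" queue lives,
# then connect the publisher/consumer to that node instead of a random one.
print(queue_node("s1", "orders"))
```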
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
You are ignoring important factors behind that guidance, such as persistence and message size. Depending on message size, persistence, and workload, you have three potential resource bottlenecks: 1) CPU, 2) network, 3) storage. In addition, there is also the possibility of a contention bottleneck, depending on the number of clients on each queue.