activemq round robin between queues or topics - activemq

I'm trying to achieve load balancing between different types of messages. I won't know in advance what the incoming messages are until they hit the queue. I know I could try resequencing the messages, but I was thinking that if there were a way to have the various consumers round robin between either queues or topics, that would solve my problem.
The main problem I'm trying to solve is that I have many services sending messages to one queue, with many consumers feeding off that one queue. I do not want one type of service monopolizing the entire worker cluster. Again, I don't know in advance what messages are going to hit the queue.
To try to clearly repeat my question:
Is there a way to tell the consumers to round robin between either existing queues or topics?
Thank you in advance.

I found the answer to my question on another post; I just had to know where to look. I resolved my problem by creating not an AMQ consumer but a JMS listener with a composite destination, as described in this post: jms-listener-dynamically-choose-destinations. It turns out the JMS listener automatically round robins through all the queues you assign to it.
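In case it helps anyone else, this is roughly what the consumer side looks like (a minimal sketch; the broker URL and queue names are just placeholders):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class CompositeQueueConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // ActiveMQ composite destination: one consumer is fed from several queues,
        // so no single service type can monopolize the workers.
        Destination composite = session.createQueue("SERVICE.A,SERVICE.B,SERVICE.C");
        MessageConsumer consumer = session.createConsumer(composite);

        while (true) {
            Message message = consumer.receive(); // blocks until a message arrives
            System.out.println("Got: " + ((TextMessage) message).getText());
        }
    }
}
```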

Consumers on a Queue will already round robin the messages on the Queue. The one thing to keep in mind is consumer prefetch, which can allow one consumer to grab many messages before the other consumers get any, so you may need to adjust the prefetch depending on your scenario.
Read up on the differences between Queue and Topic here.
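If prefetch does turn out to be the problem, lowering it is a small client-side change; a sketch with the ActiveMQ client (the broker URL is a placeholder, and the per-destination option shown in the comment is an alternative):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQPrefetchPolicy;

public class PrefetchConfig {
    public static ActiveMQConnectionFactory lowPrefetchFactory() {
        // Lower the queue prefetch so one consumer cannot buffer a large batch
        // of messages ahead of the other consumers on the same queue.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        ActiveMQPrefetchPolicy prefetch = new ActiveMQPrefetchPolicy();
        prefetch.setQueuePrefetch(1); // default is 1000
        factory.setPrefetchPolicy(prefetch);
        return factory;
        // Alternative, per destination: session.createQueue("MY.QUEUE?consumer.prefetchSize=1")
    }
}
```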

Related

RabbitMQ back up messages in specific queue

I have a service that consumes messages from a RabbitMQ queue (posting to the queue is done through a topic exchange). Assuming the service can theoretically fail and lose its state, the ability to back up all the messages for disaster recovery would come in handy.
The first idea that comes to mind is adding another binding to the topic exchange so that the messages are also posted to a second queue, and creating a custom service that listens on that queue and backs the messages up. But this sounds much like reinventing the wheel. Is there a simpler way to do this with RabbitMQ (a plugin, an existing service, etc.)?
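To make the idea concrete, the extra binding would look roughly like this with the Java client (a sketch only; the exchange and queue names are made up):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BackupBinding {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection()) {
            Channel channel = conn.createChannel();

            // The topic exchange the producers already publish to (name assumed).
            channel.exchangeDeclare("events", "topic", true);

            // Extra queue that receives a copy of every message via the "#" wildcard,
            // for the backup service to drain.
            channel.queueDeclare("events.backup", true, false, false, null);
            channel.queueBind("events.backup", "events", "#");
        }
    }
}
```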
Found out that it's possible to do with a combination of the firehose and the tracing plugin.
A RabbitMQ cluster, as described in the Clustering Guide and Highly Available Queues documentation, will do what you want in the right way.

Using AMQP (RabbitMQ) for High Availability in my applications

I am putting together a queue based distributed system, all standard stuff. We are using the latest version of RabbitMQ to provide our messaging transport tier.
I have some questions regarding achieving high availability (for my applications and not actually RabbitMQ) that I couldn't answer by reading the documentation. Would appreciate some advice, it's very likely my lack of understanding of Rabbit/AMQP that is causing the problem :)
Problem: I have a message producer (called the primary). There is one and only one message producer. There is a secondary producer (called the backup) which should take over from the primary should it fail.
How could I achieve this using existing RabbitMQ capabilities?
Thoughts: Use an "exclusive" queue, to which the primary will be connected to. The backup will attempt to connect to to this queue. When the primary fails, the backup will gain connectivity to the queue and establish control over the process.
What is the correct pattern I should be using to achieve this? I couldn't find any documentation on competing producers etc, would appreciate your advice! How do others do this?
Kind regards
TM
If you want to have only one producer at a time, RabbitMQ itself doesn't give you a mechanism for that (unless you find some plugin, but I don't know of one). You can control the number of active producers at the application level.
P.S.:
It looks like you haven't quite got the AMQP model yet: producers publish messages to exchanges, while consumers get them from queues. The broker (RabbitMQ) routes messages from an exchange to one or more queues (in fact, it can also route messages to another exchange, but that's another story).
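If you do enforce it at the application level, one rough trick (a sketch, not an official RabbitMQ feature; the lock queue name is a placeholder) is to let each candidate producer try to declare the same exclusive queue and treat a successful declaration as holding the primary role:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.io.IOException;

public class PrimaryElection {
    // Only one connection can own an exclusive queue; every other declaration
    // attempt fails with a resource-locked error until the owner's connection dies.
    public static boolean tryBecomePrimary(Connection connection) throws IOException {
        Channel channel = connection.createChannel();
        try {
            channel.queueDeclare("producer-leader-lock", false, true, true, null);
            return true;   // we own the lock queue: act as the primary producer
        } catch (IOException resourceLocked) {
            return false;  // someone else is primary: stay in backup mode and retry later
        }
    }

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection(); // keep open while acting as primary
        System.out.println(tryBecomePrimary(connection)
                ? "Acting as primary producer"
                : "Standing by as backup");
    }
}
```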

Stopping consumers from consuming messages from queue

I am starting with ActiveMQ and I have a use case. I have n producers sending messages into a queue Q1. I want to stop the delivery of messages (i.e. I do not want consumers to consume those messages). I want to store the messages for some time without them being consumed.
I was looking at ways this can be achieved. These two things came to mind based on what I browsed through.
Using Mirrored queues, so that I can wiretap the messages and save into a virtual queue.
Possibly stop consumers from doing a PULL on the queue.
Another dirty way of doing this is by making consumers not send an ack once they have consumed a message from the queue.
We are currently not happy with either of these.
Any other way you can suggest.
Thanks in advance.
If you always want message delivery to be delayed, you can use the scheduler feature of ActiveMQ to delay delivery until a set time, or by a fixed delay, etc.
Other strategies might also work, but it's really up to you to design something that fits your use case. You could use Apache Camel to define a route that implements the logic of your use case and either dispatches a message straight to a Queue or sends it to the scheduler for delayed processing. It all really depends on your use case and requirements.
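For the scheduler route, a delayed send looks roughly like this (a sketch; it assumes schedulerSupport="true" on the broker element in activemq.xml, and the delay and URL are just examples):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

public class DelayedProducer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("Q1"));

        TextMessage message = session.createTextMessage("hold this for a while");
        // The broker keeps the message in its scheduler store and only delivers
        // it to consumers after the delay has elapsed (here: 10 minutes).
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 10 * 60 * 1000L);
        producer.send(message);

        connection.close();
    }
}
```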

Send One Message to only one of Multiple Consumers in RabbitMQ

I have a somewhat unique use case with RabbitMQ and I'm not sure how to go about solving the problem. I want to have one queue with multiple consumers bound to it and then have RabbitMQ send out one message to only one consumer at a time and wait for an ACK before sending out another message to any other consumer.
I realize this kills throughput and can essentially starve the other consumers, but for me that's OK. The reason for this odd use case is that the service the consumers talk to can only handle one concurrent request at a time, so I need a way to limit this, but consumers can also die unexpectedly and I need another consumer to pick up processing the messages if that happens. I know there is the prefetch option, but that still allows multiple consumers to each get a message, and I've looked at exclusive queues, but I'm not sure those accomplish what I want. Is it possible to configure RabbitMQ to do this?
No; there is no way to limit competing consumers on the same queue such that there is one and only one message in process across all consumers until the ack is received.
A similar question came up some time ago; I don't remember if it was here or in the Spring forums, but I believe the solution was to have the consumers acquire a global lock of some kind, using something like Hazelcast or even a simple database table row lock (with prefetch=1, so each consumer had only one "in process" message, which was processed as and when it got the lock).
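A rough sketch of that shape with the RabbitMQ Java client (the lock calls are placeholders for whatever global lock you choose, and the queue name is made up):

```java
import com.rabbitmq.client.*;

public class SingleInFlightConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.basicQos(1); // at most one unacked message per consumer

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            acquireGlobalLock();             // placeholder: Hazelcast lock, DB row lock, ...
            try {
                process(delivery.getBody()); // talk to the one-request-at-a-time backend
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            } finally {
                releaseGlobalLock();
            }
        };
        channel.basicConsume("work", false, onMessage, consumerTag -> { });
    }

    // Placeholders: wire these up to your actual lock and business logic.
    static void acquireGlobalLock() { }
    static void releaseGlobalLock() { }
    static void process(byte[] body) { }
}
```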

RabbitMQ fan out on a topic exchange

Pretty new to RabbitMQ here, and we're still in the investigation stage to see if it's a good fit for our use cases.
We've readily come to the conclusion that our desired topology would have us deploying a few topic-based exchanges and then filtering from there into specific queues. For example, let's say we have a user exchange and an upload exchange, where the user queue might receive messages with topics like "new-registration" or "friend-request" and the upload queue might receive messages like "video-upload" or "picture-upload".
Creating the queues, getting messages routed to the appropriate queue, and building listeners to handle the messages for the various queues has been quite straightforward.
What's unclear to me, however, is whether it's possible to do a fanout on a topic exchange.
I.e. I have named queues that are bound to my topic exchange, but I'd like to be able to just throw tons of instances of my listeners at those queues to prevent single points of failure. But to the best of my knowledge, RabbitMQ treats these listeners in a straightforward round-robin fashion: every Nth message always goes to the same Nth listener rather than being dispatched to the first available consumer. This is generally acceptable to us, but given the load we anticipate, we'd like to avoid hot spots developing among our consumer farm.
So, is there some way, either in the queue or exchange configuration or in the consumer code, where we can point our listeners to a topic queue but have the listeners treated in a fanout fashion?
Yes, by having the listeners bind using different queue names, they will be treated in a fanout fashion.
Fanout is 1:N though, i.e. each task can be delivered to multiple listeners like pub-sub. Note that this isn't restricted to a fanout exchange, but also applies if you bind multiple queues to a direct or topic exchange with the same binding key. (Installing the management plugin and looking at the exchanges there may be useful to visualize the bindings in effect.)
Your current setup is a task queue. Each task/message is delivered to exactly one worker/listener. Throw more listeners at the same queue name, and they will process the tasks round-robin as you say. With "fanout" (separate queues for a topic) you will process a task multiple times.
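To make the two topologies concrete, here is a sketch with the Java client (exchange, queue and routing-key names are borrowed loosely from the question and are only examples):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class TopologySketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection()) {
            Channel ch = conn.createChannel();
            ch.exchangeDeclare("user", "topic", true);

            // Pub-sub ("fanout-like"): two queues bound with the same key.
            // Every "new-registration" message is copied to BOTH queues.
            ch.queueDeclare("user.audit", true, false, false, null);
            ch.queueDeclare("user.mailer", true, false, false, null);
            ch.queueBind("user.audit", "user", "new-registration");
            ch.queueBind("user.mailer", "user", "new-registration");

            // Task queue: ONE queue, many consumer instances attached to it.
            // Each message goes to exactly one of those consumers.
            ch.queueDeclare("user.work", true, false, false, null);
            ch.queueBind("user.work", "user", "new-registration");
        }
    }
}
```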
Depending on your platform there may be existing work queue solutions that meet your requirements, such as Resque or DelayedJob for Ruby, Celery for Python or perhaps Octobot or Akka for the JVM.
I don't know for a fact, but I strongly suspect that RabbitMQ will skip consumers with unacknowledged messages, so it should never bottleneck on a single stuck consumer. The comments on their FAQ seem to suggest that RabbitMQ will make an effort to keep things chugging along even in the presence of troublesome consumers.
This is a late answer, but in case others come across this question...
It sounds like what you want is fair dispatch rather than a fan out model (which would publish a given message to every queue).
Fair dispatch will give a message to the next available worker rather than using a simple round-robin approach. This should avoid the "hotspots" you are concerned about, without delivering the same message to multiple consumers.
If this is what you are looking for, then see the "Fair Dispatch" section on this page in the Rabbit docs. A prefetch count of 1 is the key here.
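For example, a minimal consumer sketch with the Java client (the queue name is a placeholder):

```java
import com.rabbitmq.client.*;

public class FairDispatchConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Fair dispatch: the broker will not push a new message to this consumer
        // until it has acked the one it is working on, so idle consumers get the work.
        channel.basicQos(1);

        DeliverCallback handler = (consumerTag, delivery) -> {
            // ... handle the "video-upload" / "picture-upload" message ...
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("upload.work", false, handler, consumerTag -> { });
    }
}
```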