ActiveMQ producer needs to know when consumer has started

I am new to this and would like to ask for your help. In my project I use ActiveMQ for communication between components. Now there is a requirement that the producer needs to detect whether the consumer has started. How should I implement this?

As others have chimed in, it is best not to have producers and consumers know about each other. Essentially, that is one of the primary benefits of event-driven design: applications are decoupled at runtime.
If there is some hard-and-fast requirement for producers and consumers to communicate, then you can implement a communication channel using a dedicated Queue or Topic to send events. This design goes by a number of names: side channel, control channel, or out-of-band communication.
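If it helps, here is a minimal sketch of that control-channel idea in Python using the stomp.py client over STOMP (ActiveMQ exposes STOMP on port 61613 by default). The topic name /topic/consumer.status, the credentials, and the message body are all made-up examples, not anything from the question.

```python
import time
import stomp  # third-party STOMP client; ActiveMQ listens for STOMP on 61613 by default

# Listener installed by the producer on the control topic so it can learn
# when a consumer announces that it has started.
class ConsumerStatusListener(stomp.ConnectionListener):
    def __init__(self):
        self.consumer_started = False

    def on_message(self, frame):
        # stomp.py 8.x passes a Frame object; older versions pass (headers, message).
        if frame.body == "consumer-started":
            self.consumer_started = True

# --- producer side: subscribe to the control topic ---
conn = stomp.Connection([("localhost", 61613)])      # host/port are assumptions
listener = ConsumerStatusListener()
conn.set_listener("", listener)
conn.connect("admin", "admin", wait=True)            # credentials are assumptions
conn.subscribe(destination="/topic/consumer.status", id="status-sub", ack="auto")

# --- consumer side (in the consumer's own process) it would announce itself, e.g.: ---
# conn.send(destination="/topic/consumer.status", body="consumer-started")

# The producer can now wait (or poll) for the announcement before it starts publishing.
while not listener.consumer_started:
    time.sleep(0.5)
```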

Related

RabbitMQ with pika: different callback for different queue on same channel?

I have a hard time understanding the basic concepts of RabbitMQ. I find the online documentation not perfectly clear.
So far I understand, what a channel, a queue, a binding etc. is.
But how would the following use case be implemented:
Use Case: Sender posts to one exchange with different topics. On the receiver side, depending on the topic, different receivers should be notified.
So the following should somehow be feasible with a topic exchange:
create a channel
within this channel, create a topic exchange
for each topic to be subscribed to, create a queue and a queue binding with this topic as property
My difficulty is that the callback would be related to the channel, not to the queue or the queue binding. I am not 100 % sure if I am right here.
So that's my question: in order to have multiple callbacks, IOW: different message handlers, depending on the subscribed topic - do you have to create multiple channels, one for each "different message handling"? All these channels should grab the same exchange and define their own queue + queue binding for that specific topic?
Please confirm whether this is correct or if I am straying from the canonical path of AMQP ... "queue" sounds so lightweight, so I intuitively thought of a queue or a queue binding as the right point to attach a consuming event handler to, but it seems that, instead, the channel is my friend in this. Right?
Another aspect of my question:
If I really have to use multiple channels for this, do I have to declare the same exchange (exchange name and exchange type of "topic") for each channel? I hoped there was something like:
define the exchange with this name and the type of "topic" once
for each channel, "grab" this predefined exchange and use it by adding queues and queue bindings to this exchange
I find it helpful to think about the roles of the broker (RabbitMQ) and the clients (your applications) separately.
The broker, RabbitMQ, will receive messages from your publishers, route them to queues, and eventually send them to consumers. The message routing can be simple or complex. In your case, the routing is topic based with a few different queues.
You haven't said much about the publishers, likely because their job is simple. They send messages with a routing key to RabbitMQ.
The consumer side is where things can get interesting. At the simplest level, a consumer subscribes to a queue, receives messages from RabbitMQ, and processes them. The consumer opens a connection to RabbitMQ and will use a channel for a particular use (e.g., subscribing to a queue). The power of message brokers is that they allow designers to break up processes into separate apps if desired.
You don't give much insight into your application, other than the presence of different message topics. An important design choice for you to make is how to define the application(s). Are the different topics suitable for separate applications, or will a single application handle all types of messages?
For the former case, you would have one application for each queue. A single channel that subscribes to the queue is probably the most sensible decision unless your application needs to be threaded. For threaded applications, each thread would have its own channel and all threads can be subscribed to the same queue. Each application would have its own callback function for processing that type of message.
For the latter case (single application with multiple queues), the best approach would be to have at least one channel per queue. It sounds like each queue would require its own callback function, and you would assign each function to a channel according to that channel's subscription. You might have multiple channels per queue if your application can process multiple messages (of each topic) simultaneously.
Regarding your question about declaring exchanges, queues, and bindings, these items only need to be created once. But it is reasonable practice to have your clients declare them at connection time. Advantages of declaring them are that they will be created again if they were deleted and that any discrepancies between your declaration and what is on the broker will trigger errors.
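To make the callback point concrete, here is a rough pika sketch (exchange, queue, and routing-key names are invented) showing that a callback is attached per queue via basic_consume, so the handler is tied to the queue subscription rather than to the channel itself. As noted above, a larger application might still prefer one channel per queue.

```python
import pika

# Connection parameters are assumptions.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare the topic exchange once; re-declaring it with the same settings is harmless.
channel.exchange_declare(exchange="events", exchange_type="topic", durable=True)

# One queue and binding per topic pattern.
channel.queue_declare(queue="user-events", durable=True)
channel.queue_bind(queue="user-events", exchange="events", routing_key="user.*")

channel.queue_declare(queue="upload-events", durable=True)
channel.queue_bind(queue="upload-events", exchange="events", routing_key="upload.*")

# Different message handlers for different queues, registered on the same channel.
def handle_user(ch, method, properties, body):
    print("user event:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

def handle_upload(ch, method, properties, body):
    print("upload event:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="user-events", on_message_callback=handle_user)
channel.basic_consume(queue="upload-events", on_message_callback=handle_upload)

channel.start_consuming()
```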

Smart Broker vs. Dumb Broker (Kafka and RabbitMQ)

In discussing the differences between Kafka and RabbitMQ, "dumb broker" and "smart broker" keeps popping up in their interactions with consumers. Kafka is described as having a dumb broker while RabbitMQ is said to have a smart broker/dumb consumer model.
What exactly does this mean? I'm familiar with the basics of Kafka and a little bit more about RabbitMQ. However, what features of RabbitMQ makes the broker smarter than Kafka's?
This is a question that bothered me for some time too :) Here's what I have understood so far...
In the case of RabbitMQ, the broker makes sure the messages are delivered to the consumers and dequeues them only when it gets acknowledgement from all the consumers that need that message. It also keeps track of consumer state.
Kafka does not keep track of which messages were read by consumers. The Kafka broker retains all messages for a configured period of time, and it is the consumer's responsibility to read them from the topic. Kafka also avoids the overhead of keeping track of consumer state.
You can read more about it in this awesome Pivotal blog post comparing RabbitMQ and Kafka.
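To make the "consumer tracks its own position" point concrete, here is a rough sketch using the confluent-kafka Python client; the broker address, topic, group id, and the process() helper are all assumptions for illustration.

```python
from confluent_kafka import Consumer

# The consumer manages its own position: auto-commit is off, offsets are committed manually.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumption
    "group.id": "example-group",             # assumption
    "enable.auto.commit": False,
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])               # topic name is an assumption

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        process(msg.value())                 # hypothetical processing function
        # The broker does not track per-consumer "read" state; the consumer commits
        # the offset it has reached, and that offset is stored in Kafka itself.
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()
```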
The point about Kafka using a dumb broker while RabbitMQ uses a smart broker is one of the factors to weigh when deciding which messaging system to use. Since RabbitMQ is a smart broker, implementing global strategies for retry is far easier and more listener-agnostic than in Kafka.
Given a set of microservices accessed through an API gateway, I believe that the above point, combined with RabbitMQ being much more maintainable and the knowledge that the data passed across microservices will never amount to the same load as streaming data, makes RabbitMQ a far better choice than Kafka for inter-service communication.
Dumb vs. smart broker means that the broker can be smart enough to route messages based on certain conditions.
In the case of RabbitMQ, the producer sends a message to an Exchange and the Exchange routes the message to a Queue. Here the "Exchange" does the routing, and that is what is called a smart broker. Then again, people have made brokers really smart and ended up with ESBs; we all know what happened there, and the industry is moving away from bloated ESBs.
In the case of Kafka, the broker doesn't route messages. It is up to the user to create topics and consumer groups, for producers to partition events into topic partitions, and for consumer groups to decide which topics they listen to.
Smart vs. dumb broker has nothing to do with message acknowledgment. In the case of RabbitMQ, the broker tracks the status of each message to see whether it has been consumed or not. In the case of Kafka, this happens too, but differently: offsets on partitions are used, and the offset is stored in Kafka itself (the consumer can also store it elsewhere). Both provide the functionality.
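For contrast, the RabbitMQ side of that last point looks roughly like this with pika (the queue name and the handle() function are made up): the broker holds each delivery as unacknowledged until the consumer explicitly acks it.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="work", durable=True)    # queue name is an assumption

def on_message(ch, method, properties, body):
    handle(body)                                     # hypothetical processing function
    # RabbitMQ tracks this delivery until it sees the ack; if the consumer
    # dies before acking, the message is requeued and redelivered.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="work", on_message_callback=on_message, auto_ack=False)
channel.start_consuming()
```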

Using AMQP (RabbitMQ) for High Availability in my applications

I am putting together a queue based distributed system, all standard stuff. We are using the latest version of RabbitMQ to provide our messaging transport tier.
I have some questions regarding achieving high availability (for my applications and not actually RabbitMQ) that I couldn't answer by reading the documentation. Would appreciate some advice, it's very likely my lack of understanding of Rabbit/AMQP that is causing the problem :)
Problem: I have a message producer (called the primary). There is one and only 1 message producer. There is a secondary producer (called the backup) which should take over from the primary should it fail.
How could I achieve this using existing RabbitMQ capabilities?
Thoughts: Use an "exclusive" queue, to which the primary will be connected. The backup will attempt to connect to this queue. When the primary fails, the backup will gain connectivity to the queue and establish control over the process.
What is the correct pattern I should be using to achieve this? I couldn't find any documentation on competing producers etc, would appreciate your advice! How do others do this?
Kind regards
TM
If you want to have only one producer at a time, you can't enforce it with a built-in RabbitMQ mechanism (unless some plugin provides it, but I don't know of one). You can control the number of producers at the application level.
P.S.:
It looks like you haven't quite got the AMQP idea yet: producers publish messages to exchanges, while consumers get them from queues. The broker (RabbitMQ) routes messages from an exchange to one or more queues (in fact, it can also route messages to another exchange, but that's another story).
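One application-level approach (not a built-in RabbitMQ feature) is the one the questioner hints at: use an exclusive queue as a lock, so whichever producer currently holds the exclusive declaration acts as the primary. A rough pika sketch, with the lock queue name and retry interval invented:

```python
import time
import pika

LOCK_QUEUE = "producer-leader-lock"   # name is an assumption

def become_active_producer():
    """Block until this process manages to declare the exclusive lock queue."""
    while True:
        connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
        channel = connection.channel()
        try:
            # An exclusive queue is owned by one connection; declaring it from a
            # second connection fails with 405 RESOURCE_LOCKED.
            channel.queue_declare(queue=LOCK_QUEUE, exclusive=True)
            return connection, channel    # we are the primary now
        except pika.exceptions.ChannelClosedByBroker:
            # The primary is alive and holds the lock; wait and retry as the standby.
            connection.close()
            time.sleep(5)

connection, channel = become_active_producer()
# ... publish as the single active producer using `channel` ...
```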

Message bus: sender must wait for acknowledgements from multiple recipients

In our application the publisher creates a message and sends it to a topic.
It then needs to wait until all of the topic's subscribers have acked the message.
It does not appear that message bus implementations can do this automatically. So we are leaning towards having each subscriber send its own new message back to the client when it is done.
Now, the client can receive all such messages and, when it has got one from each destination, do whatever clean-ups it has to do. But what if the client (sender) crashes partway through the stream of acknowledgments? To handle such a misfortune, I need to (re)implement on the client what the buses already implement: save the incoming acknowledgments until I get enough of them.
I don't believe our needs are that esoteric -- how would you handle the situation where the sender (publisher) must wait for confirmations from multiple recipients (subscribers)? Sort of like requesting (and awaiting) Return-Receipts from each subscriber to a mailing list...
We are using RabbitMQ, if it matters. Thanks!
The functionality that you are looking for sounds like a messaging solution that can perform transactions across publishers and subscribers of a message. In the Java world, JMS specifies such transactions. One example of a JMS implementation is HornetQ.
RabbitMQ does not provide such functionality, and for good reasons. RabbitMQ is built to be extremely robust and to perform like hell at the same time. The transactional behavior that you describe is only achievable at the cost of a considerable performance loss (especially if you want to keep outstanding robustness).
With RabbitMQ, one way to ensure that a message was consumed successfully is indeed to publish an answer message on the consumer side that is then consumed by the original publisher. This can be achieved through RabbitMQ's RPC pattern, which might help you get to a clean solution for your problem setting.
If the (original) publisher crashes before all answers could be received, you can assume that all outstanding answers are still queued on the broker. So you would have to build your publisher in a way that it is capable of picking up and processing those leftover messages. This might turn out to be non-trivial.
Finally, I recommend the following solution: design your producing component in a way that you can consume the answers with one or more dedicated answer consumers that are separated from the origin publisher (a rough sketch follows the list of benefits below).
Benefits of this solution are:
the origin publisher can finish its task independent of consumer success
the origin publisher is independent of consumer availability and speed
the origin publisher implementation is far less complex
in a crash scenario, the answer consumer can resume with processing answers
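A rough pika sketch of such a dedicated answer consumer follows; the queue name, the expected number of answers, the use of the correlation_id property, and the finalize() helper are all assumptions for illustration.

```python
import pika
from collections import defaultdict

EXPECTED_ACKS = 3   # number of subscribers expected to answer; an assumption

# Dedicated answer consumer, deployed separately from the origin publisher.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="answers", durable=True)   # queue name is an assumption

acks_seen = defaultdict(int)   # correlation_id -> number of answers received so far

def on_answer(ch, method, properties, body):
    corr_id = properties.correlation_id    # subscribers echo the original message's id
    acks_seen[corr_id] += 1
    if acks_seen[corr_id] == EXPECTED_ACKS:
        finalize(corr_id)                  # hypothetical clean-up step
        del acks_seen[corr_id]
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="answers", on_message_callback=on_answer)
channel.start_consuming()
```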
Now to a more general point: One of the major benefits of messaging is the decoupling of application components by the broker. In AMQP, this is achieved with exchanges and bindings that allow you to move message distribution logic from your application to a central point of configuration.
If you add RPC-style calls to your clients, then your components are most likely closely coupled again, meaning that the publishing component fails if one of the consuming components fails / is not available / too slow. This is exactly what you will want to avoid. Otherwise, why would you have split the components then?
My recommendation is that you design your application in a way that publishers can complete their tasks independent of the success of consumers wherever possible. Back-channels should be an exceptional case and should be implemented in the described loosely coupled way.

RabbitMQ fan out on a topic exchange

Pretty new to RabbitMQ and we're still in the investigation stage to see if it's a good fit for our use cases--
We've readily come to the conclusion that our desired topology would have us deploying a few topic-based exchanges, and then filtering from there to specific queues. For example, let's say we have a user and an upload exchange, where the user exchange might receive messages with topics like "new-registration" or "friend-request" and the upload exchange might receive messages like "video-upload" or "picture-upload".
Creating the queues, getting messages routed to the appropriate queue, and then building listeners to handle the messages for the various queues has been quite straightforward.
What's unclear to me however is if it's possible to do a fanout on a topic exchange?
I.e. I have named queues that are bound to my topic exchange, but I'd like to be able to just throw tons of instances of my listeners at those queues to prevent single points of failure. But to the best of my knowledge, RabbitMQ treats these listeners in a straightforward round-robin fashion -- e.g. every Nth message always goes to the same Nth listener rather than being dispatched to the first available consumer. This is generally acceptable to us, but given the load we anticipate, we'd like to avoid the possibility of hot spots developing amongst our consumer farm.
So, is there some way, either in the queue or exchange configuration or in the consumer code, where we can point our listeners to a topic queue but have the listeners treated in a fanout fashion?
Yes, by having the listeners bind using different queue names, they will be treated in a fanout fashion.
Fanout is 1:N though, i.e. each task can be delivered to multiple listeners like pub-sub. Note that this isn't restricted to a fanout exchange, but also applies if you bind multiple queues to a direct or topic exchange with the same binding key. (Installing the management plugin and looking at the exchanges there may be useful to visualize the bindings in effect.)
Your current setup is a task queue. Each task/message is delivered to exactly one worker/listener. Throw more listeners at the same queue name, and they will process the tasks round-robin as you say. With "fanout" (separate queues for a topic) you will process a task multiple times.
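A small pika sketch of the difference, with invented exchange and queue names: separate queues bound with the same key each receive a copy of a message (fanout-style), while several consumers on a single queue share the messages between them (work-queue-style).

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="uploads", exchange_type="topic")

# Fanout behaviour: two distinct queues, same binding key -> each queue gets a copy.
channel.queue_declare(queue="video-workers-a")
channel.queue_declare(queue="video-workers-b")
channel.queue_bind(queue="video-workers-a", exchange="uploads", routing_key="video-upload")
channel.queue_bind(queue="video-workers-b", exchange="uploads", routing_key="video-upload")

# Work-queue behaviour: point several listener processes at ONE of these queues
# (e.g. "video-workers-a") and each message is delivered to only one of them.
```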
Depending on your platform there may be existing work queue solutions that meet your requirements, such as Resque or DelayedJob for Ruby, Celery for Python or perhaps Octobot or Akka for the JVM.
I don't know for a fact, but I strongly suspect that RabbitMQ will skip consumers with unacknowledged messages, so it should never bottleneck on a single stuck consumer. The comments on their FAQ seem to suggest that RabbitMQ will make an effort to keep things chugging along even in the presence of troublesome consumers.
This is a late answer, but in case others come across this question...
It sounds like what you want is fair dispatch rather than a fan out model (which would publish a given message to every queue).
Fair dispatch will give a message to the next available worker rather than using a simple round-robin approach. This should avoid the "hotspots" you are concerned about, without delivering the same message to multiple consumers.
If this is what you are looking for, then see the "Fair Dispatch" section on this page in the Rabbit docs. A prefetch count of 1 is the key here.
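For reference, that prefetch setting is a single basic_qos call made before consuming; a minimal pika sketch, with the queue name and do_work() handler invented:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="uploads", durable=True)   # queue name is an assumption

# Fair dispatch: don't give a worker a new message until it has acknowledged
# the previous one, so busy workers are skipped in favour of idle ones.
channel.basic_qos(prefetch_count=1)

def on_message(ch, method, properties, body):
    do_work(body)                                      # hypothetical handler
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="uploads", on_message_callback=on_message)
channel.start_consuming()
```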