What is the exact logic ActiveMQ uses to decide that a consumer is a slow consumer?
When there are multiple consumers, does "slow" take the other consumers connected to the same topic into account?
It would be great if pointers to the relevant code could also be provided.
Look at line 116 and the add method in TopicSubscription.java.
A slow consumer is simply a consumer that is lagging behind in receiving published messages. How "fast" the other consumers are does not matter when deciding whether a given consumer is slow; it is about the gap between the messages produced and the messages that consumer has received. Read the code for details, or rephrase your question to something more specific.
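For orientation, here is a heavily simplified, illustrative paraphrase of that idea in plain Java. It is not the actual ActiveMQ source; the class, fields, and the limit value are stand-ins for what you will find in org.apache.activemq.broker.region.TopicSubscription.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative paraphrase only; NOT the real ActiveMQ code. The actual
// check lives in org.apache.activemq.broker.region.TopicSubscription#add.
class IllustrativeTopicSubscription {
    private final Deque<Object> pending = new ArrayDeque<>(); // matched but not yet delivered
    private final int maximumPendingMessages = 1000;          // broker-configured limit (illustrative value)
    private boolean slowConsumer;

    void add(Object message) {
        if (consumerHasCredit()) {
            dispatch(message);        // consumer keeps up: deliver immediately
        } else {
            pending.add(message);     // consumer busy: buffer on this subscription
            // "Slow" is decided per subscription, from this subscription's own
            // backlog of undelivered messages; other consumers on the same
            // topic are never consulted.
            if (pending.size() > maximumPendingMessages) {
                slowConsumer = true;
            }
        }
    }

    boolean isSlowConsumer() { return slowConsumer; }

    private boolean consumerHasCredit() { return pending.isEmpty(); } // stand-in for the real prefetch/full check
    private void dispatch(Object message) { /* hand the message to the consumer */ }
}
```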
I saw similar questions but with different expected answers. My question is: can I create a consumer that focuses on a single queue until it empties, then switches to the other queue until new work arrives on its main queue?
For example: one queue contains a large amount of work to be processed over a longer time frame and has its own dedicated consumers (3, for instance). The second queue receives much less work, and that work requires less processing. If the consumers for the second queue complete their work, can I make them switch to the first queue until their own queue receives more work?
I think for this question, it's important to keep in mind that there is a difference between a "consumer" in the canonical sense vs. a "consumer" in the RabbitMQ sense.
A RabbitMQ Consumer is a contrivance of the protocol - basically, it is a designation that the channel/connection would like to have messages pushed to it, under a designated consumer tag. In this sense, it is merely a notification to the broker to immediately send messages.
In the canonical sense, a message consumer is any piece of code that processes messages.
So, the answer to your question is "yes, go ahead and write your program to do that." You have control over the canonical consumer code. It is up to your software to determine what to do with a message that arrives from a queue.
Now, if you're wondering if RabbitMQ can re-subscribe a consumer to a different queue, the answer is "that's not how it works." In RabbitMQ, a consumer is simply a response to a request to subscribe to a queue - it is a "consumer tag" object. The ongoing nature of the subscription is tied to the channel/connection pair.
What should you do? While your question doesn't mention any particular programming language, in my opinion you're off-track by even asking this question. Subscribe to both queues. If there is nothing for the worker to do, I think the computer would be perfectly happy with that. If you're worried about a particularly busy queue issuing too much work, you can use a number of techniques to throttle messages coming into that consumer. One popular technique is prefetch.
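For illustration, a worker that subscribes to both queues with a small prefetch might look roughly like this with the RabbitMQ Java client (the queue names and the prefetch value are assumptions, not taken from the question):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class DualQueueWorker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // At most one unacknowledged message per consumer is pushed to this
        // worker, so the busy queue cannot flood it.
        channel.basicQos(1);

        DeliverCallback handle = (consumerTag, delivery) -> {
            process(delivery.getBody());
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };

        // One worker, two subscriptions: it takes work from whichever queue
        // currently has messages, and simply idles when both are empty.
        channel.basicConsume("busy-queue", false, handle, consumerTag -> { });
        channel.basicConsume("light-queue", false, handle, consumerTag -> { });
    }

    private static void process(byte[] body) {
        // application-specific work
    }
}
```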
I have an exchange that's going to receive roughly 50 messages per second. These messages have a unique identifier which relates to each unit in the field. This unique identifier will be the routing key. Every now and again we need to debug or analyse a unit. At that point in time we will spin up a queue, with the correct routing key, and bind it to the exchange. This way, that queue will start receiving the messages for that unit and any consumers monitoring that queue, will then receive the messages.
What this does mean is that 99% of the time, the exchange will have no queues and no routing key bound to it. Then, every now and again, a queue will be created with the right routing key and subscribed to.
It feels kind of wasteful to be sending 50 messages per second at an exchange when it's just going to discard them immediately. That said, it feels like this is how RabbitMQ exchanges are supposed to be used. From a developer perspective I feel like this is wasteful, but my understanding of Rabbit also says this is the correct way to do it.
Is there any overhead to doing this? Any performance concerns I should have? Or am I approaching this entirely wrong?
I did try to search before asking but nothing really describes a scenario where an exchange has no queue or routing key, but is still receiving messages.
This is basically how RabbitMQ works, as you have described. The broker is not responsible for how often or how many events you decide to publish. It will nonetheless protect itself from too much pressure: it has a credit-based flow control mechanism (see RabbitMQ flow control).
RabbitMQ has different ways in which unroutable messages can be handled; see Unroutable Message Handling and How to deal with unroutable messages.
To sum up the information you will find in those links:
If the publisher does not set the message as mandatory, it will either be discarded or republished to an alternate exchange that you can configure. This only makes sense if you want to collect all unroutable messages, regardless of their source, in a single queue that you can handle later.
If the publisher sets the message as mandatory, the message will be returned to the publisher, and the publisher can have a returned-message handler set up to deal with those events.
These strategies, in addition to the flow control mechanism, ensure RabbitMQ's reliability and protect it from overload.
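As a rough illustration of both options with the RabbitMQ Java client (the exchange, queue, and routing-key names here are made up for the example):

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.ReturnListener;
import java.util.Map;

public class UnroutableHandlingExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Option 1: an alternate exchange. Anything the main exchange cannot
            // route is re-published to "unrouted" instead of being dropped.
            channel.exchangeDeclare("unrouted", "fanout", true);
            channel.queueDeclare("unrouted-messages", true, false, false, null);
            channel.queueBind("unrouted-messages", "unrouted", "");
            Map<String, Object> exchangeArgs = Map.of("alternate-exchange", "unrouted");
            channel.exchangeDeclare("telemetry", "direct", true, false, exchangeArgs);

            // Option 2: the mandatory flag plus a return listener, so unroutable
            // messages come back to the publisher instead of being discarded.
            ReturnListener onReturn = (replyCode, replyText, exchange, routingKey, properties, body) ->
                    System.out.println("Unroutable, returned to publisher: " + replyText
                            + ", routingKey=" + routingKey);
            channel.addReturnListener(onReturn);

            channel.basicPublish("telemetry", "unit-42", true /* mandatory */,
                    new AMQP.BasicProperties.Builder().build(), "status update".getBytes());

            Thread.sleep(1000); // give an asynchronous basic.return a moment to arrive
        }
    }
}
```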
In your situation, if you want to limit what the producer publishes even more, you would need to build a mechanism yourself, for example one where the producer only starts publishing once a consumer becomes active. The consumer process would signal the producer process that it is active, and only then would publishing start. From my experience, I don't think it's worth the overhead, at least at first, because 50 messages per second isn't much. Monitor the RabbitMQ server's resource consumption to see whether you need to optimize at all; optimization is best done with metrics and understanding.
I have a Java application which publishes events to RabbitMQ. It has one very important characteristic: message order must be preserved at all times. The consumer can handle duplicates, but it cannot handle message 2 being enqueued before message 1, so to speak.
I have been reading a lot about RabbitMQ lately, and I feel there is only one way to do this: set the channel in confirm mode (https://www.rabbitmq.com/confirms.html - basically, it forces the broker to acknowledge the publication) and publish one by one. By one by one I mean that message 2 is only published after RabbitMQ has confirmed (via an asynchronous ACK response) that message 1 was actually received and persisted.
I tried this in a proof-of-concept implementation, and while it works fine, it's incredibly slow, and that's not an exaggeration. Which makes sense: after all, we are now limiting ourselves to one in-flight message at a time.
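For reference, the one-at-a-time confirm approach described above looks roughly like this with the RabbitMQ Java client (the queue name and the timeout are just placeholders):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;
import java.util.List;

public class OrderedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.confirmSelect(); // put the channel into confirm mode
            channel.queueDeclare("events", true, false, false, null);

            for (String event : List.of("event-1", "event-2", "event-3")) {
                channel.basicPublish("", "events",
                        MessageProperties.PERSISTENT_TEXT_PLAIN, event.getBytes());
                // Block until the broker confirms this single message before the
                // next one is published; this serializes publishing and is what
                // makes the approach so slow.
                channel.waitForConfirmsOrDie(5_000);
            }
        }
    }
}
```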
So this leads me to my question: are there other, more performant, ways to ensure that message ordering is always preserved (either in RabbitMQ or via different approaches)?
Although my concern is RabbitMQ, I believe this question might be applied to any kind of asynchronous message queue service.
RabbitMQ enqueues messages in the same order that you sent them. They only get re-ordered when subscribers go down, you get network splits, or the subscriber NACKs messages; and even then RMQ tries to keep them in approximately the same order by re-queueing them at, or as close as possible to, their original position.
You can do it like you suggest: take one message at a time, because if you take a message but crash before you have ACKed it to the broker, it will pop up again at the same position when your service comes back up.
This assumes you only have a single service instance at any given time, consuming from the queue. Which in turn is a distributed systems problem on its own, if you have a scheduler like Kubernetes or Mesos, spawning your service instances.
Another solution would be to ensure ordering of processing in the receiving service, by "resequencing" the messages based on their logical timestamps/sequence numbers.
I've written a much more thorough guide as annotated code here: https://github.com/haf/rmq-publisher-confirms-hopac/blob/master/src/Server/Shared/RabbitMQ.fs. With batching you can resequence. Furthermore, if your idempotence logic is built around consecutive sequence numbers, you can start taking batches, and each event will remain idempotent despite being re-consumed.
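To make the resequencing idea concrete, here is a generic sketch in Java. It is not the code from the linked guide; it assumes the publisher stamps each message with a monotonically increasing sequence number that the consumer can read.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

/**
 * Minimal consumer-side resequencer: deliveries may arrive out of order
 * (or more than once), but downstream processing always sees messages in
 * strict sequence-number order.
 */
public class Resequencer<T> {
    private final Map<Long, T> buffered = new HashMap<>();
    private final Consumer<T> downstream;
    private long nextExpected;

    public Resequencer(long firstSequenceNumber, Consumer<T> downstream) {
        this.nextExpected = firstSequenceNumber;
        this.downstream = downstream;
    }

    /** Call for every delivery, passing the sequence number the publisher stamped on it. */
    public synchronized void accept(long sequenceNumber, T message) {
        if (sequenceNumber < nextExpected) {
            return; // duplicate of something already processed; safe to drop
        }
        buffered.put(sequenceNumber, message);
        // Flush every contiguous message we now have, in order.
        T next;
        while ((next = buffered.remove(nextExpected)) != null) {
            downstream.accept(next);
            nextExpected++;
        }
    }
}
```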
I've been reading about the principles of AMQP message confirms (https://www.rabbitmq.com/confirms.html). It is a really helpful and well-written article, but one particular thing about consumer acknowledgements is really confusing. Here is the quote:
Another thing that's important to consider when using automatic acknowledgement mode is that of consumer overload.
Consumer overload? The message queue is processed and kept in RAM by the broker (if I understand it correctly). What overload is this about? Does the consumer have some kind of second queue?
Another part of that article is even more confusing:
Consumers therefore can be overwhelmed by the rate of deliveries, potentially accumulating a backlog in memory and running out of heap or getting their process terminated by the OS.
What backlog? How does this all work together? What part of the job is done by the consumer (besides consuming the message and processing it, of course)? I thought the broker keeps the queues alive and forwards the messages, but now I am reading about mysterious backlogs and consumer overloads. This is really confusing; can someone explain it a bit or at least point me to a good source?
I believe the documentation you're referring to deals with what, in my opinion, is sort of a design flaw in either AMQP 0-9-1 or RabbitMQ's implementation of it.
Consider the following scenario:
A queue has thousands of messages sitting in it
A single consumer subscribes to the queue with AutoAck=true and no pre-fetch count set
What is going to happen?
RabbitMQ's implementation is to deliver an arbitrary number of messages to a client that has no pre-fetch count set. Further, with Auto-Ack, the prefetch count is irrelevant, because messages are acknowledged upon delivery to the consumer.
In-memory buffers:
The default client API implementations of the consumer have an in-memory buffer (in .NET it is some type of blocking collection, if I remember correctly). So, after the message is received from the broker but before it is processed, it goes into this in-memory holding area. Now, the design flaw is this holding area: a consumer has no choice but to accept the message coming from the broker, as it is pushed to the client asynchronously. This is a flaw with the AMQP protocol specification (see page 53).
Thus, every message in the queue at that point will be delivered to the consumer immediately and the consumer will be inundated with messages. Assuming each message is small, but takes 5 minutes to process, it is entirely possible that this one consumer will be able to drain the entire queue before any other consumers can attach to it. And since AutoAck is turned on, the broker will forget about these messages immediately after delivery.
Obviously this is not a good scenario if you'd like to get those messages processed, because they've left the relative safety of the broker and are now sitting in RAM at the consuming endpoint. Let's say an exception is encountered that crashes the consuming endpoint - poof, all the messages are gone.
How to work around this?
You must turn Auto-Ack off, and generally it is also a good idea to set a reasonable pre-fetch count (usually 2-3 is sufficient).
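A minimal sketch of that workaround with the RabbitMQ Java client (the queue name is illustrative):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class ManualAckConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.basicQos(3); // broker keeps at most 3 unacknowledged messages in flight

        DeliverCallback callback = (consumerTag, delivery) -> {
            process(delivery.getBody()); // do the real work first...
            // ...and only then acknowledge; a crash before this point means the
            // broker still owns the message and will redeliver it.
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };

        channel.basicConsume("work-queue", false /* autoAck off */, callback, consumerTag -> { });
    }

    private static void process(byte[] body) {
        // application-specific work
    }
}
```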
Being able to signal back pressure is a basic problem in distributed systems. Without explicit acknowledgements, the consumer does not have any way to say "slow down" to the broker. With auto-ack on, as soon as the TCP acknowledgement is received by the broker, it deletes the message from its memory/disk.
However, that does not mean the consuming application has processed the message, or has enough memory to store incoming messages. The backlog in the article is simply a data structure used to store unprocessed messages in the consumer application.
In our application the publisher creates a message and sends it to a topic.
It then needs to wait until all of the topic's subscribers have acknowledged the message.
It does not appear that message bus implementations can do this automatically, so we are leaning towards making each subscriber send its own new message back to the client when it is done.
Now, the client can receive all such messages and, once it has got one from each destination, do whatever clean-up it has to do. But what if the client (sender) crashes part way through the stream of acknowledgments? To handle such a misfortune, I would need to (re)implement on the client what the buses already implement: saving the incoming acknowledgments until I have enough of them.
I don't believe our needs are that esoteric -- how would you handle a situation where the sender (publisher) must wait for confirmations from multiple recipients (subscribers)? Sort of like requesting (and awaiting) return receipts from each subscriber to a mailing list...
We are using RabbitMQ, if it matters. Thanks!
The functionality that you are looking for sounds like a messaging solution that can perform transactions across publishers and subscribers of a message. In the Java world, JMS specifies such transactions. One example of a JMS implementation is HornetQ.
RabbitMQ does not provide such functionality, and for good reasons: RabbitMQ is built to be extremely robust and to perform like hell at the same time. The transactional behavior that you describe is only achievable at the cost of a considerable performance loss (especially if you also want to keep outstanding robustness).
With RabbitMQ, one way to assure that a message was consumed successfully is indeed to publish an answer message on the consumer side that is then consumed by the original publisher. This can be achieved through RabbitMQ's RPC pattern, which might help you arrive at a clean solution for your problem.
If the (original) publisher crashes before all answers could be received, you can assume that all outstanding answers are still queued on the broker. So you would have to build your publisher in a way that it is capable of resuming the processing of those leftover messages. This might turn out to be non-trivial.
Finally, I recommend the following solution: Design your producing component in a way that you can consume the answers with one or more dedicated answer consumers that are separated from the origin publisher.
Benefits of this solution are:
the origin publisher can finish its task independent of consumer success
the origin publisher is independent of consumer availability and speed
the origin publisher implementation is far less complex
in a crash scenario, the answer consumer can simply resume processing answers
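As a sketch of such a dedicated answer consumer with the RabbitMQ Java client (the "answers" queue, the subscriber names, and the "subscriber" header are assumptions for the example; the subscribers are assumed to publish a small "done" message carrying the original message's correlationId):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Dedicated answer consumer, decoupled from the original publisher: each
 * subscriber publishes a small "done" message to the "answers" queue,
 * carrying the correlationId of the original message and its own name in a
 * header. This process tallies the answers and decides when a message has
 * been acknowledged by every expected subscriber.
 */
public class AnswerConsumer {
    private static final Set<String> EXPECTED_SUBSCRIBERS = Set.of("billing", "audit", "search");
    private static final Map<String, Set<String>> acksByMessage = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("answers", true, false, false, null);

        DeliverCallback callback = (tag, delivery) -> {
            String correlationId = delivery.getProperties().getCorrelationId();
            // Assumes each subscriber sets a "subscriber" header on its answer message.
            String subscriber = String.valueOf(delivery.getProperties().getHeaders().get("subscriber"));

            Set<String> acked = acksByMessage.computeIfAbsent(correlationId, k -> ConcurrentHashMap.newKeySet());
            acked.add(subscriber);
            if (acked.containsAll(EXPECTED_SUBSCRIBERS)) {
                System.out.println("Message " + correlationId + " confirmed by all subscribers");
                acksByMessage.remove(correlationId); // run any clean-up here
            }
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("answers", false, callback, consumerTag -> { });
    }
}
```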
Now to a more general point: One of the major benefits of messaging is the decoupling of application components by the broker. In AMQP, this is achieved with exchanges and bindings that allow you to move message distribution logic from your application to a central point of configuration.
If you add RPC-style calls to your clients, then your components are most likely closely coupled again, meaning that the publishing component fails if one of the consuming components fails, is unavailable, or is too slow. This is exactly what you want to avoid; otherwise, why would you have split the components in the first place?
My recommendation is that you design your application in a way that publishers can complete their tasks independently of the success of consumers wherever possible. Back-channels should be an exceptional case and be implemented in the not-so-coupled way described above.