We have a wrapper library around RabbitMQ at my workplace, created by someone who no longer works here. I'm designing a new system using Rabbit, and am working out the best approach for declaring queues, exchanges and bindings. Our Rabbit architecture has a few federated global zones, and each zone has multiple Rabbit nodes.
The wrapper code to publish messages and subscribe to queues re-declares the relevant exchanges, queues and bindings each time. My concern is that this may introduce significant latency into every publish, especially if it needs to wait for confirmation that the queue/exchange exists in the remote global zones. I expect the benchmarks showing millions of messages a second don't re-declare the exchange for each publish.
In short, this approach seems a bit wasteful and paranoid to me, but perhaps I'm missing something.
So I have a few questions:
Is re-declaring the queues and exchanges a significant performance hit, given global federation?
Is re-declaring on each use a good approach because it handles queues/exchanges disappearing due to broker restarts or explicit deletion?
Should we just declare queues and exchanges once per process and expect them to last the whole lifetime?
Should durable exchanges and queues be declared in Rabbit config and not declared by the applications at all?
How should config changes for queues/exchanges be handled if applications may continue to declare them with old config? Should applications just handle the declare failure and continue to publish/consume?
Is re-declaring the queues and exchanges a significant performance hit?
it can be, for a very large volume of messages.
Is re-declaring on each use a good approach because it handles queues/exchanges disappearing due to broker restarts or explicit deletion?
"good approach" - no.
"effective" at preventing disappeared exchanges / queues / bindings from causing problems, yes... but it's not a good thing to do, in most cases
(it may be ok if you only send a message very infrequently, or if there is a real cause for concern about the topology being wiped clean)
Should we just declare queues and exchanges once per process and expect them to last the whole lifetime?
this is my general approach.
it opens the possibility of the topology being destroyed without you knowing it. it comes down to whether or not you think this will really happen.
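for illustration, a minimal sketch of that approach with the Python pika client (the exchange and queue names are made up): declare the topology once when the channel opens, then publish without re-declaring.

    import pika

    # declare the topology once, at startup, then reuse the channel for publishing
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    channel.exchange_declare(exchange="events", exchange_type="topic", durable=True)
    channel.queue_declare(queue="events.audit", durable=True)
    channel.queue_bind(queue="events.audit", exchange="events", routing_key="audit.#")

    # hot path: no declares, just publishes
    for i in range(1000):
        channel.basic_publish(exchange="events", routing_key="audit.created",
                              body=f"event {i}".encode())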
Should durable exchanges and queues be declared in Rabbit config and not declared by the applications at all?
there's nothing wrong with pre-defined topology, but it misses a lot of the power and flexibility of rabbitmq and the amqp protocol.
many messaging systems require predefined topologies and specialized tools to manage the topology. amqp is quite different in that it allows you to define the topology as needed.
if you deal with a static topology, then this might be a good option for you
How should config changes for queues/exchanges be handled if applications may continue to declare them with old config? Should applications just handle the declare failure and continue to publish/consume?
i would crash the app and report it through whatever error reporting mechanism you are using.
having a topology change is usually something important, and done for a reason. if the exchange or queue declaration needs to change, there is probably a good reason for it and the code should not continue with the old declaration.
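as a hedged sketch with pika: a declare whose arguments no longer match the existing queue fails the channel with a 406 PRECONDITION_FAILED, which you can report and then let kill the process (report_error and the queue name below are hypothetical).

    import sys
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    try:
        # arguments differ from the queue that already exists on the broker
        channel.queue_declare(queue="events.audit", durable=True,
                              arguments={"x-message-ttl": 60000})  # hypothetical changed config
    except pika.exceptions.ChannelClosedByBroker as exc:
        # 406 PRECONDITION_FAILED: the topology changed underneath us
        report_error(exc)  # hypothetical hook into your error reporting
        sys.exit(1)        # crash rather than continue with the old declaration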
I am new to message queues and was wondering if people can explain use cases for using an exclusive queue in RabbitMQ.
From the docs:
An exclusive queue can only be used (consumed from, purged, deleted, etc) by its declaring connection.
Exclusive queues are deleted when their declaring connection is closed or gone. They therefore are only suitable for client-specific transient state.
Exclusive queues are a type of temporary queue, and as such they:
"...can be a reasonable choice for workloads with transient clients, for example, temporary WebSocket connections in user interfaces, mobile applications and devices that are expected to go offline or use switch identities. Such clients usually have inherently transient state that should be replaced when the client reconnects."
See notes on durability for more context.
Exclusive queues, as you note, have an added restriction. An exclusive queue:
"... can only be used (consumed from, purged, deleted, etc) by its declaring connection."
This makes it (potentially) suitable for use as a queue contained within a single application, where the application creates and then processes its own work-queue items.
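A minimal pika sketch of that pattern (the payload is made up): the broker names the queue, only this connection can use it, and it disappears when the connection closes.

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # exclusive=True: usable only by this connection, deleted when it closes;
    # an empty queue name lets the broker generate one (e.g. amq.gen-...)
    result = channel.queue_declare(queue="", exclusive=True)
    queue_name = result.method.queue

    channel.basic_publish(exchange="", routing_key=queue_name, body=b"internal work item")
    method, properties, body = channel.basic_get(queue=queue_name, auto_ack=True)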
The following points are just my opinion, reflecting on the above notes:
I think it may be a relatively rare RabbitMQ use case, compared to "publish-and-subscribe" use cases such as topic exchanges and similar implementations which communicate across different nodes and connections.
Also, I would expect that much of the time, the core functionality of this type of queue could be provided by a language's built-in data structures (such as Java's queue implementations). But if you want a mature out-of-the-box queueing solution for your application to use internally, then maybe it can be a suitable option.
Let's consider a system where data from thousands of clients is published to a RabbitMQ exchange (client_id is known at this stage). The exchange routes the messages to a single queue. Finally, messages are consumed by a single application. Works great.
However, over time, the consuming application becomes a bottleneck and needs to be scaled horizontally. The problem is that the system requires messages concerning a particular client to be consumed by the same instance of the application.
I can create lots of queues: either one per client, or use a topic exchange and route based on some client_id prefix. Still, I don't see an elegant way to design the consumer application so that it can be scaled horizontally (as it requires explicitly stating the queues it consumes).
I'm looking for a RabbitMQ way of solving this problem.
RabbitMQ has x-consistent-hash and x-modulus-hash exchanges that can be used to solve the problem. When these exchanges are used, messages get partitioned to different queues according to the hash values of their routing keys. Of course, there are differences between x-consistent-hash and x-modulus-hash in how partitioning is implemented, but the main idea stays the same - messages with the same routing key (client_id) will be distributed to the same queue and should eventually be consumed by the same application instance.
For example, the system can have the following topology: every application instance defines an exclusive queue (used by only one connection; the queue is deleted when that connection closes) that is bound to the exchange (x-consistent-hash or x-modulus-hash).
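A hedged pika sketch of that topology (exchange name and binding weight are made up; it assumes the rabbitmq_consistent_hash_exchange plugin is enabled on the broker):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    channel.exchange_declare(exchange="clients", exchange_type="x-consistent-hash")

    # each consumer instance declares its own exclusive queue and binds it;
    # for x-consistent-hash the binding key is a weight, not a pattern
    result = channel.queue_declare(queue="", exclusive=True)
    channel.queue_bind(queue=result.method.queue, exchange="clients", routing_key="10")

    # publishers use client_id as the routing key, so one client always hashes to one queue
    channel.basic_publish(exchange="clients", routing_key="client-42", body=b"payload")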
In my opinion, it is a good idea to have a distributed cache layer in this particular scenario, but RabbitMQ does provide the plugins to tackle this kind of problem.
We're currently using RabbitMQ, where a continuously super-fast producer is paired with a consumer constrained by a limited resource (e.g. slow-ish MySQL inserts).
We don't like declaring a queue with x-max-length, since all messages will be dropped or dead-lettered once the limit is reached, and we don't want to lose messages.
Adding more consumers is easy, but they'll all be limited by the one shared resource, so that won't work. The problem still remains: How to slow down the producer?
Sure, we could put a flow-control flag in Redis, memcached, MySQL or something else that the producer reads, as pointed out in an answer to a similar question; or perhaps better, the producer could periodically check the queue length and throttle itself. But these seem like hacks to me.
I'm mostly questioning whether I have a fundamental misunderstanding. I had expected this to be a common scenario, and so I'm wondering:
What is best practice for throttling producers? How is this done with RabbitMQ? Or do you do this in a completely different way?
Background
Assume the producer actually knows how to slow itself down with the right input, e.g. a hardware sensor or hardware random number generator that can generate as many events as needed.
In our particular real case, we have an API that users can use to add messages. Instead of devouring and discarding messages, we'd like to apply back-pressure by having our API return an error if the queue is "full", so the caller/user knows to back off, or by having the API block until the consumer catches up. We don't control our users, so regardless of how fast the consumer is, I can create a producer that is faster.
I was hoping for something like the API for a TCP socket, where a write() can block and where a select() can be used to determine if a handle is writable. So either have the RabbitMQ API block, or have it return an error if the queue is full.
Regarding the x-max-length property, you said you don't want messages to be dropped or dead-lettered. There has since been an update adding more capabilities here. As specified in the documentation:
"Use the overflow setting to configure queue overflow behaviour. If overflow is set to reject-publish, the most recently published messages will be discarded. In addition, if publisher confirms are enabled, the publisher will be informed of the reject via a basic.nack message"
So as I understand it, you can use the queue limit to reject new messages from publishers, thus pushing some back-pressure upstream.
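A hedged pika sketch of that back-pressure (queue name and limit are made up): declare the queue with x-max-length and x-overflow=reject-publish, turn on publisher confirms, and surface the nack to the API caller.

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.confirm_delivery()  # publisher confirms, so rejects come back as nacks

    channel.queue_declare(queue="work", durable=True, arguments={
        "x-max-length": 10000,           # hypothetical limit
        "x-overflow": "reject-publish",  # a full queue nacks new publishes instead of dropping old ones
    })

    try:
        channel.basic_publish(exchange="", routing_key="work", body=b"payload")
    except pika.exceptions.NackError:
        # queue is full: propagate back-pressure, e.g. return HTTP 503 to the API caller
        raise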
I don't think this is in any way RabbitMQ-specific. Basically you have a scenario where two systems have different processing capabilities, and this mismatch will either pose a risk of overflowing the queue (whatever it might be) or, in the case of a constant mismatch between producer and consumer, simply create more and more time-distance between an event's creation and its handling.
I used to deal with these kinds of scenarios, and unfortunately there is no magic bullet. You either have to speed up event handling (better hardware, more suitable software?) or throttle event creation (which has nothing to do with MQ really).
Now, I would ask what the goal is and how the events are produced. Are the events produced constantly, at either an unlimited or just very high rate (for example, readings from sensors - the more, the better), or are they created in batches/spikes (for example, user requests in specific time periods, batch loads from a CRM system)? I assume that the goal is to process everything, because you mention you don't want to lose any queued messages.
If the output is constant, then some limiter is definitely in order (either an internal counter, if the producer is the only producer, or external queue-length checks, if the queue can be filled by some other system), along the lines of:
    # sketch: compare the observed event rate with the estimated consumer bandwidth
    if events_in_period / period_seconds > estimated_consumer_bandwidth:
        lower_rate()  # hypothetical hook: back the producer off
    else:
        raise_rate()  # hypothetical hook: speed the producer back up
In real-world scenarios we used to simply limit the output manually to the estimated values, with alerts set for queue length, time from queue entry to queue exit, etc. Where such limiters were omitted (mostly by mistake), we would later find tasks that were supposed to be handled within a few hours still waiting their turn three months later.
I'm afraid it's hard to answer "How to slow down the producer?" when we know nothing about it, but some ideas are the aforementioned rate check, or maybe a blocking AddMessage method:
    import time

    def add_message(message):
        # block while the queue is over its allowed length
        # (get_queue_length() could do a passive queue_declare and read message_count)
        while get_queue_length() > MAX_ALLOWED_QUEUE_LENGTH:
            time.sleep(1)
        mq_adapter.add_message(message)  # hypothetical wrapper around the real publish
I'd say it all depends on the specifics of the producer application and, in general, on your architecture.
I would like to configure my ActiveMQ producers to failover (I'm using the Stomp protocol) when a broker reaches a configured limit. I want to allow consumers to continue consumption from the overloaded broker, unabated.
Reading ActiveMQ docs, it looks like I can configure ActiveMQ to do one of a few things when a broker reaches its limits (memory or disk):
Slow down messages using producerFlowControl="true" (by blocking the send)
Throw exceptions when using sendFailIfNoSpace="true"
Neither of the above, in which case... I'm not sure what happens? Does it revert to TCP flow control?
It doesn't look like any of these things are designed to trigger a producer failover. A producer will fail over when it fails to connect but not, as far as I can tell, when it fails to send (due to producer flow control, for example).
So, is it possible for me to configure a broker to refuse connections when it reaches its limits? Or is my best bet to detect the slowdown on the producer side, and to manually reconfigure my producers to use a different broker at that time?
Thanks!
Your best bet is to use sendFailIfNoSpace, or better sendFailIfNoSpaceAfterTimeout. This will throw an exception up to your client, which can then attempt to resend the message to another broker at the application level (though you can encapsulate this logic over the top of your Stomp library, and use this facade from your code). Though if your ActiveMQ setup is correctly wired, your load both in terms of production and consumption should be more or less evenly distributed across your brokers, so this feature may not buy you a great deal.
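For example, a rough sketch of such a facade using the Python stomp.py client (broker addresses and destination are made up). One caveat: with plain STOMP a broker-side rejection arrives as an ERROR frame, so production code would also register an error listener or use receipts; this only shows the broker-rotation structure.

    import stomp

    BROKERS = [("broker-a", 61613), ("broker-b", 61613)]  # hypothetical addresses

    def send_with_failover(destination, body):
        last_error = None
        for host_port in BROKERS:
            try:
                conn = stomp.Connection([host_port])
                conn.connect(wait=True)
                conn.send(destination=destination, body=body)
                conn.disconnect()
                return
            except Exception as exc:  # connect failure, send failure, etc.
                last_error = exc
        raise last_error  # every broker refused the message

    send_with_failover("/queue/work", "an important message")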
You would probably get a better result if you concentrated on fast consumption of the messages, and increased the storage limits to smooth out peaks in load.
In our application the publisher creates a message and sends it to a topic.
It then needs to wait until all of the topic's subscribers ack the message.
It does not appear that message bus implementations can do this automatically, so we are leaning towards having each subscriber send its own new message back to the client when it is done.
Now, the client can receive all such messages and, when it has one from each destination, do whatever clean-ups it has to do. But what if the client (sender) crashes partway through the stream of acknowledgments? To handle such a misfortune, I need to (re)implement on the client what the buses already implement -- saving the incoming acknowledgments until I have enough of them.
I don't believe our needs are that esoteric -- how would you handle a situation where the sender (publisher) must wait for confirmations from multiple recipients (subscribers)? Sort of like requesting (and awaiting) return receipts from each subscriber to a mailing list...
We are using RabbitMQ, if it matters. Thanks!
The functionality you are looking for sounds like a messaging solution that can perform transactions across publishers and subscribers of a message. In the Java world, JMS specifies such transactions; one example of a JMS implementation is HornetQ.
RabbitMQ does not provide such functionality, and it does so for good reasons. RabbitMQ is built to be extremely robust and to perform like hell at the same time. The transactional behavior you describe is only achievable at the cost of a considerable performance loss (especially if you want to keep that outstanding robustness).
With RabbitMQ, one way to ensure that a message was consumed successfully is indeed to publish an answer message on the consumer side that is then consumed by the original publisher. This can be achieved with RabbitMQ's RPC pattern, which might give you a clean solution for your problem setting.
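A minimal pika sketch of that RPC-style back-channel (exchange name, routing key and subscriber count are made up): each subscriber publishes its acknowledgment to the reply_to queue, tagged with the original correlation_id.

    import uuid
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # exclusive reply queue for this publisher
    reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue
    corr_id = str(uuid.uuid4())

    channel.basic_publish(
        exchange="notifications",
        routing_key="user.updated",
        properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=corr_id),
        body=b"payload",
    )

    # each subscriber publishes its ack to reply_to with the same correlation_id;
    # the publisher counts acks until all expected subscribers have answered
    expected_acks = 3  # hypothetical number of subscribers
    received = 0
    for method, props, body in channel.consume(queue=reply_queue, auto_ack=True):
        if props.correlation_id == corr_id:
            received += 1
            if received == expected_acks:
                break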
If the (original) publisher crashes before all answers could be received, you can assume that all outstanding answers are still queued on the broker. So you would have to build your publisher so that it is capable of resuming with those leftover messages. This might turn out to be non-trivial.
Finally, I recommend the following solution: Design your producing component in a way that you can consume the answers with one or more dedicated answer consumers that are separated from the origin publisher.
Benefits of this solution are:
the origin publisher can finish its task independent of consumer success
the origin publisher is independent of consumer availability and speed
the origin publisher implementation is far less complex
in a crash scenario, the answer consumer can resume with processing answers
Now to a more general point: One of the major benefits of messaging is the decoupling of application components by the broker. In AMQP, this is achieved with exchanges and bindings that allow you to move message distribution logic from your application to a central point of configuration.
If you add RPC-style calls to your clients, then your components are most likely tightly coupled again, meaning that the publishing component fails if one of the consuming components fails / is unavailable / is too slow. This is exactly what you want to avoid. Otherwise, why would you have split the components in the first place?
My recommendation is that you design your application so that publishers can complete their tasks independent of the success of consumers wherever possible. Back-channels should be an exceptional case, implemented in the loosely coupled way described above.