Use cases for exclusive queues in RabbitMQ

I am new to message queues and was wondering if people can explain use case(s) for using an exclusive queue in RabbitMQ.
From the docs:
An exclusive queue can only be used (consumed from, purged, deleted, etc) by its declaring connection.
Exclusive queues are deleted when their declaring connection is closed or gone. They therefore are only suitable for client-specific transient state.

Exclusive queues are a type of temporary queue, and as such they:
"...can be a reasonable choice for workloads with transient clients, for example, temporary WebSocket connections in user interfaces, mobile applications and devices that are expected to go offline or use switch identities. Such clients usually have inherently transient state that should be replaced when the client reconnects."
See notes on durability for more context.
Exclusive queues, as you note, have an added restriction. An exclusive queue:
"... can only be used (consumed from, purged, deleted, etc) by its declaring connection."
This makes it (potentially) suitable for use as a queue contained within a single application, where the application will create and then process its own work queue items.
The following points are just my opinion, reflecting on the above notes:
I think it may be a relatively rare RabbitMQ use case, compared to "publish-and-subscribe" use cases such as topic exchanges and similar implementations which communicate across different nodes and connections.
Also, I would expect that much of the time, the core functionality of this type of queue could be provided by a language's built-in data structures (such as Java's queue implementations). But if you want a mature out-of-the-box queueing solution for your application to use internally, then maybe it can be a suitable option.
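To make that concrete, here is a minimal sketch with the RabbitMQ Java client (the host and message content are placeholders): the application declares a server-named exclusive queue, feeds it its own work items, and consumes them over the same connection; the broker removes the queue as soon as that connection closes.

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ExclusiveQueueExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: a local broker with default credentials

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // durable=false, exclusive=true, autoDelete=false; an empty name lets the broker
            // generate one (e.g. "amq.gen-..."). The queue disappears when this connection closes.
            String queueName = channel.queueDeclare("", false, true, false, null).getQueue();

            // the application publishes work items to its own private queue and consumes them itself
            channel.basicPublish("", queueName, null,
                    "private work item".getBytes(StandardCharsets.UTF_8));
            channel.basicConsume(queueName, true,
                    (tag, delivery) -> System.out.println(
                            new String(delivery.getBody(), StandardCharsets.UTF_8)),
                    tag -> {});

            Thread.sleep(1000); // give the consumer a moment before the connection (and queue) goes away
        }
    }
}
```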

Related

RabbitMQ competing consumers processing 1 message at a time sequentially

Similar to this question, we have FIFO queues and the messages must be processed in order. We want competing consumers from different machines for redundancy and performance reasons, but only one consumer on one machine should handle a message for a given queue at a time.
I tried setting the prefetch count to 1, but I believe this will only work if used with a single machine. Is this possible by default with RabbitMQ or do we need to implement our own lock?
Given a single queue with multiple consumers, there is no way to block one of the consumers; all of them receive messages in round-robin fashion.
EDIT
See https://www.rabbitmq.com/consumers.html#single-active-consumer
/EDIT
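For reference, a small sketch with the RabbitMQ Java client (the queue name is just an example) of the single-active-consumer option linked in the edit above. It is enabled via a queue argument, so all consumers can stay attached while the broker delivers to only one of them at a time and fails over to another if it goes away.

```java
import java.util.HashMap;
import java.util.Map;

import com.rabbitmq.client.Channel;

public class SingleActiveConsumerSetup {
    // Declares a durable queue where only one attached consumer is active at a time;
    // if that consumer's channel or connection goes away, another attached consumer takes over.
    static void declareSacQueue(Channel channel, String queueName) throws java.io.IOException {
        Map<String, Object> args = new HashMap<>();
        args.put("x-single-active-consumer", true); // queue argument from the docs linked above
        channel.queueDeclare(queueName, true, false, false, args); // durable, non-exclusive, no auto-delete
    }
}
```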
You could look at this plugin, https://github.com/rabbitmq/rabbitmq-consistent-hash-exchange, to distribute the load across different queues.
I tried setting the prefetch count to 1
prefetch=1 means that the consumers take one message at a time.
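As an illustration, a sketch with the Java client (the queue name and processing step are placeholders) of what prefetch=1 gives you: each consumer holds at most one unacknowledged message, but several consumers can still work in parallel on different messages.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DeliverCallback;

public class PrefetchOneConsumer {
    static void consume(Channel channel, String queueName) throws java.io.IOException {
        channel.basicQos(1); // at most one unacknowledged message per consumer on this channel

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            // process the message here, then acknowledge it so the broker sends the next one
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume(queueName, false, onMessage, consumerTag -> {});
    }
}
```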
do we need to implement our own lock
Yes, if you want a single consumer per queue while keeping the other consumers out.
EDIT
There are also Exclusive Queues (https://www.rabbitmq.com/queues.html#exclusive-queues), but note:
Exclusive queues are deleted when their declaring connection is closed or gone (e.g. due to underlying TCP connection loss). They, therefore, are only suitable for client-specific transient state.

Key-aware consumers in RabbitMQ

Let's consider a system where thousands of clients' data is published to a RabbitMQ exchange (client_id is known at this stage). The exchange routes them to a single queue. Finally, messages are consumed by a single application. Works great.
However, over time, the consuming application becomes a bottleneck and needs to be scaled horizontally. The problem is that the system requires messages concerning a particular client to be consumed by the same instance of the application.
I can create lots of queues: either one per client, or use a topic exchange and route based on some client_id prefix. Still, I don't see an elegant way to design the consumer application so that it can be scaled horizontally (as it requires explicitly stating which queues it consumes).
I'm looking for the RabbitMQ way of solving this problem.
RabbitMQ has x-consistent-hash and x-modulus-hash exchanges that can be used to solve the problem. When these exchanges are used, messages get partitioned to different queues according to hash values of their routing keys. Of course, there are differences between x-consistent-hash and x-modulus-hash in how partitioning is implemented, but the main idea stays the same - messages with the same routing key (client_id) will be distributed to the same queue and eventually should be consumed by the same application.
For example, the system can have the following topology: every application instance can define an exclusive queue (used by only one connection; the queue will be deleted when that connection closes) that is bound to the exchange (x-consistent-hash or x-modulus-hash).
In my opinion, it is a good idea to have a distributed cache layer in this particular scenario, but RabbitMQ provides the plugins to tackle this kind of problem.
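A rough sketch of that topology with the Java client (the exchange name and binding weight are made up for illustration, and the rabbitmq-consistent-hash-exchange plugin is assumed to be enabled): the producer uses client_id as the routing key, and each application instance binds its own exclusive, server-named queue to the hash exchange.

```java
import com.rabbitmq.client.Channel;

public class ConsistentHashTopology {
    static final String EXCHANGE = "client-events"; // placeholder exchange name

    // Producer side: routing key = client_id, so all messages for a client hash to the same queue.
    static void publish(Channel channel, String clientId, byte[] body) throws java.io.IOException {
        channel.exchangeDeclare(EXCHANGE, "x-consistent-hash", true);
        channel.basicPublish(EXCHANGE, clientId, null, body);
    }

    // Consumer side: each application instance declares its own exclusive, server-named queue
    // and binds it with a numeric binding key that acts as a weight; the queue disappears
    // when the instance's connection closes and the hash ring rebalances.
    static String bindInstanceQueue(Channel channel) throws java.io.IOException {
        channel.exchangeDeclare(EXCHANGE, "x-consistent-hash", true);
        String queue = channel.queueDeclare("", false, true, false, null).getQueue();
        channel.queueBind(queue, EXCHANGE, "1"); // binding key "1" is the weight for this queue
        return queue;
    }
}
```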

When to declare/bind Queues and Exchanges with RabbitMQ

We have a wrapper library around RabbitMQ at my workplace, created by someone who no longer works here. I'm designing a new system using Rabbit, and am working out the best approach for declaring queues, exchanges and bindings. Our Rabbit architecture has a few federated global zones, and each zone has multiple Rabbit nodes.
The wrapper code to publish messages and subscribe to queues re-declares the relevant exchanges, queues and bindings each time. My concern is that this may introduce significant latency into every message publish, especially if it needs to wait for confirmation that the queue/exchange exists in the remote global zones. I expect the benchmarks showing millions of messages a second don't re-declare the exchange for each publish.
In short, this approach seems a bit wasteful and paranoid to me, but perhaps I'm missing something.
So I have a few questions:
Is re-declaring the queues and exchanges a significant performance hit, given global federation?
Is re-declaring on each use a good approach because it handles queues/exchanges disappearing due to broker restarts or explicit deletion?
Should we just declare queues and exchanges once per process and expect them to last the whole lifetime?
Should durable exchanges and queues be declared in Rabbit config and not declared by the applications at all?
How should config changes for queues/exchanges be handled if applications may continue to declare them with old config? Should applications just handle the declare failure and continue to publish/consume?
Is re-declaring the queues and exchanges a significant performance hit
it can be for a very large volume of messages
Is re-declaring on each use a good approach because it handles queues/exchanges disappearing due to broker restarts or explicit deletion?
"good approach" - no.
"effective" at preventing disappeared exchanges / queues / bindings from causing problems, yes... but it's not a good thing to do, in most cases
(maybe ok if you only send a message very infrequently and there is a real cause for concern about the topology being wiped clean)
Should we just declare queues and exchanges once per process and expect them to last the whole lifetime?
this is my general approach.
it opens the possibility of topology being destroyed and you not knowing it. it comes down to whether or not you think this will really happen.
Should durable exchanges and queues be declared in Rabbit config and not declared by the applications at all?
there's nothing wrong with pre-defined topology, but it misses a lot of the power and flexibility of rabbitmq and the amqp protocol.
many messaging systems require predefined topologies and specialized tools to manage the topology. amqp is quite different in that it allows you to define the topology as needed.
if you deal with a static topology, then this might be a good option for you
How should config changes for queues/exchanges be handled if applications may continue to declare them with old config? Should applications just handle the declare failure and continue to publish/consume?
i would crash the app and report it through whatever error reporting mechanism you are using.
having a topology change is usually something important, and done for a reason. if the exchange or queue declaration needs to change, there is probably a good reason for it and the code should not continue with the old declaration.
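For what it's worth, a sketch of the declare-once-per-process approach with the Java client (exchange, queue and binding-key names are placeholders): the topology is asserted a single time at startup, and the same channel is then reused for publishing without any per-publish re-declares.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class TopologySetup {
    // Called once at process startup. Declarations are idempotent as long as the arguments
    // match what already exists on the broker, so an app restart simply re-asserts the topology.
    static Channel connectAndDeclare(String host) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost(host);
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        channel.exchangeDeclare("orders", "topic", true);                  // durable topic exchange
        channel.queueDeclare("orders.billing", true, false, false, null);  // durable, shared queue
        channel.queueBind("orders.billing", "orders", "order.created.#");

        return channel; // reuse this channel for publishing; no re-declare around each publish
    }
}
```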

RabbitMQ Per-Connection Queue Creation and Deletion

I've been adopting RabbitMQ in a new project. I'll need a clustered environment to tolerate system failures and handle high demand. On to the problem: queues must be created as exclusive whenever a client connects. If the client disconnects, I want the queue to be deleted, freeing its resources. Furthermore, queue bindings to topics must be limited with per-credential permissions.
Concluding, I would like to constrain connections to creating only exclusive queues (which would auto-delete when the connection closes) and to binding such queues only to a list of topics I allow, configured per user account.
I'm not able to either limit queue creation to exclusive queues or limit the topics a client can subscribe to. I could impose this constraint based on vhosts, but that would require creating vhosts dynamically, probably hundreds of them.
Is this possible in RabbitMQ? Is there a better approach to it?
Thanks
If you only want clients to be able to create exclusive queues, you may need to write your own wrapper and abstract RabbitMQ away from the clients completely. Have your clients talk to RabbitMQ through this wrapper and handle queue creation and binding there.
This would expose your own version of queue_declare, which then calls the RabbitMQ queue_declare method with exclusive=true.
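A hedged sketch of such a wrapper with the Java client (the class, its methods, and the per-account topic allow-list are all hypothetical): it only ever declares exclusive, server-named queues on the caller's behalf and refuses bindings to topics outside the allow-list.

```java
import java.util.Set;

import com.rabbitmq.client.Channel;

public class RestrictedQueueWrapper {
    private final Channel channel;
    private final Set<String> allowedTopics; // configured per user account (hypothetical)

    RestrictedQueueWrapper(Channel channel, Set<String> allowedTopics) {
        this.channel = channel;
        this.allowedTopics = allowedTopics;
    }

    // Clients never talk to RabbitMQ directly: every queue this wrapper creates is exclusive,
    // so it is removed automatically when the underlying connection closes.
    public String declareClientQueue() throws java.io.IOException {
        return channel.queueDeclare("", false, true, false, null).getQueue();
    }

    // Binding is only allowed for topics on the allow-list.
    public void bind(String queueName, String topicExchange, String topic) throws java.io.IOException {
        if (!allowedTopics.contains(topic)) {
            throw new IllegalArgumentException("Topic not permitted for this account: " + topic);
        }
        channel.queueBind(queueName, topicExchange, topic);
    }
}
```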

How to implement single-consumer-multi-queue model for rabbitMQ

I have found this image is very similar to my business model. I need to split messages across several queues. For the heavy work, I can add more worker threads. But for the lighter work, I would like a single consumer to subscribe to those messages. How can I do that in RabbitMQ? In the documentation, I only found the single-queue-multi-consumer model.
You can add multiple workers to a queue.
There can be multiple queues bound to an exchange.
In RabbitMQ, the producer always sends the message to an exchange, so in your case a single exchange should be enough. If you want to load-balance on the consumer side, you have the two options above.
You can also read my article:
https://techietweak.wordpress.com/2015/08/14/rabbitmq-a-cloud-based-message-oriented-middleware/
RabbitMQ has a very flexible model, which enables a wide variety of routing scenarios to take place.
I need to split messages across several queues. For the heavy work, I can add more worker threads.
Yes, this is supported via a direct exchange. Publish a message using a routing key that is the same as the name of the queue. For convenience, let's say you use the fully-qualified object name (e.g. MyApp.Objects.DataTypeOne). All you need to do is subscribe multiple consuming processes to this queue, and RabbitMQ will load-balance using a round-robin approach.
But for the lighter work, I would like a single consumer to subscribe to those messages.
Yes, you can do this also. Same process as in the paragraph above. Just don't attach multiple consuming processes.
I have found this image is very similar to my business model.
The diagram isn't very useful, because it lacks information about the type of messages being published. In that sense, it is only an interconnect diagram. The interesting lines are the ones connecting the queues to the exchange, as that is what you specify within RabbitMQ via Queue Bindings. You can also bind exchanges to one another, but that's a bit further than we probably need to go.
Everything else on the diagram is fully under your control as the user of the RabbitMQ/AMQP system. You can create an arbitrary number of publishers and have an arbitrary number of consuming processes each consuming from an arbitrary number of queues. There are no hard and fast limits, though there are some practical aspects you probably will want to think about to ensure your system is maintainable.
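To illustrate the single-consumer-multiple-queues side with the Java client (exchange, queue and routing-key names are placeholders): two light queues are bound to a direct exchange and one process simply calls basicConsume on both, while the heavy queue can have as many competing consumers attached as you like.

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DeliverCallback;

public class MixedConsumerLayout {
    static void setUp(Channel channel) throws java.io.IOException {
        channel.exchangeDeclare("work", "direct", true);

        // one heavy queue: attach several competing consumers (possibly on other machines)
        channel.queueDeclare("work.heavy", true, false, false, null);
        channel.queueBind("work.heavy", "work", "work.heavy");

        // two light queues: a single consumer subscribes to both
        channel.queueDeclare("work.light.a", true, false, false, null);
        channel.queueBind("work.light.a", "work", "work.light.a");
        channel.queueDeclare("work.light.b", true, false, false, null);
        channel.queueBind("work.light.b", "work", "work.light.b");

        DeliverCallback handle = (tag, delivery) ->
                System.out.println(delivery.getEnvelope().getRoutingKey()
                        + ": " + new String(delivery.getBody(), StandardCharsets.UTF_8));

        // the same process (even the same channel) can consume from both light queues
        channel.basicConsume("work.light.a", true, handle, tag -> {});
        channel.basicConsume("work.light.b", true, handle, tag -> {});
    }
}
```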