KahaDBPersistenceAdapter and createQueueMessageStore - activemq

I am not sure if I need to call createQueueMessageStore for each queue that will be persisted, and if not, what is the purpose of this call? Is setting the adapter on the broker enough, without individual queueMessageStores?

createQueueMessageStore() is used internally by the ActiveMQ broker; you don't need to call it yourself.
ActiveMQ automatically creates queues and topics on demand:
if you send a message to a queue foo.bar, the broker will check whether it exists and, if it doesn't, will create it for you (using the createQueueMessageStore() call).
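In other words, setting the persistence adapter once at the broker level is typically all that's needed; per-queue stores follow automatically. A minimal activemq.xml sketch, for illustration (the directory path is an example):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <!-- One persistence adapter for the whole broker; per-queue
       message stores are created on demand by the broker itself. -->
  <persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
  </persistenceAdapter>
</broker>
```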

Related

RabbitMQ Pika - how to check if a consumer is active

Does anyone know what is the easiest way to check if a consumer is active for a particular queue? I could hit the api on localhost manually, but I'd love a way to do this with Pika.
For example, I have a queue xyz and I want to know how many consumers are listening to it.
You can do a queue.declare with passive=true for the same queue you want to get the consumer count from. This will return a response (queue.declare-ok) which includes the consumer count.
From the protocol reference documentation:
bit passive
If set, the server will reply with Declare-Ok if the queue already exists with the same name, and raise an error if not. The client can use this to check whether a queue exists without modifying the server state.
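With Pika, that looks roughly like the sketch below (the queue name "xyz" is from the question; the helper name is mine). queue_declare(..., passive=True) returns the Declare-Ok frame, whose method carries consumer_count:

```python
def get_consumer_count(channel, queue_name):
    """Passively declare `queue_name` on an open pika channel and
    return its consumer count.

    passive=True never creates the queue; the broker raises a
    channel error instead if the queue does not exist.
    """
    result = channel.queue_declare(queue=queue_name, passive=True)
    # The Declare-Ok reply carries message_count and consumer_count.
    return result.method.consumer_count
```

Usage would be `get_consumer_count(connection.channel(), "xyz")` on a live BlockingConnection.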

RabbitMQ: dropping messages when no consumers are connected

I'm trying to setup RabbitMQ in a model where there is only one producer and one consumer, and where messages sent by the producer are delivered to the consumer only if the consumer is connected, but dropped if the consumer is not present.
Basically I want the queue to drop all the messages it receives when no consumer is connected to it.
An additional constraint is that the queue must be declared on the RabbitMQ server side, and must not be explicitly created by the consumer or the producer.
Is that possible?
I've looked at a few things, but I can't seem to make it work:
durable vs non-durable does not work, because it is only useful when the broker restarts. I need the same effect but on a connection.
setting auto_delete to true on the queue means that my client can never connect to this queue again.
x-message-ttl and max-length make it possible to lose messages even when there is a consumer connected.
I've looked at topic exchanges, but as far as I can tell, these only affect the routing of messages between the exchange and the queue based on the message content, and can't take into account whether or not a queue has connected consumers.
The effect that I'm looking for would be something like auto_delete on disconnect, and auto_create on connect. Is there a mechanism in rabbitmq that lets me do that?
After a bit more research, I discovered that one of the assumptions in my question regarding x-message-ttl was wrong. I overlooked a single sentence from the RabbitMQ documentation:
Setting the TTL to 0 causes messages to be expired upon reaching a queue unless they can be delivered to a consumer immediately
https://www.rabbitmq.com/ttl.html
It turns out that the simplest solution is to set x-message-ttl to 0 on my queue.
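If the declaration happens in application code rather than via a server-side policy, a Pika sketch might look like this (the helper and queue names are mine; the same x-message-ttl argument can also be set in a server-side policy to satisfy the "declared on the server side" constraint):

```python
def declare_drop_when_unconsumed_queue(channel, queue_name):
    """Declare a queue whose messages expire on arrival unless a
    consumer can receive them immediately (x-message-ttl = 0)."""
    return channel.queue_declare(
        queue=queue_name,
        arguments={"x-message-ttl": 0},
    )
```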
You cannot do it directly, but there is a mechanism that is not difficult to implement.
You have to enable the Event Exchange Plugin. This is an exchange your server app can connect to in order to receive internal RabbitMQ events. You would be interested in the consumer.created and consumer.deleted events.
When these events are received you can trigger an action (create or delete the queue you need). More information here: https://www.rabbitmq.com/event-exchange.html
Hope this helps.
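A rough Pika sketch of subscribing to those events (the queue name is hypothetical, and the amq.rabbitmq.event exchange only exists once the plugin is enabled):

```python
def watch_consumer_events(channel, event_queue="consumer-events"):
    """Bind a queue to the event exchange so the app sees
    consumer.created / consumer.deleted notifications."""
    channel.queue_declare(queue=event_queue, exclusive=True)
    # Events are published with routing keys like "consumer.created",
    # so "consumer.#" catches both created and deleted.
    channel.queue_bind(
        queue=event_queue,
        exchange="amq.rabbitmq.event",
        routing_key="consumer.#",
    )
    return event_queue
```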
If your consumer is allowed to dynamically bind/unbind a queue on the broker during start/stop, it should be possible that way (e.g. the queue is set up in advance and the consumer binds it at startup to the exchange it wants to receive messages from).

Correct config using rabbitmq as celery backend

I'm building a flask app with celery, using rabbitmq as celery's backend.
my conf for celery is
CELERY_BROKER_URL='amqp://localhost:5672',
CELERY_RESULT_BACKEND='amqp://',
CELERY_QUEUE_HA_POLICY='all',
CELERY_TASK_RESULT_EXPIRES=None
Then, declaring a queue produced a whole bunch of errors
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue=new_task_id)
error
PreconditionFailed: Queue.declare: (406) PRECONDITION_FAILED - inequivalent arg
'durable' for queue '1419349900' in vhost '/':
received 'true' but current is 'false'
OK, I changed it to channel.queue_declare(queue=new_task_id, durable=True)
again, error
PreconditionFailed: Queue.declare: (406) PRECONDITION_FAILED - inequivalent arg
'auto_delete' for queue '1419350288' in vhost '/':
received 'true' but current is 'false'
OK, I changed it to channel.queue_declare(queue=new_task_id, durable=True, auto_delete=True)
This time error disappeared.
But how would I have known this before I got these errors? I searched Celery's documentation on this topic, but didn't find what I need: it just lists all the config items without telling me how to set them. Or is it RabbitMQ's documentation I should refer to?
Thank you!
edit
So, all Queues declared in your configuration file, or in any registered tasks.
Could you explain a bit more on this? And what's the difference between declare and create?
You said Result queues will be created with 'durable', 'auto-delete' flags — where can I find this information? And how does Celery know a queue is a result queue?
Celery's default behaviour is to create all missing queues (see the CELERY_CREATE_MISSING_QUEUES documentation). Task queues will be created by default with the 'durable' flag. Result queues will be created with the 'durable' and 'auto-delete' flags, and with 'x-expires' if your CELERY_TASK_RESULT_EXPIRES parameter is not None (by default, it is set to 1 day).
So, all queues declared in your configuration file, or in any registered tasks, will be created. Moreover, as you use the amqp result backend, if you do not have the CELERY_IGNORE_RESULT parameter set, the worker will create the result queue at initialisation of the task, named after the task_id.
So, if you try to redeclare this queue with a conflicting configuration, RabbitMQ will refuse it. And thus, you don't have to create it.
Edit
Queue "declaration", as indicated in the pika documentation, allows you to check the existence of a queue in RabbitMQ and, if it does not exist, create it. If CELERY_CREATE_MISSING_QUEUES is set to True in your Celery configuration, then at initialization any queue listed in the CELERY_QUEUES or CELERY_DEFAULT_QUEUE parameters, or any custom queue declared in registered task options, such as @task(name="custom", queue="my_custom_queue"), or even in a custom CELERY_ROUTING definition, will be "declared" to RabbitMQ and thus created if it doesn't exist.
Queue parametrization documentation can be found here, in the Using Transient Queues paragraph, but the best way to see it is to use the RabbitMQ management plugin, which lets you monitor the declared queues and their configuration in a web UI (you can see a D flag for Durable and an A-D flag for Auto-Delete).
Finally, Celery doesn't "know" whether a queue is a result queue; rather, when a task is created it is assigned a unique identifier, and this identifier is used as the queue name for any result. This means that if the producer of the task waits for a result, it will listen to that queue as soon as it is created. And the consumer, once the task is acknowledged and before the task is actually executed, and provided the task does not ignore results (through the CELERY_IGNORE_RESULT setting or custom task options), will check whether a queue named after the task identifier exists; if not, it will create it with the default result configuration (see the Result Backend configuration).
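Put together, a declaration matching the flags described above (the helper name and expiry value are illustrative, not Celery's actual code) would look roughly like:

```python
def declare_result_queue(channel, task_id, expires_ms=None):
    """Declare a result queue the way the answer describes:
    durable + auto_delete, plus x-expires when results expire."""
    arguments = {"x-expires": expires_ms} if expires_ms is not None else None
    return channel.queue_declare(
        queue=task_id,
        durable=True,
        auto_delete=True,
        arguments=arguments,
    )
```

This matches the flags the asker had to add (durable=True, auto_delete=True) before the PRECONDITION_FAILED errors went away.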

Many subscriptions to a single queue with RabbitMQ STOMP

Is it possible to bind a single queue to many topics using RabbitMQ STOMP client?
Each time a client sends a SUBSCRIBE frame, the server creates a new queue for it, which makes "prefetch-count" useless for me, because it applies to each subscription individually.
I am just looking for any way to get messages with many topics in the single queue via RabbitMQ Web-STOMP. Any ideas?
See the documentation on user-generated queue names for Topic and Exchange destinations.
If the x-queue-name header specifies the same queue name on each subscription, they should bind to the same queue if it exists, although multiple subscriptions will still exist on the client.
The AMQP and STOMP concepts differ and are not compatible in some ways.
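As an illustrative sketch (destination and queue names here are hypothetical), each SUBSCRIBE frame would carry the same x-queue-name header so that both topic subscriptions feed one queue:

```
SUBSCRIBE
id:sub-1
destination:/topic/orders.created
x-queue-name:shared-orders-queue

^@
SUBSCRIBE
id:sub-2
destination:/topic/orders.cancelled
x-queue-name:shared-orders-queue

^@
```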

Configure RabbitMQ to replace an old pending message with a new one

Is is possible to configure a RabbitMQ exchange or a queue in such a way that at most one message with a given routing key is pending at any time? If new message arrives, the old one would be dropped and the new one enqueued.
If such option is not available, what would be the best way to implement this at the application level? I.e. when application receives a message how can it check if there any more pending messages?
You need to install the Last Value Cache plugin and enable it. Your exchange will be of type "x-lvc", which inherits from the direct exchange type.
Each time you connect to MQ, create a queue and bind it to this exchange. It will deliver the most recent message to the queue, which is perfect for making sure you get only the most up-to-date message. All other messages sent to this exchange are discarded unless a queue is connected; once connected, you will continue to receive updates.
Here are the installation instructions:
https://github.com/simonmacmullen/rabbitmq-lvc-plugin
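A Pika sketch of that setup (exchange, queue, and routing-key names are hypothetical, and the "x-lvc" exchange type only exists once the plugin is installed):

```python
def setup_last_value_queue(channel, exchange, queue, routing_key):
    """Declare an x-lvc exchange and bind a fresh queue to it;
    the exchange replays the last message per routing key."""
    channel.exchange_declare(exchange=exchange, exchange_type="x-lvc")
    channel.queue_declare(queue=queue, exclusive=True)
    channel.queue_bind(queue=queue, exchange=exchange,
                       routing_key=routing_key)
```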
here is a similar question:
RabbitMQ messaging - initializing consumer