Delete queue in ActiveMQ cluster?

I have configured an ActiveMQ cluster using JDBC Master/Slave. My requirement is that once I am finished with a queue I should delete it. Using JMX you can delete a queue if you know the broker's JMX service URL, but in a cluster I can't know which broker the queue was created on. Is there any other way to do this?
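
(For reference, the JMX route mentioned above looks roughly like this in Java. This is a sketch only: the host, port, broker name "localhost", and queue name are placeholder assumptions, not values from the question.)

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Assumption: JMX is exposed on localhost:1099 and the broker is named "localhost"
JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
    MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
    ObjectName broker = new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
    // removeQueue(String) is an operation on the broker MBean (BrokerViewMBean)
    mbsc.invoke(broker, "removeQueue",
            new Object[] { "queue.to.delete" },
            new String[] { String.class.getName() });
}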

Why don't you "garbage collect" them instead? Then you won't have to worry too much about unused queues cluttering up the cluster:
http://activemq.apache.org/delete-inactive-destinations.html
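
A hedged sketch of that option using the embedded-broker Java API (the timings here are placeholders; the equivalent policyEntry attributes can also go in activemq.xml, as the linked page describes):

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

// run inside a method that is allowed to throw Exception
BrokerService broker = new BrokerService();
broker.setSchedulePeriodForDestinationPurge(10000); // scan for unused destinations every 10 s
PolicyEntry policy = new PolicyEntry();
policy.setGcInactiveDestinations(true);
policy.setInactiveTimoutBeforeGC(30000); // ActiveMQ's setter really is spelled "Timout"
PolicyMap policyMap = new PolicyMap();
policyMap.setDefaultEntry(policy); // apply the policy to all destinations
broker.setDestinationPolicy(policyMap);
broker.start();

A queue with no consumers and no pending messages is then removed automatically after the timeout, on whichever broker happens to host it, so you never need to know which one that is.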

Related

ActiveMQ - how to set persistence from config?

I have a topic and three queues in ActiveMQ. A producer publishes messages to the topic, and each message is routed to these three queues. Now my questions are:
I haven't set persistence in the producer and I don't want to, because of a project limitation. How can I set persistence for the topic and queues from the ActiveMQ broker? I am using ActiveMQ 5.15.12.
Can I set persistence for one queue and not for another?
What will happen if I don't use persistence? I know there is a chance of losing messages during a broker restart, but is there any other way to overcome this issue?
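
For background: in JMS, persistence is a delivery-mode setting made by the producer per message, which is why there is no per-destination persistence switch in the broker configuration; the broker-level persistent="false" attribute disables the message store for the whole broker, not for individual queues. For reference, a minimal producer-side sketch, assuming a broker at tcp://localhost:61616 and a topic named "my.topic" (both placeholders):

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

// Assumption: the broker URL and topic name are placeholders
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = factory.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(session.createTopic("my.topic"));
producer.setDeliveryMode(DeliveryMode.PERSISTENT); // persistent messages survive a broker restart
producer.send(session.createTextMessage("hello"));
connection.close();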

rabbitmq-server start/stop loses data in durable queues

On Windows, when I use the rabbitmq-server start/stop commands, data in the RabbitMQ durable queues is deleted. It seems the queues are re-created when I start the RabbitMQ server.
If I use rabbitmqctl stop_app/start_app, I am not losing any data. Why?
What will happen if my server goes down, and how can I be sure that I won't lose data if it does?
Update: it was a configuration issue. I was starting RabbitMQ from the RabbitMQ sbin directory. I re-installed RabbitMQ and added it to Windows services. The data-loss problem is now solved on my machine: when I start/stop the Windows service, RabbitMQ does not lose any data.
Making queues durable is not enough. You will probably also need to declare the exchange as durable and send 'persistent' messages.
In Java you'll use:
channel.basicPublish("", "sample_queue",
        MessageProperties.PERSISTENT_TEXT_PLAIN, // note that this parameter is not null!
        message.getBytes());
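
To cover the other two points, a minimal sketch of the durable declarations; the exchange, queue, and routing-key names are placeholders, and channel is assumed to be an open com.rabbitmq.client.Channel:

import com.rabbitmq.client.MessageProperties;

channel.exchangeDeclare("sample_exchange", "direct", true);     // durable exchange
channel.queueDeclare("sample_queue", true, false, false, null); // durable queue
channel.queueBind("sample_queue", "sample_exchange", "sample_key");
channel.basicPublish("sample_exchange", "sample_key",
        MessageProperties.PERSISTENT_TEXT_PLAIN,                // persistent delivery mode
        message.getBytes());

All three pieces matter: durable exchange and queue definitions survive a broker restart, and the persistent delivery mode asks the broker to write the message itself to disk.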

Can I disable remote queue access in a RabbitMQ cluster?

When you create a RabbitMQ cluster, non-mirrored queues hosted on one node are "remotely accessible" from the other nodes.
To a naive developer it will seem as though they can publish to and consume from any node in the cluster, which gives a false sense of high availability.
If the node hosting the queue dies, the consumer will no longer be able to reach the queue from the other node.
Is there a way to disable this behaviour, so that it's obvious that one has to either use a mirrored queue or create a distinct queue on each server, consume from both, and then handle duplicates?
Thanks
It is not possible to disable this behaviour; it is one of the main reasons why you create a cluster.
BTW, you can create a federated setup instead by using the federation plug-in.
So you can:
have isolated nodes
share only the exchanges and/or queues you prefer.
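
For illustration, a hedged sketch of a dynamic federation setup with rabbitmqctl; the upstream name, host, and exchange-name pattern are placeholders:

rabbitmq-plugins enable rabbitmq_federation
rabbitmqctl set_parameter federation-upstream my-upstream '{"uri":"amqp://other-host:5672"}'
rabbitmqctl set_policy --apply-to exchanges federate-me "^federated\." '{"federation-upstream-set":"all"}'

Exchanges matching the policy pattern on the downstream node then receive messages published to their upstream counterparts, while the two nodes remain unclustered.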

Recognize RabbitMQ master node in high-availability cluster

I would like to run RabbitMQ highly available queues in a cluster of two RabbitMQ instances on two separate servers. It's not clear to me from the documentation how I can detect which node RabbitMQ considers the master, in order to determine which node I should publish messages to and consume from.
Is that something RabbitMQ resolves internally (so that I can publish and consume from the master even when connected to a slave node), or should the application know the master node for each queue and connect only to it?
RabbitMQ will take care of that. The idea of HA queues is that you publish and consume from either node, and RabbitMQ will try to keep a consistent state.
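
For example, the Java client can be given the addresses of both nodes and will connect to whichever one is reachable; the host names here are placeholders:

import com.rabbitmq.client.Address;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

ConnectionFactory factory = new ConnectionFactory();
// Assumption: the two cluster nodes are reachable as rabbit1 and rabbit2
Address[] nodes = { new Address("rabbit1", 5672), new Address("rabbit2", 5672) };
Connection conn = factory.newConnection(nodes); // tries each address in turn
// Publishing and consuming over this connection reach the queue's master via
// RabbitMQ's internal routing; the client never needs to know which node that is.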

RabbitMQ federation: replicating message deletions

I'm planning to use the RabbitMQ federation plugin to replicate messages from the master data center to a standby one, so I can't use clustered mirrored queues.
Is it possible to replicate message deletions as well, so that the downstream queue stays in sync automatically?
In case you need to replicate messages from one queue to many consumers, use a shovel to map the desired queue to a fanout exchange... then consume directly from the exchange, using exclusive queues for each consumer.
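
A hedged sketch of the consumer side, assuming the shovel already forwards into a fanout exchange (the exchange name "replicated.fanout" is a placeholder, and channel is an open com.rabbitmq.client.Channel):

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.DeliverCallback;

channel.exchangeDeclare("replicated.fanout", "fanout", true); // durable fanout exchange
// server-named, exclusive, auto-delete queue: one private copy of the stream per consumer
String queueName = channel.queueDeclare().getQueue();
channel.queueBind(queueName, "replicated.fanout", "");
DeliverCallback onMessage = (consumerTag, delivery) ->
        System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8));
channel.basicConsume(queueName, true, onMessage, consumerTag -> { });

Because each exclusive queue is deleted when its consumer disconnects, every consumer gets its own full copy of the fanout stream without interfering with the others.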