To achieve message queue backup and restore, is it enough to save the files under the Mnesia database directory and the broker configuration?
Also, do I have to stop the broker before copying the files?
Thanks!
I am using KahaDB as the persistent store for messages in ActiveMQ 5.16.4.
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"
checkForCorruptJournalFiles="true"
checksumJournalFiles="true"
ignoreMissingJournalfiles="true"
/>
</persistenceAdapter>
I'm sending persistent messages, and while the broker is running I'm deleting the KahaDB log files (db-1.log in the picture below), which are supposed to hold the queue's messages. However, deleting the log file doesn't seem to do anything: in the ActiveMQ console I still see the persistent messages, and I can also send more messages, which get picked up by consumers connected from Spring Boot apps. I thought deleting those log files would get rid of the messages pending in the queue, or break ActiveMQ. Any idea why that isn't happening?
Inside KahaDB folder:
ActiveMQ doesn't treat KahaDB like a SQL database where messages are stored and retrieved at runtime. Generally speaking, ActiveMQ keeps all of its messages in memory and uses KahaDB as a journal from which it reloads messages if the broker fails or is restarted administratively. Deleting KahaDB's underlying data won't impact what is in the broker's memory, and it's not clear why you would ever want to do this in the first place.
If you want to remove the messages from a queue during runtime you can do so administratively via the web console. Deleting the KahaDB log files is not the recommended way to do this.
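The web-console purge can also be done programmatically over JMX, which ActiveMQ exposes when JMX is enabled (it is by default for local connections). Below is a minimal sketch using only the JDK's javax.management classes; the broker name, queue name, and JMX service URL are assumptions you'd adjust for your own setup, and the `purge` operation is the one exposed by the queue MBean:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PurgeQueue {

    // Builds the JMX ObjectName for a queue on a broker. The broker name
    // here ("localhost") is illustrative -- match it to your activemq.xml.
    static ObjectName queueMBeanName(String brokerName, String queueName) throws Exception {
        return new ObjectName("org.apache.activemq:type=Broker,brokerName=" + brokerName
                + ",destinationType=Queue,destinationName=" + queueName);
    }

    public static void main(String[] args) throws Exception {
        ObjectName queue = queueMBeanName("localhost", "TEST.QUEUE");
        System.out.println(queue.getCanonicalName());

        // Only attempt the connection when a JMX URL is supplied, e.g.
        // service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
        if (args.length > 0) {
            try (JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(args[0]))) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                // Invoke the queue MBean's no-arg "purge" operation.
                mbs.invoke(queue, "purge", new Object[0], new String[0]);
            }
        }
    }
}
```

Purging this way removes the messages through the broker itself, so the in-memory state and the KahaDB journal stay consistent, which deleting log files by hand does not guarantee.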
I'm trying to copy all the messages in queue (Q1) to another queue (Q2) running on a different machine.
I'm using the shovel plugin, and both nodes are running AMQP 0-9-1. I've tested the connection: if I set the destination queue to a non-existing one, it does indeed create a new queue on the other machine, so I know the connection works.
rabbitmqctl set_parameter shovel test '{"src-uri": "amqp://guest:guest@localhost:5672", "src-queue": "q1", "ack-mode": "on-confirm", "dest-uri": "amqp://guest:guest@host:5672", "dest-queue": "q2"}'
I expected the plugin to transfer all existing messages to Q2, however they're not being transferred. Does the shovel plugin not do this?
It's because the messages were not in the Ready state. I had to kill my Celery worker, and then the messages transferred successfully.
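You can check how many messages are actually Ready (as opposed to unacked and held by a consumer) via the RabbitMQ management HTTP API before shovelling. A minimal sketch using only the JDK's HTTP client, assuming the management plugin on its default port 15672 and the default guest credentials; the queue name is illustrative:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class QueueReadyCheck {

    // Builds the management-API path for a queue; the default vhost "/"
    // must be percent-encoded as %2F.
    static String queueApiPath(String vhost, String queue) {
        return "/api/queues/"
                + URLEncoder.encode(vhost, StandardCharsets.UTF_8)
                + "/" + URLEncoder.encode(queue, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        String path = queueApiPath("/", "q1");
        System.out.println(path);

        // Only hit the API when a base URL such as http://localhost:15672
        // is supplied; the JSON response includes "messages_ready" and
        // "messages_unacknowledged" counts.
        if (args.length > 0) {
            String auth = Base64.getEncoder()
                    .encodeToString("guest:guest".getBytes(StandardCharsets.UTF_8));
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(args[0] + path))
                    .header("Authorization", "Basic " + auth)
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }
}
```

If `messages_unacknowledged` is high and `messages_ready` is near zero, a consumer (such as the Celery worker above) is holding the messages, and the shovel has nothing to move.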
Can I change the node name for a specific queue from the RabbitMQ management console? I tried, but I think it is set when I start my app. Can I change it afterwards? My queue is on node RabbitMQ1 and my connection is on node RabbitMQ2, so I cannot read messages from that queue. Maybe I can change my connection node?
The node name is not just a label; it's where the queue is physically located. By default, queues are not distributed/mirrored but are created on the node the application connected to, as you correctly guessed.
However, you can make your queue mirrored using policies, so you can consume messages from both servers.
https://www.rabbitmq.com/ha.html
You can change the policy for the queues by using the rabbitmqctl command or from the management console, admin -> policies.
You need to synchronize the queue in order to clone the old messages to the mirror queue with:
rabbitmqctl sync_queue <queue_name>
Newly published messages will end up in both copies of the queue and can be consumed from either node (the same message won't be consumed from both).
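Besides rabbitmqctl and the admin console, the policy can also be set over the management HTTP API (`PUT /api/policies/{vhost}/{name}`). A minimal sketch with the JDK's HTTP client, assuming the management plugin on port 15672, guest credentials, and the default vhost; the policy name `ha-all` and the queue pattern are illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MirrorPolicy {

    // Builds the JSON body for an "ha-mode: all" policy; the pattern is a
    // regular expression matched against queue names.
    static String haPolicyBody(String pattern) {
        return "{\"pattern\":\"" + pattern + "\",\"definition\":{\"ha-mode\":\"all\"}}";
    }

    public static void main(String[] args) throws Exception {
        String body = haPolicyBody("^q1$");
        System.out.println(body);

        // Only call the API when a base URL such as http://localhost:15672
        // is supplied; this PUTs a policy named "ha-all" on vhost "/" (%2F).
        if (args.length > 0) {
            String auth = Base64.getEncoder()
                    .encodeToString("guest:guest".getBytes(StandardCharsets.UTF_8));
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(args[0] + "/api/policies/%2F/ha-all"))
                    .header("Authorization", "Basic " + auth)
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString(body, StandardCharsets.UTF_8))
                    .build();
            HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
        }
    }
}
```

After the policy applies, run the `rabbitmqctl sync_queue` step shown above so the mirror also receives the messages that existed before mirroring was enabled.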
On Windows, when I use the rabbitmq-server start/stop commands, the data in my durable RabbitMQ queues is deleted. It seems the queues are re-created when I start the RabbitMQ server.
If I use rabbitmqctl stop_app/start_app, I don't lose any data. Why?
What will happen if my server goes down, and how can I be sure that I won't lose data if it does?
Configuration issue: I was starting RabbitMQ from its sbin directory. I re-installed RabbitMQ and registered it as a Windows service, which solved the data loss problem on my machine. When I start/stop the Windows service, RabbitMQ no longer loses any data.
Making queues durable is not enough. You'll probably also need to declare the exchange as durable and send 'persistent' messages.
In Java you'll use:
channel.queueDeclare("sample_queue", true, false, false, null); // durable = true
channel.basicPublish("", "sample_queue",
        MessageProperties.PERSISTENT_TEXT_PLAIN, // note that this parameter is not null!
        message.getBytes());
Is there any option or command available to format (erase all data on) an ActiveMQ server, the way you can format an HDFS node?
I have deleted all the queues in ActiveMQ, but usage still shows 47%. How can I erase all the data?
Simple answer: remove the KahaDB folder pointed to by your persistence configuration and restart ActiveMQ. It will be recreated on startup if not present.
Longer answer: as long as a transaction log file still holds a live message, ActiveMQ keeps it locked. Log files are cleaned up at intervals once they contain no unconsumed messages; that includes unconsumed messages in durable subscriptions, among other things.
You can also set the broker attribute deleteAllMessagesOnStartup in configuration and restart. That can be useful in some situations.
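For reference, that attribute goes on the <broker> element in activemq.xml. A minimal sketch, where the brokerName and dataDirectory values are illustrative and the rest of your existing configuration stays in place:

```xml
<!-- activemq.xml: wipe the persistence store on every broker start.
     Use with care: all persistent messages are discarded. -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost"
        dataDirectory="${activemq.data}"
        deleteAllMessagesOnStartup="true">
    <!-- ... existing persistenceAdapter and transport configuration ... -->
</broker>
```

Remember to remove the attribute (or set it to false) after the cleanup, or every restart will erase the store again.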