ActiveMQ: How to merge the schedulerdb of two ActiveMQ brokers

We have two independent ActiveMQ brokers running (AMQ 5.11 and 5.14). The 5.14 broker must replace the 5.11 broker.
However, the AMQ 5.11 broker still has messages in its schedulerDB. How can we migrate the scheduled messages from the 5.11 broker into the scheduler of 5.14? The 5.14 broker has already collected scheduled messages of its own, so we cannot simply replace the files.
Can we merge the schedulerdb?

What if you keep the old broker alive and configure a static bridge to the new broker? That way, all messages that appear on any queue would flow over to the new instance. When all scheduled deliveries are done, you should be able to shut down the old broker. This requires you to keep both brokers alive and to disable the transport connector of the old broker so it won't accept clients.
How to setup a Static bridge:
http://activemq.apache.org/networks-of-brokers.html
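As a minimal sketch (hostname, port and connector name are assumptions; check the attributes against the page above), the old 5.11 broker's activemq.xml could get something like:
<!-- old (5.11) broker: push everything that appears on any queue to the new broker -->
<networkConnectors>
  <networkConnector name="bridge-to-5.14"
                    uri="static:(tcp://new-broker-host:61616)"
                    staticBridge="true">
    <staticallyIncludedDestinations>
      <queue physicalName=">"/>
    </staticallyIncludedDestinations>
  </networkConnector>
</networkConnectors>
<!-- also remove or comment out the old broker's transportConnector so clients can no longer connect to it -->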

Related

RabbitMQ auto-delete queues with timeouts

I have a k8s service that uses RabbitMQ as its message broker.
I want to be able to delete a specific queue if the service deployment (which may have multiple pods) is stopped.
Reading the documentation (RabbitMQ Queues Docs), I found that the best fit for my case is the auto-delete property of the queue.
Is there any option so that the auto-delete queue is not deleted immediately after the clients disconnect, but instead waits a few seconds for a reconnection?
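For reference, one commonly suggested alternative is to drop auto-delete and use the x-expires queue argument (queue TTL), so the queue survives a short disconnect and is removed only after being unused for a while. A minimal sketch with the RabbitMQ Java client; the queue name and timeout are illustrative and it assumes an open Channel named channel:
Map<String, Object> args = new HashMap<>();
args.put("x-expires", 30000); // queue is deleted only after 30 s with no consumers/usage
// durable=false, exclusive=false, autoDelete=false -- the x-expires TTL handles cleanup instead
channel.queueDeclare("my-service-queue", false, false, false, args);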

Shovel plugin not transferring existing messages to destination queue

I'm trying to copy all the messages in queue (Q1) to another queue (Q2) running on a different machine.
I'm using the shovel plugin and both nodes are running AMQP 0-9-1. I've tested the connection, and if I set the destination queue to a non-existing one, it does indeed create a new queue on the separate machine, so I know the connection works.
rabbitmqctl set_parameter shovel test '{"src-uri": "amqp://guest:guest@localhost:5672", "src-queue": "q1", "ack-mode": "on-confirm", "dest-uri": "amqp://guest:guest@host:5672", "dest-queue": "q2"}'
I expected the plugin to transfer all existing messages to Q2, however they're not being transferred. Does the shovel plugin not do this?
It was because the messages were not in the Ready state. I had to kill my Celery worker, and then the messages transferred successfully.
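For anyone hitting the same thing: the shovel only picks up messages that are in the Ready state, so listing ready vs. unacknowledged counts makes the problem visible, e.g.:
rabbitmqctl list_queues name messages_ready messages_unacknowledged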

rabbitmq-server start losing data over durable queues

On Windows, when I use the rabbitmq-server start/stop commands, data in the RabbitMQ durable queues is deleted. It seems the queues are re-created when I start the RabbitMQ server.
If I use rabbitmqctl stop_app/start_app, I am not losing any data. Why?
What will happen if my server goes down, and how can I be sure that I won't lose data if it does?
Configuration issue: I was starting RabbitMQ from the RabbitMQ sbin directory. I re-installed RabbitMQ and added it as a Windows service. Now the data-loss problem is solved on my computer: when I start/stop the Windows service, RabbitMQ does not lose any data.
Making queues durable is not enough. You'll probably also need to declare the exchange as durable, as well as send 'persistent' messages.
In Java you'll use:
channel.basicPublish("", "sample_queue",
        MessageProperties.PERSISTENT_TEXT_PLAIN, // note that this parameter is not null!
        message.getBytes());
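For the durability part of the answer, here is a minimal, hedged sketch with the RabbitMQ Java client; the exchange/queue names and routing key are illustrative, and it assumes an open Channel named channel:
boolean durable = true;
channel.exchangeDeclare("sample_exchange", "direct", durable);        // durable exchange
channel.queueDeclare("sample_queue", durable, false, false, null);    // durable queue (exclusive=false, autoDelete=false)
channel.queueBind("sample_queue", "sample_exchange", "sample_queue"); // illustrative routing key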

ActiveMQ consumer connection differs from producer

The following is my ActiveMQ setup:
I have two AMQ brokers which are configured with failover.
I have 40 producers but only one consumer.
Now the problem:
From time to time, one of the producers loses the connection to the master broker. The failover kicks in and the producer gets a new connection to the slave, which then receives its messages. So far so good. But the consumer does not hit the same problem; it still consumes messages from the master. It does not know that the slave also holds some messages.
How can I now avoid losing the messages that were sent to the slave?
Thanks in advance.
I would recommend you configure a network of brokers. That way, your brokers will be connected as well, and it no longer matters which broker your producers and consumers connect to - the messages will get propagated across the network.
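As a rough sketch of that recommendation (hostnames are assumptions), each broker's activemq.xml would point a networkConnector at the other; duplex="true" lets one connector carry traffic in both directions:
<networkConnectors>
  <networkConnector uri="static:(tcp://other-broker-host:61616)" duplex="true"/>
</networkConnectors>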

ActiveMQ STOMP: detecting and clearing dead nondurable subscribers

I have the following situation that is affecting our ActiveMQ 5.8 broker.
Several Perl scripts on a Windows workstation connected to ActiveMQ using STOMP and subscribed (nondurable) to various topics. Then the power failed on the workstation.
Using the web console, I can see that ActiveMQ still thinks these subscribers are connected, based on the number of consumers shown and on the high temp-store usage. I had disabled producer flow control and set memory limits, so what I believe I am seeing is that ActiveMQ is spooling all messages to disk because it thinks the long-dead subscribers are still connected and might eventually read them. It's been 30 days, and ActiveMQ still doesn't realize that these subscribers are no longer connected.
Is there a way to configure ActiveMQ so that "undead" subscriber connections like these are eventually cleared automatically?
While the previous answer is basically correct, ActiveMQ does provide solutions for STOMP transports on the broker to heart-beat connections, even if the client connects with STOMP v1.0. I blogged about this some time ago when ActiveMQ v5.6 was released; see the section on STOMP 1.0 default heartbeat configuration. Another option is to set TCP keepAlive on for the transport and tune your OS to use a shorter default check interval; the default is usually around two hours.
Though STOMP 1.1+ supports heart-beating, ActiveMQ currently doesn't support inactive-consumer detection for STOMP (usually achieved with wireFormat.maxInactivityDuration).
Be careful:
These values are currently not supported but are planned for a later release.
ActiveMQ supports it for OpenWire though, i.e. after the configured duration the consumer would be considered DEAD!
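For the OpenWire case, the inactivity check is configured on the broker's transport connector; a hedged sketch (port and timeout are illustrative, and the option names are worth verifying against the ActiveMQ transport reference):
<transportConnectors>
  <!-- OpenWire: consumers silent for longer than 30 s are dropped by the inactivity monitor -->
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?wireFormat.maxInactivityDuration=30000"/>
  <!-- STOMP: enable TCP keep-alive on the socket; the OS-level keepalive interval (often ~2 hours) still needs tuning -->
  <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?transport.keepAlive=true"/>
</transportConnectors>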