Migrate data from KahaDB to mKahaDB - ActiveMQ

I am changing from KahaDB to mKahaDB, distributing some of my queues to separate destinations. But I would like to migrate the old queues' data to the newly created destinations. Does anyone know how I can do that?

The only way to migrate right now would be to create a new broker with the mKahaDB configuration you want and then network the old broker to the new one. Using the console on the new broker, create a subscription for each of the destinations you want to drain over; the demand will drain the messages down from the old broker.
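A rough, hypothetical sketch of that setup as embedded Java brokers (broker names, ports, queue names, and directories are placeholders, and the setters mirror the mKahaDB XML attributes, so double-check them against your ActiveMQ version):

```java
import java.io.File;
import java.util.Arrays;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.FilteredKahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.MultiKahaDBPersistenceAdapter;

public class MigrationBrokers {
    public static void main(String[] args) throws Exception {
        // NEW broker: mKahaDB with a dedicated store for an example "orders" queue
        // and a catch-all store for everything else.
        FilteredKahaDBPersistenceAdapter ordersFilter = new FilteredKahaDBPersistenceAdapter();
        ordersFilter.setQueue("orders");
        ordersFilter.setPersistenceAdapter(new KahaDBPersistenceAdapter());

        FilteredKahaDBPersistenceAdapter catchAll = new FilteredKahaDBPersistenceAdapter();
        catchAll.setQueue(">"); // wildcard store for all remaining queues
        catchAll.setPersistenceAdapter(new KahaDBPersistenceAdapter());

        MultiKahaDBPersistenceAdapter mKahaDB = new MultiKahaDBPersistenceAdapter();
        mKahaDB.setDirectory(new File("data/mkahadb"));
        mKahaDB.setFilteredPersistenceAdapters(Arrays.asList(ordersFilter, catchAll));

        BrokerService newBroker = new BrokerService();
        newBroker.setBrokerName("new-broker");
        newBroker.setPersistenceAdapter(mKahaDB);
        newBroker.addConnector("tcp://0.0.0.0:61617");
        newBroker.start();

        // OLD broker (still on plain KahaDB): add a network connector pointing at the
        // new broker. A subscription created on the new broker for each destination
        // creates demand, and the bridge drains the stored messages across.
        BrokerService oldBroker = new BrokerService();
        oldBroker.setBrokerName("old-broker");
        oldBroker.setDataDirectory("data/old-kahadb");
        oldBroker.addConnector("tcp://0.0.0.0:61616");
        oldBroker.addNetworkConnector("static:(tcp://localhost:61617)");
        oldBroker.start();
    }
}
```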

Related

How to get user-specific data in a queue from ActiveMQ

As an admin, I want to know, for a particular queue A, how many calls were initiated by which person, how many have been dequeued, and how many are still in the queue at any time.
I just want to develop a UI in my application to show those user-specific records from ActiveMQ.
There is no built-in functionality in the broker that does this sort of thing. You could develop your own broker plugin that tracks these things, but you'd need to build some sort of DB or other storage, as you would lose any in-memory stats when the broker is restarted. You should be cautious about pushing all system-level management requirements into the message broker, as that is not its purpose and will likely lead to other issues.
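Purely as an illustration of the plugin approach (none of this exists out of the box), a BrokerFilter could count sends per JMSXUserID; the class and field names below are made up, the broker must have populateJMSXUserID enabled for getUserID() to be set, and a real implementation would flush the counters to a database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.ConsumerBrokerExchange;
import org.apache.activemq.broker.ProducerBrokerExchange;
import org.apache.activemq.command.Message;
import org.apache.activemq.command.MessageAck;

// Hypothetical plugin: counts enqueued/acked messages in memory. A real version
// would key the dequeue side per user and persist the counters externally,
// because these maps are lost on broker restart.
public class PerUserStatsBroker extends BrokerFilter {

    private final Map<String, AtomicLong> enqueuedByUser = new ConcurrentHashMap<>();
    private final AtomicLong dequeued = new AtomicLong();

    public PerUserStatsBroker(Broker next) {
        super(next);
    }

    @Override
    public void send(ProducerBrokerExchange exchange, Message message) throws Exception {
        // getUserID() carries JMSXUserID when populateJMSXUserID is enabled.
        String user = message.getUserID() != null ? message.getUserID() : "unknown";
        enqueuedByUser.computeIfAbsent(user, k -> new AtomicLong()).incrementAndGet();
        super.send(exchange, message);
    }

    @Override
    public void acknowledge(ConsumerBrokerExchange exchange, MessageAck ack) throws Exception {
        dequeued.addAndGet(ack.getMessageCount());
        super.acknowledge(exchange, ack);
    }
}
```

Such a filter would be registered on the broker through a BrokerPlugin wrapper in the broker configuration.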

The strange behavior of the `delete-after` attribute of dynamic shovels

I was exploring the shovel plugin for moving messages from source queues to temporary queues as part of a bigger use case. I was creating a dynamic shovel for each queue to move its messages to a temporary queue, and deleting the dynamic shovel via the attribute "delete-after": "queue-length". I saw in the RabbitMQ Management console (Admin -> Shovel Status) that the dynamic shovel was deleted successfully, but the source/temporary queues' state was running.
The issue was that when new messages arrived at the source queues, they were automatically moved to the temporary queues even though there was no consumer on the source queues.
Note:
Source and temporary both queues are durable.
Messages are persistent (Delivery mode: 2)
This operation was performed in parallel, since there are hundreds of queues: I was creating a dynamic shovel for each queue and then deleting it.
When I remove the dynamic shovel using the DELETE HTTP API instead of the approach above, it works perfectly. However, I want to avoid making an extra HTTP call, since there are hundreds of source queues.
The delete-after attribute was deprecated and renamed to src-delete-after a long time ago. RabbitMQ 3.7.x still supports the delete-after attribute, but support was removed in 3.8.x (up to 3.8.3) and then brought back in 3.8.4:
https://github.com/rabbitmq/rabbitmq-shovel/issues/72
Thanks to Michael
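For completeness, a hedged sketch of creating such a drain shovel through the HTTP API with the current attribute name (host, vhost %2f, credentials, shovel and queue names are all placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CreateDrainShovel {
    public static void main(String[] args) throws Exception {
        // PUT /api/parameters/shovel/{vhost}/{name} creates a dynamic shovel.
        String url = "http://localhost:15672/api/parameters/shovel/%2f/drain-orders";
        // src-delete-after is the current attribute name; the legacy delete-after
        // alias works on 3.7.x and again from 3.8.4, but not on 3.8.0-3.8.3.
        String body = "{\"value\":{"
                + "\"src-uri\":\"amqp://localhost\",\"src-queue\":\"orders\","
                + "\"dest-uri\":\"amqp://localhost\",\"dest-queue\":\"orders.tmp\","
                + "\"src-delete-after\":\"queue-length\"}}";
        String auth = Base64.getEncoder()
                .encodeToString("guest:guest".getBytes());

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```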

Scaling of RabbitMQ

What scaling options can we use if RabbitMQ metrics reach a threshold? I have a VM on which RabbitMQ is running. If the queue length exceeds 90% of the total queue capacity, can we increase the instance count by one, with a separate queue, so that messages are processed on a priority basis?
In short, what scaling options do we have for RabbitMQ based on different parameters?
Take a look at the RabbitMQ Sharding Plugin.
From its README:
RabbitMQ Sharding Plugin
This plugin introduces the concept of sharded queues for RabbitMQ. Sharding is performed by exchanges, that is, messages will be partitioned across "shard" queues by one exchange that we should define as sharded. The machinery used behind the scenes implies defining an exchange that will partition, or shard messages across queues. The partitioning will be done automatically for you, i.e: once you define an exchange as sharded, then the supporting queues will be automatically created on every cluster node and messages will be sharded across them.
Auto-scaling
One interesting property of this plugin, is that if you add more nodes to your RabbitMQ cluster, then the plugin will automatically create more shards in the new node. Say you had a shard with 4 queues in node a and node b just joined the cluster. The plugin will automatically create 4 queues in node b and join them to the shard partition. Already delivered messages will not be rebalanced, but newly arriving messages will be partitioned to the new queues.
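A small, illustrative Java-client sketch of publishing through a sharded exchange (the exchange name, routing key, host, and the policy shown in the comment are assumptions, not part of the question):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ShardedPublish {
    public static void main(String[] args) throws Exception {
        // Assumes the rabbitmq_sharding plugin is enabled and a policy such as
        //   rabbitmqctl set_policy images-shard "^shard\." '{"shards-per-node": 2}'
        // has been applied, so the exchange below is treated as sharded.
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder host

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // x-modulus-hash is the partitioning exchange type provided by the plugin.
            channel.exchangeDeclare("shard.images", "x-modulus-hash", true);

            // The plugin hashes the routing key to pick one of the shard queues.
            channel.basicPublish("shard.images", "resize-123", null,
                    "payload".getBytes());

            // Consumers simply consume from the exchange name ("shard.images");
            // the plugin attaches them to a local shard queue.
        }
    }
}
```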

How do you replay KahaDB message archives?

In the ActiveMQ KahaDB documentation, it mentions that you can archive KahaDB data files so they can be replayed if needed later. Yet, through some searching and looking through their documentation and the draft copy of ActiveMQ in Action, I can't find any examples or clues about how to actually replay those files.
I'm hoping someone out there can point me in the direction on what needs to be done in order to actually perform a replay.
KahaDB only replays messages/events when a broker is started, to return the broker to the state it was in before being stopped (recovering persistent messages, etc.).
It does not retain historical messages to be replayed on demand. Once a message is dequeued successfully, it is removed from the KahaDB data files.
If you have a requirement to copy messages for auditing/reuse, then I suggest looking into something like mirrored queues or the Camel wire-tap pattern.
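As an example of the wire-tap suggestion, a minimal Camel route could copy every message to an audit queue that you keep for later replay; the broker URL and queue names are placeholders, and the imports assume the older activemq-camel component, so adjust them for your Camel version:

```java
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class WireTapAudit {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // Register the ActiveMQ component against the broker (placeholder URL).
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Copy each message to an audit queue before normal processing;
                // the audit queue becomes the "replayable" archive.
                from("activemq:queue:orders")
                    .wireTap("activemq:queue:orders.audit")
                    .to("activemq:queue:orders.processing");
            }
        });

        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the route running
    }
}
```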

NServiceBus: How to configure a subscriber when using DB subscription storage

I have a logical publication which is basically a bunch of MT servers that all access a DB subscription storage. These MTs are typically upgraded by taking half of them out of rotation, installing the new MT version, bringing them back online, and then repeating for the other half.
I am confused about how a subscriber would subscribe to such a publication. In all of the examples I have seen, a subscriber needs to have a publisher's InputQueue specified in configuration in order for the subscription request to be received. But which InputQueue would I specify in this situation? I don't want subscription to fail if some of my publisher MTs happen to be down. Should I just subscribe manually by adding a record to the DB subscription storage?
Publishers usually publish as a result of processing some command from a client, and as such you usually use a distributor to scale them out, as well as using the DB subscription storage. Subscribers are another kind of client, so you would configure them to point to the distributor as well.