Get a dump of Fuse ActiveMQ messages - activemq

I am running a production application with Fuse ESB and using the Fuse-provided ActiveMQ queues. There are 100k messages in one of my queues and I need to get a dump of those messages without removing them from the queue. What is the method to get a dump of those messages?
I used the activemq:browse Karaf command and directed the output to a file, but it did not give me all the messages. Only 4000 messages were written to the file.

ActiveMQ cannot browse extremely deep queues, so you likely won't be able to view them all. The browse operation is limited by what can fit into broker memory and by the maxBrowsePageSize destination policy setting.
ActiveMQ offers no tooling to dump the contents of the message store. A broker is not a database and should not be treated as one; messages are meant for consumers to consume.
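If a partial dump is enough for inspection, the same browse operation is also exposed as a JMX operation on the queue MBean, reachable over Jolokia. Below is a minimal Python sketch, assuming Jolokia is enabled (standalone ActiveMQ exposes it on the web console port, 8161 by default; a Fuse/Karaf container may expose it differently), with host, credentials, broker name, and queue name as placeholders. Note that this is subject to the same maxBrowsePageSize and memory limits, so it will not page through a 100k-deep queue either.

import json
import requests  # third-party: pip install requests

JOLOKIA_URL = "http://localhost:8161/api/jolokia"  # placeholder host/port
MBEAN = ("org.apache.activemq:type=Broker,brokerName=localhost,"
         "destinationType=Queue,destinationName=MY.QUEUE")  # placeholder names

# Ask the broker to browse the queue; Jolokia returns the messages as JSON.
payload = {"type": "exec", "mbean": MBEAN, "operation": "browse()"}
resp = requests.post(JOLOKIA_URL, json=payload, auth=("admin", "admin"))  # placeholder credentials
resp.raise_for_status()

messages = resp.json().get("value") or []
with open("queue-dump.json", "w") as out:
    json.dump(messages, out, indent=2)
print("dumped %d messages (capped by maxBrowsePageSize)" % len(messages))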

Related

Why does the ActiveMQ console still show messages after deleting db-*.log files from KahaDB?

I am using KahaDB as persistent storage to save messages in ActiveMQ 5.16.4.
<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"
            checkForCorruptJournalFiles="true"
            checksumJournalFiles="true"
            ignoreMissingJournalfiles="true"/>
</persistenceAdapter>
I'm sending persistent messages, and while the broker is running I'm deleting the KahaDB log files (db-1.log in the picture below), which are supposed to hold the queue's messages. However, deleting the log file doesn't seem to do anything. In the ActiveMQ console I still see the persistent messages, and I can also send more messages, which get picked up by connected consumers from Spring Boot apps. I thought deleting those log files would get rid of the messages pending in the queue, or break ActiveMQ. Any idea why that isn't happening?
[Screenshot: contents of the KahaDB folder, including db-1.log]
ActiveMQ doesn't treat KahaDB like a SQL database where messages are stored and retrieved at runtime. Generally speaking, ActiveMQ keeps its messages in memory and uses KahaDB as a journal, which it replays to reload messages if the broker fails or is restarted administratively. Deleting KahaDB's underlying data won't affect what is in the broker's memory, and it's not clear why you would ever want to do this in the first place.
If you want to remove messages from a queue at runtime, you can do so administratively via the web console. Deleting the KahaDB log files is not the recommended way to do this.
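For completeness, the same administrative removal can be scripted against the queue MBean's purge() operation over Jolokia rather than clicked through the web console. A minimal sketch, assuming Jolokia is enabled on the web console port (8161 by default); broker name, queue name, and credentials are placeholders:

import requests  # third-party: pip install requests

payload = {
    "type": "exec",
    "mbean": ("org.apache.activemq:type=Broker,brokerName=localhost,"
              "destinationType=Queue,destinationName=MY.QUEUE"),  # placeholders
    "operation": "purge()",  # drops every message in the queue at runtime
}
resp = requests.post("http://localhost:8161/api/jolokia", json=payload,
                     auth=("admin", "admin"))  # placeholder credentials
resp.raise_for_status()
print(resp.json().get("status"))  # 200 means the purge ran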

Rabbitmq high availability queues without message replication

I have a RabbitMQ broker running on two nodes as a cluster. I have observed that if the node where a queue was created goes down, the queue is no longer available on the other node. If I try to publish a message from the other node, it fails. Even if I remove the failed node from the cluster (using the forget_cluster_node command) and try to publish from the other node, the behavior is the same.
I don't want to enable mirroring of the queue, for the simple reason that it would replicate the messages, which would put additional load on the inter-node network.
Is there a way available in RabbitMQ to achieve this?
The behaviour you are experiencing is the default behaviour of RabbitMQ, and it's exactly what is supposed to happen. The node where you created the queue owns that queue; if this node goes down, then any connections to it, and the queues and exchanges associated with it, will not work at all. There are two options to resolve this issue.
One option is to have one separate queue for every node; any node that wants to receive messages from a particular node can subscribe to that particular queue's exchange. This does not seem like a very good idea, since you need to manage a lot of things for it.
The second option is to always declare the queue before you publish. If your queue is not available, a new queue takes its place, all the nodes subscribed to it are able to listen, and any producer node is able to post to it. This resolves the problem of a node going down or being unavailable; see the sketch after the quote below. From the docs:
before sending we need to make sure the recipient queue exists. If we send a message to non-existing location, RabbitMQ will just drop the message. Let's create a hello queue to which the message will be delivered:
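A minimal pika sketch of that declare-before-publish pattern (host and queue name are placeholders; queue_declare is idempotent, so it is safe to call on every publish):

import pika  # third-party: pip install pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = conn.channel()

# Idempotent: creates the queue if it is missing, no-op if it already exists.
channel.queue_declare(queue="hello")
channel.basic_publish(exchange="",  # default exchange routes by queue name
                      routing_key="hello",
                      body=b"Hello World!")
conn.close()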
RabbitMQ lets you import and export definitions. Definitions are JSON files which contain all broker objects (queues, exchanges, bindings, users, virtual hosts, permissions and parameters). They do not include the messages in the queues.
You can periodically export the definitions from the node that owns the queue and import them into the other node of the cluster. You have to enable the management plugin for this task.
More information here: https://www.rabbitmq.com/management.html#configuration
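As a rough sketch, the periodic export/import can be scripted against the management HTTP API's /api/definitions endpoint (port 15672 by default); hostnames and credentials below are placeholders:

import requests  # third-party: pip install requests

AUTH = ("guest", "guest")  # placeholder credentials

# Export all definitions (queues, exchanges, bindings, users, ...) from one node...
defs = requests.get("http://node-a:15672/api/definitions", auth=AUTH)
defs.raise_for_status()

# ...and import them into the other node. Messages are NOT copied.
resp = requests.post("http://node-b:15672/api/definitions", json=defs.json(), auth=AUTH)
resp.raise_for_status()
print("definitions imported")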

RabbitMQ Manual Retry

How can manual retry work in RabbitMQ after a message has been put onto a dead-letter queue?
Does RabbitMQ provide a user interface through which you can do this? I assume here that the RabbitMQ console does not provide this capability.
The RabbitMQ management interface lets you do this crudely: go into the dead-letter queue, 'get' the message, and copy its content; then go to the queue you want to retry the message on and 'publish' it directly to that queue.
Alternatively, you can enable the Shovel plugin, which allows you to move messages from one queue to another. The RabbitMQ Management plugin's interface contains instructions on how to do this.
You can write a consumer/producer using any of a number of client libraries. For Python, a popular library is pika (https://pypi.python.org/pypi/pika).
The script can consume all the messages in a queue and then publish them to another queue, as in the sketch below.
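A rough pika sketch of such a script, draining a dead-letter queue and republishing each message to the original queue (queue names and host are placeholders):

import pika  # third-party: pip install pika

DLQ = "orders.dlq"      # placeholder dead-letter queue name
RETRY_QUEUE = "orders"  # placeholder original queue name

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = conn.channel()

while True:
    method, properties, body = channel.basic_get(queue=DLQ, auto_ack=False)
    if method is None:
        break  # dead-letter queue is empty
    channel.basic_publish(exchange="", routing_key=RETRY_QUEUE,
                          body=body, properties=properties)
    channel.basic_ack(method.delivery_tag)  # drop from the DLQ only after republishing

conn.close()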

RabbitMQ federation vs ActiveMQ Master/Slave

I am trying to set up a cluster of brokers that has the same features as a RabbitMQ cluster but works over a WAN (my machines are in different locations), so RabbitMQ clustering does not apply.
I am looking at alternatives. RabbitMQ federation just backs up messages in the downstream; it cannot ensure both sides have exactly the same messages available at any time (the downstream still keeps old messages already consumed in the upstream).
How about ActiveMQ Master/Slave? I have found:
http://activemq.apache.org/how-do-distributed-queues-work.html
"queues and topics are all replicated between each broker in the cluster (so often to a master and maybe a single slave). So each broker in the cluster has exactly the same messages available at any time so if a master fails, clients failover to a slave and you don't loose a message."
My concern is whether it automatically updates to make sure Master and Slave always have the same messages, which would mean that messages consumed on the Master also disappear from the Slaves.
Thanks :)
ActiveMQ has various clustering features.
First there is High Availability, i.e. "Master/Slave". The idea is that several physical servers act as a single logical ActiveMQ broker. If one goes down, another takes its place without losing data. You can do that by sharing the message store (shared file system or shared JDBC), or you can set up a replicated cluster, which replicates reads/writes on the master down to all slaves (you need three or more servers). ActiveMQ uses LevelDB and Apache ZooKeeper to achieve this.
The other form of cluster available in ActiveMQ lets you distribute load and separate security concerns over several logical brokers, connected in a network of brokers. By default, messages are passed around to the broker that has available consumers for them. However, ActiveMQ has a rich toolbox of features for tweaking a network of brokers to do things such as always sending a copy of a message to a specific broker. It takes some messing with the more advanced features, though (static network connectors and queue mirroring, maybe more).
Maybe there is a better way to solve your requirements, which are not really specified in the question?

Is it possible to configure multiple queues to one shovel?

I've got a webservice that accepts messages that can be sent to a RabbitMQ cluster using whatever queue the caller defines. This is so front-end devs can send messages via JavaScript.
I want to make the webservice more robust so that when we have network trouble, the webservice can still accept messages and then handle them when the network is back up. After some initial reading, it seems that the Shovel plugin should handle this nicely.
What I was thinking was to install a local instance of RabbitMQ on the webservice box with shovel turned on. I can then send all messages through the local RabbitMQ instance and have it push all messages to the cluster and deal with the network problems.
My problem is that, after looking at the documentation, it seems I have to configure every queue I want to forward to in the shovel config file. If that's the case, I'm not sure this will work, since we allow clients to define queues through the webservice on the fly.
I would like the webservice to take the messages, hand them off to the local RabbitMQ instance, and have it pass the messages on to the cluster using the same queues/exchanges/etc.
Has anyone tried this, or can anyone explain how the shovel plugin works?
Have you considered sending messages to an exchange instead of a queue? Send all messages to one exchange, possibly a topic exchange if you need that kind of flexibility, and have the consumers handle the different messages or different queues off the exchange. Sending to one exchange would make configuring the shovel considerably easier.
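To illustrate, a small pika sketch of that approach (exchange name and host are placeholders): the webservice publishes everything to one topic exchange and uses the client-supplied queue name as the routing key, so a single shovel source covers all client-defined destinations:

import pika  # third-party: pip install pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = conn.channel()
channel.exchange_declare(exchange="webservice.out", exchange_type="topic",
                         durable=True)  # placeholder exchange name

def publish(queue_name, body):
    # The client-supplied queue name becomes the routing key; downstream,
    # queues bind to the exchange with matching patterns.
    channel.basic_publish(exchange="webservice.out",
                          routing_key=queue_name, body=body)

publish("orders.created", b'{"id": 42}')
conn.close()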