How can messages in ActiveMQ queues/topics be persisted while using KahaDB?

More specifically, right now when ActiveMQ is restarted, the enqueue/dequeue message counts for queues and topics reset to 0. I would like ActiveMQ to persist messages using KahaDB so that even after an ActiveMQ restart the counter doesn't reset to 0 but shows the cumulative count. Any pointers would really help.
I am using ActiveMQ version 5.4.3 with all default settings.
Thank you in advance.

If you have KahaDB configured and are sending messages with the persistent delivery mode enabled (the default), then they will be stored and reloaded for all queues, and for topics that have durable subscribers. Note, however, that the enqueue/dequeue counters are runtime statistics rather than messages; they are held in broker memory and reset to 0 on every restart even when the messages themselves are persisted.

Related

Why does the ActiveMQ console still show messages after deleting db-*.log files from KahaDB?

I am using KahaDB as a persistent storage to save messages in ActiveMQ 5.16.4.
<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"
            checkForCorruptJournalFiles="true"
            checksumJournalFiles="true"
            ignoreMissingJournalfiles="true"/>
</persistenceAdapter>
I'm sending persistent messages, and then, while the broker is running, I'm deleting the KahaDB log files (db-1.log) which are supposed to hold the queue's messages. However, deleting the log file doesn't seem to do anything. In the ActiveMQ console I still see the persistent messages, and I can also send more messages, which get picked up by connected consumers from Spring Boot apps. I thought deleting those log files would get rid of the messages pending in the queue, or break ActiveMQ. Any idea why that isn't happening?
ActiveMQ doesn't treat KahaDB like a SQL database where messages are stored and retrieved during runtime. Generally speaking, ActiveMQ keeps all of its messages in memory and uses KahaDB as a journal, which it replays to restore messages into memory if the broker fails or is restarted administratively. Deleting KahaDB's underlying data won't impact what is in the broker's memory, and it's not clear why you would ever want to do this in the first place.
If you want to remove the messages from a queue during runtime you can do so administratively via the web console. Deleting the KahaDB log files is not the recommended way to do this.
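To make the journal-vs-memory distinction concrete, here is a minimal stdlib-only sketch of the idea: messages live in memory, every send is also appended to a log file, and the log is only read back on startup. The class and file names are illustrative; this is not KahaDB's actual format or ActiveMQ code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Toy write-ahead journal: the queue's working copy is in memory; the
// journal file exists only so messages can be reloaded after a restart.
public class ToyJournalQueue {
    private final Path journal;
    private final Deque<String> inMemory = new ArrayDeque<>();

    public ToyJournalQueue(Path journal) throws IOException {
        this.journal = journal;
        if (Files.exists(journal)) {
            // Recovery: replay every journaled message into memory.
            inMemory.addAll(Files.readAllLines(journal));
        }
    }

    public void send(String msg) throws IOException {
        // Append to the journal, then to the in-memory queue.
        Files.write(journal, List.of(msg),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        inMemory.addLast(msg);
    }

    public String receive() {
        return inMemory.pollFirst();
    }

    public int depth() {
        return inMemory.size();
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("toy-journal", ".log");
        ToyJournalQueue q = new ToyJournalQueue(log);
        q.send("hello");
        q.send("world");
        // Deleting the journal mid-run does not touch the in-memory copy,
        // which is why the console still shows the messages.
        Files.deleteIfExists(log);
        System.out.println(q.depth());   // prints 2
        System.out.println(q.receive()); // prints hello
    }
}
```

A restart after the deletion is a different story: a new instance would find no journal to replay, and the messages would be gone.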

How to persist retained message between activemq broker restart?

I have a status topic, published using QoS 1 & retained message = true over ActiveMQ (docker rmohr/activemq:5.15.9). I want my dashboard to be able to late subscribe to the topics and always receive the last published message.
The retained functionality seems to work well but the message seems to be wiped out upon ActiveMQ broker restart.
If I stop publishing to the topic, restart the broker, and try to late subscribe, I do not receive the last message (the one that was retained before the broker restart).
I use the default container configuration (KahaDB, with a filesystem directory mounted for data/ and conf/). I thought that the retained message would be in KahaDB, but it is empty. The ActiveMQ UI also shows an empty queue for the topic after the broker restart.
Is this expected behavior? Can I achieve retained message persistence through a broker restart with ActiveMQ? How should I proceed?
The retained message should not be lost under any circumstances, unless the client publishes an empty retained message.
You can switch to EMQX to avoid this problem. It can store the data on disk or in your favorite database.
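For reference, the retained-message contract being discussed can be modeled in a few lines: the broker remembers the last retained payload per topic and replays it to any late subscriber, and an empty retained payload clears it. This stdlib-only sketch (class and method names are illustrative) only models the semantics; a real broker would also have to persist this map across restarts, which is exactly what the question is about.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Toy model of MQTT "retained message" semantics.
public class RetainedMessageStore {
    private final Map<String, String> retained = new HashMap<>();

    public void publish(String topic, String payload, boolean retain) {
        if (retain) {
            if (payload == null || payload.isEmpty()) {
                retained.remove(topic); // empty retained payload clears it
            } else {
                retained.put(topic, payload);
            }
        }
        // (delivery to currently connected subscribers omitted)
    }

    // A late subscriber immediately receives the retained message, if any.
    public void subscribe(String topic, Consumer<String> onMessage) {
        String last = retained.get(topic);
        if (last != null) {
            onMessage.accept(last);
        }
    }

    public static void main(String[] args) {
        RetainedMessageStore broker = new RetainedMessageStore();
        broker.publish("device/1/status", "online", true);
        // A dashboard subscribing later still gets the last status.
        broker.subscribe("device/1/status", System.out::println); // prints online
    }
}
```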

ActiveMQ JMS Topic - delete old messages

Is there a way to monitor messages in an ActiveMQ JMS topic and, most importantly, delete older messages, e.g. messages older than one month?
I am using Apache Camel to build ActiveMQ Connection and JMS topics.
JMS messages carry an expiration header, set from the producer's time-to-live; once it is surpassed, the broker removes the message from the destination.
It is possible to achieve the same effect at the broker level.
Further information can be found here http://activemq.apache.org/manage-durable-subscribers.html
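The rule behind the time-to-live header can be sketched with plain Java (this is a stdlib-only illustration of the JMS expiration semantics, not the JMS API itself): the broker stamps the message with sendTime + ttl, a stamp of 0 means "never expires", and the message becomes discardable once the clock passes the stamp.

```java
// Sketch of the JMS expiration rule. Class and method names are
// illustrative; real code would use MessageProducer#setTimeToLive.
public class ExpirationRule {
    // Expiration stamp for a message sent at sendTimeMillis.
    // A TTL of 0 means the message never expires (stamp stays 0).
    static long expiration(long sendTimeMillis, long ttlMillis) {
        return ttlMillis == 0 ? 0 : sendTimeMillis + ttlMillis;
    }

    // A message is discardable once "now" passes a non-zero stamp.
    static boolean isExpired(long expiration, long nowMillis) {
        return expiration != 0 && nowMillis > expiration;
    }

    public static void main(String[] args) {
        long thirtyDays = 30L * 24 * 60 * 60 * 1000;
        long exp = expiration(0, thirtyDays);
        System.out.println(isExpired(exp, thirtyDays + 1));                 // true
        System.out.println(isExpired(expiration(0, 0), Long.MAX_VALUE));    // false
    }
}
```

So for the "delete messages older than a month" requirement, publishing with a 30-day time-to-live lets the broker do the cleanup instead of an external monitor.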

Activemq STOMP: detecting and clearing dead nondurable subscribers

I have the following situation that is affecting our ActiveMQ 5.8 broker.
Several Perl scripts on a Windows workstation connected to ActiveMQ using STOMP and subscribed (nondurable) to various topics. The power failed on the Workstation.
Using the Web console, I can see that ActiveMQ still thinks these subscribers are connected, based on the number of consumers shown and on the high temp message store being used. I had set for no producer flow control and set memory limits, so what I believe I am seeing is that ActiveMQ is spooling all messages to disk because it thinks the long dead subscribers are still connected and might eventually read them. It's been 30 days, and ActiveMQ still doesn't realize that these subscribers are no longer connected.
Is there a way to configure ActiveMQ so that "undead" subscriber connections like these are eventually cleared automatically?
While the previous answer is basically correct, ActiveMQ does provide solutions for STOMP transports on the broker to heart-beat connections, even if the client connects with STOMP v1.0. I blogged about this some time ago when ActiveMQ v5.6 was released; see the section on STOMP 1.0 default heartbeat configuration. Another option is to enable TCP keepAlive on the transport and tune your OS to use a shorter default check interval; the OS default is usually around two hours.
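For context, the heart-beat handshake mentioned above works per direction: in STOMP 1.1+ each side sends a `heart-beat:<can-send>,<want-to-receive>` header in milliseconds, a 0 on either side disables heart-beats in that direction, and otherwise the interval is the larger of the two values. A minimal sketch of that negotiation rule (illustrative names, not ActiveMQ code):

```java
// STOMP 1.1+ heart-beat negotiation for one direction of traffic.
public class HeartBeat {
    // canSend: the sender's "can send" value (its x);
    // wantsToReceive: the receiver's "want to receive" value (its y).
    // Returns the negotiated interval in ms, 0 meaning no heart-beats.
    static long negotiated(long canSend, long wantsToReceive) {
        if (canSend == 0 || wantsToReceive == 0) {
            return 0;
        }
        return Math.max(canSend, wantsToReceive);
    }

    public static void main(String[] args) {
        // Client offers to send every 10s; broker wants one every 30s.
        System.out.println(negotiated(10_000, 30_000)); // prints 30000
        // Client sends heart-beat:0,0 (or speaks STOMP 1.0): none.
        System.out.println(negotiated(0, 30_000));      // prints 0
    }
}
```

This is why a STOMP 1.0 client gets no heart-beating unless the broker is configured with a default, as the blog post describes.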
Though STOMP 1.1+ supports heart-beating, ActiveMQ currently doesn't support inactive consumer detection for STOMP (usually achieved with wireFormat.maxInactivityDuration).
Be careful:
These values are currently not supported but are planned for a later release.
ActiveMQ does support it for OpenWire, though; i.e., after the configured duration the consumer would be considered DEAD!

message deleted from queue

I have used a BlockingQueue implementation to have my services process events from a queue. However, if the server goes down, all events in that queue are deleted, and so I miss events that should have been processed. (I am looking for some internal DB where the server can store the events/messages from the queue, so that if the server goes down and comes back up, it can load all events/messages and process them again without manual intervention.)
Any help on this? I am not sure if I should use Apache ActiveMQ. I am using Apache ServiceMix.
Thanks in advance.
I cannot answer how to do this with BlockingQueue.
But ActiveMQ has two features that you will benefit from:
persistent queues, and possibly you might also want to look at durable subscriptions.
It has a built-in database that does just this under the hood and allows messages to be persisted in the queue even if the broker or consumer has to restart.