I sent 1,000,000 object messages to the queue, and my KahaDB db.data file grew to 480 MB. Then my consumer started receiving messages. After a while, consumption finished and all messages in the queue had reached their target. But when I checked the db.data file, it was still 480 MB. That's why I want to delete the consumed messages.
How can I do that? Is there a property to delete them automatically?
Manually, you can purge queued messages in the web console at http://localhost:8161/
http://activemq.apache.org/how-do-i-purge-a-queue.html
Automatically, you can discard expired messages with <sharedDeadLetterStrategy processExpired="false" />
http://activemq.apache.org/message-redelivery-and-dlq-handling.html
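If clicking through the web console is not practical, the same purge operation is also exposed over JMX. Here is a minimal sketch using the ActiveMQ QueueViewMBean; the JMX URL, broker name, and queue name are assumptions you would adjust to your own setup (and the broker's JMX connector has to be enabled).

```java
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.broker.jmx.QueueViewMBean;

public class PurgeQueue {
    public static void main(String[] args) throws Exception {
        // Assumed values: JMX connector on port 1099, broker name "localhost", queue "TEST.QUEUE".
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=TEST.QUEUE");
            QueueViewMBean proxy = MBeanServerInvocationHandler
                    .newProxyInstance(connection, queue, QueueViewMBean.class, true);
            proxy.purge(); // removes every message currently held in the queue
        }
    }
}
```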
I am using ActiveMQ with web console (activemq-web-console-5.16.4) in TomEE. The ActiveMQ-web-console-5.16.4.war was added to the TomEE webapps folder. Afterwards, I could access the web console. Currently, I want to view/monitor the content of enqueued/processed messages in the web console "Messages Enqueued". How can I manage that in my case? Should I bind the KahaDB message store or other databases?
In my application I use Apache Camel and send messages from one route to another by ActiveMQ.
I would appreciate any help.
Screenshots:
You can use the web console itself to view the content of the message assuming it fits into the narrow constraints of what the console can decode into human readable format.
First, click the "Browse" link.
Second, click the link for the actual message.
Third, see the "Message Details."
To be clear, you can only inspect the content of messages which are in the queue. This is represented by the "Number of Pending Messages." The "Messages Enqueued" is the number of messages sent to the queue (but not necessarily in the queue currently) since the broker was started. The "Messages Dequeued" is the number of messages consumed from the queue. In your case you have 66 messages which have been enqueued and dequeued (i.e. consumed) and therefore 0 pending messages.
If you want to keep a copy of every message sent to your queue for auditing purposes you can use a mirrored queue. As noted previously, you can only inspect messages which are in the queue and a mirrored queue will hold a copy of every message sent to the source queue allowing you to inspect those messages at your convenience.
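As a rough sketch of how that could be switched on with an embedded broker (the connector URI is an assumption), mirrored queues are a single broker-level flag; in activemq.xml the equivalent is useMirroredQueues="true" on the <broker> element.

```java
import org.apache.activemq.broker.BrokerService;

public class MirroredQueueBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setUseMirroredQueues(true); // copy messages sent to queues to their mirror destinations
        broker.setPersistent(true);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```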
I have a publisher that sends messages to a consumer that moves a motor.
The motor has a work queue which I cannot access, and it works slower than the rate of the incoming messages, so I'm trying to control the traffic on the consumer.
To keep updated and relevant data coming to the motor without the queue filling up and creating a traffic jam, I set the RabbitMQ queue size limit to 5 and basicQos to 1.
The idea is that the RabbitMQ queue will drop the old messages when it is filled up, so the newest commands are at the front of the queue.
Also, by setting basicQos to 1 I ensure that the consumer doesn't grab all the messages from the queue and bombard the motor at once, which is exactly what I'm trying to avoid since I can't do anything once a command has been sent to the motor.
This way the consumer takes messages from the queue one by one, while new messages replace the old ones on the queue.
Practically this moves the bottleneck to the RabbitMQ queue instead of the motor's queue.
I also cannot check the motor's work queue, so all traffic control must be done on the consumer.
I added a messageId and tested, and found that many messages are still coming and going long after the publisher has been shut down.
I'm expecting around 5 messages after shutdown, since that's the size of the queue, but I'm getting hundreds.
I also added a few seconds of sleep inside the callback to make sure it isn't the robot's queue that's acting up, but I'm still getting many messages after shutdown, and I can see in the logs that the callback is being called every time, so it's definitely still getting messages from somewhere.
Please help.
Thanks.
Moving the acknowledgment to the end of the callback solved the problem.
I'm guessing that with basicQos set to 1 it did execute the callback for each message one after another, but in the background it kept grabbing messages from the queue.
So even when the publisher was shut down, the consumer still held messages it had already taken from the queue, and those were the ones I saw being executed.
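For illustration, here is a minimal sketch of that pattern with the RabbitMQ Java client; the host, queue name, and sendToMotor helper are hypothetical. The queue is capped at 5 messages, basicQos(1) allows only one unacknowledged delivery at a time, and the ack is sent only after the work in the callback is done.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

import java.util.HashMap;
import java.util.Map;

public class MotorConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");               // assumed broker host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        Map<String, Object> queueArgs = new HashMap<>();
        queueArgs.put("x-max-length", 5);           // keep only the 5 newest commands
        channel.queueDeclare("motor.commands", true, false, false, queueArgs);

        channel.basicQos(1);                        // at most one unacknowledged delivery at a time

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            sendToMotor(delivery.getBody());        // hypothetical: hand the command to the motor
            // Ack only after the work is done, so the broker withholds the next message until then.
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("motor.commands", false /* manual ack */, onMessage, tag -> { });
    }

    private static void sendToMotor(byte[] command) { /* placeholder for the real motor call */ }
}
```

With manual acknowledgements and a prefetch of 1, the broker does not hand the consumer a new message until the previous one has been acked, which is what keeps the client from buffering deliveries in the background.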
I have set the TTL of persistent messages in a queue to 5 seconds. The messages did expire and landed in the DLQ; however, I noticed that the expired messages only appear in the DLQ 10+ to 20+ seconds (at random) after they were sent, even though the TTL is set to 5 seconds. Is there a way to configure things so that expired messages are moved to the DLQ immediately after they expire?
Unless a consumer is pulling messages off the queue and the broker sees, prior to dispatch, that a message has expired, expiration is handled by a periodic task that scans for expired messages in memory (messages paged to disk are expired when they are paged back in).
You can configure the scan to run more often, but that will have an impact on broker performance. The option is documented among the ActiveMQ destination policy options, under the queue-only values.
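The relevant knob is expireMessagesPeriod on a destination policy entry; if I remember correctly the default scan period is 30 seconds, which would explain the 10+ to 20+ second delay you are seeing. Below is a hedged sketch setting it programmatically on an embedded broker (the connector URI is an assumption); in activemq.xml it is the same attribute on a <policyEntry>.

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class ExpiryScanBroker {
    public static void main(String[] args) throws Exception {
        PolicyEntry policy = new PolicyEntry();
        policy.setExpireMessagesPeriod(1000); // scan for expired messages every second

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(policy);    // apply the policy to all destinations

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(policyMap);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```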
I am creating a bulk video processing system using Spring Boot. The user provides all the video-related information through an XLSX sheet and we process the videos in the backend. I am using RabbitMQ for queuing up the requests.
Let's say a user has uploaded a sheet with 100 rows; then there will be 100 messages in the RabbitMQ queue. In the backend, we are auto-scaling the subscribers (servers). So we start with only one subscriber and, based on the load (the number of messages in the queue), we scale up to 15 subscribers.
But our producer is very fast and allocates all the messages to our first subscriber (before the other subscribers come up), so none of our new subscribers get any messages from the queue.
If all the subscribers are available before the producer starts pushing messages, then the messages are allocated across all servers.
Please suggest a solution so that our new subscribers can pull the messages from the queue that were produced earlier.
You are probably being affected by the listener container's prefetchCount property; it defaults to 250 in recent versions (since 2.0).
So the first consumer will get up to 250 messages when it starts.
It sounds like you should reduce it to a small number, even all the way down to 1 so only one message is outstanding at each consumer.
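For example, with Spring AMQP you could lower the prefetch on the listener container factory; with Spring Boot the single property spring.rabbitmq.listener.simple.prefetch=1 does the same thing. A hedged sketch (the class and bean names are illustrative):

```java
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    // Limit each consumer to one unacknowledged message so that newly scaled-up
    // subscribers can pull from the backlog instead of the first consumer buffering it all.
    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setPrefetchCount(1);
        return factory;
    }
}
```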
ActiveMQ: 5.10.2 inside ServiceMix's Karaf OSGi
KahaDB persistence.
Default broker settings.
Default settings for connections (tcp://x.x.x.x:61616).
16 queues predefined in activemq.xml.
Two client connections to ActiveMQ. One for producer sessions, one for consumer sessions.
Producers send messages to all queues.
16 consumer sessions consume messages.
All is going OK, but:
If I reduce the number of consumers to one (or two or three, I don't know where the threshold is), so that messages from one queue are being consumed while messages from the other queues are only being stored, then after some time passes I see this picture:
That one consumer stops receiving messages. It thinks there are no more messages.
From the ActiveMQ web console I can see that the message count on that consuming queue is > 0.
From the web console I cannot see any messages in the Message Browser for that consuming queue.
I can see messages from the other queues in the Message Browser.
If I start another consumer (or restart ActiveMQ) to consume messages from a different queue, I see:
I start to see messages in the first queue's Message Browser (those that were sent before but hadn't been visible since the "freeze").
The first queue continues to be consumed.
The second queue begins to be consumed.
The "freeze" can occur again after some time, and starting to consume another queue helps again.
If I start all consumers, I see no "message freeze".
If I just stop and start the consumer on the "frozen" queue, nothing happens. It has to be done on an "unfrozen" queue to "unfreeze" the "frozen" queue.
It also happens if there is no active producer, only a consumer.
What can it be?
Thank you.
Oops. I've found what it was.
It was simply that the available memory was exceeded.
I didn't set -Xms and -Xmx, so it ran with only 512 MB of max heap.
And when the size of the stored, unconsumed messages got close to that limit, I got this behavior.