ActiveMQ: view the content of the enqueued messages

I am using ActiveMQ with the web console (activemq-web-console-5.16.4) in TomEE. The activemq-web-console-5.16.4.war was added to the TomEE webapps folder, and afterwards I could access the web console. Currently, I want to view/monitor the content of the enqueued/processed messages shown in the web console as "Messages Enqueued". How can I achieve that in my case? Should I bind the KahaDB message store or another database?
In my application I use Apache Camel and send messages from one route to another via ActiveMQ.
I would appreciate any help.

You can use the web console itself to view the content of the message assuming it fits into the narrow constraints of what the console can decode into human readable format.
First, click the "Browse" link.
Second, click the link for the actual message.
Third, see the "Message Details."
To be clear, you can only inspect the content of messages which are in the queue. This is represented by the "Number of Pending Messages." The "Messages Enqueued" is the number of messages sent to the queue (but not necessarily in the queue currently) since the broker was started. The "Messages Dequeued" is the number of messages consumed from the queue. In your case you have 66 messages which have been enqueued and dequeued (i.e. consumed) and therefore 0 pending messages.
If you want to keep a copy of every message sent to your queue for auditing purposes you can use a mirrored queue. As noted previously, you can only inspect messages which are in the queue and a mirrored queue will hold a copy of every message sent to the source queue allowing you to inspect those messages at your convenience.
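If clicking through the console becomes tedious, the same "only what is currently pending" view is available programmatically through a JMS QueueBrowser. Below is a minimal sketch; the broker URL and the queue name are assumptions for the example, not values taken from your setup.

```java
import org.apache.activemq.ActiveMQConnectionFactory;

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import javax.jms.TextMessage;
import java.util.Enumeration;

public class BrowsePendingMessages {
    public static void main(String[] args) throws Exception {
        // Assumed broker URL and queue name; adjust to your environment.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("camel.route.output");

            // A QueueBrowser reads messages without consuming them, so it only
            // ever shows what is currently pending on the queue.
            QueueBrowser browser = session.createBrowser(queue);
            Enumeration<?> pending = browser.getEnumeration();
            while (pending.hasMoreElements()) {
                Message message = (Message) pending.nextElement();
                if (message instanceof TextMessage) {
                    System.out.println(((TextMessage) message).getText());
                } else {
                    System.out.println(message.getJMSMessageID() + " (non-text payload)");
                }
            }
        } finally {
            connection.close();
        }
    }
}
```

Because a browser never consumes anything, it will not show the 66 messages that have already been dequeued; for those you would need the mirrored-queue approach described above.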

Related

Message Delivery Guarantee for Multiple Consumers in Pub/Sub and Messaging Queues

Requirement
A system undergoes some state change, and multiple other parts of the system (let's call them observers) have to know about it so that they can perform some actions based on the current state. The actions of the observers are important: if some of the observers are not online (not listening currently due to some trouble, but will be back soon), the message should not be discarded until all the observers have received it.
Trying to accomplish this with the pub/sub model, here are my findings (please correct me if this understanding is wrong):
The publisher creates an event on a specific topic, and multiple subscribers can consume the same message. This model either provides no delivery guarantee (in Redis), or delivery is guaranteed only once (with messaging queues), i.e. when one of the consumers acknowledges a message, the message is discarded (RabbitMQ).
Example
A new Person Profile entity gets created in DB
Now,
A background verification service has to know this to trigger the verification process.
Subscriptions service has to know this to add default subscriptions to the user.
Both tasks are important, unrelated, and can run in parallel.
Now, in the queue model, if the subscription service is down for some reason and the background verification process acknowledges the message, the message will be removed from the queue; and if it is fire-and-forget like most pub/sub, delivery is not guaranteed for either of the services anyway.
One more point: both tasks are unrelated and need not be triggered one after the other.
In short, I need to make sure all the consumers get the same message and can acknowledge it individually; the message should be evicted only after all the consumers have acknowledged it. Neither of the above approaches does this.
Am I missing anything here? How should I approach this problem?
This scenario is explicitly supported by RabbitMQ's model, which separates "exchanges" from "queues":
A publisher always sends a message to an "exchange", which is just a stateless routing address; it doesn't need to know what queue(s) the message should end up in
A consumer always reads messages from a "queue", which contains its own copy of messages, regardless of where they originated
Multiple consumers can subscribe to the same queue, and each message will be delivered to exactly one consumer
Crucially, an exchange can route the same message to multiple queues, and each will receive a copy of the message
The key thing to understand here is that while we talk about consumers "subscribing" to a queue, the "subscription" part of a "pub-sub" setup is actually the routing from the exchange to the queue.
So a RabbitMQ pub-sub system might look like this:
A new Person Profile entity gets created in DB
This event is published as a message to an "events" topic exchange with a routing key of "entity.profile.created"
The exchange routes copies of the message to multiple queues:
A "verification_service" queue has been bound to this exchange to receive a copy of all messages matching "entity.profile.#"
A "subscription_setup_service" queue has been bound to this exchange to receive a copy of all messages matching "entity.profile.created"
The consuming scripts don't know anything about this routing; they just know that messages will appear in the queue for events that are relevant to them:
The verification service picks up the copy of the message on the "verification_service" queue, processes, and acknowledges it
The subscription setup service picks up the copy of the message on the "subscription_setup_service" queue, processes, and acknowledges it
If there are multiple consuming scripts looking at the same queue, they'll share the messages on that queue between them, but still completely independently of any other queue.
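For illustration, here is a minimal sketch with the RabbitMQ Java client that wires up the topology described above (the "events" exchange, the two queues, and their bindings). The connection settings and message body are assumptions for the example.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class ProfileEventsTopology {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // One stateless routing address...
            channel.exchangeDeclare("events", "topic", true);

            // ...routing copies of matching messages into two independent queues.
            channel.queueDeclare("verification_service", true, false, false, null);
            channel.queueBind("verification_service", "events", "entity.profile.#");

            channel.queueDeclare("subscription_setup_service", true, false, false, null);
            channel.queueBind("subscription_setup_service", "events", "entity.profile.created");

            // The publisher only knows the exchange and the routing key.
            String body = "{\"profileId\": 42}";
            channel.basicPublish("events", "entity.profile.created", null,
                    body.getBytes(StandardCharsets.UTF_8));

            // Each queue now holds its own copy; each service consumes and
            // acknowledges independently, for example:
            channel.basicConsume("verification_service", false, (tag, delivery) -> {
                System.out.println("verification got: "
                        + new String(delivery.getBody(), StandardCharsets.UTF_8));
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            }, tag -> { });

            Thread.sleep(1000); // let the demo consumer run before the connection closes
        }
    }
}
```

Even if the subscription setup service is down, its queue keeps its own copy of the message until that service comes back and acknowledges it.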
As you mentioned, it is not something that you can control with the Redis Pub/Sub data structure.
But you can do it easily with Redis Streams.
Streams will allow you to post messages using the XADD command and then control which consumers are dealing with the message and acknowledge that the message has been processed (a short sketch follows below the links).
You can look at these sample applications that provide (in Java) examples of:
posting and consuming messages
creating multiple consumer groups
managing exceptions
Links:
Getting Started with Redis Streams and Java
Redis Streams in Action (a project that shows how to use ADD/ACK/PENDING/CLAIM and build an error-proof streaming application with Redis Streams and Spring Data)
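To make the XADD / XREADGROUP / XACK flow concrete, here is a rough sketch using the Lettuce Java client (an assumption; the linked samples may use a different client). The stream, group, and consumer names are invented for the example.

```java
import io.lettuce.core.Consumer;
import io.lettuce.core.RedisClient;
import io.lettuce.core.StreamMessage;
import io.lettuce.core.XReadArgs.StreamOffset;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.List;
import java.util.Map;

public class ProfileEventsStream {
    public static void main(String[] args) {
        RedisCommands<String, String> redis =
                RedisClient.create("redis://localhost:6379").connect().sync();

        // Publisher side: XADD the "profile created" event.
        redis.xadd("profile-events", Map.of("type", "profile.created", "profileId", "42"));

        // One consumer group per observer, so both receive every message and
        // acknowledge it independently. Offset "0" means the group also sees
        // entries already in the stream; errors on re-runs (BUSYGROUP) are ignored.
        for (String group : List.of("verification", "subscriptions")) {
            try {
                redis.xgroupCreate(StreamOffset.from("profile-events", "0"), group);
            } catch (Exception alreadyExists) {
                // the group was created on a previous run
            }
        }

        // Consumer side (verification service): XREADGROUP, process, then XACK.
        List<StreamMessage<String, String>> messages = redis.xreadgroup(
                Consumer.from("verification", "worker-1"),
                StreamOffset.lastConsumed("profile-events"));
        for (StreamMessage<String, String> msg : messages) {
            System.out.println("verifying " + msg.getBody());
            redis.xack("profile-events", "verification", msg.getId());
        }
    }
}
```

Each group tracks its own delivery and acknowledgement state, which gives exactly the per-consumer acknowledgement the question asks for.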

Moving a single message in RabbitMQ

I have several messages on an error queue named TestQueue_errors.
One of the messages on the error queue is important and should be moved back to the service queue TestQueue so it can be processed again. The other messages on the error queue are broken and should stay there.
I have tried to do that with the shovel plugin, but it seems it can only move all messages from one queue to another. Is there a way to move a single message from one queue to another?
As far as I know, the RabbitMQ Management UI does not allow you to do that. The only thing you can do is publish the message again (see the sketch after this list).
Maybe there are some tools which make this possible, but it is not standard behaviour.
Here are the actions which you are able to perform on a queue (from the RabbitMQ Management page):
Move all messages from one queue to another
Get all messages without the requeue option (they will no longer be in the queue)
Get the first N messages without the requeue option and then move the rest of the messages to another queue
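For completeness, here is a rough sketch of the "publish it again" approach using the RabbitMQ Java client rather than the management UI. It assumes the important message can be recognised by its message ID; the ID value and host are placeholders.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class MoveSingleMessage {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder host

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            GetResponse response;
            while ((response = channel.basicGet("TestQueue_errors", false)) != null) {
                if ("the-important-id".equals(response.getProps().getMessageId())) {
                    // Republish to the service queue via the default exchange,
                    // then ack it off the error queue.
                    channel.basicPublish("", "TestQueue",
                            response.getProps(), response.getBody());
                    channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
                    break;
                }
                // Other messages are fetched but deliberately left unacknowledged.
            }
            // Closing the channel requeues every unacknowledged message, so the
            // broken messages end up back on TestQueue_errors.
        }
    }
}
```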

Move a message from one queue to another queue without deleting it in RabbitMQ

I have the following problem.
My program sends messages directly to the queue (without an exchange). I need to monitor incoming messages and send them to another queue without removing them from the source queue.
I don't have access to the program code, so I'm not able to publish messages to an exchange first.
Is it possible to solve this problem using the management web interface of RabbitMQ?
I tried to use the shovel plugin, but it removes all messages from the source queue after the ack.
First, to clear up a few things:
"My program sends messages directly to the Queue (without exchange)": this is not true; at the very least (and most likely in this case) the nameless default exchange is used.
"removes all messages from source queue after ack":
this is by design and therefore perfectly fine.
You should never keep messages in the queue; a queue is made to be consumed. As Derick Bailey says here:
RabbitMQ is not a database. RabbitMQ is a message broker and queueing system.
At the same link you will find your answer. I cannot give a concrete one since you didn't provide your motivation, but whatever it is, keeping messages in the queue is never good!
Maybe you want to log/store your message first and then process it, with the consequence of processing being some third action, or whatever...
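As one possible illustration of that advice, a consumer could forward a copy of each message somewhere durable (a second queue, a log, a database) and only then acknowledge it. A minimal sketch with the RabbitMQ Java client; the queue names are invented for the example.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class CopyThenAck {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Invented queue names: "source" is the queue the program publishes to,
        // "audit" is where a copy of everything seen is kept.
        channel.queueDeclare("source", true, false, false, null);
        channel.queueDeclare("audit", true, false, false, null);

        // Forward a copy to "audit" first, then ack, so nothing lingers
        // unconsumed on the source queue.
        channel.basicConsume("source", false, (tag, delivery) -> {
            channel.basicPublish("", "audit", delivery.getProperties(), delivery.getBody());
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, tag -> { });
    }
}
```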

What belongs into a DLQ / Invalid Message Queue?

Is there a good best practice about what kind of messages an application is allowed to reject?
My understanding is that all messages which can't be handled should be rejected to the dead letter queue - no matter if the problem is a syntax error or a semantic error in the message or if the application is temporarily not able to handle the message (for instance because the db just went down).
Of course - if the app already knows upfront that it will not be able to handle a message (DB down), it should stop accepting messages.
So what's the common understanding / best practice?
My response is with respect to WebSphere MQ:
A Dead Letter Queue (DLQ for short) is a place where messages that could not be delivered to their destination are put. Messages can be put on the DLQ by queue managers, message channel agents (MCAs), and applications. All messages on the DLQ must be prefixed with a dead-letter header structure, MQDLH. The MQDLH header is automatically prefixed when a queue manager or MCA puts a message, whereas applications must prefix the MQDLH explicitly.
As far as applications are concerned, if they are unable to handle a message, say, for example, because the message format is not understood, they can put the message on a BACKOUT queue instead of a DLQ. A BACKOUT queue is just like any normal queue where messages rejected by applications can be put. The advantage of a BACKOUT queue is that you can specify one on a per-queue basis and the messages put there need not have the MQDLH header prefixed.
An application can be written to read the messages from the BACKOUT queue and route them back to the target queue as they are. However, the messages in a DLQ require additional processing to remove the MQDLH before they are put onto a target queue.
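The "read from BACKOUT and route back" idea can be sketched in plain JMS, independently of the provider. The JNDI lookup name and the queue names below are assumptions for the example, not anything WebSphere MQ mandates.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class BackoutRequeuer {
    public static void main(String[] args) throws Exception {
        // Assumed JNDI name for the provider's administered connection factory.
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");

        Connection connection = factory.createConnection();
        try {
            connection.start();
            // A transacted session so each receive and resend commit together.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            Queue backout = session.createQueue("APP.BACKOUT");
            Queue target = session.createQueue("APP.TARGET");

            MessageConsumer consumer = session.createConsumer(backout);
            MessageProducer producer = session.createProducer(target);

            Message message;
            while ((message = consumer.receive(1000)) != null) {
                // No MQDLH to strip: backout-queue messages are plain messages.
                producer.send(message);
                session.commit();
            }
        } finally {
            connection.close();
        }
    }
}
```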

RabbitMQ use of immediate and mandatory bits

I am using RabbitMQ server.
For publishing messages, I set the immediate field to true and tried sending 50,000 messages. Using rabbitmqctl list_queues, I saw that the number of messages in the queue was zero.
Then, I changed the immediate flag to false and again tried sending 50,000 messages. Using rabbitmqctl list_queues, I saw that a total of 100,000 messages were in queues (till now, no consumer was present).
After that, I started a consumer and it consumed all the 100,000 messages.
Can anybody please help me understand the immediate bit field and this behavior? Also, I could not understand the concept of the mandatory bit field.
The immediate and mandatory fields are part of the AMQP specification, and are also covered in the RabbitMQ FAQ to clarify how its implementers interpreted their meaning:
Mandatory
This flag tells the server how to react if a message cannot be routed to a queue. Specifically, if mandatory is set and after running the bindings the message was placed on zero queues then the message is returned to the sender (with a basic.return). If mandatory had not been set under the same circumstances the server would silently drop the message.
Or in my words, "Put this message on at least one queue. If you can't, send it back to me."
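A hedged sketch of what that looks like with the RabbitMQ Java client: publish with mandatory set and register a return listener to receive the basic.return. The exchange name and routing key are invented for the demo.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class MandatoryPublishDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Called for every message the broker hands back with basic.return.
            channel.addReturnListener((replyCode, replyText, exchange, routingKey,
                                       properties, body) ->
                    System.out.println("Returned (" + replyText + "): "
                            + new String(body, StandardCharsets.UTF_8)));

            channel.exchangeDeclare("demo.topic", "topic", true);

            // mandatory = true: no queue is bound for this routing key, so the
            // broker returns the message instead of silently dropping it.
            channel.basicPublish("demo.topic", "no.such.binding", true, null,
                    "hello".getBytes(StandardCharsets.UTF_8));

            Thread.sleep(1000); // give the asynchronous basic.return a moment to arrive
        }
    }
}
```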
Immediate
For a message published with immediate set, if a matching queue has ready consumers then one of them will have the message routed to it. If the lucky consumer crashes before ack'ing receipt, the message will be requeued and/or delivered to other consumers on that queue (if there's no crash, the message is ack'ed and it's all done as per normal). If, however, a matching queue has zero ready consumers, the message will not be enqueued for subsequent redelivery from that queue. Only if all of the matching queues have no ready consumers is the message returned to the sender (via basic.return).
Or in my words, "If there is at least one consumer connected to my queue that can take delivery of a message right this moment, deliver this message to them immediately. If there are no consumers connected then there's no point in having my message consumed later and they'll never see it. They snooze, they lose."
http://www.rabbitmq.com/blog/2012/11/19/breaking-things-with-rabbitmq-3-0/
Removal of "immediate" flag
What changed? We removed support for the rarely-used "immediate" flag on AMQP's basic.publish.
Why on earth did you do that? Support for "immediate" made many parts of the codebase more complex, particularly around mirrored queues. It also stood in the way of our being able to deliver substantial performance improvements in mirrored queues.
What do I need to do? If you just want to be able to publish messages that will be dropped if they are not consumed immediately, you can publish to a queue with a TTL of 0.
If you also need your publisher to be able to determine that this has happened, you can also use the DLX feature to route such messages to another queue, from which the publisher can consume them.
Just copied the announcement here for a quick reference.
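And a rough sketch (again with the RabbitMQ Java client) of the replacement the announcement suggests: a queue declared with a TTL of 0 plus a dead-letter exchange, so messages that cannot be handed to a consumer immediately end up on a queue the publisher can inspect. All names here are made up for the example.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.HashMap;
import java.util.Map;

public class ImmediateReplacementDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Expired messages are dead-lettered to this exchange and queue.
            channel.exchangeDeclare("dlx", "fanout", true);
            channel.queueDeclare("not.consumed.immediately", true, false, false, null);
            channel.queueBind("not.consumed.immediately", "dlx", "");

            // TTL of 0: a message survives only if a consumer can take it right
            // away; otherwise it expires and is routed to the DLX.
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-message-ttl", 0);
            queueArgs.put("x-dead-letter-exchange", "dlx");
            channel.queueDeclare("work", true, false, false, queueArgs);

            // With no consumer on "work", this message goes straight to the DLX queue.
            channel.basicPublish("", "work", null, "hello".getBytes());
        }
    }
}
```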