I have some clients connected to an exchange through queues declared with auto-delete: yes. They are all both publishers and consumers, but for now let's assume they are publishing messages. Because each client has a unique binding key, I can handle each message explicitly on the machine that consumes these messages. Everything works fine.
Now, if a client crashes or I terminate it manually (via SIGINT, Ctrl+C), the queue gets deleted. Is there any way I can notify the consumers on the remote machines that the queue was deleted?
I'm thinking of installing a signal handler in my client application, so that whenever I catch a SIGINT or SIGTERM I notify the remote consumers (I'll send them a message saying that the queue with the unique id is going to be deleted); see the sketch below.
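A hedged sketch of that idea with the RabbitMQ Java client (the "control" exchange, routing key, and id are all hypothetical; note that a JVM shutdown hook fires on SIGINT/SIGTERM but not on a hard kill or crash, so this only covers the graceful cases):

    import java.nio.charset.StandardCharsets;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class GracefulPublisher {
        public static void main(String[] args) throws Exception {
            Connection conn = new ConnectionFactory().newConnection();
            Channel channel = conn.createChannel();
            String queueId = "client-42";  // hypothetical unique binding key / queue id

            // Runs on SIGINT/SIGTERM (not on kill -9 or a hard crash)
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                try {
                    channel.basicPublish("control", "queue.deleted", null,
                            queueId.getBytes(StandardCharsets.UTF_8));
                    conn.close();
                } catch (Exception ignored) {
                    // best effort: the broker may already be unreachable
                }
            }));

            // ... normal publishing work ...
        }
    }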
Are there other ways to do this, or is my approach the correct one?
As a general rule in messaging, consuming applications do not care about the status of producing applications.
In RabbitMQ, producing applications may become aware of a consuming application's status by way of one of two mechanisms. The first (and preferred) method is via a Dead-Letter Exchange (DLX). When your message can't be delivered (because the destination queue no longer exists), it is routed there, and your application can pull messages off the queues bound to the DLX to figure out which ones didn't make it to their destination.
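A minimal sketch of such a setup with the RabbitMQ Java client (all names are made up; the dead-letter exchange is configured as an argument on the queue whose rejected or expired messages should be rerouted):

    import java.util.HashMap;
    import java.util.Map;
    import com.rabbitmq.client.BuiltinExchangeType;
    import com.rabbitmq.client.Channel;

    public class DlxSetup {
        static void declare(Channel channel) throws Exception {
            // Exchange + queue that will receive the dead-lettered messages
            channel.exchangeDeclare("my-dlx", BuiltinExchangeType.FANOUT, true);
            channel.queueDeclare("dead-letters", true, false, false, null);
            channel.queueBind("dead-letters", "my-dlx", "");

            // Work queue: messages rejected from it (or expired) go to my-dlx
            Map<String, Object> args = new HashMap<>();
            args.put("x-dead-letter-exchange", "my-dlx");
            channel.queueDeclare("work-queue", true, false, false, args);
        }
    }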
The second method is to set the Mandatory flag on the message. This will cause the broker to send the message right back to the producing application via a Basic.Return method in cases where the destination queue is no longer there.
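For example, with the Java client (the exchange and routing key are made up):

    import com.rabbitmq.client.Channel;

    public class MandatoryPublish {
        static void publish(Channel channel, byte[] body) throws Exception {
            // Called by the broker (Basic.Return) when a mandatory message is unroutable
            channel.addReturnListener((replyCode, replyText, exchange, routingKey,
                                       properties, returnedBody) -> {
                System.err.println("Unroutable message returned: " + replyText);
            });
            // mandatory = true: return the message instead of silently dropping it
            channel.basicPublish("my-exchange", "client-42", true, null, body);
        }
    }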
If the above items don't meet your needs, you may want to revisit your architecture somewhat as there is probably a better way to design your application.
Requirement
A system undergoes a state change, and multiple other parts of the system (let's call them observers) have to know about it so that they can act on the new state. The observers' actions are important: if some observer is not online (not currently listening due to some trouble, but will be back soon), the message must not be discarded until all the observers have received it.
Trying to accomplish this with a pub/sub model, here are my findings (please correct me if this understanding is wrong):
The publisher creates an event on a specific topic, and multiple subscribers can consume the same message. This model either provides no delivery guarantee (in Redis), or delivery is guaranteed only once (with message queues), i.e. when one of the consumers acknowledges a message, the message is discarded (RabbitMQ).
Example
A new Person Profile entity gets created in DB
Now,
A background verification service has to know this to trigger the verification process.
Subscriptions service has to know this to add default subscriptions to the user.
Now, both tasks are important, unrelated, and can run in parallel.
In the queue model, if the subscription service is down for some reason and the background verification process acknowledges the message, the message will be removed from the queue; and if it is fire-and-forget, like most pub/sub, delivery is not guaranteed for both services anyway.
One more point: both tasks are unrelated and need not be triggered one after the other.
In short, my need is to make sure all the consumers get the same message and can acknowledge it individually; the message should be evicted only after all the consumers have acknowledged it. Neither of the above approaches does this.
Am I missing anything here? How should I approach this problem?
This scenario is explicitly supported by RabbitMQ's model, which separates "exchanges" from "queues":
A publisher always sends a message to an "exchange", which is just a stateless routing address; it doesn't need to know what queue(s) the message should end up in
A consumer always reads messages from a "queue", which contains its own copy of messages, regardless of where they originated
Multiple consumers can subscribe to the same queue, and each message will be delivered to exactly one consumer
Crucially, an exchange can route the same message to multiple queues, and each will receive a copy of the message
The key thing to understand here is that while we talk about consumers "subscribing" to a queue, the "subscription" part of a "pub-sub" setup is actually the routing from the exchange to the queue.
So a RabbitMQ pub-sub system might look like this:
A new Person Profile entity gets created in DB
This event is published as a message to an "events" topic exchange with a routing key of "entity.profile.created"
The exchange routes copies of the message to multiple queues:
A "verification_service" queue has been bound to this exchange to receive a copy of all messages matching "entity.profile.#"
A "subscription_setup_service" queue has been bound to this exchange to receive a copy of all messages matching "entity.profile.created"
The consuming scripts don't know anything about this routing; they just know that messages will appear in their queue for events that are relevant to them:
The verification service picks up the copy of the message on the "verification_service" queue, processes, and acknowledges it
The subscription setup service picks up the copy of the message on the "subscription_setup_service" queue, processes, and acknowledges it
If there are multiple consuming scripts looking at the same queue, they'll share the messages on that queue between them, but still completely independently of any other queue.
This scenario can be modelled in this interactive visualisation tool.
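In code, that wiring might look roughly like this with the RabbitMQ Java client (a sketch; durability flags and error handling are simplified):

    import java.nio.charset.StandardCharsets;
    import com.rabbitmq.client.BuiltinExchangeType;
    import com.rabbitmq.client.Channel;

    public class PubSubTopology {
        static void wire(Channel channel) throws Exception {
            channel.exchangeDeclare("events", BuiltinExchangeType.TOPIC, true);

            // Each service gets its own queue, bound with its own pattern
            channel.queueDeclare("verification_service", true, false, false, null);
            channel.queueBind("verification_service", "events", "entity.profile.#");

            channel.queueDeclare("subscription_setup_service", true, false, false, null);
            channel.queueBind("subscription_setup_service", "events", "entity.profile.created");

            // One publish; the exchange delivers a copy to every matching queue
            channel.basicPublish("events", "entity.profile.created", null,
                    "{\"id\": 42}".getBytes(StandardCharsets.UTF_8));
        }
    }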
As you mentioned, this is not something you can control with the Redis Pub/Sub data structure.
But you can do it easily with Redis Streams.
Streams allow you to post messages using the XADD command, control which consumers are dealing with each message, and acknowledge that a message has been processed.
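A hedged sketch with the Jedis client (Jedis 3.x signatures; the stream, group, and consumer names are made up):

    import java.util.AbstractMap.SimpleEntry;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.StreamEntry;
    import redis.clients.jedis.StreamEntryID;

    public class StreamsSketch {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // XADD: append an event to the stream
                jedis.xadd("events", StreamEntryID.NEW_ENTRY,
                        Collections.singletonMap("type", "profile.created"));

                // XGROUP CREATE: each observer gets its own consumer group
                jedis.xgroupCreate("events", "verification", new StreamEntryID(), true);

                // XREADGROUP ... STREAMS events > : entries never delivered to this group
                List<Map.Entry<String, List<StreamEntry>>> batch = jedis.xreadGroup(
                        "verification", "worker-1", 10, 2000, false,
                        new SimpleEntry<>("events", StreamEntryID.UNRECEIVED_ENTRY));

                if (batch != null) {
                    for (Map.Entry<String, List<StreamEntry>> stream : batch) {
                        for (StreamEntry entry : stream.getValue()) {
                            // process(entry); then XACK: acknowledgement is per group,
                            // so other groups still see their own copy of the message
                            jedis.xack("events", "verification", entry.getID());
                        }
                    }
                }
            }
        }
    }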
You can look at these sample applications that provide (in Java) examples of:
posting and consuming messages
creating multiple consumer groups
managing exceptions
Links:
Getting Started with Redis Streams and Java
Redis Streams in Action (a project that shows how to use ADD/ACK/PENDING/CLAIM and build an error-proof streaming application with Redis Streams and Spring Data)
I'm not sure how to resiliently handle RabbitMQ messages in the event of an intermittent outage.
I subscribe in a Windows service, read the message, then store it in my database. If I can't process the record because of its data, I publish it to a dead-letter queue for a human to address and reprocess.
I am not sure what to do if I have some intermittent technical issue that will fix itself (database reboot, network outage, drive space, etc.). I don't want hundreds of messages showing up in the dead-letter queue that just needed to wait out a glitch but now are waiting on a human.
Currently, I re-queue the event and retry it once, but the retry happens so fast that the issue is usually not yet resolved. I thought of retrying forever, but I don't want a real issue to get stuck in an infinite loop.
This is a broad topic, but on the server side you can persist your messages and make your queues durable; this means that if the server gets restarted, they won't be lost. See more here: How to persist messages during RabbitMQ broker restart?
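For example, with the Java client (the queue name is made up):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.MessageProperties;

    public class DurablePublish {
        static void publish(Channel channel, byte[] body) throws Exception {
            // durable = true: the queue definition survives a broker restart
            channel.queueDeclare("orders", true, false, false, null);
            // PERSISTENT_BASIC sets delivery-mode 2, so the message itself is
            // written to disk and also survives a restart
            channel.basicPublish("", "orders", MessageProperties.PERSISTENT_BASIC, body);
        }
    }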
For the consumer (client), it will depend on how you configure it; from the docs:
In the event of network failure (or a node crashing), messages can be duplicated, and consumers must be prepared to handle them. If possible, the simplest way to handle this is to ensure that your consumers handle messages in an idempotent way rather than explicitly deal with deduplication.
If a message is delivered to a consumer and then requeued (because it was not acknowledged before the consumer connection dropped, for example) then RabbitMQ will set the redelivered flag on it when it is delivered again (whether to the same consumer or a different one). This is a hint that a consumer may have seen this message before (although that's not guaranteed, the message may have made it out of the broker but not into a consumer before the connection dropped). Conversely if the redelivered flag is not set then it is guaranteed that the message has not been seen before. Therefore if a consumer finds it more expensive to deduplicate messages or process them in an idempotent manner, it can do this only for messages with the redelivered flag set.
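In consumer code, that last option might look like this (Java client; the processing and dedupe helpers are hypothetical):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.DeliverCallback;

    public class RedeliveryAwareConsumer {
        static void consume(Channel channel) throws Exception {
            DeliverCallback onDeliver = (consumerTag, delivery) -> {
                // redelivered = true is only a hint that the message may have been
                // seen before; redelivered = false guarantees it has not
                if (delivery.getEnvelope().isRedeliver()) {
                    // deduplicate(delivery);  // hypothetical, only pay this cost here
                }
                // process(delivery);          // hypothetical idempotent handler
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("orders", false, onDeliver, consumerTag -> { });
        }
    }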
Check more here: https://www.rabbitmq.com/reliability.html#consumer
I have an application with RabbitMQ at the backend, and I want to develop custom third-party analysis code that connects to the application's queues on RabbitMQ and collects data. My issue is that I want to be sure neither the application nor my code loses any data from RabbitMQ.
If it is possible, how can I configure the RabbitMQ queues? I have administrative access to RabbitMQ.
I hope it doesn't require changes to the producer's code, because I don't have access to the application code.
Thanks for your help
Change the current exchange/queue mapping to allow for message replication
At the moment, simplifying a bit, the existing producer sends a message to the existing exchange, which routes the message to some queue, from which the messages are consumed:
[producer-app] ---> existing-exchange ---> existing-queue ---> [existing-consumer]
Now, what you want is the following design, with the new consumer consuming the same messages:
[producer-app] ---> existing-exchange ---> existing-queue ---> [existing-consumer]
                                      \--> new-queue --------> [your-consumer]
You might need to change the configuration of existing-exchange to allow replication of your messages - for example, direct and fanout exchanges will deliver a copy of the same message to each matching queue.
Depending on your application, this might be quite easy to do without changes to the producer, but you need to be aware of possible pitfalls (a declaration sketch follows this list):
the producer might re-declare exchanges/queues/bindings from time to time, and throw exceptions if the current state cannot be changed to match its request (this might happen if you change the exchange's type)
you need to manage the new-queue on your own (preferably from your consumer artifact), as it is going to receive all the messages; if your consumer shuts down, the queue will not disappear unless it is made exclusive or has a TTL set
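Declared from the consumer side, that setup might look like this with the Java client (the exchange name, binding key, and the one-day TTL are assumptions about your setup):

    import java.util.HashMap;
    import java.util.Map;
    import com.rabbitmq.client.Channel;

    public class AnalysisConsumerSetup {
        static void declare(Channel channel) throws Exception {
            // Passive declare: verify the exchange exists without redefining it,
            // so we cannot clash with the producer's own declaration
            channel.exchangeDeclarePassive("existing-exchange");

            // Own the new queue from the consumer artifact; a per-message TTL
            // caps growth while the consumer is down (value in milliseconds)
            Map<String, Object> args = new HashMap<>();
            args.put("x-message-ttl", 86400000);
            channel.queueDeclare("new-queue", true, false, false, args);
            channel.queueBind("new-queue", "existing-exchange", "existing-binding-key");
        }
    }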
I have the following problem.
My program sends messages directly to the queue (without an exchange). I need to monitor incoming messages and send them to another queue without removing them from the source queue.
I don't have access to the program code, so I'm not able to publish messages to an exchange first.
Is it possible to solve this problem using the management web interface of RabbitMQ?
I tried to use the shovel plugin, but it removes all messages from the source queue after ack.
First, to clear up a few things:
My program sends messages directly to the queue (without an exchange)
This is not true; at the very least (and most likely in this case) the nameless default exchange is used.
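In client code, the two notions coincide; with the Java client, for instance, "publishing to a queue" is really a publish to the empty-named default exchange:

    import com.rabbitmq.client.Channel;

    public class DefaultExchangePublish {
        static void publish(Channel channel, byte[] body) throws Exception {
            // "" is the default (nameless) exchange; it routes each message to
            // the queue whose name equals the routing key
            channel.basicPublish("", "myQueue", null, body);
        }
    }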
removes all messages from source queue after ack
This is by design and therefore perfectly fine.
You should never keep messages in the queue; a queue is made to be consumed. As Derick Bailey says here:
RabbitMQ is not a database. RabbitMQ is a message broker and queueing system.
At the same link you will find your answer. I cannot give a concrete one since you didn't provide your motivation, but whatever it is, keeping messages in the queue is never good!
Maybe you want to log/store your message first and then process it, with the consequence of processing being some third action, or whatever...
I was looking for an ActiveMQ broker admin command to tell it to pause a queue - that is:
continue accepting messages from producing clients
cease delivering to consuming clients, allowing the queue backlog to grow until the queue is resumed, whereupon the backlog is sent to clients.
I was unable to find such a command. The most common answer was that it should be managed at the client end - that is, locate every consumer and stop it. Other answers were workarounds, like manipulating network routes or firewalls so that the clients and broker could no longer communicate.
A cursory survey of other message queues indicates that ActiveMQ is not unusual in this regard.
It seems to me there are two reasons this functionality might not be implemented:
It is difficult to implement -- but I can't think of any reason why.
It is counter to the design philosophy of message queues
Which is it, and why?
Being able to pause a queue is supported in the newly released ActiveMQ 5.12.0:
When the queue is "paused":
NO messages sent to the associated consumers
messages still to be enqueued on the queue
ability to be able to browse the queue
all the JMX counters for the queue to be available and correct.
...
implemented pause/resume/isPaused queue view mbean ops and attribute
when paused, there is no dispatch to regular queue consumers, send
and browse work as normal. Any inflight messages will continue inflight
till acked as normal.
See https://issues.apache.org/jira/browse/AMQ-5229
If you have Jolokia enabled (I think it is enabled by default nowadays), you can use something like the following curl request to pause the queue:
curl --user admin:admin http://127.0.0.1:8161/api/jolokia/exec/org.apache.activemq:brokerName=localhost,destinationName=myQueue,destinationType=Queue,type=Broker/pause
(Using the default username, password and broker name and a queue called myQueue)
Replace "pause" with "resume" in order to resume the queue.
Probably not too complicated to implement - as you say.
I don't know if it's an active design decision or if there has simply been no demand. Other similar products such as IBM WebSphere MQ implement "get/put inhibited" on queues, so it's obviously not totally against the philosophy of messaging - rather a tool for operating and troubleshooting live systems.
I'm a bit biased, but I actually like to decouple the sender from the receiver (if they are two different systems that might eventually get switched/upgraded/changed...).
An easy way to decouple the systems, and to be able to do what you want, is to make the sender send to one queue, "DATA.OUT", and the receiver listen to another, "DATA.IN". Then you can use Apache Camel (which is typically bundled with ActiveMQ to implement Enterprise Integration Patterns) to route from DATA.OUT to DATA.IN.
A Camel route can be started and stopped via JMX, which will achieve something similar to what you described.
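A sketch of such a bridge route in the Camel Java DSL (the route id is made up; stopping the route pauses delivery to DATA.IN while DATA.OUT keeps accumulating):

    import org.apache.camel.builder.RouteBuilder;

    public class BridgeRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Stop this route (e.g. via JMX) to pause delivery; messages then
            // back up on DATA.OUT until the route is started again
            from("activemq:queue:DATA.OUT")
                .routeId("data-bridge")
                .to("activemq:queue:DATA.IN");
        }
    }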
I guess the ActiveMQ design in this matter would rather have you do these kinds of things in a middleware layer, such as Apache Camel, than directly on the queues.