ActiveMQ network of brokers doesn't forward messages

I have two ActiveMQ brokers (A and B) configured as a store-and-forward network. They work perfectly for forwarding messages from A to B when a consumer is connected to broker B and a producer sends messages to A. The problem is that when the consumer is killed and reconnected to A, the messages queued on B (forwarded there from A) are not forwarded back to A, where the consumer is now connected. Even if I send new messages to B, all messages stay stuck on B until I restart the brokers. I have tried setting networkTTL="4" and duplex="true" on the broker network connector, but it doesn't help.

Late answer, but hopefully this will help someone else in the future.
Messages are getting stuck in B because by default AMQ doesn't allow messages to be sent back to a broker to which they have previously been delivered. In the normal case, this prevents messages from going in cycles around mesh-like network topologies without getting delivered, but in the failover case it results in messages stuck on one broker and unable to get to the broker where all the consumers are.
To allow messages to go back to a broker if the current broker is a dead-end because there are no consumers connected to it, you should use replayWhenNoConsumers=true to allow forwarding messages that got stuck on B back to A.
That configuration option, some settings you might want to use in conjunction with it, and some considerations when using it, are described in the "Stuck Messages (version 5.6)" section of http://activemq.apache.org/networks-of-brokers.html, http://tmielke.blogspot.de/2012/03/i-have-messages-on-queue-but-they-dont.html, and https://issues.apache.org/jira/browse/AMQ-4465. Be sure that you can live with the side effects of these changes (e.g. the potential for duplicate message delivery of other messages across your broker-to-broker network connections).
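For reference, replayWhenNoConsumers is enabled per destination through a policy entry with a conditionalNetworkBridgeFilterFactory. A minimal sketch of the relevant config fragment (ActiveMQ 5.6+), placed inside the <broker> element of each broker's activemq.xml:

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- apply to all queues; replay stuck messages back across the
           network bridge when the local broker has no consumers -->
      <policyEntry queue=">">
        <networkBridgeFilterFactory>
          <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
        </networkBridgeFilterFactory>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```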

Can you give more information on the configuration of broker A and B, as well as what you are trying to achieve?
It seems to me you could achieve what you want by setting a network of brokers (with A and B), with the producer only connecting to one, the consumer to the other.
The messages will automatically be transmitted to the other broker as long as the other broker has an active subscription to the destination the message was sent to.
I would not recommend changing the networkTTL if you are not sure of the consequences; it tends to lead to unwanted message loops.
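As a sketch of that topology, broker A would declare a network connector to broker B (hostname and port here are placeholders); with duplex="true" on one side, a single connector carries traffic in both directions, so B does not need its own connector back to A:

```xml
<!-- in broker A's activemq.xml -->
<networkConnectors>
  <networkConnector name="a-to-b"
                    uri="static:(tcp://broker-b:61616)"
                    duplex="true"/>
</networkConnectors>
```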

Related

Confirmation of messages between nodes in dynamic shovelling

I know RabbitMQ supports the mechanism of Publisher Confirms, the broker's acknowledgements to publishers. The documentation states the broker confirms messages as it handles them by sending a basic.ack on a channel that was set in "confirm mode". This communication is between a broker and a publisher client.
Let's assume that I have a main node A and a secondary node B in another data center, and that dynamic shovelling is set up from A to B. According to the documentation, "ack-mode" determines how the shovel acknowledges messages. If set to "on-confirm", messages are acknowledged to the source broker (A) after they have been confirmed by the destination (broker B).
I’d like to ask whether these two mechanisms are connected (or whether they can be). When a client connected to node A receives a confirmation, does that mean that the message has been published to node B too (if ack-mode=on-confirm)?
No, these are not connected. In the case of dynamic shovels, what comes into the picture is ack-mode, which is one of the configuration parameters of the shovel. It can take three possible values:
on-confirm
on-publish
no-ack
This is how it works.
ack-mode: determines how the shovel should acknowledge messages. If set to on-confirm (the default), messages are acknowledged to the source broker after they have been confirmed by the destination. This handles network errors and broker failures without losing messages, and is the slowest option.
If set to on-publish, messages are acknowledged to the source broker after they have been published at the destination. This handles network errors without losing messages, but may lose messages in the event of broker failures.
If set to no-ack, message acknowledgements are not used. This is the fastest option, but may lose messages in the event of network or broker failures.
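To make the setting concrete: a dynamic shovel is declared as a runtime parameter, and ack-mode is just one key of its JSON value. A minimal sketch (URIs and queue names are hypothetical), e.g. as the value passed to rabbitmqctl set_parameter shovel my-shovel '...':

```json
{
  "src-uri": "amqp://node-a",
  "src-queue": "orders",
  "dest-uri": "amqp://node-b",
  "dest-queue": "orders",
  "ack-mode": "on-confirm"
}
```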

Resiliently processing messages from RabbitMQ

I'm not sure how to resiliently handle RabbitMQ messages in the event of an intermittent outage.
I subscribe in a Windows service, read each message, then store it in my database. If I can't process a record because of its data, I publish it to a dead-letter queue for a human to address and reprocess.
I am not sure what to do when there is an intermittent technical issue that will fix itself (database reboot, network outage, drive space, etc.). I don't want hundreds of messages showing up in the dead-letter queue that just needed to wait out a glitch but are now waiting on a human.
Currently, I re-queue the message and retry it once, but it retries so fast that the issue is usually not resolved. I thought of retrying forever, but I don't want a real issue to get stuck in an infinite loop.
This is a broad topic, but on the server side you can persist your messages and make your queues durable; this means that if the server gets restarted, they won't be lost. See more here: How to persist messages during RabbitMQ broker restart?
For the consumer (client) it will depend on how you configure your client, from the docs:
In the event of network failure (or a node crashing), messages can be duplicated, and consumers must be prepared to handle them. If possible, the simplest way to handle this is to ensure that your consumers handle messages in an idempotent way rather than explicitly deal with deduplication.
If a message is delivered to a consumer and then requeued (because it was not acknowledged before the consumer connection dropped, for example) then RabbitMQ will set the redelivered flag on it when it is delivered again (whether to the same consumer or a different one). This is a hint that a consumer may have seen this message before (although that's not guaranteed, the message may have made it out of the broker but not into a consumer before the connection dropped). Conversely if the redelivered flag is not set then it is guaranteed that the message has not been seen before. Therefore if a consumer finds it more expensive to deduplicate messages or process them in an idempotent manner, it can do this only for messages with the redelivered flag set.
Check more here: https://www.rabbitmq.com/reliability.html#consumer
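Following the quoted advice, the cheap variant of deduplication, checking for duplicates only when the redelivered flag is set, can be sketched in plain Python. The flag and the message id stand in for what a client library such as pika would hand to a consumer callback; all names here are illustrative:

```python
# Sketch of the dedupe strategy from the RabbitMQ docs: only pay the
# deduplication cost for deliveries whose redelivered flag is set,
# since a non-redelivered message is guaranteed to be new.
processed_ids = set()  # in production this would be a durable store

def handle_message(message_id, redelivered, process):
    """Process a message, skipping redelivered duplicates.

    message_id:  application-level unique id carried in the message
    redelivered: RabbitMQ's redelivered flag for this delivery
    process:     callable that does the actual (side-effecting) work
    """
    if redelivered and message_id in processed_ids:
        return "duplicate-skipped"
    process()
    processed_ids.add(message_id)
    return "processed"
```

Note that redelivered is only a hint: a redelivered message may still be new (it may never have reached a consumer before the connection dropped), which is why the sketch processes it when its id is unknown.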

ActiveMQ, Network of brokers, offline durable subscriber dedupe

Scenario: Two ActiveMQ nodes A, B. No master slave, but peers, with network connectors between them.
A durable topic subscriber is registered with both (as it uses failover and at one point connects to A and at another point connects to B).
Issue: While the subscriber is online against A, a copy of each message is also placed in the offline durable subscription on B.
Question: Is this by design? Can this be configured so that a message is deduped and only sent to the subscriber in one of subscriptions?
Apparently by design: http://activemq.apache.org/how-do-distributed-queues-work.html
See "Distributed Topics in Store/Forward" where it says:
For topics the above algorithm is followed except, every interested client receives a copy of the message - plus ActiveMQ will check for loops (to avoid a message flowing infinitely around a ring of brokers).

Ideal setup for Rebus and RabbitMQ, non-durable messages, and request-reply peers

I'm exploring using Rebus and RabbitMQ together to cover a couple of different scenarios.
Scenario A
I want to be able to have a central server push notifications to a list of arbitrary subscribers, but the messages don't need to be durable or persisted. If a subscriber is connected, they should receive a notification, but if they disconnect, then there's no need to queue a message for any client.
In my tests so far, I'm able to get a producer and consumer communicating with UseRabbitMqInOneWayMode() and ManageSubscriptions(), however, the messages build up in RabbitMQ when there are no subscribers, or if a subscriber disconnects. I've tried setting the header to false for RabbitMqMessageQueue.InternalHeaders.MessageDurability, but it has no effect. I suspect it's because the default queue that Rebus sets up is durable. Is there a way within Rebus to control this behavior?
Scenario B
As clients come online, or disconnect, I'd like to setup a request/reply channel between clients. For example:
Client A and client B connect
Client A will send a message requesting data that only Client B has. Client B gathers the info, and replies back to A.
Client B disconnects
Client A requests data from Client B, and should receive an error because B is no longer available.
What's the recommended config for this case?
Thanks.
I'm not an expert on RabbitMQ, and so the Rebus support for RabbitMQ mostly comes from community contributions.
I think scenario A can be solved pretty easily though by using the RabbitMQ concept of "auto-delete queues", which you can configure with Rebus like this:
Configure.With(...)
    .Transport(t => t.UseRabbitMq(...)
                     .ManageSubscriptions()
                     .AutoDeleteInputQueue())
    .(...)
which causes RabbitMQ to delete the queue when the last subscriber disconnects.
In scenario B it sounds to me like you would be better off with something that is meant for synchronous communication, because that's what you really want. I suggest you use HTTP because it's pretty good at doing request/reply :)

configure the broker

I am using ActiveMQ as message Broker with something like 140 Topics.
I am facing a problem: the broker keeps old messages instead of discarding them in favor of new messages, so clients get old data instead of current data.
How do I configure the broker not to keep old messages? The important data is always the latest data, so if a consumer missed some data, it should simply get the most recent data next time.
I have configured a TTL of 250 on the producer, but it doesn't seem to work...
One other thing,
How can I disable the creation of advisory topics?
Any help will be appreciated...
Beware that using advisorySupport="false" will NOT work with dynamic networks of brokers. As the reference page http://activemq.apache.org/advisory-message.html puts it:
Advisory messages are required for dynamic network broker topologies, as NetworkConnectors subscribe to advisory messages. In the absence of advisories, a network must be statically configured.
Are you using a durable consumer to receive these messages from the topics concerned? If so, the broker will be holding on to all messages sent when you were disconnected. Switch to a regular consumer in order to only see "current" messages on the topic.
To prevent the creation of advisory topics and their associated messages add the advisorySupport="false" property to the <broker /> element of the ActiveMQ config file.
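A minimal sketch of that change in activemq.xml (the broker name and the rest of the element are placeholders):

```xml
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost"
        advisorySupport="false">
  <!-- ... rest of the broker configuration ... -->
</broker>
```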