ActiveMQ messages appear to be dequeued from one broker but do not arrive at the other in a network of brokers - activemq

I have a network of brokers: messages are produced from a Java app onto a small local broker in the field, which has a static network destination for the queue pointing at a central broker (in AWS), where a consumer sits across all the queues for the remote brokers.
If there is any interruption in the connection, the remote broker reconnects successfully. The issue is that in some cases, after the remote broker reconnects, the queue on the remote broker appears to rack up its dequeue count, yet the central broker to which it is meant to be forwarding does not show an increasing enqueue count for the queue.
The messages enqueued at the remote end are Persistent = YES, priority = 4. If I manually add a message on the broker with Persistent = YES, the same behaviour is exhibited. If I set Persistent = NO, the message successfully reaches the other end. If I restart the broker, the persistent messages flow again (although the ones it thought it had successfully sent are lost).
What situation could cause this loss of messages, and is there a configuration option that can be tweaked to fix this?
The configuration for the remote brokers is a standard AMQ install, with the following network connector defined:
<networkConnectors>
<networkConnector
uri="${X.remoteMsgURI}"
staticBridge="true"
dynamicOnly="false"
alwaysSyncSend="true"
userName="${X.remoteUsername}"
password="${X.remotePassword}"
>
<staticallyIncludedDestinations>
<queue physicalName="cloud.${X.environment}.queue.${X.queuebase}"/>
</staticallyIncludedDestinations>
</networkConnector>
</networkConnectors>
The connection string for the static remote is:
remoteMsgURI=static:(failover:(ssl://X-1.mq.eu-west-2.amazonaws.com:61617?wireFormat.maxInactivityDuration=30000,ssl://X-2.amazonaws.com:61617?wireFormat.maxInactivityDuration=30000))

Related

Logstash with rabbitmq cluster

I have a 3 node cluster of RabbitMQ behind an HAProxy load balancer. When I shut down a node, RabbitMQ successfully switches the queue to the other nodes. However, I notice that Logstash stops pulling messages from the queue unless I restart it. Is this a problem with the way RabbitMQ operates, i.e. does it deactivate all active consumers? I am not sure if Logstash has any retry capability. Has anyone run into this issue?
Quoting the RabbitMQ documentation, first the page on clustering:
What is Replicated? All data/state required for the operation of a
RabbitMQ broker is replicated across all nodes. An exception to this
are message queues, which by default reside on one node, though they
are visible and reachable from all nodes.
and then the page on high availability:
Clients that are consuming from a mirrored queue may wish to know that
the queue from which they have been consuming has failed over. When a
mirrored queue fails over, knowledge of which messages have been sent
to which consumer is lost, and therefore all unacknowledged messages
are redelivered with the redelivered flag set. Consumers may wish to
know this is going to happen.
If so, they can consume with the argument x-cancel-on-ha-failover set
to true. Their consuming will then be cancelled on failover and a
consumer cancellation notification sent. It is then the consumer's
responsibility to reissue basic.consume to start consuming again.
So, what does all this mean:
You have to mirror queues
The consumers should use manual ACK
The consumers should reconnect on their own
So the answer to your question is no, it's not a problem with rabbitmq, that's simply how it works. It's up to clients to reconnect.
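As a minimal sketch of the consumer-side piece, assuming the RabbitMQ Java client: the x-cancel-on-ha-failover flag is just an entry in the consumer-arguments map passed to basicConsume, and manual acknowledgement means passing autoAck = false. The helper below only builds that argument map; the actual basicConsume call is shown in a comment, since it needs a live broker connection:

```java
import java.util.HashMap;
import java.util.Map;

public class HaFailoverArgs {

    // Consumer arguments asking the broker to cancel this consumer
    // (with a consumer cancellation notification) when a mirrored
    // queue fails over.
    public static Map<String, Object> consumerArgs() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-cancel-on-ha-failover", true);
        return args;
    }

    public static void main(String[] argv) {
        // With the RabbitMQ Java client you would pass this map along
        // with manual acknowledgement, e.g.:
        //   channel.basicConsume(queueName, false /* autoAck */, consumerArgs(), consumer);
        // and reissue basicConsume from the consumer's handleCancel
        // callback to resume consuming after the failover.
        System.out.println(consumerArgs());
    }
}
```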

ActiveMQ consumer connection differs from producer

The following is my ActiveMQ setup:
I have two AMQ brokers which are configured with failover.
I have 40 producers but only one consumer.
Now the problem:
From time to time, one of the producers loses the connection to the master broker. The failover kicks in and the producer gets a new connection to the slave, which then receives the messages. So far so good. But the consumer does not have this problem; it still consumes messages from the master. It does not know that the slave also holds some messages.
How can I now solve the problem of losing those messages that are sent to the slave?
Thanks in advance
I would recommend you configure a network of brokers. That way, your brokers will be connected as well, and it no longer matters which broker your producers and consumers connect to - the messages will get propagated across the network.
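A minimal sketch of what that could look like on one of the brokers (the host name and the duplex choice are assumptions; a duplex connector carries traffic in both directions over a single connector, otherwise you would define one connector on each broker):

```xml
<networkConnectors>
  <!-- On broker A, pointing at broker B; duplex="true" means messages
       are forwarded in both directions over this single connector. -->
  <networkConnector name="bridgeToB"
                    uri="static:(tcp://brokerB:61616)"
                    duplex="true"
                    networkTTL="2"/>
</networkConnectors>
```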

Message sent to Topic not forwarded by ActiveMQ

Just wondering if anyone has come across this problem with ActiveMQ.
I am using a network of brokers to forward messages with AMQ 5.11.0:
<networkConnectors>
<networkConnector name="linkToBrokerB" uri="static:(tcp://(brokerAddress):61616)">
<dynamicallyIncludedDestinations>
<queue physicalName="QueueName"/>
<topic physicalName="VirtualTopic.Message.Event.EventName"/>
</dynamicallyIncludedDestinations>
</networkConnector>
</networkConnectors>
When I queue a message on broker A it gets forwarded to the corresponding queue on broker B using the configuration above. However, it does not work for topics: when I send a message to a topic on broker A it does not get forwarded to the broker B topic. I have a consumer on both brokers listening to that respective topic. If I forward messages using one or more queues it works without any issues, but I cannot figure out why it does not work for topics.
I tried using ">" but it does not forward anything. I can see that the topic has a consumer and that broker B is connected to broker A in the "network" tab, but it does not forward my topic as it does my queues. I have also checked that the physical name used in the configuration is the same one that appears under the "topics" category.
Any help would be appreciated
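One thing worth checking, given the VirtualTopic prefix in the included destination: with virtual topics the real subscribers are queue consumers on Consumer.<name>.VirtualTopic... queues, so network demand is created for those queues rather than for the topic itself. A sketch of an inclusion list that covers both cases (the queue pattern is an assumption about how your consumers are named):

```xml
<dynamicallyIncludedDestinations>
  <queue physicalName="QueueName"/>
  <!-- forward the physical queues that virtual-topic consumers actually use -->
  <queue physicalName="Consumer.*.VirtualTopic.Message.Event.EventName"/>
  <!-- keep the topic itself for any plain (non-virtual) topic subscribers -->
  <topic physicalName="VirtualTopic.Message.Event.EventName"/>
</dynamicallyIncludedDestinations>
```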

How Activemq Virtual topic subscription propagation in Network of brokers works?

Could somebody clarify behavior of activemq virtual topics in a context of Network of Brokers?
I have a confusion about subscription propagation.
For example, there is one broker which has a network connector to another one. Let's say broker mq001 has the following network connector open to broker mq002:
<networkConnectors>
<networkConnector name="connectorToRemoteBroker" uri="static:(tcp://mq002:61616)?maxReconnectAttempts=0" duplex="false" networkTTL="3" decreaseNetworkConsumerPriority="true"/>
</networkConnectors>
Then I run a consumer (A) against a virtual topic on broker mq001:
endpointURI: activemq:Consumer.A.VirtualTopic.tempTopic
I notice some interesting behavior in the ActiveMQ console. First of all, no topic "VirtualTopic.tempTopic" is created. However, the underlying physical queue of the virtual topic is available: Consumer.A.VirtualTopic.tempTopic
And this queue has one active local consumer.
Then I start another consumer (B) to the same virtual topic but already on the broker 2 (mq002).
endpointURI - activemq:Consumer.B.VirtualTopic.tempTopic
If I look at the ActiveMQ console on broker 2 now, I still do not see any virtual topic. There is another physical queue created, Consumer.B.VirtualTopic.tempTopic, which has one active consumer (also local to mq002).
When I look at the console on broker one, I now see two queues:
Consumer.A.VirtualTopic.tempTopic - with an active local consumer
Consumer.B.VirtualTopic.tempTopic - with an active remote consumer.
So subscription propagation works at the level of physical queues, at least. And because the connector is not duplex, it works from mq002 to mq001 only.
Then I publish a message to the topic:
activemq:topic:VirtualTopic.tempTopic
It is consumed by both consumers, on mq001 and mq002. Also, the topic finally appears in the ActiveMQ console (VirtualTopic.tempTopic).
So each consumer consumed exactly one message. If I repeat this with a bigger number of messages it still works the same: no duplicates arrive, and no messages are lost. The number of enqueued messages on each physical queue matches the number on the virtual topic.
That is exactly behavior I would expect from a virtual topic in case of network of brokers.
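For reference, the queue-per-consumer behaviour described above comes from ActiveMQ's default virtual-topic interceptor, which can also be spelled out explicitly in the broker configuration (this sketch just restates the defaults):

```xml
<destinationInterceptors>
  <virtualDestinationInterceptor>
    <virtualDestinations>
      <!-- Any topic matching VirtualTopic.> is fanned out to
           queues named Consumer.*.VirtualTopic.<rest> -->
      <virtualTopic name="VirtualTopic.>" prefix="Consumer.*."/>
    </virtualDestinations>
  </virtualDestinationInterceptor>
</destinationInterceptors>
```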
But now the source of my confusion:
http://activemq.apache.org/virtual-destinations.html#VirtualDestinations-AvoidingDuplicateMessageinaNetworkofBrokers
it is likely you will get duplicate messages if you use the default
network configuration. This is because a network node will not only
forward message sent to the virtual topic, but also the associated
physical queues.
First of all, I have not seen any duplicates, and it worked well. But what would happen if I followed the advice and excluded the physical queue destination?
<networkConnectors>
<networkConnector name="connectorToRemoteBroker" uri="static:(tcp://mq002:61616)?maxReconnectAttempts=0" duplex="false" networkTTL="3" decreaseNetworkConsumerPriority="true">
<excludedDestinations>
<queue physicalName="Consumer.*.VirtualTopic.>"/>
</excludedDestinations>
</networkConnector>
</networkConnectors>
Then when I start the consumers, I no longer see a remote consumer on broker mq001 listening to the physical queue Consumer.B.VirtualTopic.tempTopic. And if I publish messages to the virtual topic, they are consumed by consumer A (local) only. So it looks like subscription propagation is ignored for virtual topics and works on the physical queues only.
It looks to me like the ActiveMQ documentation is a little outdated. Can anybody confirm or refute this?
Thanks in advance!
So your tests above are correct.
I just updated the docs to specify that you can get the dups when using both traditional topic subscribers AND virtual topic subscribers to the same destination over the network. That means, in your example, if I had a topic subscriber to "VirtualTopic.tempTopic" on mq002 as well as a consumer on queue "Consumer.B.VirtualTopic.tempTopic", then I could end up with dups. Hope that clears things up. If you're using ONLY queue-based subscribers, then don't exclude the queue-based demand forwarding.
I have written a unit test that you can take a look at here:
http://svn.apache.org/viewvc/activemq/trunk/activemq-unit-tests/src/test/java/org/apache/activemq/usecases/TwoBrokerVirtualTopicForwardingTest.java?view=markup

activemq failover : primary node recover consumers?

I am new to ActiveMQ. I have configured two ActiveMQ servers and am using them with the failover transport. They work fine: if one ActiveMQ instance goes down, the other picks up the queues. My problem is that when the main server comes back up, it does not take back the queues. Is there any configuration or protocol that can make the consumers come back to the main server once it is up again?
Currently my configuration is :
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616" updateClusterClients="true" rebalanceClusterClients="true"/>
</transportConnectors>
<networkConnectors>
<networkConnector uri="static:(tcp://192.168.0.122:61616)"
networkTTL="3"
prefetchSize="1"
decreaseNetworkConsumerPriority="true" />
</networkConnectors>
and my connection uri is :
failover:(tcp://${ipaddress.master}:61616,tcp://${ipaddress.backup}:61616)?randomize=false
Also, I want to send a mail when a failover occurs, so that I know when ActiveMQ is down.
What you have configured there is not a true HA deployment, but a network of brokers. If you have two brokers configured in a network, each has its own message store, which at any time contains a partial set of messages (see how networks of brokers work).
The behaviour that you would likely expect to see is that if one broker falls over, the other takes its place resuming from where the failed one left off (with all of the undelivered messages that the failed broker held). For that you need to use a (preferably shared-storage) master-slave configuration.
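A minimal sketch of the shared-storage approach, assuming both brokers can reach the same directory (e.g. an NFS or SAN mount; the path below is an assumption). Whichever broker grabs the lock on the store becomes master; the slave takes over, with all persisted messages, only when the lock is released:

```xml
<persistenceAdapter>
  <!-- Same directory configured on both brokers; the store lock
       decides which instance is the active master. -->
  <kahaDB directory="/mnt/shared/activemq/kahadb"/>
</persistenceAdapter>
```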
I have done that, and am posting the solution in case anyone else has the same problem.
This feature is available since ActiveMQ 5.6. Setting priorityBackup=true in the connection URL is the key to telling the consumer to come back to the primary node when it is available.
My new connection uri is :
failover:(tcp://master:61616,tcp://backup:61616)?randomize=false&priorityBackup=true
see here for more details.