How to listen to multiple brokers in ActiveMQ

I'm having difficulty finding a solution where my ActiveMQ listener code listens for messages from multiple brokers. For example: we have 4 brokers (1, 2, 3, 4) which serve messages to consumers hosted on 4 servers (A, B, C, D). ConsumerA should listen for response messages from brokers 1, 2, 3 and 4. If it finds a message, consumerA should pick it up and process it. If consumerA is down for any reason, consumerB should listen to all 4 brokers.
Configuring the failover transport as below doesn't achieve this design.
activemq.broker.url=failover:(tcp://localhost:61716,tcp://localhost:61717,tcp://localhost:61718,tcp://localhost:61719)?randomize=false&timeout=5000&maxReconnectAttempts=3
With the above URI configuration my listener code only listens to the broker on port 61716; if a message is available on another broker, say on port 61717, it is not able to pick it up and process it. Any help will be really appreciated.
P.S.: Is there any example of one consumer listening to multiple brokers at the same time?

As I couldn't find a built-in ActiveMQ solution for one consumer listening to multiple brokers, we implemented a workaround: creating multiple beans, each pointing to one specific broker URL. That way we point to all 4 URLs from the same server and from the same listener configuration file.
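A sketch of that workaround in Spring XML, under the assumption that Spring JMS is in use; the queue name RESPONSE.QUEUE and the responseListener bean are placeholders, not from the original post:

```xml
<!-- One connection factory + listener container per broker; all four
     containers delegate to the same message listener bean. -->
<bean id="connectionFactory1" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="tcp://localhost:61716"/>
</bean>
<bean id="listenerContainer1"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
  <property name="connectionFactory" ref="connectionFactory1"/>
  <property name="destinationName" value="RESPONSE.QUEUE"/>
  <property name="messageListener" ref="responseListener"/>
</bean>
<!-- Repeat connectionFactory2-4 / listenerContainer2-4 for ports 61717-61719. -->
```

If one broker is down, the other three containers keep consuming. The failover part of the design (consumerB taking over when consumerA dies) would come from running the same configuration on server B, since consumers on the same queue compete for messages.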

Related

How to send messages to a specialised node in a RabbitMQ cluster

I have a cluster of RabbitMQ servers, but the machines hosting the servers differ in the software installed. They have no overlapping capabilities; they are specialised workers. For example, only one node has email software installed.
I know that a queue is bound to the node on which it is created. My question is: how can I set up my queues so I can send certain messages to the specially endowed node, where my special software is waiting for work, bypassing RabbitMQ's round-robin message distribution?
Maybe that is not the right approach; I am open to any working solution.
You could always connect to the IP address of the specific node in the cluster, rather than connecting to some kind of load balancer in front of the cluster; that is, specify a different IP in the client's method for opening the connection. This of course defeats the purpose of the cluster, but it seems to me that your setup does the same thing :)
By RabbitMQ server, do you mean an actual RabbitMQ server or clients/workers?
If I understand correctly, you can create a single exchange of type "topic". For each worker, create an exclusive queue and bind it to the exchange with a unique routing key, which in your case will be the capability of the host. When submitting a message to the exchange, use that capability as the routing key; the message will be routed to the appropriate host.
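A minimal in-memory Python sketch of that routing idea (not a real RabbitMQ client; the queue names and keys below are made up). It models exact-match routing; a real topic exchange additionally supports * and # wildcards in binding keys:

```python
# Each worker's exclusive queue is bound with its host's capability as the
# routing key, so publishing with that key reaches exactly that worker.
bindings = {}  # routing key -> list of queue names

def bind(queue, routing_key):
    """Bind a queue to the exchange under the given routing key."""
    bindings.setdefault(routing_key, []).append(queue)

def publish(routing_key, message):
    """Deliver the message to every queue bound with this routing key."""
    return [(queue, message) for queue in bindings.get(routing_key, [])]

bind("worker.email.queue", "email")
bind("worker.pdf.queue", "pdf")

print(publish("email", "send welcome mail"))
# only worker.email.queue receives it
```

A message published with an unbound key is simply dropped, which matches the behaviour of an exchange with no matching binding.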

Message sent to Topic not forwarded by ActiveMQ

Just wondering if anyone has come across this problem with ActiveMQ.
I am using a network of brokers to forward messages with ActiveMQ 5.11.0:
<networkConnectors>
  <networkConnector name="linkToBrokerB" uri="static:(tcp://brokerAddress:61616)">
    <dynamicallyIncludedDestinations>
      <queue physicalName="QueueName"/>
      <topic physicalName="VirtualTopic.Message.Event.EventName"/>
    </dynamicallyIncludedDestinations>
  </networkConnector>
</networkConnectors>
When I send a message to a queue on broker A, it gets forwarded to the corresponding queue on broker B using the configuration above. However, it does not work for topics: when I publish to a topic on broker A, it does not get forwarded to the topic on broker B. I have a consumer on each broker listening to that topic. Forwarding works without issue for one or more queues, but I cannot figure out why it does not work for topics.
I tried using the ">" wildcard but it does not forward anything. I can see that the topic has a consumer and that broker B is connected to broker A in the "network" tab, but topics are not forwarded the way my queues are. I have also checked that the physical name used in the configuration matches the one that appears under the "topics" category.
Any help would be appreciated
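Worth noting: the topic name in the configuration uses the VirtualTopic. prefix. If the consumers actually read from the virtual topic's consumer queues (named Consumer.<consumerName>.VirtualTopic...) rather than subscribing to the topic itself, the remote broker creates queue demand, not topic demand, and the topic is never forwarded. A hedged sketch of a configuration that forwards those consumer queues instead (the wildcard consumer name is an assumption about the setup):

```xml
<networkConnectors>
  <networkConnector name="linkToBrokerB" uri="static:(tcp://brokerAddress:61616)">
    <dynamicallyIncludedDestinations>
      <queue physicalName="QueueName"/>
      <queue physicalName="Consumer.*.VirtualTopic.Message.Event.EventName"/>
    </dynamicallyIncludedDestinations>
  </networkConnector>
</networkConnectors>
```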

Replicate ActiveMQ messages from one server to another

Q: We want to publish the same message to different ActiveMQ servers. Is there an approach where we publish once and ActiveMQ forwards that message to another instance?
Or is there any way we can do it through ActiveMQ configuration changes?
There is not much context in the question, but a simple topic together with a network of brokers should do that.
The idea is that you connect multiple brokers using a "network of brokers"; messages sent to a topic will then be available to all clients on all brokers throughout the network.
There are a lot of corner cases when it comes to networks of brokers and topics, but it should do the job.
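A minimal sketch of such a network connector in the first broker's activemq.xml, under the assumption that the second broker is reachable at brokerB:61616 and the topic is named EVENTS (both names are placeholders):

```xml
<networkConnectors>
  <networkConnector name="toBrokerB" uri="static:(tcp://brokerB:61616)" duplex="true">
    <dynamicallyIncludedDestinations>
      <topic physicalName="EVENTS"/>
    </dynamicallyIncludedDestinations>
  </networkConnector>
</networkConnectors>
```

With duplex="true" the single connector forwards in both directions; with the default duplex="false" you would configure a matching connector on the other broker as well. Note that topics are demand-forwarded, so a subscriber must exist on the remote broker for messages to flow there.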

ActiveMQ and randomize

Let's say I have the following ActiveMQ connection string:
failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=true
I am sending a few thousand requests to the brokers from a Java producer that has this configuration.
Sometimes I notice that all messages end up going to just one broker, with the other not receiving a single message.
Is this normal behavior?
Out of 10 tests I ran, I noticed this behavior a couple of times; at other times both brokers received messages.
How does randomize=true work?
The only explanation I found at http://activemq.apache.org/failover-transport-reference.html is: "use a random algorithm to choose the URI to use for reconnect from the list provided".
The randomize flag on the failover transport indicates that the transport should choose, at random, one of the configured broker URIs to connect to (in your case there are two to choose from). Once a client is connected to one of those brokers, it will remain happily connected and send messages only to that broker until something interrupts the connection. Once the connection is interrupted, the client will again attempt to connect to one of the two brokers. So in your case, a single producer sending all its messages to one broker means it's working exactly as expected.
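A small Python simulation of that behavior (not real ActiveMQ client code; the broker URIs are the ones from the question). The key point it illustrates: the random choice happens only at (re)connect time, so without an interruption every message goes to the single broker chosen at the start:

```python
import random

def connect(brokers, rng):
    """Pick one broker URI at random, as failover with randomize=true does."""
    return rng.choice(brokers)

def run_producer(brokers, n_messages, rng, fail_at=None):
    """Simulate a producer: connect once, then send every message to the
    same broker until the connection is interrupted, then reconnect."""
    counts = {b: 0 for b in brokers}
    current = connect(brokers, rng)
    for i in range(n_messages):
        if fail_at is not None and i == fail_at:
            current = connect(brokers, rng)  # reconnect after interruption
        counts[current] += 1
    return counts

brokers = ["tcp://broker1:61616", "tcp://broker2:61616"]
counts = run_producer(brokers, 1000, random.Random(42))
# Without an interruption, all 1000 messages land on the one chosen broker.
print(counts)
```

To actually spread load across both brokers you would need multiple connections (or a network of brokers), not randomize alone.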

ActiveMQ network of brokers with durable subscription topics

I have a little problem here with my sample JMS layout.
I have two brokers (A, B) on two machines, which are linked via a network connector. The idea is that the producer can send to either broker and the consumer can listen to either broker, and the topic to send to/receive from is available globally.
The topic has two durable subscriber clients (one on each machine) that will both process all the messages in the topic. I want a durable subscription so that the processes won't lose any workload if a process has to be restarted. Both subscriber clients are configured with a failover broker URL, so that they first try to connect to their localhost broker and, if it is not available, to the other. Failover of the clients seems to work, but I found a problem in the following situation:
Each broker, 'A' and 'B', has a subscriber client connected. The producer is sending to 'A'. Broker 'B' gets restarted. The client of 'B' registers the connection loss and switches to 'A'. 'B' comes up again, and because it had registered itself as a durable subscriber to 'A', it gets the message feed. It now has no active durable subscriber of its own ('A' now has three, including 'B'), so messages pile up until it reaches its connection limits.
Is my configuration wrong? Is it possible what I've intended?
Cheers,
Kai
Are you running master-slave configuration?
Why do you want both brokers to have connected clients at the same time?
If you use a failover connection string (identifying both brokers in it), your consumers/producers will use ActiveMQ's failover implementation and will connect/reconnect to the active node when needed. I don't think having two active instances with active clients is a good idea, unless you are trying to duplicate your processes (in which case there will be no synchronization).
To make both nodes (master and slave) always keep the same durable data, you need to persist your messages to a single place accessible to both nodes. It can be a JDBC adapter connected to a single database instance (possibly behind a cluster), or a NAS with a shared network folder for KahaDB.
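For example, the shared-KahaDB variant is a one-line change in each broker's activemq.xml; the directory below is a placeholder for a network folder visible to both nodes:

```xml
<persistenceAdapter>
  <kahaDB directory="/mnt/shared/kahadb"/>
</persistenceAdapter>
```

The first broker to obtain the file lock on that directory becomes the master; the other blocks as slave until the lock is released.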