ActiveMQ Artemis - Is the "classic" configuration the correct one to use for two-node replication with traffic from one virtual IP address?

In this post, if I read it correctly, it was suggested that replication with ActiveMQ Artemis could be achieved with only two nodes as follows:
However, it's still possible for the virtual IP address to direct traffic to one of the two VMs based on the broker's availability, since the backup broker will not be able to receive connections until the primary broker fails, at which point the backup broker will become active and begin accepting connections.
I'm assuming from the answer that this can be accomplished simply by using the classic HA policy: configure one broker as the master and the other as the slave, and make them part of the same cluster, as per the documentation.
Is this a correct assumption?

The point of the passage you quoted from my answer was really just about the use of a virtual IP address in front of a primary/backup pair of brokers. I didn't mean to imply anything about the configuration of the primary/backup pair itself.
In short, even with a virtual IP address in front of the pair of brokers, you still need a way to mitigate split brain, and the minimum viable way to do that is with a single ZooKeeper node. If you use the "classic" configuration approach then there will be no mitigation for split brain.
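For reference, here is a minimal sketch of what the ZooKeeper-backed replication configuration might look like in the primary's broker.xml, assuming a recent Artemis release (2.18+) where the ZooKeeper-based manager is the default, and a hypothetical ZooKeeper node at zk1:2181; the backup's configuration would mirror it with <backup> in place of <primary>:

<!-- sketch only: zk1:2181 is a hypothetical ZooKeeper address -->
<ha-policy>
  <replication>
    <primary>
      <manager>
        <properties>
          <!-- the single ZooKeeper node used to mitigate split brain -->
          <property key="connect-string" value="zk1:2181"/>
        </properties>
      </manager>
    </primary>
  </replication>
</ha-policy>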

Related

ActiveMQ replicated LevelDB with ZooKeeper, client must know all brokers?

The client must know all brokers when using the Failover Transport, right? Like this:
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
Is there an optimization so that the client does not have to know about the existence of each broker?
Put a TCP load balancer in front of the brokers and only forward requests to the master broker. The load balancer can determine which broker is online by checking the "Slave" attribute of the broker via Jolokia/JMX.
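For example, a health-check script behind the load balancer could query the broker's Jolokia endpoint; the path, port, brokerName, and credentials below are assumptions based on a default ActiveMQ 5.x install:

curl -u admin:admin http://broker1:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=broker1/Slave

Only the broker whose response reports "value": false (i.e. not a slave) would be kept in the load balancer's pool.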
A standalone approach would be to provide a URL to a comma-separated list of broker URLs to try in case of failure. This can be done using the updateURIsURL option in the failover URI.
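A sketch of that option, assuming a hypothetical file /var/activemq/urls.txt containing a comma-separated list such as tcp://broker1:61616,tcp://broker2:61616:

failover:(tcp://broker1:61616)?updateURIsURL=file:///var/activemq/urls.txt

The client then only needs to know the location of the list, and the list can be updated without touching the clients.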
There are also possibilities to auto-discover brokers using multicast or by querying an LDAP directory, but that requires certain infrastructure to be in place. Read more about it here.

How to send messages to specialised node in a cluster rabbitmq

I have a cluster of RabbitMQ servers, but the machines where the servers are hosted differ in terms of installed software. So they have no overlapping capabilities; they are specialized workers. For example, only one node has email software installed.
I know that a queue is bound to the node on which it is created. My question is: how can I set up my queues so that I can send certain messages to the specially equipped node, where my special software is waiting for work, bypassing RabbitMQ's round-robin message distribution?
Maybe that is not the right approach; I am open to any working solution.
You could always connect to the IP address of the specific node in the cluster, rather than connecting to some kind of load balancer in front of the cluster - that is, specify a different IP in the client's method for opening the connection. This of course defeats the purpose of the cluster, but it seems to me that your setup does the same thing :)
By RabbitMQ server, do you mean an actual RabbitMQ server or the clients/workers?
If I understand correctly, you can create a single exchange of type "topic". For each worker, create an exclusive queue and bind it to the exchange with some unique routing key, which in your case will be the feature of the host. When submitting a message to the exchange, use the required feature as the routing key. The message will be routed to the appropriate host.
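A minimal sketch of that idea with the RabbitMQ Java client; the host name "rabbit-host", the exchange name "features", and the routing key "email" are all hypothetical:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class FeatureRouting {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbit-host"); // hypothetical cluster entry point

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // one durable topic exchange shared by all workers
            channel.exchangeDeclare("features", "topic", true);

            // on the email-capable host: an exclusive, server-named queue
            // bound with that host's feature as the routing key
            String queue = channel.queueDeclare().getQueue();
            channel.queueBind(queue, "features", "email");

            // producer side: route by the required feature
            channel.basicPublish("features", "email", null,
                    "send this mail".getBytes("UTF-8"));
        }
    }
}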

Redis cluster via HAProxy

I have a Redis Cluster that clients are connecting to via HAProxy with a virtual IP. The Redis cluster has three nodes (each node shares its server with a running Sentinel instance).
My question is: when a client gets a "MOVED" error/message from a cluster node upon sending a request, does it bypass HAProxy the second time it connects, since it was provided with an IP:port when the MOVED message was issued? If not, how does HAProxy know to send it to the correct node the second time?
I just need to understand how this works under the hood.
If you want to use HAProxy in front of Redis Cluster nodes, you will need to either:
Set up an HAProxy instance for each master/slave pair, and wire up something to update HAProxy when a failover happens, as well as (probably) intercept the topology-related commands so that the responses report the virtual IPs rather than the IPs the nodes themselves have.
Customize HAProxy to teach it how to be a cluster-aware Redis client so the actual client doesn't know about the cluster at all. This means teaching it the Redis protocol, storing the cluster's topology information, and selecting the node to query based on the key(s) being accessed by the consumer code.
With Redis Cluster the client must be able to access every node in the cluster. Of the two options above, Option 2 is the "easier" one, but at this point I wouldn't recommend either.
Conceivably you could use the VIP as a "first place to get the topology info" IP, but I suspect you'd have serious issues develop, as that original IP would not be one of the ones properly reported as a node handling data. To avoid that problem you could simply use round-robin DNS, or pass the built-in "here is a list of cluster IPs (or names?)" to the initial connection configuration.
Your simplest, and least likely to be problematic, route is to go "full native": simply give your clients full and direct access to every node in the cluster and don't use HAProxy at all.
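For illustration, that "full native" approach with a cluster-aware Java client such as Lettuce would look roughly like this (host names are hypothetical; the client only needs a seed node or two to discover the rest of the topology, and it follows MOVED redirects on its own):

import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import java.util.Arrays;

public class ClusterNative {
    public static void main(String[] args) {
        // seed nodes; the full cluster topology is discovered from them
        RedisClusterClient client = RedisClusterClient.create(Arrays.asList(
                RedisURI.create("redis://redis1:6379"),
                RedisURI.create("redis://redis2:6379")));
        try (StatefulRedisClusterConnection<String, String> conn = client.connect()) {
            conn.sync().set("key", "value"); // routed to the right node by key slot
            System.out.println(conn.sync().get("key"));
        }
        client.shutdown();
    }
}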

ActiveMQ Master/Slave Pair with Network of Brokers

I was able to set up a Network of Brokers with the store-and-forward strategy, and it is working fine. I have now been given bigger machines and would like to set up a Master/Slave pair within the network of brokers. I understand that masters don't need any config changes, but each slave should indicate its corresponding master with a URI. However, I'm not very clear on what URI to specify in the client. I'm using the 5.6 release.
For example: two machines, with MasterA and SlaveB on one machine, and MasterB and SlaveA on the other. No network connectors between masters and slaves, but a network connector between MasterA and MasterB. I hope that I'm right up to this point. What about the client URI? I'm currently using the nio protocol at the clients, like failover:(nio://localhost1:61616,nio://localhost2:61616)?randomize=true. I specify randomize=true to balance the load between the brokers.
Please suggest what client URI I should use. Should I include all broker URIs or just the masters' URIs? Can I still use the nio protocol? I prefer to use randomize=true so that the load is balanced.
In the simplest case, the client URI should contain all four brokers, i.e. both pairs of master/slave URIs.
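For example, with two brokers per machine listening on different ports (host names and ports hypothetical):

failover:(nio://machine1:61616,nio://machine1:61617,nio://machine2:61616,nio://machine2:61617)?randomize=true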
For the network connectors, they will need to be prepared to bridge master to master or master to slave, whichever is available.
There is a new masterslave: discovery agent in 5.6 that simplifies the configuration for a networkconnector.
http://activemq.apache.org/networks-of-brokers.html#NetworksofBrokers-MasterSlaveDiscovery
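A sketch of that agent in MasterA's broker configuration; host names and ports are hypothetical, with the first URI pointing at the remote master and the second at its slave:

<networkConnector uri="masterslave:(tcp://machine2:61616,tcp://machine2:61617)"/>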

ActiveMQ - network of multiple brokers configuration

I'm trying to set up three brokers in a network for load balancing -- clients and producers can connect to any of these brokers.
Questions:
What is the recommended topology to use to network these brokers? More specifically, what networkConnector configuration should be used on each of these brokers? Should the duplex setting be enabled? (I guess the duplex setting depends on the topology we choose.)
A->B->C->A or A<-->B<-->C<-->A
Clients should use the failover protocol to connect to these brokers, right? e.g. failover:(tcp://b1:6161,tcp://b2:6161,tcp://b3:6161)
Is any duplicate message handling required on the client side in case of restarts? See http://forum.springsource.org/showthread.php?108461-Failover-issue-in-ActiveMQ -- it's not clear why the duplicate message issue exists here.
Ideally we want to set up the topology as shown in this post: http://edelsonmedia.com/?p=143 -- it's not clear how to set up the networkConnector on masters and slaves.
1.) I can't actually recommend a topology. This choice depends on the number of hops (between the broker where a message enters the cluster and the broker the consumer connects to) you can accept. In a heavy-traffic scenario every hop adds to the network load.
In my company we use a hypercube network (every broker knows every other broker) and it works great.
Generally you should make sure that your node configurations are as similar as possible. Using duplex connections means you have fewer connections to configure (since the connection from B to A is already part of the duplex connection from A to B), but it introduces a large number of differences into your config files.
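For example, a duplex connector on broker A might look like this (broker names hypothetical); with duplex="true" a single connector carries traffic in both directions, so broker B needs no matching connector back to A:

<networkConnector name="A-to-B" uri="static:(tcp://brokerB:61616)" duplex="true"/>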
Personally, I created my own start script for ActiveMQ that auto-generates the connection config based on the DNS names of my cluster (mycluster-01 to 06).
2.) Yes. You might want to add ?randomize=false if you want to make sure the client uses the first entry in the list.
3.) Duplicates can happen if there are failures during message transport, or as race conditions during heavy load. In general, a message is owned by only one broker at a time.
4.) Don't set up network connectors between masters and slaves (REALLY, DON'T). Use the Pure Master Slave feature of ActiveMQ and configure the master for each slave (you don't have to configure anything on the masters). For all the masters, configure network connectors to the other masters with failover to their slaves.
http://activemq.apache.org/pure-master-slave.html
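On MasterA, such a master-to-master connector with failover to the remote slave could look roughly like this (host names hypothetical):

<networkConnector name="toMasterB" uri="static:(failover:(tcp://masterB:61616,tcp://slaveB:61616))"/>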