I am trying to set up a cluster of brokers which should have the same features as a RabbitMQ cluster, but over a WAN (my machines are in different locations), so a RabbitMQ cluster does not work.
I am looking at alternatives. RabbitMQ federation just backs up the messages on the downstream broker; it cannot make sure both brokers have exactly the same messages available at any time (the downstream still keeps old messages that were already consumed on the upstream).
What about ActiveMQ Master/Slave? I have found:
http://activemq.apache.org/how-do-distributed-queues-work.html
"queues and topics are all replicated between each broker in the cluster (so often to a master and maybe a single slave). So each broker in the cluster has exactly the same messages available at any time so if a master fails, clients failover to a slave and you don't loose a message."
My concern is whether it automatically updates to make sure the master and slaves always have the same messages, which would mean that messages consumed on the master also disappear from the slaves.
Thanks :)
ActiveMQ has various clustering features.
First there is High Availability - "Master/Slave". The idea is that several physical servers act as a single logical ActiveMQ broker. If one goes down, another takes its place without losing data. You can do that by sharing the message store (shared file system or shared JDBC), or you can set up a replicated cluster, which replicates reads/writes on the master down to all slaves (you need three or more servers). ActiveMQ uses LevelDB and Apache ZooKeeper to achieve this.
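As a rough sketch, the replicated variant is configured in each broker's activemq.xml roughly like this (the ZooKeeper addresses, hostname, and directory below are placeholders, not something from your setup):

<persistenceAdapter>
  <!-- replicated LevelDB store: writes on the elected master are replicated to the slaves -->
  <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="3"
      bind="tcp://0.0.0.0:0"
      zkAddress="zk1:2181,zk2:2181,zk3:2181"
      zkPath="/activemq/leveldb-stores"
      hostname="broker1"/>
</persistenceAdapter>

With replicas="3", a store update is considered complete once a quorum (two of the three nodes) has it, and ZooKeeper is used to elect which broker acts as the master.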
The other form of clustering available in ActiveMQ lets you distribute load and separate security concerns over several logical brokers. Brokers are then connected in a network of brokers. By default, messages are passed around to the broker that has available consumers for them. However, there is a rich toolbox of features in ActiveMQ to tweak a network of brokers to do things such as always sending a copy of a message to a specific broker, etc. It takes some messing with the more advanced features though (static network connectors and queue mirroring, maybe more).
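A minimal network-of-brokers sketch on broker A, assuming two other brokers reachable as brokerB and brokerC (the host names and the audit queue are made-up examples):

<networkConnectors>
  <!-- forward messages to brokerB when it has interested consumers -->
  <networkConnector name="A-to-B" uri="static:(tcp://brokerB:61616)" duplex="false"/>
  <!-- same for brokerC, but always push copies of this queue regardless of consumer demand -->
  <networkConnector name="A-to-C" uri="static:(tcp://brokerC:61616)" duplex="false">
    <staticallyIncludedDestinations>
      <queue physicalName="audit.queue"/>
    </staticallyIncludedDestinations>
  </networkConnector>
</networkConnectors>

With duplex="false" you need the mirror-image connectors on the other brokers as well; duplex="true" makes a single connection carry traffic in both directions.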
Maybe there is a better way to meet your requirements, which are not really specified in the question?
Overview
A RabbitMQ broker is a logical grouping of one or several Erlang
nodes, each running the RabbitMQ application and sharing users,
virtual hosts, queues, exchanges, bindings, and runtime parameters.
Sometimes we refer to the collection of nodes as a cluster.
Why would you do this? I understand it increases the durability of messages (if a node goes down, other queues still get the messages). But what about performance? How does a cluster improve performance? Won't all consumers/producers connect to the master node's queue anyway? If so, aren't we still getting traffic on a single node regardless? Do we put a load balancer in front so traffic is directed at different nodes each time?
How does a RabbitMQ cluster increase performance?
Well, right after that paragraph, the documentation states the following:
What is Replicated?
All data/state required for the operation of a RabbitMQ broker is
replicated across all nodes. An exception to this are message queues,
which by default reside on one node, though they are visible and
reachable from all nodes. To replicate queues across nodes in a
cluster, see the documentation on high availability (note that you
will need a working cluster first).
So, you would cluster to provide higher capacity in your RabbitMQ broker than a single node can provide alone. Note that clustering by itself is not a high-availability strategy.
Your assertion that message durability is increased is false, as message queues continue to reside on one broker (unless mirroring is used).
By default, contents of a queue within a RabbitMQ cluster are located on a single node (the node on which the queue was declared) [1]
Without mirroring, when that node goes down, messages on it will be lost. The cluster will put the queue onto a different node. RabbitMQ does not handle network partitions well, so this can be a bit of a problem.
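For reference, mirroring is switched on with a policy rather than per queue in code; a minimal sketch (the policy name ha-all and the queue-name pattern are just examples):

rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'

This mirrors every queue whose name starts with "ha." across all nodes in the cluster.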
"Aren't we still getting traffic on a single node regardless?" - if you only have one queue, then yes. However, a bigger question is "why would you run a message broker with only one queue?" Similarly, if you only create queues on one node, then you will still have one point of failure in the system.
Am I correct in understanding that the best way to provide queue reliability is a network of master-slave brokers (for example, master/slave using ZooKeeper)?
In the consumers' and producers' failover settings we set the masters' addresses, and when one of the masters goes offline, its slave takes over, so the other master-slave nodes of the broker network get that master's messages from its slave and we don't lose messages.
When the failed master comes back online, it gets new consumers and producers and receives some messages.
Am I right?
There are two ways to provide high availability with ActiveMQ.
Master/slave setup using a shared store. For KahaDB (the default store), that would be a shared disk somewhere: an NFS or Windows file share or similar. There are many ways to create a reliable shared disk, such as a SAN (a config sketch follows below).
Replicated master/slave. That would be LevelDB with Zookeeper. If you can't get a high performance, reliable shared disk, this would be your best option.
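A minimal sketch of the shared-store option in activemq.xml, assuming the shared disk is mounted at the same path on both machines (the path is a placeholder):

<persistenceAdapter>
  <!-- master and slave point at the same directory on the shared disk; -->
  <!-- whichever broker obtains the file lock first acts as master, the other waits -->
  <kahaDB directory="/mnt/shared/activemq/kahadb"/>
</persistenceAdapter>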
You are correct that the client should use a failover address when it connects.
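For a master/slave pair, that failover address could look something like this (the host names are placeholders):

failover:(tcp://master-host:61616,tcp://slave-host:61616)

The client connects to whichever broker is currently accepting connections and automatically reconnects to the other one when the connection drops.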
I am new to RabbitMQ and I am not able to understand the concept here. Here is the scenario.
I have two machines (RMQ1, RMQ2), both of which have RabbitMQ installed and running. I then clustered RMQ2 to join RMQ1:
cmd:/> rabbitmqctl join_cluster rabbit@RMQ1
If you check the status of the machines, it looks like this:
In RMQ1
c:\> rabbitmqctl cluster_status
Cluster status of node rabbit@RMQ1...
[{nodes,[{disc,[rabbit@RMQ1,rabbit@RMQ2]}]},
{running_nodes,[rabbit@RMQ1,rabbit@RMQ2]}]
In RMQ2
c:\> rabbitmqctl cluster_status
Cluster status of node rabbit@RMQ2 ...
[{nodes,[{disc,[rabbit@RMQ1,rabbit@RMQ2]}]},
{running_nodes,[rabbit@RMQ1,rabbit@RMQ2]}]
Then, in order to publish and subscribe messages, I connect to RMQ1. Now whenever I send a message to RMQ1, I see the message mirrored in both RMQ1 and RMQ2. I understand this as: since both nodes are in the same cluster, messages are mirrored across the nodes.
Let's say I bring down RMQ2; I can still publish messages to RMQ1.
But when I bring down RMQ1, I cannot publish messages anymore. From this I understand that RMQ1 is the master and RMQ2 is the slave.
Now I have the questions below, without changing the code:
How do I make RMQ2 take over the job of accepting messages?
What is the meaning of highly available queues?
What should the strategy be for implementing this kind of scenario?
Please help
Question #2 is best answered first, since it will clear up a lot of things for you.
What is the meaning of highly available queues?
A good source of information for this is the Rabbit doc on high availability. It's very important to understand that mirroring (which is how you achieve high availability in Rabbit) and clustering are not the same thing. You need to create a cluster in order to mirror, but mirroring doesn't happen automatically just because you create a cluster.
When you cluster Rabbit, the nodes in the cluster share exchanges, bindings, permissions, and other resources. This allows you to manage the cluster as a single logical broker and utilize it for scenarios such as load-balancing. However, even though queues in a cluster are accessible from any machine in the cluster, each queue and its messages are still actually located only on the single node where the queue was declared.
This is why, in your case, bringing down RMQ1 will make the queues and messages unavailable. If that's the node you always connect to, then that's where those queues reside. They simply do not exist on RMQ2.
In addition, even if there are queues and messages on RMQ2, you will not be able to access them unless you specifically connect to RMQ2 after you detect that your connection to RMQ1 has been lost. Rabbit will not automatically connect you to some surviving node in a cluster.
By the way, if you look at a cluster in the RabbitMQ management console, what you see might make you think that the messages and queues are replicated. They are not. You are looking at the cluster in the management console. So regardless of which node you connect to in the console, you will see a cluster-wide view.
So with this background now you know the answer to your other two questions:
What should the strategy be for implementing high availability? / How do I make RMQ2 accept messages?
From your description, you are looking for the failover that high availability is intended to provide. You need to enable this on your cluster. This is done through a policy, and there are various ways to do it, but the easiest way is in the management console, on the Admin tab in the Policies section.
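If you prefer the command line over the console, the same thing can be done with rabbitmqctl; a minimal sketch (the policy name ha-all is arbitrary, ".*" matches every queue, and the quoting shown is for a Windows cmd prompt since that is what your snippets use):

rabbitmqctl set_policy ha-all ".*" "{""ha-mode"":""all""}"

The first argument is the policy name, the second is a regular expression matched against queue names, and the third is the policy definition that turns on mirroring.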
The previously cited doc has more detail on what it means to configure high availability in Rabbit.
What this will give you is mirroring of queues and messages across your cluster. That way, if RMQ1 fails then RMQ2 will still have your queues and messages since they are mirrored across both nodes.
An important note is that Rabbit will not automatically detect a loss of connection to RMQ1 and connect you to RMQ2. Your client needs to do this. I see you tagged your question with EasyNetQ. EasyNetQ provides this "failover connect" type of feature for you. You just need to supply both node hosts in the connection string. The EasyNetQ doc on clustering has details. Note that EasyNetQ even lets you inject a simple load balancing strategy in this case as well.
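If I remember the EasyNetQ syntax correctly, the cluster connection string is just a comma-separated host list, something like the following (the node names are taken from your question, the credentials are placeholders):

host=RMQ1,RMQ2;username=guest;password=guest

EasyNetQ will then try the listed hosts and reconnect to a surviving one when the node it is currently using goes down.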
I'm trying to set up three brokers in a network for load balancing -- clients and producers can connect to any of these brokers.
Questions:
1. What is the recommended topology to use to network these brokers? More specifically, what is the networkConnector configuration to use on each of these brokers? Should the duplex setting be enabled? (I guess the duplex setting depends on the topology we choose)
A->B->C->A or A<-->B<-->C<-->A
2. The clients should use the failover protocol to connect to these brokers, right? e.g. failover://(tcp://b1:6161, tcp://b2:6161, tcp://b3:6161)
3. Is any duplicate message handling required on the client side in case of restarts? See http://forum.springsource.org/showthread.php?108461-Failover-issue-in-ActiveMQ -- it's not clear why a duplicate message issue exists here.
4. Ideally we want to set up the topology as shown in this post: http://edelsonmedia.com/?p=143 -- it's not clear how to set up the networkConnector on the masters and slaves.
1.) I can't actually recommend a topology. This choice depends on the number of hops (between the broker where a message enters the cluster and the broker the consumer connects to) you can accept. In a heavy-traffic scenario every hop adds to the network load.
In my company we use a hypercube network (every broker knows every other broker) and it works great.
Generally, you should make sure that your node configurations are as similar as possible. Using duplex means you have fewer connections to configure (since the connection from B to A is already part of the duplex connection from A to B), but it introduces a large number of differences into your config files.
Personally, I created my own start script for ActiveMQ that auto-generates the connection config based on the DNS names of my cluster (mycluster-01 to 06).
2.) Yes. You might want to add ?randomize=false if you want to make sure the client uses the first entry in the list (see the example URI after this list).
3.) Duplicate messages can happen if there are failures during message transport, or as race conditions during heavy load. In general, a message is owned by only one broker at a time.
4.) Don't set up network connectors between masters and slaves (REALLY, DON'T). Use the pure Master/Slave feature of ActiveMQ and configure the master for each slave (you don't have to configure anything on the masters). For all the masters, configure network connectors to the other masters with failover to their slaves (a rough config sketch follows below).
http://activemq.apache.org/pure-master-slave.html
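To make points 2 and 4 concrete: the client URI from point 2 could look like this (hosts and ports taken from your question):

failover:(tcp://b1:6161,tcp://b2:6161,tcp://b3:6161)?randomize=false

Without randomize=false the failover transport picks one of the listed brokers at random; with it, the client tries them in the listed order.

And a rough sketch of point 4 as it might look on master A in activemq.xml (host names are placeholders; this is one way to express "connect to the other master with failover to its slave", using whichever of the B pair is currently active):

<networkConnectors>
  <!-- network connection from master A to the B master/slave pair -->
  <networkConnector name="A-to-pairB"
      uri="static:(failover:(tcp://masterB:61616,tcp://slaveB:61616))"
      duplex="true"/>
</networkConnectors>

With duplex="true" one such connector between each pair of masters is enough; otherwise you would also configure the mirror-image connector on master B.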
I want to set up RabbitMQ as a two (or more) node cluster with HA.
Use case: a client producer app (C#.NET) knows that the cluster has two nodes and publishes to the cluster. Various consumer apps (also C#.NET) connect to the cluster and get all messages generated by the producer. So long as at least one node is up and running the producer and consumers will all continue to work without error. Supposing nodes A and B are running and B dies for a while, then gets restarted, then a while later A dies, the clients all continue to function without receiving an error since at all times at least one node is up.
Can it be made to work like this out of the box?
Are there any other MQs that would be more appropriate (commercial ok) for a Windows/.NET application environment?
RabbitMQ v2.6.0 now supports high-availability queues using active/active clustering. Microsoft and a number of other companies have collaborated on Apache QPid which has C# bindings and which also supports active/active HA clustering.
Can it be made to work like this out of the box?
No. When a node goes down, all of its connections are closed. Since AMQP connections are stateful, there's no way around this. What you could achieve is 1) broker goes down, 2) all clients disconnect, 3) clients connect to other node (masquerading as original) and are none the wiser.
On a side note, rabbit does not support active-active HA clustering at the moment. It does support active-passive clustering and a form of logical clustering (which might be what you're looking for).