RabbitMQ high availability queues without message replication

I have a RabbitMQ broker running on two nodes as a cluster. I have observed that if the node where a queue was created goes down, the queue is no longer available on the other node. If I try to publish a message from the other node, it fails. Even if I remove the failed node from the cluster (using the forget_cluster_node command) and try to publish from the remaining node, the behaviour is the same.
I don't want to enable mirroring of the queue, for the simple reason that it would replicate the messages, which would put additional load on the inter-node network.
Is there a way in RabbitMQ to achieve this?

The behaviour you are experiencing is the default behaviour of RabbitMQ, and it is exactly what is supposed to happen. The node where you created the queue is the queue's home node: if that node goes down, connections to it and the queues homed on it stop working. There are two options to resolve this issue.
One option is to have a separate queue for every node, and any node that wants to receive messages from a particular node subscribes to that queue's exchange. This is not a very good idea, since you would need to manage a lot of things for it.
The second option is to always declare the queue before you publish. If the queue is not available, a new one takes its place, all the subscribed nodes can keep listening, and any producer node can keep publishing to it. This resolves the problem of a node going down or being unavailable. From the docs:
before sending we need to make sure the recipient queue exists. If we send a message to non-existing location, RabbitMQ will just drop the message. Let's create a hello queue to which the message will be delivered:
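For reference, this is roughly what that looks like with the Python pika client; a minimal sketch, assuming a broker on localhost with default credentials (the queue name 'hello' comes from the tutorial, everything else is a placeholder):

import pika

# Connect to a broker node (host and credentials are placeholders).
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# queue_declare is idempotent: it creates the queue if it does not exist and
# does nothing if it already does, so it is safe to call before every publish.
channel.queue_declare(queue='hello')

# Publish via the default exchange; the routing key is the queue name.
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
connection.close()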

RabbitMQ lets you import and export definitions. Definitions are JSON files which contain all broker objects (queues, exchanges, bindings, users, virtual hosts, permissions and parameters). They do not include the messages in the queues.
You can export the definitions from the node that owns the queue and periodically import them on the other node(s) of the cluster. You have to enable the management plugin for this.
More information here: https://www.rabbitmq.com/management.html#configuration
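As an illustration, here is a rough sketch of such a periodic export/import against the management HTTP API, assuming default 'guest' credentials, port 15672 and placeholder hostnames (using the Python requests library):

import requests

SRC = 'http://rabbitmq-1:15672'   # node whose definitions you want to copy (placeholder)
DST = 'http://rabbitmq-2:15672'   # node to import them into (placeholder)
AUTH = ('guest', 'guest')         # management plugin credentials (placeholder)

# Export all definitions (queues, exchanges, bindings, users, vhosts, ...).
resp = requests.get(SRC + '/api/definitions', auth=AUTH)
resp.raise_for_status()

# Import them on the other node; note that messages are not part of definitions.
requests.post(DST + '/api/definitions', json=resp.json(), auth=AUTH).raise_for_status()

Scheduling something like this (for example with cron) gives the periodic synchronisation described above.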

Related

In our RabbitMQ setup, using federated exchanges, I see a redundant queue which is piling up messages. What is the reason?

We have the following setup: an upstream broker whose exchanges are federated to a downstream cluster consisting of the nodes rabbitmq-1 and rabbitmq-2.
Now, on the Upstream side, I see two connections to the Cluster. One to rabbitmq-1 and one to rabbitmq-2.
The one to rabbitmq-1 is piling up messages. Note the message count of 413'584.
In the downstream, on the Cluster, I see only the connection to rabbitmq-2.
If I delete the queue to rabbitmq-1 it reappears after some time.
Why are there two queues, and why is the one to rabbitmq-1 not processing any messages?
This happens in the following case:
Your cluster has no name defined. In that case the name of a node is used as the cluster name.
Your cluster is behind a load balancer which selects a node at random.
You use the load balancer URL to set up the federation upstream. In that case, when the node restarts, the connection is made from another node, which has a different name and therefore creates its own federation queue.
Solution
The easiest solution is to set the cluster name on any node in the cluster with the following command.
rabbitmqctl set_cluster_name "rabbitmq-cluster"
After that, all nodes in the cluster will return the same name, and no redundant exchanges or queues will be created.
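If you want to verify the result, the management API exposes the cluster name per node; a quick sketch in Python, assuming default credentials and placeholder hostnames:

import requests

# Every node should now report the same cluster name.
for host in ('rabbitmq-1', 'rabbitmq-2'):
    r = requests.get('http://%s:15672/api/cluster-name' % host, auth=('guest', 'guest'))
    print(host, r.json()['name'])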

Why would you run a messaging queue (eg RabbitMQ) cluster?

Overview
A RabbitMQ broker is a logical grouping of one or several Erlang nodes, each running the RabbitMQ application and sharing users, virtual hosts, queues, exchanges, bindings, and runtime parameters. Sometimes we refer to the collection of nodes as a cluster.
Why would you do this? I understand it increases the durability of messages (if a node goes down, the other nodes still receive messages). But what about performance? How does a cluster improve performance? Won't all consumers/producers connect to the master node's queue anyway? If so, aren't we still putting all the traffic on a single node regardless? Do we put a load balancer in front so traffic is directed at a different node each time?
How does a RabbitMQ cluster increase performance?
Well, right after that paragraph, the documentation states the following:
What is Replicated?
All data/state required for the operation of a RabbitMQ broker is replicated across all nodes. An exception to this are message queues, which by default reside on one node, though they are visible and reachable from all nodes. To replicate queues across nodes in a cluster, see the documentation on high availability (note that you will need a working cluster first).
So, you would cluster to provide higher capacity in your RabbitMQ broker than a single node can provide alone. Note that clustering by itself is not a high-availability strategy.
Your assertion that message durability is increased is false, as message queues continue to reside on one broker (unless mirroring is used).
By default, contents of a queue within a RabbitMQ cluster are located on a single node (the node on which the queue was declared) [1]
Without mirroring, when that node goes down, the messages on it will be lost and the queue will be unavailable until that node comes back. RabbitMQ also does not handle network partitions well, so this can be a bit of a problem.
"Aren't we still getting traffic on a single node regardless?" - if you only have one queue, then yes. However, a bigger question is "why would you run a message broker with only one queue?" Similarly, if you only create queues on one node, then you will still have one point of failure in the system.

RabbitMQ Management Console - node name

Can I change the node name from the RabbitMQ Management Console for a specific queue? I tried, but I think it was set when I started my app. Can I change it afterwards? My queue is on node RabbitMQ1 and my connection is on node RabbitMQ2, so I cannot read messages from that queue. Maybe I can change my connection node?
The node name is not just a label; it is where the queue physically resides. By default queues are not distributed/mirrored but are created on the node the application connected to, as you correctly guessed.
However, you can make your queue mirrored using policies, so you can consume messages from both servers.
https://www.rabbitmq.com/ha.html
You can change the policy for the queues by using the rabbitmqctl command or from the management console, admin -> policies.
You need to synchronize the queue in order to clone the old messages to the mirror queue with:
rabbitmqctl sync_queue <queue_name>
Newly published messages will end up in both copies of the queue, and can be consumed from either node (the same message will not be consumed from both).
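If you prefer not to click through the console, the same kind of policy can also be created over the management HTTP API; a sketch, assuming the default vhost '/', default credentials, and a made-up queue name and policy name:

import requests

# Mirror every queue whose name matches the pattern to all nodes ("ha-mode": "all").
policy = {'pattern': '^my-queue$',
          'definition': {'ha-mode': 'all'},
          'apply-to': 'queues'}
requests.put('http://rabbitmq-1:15672/api/policies/%2F/mirror-my-queue',
             json=policy, auth=('guest', 'guest')).raise_for_status()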

When to use RabbitMQ shovels and when Federation plugin?

For the company I work for, we would like to use RabbitMQ as our main message bus. The idea is that every application uses its own vhost for internal communication, and that via the shovel or federation plugin we make it possible to share certain types of events across multiple vhosts (maybe even multiple, non-clustered machines).
We chose a vhost per application to separate internal communication from public events and to keep security adjustable per application.
Based on the information published on the RabbitMQ website, I don't understand when to choose shovels and when to choose the federation plugin.
RabbitMQ has the following explanation when to use what:
Typically you would use the shovel to link brokers across the internet when you need more control than federation provides.
What is the fine-grained control in shovels that I am missing when I choose federation?
At this moment I think I would prefer the federation plugin, because I could automate the inter-vhost communication via the REST API provided by the federation plugin.
In the case of shovels, I would need to change the shovel configuration and restart the RabbitMQ instance every time we want to share an event between vhosts. Are my thoughts correct about this?
We are currently running RMQ on Windows with clients connecting from .NET. In the near future Java/Perl/PHP clients will join.
To summarize my questions:
What is the fine-grained control in shovels that I am missing when I choose federation?
Is it correct that the only way to change the inter-vhost communication when I use shovels is by changing the config file and rebooting the instance?
Does the setup (vhost per application) make sense, or am I missing the point completely?
Shovels and federation provide different means of forwarding messages from one RabbitMQ node to another.
Federated Exchange
With a federated exchange, queues can be connected to the exchange on the upstream (source) node as before. In addition, an exchange on the downstream (destination) node will receive a copy of the messages that are published to the upstream exchange.
Federated exchanges are similar to exchange-to-exchange bindings, in that they can (optionally) subscribe to a limited set of messages from an upstream exchange.
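As an aside, federation is driven by runtime parameters and policies rather than application code, which is also what makes it scriptable over the REST API. A rough sketch of federating an exchange via the management HTTP API (Python requests; the hostnames, upstream name, exchange pattern and credentials are all placeholders):

import requests

AUTH = ('guest', 'guest')                    # management credentials (placeholder)
DOWNSTREAM = 'http://downstream-node:15672'  # broker that receives the copies (placeholder)

# 1. Tell the downstream broker where its upstream lives ('%2F' is the default vhost '/').
upstream = {'value': {'uri': 'amqp://upstream-node'}}
requests.put(DOWNSTREAM + '/api/parameters/federation-upstream/%2F/my-upstream',
             json=upstream, auth=AUTH).raise_for_status()

# 2. Apply a policy so that matching exchanges are federated from that upstream set.
policy = {'pattern': '^events$',
          'definition': {'federation-upstream-set': 'all'},
          'apply-to': 'exchanges'}
requests.put(DOWNSTREAM + '/api/policies/%2F/federate-events',
             json=policy, auth=AUTH).raise_for_status()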
Federated Queue
(NOTE: These are new in RabbitMQ 3.2.x)
With a federated queue, consumers can be connected to the queue on both the upstream(source) and downstream(destination) nodes.
In essence the downstream queue is a consumer on the upstream queue, with the expectation that there will be additional downstream consumers that process the messages in the same manner as a consumer attached to the upstream queue.
Any messages consumed by the downstream (federated) queue will not be available for consumers on the upstream queue.
Use Case:
If consumers are being migrated from one node to another, a federated queue will allow this to happen without messages being missed, or processed twice.
Use Case: from the RabbitMQ docs
The typical use would be to have the same "logical" queue distributed over many brokers. Each broker would declare a federated queue with all the other federated queues upstream. (The links would form a complete bi-directional graph on n queues.)
Shovel
Shovels, on the other hand, attach an "upstream" queue to a "downstream" exchange. (I place the terms in quotes because the shovel documentation does not describe the nodes with the same semantics as the federation documentation.)
The shovel consumes the messages from the queue and sends them to the exchange on the destination node. (NOTE: While not normally discussed as part of this pattern, there is nothing stopping a consumer from connecting to the queue on the origin node.)
To answer the specific questions:
What is the fine-grained control in shovels that I am missing when I choose federation?
A shovel does not have to reside on an "upstream" or "downstream" node. It can be configured and operate from an independent node.
A shovel can create all of the elements of the linkage by itself: the source queue, the bindings of the queue, and the destination exchange. Thus, it is non-invasive to either the source or destination node.
Is it correct that the only way to change the inter-vhost communication when I use shovels is by changing the config file and rebooting the instance?
This has generally been the accepted downside of the shovel.
With the following command (caveat: only tested on RabbitMQ 3.1.x, and with a very specific rabbitmq.config file that contains only the rabbit and rabbitmq_shovel sections) you can reload a shovel configuration from the specified file (in this case /etc/rabbitmq/rabbitmq.config):
rabbitmqctl eval 'application:stop(rabbitmq_shovel), {ok, [[{rabbit, _}|[{rabbitmq_shovel, [{shovels, Shovels}] }]]]} = file:consult("/etc/rabbitmq/rabbitmq.config"), application:set_env(rabbitmq_shovel, shovels, Shovels), application:start(rabbitmq_shovel).'
Does the setup (vhost per application) make sense, or am I missing the point completely?
This decision is going to depend on your use case. vhosts primarily provide logical (and access) separation between queues/exchanges and authorized users.
Shovel acts like a well-designed built-in consumer. It can consume messages from a source broker and queue, and publish them into a destination broker and exchange. You could write an application to do that, but shovel already got it right - if all you need is to move messages from a queue to an exchange in the same or another broker, shovel can do it for you. Just as a well-behaving app, it can declare exchanges/queues/bindings, reconnect, change the routing key etc. You can set it up on the source or on the destination broker, or even use a third broker. It's basically an AMQP client.
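To make that concrete, here is roughly what a shovel does, hand-rolled as a tiny Python pika client; the broker hostnames, the queue and the exchange are invented for the example:

import pika

# Consume from a queue on the source broker and republish to an exchange on the
# destination broker - essentially what a shovel does for you out of the box.
src = pika.BlockingConnection(pika.ConnectionParameters(host='source-broker'))
dst = pika.BlockingConnection(pika.ConnectionParameters(host='destination-broker'))
src_ch, dst_ch = src.channel(), dst.channel()

def relay(ch, method, properties, body):
    dst_ch.basic_publish(exchange='events', routing_key=method.routing_key, body=body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after republishing

src_ch.basic_consume(queue='outbound-events', on_message_callback=relay)
src_ch.start_consuming()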
Federation, on the other hand, is used to connect your broker to one or multiple upstream brokers, or you can even create chains of brokers, bending the topology any way you like. You can federate exchanges or queues, and, for example, distribute messages to multiple brokers without having to bind additional queues to a topic exchange or use a fanout exchange and then shovel messages from each queue to a downstream broker.
To recap, federation operates at a higher level, while shovel is mostly "just" a well-written client.
To reconfigure shovel, you have to restart the broker, unfortunately.
I don't think you really need a per-app vhost. You can add a per-app user to the broker without separate vhosts. I'm not sure what you mean by "share an event between vhosts", though.

message deleted from queue

I have used a BlockingQueue implementation to process my events from a queue with services. However, if the server goes down, all my events in that queue are deleted, and so I am missing events to process. (I am looking for some internal DB where the server can store the events/messages from the queue, so that if the server goes down and comes up again it can load all events/messages to process again, without manual intervention.)
Any help on this? I am not sure if I should use Apache ActiveMQ. I am using Apache ServiceMix.
Thanks in advance.
I cannot answer how to do this with BlockingQueue.
But ActiveMQ has two features that you will benefit from:
Persistent Queues, and possibly you might also want to look at Durable Queues.
It has a built-in database that does exactly this under the hood and allows messages to be persisted in the queue even if the broker or consumer has to restart.