Why does each OpenStack Nova service have 3 RabbitMQ consumers associated with it? - rabbitmq

In OpenStack Nova, different types of RabbitMQ consumers (e.g. topic consumer, node_topic consumer, fanout consumer) are associated with each Nova service (e.g. nova-scheduler, nova-compute, etc.). The following lines are an excerpt from https://github.com/openstack/nova/blob/master/nova/service.py, which creates 3 consumers for each Nova service. FYI: service.py is a wrapper for generating Nova services.
self.conn.create_consumer(self.topic, rpc_dispatcher, fanout=False)   # shared topic consumer
node_topic = '%s.%s' % (self.topic, self.host)                        # e.g. 'compute.myhost'
self.conn.create_consumer(node_topic, rpc_dispatcher, fanout=False)   # per-node topic consumer
self.conn.create_consumer(self.topic, rpc_dispatcher, fanout=True)    # fanout (broadcast) consumer
I guess each consumer is associated with a different RabbitMQ queue. I can see that we need the node_topic consumer for sending messages directly to a specific node, but what might be the purposes of the other two consumers (the topic and fanout consumers) in each service?
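For reference, here is my mental model of how the three consumers might map onto plain AMQP, sketched with pika (the queue/exchange names and topology are my guesses, not code from Nova itself):

import pika  # hypothetical standalone example, not Nova code

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
topic, host = 'compute', 'host1'  # placeholder service topic and host name

# 1. Shared topic queue: every instance of the service consumes from the same
#    queue, so messages addressed to the topic are load-balanced among them.
channel.queue_declare(queue=topic)
channel.basic_consume(queue=topic, on_message_callback=print, auto_ack=True)

# 2. Per-node queue ('compute.host1'): lets a caller address this host directly.
node_topic = '%s.%s' % (topic, host)
channel.queue_declare(queue=node_topic)
channel.basic_consume(queue=node_topic, on_message_callback=print, auto_ack=True)

# 3. Fanout: a broadcast exchange plus one private queue per service instance,
#    so every instance sees every message.
channel.exchange_declare(exchange=topic + '_fanout', exchange_type='fanout')
result = channel.queue_declare(queue='', exclusive=True)  # broker-named private queue
channel.queue_bind(exchange=topic + '_fanout', queue=result.method.queue)
channel.basic_consume(queue=result.method.queue, on_message_callback=print, auto_ack=True)

channel.start_consuming()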
Also, when I list the exchanges of RabbitMQ on my devstack node with the following command, it shows a huge number (11685) of exchanges:
stack@9734efd5-6fcd-4127-92b4-6715a66fda9d:/opt/stack$ sudo rabbitmqctl list_exchanges | wc -l
11685
Can someone explain why there are so many exchanges in the RabbitMQ broker of the OpenStack devstack implementation?

Related

Connect multiple RabbitMQ instances to each other

I have been searching, and I couldn't find whether it is possible to connect two RabbitMQ instances together. I am thinking of this as an alternative to RabbitMQ's clustering feature.
My goal is that each message a broker receives is routed to another broker. Do the exchanges or queues in RabbitMQ allow this architecture?
Producer -> Broker <-> Broker -> Consumer
You can use exchange federation: messages published to the original (upstream) broker are also delivered to the other (downstream) broker.
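For example, with the federation plugin enabled on the downstream broker, something along these lines sets it up (the upstream URI, policy name, and exchange pattern are placeholders):

rabbitmq-plugins enable rabbitmq_federation
rabbitmqctl set_parameter federation-upstream origin '{"uri":"amqp://user:pass@origin-host"}'
rabbitmqctl set_policy --apply-to exchanges federate-me "^federated\." '{"federation-upstream-set":"all"}'

Exchanges on the downstream broker whose names match the pattern will then receive a copy of the messages published to the corresponding exchanges on the upstream broker.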

RabbitMQ Set the HA Policy

I know the HA Policy is set by the following command:
$ rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
My question which seems basic:
Do I have to issue this command on each node or just one of them?
RabbitMQ automatically distributes the policy across the cluster, so it does not matter which node you run the command on; the policy will be propagated to the other nodes.
Please read here: https://www.rabbitmq.com/clustering.html
A RabbitMQ broker is a logical grouping of one or several Erlang nodes, each running the RabbitMQ application and sharing users, virtual hosts, queues, exchanges, bindings, and runtime parameters. Sometimes we refer to the collection of nodes as a cluster.
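For example, assuming a two-node cluster (node names here are placeholders), you can set the policy against one node and see it from the other:

$ rabbitmqctl -n rabbit@node1 set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
$ rabbitmqctl -n rabbit@node2 list_policies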

When to use RabbitMQ shovels and when the Federation plugin?

For the company I work for, we would like to use RabbitMQ as our main message bus. The idea is that every application uses its own vhost for internal communication, and that via the shovel or federation plugin we make it possible to share certain types of events across multiple vhosts (maybe even across multiple non-clustered machines).
We chose a vhost per application to separate internal communication from public events and to keep security adjustable per application.
Based on the information published on the RabbitMQ website, I don't understand when I should choose shovels and when the federation plugin.
RabbitMQ has the following explanation when to use what:
Typically you would use the shovel to link brokers across the internet when you need more control than federation provides.
What is the fine-grained control in shovels that I am missing when I choose federation?
At this moment I think I would prefer the federation plugin, because I could automate the inter-vhost communication via the REST API provided by the federation plugin.
In the case of shovels, I would need to change the shovel configuration and reboot the RabbitMQ instance every time we want to share an event between vhosts. Are my thoughts correct about this?
We are currently running RMQ on Windows with clients connecting from .NET. In the near future Java/Perl/PHP clients will join.
To summarize my questions:
1. What is the fine-grained control in shovels that I am missing when I choose federation?
2. Is it correct that the only way to change the inter-vhost communication when I use shovels is by changing the config file and rebooting the instance?
3. Does the setup (vhost per application) make sense, or am I missing the point completely?
Shovels and federation provide different means to forward messages from one RabbitMQ node to another.
Federated Exchange
With a federated exchange, queues can be bound to the exchange on the upstream (source) node as usual. In addition, the exchange on the downstream (destination) node will receive a copy of the messages that are published to the upstream exchange.
Federated exchanges are similar to exchange-to-exchange bindings, in that they can (optionally) subscribe to a limited set of messages from an upstream exchange.
Federated Queue
(NOTE: These are new in RabbitMQ 3.2.x)
With a federated queue, consumers can be connected to the queue on both the upstream (source) and downstream (destination) nodes.
In essence, the downstream queue is a consumer on the upstream queue, with the expectation that there will be additional downstream consumers that process the messages in the same manner as a consumer attached to the upstream queue.
Any messages consumed by the downstream (federated) queue will not be available to consumers on the upstream queue.
Use Case:
If consumers are being migrated from one node to another, a federated queue allows this to happen without messages being missed or processed twice.
Use Case: from the RabbitMQ docs
The typical use would be to have the same "logical" queue distributed over many brokers. Each broker would declare a federated queue with all the other federated queues upstream. (The links would form a complete bi-directional graph on n queues.)
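As a sketch, federated queues use the same upstream/policy mechanism as federated exchanges, just applied to queues (the upstream name, URI, and queue pattern are placeholders):

rabbitmqctl set_parameter federation-upstream other-broker '{"uri":"amqp://user:pass@other-host"}'
rabbitmqctl set_policy --apply-to queues federate-queues "^fed\." '{"federation-upstream-set":"all"}'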
Shovel
Shovels, on the other hand, attach an "upstream" queue to a "downstream" exchange. (I place the terms in quotes because the shovel documentation does not describe the nodes with the same semantics as the federation documentation.)
The shovel consumes the messages from the queue and sends them to the exchange on the destination node. (NOTE: While not normally discussed as part of this pattern, there is nothing stopping a consumer from connecting to the queue on the origin node.)
To answer the specific questions:
What is the fine-grained control in shovels that I am missing when I choose federation?
A shovel does not have to reside on an "upstream" or "downstream" node. It can be configured on, and operate from, an independent node.
A shovel can create all of the elements of the linkage by itself: the source queue, the bindings of the queue, and the destination exchange. Thus, it is non-invasive to either the source or destination node.
Is it correct that the only way to change the inter-vhost communication when I use shovels is by changing the config file and rebooting the instance?
This has generally been the accepted downside of the shovel.
With the following command (caveat: only tested on RabbitMQ 3.1.x, and with a very specific rabbitmq.config file that contains only the rabbit and rabbitmq_shovel sections) you can reload a shovel configuration from the specified file (in this case /etc/rabbitmq/rabbitmq.config):
rabbitmqctl eval 'application:stop(rabbitmq_shovel), {ok, [[{rabbit, _}|[{rabbitmq_shovel, [{shovels, Shovels}] }]]]} = file:consult("/etc/rabbitmq/rabbitmq.config"), application:set_env(rabbitmq_shovel, shovels, Shovels), application:start(rabbitmq_shovel).'
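For reference, here is a minimal sketch of a rabbitmq.config in the shape that the command above pattern-matches (host, queue, and exchange names are placeholders; the static-shovel keys are the 3.1-era ones and should be checked against your version's shovel docs):

[{rabbit, []},
 {rabbitmq_shovel,
  [{shovels,
    [{my_shovel,
      [{sources,      [{broker, "amqp://guest:guest@source-host"}]},
       {destinations, [{broker, "amqp://guest:guest@dest-host"}]},
       {queue, <<"source-queue">>},
       {publish_fields, [{exchange, <<"dest-exchange">>}]}]}]}]}].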
Does the setup (vhost per application) make sense or am I missing the point completely?
This decision is going to depend on your use case. vhosts primarily provide logical (and access) separation between queues/exchanges and authorized users.
Shovel acts like a well-designed built-in consumer. It can consume messages from a source broker and queue, and publish them to a destination broker and exchange. You could write an application to do that, but the shovel already gets it right: if all you need is to move messages from a queue to an exchange in the same or another broker, the shovel can do it for you. Just like a well-behaved app, it can declare exchanges/queues/bindings, reconnect, change the routing key, etc. You can set it up on the source or on the destination broker, or even use a third broker. It's basically an AMQP client.
Federation, on the other hand, is used to connect your broker to one or multiple upstream brokers, and you can even create chains of brokers, bending the topology any way you like. You can federate exchanges or queues, and, for example, distribute messages to multiple brokers without having to bind additional queues to a topic exchange or use a fanout exchange and then shovel messages from each queue to a downstream broker.
To recap, federation operates at a higher level, while shovel is mostly "just" a well-written client.
To reconfigure a shovel, you unfortunately have to restart the broker.
I don't think you really need a per-app vhost. You can add a per-app user to the broker without separate vhosts. I'm not sure what you mean by "share an event between vhosts", though.

Can topic messages be made persistent in ActiveMQ?

I am very new to JMS and ESBs.
I am using ActiveMQ as the JMS provider and Mule as the ESB. When I forward messages from one queue to another with the JMS connector parameter "persistentDelivery" set to "true", the messages are retained in the target queue after an ActiveMQ restart. But when forwarding messages from one topic to another, the messages are not retained in the target topic after a restart.
Is there any limitation on the persistence of messages for topics in ActiveMQ?
Topics are different in that messages are only retained if there is a durable consumer.
See these for more info:
http://activemq.apache.org/how-do-durable-queues-and-topics-work.html
http://stefanlearninglog.blogspot.com/2009/07/persistent-jms-topics-using-activemq.html
Topics in ActiveMQ are not durable and persistent by default, so if one of your consumers is down, you will lose messages.
To make a topic subscription durable and persistent, you can create a durable consumer by setting a unique client ID per consumer.
But again, that does not distribute well if you are following a microservices architecture: multiple pods or replicas will create problems when consuming messages, as no load balancing is possible for durable consumers.
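As a sketch, a durable topic subscriber with a Python STOMP client (stomp.py) looks roughly like this; the host, port, credentials, and subscription names are placeholders, and the ActiveMQ-specific header name should be verified against your broker version's STOMP documentation:

import stomp  # stomp.py client; an assumption, any STOMP/JMS client works

conn = stomp.Connection([('localhost', 61613)])
# A stable client-id is what makes the subscription durable across restarts.
conn.connect('admin', 'admin', wait=True, headers={'client-id': 'reporting-service'})
conn.subscribe('/topic/MyTopic', id='1', ack='auto',
               headers={'activemq.subscriptionName': 'reporting-sub'})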
To mitigate this scenario, there is the option of Virtual Topics in ActiveMQ. More details are provided below.
You send your messages via your producer to a topic named VirtualTopic.MyTopic.
Note: you must follow this naming convention with the default ActiveMQ configuration, although there is a way to override it.
Now, to consume your messages via multiple consumers, you have to follow a naming convention for the consumer-side destinations as well, e.g. Consumer.A.VirtualTopic.MyTopic and Consumer.B.VirtualTopic.MyTopic.
These two consumers will receive the messages published to the topic created above, with load balancing between multiple replicas of the same consumer.
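As a minimal sketch with a Python STOMP client (stomp.py; host, port, and credentials are placeholders):

import stomp

conn = stomp.Connection([('localhost', 61613)])
conn.connect('admin', 'admin', wait=True)

# The producer publishes to the virtual topic; 'persistent' makes the
# messages survive a broker restart for queue consumers.
conn.send(destination='/topic/VirtualTopic.MyTopic', body='order created',
          headers={'persistent': 'true'})

# Consumers read from per-consumer queues. Replicas of the same logical
# consumer (e.g. several pods of service A) share a queue, so messages
# are load-balanced among them.
conn.subscribe('/queue/Consumer.A.VirtualTopic.MyTopic', id='1', ack='auto')
conn.subscribe('/queue/Consumer.B.VirtualTopic.MyTopic', id='2', ack='auto')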
I hope this helps you fix your problem with ActiveMQ topics.

Way to break a connection from RabbitMQ

I've got an application which has some bugs. For some reason 2 consumers are created when there should be only one, and one of them is no longer checked for messages.
I can detect that situation by listing the queues and the number of consumers on the server. Is there some way to destroy that consumer from the server side?
A consumer can be killed via rabbitmqctl by closing its connection, using the connection PID as input.
Example:
> rabbitmqctl close_connection "<rabbit@hardys-Mac-mini.1.4195.0>" "reason here"
The connection PID can be obtained from:
> rabbitmqctl list_consumers
Listing consumers ...
send_email_1 <rabbit@hardys-Mac-mini.1.4185.0> amq.ctag-oim8CCP2hsioWc-3WwS-qQ true 1 []
send_email_2 <rabbit@hardys-Mac-mini.1.4195.0> amq.ctag-WxpxDglqZQN2FNShN4g7QA true 1 []
RabbitMQ 3.5.4
You can kill connections to the RabbitMQ broker using the rabbitmqctl tool (see the man page) or by using the Web UI. You could also purge and delete the queue which belonged to the rogue consumer.
However, you can't kill the consumer process itself using those tools. You really should just focus on fixing the bugs in the application so that only the correct number of consumers get created.
You need to mark your consumer as "exclusive". Then only one consumer is registered with the queue, and the broker refuses other consumers that try to consume from that queue.
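A minimal pika sketch of an exclusive consumer (the queue name is a placeholder):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# exclusive=True: the broker allows only this consumer on the queue; a second
# basic.consume on the same queue is refused with ACCESS_REFUSED.
channel.basic_consume(queue='send_email_1',
                      on_message_callback=lambda ch, method, props, body: ch.basic_ack(method.delivery_tag),
                      exclusive=True)
channel.start_consuming()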