I am trying to federate messages using a federated queue between two machines, but I am not able to see the federated queue being created automatically on the downstream server.
Let's take an example: I have two servers, A (upstream) and B (downstream). I did the configuration properly, installed the federation plugin, and set this policy:
rabbitmqctl set_policy --apply-to queues federate-me "" '{"federation-upstream-set":"all"}'
and added upstream ,
rabbitmqctl set_parameter federation-upstream my-upstream '{"uri":"amqp://****:****@address","expires":3600000}'
After that I published one message to a queue ("Test_Queue") on server A. That "Test_Queue" should be created automatically on server B, right? But it is not happening.
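As a sanity check, the plugin, the policy, and the upstream can all be verified on B with standard commands:
rabbitmq-plugins list | grep federation    # rabbitmq_federation should be marked as enabled
rabbitmqctl list_policies                  # the federate-me policy should be listed
rabbitmqctl list_parameters                # the federation-upstream parameter should be listed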
Any help would be appreciated.
Thank you.
I'm trying to configure federation between two RabbitMQ environments using the Federation Plugin.
I followed this article. But when I look at the Federation Status page under the Admin tab, I can't see any link. All I see is ... no links ....
Can anyone point me in the right direction to make federation work? I have questions like: do I have to create the policy on the upstream or the downstream server? And the same for the configuration of the Federation Upstream.
I only want queue federation, no exchange federation. In other words, I only want single consumption of a message. The article I mentioned above looks like a perfect fit for this. But unfortunately I can't see any link in Federation Status ...
Any help is appreciated.
EDIT
Downstream RMQ specs
A cluster with 3 nodes
Uses SSL
Version 3.7.13, Erlang 21.3
Upstream RMQ specs
Single node, not clustered
No SSL
Version 3.7.5, Erlang 20.2
Federation configuration on downstream RMQ cluster
New policy:
I added a policy with a pattern exactly matching the queue name and with definition federation-upstream-set: all:
Pattern: RmqQueue
Apply to: queues
Definition: federation-upstream-set: all
Priority: 0
When I look at the Queues tab, I can see that this policy is applied to the queue.
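For reference, roughly the same policy can be created from the command line on the downstream cluster (the policy name here is just an example; the pattern matches the queue name exactly, as above):
rabbitmqctl set_policy --apply-to queues federate-rmqqueue "^RmqQueue$" '{"federation-upstream-set":"all"}'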
New Federation Upstream:
I created a new Federation Upstream from the downstream (cluster) to the upstream (single node). Only the name and URI are filled in; the other fields are left empty.
General parameters
URI: amqp://<username>:<password>@hostnamesinglenode
Prefetch Count: (empty)
Reconnect Delay: (empty)
Ack Mode: on-confirm
Trust User-ID: unchecked
Federated exchange parameters
Exchange, Max Hops, Expires, Message TTL, HA Policy: (all empty)
Federated queue parameters
Queue: (empty)
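The same upstream can also be declared from the command line (a sketch; the upstream name, username, password, and hostname are placeholders, and on-confirm is the default ack mode anyway):
rabbitmqctl set_parameter federation-upstream my-upstream '{"uri":"amqp://<username>:<password>@hostnamesinglenode","ack-mode":"on-confirm"}'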
Upstream servers are the servers to which messages are published initially. Downstream servers are where the messages get forwarded, so messages are federated from the upstream server to the downstream server. All the configuration you need to do should be done on the "new" server, the one to which you want your messages to be moved (the downstream server).
Here is a link with more images.
Please note that you can move the publisher and/or the consumer in any order after you have configured the federation. The federated queue will ONLY retrieve messages when it has run out of messages locally, when it has consumers that need messages, and when the upstream queue has "spare" messages that are not being consumed.
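A quick way to watch this behaviour is to check the queue and the federation link on the downstream cluster (the eval call below queries the federation plugin's status module):
rabbitmqctl list_queues name policy messages consumers
rabbitmqctl eval 'rabbit_federation_status:status().'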
Can I change the node name for a specific queue from the RabbitMQ Management Console? I tried, but I think this is set when I started my app. Can I change it afterwards? My queue is on node RabbitMQ1 and my connection is on node RabbitMQ2, so I cannot read messages from that queue. Maybe I can change my connection's node?
The node name is not just a label; it's where the queue is physically located. In fact, by default queues are not distributed/mirrored, but created on the node the application connected to, as you correctly guessed.
However, you can make your queue mirrored using policies, so that you can consume messages from both servers.
https://www.rabbitmq.com/ha.html
You can change the policy for the queues by using the rabbitmqctl command or from the management console, admin -> policies.
You need to synchronize the queue in order to clone the old messages to the mirror queue with:
rabbitmqctl sync_queue <queue_name>
Newly published messages will end up in both copies of the queue and can be consumed from either of them (the same message won't be consumed from both).
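To verify that the mirror exists and is in sync after applying the policy, the standard queue listing can be used (these columns apply to classic mirrored queues):
rabbitmqctl list_queues name policy slave_pids synchronised_slave_pids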
I know the HA Policy is set by the following command:
$ rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
My question which seems basic:
Do I have to issue this command on each node or just one of them?
RabbitMQ distributes the policy to the whole cluster, so it does not matter which node you run the command on; the information will be distributed to the other nodes.
Please read here: https://www.rabbitmq.com/clustering.html
A RabbitMQ broker is a logical grouping of one or several Erlang
nodes, each running the RabbitMQ application and sharing users,
virtual hosts, queues, exchanges, bindings, and runtime parameters.
Sometimes we refer to the collection of nodes as a cluster.
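As a quick confirmation, the same listing can be run against any node of the cluster and should return the same policies (the node name below is only a placeholder):
rabbitmqctl -n rabbit@node2 list_policies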
I am trying to build a system with message exchange between multiple servers.
I have a server called Master and another server called Slave.
Master sends messages to Slave, and Slave sends messages to Master asynchronously.
I have a RabbitMQ server on both machines and use the federation plugin on both of them to get messages.
So publishers and consumers on both servers communicate only with their local RabbitMQ server, and all message exchange between the servers is done only through RabbitMQ.
It works fine when both servers are online.
My requirement is that when there is no network connection between the servers, messages should be accumulated until the connection is back.
And that doesn't work with the federation plugin. If the federation link is not active, messages are not stored on the local RabbitMQ.
What should I do to have a model where messages can wait for the connection and then be delivered to the other RabbitMQ server?
Do I need to provide more info on my current model?
Here is a simpler description:
RabbitMQ1 has an exchange MASTER. RabbitMQ2 created a federation with a link to RabbitMQ1 and assigned permissions to the exchange MASTER.
A publisher writes to RabbitMQ1, to exchange MASTER, with routing key 'myqueue'.
A consumer listens on RabbitMQ2, on exchange MASTER and queue 'myqueue'.
If there is a connection, then everything works fine.
If there is no connection, then messages posted to RabbitMQ1 are not delivered to RabbitMQ2 once the connection is back.
How to solve this?
I found the solution for this: federation is not the right plugin for this scenario.
I used the Shovel plugin instead. It does exactly what I need.
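For completeness, a minimal sketch of a dynamic shovel matching the setup above; the shovel name, credentials, and hostname are placeholders, and it assumes the queue 'myqueue' exists and stays bound to MASTER on RabbitMQ1 so that messages accumulate there while the link is down:
rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management
# declared on RabbitMQ1: drains the local 'myqueue' and republishes to RabbitMQ2 when the link is up
rabbitmqctl set_parameter shovel master-to-slave '{"src-uri":"amqp://","src-queue":"myqueue","dest-uri":"amqp://user:pass@rabbitmq2","dest-queue":"myqueue"}'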
I've got an application which has some bugs. For some reason, two consumers are created when only one should be there, and one of them no longer checks for messages.
I can detect that situation by listing queues and the number of consumers on the server. Is there some way to destroy that consumer from the server side?
A consumer can be killed with rabbitmqctl by using close_connection, which takes the connection PID as input.
Example:
> rabbitmqctl close_connection "<rabbit@hardys-Mac-mini.1.4195.0>" "reason here"
The connection PID can be obtained with:
> rabbitmqctl list_consumers
Listing consumers ...
send_email_1 <rabbit@hardys-Mac-mini.1.4185.0> amq.ctag-oim8CCP2hsioWc-3WwS-qQ true 1 []
send_email_2 <rabbit@hardys-Mac-mini.1.4195.0> amq.ctag-WxpxDglqZQN2FNShN4g7QA true 1 []
RabbitMQ 3.5.4
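If the PID from list_consumers is hard to match to a connection, the open connections themselves can also be listed (standard rabbitmqctl command):
rabbitmqctl list_connections pid user peer_host peer_port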
You can kill connections to the RabbitMQ broker using the rabbitmqctl tool (see the man page) or by using the Web UI. You could also purge and delete the queue which belonged to the rogue consumer.
However, you can't kill the consumer process itself using those tools. You really should just focus on fixing the bugs in the application so that only the correct number of consumers get created.
You need to mark your consumer as "exclusive". Then only one consumer is registered with the queue, and other consumers are ignored even if they try to get data from that queue.