I have a number of RabbitMQ servers arranged effectively in a star topology. I need to federate a different exchange bi-directionally between the central hub server and each of the outer servers. Configuration of the outer servers isn't problematic, but although the exchanges are different the hub doesn't want to accept more than one federation policy.
Defining multiple upstreams and upstream sets works as expected:
$ rabbitmqctl list_parameters
Listing runtime parameters ...
federation-upstream-set leaf1 [{"upstream":"leaf1-1"}]
federation-upstream-set leaf2 [{"upstream":"leaf2-1"}]
federation-upstream leaf2-1 {"uri":"--snipped--","expires":3600000}
federation-upstream leaf1-1 {"uri":"--snipped--","expires":3600000}
...done.
The first federation policy applies as expected:
$ rabbitmqctl set_policy --apply-to exchanges federate-me "^leaf1$" '{"federation-upstream-set":"leaf1"}'
Setting policy "federate-me" for pattern "^leaf1$" to "{\"federation-upstream-set\":\"leaf1\"}" with priority "0" ...
...done.
$ rabbitmqctl list_policies
Listing policies ...
/ federate-me exchanges ^leaf1$ {"federation-upstream-set":"leaf1"} 0
...done.
But as soon as I try to specify a second federation policy, it simply replaces the first one:
$ rabbitmqctl set_policy --apply-to exchanges federate-me "^leaf2$" '{"federation-upstream-set":"leaf2"}'
Setting policy "federate-me" for pattern "^leaf2$" to "{\"federation-upstream-set\":\"leaf2\"}" with priority "0" ...
...done.
$ rabbitmqctl list_policies
Listing policies ...
/ federate-me exchanges ^leaf2$ {"federation-upstream-set":"leaf2"} 0
...done.
It doesn't matter if I specify different priorities for the two policies, either; whatever I do, only the single most recently entered federation policy is listed. I know that only a single policy can apply to each exchange, but the exchange specification for each policy here is different, and moreover the documentation suggests that the policy with the highest priority should win in the event that there are multiple matching policies.
Can anyone help?
You have to specify a unique name for each policy you want to add. Setting a different policy with an existing name will simply override the existing policy of that name.
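For example, with the upstream sets already defined in the question, giving each policy a unique name lets both coexist (the names federate-leaf1 and federate-leaf2 are just illustrative):
$ rabbitmqctl set_policy --apply-to exchanges federate-leaf1 "^leaf1$" '{"federation-upstream-set":"leaf1"}'
$ rabbitmqctl set_policy --apply-to exchanges federate-leaf2 "^leaf2$" '{"federation-upstream-set":"leaf2"}'
$ rabbitmqctl list_policies
Both policies should now be listed, and since their patterns don't overlap, each exchange still has exactly one policy applied to it.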
I have referred to https://www.rabbitmq.com/lazy-queues.html and set a RabbitMQ queue to lazy mode using the following command:
rabbitmqctl set_policy Lazy "^lazy-queue$" '{"queue-mode":"lazy"}' --apply-to queues
However, when checking it using the command curl -u guest:guest 'localhost:15672/api/queues', it still shows the queue in default mode:
"mode":"default"
How do I set the queue to lazy mode in RabbitMQ? Could someone please help.
A Policy defines a rule that applies to all queues whose name matches a particular pattern.
Let's have a closer look at the command you've copied:
rabbitmqctl set_policy Lazy "^lazy-queue$" '{"queue-mode":"lazy"}' --apply-to queues
We are creating or updating a policy called "Lazy"; as far as I know, this can be any name you like
The pattern we want it to apply to is ^lazy-queue$; this is a regular expression which only matches the exact name "lazy-queue"
The configuration we want to apply is to set "queue-mode" to "lazy"
So, if you want it to apply to multiple queues, you need to adjust the policy to apply to those queues. For instance, you could apply it to all queues whose names begin "lazy-":
rabbitmqctl set_policy Lazy "^lazy-" '{"queue-mode":"lazy"}' --apply-to queues
Or any name that ends in a hyphen followed by four digits:
rabbitmqctl set_policy Lazy "-[0-9]{4}$" '{"queue-mode":"lazy"}' --apply-to queues
Or just apply to every queue:
rabbitmqctl set_policy LazyEverything ".*" '{"queue-mode":"lazy"}' --apply-to queues
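To confirm the policy took effect, you can re-run the check from the question (this assumes the management plugin is enabled and the default guest credentials, as in the question):
curl -u guest:guest 'localhost:15672/api/queues' | grep -o '"mode":"[^"]*"'
Queues whose names match the policy pattern should now report "mode":"lazy" instead of "mode":"default".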
When configuring Redis 6 with ACLs in a cluster environment, an additional user must be created (assuming the default user is not desired or does not have access to the PSYNC command). What are the exact commands that must be assigned to this user?
There is a small note about ACL rules for Sentinel and Replicas in the documentation indicating that Sentinel needs:
AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF,
CONFIG, CLIENT, EXEC
and replicas need:
PSYNC, REPLCONF, PING
My best guess is to combine the two for a command set of:
AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF,
CONFIG, CLIENT, EXEC, PSYNC, REPLCONF
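If that guess is right, one way to create a dedicated replication user would be something like the following (repl-user and the password are placeholders, not names from the documentation; run via redis-cli on the nodes involved):
ACL SETUSER repl-user on >s3cretpass +psync +replconf +ping
with the Sentinel command list from the note above added as well, but only if Sentinel authenticates as the same user.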
Excerpt from redis.conf which indicates "and/or other commands needed for replication":
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
masterauth mymasterpassword
#
# However this is not enough if you are using Redis ACLs (for Redis version
# 6 or greater), and the default user is not capable of running the PSYNC
# command and/or other commands needed for replication. In this case it's
# better to configure a special user to use with replication, and specify the
# masteruser configuration as such:
#
masteruser mymasteruser
#
# When masteruser is specified, the replica will authenticate against its
# master using the new AUTH form: AUTH <username> <password>.
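Tying that back to the excerpt, the replica would then reference such a user with placeholder values along these lines:
masteruser repl-user
masterauth s3cretpass
and the user itself must already exist on the master with at least the replication commands (PSYNC, REPLCONF, PING) allowed.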
I'm setting up a RabbitMQ cluster following its docs.
While setting it up, I join Machine2 with Machine1 via the command rabbitmqctl join_cluster rabbit@rabbit1. Now, what is rabbit@rabbit1?
I know it's user@hostname, but when I run this command, it says Error: {cannot_discover_cluster,"Cannot cluster node with itself"}.
When I type in the IP instead of the hostname, it says Error: {cannot_discover_cluster,"The nodes provided are either offline or not running"}.
I've also added an entry mapping the IP to rabbit1 in the /etc/hosts file.
What exactly am I missing here?
rabbit@rabbit1:
In this case, rabbit1 is the name of the computer/host where the RabbitMQ server is running.
You can just use the name of the server you want to cluster with, like rabbit@name_of_the_server.
You can also check the node name of the current RabbitMQ host:
rabbitmqctl cluster_status
That will show you the node name, including the host name.
Also, before you do the clustering, make sure you stop the RabbitMQ application on the joining machine, then join the cluster, and then start the application again.
Check this link:
https://www.rabbitmq.com/clustering.html
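Concretely, on the machine that is joining (using the rabbit1 host name from the question), the sequence would look roughly like this:
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit1
rabbitmqctl start_app
run with sufficient privileges to talk to the local node (e.g. via sudo).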
"Cannot cluster node with itself" is true. You have to change cluster name for it to join in. Use set_cluster_name to change the cluster name on other nodes first, and then come back to this node and join it to newly named cluster. For example,
On node2,
`rabbitmqctl set_cluster_name rabbit#new`
Back on node1,
`rabbitmqctl stop_app`
`rabbitmqctl reset`
`rabbitmqctl join_cluster rabbit#new`
`rabbitmqctl start_app`
Quite simple way.
You are trying to join a node to itself.
There are two possible errors:
an error in /etc/hosts (wrong alias) - see the example entry below
you actually tried to join rabbit@rabbit1 to rabbit@rabbit1
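For reference, a working /etc/hosts entry on the joining machine would look something like this (the IP address is a placeholder):
172.16.0.10   rabbit1
so that rabbit@rabbit1 resolves to the other machine rather than to the local host.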
Sorry for having to even ask this, but I've wasted a day on this and feel like I've read everything there is.
I can't create a cluster on my three EC2 instances, which are spread across three different regions. The hosts:
rabbit@ip-172-31-47-217
rabbit@ip-172-31-1-82
rabbit@ip-172-31-36-111
The initial state before trying to make the cluster:
ubuntu@ip-172-31-47-217:~$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@ip-172-31-47-217' ...
[{nodes,[{disc,['rabbit@ip-172-31-47-217']}]},
{running_nodes,['rabbit@ip-172-31-47-217']},
{partitions,[]}]
ubuntu@ip-172-31-36-111:~$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@ip-172-31-36-111' ...
[{nodes,[{disc,['rabbit@ip-172-31-36-111']}]},
{running_nodes,['rabbit@ip-172-31-36-111']},
{partitions,[]}]
ubuntu@ip-172-31-1-82:~$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@ip-172-31-1-82' ...
[{nodes,[{disc,['rabbit@ip-172-31-1-82']}]},
{running_nodes,['rabbit@ip-172-31-1-82']},
{partitions,[]}]
When I try to check status from one server for another:
sudo rabbitmqctl status -n rabbit@ip-172-31-1-82
Status of node 'rabbit@ip-172-31-1-82' ...
Error: unable to connect to node 'rabbit@ip-172-31-1-82': nodedown
nodes in question: ['rabbit@ip-172-31-1-82']
hosts, their running nodes and ports:
- unable to connect to epmd on ip-172-31-1-82: timeout (timed out)
current node details:
- node name: 'rabbitmqctl3835@ip-172-31-36-111'
- home dir: /var/lib/rabbitmq
- cookie hash: 0tsf/OyQZI7zobmv1Ia97w==
All three servers have the same erlang cookie hash.
I can verify the host names are setup properly:
host ip-172-31-36-111
ip-172-31-36-111.us-west-2.compute.internal has address 172.31.36.111
I know the ports are open:
netstat -plten | grep beam
I had opened all TCP and UDP ports at this point as a test; no change.
And finally, to see if this would behave differently given those failures:
sudo rabbitmqctl join_cluster --ram rabbit@ip-172-31-1-82
Clustering node 'rabbit@ip-172-31-47-217' with 'rabbit@ip-172-31-1-82' ...
Error: {cannot_discover_cluster,"The nodes provided are either offline or not running"}
Please help, being driven insane by this.
The problem is that they are in different regions (presumably in EC2-classic - you didn't mention whether you were using a VPC). This means they cannot communicate via their private IPs (see e.g. Can EC2 instances in different regions communicate over their private IP addresses?)
ping 172.31.36.111
will fail from one of the other servers, for example. Pinging by hostname will probably fail even earlier, at the DNS lookup.
Your options are:
Put them in separate zones in a single region (in EC2 classic, they will be able to communicate). You could also use a VPC in this case, putting them in separate subnets but allowing interconnection via appropriately configured security groups.
Set up /etc/hosts on each server to point to the relevant public IPs of the other servers (you could attach Elastic IPs to each server to ensure stability across server restarts). You could also set the hostname of each server for clarity. Set up your security groups to allow access on the relevant ports that RabbitMQ uses (see the port list after these options). There may be security implications to doing this, since the data will be travelling over the public internet.
Set up a VPN between the servers in the cluster. Amazon VPC has a VPN facility, but there are also ways of setting one up yourself.
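For option 2, the ports to open in the security groups would typically be the standard RabbitMQ defaults (adjust if you've changed them or run extra plugins):
4369   epmd (Erlang port mapper, used for node discovery)
25672  inter-node and CLI tool communication
5672   AMQP clients
15672  the management UI/API, if enabled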
I think option 1 is the simplest. Option 2 has major security implications (I believe there are ways of securing the connection between the cluster servers, but they aren't documented on the RabbitMQ website as far as I can tell). Option 3 is complex but probably the best option if you need multiple regions.
Note that rabbitmq clusters aren't meant to be run over wide geographical areas, since they aren't too reliable in the face of network partitions. See here: https://www.rabbitmq.com/clustering.html
I'm working with RabbitMQ permissions in Python. The application has multiple clients and one service provider. I want to limit clients to specific queues, while the service provider should be able to read all queues and not write to any. I try to set permissions as follows:
For the service provider account I have set the following:
rabbitmqctl set_permissions -p vhost service_provider ".*-client-queues" "" ".*-client-queues"
For clients I did
rabbitmqctl set_permissions -p vhost client1 "client1-client-queues" "client1-client-queues" ""
And the message is never delivered to the service provider. However, if I set
rabbitmqctl set_permissions -p vhost client1 ".*" ".*" ".*"
it works. But I need to limit the clients to specific queues.
Has anyone tried to achieve such a thing? Any hints would be appreciated. Thanks.
service_provider and client1 must be the users that the respective components use instead of the default (guest) to connect to the RabbitMQ broker.
You need to create the users and set their passwords with rabbitmqctl add_user ..., then let the respective components use them.
Also note that the exchanges you publish messages to must match the write permission that you specify. See here for details.
I suggest you add the permissions one by one, so you can quickly see what is going wrong.
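For example, the users could be created before setting the permissions (the passwords here are placeholders):
rabbitmqctl add_user service_provider 'some-strong-password'
rabbitmqctl add_user client1 'another-password'
Each component then connects with its own credentials instead of guest.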
What I was missing was the exchange name when setting the permissions. I've solved my problem with the following permissions (I'm using the default exchange):
For clients:
rabbitmqctl set_permissions -p vhost client1 "client1-client-queues|amq\.default" "client1-client-queues|amq\.default" "amq\.default"
For the service provider:
rabbitmqctl set_permissions -p vhost service_provider ".*-client-queues|amq\.default" "amq\.default" ".*-client-queues|amq\.default"