When I set permissions for a RabbitMQ user, the output mentions the vhost:
[root@ha-node1 my.cnf.d]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
What is the meaning of the vhost when I set permissions, and what function does it serve?
In RabbitMQ, virtual hosts are logical groupings of entities; they are similar to virtual hosts in Apache or server blocks in Nginx.
Virtual hosts are created using rabbitmqctl or the HTTP API, and they provide logical grouping and separation of resources.
Every virtual host has a name. When an AMQP 0-9-1 client connects to RabbitMQ, it specifies a vhost name to connect to.
If authentication succeeds and the username provided was granted permissions to the vhost, the connection is established.
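For example, permissions are always granted per vhost; without -p, rabbitmqctl targets the default vhost "/", which is exactly what your output shows. A short sketch (the vhost name my_vhost is illustrative):
rabbitmqctl add_vhost my_vhost
rabbitmqctl set_permissions -p my_vhost openstack ".*" ".*" ".*"
# without -p this is equivalent to your original command, i.e. it targets "/"
rabbitmqctl set_permissions -p / openstack ".*" ".*" ".*"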
Let me explain this with an analogy.
Vhosts are to Rabbit what virtual machines are to physical servers: vhosts allow you to run data for multiple applications safely and securely by providing logical separation between instances.
This is useful for anything from separating multiple customers on the same Rabbit to avoiding naming collisions on queues and exchanges, where otherwise you might have to run multiple Rabbits.
Every RabbitMQ server has the ability to create virtual message brokers called virtual hosts (vhosts). Each one is essentially a mini-RabbitMQ server with its own queues, exchanges, bindings, and so on; more importantly, each has its own permissions.
For detailed information, see: https://livebook.manning.com/book/rabbitmq-in-action/chapter-2/
We want to set up Redis 6.2 clustering behind an LB. There are only master nodes and no Redis Sentinel is being used. Each cluster-enabled Redis instance runs on a different host with the same configuration (e.g. all of them are configured with port 6379). Is this possible with some port configuration on the LB such that a unique port on the LB maps to a unique_ip:6379?
Our idea is to use a cluster-aware Redis client like Lettuce's RedisClusterClient, which would issue CLUSTER NODES/SLOTS commands and react to MOVED/ASK redirection. It would also take care of splitting a pipeline across separate connections based on the slot for each command.
It seems like this is not possible to achieve if the same port is used on all Redis hosts. Using https://docs.redis.com/latest/rs/networking/cluster-lba-setup/ as a guide, the best we could manage was to configure each Redis with a unique port, set cluster-announce-ip to the virtual IP (which points to the LB), and then manually make sure that the same port is used on the LB as on the Redis host. With this, the CLUSTER SLOTS and MOVED responses from the Redis hosts could be correctly acted upon by the client. But this complicates our setup whenever a Redis host has to be added or removed.
You can use Route 53 if you're on AWS to achieve this.
Create an A record setup like this:
Add all hosts (IP addresses) in Route 53 and set the TTL to a small value, like 30 seconds. Route 53 will return one of these Redis IP addresses; using this endpoint, Redis clients like Lettuce or Jedis will discover all the Redis nodes.
You can use any other DNS system as well; the record type should be A.
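For example, the record set might look like this in zone-file form (the name and addresses are illustrative):
; one A record per Redis node, all under the same name, with a short TTL
redis.example.com. 30 IN A 10.0.1.10
redis.example.com. 30 IN A 10.0.1.11
redis.example.com. 30 IN A 10.0.1.12
A cluster-aware client pointed at redis.example.com resolves to one of the nodes and then discovers the rest via CLUSTER NODES/SLOTS.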
I have a RabbitMQ server which receives messages on an exchange within a virtual host called "ce_func"; this exchange is bound to a queue called "azure_trigger".
I'd like to use Azure Functions' new RabbitMQ binding to collect from Rabbit. Unfortunately, this is limited to collecting only from virtual host "/". I was hoping that I could use Rabbit's federation functionality to automatically route to an "azure_trigger" queue within the "/" virtual host of the same server, but so far I've failed.
I created a Rabbit "upstream" and a "policy" applied to that upstream, but I can't figure out the configuration. I have a Federation Status of "Running", but it's only checking the "ce_func" virtual host; I can't see where I can set the target exchange as the "/" virtual host.
Does anyone have any pointers please?
If I understand correctly, you want to deliver messages between queues in different vhosts.
The RabbitMQ community recommends using the Shovel plugin to handle this situation:
The source and destination can be on the same broker (typically in different vhosts) or distinct brokers.
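A dynamic shovel between the two vhosts on the same broker might look roughly like this (a sketch; the shovel name is illustrative, the plugin must be enabled first, and the default vhost "/" is percent-encoded as %2F in the URI):
rabbitmq-plugins enable rabbitmq_shovel
# move messages from "azure_trigger" in vhost "ce_func"
# to "azure_trigger" in the default vhost "/"
rabbitmqctl set_parameter -p ce_func shovel azure-shovel \
  '{"src-uri": "amqp://localhost:5672/ce_func", "src-queue": "azure_trigger",
    "dest-uri": "amqp://localhost:5672/%2F", "dest-queue": "azure_trigger"}'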
It is possible to reference any virtual host (vhost) in the uri field of the federation-upstream's configuration in the form:
"amqp://" [ username [ ":" password ] "#" ] host [ ":" port ] [ "/" vhost ]
So in simple terms you can tack the vhost onto the end of the URI, e.g. amqp://localhost:5672/myvhost... if your vhost name is blank then just make sure you include the trailing slash '/', e.g. amqp://localhost:5672/.
A note specific to the blank vhost from the RabbitMQ docs (https://www.rabbitmq.com/uri-spec.html):
The vhost component may be absent; this is indicated by the lack of a "/" character following the amqp_authority. An absent vhost component is not equivalent to an empty (i.e. zero-length) vhost name.
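So, for the scenario above, the upstream could be defined in the destination vhost "/" and point back at "ce_func" on the same broker, e.g. (the upstream name is illustrative; your existing policy would then be applied in "/" rather than in "ce_func"):
rabbitmqctl set_parameter -p / federation-upstream ce-func-upstream \
  '{"uri": "amqp://localhost:5672/ce_func"}'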
I use RabbitMQ with its MQTT plugin. There is also a guest user who can reach multiple virtual hosts. I want to publish an MQTT message directly to a specific virtual host (/cse-id-1), but the message goes to the default one (/). What should I do to send the message to the specified virtual host while using MQTT?
There are several options for specifying the vhost when connecting the client, like prepending the name of the vhost followed by a colon to the username (format vhost:username), so in your case the username would be cse-id-1:guest.
See details and other options in the official documentation: https://www.rabbitmq.com/mqtt.html#virtual-hosts
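As a minimal sketch with the mosquitto_pub CLI (assuming the default MQTT port 1883 and the guest password; the host and topic are illustrative):
# the vhost is selected by prefixing it to the username (vhost:username)
mosquitto_pub -h rabbitmq.example.com -p 1883 -u 'cse-id-1:guest' -P guest -t 'some/topic' -m 'hello'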
I'm going to explain my situation.
Background:
I'm running three virtual machines with Debian Jessie on Open Nebula, one as master and the other two as slaves. On them I've installed JBoss AS 7.1 and mod_cluster 1.2.
Goal:
Run a stateful app, so that when I shut down the master server the cluster allows me to continue using the app with a shared session and maintains the variables' values.
I followed this guide with the given web application.
Errors:
I can't access the app directly at http://master/cluster-demo/ as in the guide above; I have to specify the port (8330 for server-three).
When I shut down server-three, the slaves notice that the server is down, but the session is not shared and the application is no longer accessible. This is the output on a slave when I shut down server-three on the master.
Configuration Files
I attach my configuration files:
/opt/jboss/domain/configuration/domain.xml
/opt/jboss/httpd/httpd/conf/httpd.conf
/opt/jboss/domain/configuration/host.xml in the master
/opt/jboss/domain/configuration/host.xml in the slaves
Answer
mod_cluster has nothing to do with the messaging (JMS, HornetQ) subsystems. The mod_cluster setting also has nothing to do with the clustering subsystem, i.e. Infinispan and its workhorse, JGroups.
What the AS7 mod_cluster subsystem does is listen for UDP multicast advertisement messages emitted by the Apache HTTP Server mod_cluster modules. When it receives such a message, it registers itself with your Apache HTTP Server load balancer. From that moment on, your registered AS7 "worker" node keeps sending specialized HTTP messages (over TCP), informing the Apache HTTP Server about:
its name (jvmRoute or generated)
its current load
its deployments, i.e. application contexts
aliases etc.
When there are no worker nodes registered with your Apache HTTP Server balancer, there are no contexts, hence there is nowhere to forward your requests to.
According to the configuration you posted, you rely on UDP multicast messages being sent to/received from 224.0.1.105:23364.
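For reference, the two sides of that advertisement channel look roughly like this (a sketch with illustrative port numbers, not your exact configuration):
# httpd side: the VirtualHost that receives MCPM messages and advertises itself
<VirtualHost *:6666>
    EnableMCPMReceive
    ServerAdvertise On
    AdvertiseGroup 224.0.1.105:23364
</VirtualHost>
<!-- AS7 side: the socket-binding the modcluster subsystem listens on -->
<socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/>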
Open Nebula, firewall and UDP multicast
It is possible that Open Nebula doesn't allow UDP multicast between hosts or that your iptables are blocking it. Try this:
use curl on your worker host to access the balancer host -- the exact VirtualHost where you have the EnableMCPMReceive directive defined (see the curl sketch after this list)
if it doesn't work, you must fix iptables, selinux, httpd's allow/deny and such
if it works, it's a good sign that worker can talk to the balancer
go to your AS7 XML, modcluster subsystem, and add the proxy-list attribute to the config: <mod-cluster-config advertise-socket="modcluster" proxy-list="your-httpd-address:port"> -- using the address and port you've just tried with curl
now it should work even without UDP multicast
if you would like to debug your UDP multicast settings in Open Nebula, give it a shot with Advertize.java
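A quick sanity check from a worker host might look like this (the address and port are whatever your EnableMCPMReceive VirtualHost listens on; 6666 is illustrative):
# any HTTP response means the worker can reach the balancer;
# "connection refused" or a timeout points to firewall/iptables problems
curl -v http://your-httpd-address:6666/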
1.2.0 is too old, do not use vulnerable code
Please, do not use mod_cluster 1.2.0 with your Apache HTTP Server. That version is completely obsolete and contains serious bugs, including a code-injection CVE and a severe performance issue. Download mod_cluster 1.3.1.Final for httpd 2.4.x, or build your own from the sources if you need httpd 2.2.x support. If you happen to need any help with that, ask.
We want to use a single Redis server for servers that span two subnets.
If we put Redis on just subnet A, the servers on B will have to go across a router to get to Redis.
Our thought is to make the Redis server multi-homed (multiple NICs), attached to both subnets A and B.
1) Will this work?
2) Will Redis then attach to both IPs?
Thanks!
You can provide the bind address in the Redis configuration file (the bind parameter).
If you comment out that directive and do not provide a bind address, Redis will listen on its port on all interfaces (i.e. it will bind to 0.0.0.0).
I did not try, but I would say a configuration with 2 addresses should work.
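A minimal sketch of the relevant redis.conf line, assuming illustrative addresses on subnets A and B:
# redis.conf: listen on one address in each subnet
bind 10.0.1.5 10.0.2.5
# or comment the bind line out entirely to listen on all interfaces (0.0.0.0)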