RabbitMQ: Can a vhost span multiple physical nodes in a Rabbit cluster?

As the title says, I would like to know whether it is feasible to have a vhost span multiple nodes in a cluster for redundancy, replication, and high availability.

A vhost always exists on all nodes of a RabbitMQ cluster: vhosts are part of the cluster-wide metadata, so creating one on any node makes it available on every node.
If you want the same vhost to exist in multiple clusters, you have to create it in each cluster separately, via rabbitmqctl or the HTTP API.
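For example, a minimal sketch (the vhost name is a placeholder; the curl call assumes the management plugin is enabled on its default port 15672 with the default guest credentials):
# run against any node of the cluster
rabbitmqctl add_vhost my_vhost
# or, equivalently, via the management HTTP API
curl -u guest:guest -X PUT http://localhost:15672/api/vhosts/my_vhost
If you need the vhost in several clusters, repeat this against one node of each cluster.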

Related

AWS ELB + apache httpd + tomcat

We are currently using the "standard" architecture created by AWS OpsWorks.
We have set up an AWS ELB in front of multiple machines, which distributes requests using a round-robin algorithm (the application is stateless, with no cookies). Apache httpd and Apache Tomcat are installed on every machine (all set up and configured by AWS OpsWorks), so Apache httpd accepts each connection and then forwards it to Tomcat over an AJP connection.
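As a rough sketch of that wiring (hostnames and ports are illustrative, and mod_proxy_ajp is assumed as the AJP connector on the httpd side; OpsWorks may wire it up slightly differently), the httpd side forwards everything to the local Tomcat over AJP:
ProxyPass        "/" "ajp://localhost:8009/"
ProxyPassReverse "/" "ajp://localhost:8009/"
and the matching AJP connector in Tomcat's conf/server.xml looks like:
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />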
I would like to get rid of Apache httpd.
A few reasons for that:
Easier architecture, easier configuration
Maybe a slight gain in performance
Less monitoring (only Tomcat needs to be monitored, not Apache httpd as well)
I have checked the following thread:
Why use Apache Web Server in front of Glassfish or Tomcat?
and haven't found any reason why I shouldn't remove Apache httpd from my architecture.
However, I know that some applications have nginx in front of Tomcat for the following reasons:
Handling of slow clients (i.e. Tomcat's worker thread is freed quickly, while nginx's asynchronous workers take care of sending the response to slow clients)
Protection against SYN-flood DDoS attacks (using SYN cookies)
Questions to consider:
Does Apache httpd protect against these DDoS techniques?
Does AWS ELB protect against these DDoS techniques?
Should I remove Apache httpd (given that I don't need anything from the list above)? Should I replace it with nginx? Should I replace it with nginx, taking into account that we already have DDoS protection from Incapsula?
Any other advice/comment would be highly appreciated!
Thank you in advance!
Does Apache httpd protect against these DDoS techniques?
No, Apache httpd does not protect against DDoS attacks out of the box; you have to enable and configure security modules such as mod_evasive or mod_security.
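For illustration only, a few mod_evasive directives (the thresholds are arbitrary placeholders and would need tuning for real traffic):
DOSHashTableSize    3097
DOSPageCount        5
DOSSiteCount        100
DOSPageInterval     1
DOSSiteInterval     1
DOSBlockingPeriod   60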
Does AWS ELB protect against these DDoS techniques?
AWS ELB's features are high availability, health checks, security via the associated security groups, SSL offloading for encryption, and so on. No, AWS ELB by itself does not protect against these DDoS techniques.
Should I remove Apache httpd?
By using Apache HTTP Server as a front end, you let it act as a front door to multiple Apache Tomcat instances: if one of your Tomcats fails, Apache httpd ignores it and your sysadmin can sleep through the night. This point can be ignored if you use a hardware load balancer together with Tomcat's clustering capabilities, and it mainly applies when you are not using AWS ELB, since ELB already fills that role.
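A minimal sketch of that front-door setup, assuming mod_proxy, mod_proxy_ajp and mod_proxy_balancer are loaded (hostnames and the /app path are placeholders):
<Proxy "balancer://tomcats">
    BalancerMember "ajp://tomcat-a:8009"
    BalancerMember "ajp://tomcat-b:8009"
</Proxy>
ProxyPass        "/app" "balancer://tomcats/app"
ProxyPassReverse "/app" "balancer://tomcats/app"
If a member becomes unreachable, mod_proxy_balancer puts it into an error state and stops sending requests to it until it recovers.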
Should I replace it with nginx?
If you already have Incapsula for DDoS protection, there is no need to complicate the setup by adding nginx.

Send requests to second tomcat node only in case of failure of first node

I need to have the following setup for an application
Apache HTTP Server
Tomcat Node A
Tomcat Node B
I need to load balance the two Tomcat instances in such a way that:
- Initially, all requests go to Node A.
- Only if Node A is down should requests start going to Node B.
- In no scenario should both nodes be serving requests at the same time.
I am unable to work out what values I should configure for lbfactor to achieve such a setup.
There is a similar question, HTTP Load Balancing - redirect only if first worker fails using mod_jk, but it does not have any answers.
You want to mark the second server as a hot standby with status=+H on the BalancerMember line that defines it inside the balancer.
It will only be used if all other balancer members are offline.
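A minimal sketch with mod_proxy_balancer (hostnames, ports and the path are placeholders):
<Proxy "balancer://mycluster">
    BalancerMember "ajp://nodeA:8009"
    # hot standby: only used while nodeA is unavailable
    BalancerMember "ajp://nodeB:8009" status=+H
</Proxy>
ProxyPass        "/" "balancer://mycluster/"
ProxyPassReverse "/" "balancer://mycluster/"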

HTTP Load Balancing - redirect only if first worker fails using mod_jk

I have used Apache httpd with mod_jk and Tomcat for a high-availability solution. Here is the workers.properties for it.
worker.list=myworker
worker.myworker1.port=8009
worker.myworker1.host=host1
worker.myworker1.type=ajp13
worker.myworker1.lbfactor=1
worker.myworker2.port=8009
worker.myworker2.host=host2
worker.myworker2.type=ajp13
worker.myworker2.lbfactor=1
worker.myworker.type=lb
worker.myworker.balance_workers=myworker1,myworker2
worker.myworker.sticky_session=True
Right now the requests are distributed equally between the workers and the applications work fine. What I want is for all requests to go to myworker1; only if myworker1 is down should they be redirected to myworker2.
Is there a way to do this with mod_jk?
Redirect to myworker2 in case myworker1 fails
Disable myworker2 for all requests except failover
These two lines must be added to your workers.properties:
worker.myworker1.redirect=myworker2
worker.myworker2.activation=disabled
See https://salonegupta.wordpress.com/2014/08/27/apache-load-balancer-setup-with-failover-mechanism/ for more information.
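For context, the failover-relevant part of the resulting workers.properties then looks like this (same worker names, hosts and ports as above):
worker.myworker1.lbfactor=1
# when myworker1 fails, its traffic is redirected to myworker2
worker.myworker1.redirect=myworker2
worker.myworker2.lbfactor=1
# myworker2 receives no regular traffic and is only used for failover
worker.myworker2.activation=disabled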

Load Balancing across all ports

I'm looking to add a load balancer in front of a service that might be listening on any port. I've looked at a few options (apache, haproxy), but they all seem to work with specific ports, e.g.
example.com:80
server1:80
What I need:
example.com:N
server1:N
server2:N
Where N can be any port. In other words, basically round-robin DNS, but with failover support.
Any ideas about how this could be done with mod_proxy_balancer, haproxy, or any other freely available load balancer? Thanks.
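One possible approach, sketched with HAProxy: a TCP listener can bind to a port range, and when a server line omits the port, HAProxy forwards to the same destination port the client connected on. The port range and addresses below are placeholders, and binding very large ranges has a cost, so treat this as an illustration rather than a recommendation:
listen any-port
    bind :8000-8999
    mode tcp
    balance roundrobin
    # no port on the server lines: HAProxy reuses the client's destination port
    server server1 192.0.2.10
    server server2 192.0.2.11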

apache mod_jk send request to all cluster nodes

I have a distributed cluster system. I have set up an Apache server with mod_jk load balancing, and sticky sessions are enabled.
Is it possible to send certain special requests (after inspecting a request header) to all Tomcat cluster nodes? Is there any rule or method for this?
The responses don't need to go back to the clients; it is enough that all nodes are informed via a special URL. I have configured uriworkermap.properties, and there are three states (active, disabled, stopped) for load balancer nodes. Is there any solution based on configuring uriworkermap.properties or workers.properties?
If mod_jk cannot solve this, can you suggest alternatives?
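For reference, a uriworkermap.properties entry maps each URI pattern to exactly one worker (a plain worker, or an lb worker that then picks a single member per request), so it cannot fan a request out to every node; the names below are placeholders:
# all application traffic goes through the load balancer worker
/myapp/*=loadbalancer
# a specific URI can be pinned to a single node, but not broadcast to all of them
/myapp/refresh=node1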