I have a few HTTP/PHP servers fronted by an Apache/mod_proxy_balancer load balancer, with a typical cluster setup as below:
<Proxy balancer://mycluster>
    BalancerMember http://192.168.1.10
    BalancerMember http://192.168.1.11
    BalancerMember http://192.168.1.12
    BalancerMember http://192.168.1.13
</Proxy>
My question: is there a way to configure Apache so that certain specific requests are sent to all members of the cluster, instead of being proxied to a single member as usual?
I ask because each member of my cluster uses a local data cache based on XCache. Each member has an HTTP-accessible script to unset a specific cache item on itself. On rare occasions, I need to clear the same cache entry on all servers.
I could write a separate bash/curl script that hits each server in sequence, but since the cluster definition already lives in my httpd.conf, it would be easier not to have to copy it somewhere else. I'm also hoping to retain a curl-able endpoint on the load balancer itself for global cache clearing.
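For reference, the fallback script would be something like the sketch below. The /xcache_clear.php endpoint and its key parameter are hypothetical stand-ins for whatever cache-clearing script each member actually exposes:

#!/bin/bash
# Clear the same XCache entry on every cluster member in sequence.
# NOTE: /xcache_clear.php and its "key" parameter are hypothetical;
# substitute the actual cache-clearing script each member exposes.
KEY="$1"
for member in 192.168.1.10 192.168.1.11 192.168.1.12 192.168.1.13; do
    curl -fsS "http://${member}/xcache_clear.php?key=${KEY}" \
        || echo "cache clear failed on ${member}" >&2
done

The downside, as noted, is that the member list is duplicated outside httpd.conf.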
Note: I am not asking about memcache. I'm already using memcache for other things, where the servers need synchronized storage. In this case I use XCache for almost-persistent data caching that requires no synchronization.
Which Apache module would fit best for building the following cluster:
2x Red Hat machines, each with a Tomcat and an Apache.
No scalability needs.
High availability needs.
Session replication needs.
          DNS
           |
    Load Balancer
     /          \
APACHE1       APACHE2
TOMCAT1       TOMCAT2
The question is: which module should be used for load balancing with Apache?
mod_proxy
mod_cluster
other?
If I understand mod_cluster correctly, it must be used with JBoss, or with a modified Tomcat. So if you are using plain-old Tomcat (or TomEE), then I think mod_cluster is out.
The easiest out-of-the-box option is to use mod_proxy with either the AJP or HTTP back-end. If you are comfortable building additional modules, mod_jk is available from the Tomcat folks and offers a few advantages over mod_proxy, though mod_proxy has nearly achieved feature parity.
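If you do go the mod_jk route, a minimal workers.properties sketch looks something like this (the hostnames and the AJP port 8009 are assumptions to adapt):

# Two AJP13 workers, one per Tomcat, plus a load-balancer worker
worker.list=loadbalancer
worker.tomcatA.type=ajp13
worker.tomcatA.host=tomcatA
worker.tomcatA.port=8009
worker.tomcatB.type=ajp13
worker.tomcatB.host=tomcatB
worker.tomcatB.port=8009
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=tomcatA,tomcatB

You would then map URLs to the loadbalancer worker with JkMount in httpd.conf.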
Your diagram suggests that a load-balancer will be choosing between two httpd instances which are coupled directly to a single Tomcat instance each. In that scenario, httpd is not performing any load-balancing at all (the lb is doing the work), and so httpd might be superfluous in that configuration.
If you instead want to cross-link both httpds with both Tomcats, that's when you start having to configure cluster-like behavior with mod_proxy's "balancer" configurations. It would look something like this:
<Proxy balancer://appA>
    BalancerMember http://tomcatA:8080/appA
    BalancerMember http://tomcatB:8380/appA
</Proxy>
ProxyPass /appA balancer://appA
ProxyPassReverse /appA balancer://appA
There are tons of options for mod_proxy that you should read about and apply to suit your configuration. You can configure things like sticky-sessions, hot-standbys (not present in your example diagram but a good idea if you really need HA), and asymmetric load-balancing.
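For instance, a sketch combining those three features (the tomcatC standby host, the route names, and the load factors are assumptions):

<Proxy balancer://appA>
    # tomcatA receives twice the traffic of tomcatB (asymmetric load)
    BalancerMember http://tomcatA:8080/appA route=nodeA loadfactor=2
    BalancerMember http://tomcatB:8380/appA route=nodeB loadfactor=1
    # Hot standby: only used when all regular members are down
    BalancerMember http://tomcatC:8080/appA status=+H
    # Sticky sessions keyed on the servlet session cookie/URL parameter
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>

The route values must match each Tomcat's jvmRoute for stickiness to work.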
I am looking for a load balancer for my web application that supports a master-slave kind of configuration or algorithm.
For now I am using Apache's proxy, but with the round-robin LB method.
I am not sure whether the Apache load balancer has master-slave support, or whether some module provides it.
Here is what I want exactly: forward all requests to one backend server, and once that master server is down, have the slave (the other server) act as a hot standby.
Please suggest any open-source load balancer I can use for the above requirement.
You can use nginx with its Upstream module.
Example configuration:
upstream myBackend {
    server main.example.com:8080;
    server back.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://myBackend;
    }
}
While the first server (main.example.com) is up, nginx will use it. When it goes down, nginx will switch to the second server. The linked manual page describes various other tuning parameters (for example, when to mark a server as failed). nginx supports HTTPS both for incoming connections and for connections to the proxied backend.
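For example, a sketch of tuning when the primary is marked failed (the thresholds are assumptions):

upstream myBackend {
    # after 3 failed attempts within 30s, consider main down for 30s
    server main.example.com:8080 max_fails=3 fail_timeout=30s;
    server back.example.com:8080 backup;
}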
EDIT: For Apache this seems to be possible in version 2.4 using the proxy balancer. I have not tested this config. For more details see the manual for ProxyPass.
ProxyPass "/" "balancer://hotcluster/"
<Proxy "balancer://hotcluster">
BalancerMember "http://1.2.3.4:8000"
# The server below is on hot standby
BalancerMember "http://1.2.3.6:8000" status=+H
</Proxy>
We use the apache2-worker package with mod_proxy, mod_proxy_balancer and mod_status.
Apache is configured as a load balancer / dispatcher to WFS servers.
OS: SuSE SLES 11 SP2
Apache httpd: version 2.2.12
All of our workers (WFS servers) can handle only one request at a time. So in /etc/apache2/server-tuning.conf, section <IfModule worker.c>, we set the parameter ServerLimit to 1.
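That is, the relevant part of server-tuning.conf looks roughly like this (only the setting described above; the other MPM directives are omitted):

<IfModule worker.c>
    # cap httpd at a single server process, as described above
    ServerLimit 1
</IfModule>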
In the configuration of a BalancerMember we used the parameter max=1.
I.e., /etc/apache2/conf.d/proxy.conf looks like this:
<Proxy balancer://wfscluster>
    BalancerMember http://wfsserver01:9090 max=1 timeout=3600 acquire=30000
    BalancerMember http://wfsserver01:9091 max=1 timeout=3600 acquire=30000
    BalancerMember http://wfsserver02:9090 max=1 timeout=3600 acquire=30000
</Proxy>
ProxyPass /wfs balancer://wfscluster/ nofailover=On
Regarding the parameter acquire, the documentation says:
If set this will be the maximum time to wait for a free connection in the connection pool, in milliseconds. If there are no free connections in the pool the Apache will return SERVER_BUSY status to the client.
My understanding of the parameter acquire is the following, which is also my desired behaviour:
The load balancer gets some requests from clients. At some point in time all workers are busy.
The next request is held by the load balancer until a worker becomes free. When a worker becomes free, the pending request is assigned to that worker, which then accepts the connection.
If no worker becomes free within the time specified by the acquire parameter, the client gets an error response.
But the parameter acquire doesn't work as expected: the load balancer assigns the next request to a busy worker.
Even if another worker becomes free in the meantime, the request stays assigned to the busy worker, and the client has to wait until the busy worker has finished its current request and accepts the new one.
If you do a /etc/init.d/apache2 reload, you get an error message in Apache's error_log:
BalancerMember Acquire timeout has wrong format
After that message, httpd dies.
If you only start or restart httpd, you don't get that message and httpd stays alive.
I also tried specifying a unit, as in acquire=30000ms, but the error remains.
The only thing that helps is removing the acquire parameter entirely, but the behaviour described above stays the same.
So the question is:
How do I have to use the parameter acquire? Does someone have a working example?
Do I have to use other parameters to get the desired behaviour?
I'm trying to set up Apache as a load balancer for 2 Tomcat instances, with session affinity.
The goal is to have a session stick to one server, but to have the next session (when the backend server changes it) go to the next available server (say, via round-robin for easier implementation). When using "jvmRoute" in Tomcat and the equivalent "route" in Apache, the value that actually drives the routing is the route name, which never changes, so all requests from a single client are always routed to the same backend server.
I found out so far that there's a chicken-and-egg problem when using just the JSESSIONID cookie. Consider the following setup:
2 Tomcat servers listening on ports 8009 and 8010 (AJP13)
1 Apache server with the following configuration
<Proxy balancer://hello-cluster>
    BalancerMember ajp://127.0.0.1:8009/hello
    BalancerMember ajp://127.0.0.1:8010/hello
</Proxy>
ProxyPass /hello balancer://hello-cluster stickysession=JSESSIONID
And here's the scenario:
1. The first request carries no cookie, so Apache selects the next available server in the balancer to handle it.
2. The backend Tomcat server sets JSESSIONID, but Apache does not note the actual value being returned.
3. The next request comes in; Apache sees no backend server recorded for the given JSESSIONID, so it again selects the next available one, which in this case is the other server than the one that served the first request.
4. Tomcat notices that the JSESSIONID value is invalid, so it creates a new one.
5. Apache does not take note that the JSESSIONID has changed, so the session is never pinned to one backend server.
6. Back to point 3.
Is there a way to convince Apache to note the value returned by Tomcat?
Maybe try Tomcat session replication. I found this interesting post:
http://www.bradchen.com/blog/2012/12/tomcat-auto-failover-using-apache-memcached
You could also try Redis:
http://shivganesh.com/2013/08/15/setup-redis-session-store-apache-tomcat-7/
Please let me know your experience.
I am using mod_proxy_balancer to manage failover of backend servers. Backend servers may return an error code instead of timing out when some other backend service, such as NFS, fails, and we want such servers to be marked as failed nodes as well. Hence we are using the failonstatus directive.
<Proxy balancer://avatar>
    ProxySet failonstatus=503
    BalancerMember http://active/ retry=30
    # the hot standby
    BalancerMember http://standby/ status=+H retry=0
</Proxy>
Currently the failover works perfectly, with one glitch: when the active node fails, that user gets a 503 error, and only from the next request onward does the standby server take over.
I don't want even a single request to fail, though. Can't mod_proxy fail over without ever returning an error to the client? If the active node fails, I want mod_proxy to retry the same request on the standby, not just switch over from the subsequent request!
I think you asked this on the Apache httpd mailing list but sadly didn't get a satisfactory answer. I've asked almost the same question on ServerFault, so I'm joining them together:
https://serverfault.com/questions/414024/apache-httpd-workers-retry
There is a newer module, mod_proxy_hcheck, that accomplishes what you are asking:
https://httpd.apache.org/docs/2.4/mod/mod_proxy_hcheck.html
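A minimal sketch of how it might be combined with the balancer above; the /health URI and the 5-second interval are assumptions, and note that the health check marks a member down proactively rather than retrying an in-flight request:

ProxyHCExpr ok {%{REQUEST_STATUS} =~ /^[234]/}
<Proxy balancer://avatar>
    # poll each member every 5s with GET /health; a failing check
    # marks the member down before client traffic is routed to it
    BalancerMember http://active/ retry=30 hcmethod=GET hcexpr=ok hcuri=/health hcinterval=5
    BalancerMember http://standby/ status=+H retry=0
</Proxy>

This narrows the failure window considerably, though a request arriving in the instant between the active node dying and the next health check can still see an error.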