httpd mod_proxy_balancer failover failonstatus - transparent switching - apache

I am using mod_proxy_balancer to manage failover of backend servers. Backend servers may return an error code instead of timing out when some other backend service (such as NFS) fails, and we want such servers to be marked as failed nodes as well. Hence we are using the failonstatus directive.
<Proxy balancer://avatar>
ProxySet failonstatus=503
BalancerMember http://active/ retry=30
# the hot standby
BalancerMember http://standby/ status=+H retry=0
</Proxy>
Currently the failover works, with one glitch: when the active node fails, the user gets a 503 error, and only from the next request onwards does the standby server take over.
I don't want even a single request to fail, though. Can't mod_proxy fail over without ever returning an error to the client? If the active node fails, I want mod_proxy to retry the same request against the standby, not just the subsequent requests!

I think you asked this on the Apache httpd mailing list but sadly didn't get a satisfactory answer. I've asked almost the same question on ServerFault, so I'm linking them together.
https://serverfault.com/questions/414024/apache-httpd-workers-retry

There is a newer module, mod_proxy_hcheck, that accomplishes what you are asking:
https://httpd.apache.org/docs/2.4/mod/mod_proxy_hcheck.html
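A minimal sketch of how the balancer above could be wired to it (this assumes httpd 2.4.x with mod_proxy_hcheck and mod_watchdog loaded; the GET check, the 5-second interval and the health expression are my assumptions, not something from the original setup):
# A member stays healthy only while the probe returns a 2xx/3xx/4xx status
ProxyHCExpr ok234 {%{REQUEST_STATUS} =~ /^[234]/}
<Proxy balancer://avatar>
ProxySet failonstatus=503
BalancerMember http://active/ retry=30 hcmethod=GET hcinterval=5 hcexpr=ok234
# the hot standby
BalancerMember http://standby/ status=+H retry=0 hcmethod=GET hcinterval=5 hcexpr=ok234
</Proxy>
Because the health check probes out of band, a member that starts failing can be pulled from rotation before the next client request reaches it, which gets closer to failover without a visibly failed request.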

Related

tomcat cluster apache in front - How to load balance?

Which Apache module would fit best to build the following cluster?
2x Red Hat, each with a Tomcat and an Apache.
No scalability requirements.
High availability required.
Session replication required.
         DNS
          |
    Load Balancer
     /         \
APACHE1       APACHE2
TOMCAT1       TOMCAT2
The question is: which module should be used for load balancing with Apache?
mod_proxy
mod_cluster
other?
If I understand mod_cluster correctly, it must be used with JBoss, or with a modified Tomcat. So if you are using plain-old Tomcat (or TomEE), then I think mod_cluster is out.
The easiest out-of-the-box option is to use mod_proxy with either the AJP or HTTP back-end. If you are comfortable building additional modules, mod_jk is available from the Tomcat folks and offers a few advantages over mod_proxy, though mod_proxy has nearly achieved feature parity.
Your diagram suggests that a load-balancer will be choosing between two httpd instances which are coupled directly to a single Tomcat instance each. In that scenario, httpd is not performing any load-balancing at all (the lb is doing the work), and so httpd might be superfluous in that configuration.
If you instead want to cross-link both httpds with both Tomcats, that's when you start having to configure cluster-like behavior with mod_proxy's "balancer" configurations. It would look something like this:
<Proxy balancer://appA>
BalancerMember http://tomcatA:8080/appA
BalancerMember http://tomcatB:8380/appA
</Proxy>
ProxyPass /appA balancer://appA
ProxyPassReverse /appA balancer://appA
There are plenty of options for mod_proxy that you should read about and apply to suit your configuration. You can configure things like sticky sessions, hot standbys (not present in your example diagram, but a good idea if you really need HA), and asymmetric load balancing.
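As a rough sketch of how those combine (the third Tomcat host, the route names and the load factors are made up for illustration; route= has to match each Tomcat's jvmRoute for stickiness to work):
<Proxy balancer://appA>
# asymmetric load balancing: tomcatA gets twice the traffic of tomcatB
BalancerMember http://tomcatA:8080/appA route=nodeA loadfactor=2
BalancerMember http://tomcatB:8380/appA route=nodeB loadfactor=1
# hot standby, only used once the regular members are all in error state
BalancerMember http://tomcatC:8080/appA route=nodeC status=+H
# sticky sessions keyed on Tomcat's default session cookie / URL parameter
ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass /appA balancer://appA
ProxyPassReverse /appA balancer://appA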

Which load balancer supports Master/Slave configuration?

I am looking for a load balancer for my web application that supports a master/slave kind of configuration or algorithm.
For now I am using Apache's proxy, but with the round-robin LB method.
I am not sure whether the Apache load balancer has master/slave support, or whether there is a module for it.
Here is what I want exactly: forward all requests to one backend server, and once the master server is down, have the slave (the other server) act as a hot standby.
Please suggest any open source load balancer I can use for the above requirement.
You can use nginx with its Upstream module.
Example configuration:
upstream myBackend {
    server main.example.com:8080;
    server back.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://myBackend;
    }
}
While the first server (main.example.com) is up, nginx will use it. When it goes down, nginx will use the second server. The upstream module documentation describes various other tuning parameters (for example, when to mark a server as failed). Nginx supports HTTPS both for incoming connections and for connections to the proxied backend.
EDIT: For Apache, this seems to be possible in version 2.4 using the proxy balancer. I have not tested this config. For more details, see the manual for ProxyPass.
ProxyPass "/" "balancer://hotcluster/"
<Proxy "balancer://hotcluster">
BalancerMember "http://1.2.3.4:8000"
# The server below is on hot standby
BalancerMember "http://1.2.3.6:8000" status=+H
</Proxy>

Apache mod_proxy: Parameter `acquire` doesn't work

We use the apache2-worker package with mod_proxy, mod_proxy_balancer and mod_status.
Apache is configured as a load balancer / dispatcher to WFS servers.
OS: SuSE SLES 11 SP2
Apache httpd: version 2.2.12
All of our workers (WFS servers) can handle only one request at a time. So in /etc/apache2/server-tuning.conf, in the <IfModule worker.c> section, we set the parameter ServerLimit to 1.
In the configuration of a BalancerMember we used the parameter max=1.
I.e. /etc/apache2/conf.d/proxy.conf looks like this:
<Proxy balancer://wfscluster>
BalancerMember http://wfsserver01:9090 max=1 timeout=3600 acquire=30000
BalancerMember http://wfsserver01:9091 max=1 timeout=3600 acquire=30000
BalancerMember http://wfsserver02:9090 max=1 timeout=3600 acquire=30000
</Proxy>
ProxyPass /wfs balancer://wfscluster/ nofailover=On
The parameter acquire:
Documentation says:
If set this will be the maximum time to wait for a free connection in the connection pool, in milliseconds. If there are no free connections in the pool the Apache will return SERVER_BUSY status to the client.
My understanding of the acquire parameter is the following, and that is my desired behaviour, too:
The load balancer gets some requests from clients. At some point in time all workers are busy.
The next request is held by the load balancer until a worker becomes free. When a worker becomes free, the pending request is assigned to it, and that worker accepts the connection.
If no worker becomes free within the time specified by the acquire parameter, the client gets an error response.
But the acquire parameter doesn't work as expected. The load balancer assigns the next request to a busy worker.
Even if another worker becomes free in the meantime, the request stays assigned to the busy worker, and the client has to wait until that worker has finished its current request and accepts the new one.
If you do a /etc/init.d/apache2 reload, you get an error message in Apache's error_log:
BalancerMember Acquire timeout has wrong format
After that message, httpd dies.
If you only start or restart httpd, you don't get that message and httpd stays up.
I also tried specifying a unit, as in acquire=30000ms, but the error remains.
The only thing that helps is removing the acquire parameter, but then the behaviour described above is still the same.
So the question is:
How do I have to use the acquire parameter? Does someone have a working example?
Do I have to use other parameters to get the desired behaviour?

Apache proxy caching "service temporarily unavailable" response when target is down

I have Apache sitting in front of my Node server. Node is running on a certain port; I am using Apache to proxy to that port, and I also have Apache configured for HTTPS.
When I start Apache and then start my Node server, everything runs great. If I bring down the Node server and try to hit my service, Apache says 'Service Temporarily Unavailable'. This is expected, as my Node server is down.
However, when I bring my server back up without touching Apache and try to hit my service again, Apache still says 'Service Temporarily Unavailable'. It's like Apache is not trying again. If I bounce Apache, all is well again.
Since I am running with forever, there is a chance my server could be down for a few seconds if a fatal error happens. I don't want to have to bounce Apache when that happens.
Is there any way to get Apache to always try, and not cache the fact that a service it recently tried to hit was unavailable?
You need to add retry=0 to the ProxyPass directive.
So it will be something like:
ProxyPass /example http://backend.example.com retry=0
Check some info here: http://httpd.apache.org/docs/current/mod/mod_proxy.html#proxypass
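For the setup described here (Apache terminating HTTPS in front of a Node server on a local port), the directive would sit inside the SSL virtual host; the port 3000 and the certificate paths below are assumptions for illustration:
<VirtualHost *:443>
ServerName example.com
SSLEngine on
SSLCertificateFile /etc/ssl/certs/example.com.crt
SSLCertificateKeyFile /etc/ssl/private/example.com.key
# retry=0 retries the backend immediately instead of keeping it in the
# error state for the default 60 seconds after a failed request
ProxyPass / http://127.0.0.1:3000/ retry=0
ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>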

Apache httpd mod_proxy_balancer: broadcast to all cluster members?

I have a few HTTP/PHP servers fronted by an Apache/mod_proxy_balancer load balancer, with a typical cluster setup as below:
<Proxy balancer://mycluster>
BalancerMember http://192.168.1.10
BalancerMember http://192.168.1.11
BalancerMember http://192.168.1.12
BalancerMember http://192.168.1.13
</Proxy>
My question: is there a way to configure Apache so that certain specific requests are sent to all members of the cluster, instead of being proxied to just one as usual?
I ask because each member of my cluster uses a local data cache based on XCache. Each member has an HTTP-accessible script to unset a specific cache item on itself. On some rare occasions, I need to clear the same cache entry on all servers.
I could write a separate bash/curl script to hit each server in sequence, but since the cluster definition lives in my httpd.conf, I'd rather not have to copy it somewhere else; I'm also hoping to retain a curl-able endpoint on the load balancer itself to do global cache clearing.
Note: I am not asking about memcache. I'm already using memcache for other things, where servers need synchronized storage. In this case I use XCache for almost-persistent, no-sync-required data caching.