How to properly configure Apache Httpd as Load Balancer where some hosts may be unavailable

I am using an Apache Httpd instance as a proxy in front of multiple Java Tomcat instances. Apache acts as a load balancer for the Tomcat instances.
The Apache config basically looks as follows:
<Proxy balancer://mycluster>
BalancerMember ajp://host1:8280 route=jvmRoute-8280
BalancerMember ajp://host2:8280 route=jvmRoute-8280
BalancerMember ajp://host3:8280 route=jvmRoute-8280
</Proxy>
<VirtualHost *:80>
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
</VirtualHost>
This basically works when the AJP ports are configured in the Tomcat instances. Requests are sent to one of the hosts and the load is distributed across the Tomcat instances.
However, I see very long delays that seem to be caused inside Httpd whenever one of the hosts is not available. It seems Apache does not remember that a host is down and repeatedly tries to send requests to the missing host as well, instead of routing them to one of the available hosts and retrying the failed host some time later.
Is there a way to configure mod_proxy et al. in Apache Httpd to support such a failover scenario, i.e. having multiple hosts without causing huge delays when one host fails? Preferably, Apache should periodically check in the background which hosts are gone and not ask them to serve any requests.
I did find HAProxy which seems to be more suited for this kind of thing, but I would prefer to stick with Apache for a number of unrelated reasons.
Update
In the meantime I found out that part of my problem was caused by clients which kept the connection open endlessly and thus no more connections/threads were available.
Thus I am changing the question to:
What configuration options would you use to minimize the effect of something like this? Should I allow many open connections, or close them quickly in this case? Otherwise this sounds like a very easy DoS attack against my current config.

Clients will not keep the connection open endlessly. Check your Apache server-tuning.conf and look for the KeepAliveTimeout setting. Lower it to something sensible.
Your changes to connectiontimeout and retry are indeed what you have to do. I'd lower connectiontimeout, though; 10 seconds is still ages. If the back end is in the same location, why not set it in milliseconds? connectiontimeout=200ms should leave plenty of time to set up the connection.
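For illustration, a minimal sketch of the keep-alive tuning this answer refers to (the directive names are standard Apache settings; the values are only illustrative and should be adapted to your traffic):
# Allow persistent client connections, but close idle ones quickly
KeepAlive On
# Cap how many requests a single connection may serve
MaxKeepAliveRequests 100
# Drop idle keep-alive connections after a few seconds instead of a long default
KeepAliveTimeout 5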

I think I found at least sort of a workaround or simple solution. mod_proxy seems to have a very long connectiontimeout by default (300 seconds). If you do not set it differently, it will take a long time until offline nodes are detected as being in the "err" state.
By setting a short connectiontimeout and increasing the retry I could make it work better for me:
BalancerMember ajp://host1:8280 route=jvmRoute-8280 connectiontimeout=10 retry=600
This ensures that failing connections are detected fairly quickly and that Apache does not retry failing servers too often. Unfortunately, Apache seems to use actual requests for checking the balancer members, so from time to time a single request may be slow when Apache tries to reach a server previously put into the err state. There seems to be no heartbeat or watchdog feature; other load balancing solutions, notably HAProxy, do provide such features.
Read up on mod_proxy and mod_proxy_balancer for more details.
Additionally, the server-status page provided by mod_status and the balancer-manager page provided by mod_proxy_balancer have been a great help in diagnosing this!
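For reference, a minimal sketch of how those two status pages are typically exposed (the handler names come from mod_status and mod_proxy_balancer; the access restrictions are only illustrative and should be tightened to your admin network):
<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>
<Location /balancer-manager>
SetHandler balancer-manager
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>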

It seems you have forgotten the ping parameter (for AJP back ends it is implemented as CPING, for HTTP back ends as a 100-Continue check).
Like so:
<Proxy "balancer://www">
BalancerMember "http://192.168.0.100:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.101:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.102:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.103:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.104:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.105:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.106:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
SetEnv proxy-nokeepalive 1
</Proxy>
ProxyPass "/www/" "balancer://www/"
ProxyPassReverse "/www/" "balancer://www/"

Related

Load balancing between servers using Apache and JBoss

I am facing the following scenario: I have three servers, each with an instance of my application deployed in a standalone JBoss. I am trying to use a machine that will provide load balancing between these three servers, and for this I am using Apache's mod_proxy_balancer module (or at least trying to). The balancing itself was easy to set up and works correctly. However, I am having problems keeping user sessions and cookies, because whenever a new request is made the balancer may send it to another server, causing the user to lose his session. I would like a user who already has a session on one of the servers to keep being sent to that same server, or something of the sort.
Is it possible to achieve the desired result using such resources? If so, how should I make such a setup? If not, what other tool or feature should I use?
Here's the virtual host configuration:
<VirtualHost *:80>
ServerName server.int
ProxyPass / balancer://balance/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On
ProxyPass /balancer-manager !
ProxyPassReverse / balancer://balance/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On
ProxyPassReverseCookiePath / /
<Proxy balancer://balance/>
BalancerMember "http://server1.int" loadfactor=50
BalancerMember "http://server2.int" loadfactor=25
BalancerMember "http://server3.int" loadfactor=25
ProxySet lbmethod=byrequests
</Proxy>
<Location /balancer-manager>
SetHandler balancer-manager
</Location>
</VirtualHost>
Although no one has answered, I will leave the solution to my problem here in case it helps anyone in the future. I ended up using HAProxy, which can do exactly what I needed in a very simple way.
frontend app
bind *:80
bind *:443 ssl crt /etc/haproxy/certs/cert.pem
redirect scheme https if !{ ssl_fc }
mode http
default_backend app
backend app
balance leastconn
mode http
option httpchk HEAD / HTTP/1.0
cookie SERVERID insert indirect nocache
server server1 server1.test.com:80 check weight 50 fall 3 rise 2 cookie server1
server server2 server2.test.com:80 check weight 50 fall 3 rise 2 cookie server2
server server3 server3.test.com:80 check weight 50 fall 3 rise 2 cookie server3

Connections from Apache to Tomcat using proxypass are not closed after each request

I currently have apache2 configured so that requests on specific urls such as /myapp are directed to my internal tomcat server at tomcathost:8080/myapp.
All the requests to myapp through apache2 work as expected.
The problem I'm facing is that whenever a request is sent to myapp through apache2, apache2 seems to keep the connection to Tomcat open, and after a while all the threads in Tomcat are taken by Apache and apparently never released.
Could somebody point me in the right direction to solve this issue?
ProxyPass /myapp balancer://apps/myapp
ProxyPassReverse /myapp balancer://apps/myapp
<Proxy balancer://apps>
BalancerMember http://appserver01:8080 route=Node01 loadfactor=1
BalancerMember http://appserver02:8080 route=Node02 loadfactor=1
ProxySet lbmethod=byrequests
ProxySet stickysession=JSESSIONID|jsessionid
ProxySet nofailover=On
</Proxy>
You can set connectionTimeout in Tomcat's server.xml so that, in case any connection stays open, it gets timed out:
connectionTimeout="120000"
You might also use JConsole, which is part of the JDK installation, to monitor your Tomcat's threads via JMX and see whether anything is causing the thread count to grow.
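On the Apache side, mod_proxy also offers per-worker parameters that limit how long idle back-end connections are reused; a hedged sketch of the relevant member lines (parameter names are from the mod_proxy docs, the values are only illustrative):
# Close idle pooled connections to the back end after 60 seconds,
# or set disablereuse=On to open a fresh connection per request
BalancerMember http://appserver01:8080 route=Node01 loadfactor=1 ttl=60 disablereuse=On
BalancerMember http://appserver02:8080 route=Node02 loadfactor=1 ttl=60 disablereuse=On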

apache http server load balancer monitoring

I configured Apache HTTP Server to act as a load balancer using the mod_proxy module:
<Proxy balancer://clusterABCD>
BalancerMember http://192.168.0.222:8080/geoserver/wms loadfactor=8
BalancerMember http://192.168.0.14:8081/geoserver/wms loadfactor=8
BalancerMember http://192.168.0.222:8082/geoserver/wms status=+H
ProxySet lbmethod=bytraffic
Order allow,deny
Allow from all
</Proxy>
ProxyPass /LGroup balancer://clusterABCD/
Is there any way to monitor the load balancer functionality?
My questions are:
Is there any way to find out which BalancerMember is processing a given request?
Are there any settings available to improve this functionality?
Thanks in advance.
In response to both your questions: yes, it is possible, but you will need to enhance your Apache load balancing configuration via mod_proxy to have this functionality available.
I suggest you use the sample setup below:
<VirtualHost *:80>
ProxyRequests off
ServerName servername.local
<Proxy balancer://mycluster>
# TomcatA
BalancerMember http://172.20.20.101:8080 route=tomcatA
# TomcatB
BalancerMember http://172.20.20.102:8080 route=tomcatB
# TomcatC
BalancerMember http://172.20.20.103:8080 route=tomcatC
# Security – to determine who is allowed to access
# Currently all are allowed to access
Order Deny,Allow
Deny from none
Allow from all
# Load Balancer Settings
# We will be configuring a simple Round
# Robin style load balancer. This means
# that all nodes take an equal share
# of the load.
ProxySet lbmethod=byrequests
</Proxy>
# balancer-manager
# This tool is built into the mod_proxy_balancer
# module and will allow you to do some simple
# modifications to the balanced group via a gui
# web interface.
<Location /balancer-manager>
SetHandler balancer-manager
# I recommend locking this one down to your
# administering location
Order deny,allow
Allow from all
</Location>
# Point of Balance
# This setting will allow to explicitly name the
# location in the site that we want to be
# balanced, in this example we will balance "/"
# or everything in the site.
ProxyPass /balancer-manager !
ProxyPass / balancer://mycluster/ stickysession=JSESSIONID|jsessionid nofailover=Off scolonpathdelim=On
</VirtualHost>
To view the balanced requests you need to have the mod_proxy_balancer module installed and then use the configuration above.
In regards to availability, it depends on the load balancer settings. The round-robin approach shares the traffic equally between the nodes and is seen as possibly the best option for availability:
ProxySet lbmethod=byrequests
Also, if you want sessions to carry over from Apache to the application servers, you need to proxy to the AJP port instead of the HTTP port, along with corresponding changes on the application servers (such as Tomcat). More details are available at:
Load Balancing: Apache versus Physical Appliance
This may be too simple, but what about monitoring the (access) logs of your balancer members? That should show you which member is processing each request.
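Alternatively, you can log the chosen worker on the front end itself: mod_proxy_balancer exports environment variables such as BALANCER_WORKER_NAME for each proxied request, which can be written to the access log. A hedged sketch (the log format and file name are only illustrative):
LogFormat "%h %l %u %t \"%r\" %>s %b worker=%{BALANCER_WORKER_NAME}e" balancer
CustomLog logs/balancer_access.log balancer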

Apache Timeout Configuration

We are using Apache 2.2 and have three servers configured for load balancing purposes (as below):
BalancerMember http://node1:port/ route=node1
BalancerMember xxxx://node2:xxxx/ route=node2
BalancerMember xxxx://node3:xxxx/ route=node3
However, the back-end application nodes configured as balancer members require a lot of processing time, and hence we were facing timeout issues like the one below:
“The timeout specified has expired: proxy: error reading status line
from remote server ”
As I had a customised .conf file, I had to add the lines below explicitly to avoid picking up the default timeout value from the default http-default.conf file:
<VirtualHost server:port>
Timeout 500
<Proxy balancer://xxxxx>
BalancerMember http://node1:port/ route=node1 timeout=500
</Proxy>
</VirtualHost>
So now my questions are:
Do I need to explicitly configure the timeout value at both of the levels shown above,
a) Timeout 500 outside the Proxy section, and
b) timeout=500 at the BalancerMember level?
I read on the internet that if the timeout of an Apache BalancerMember is not configured, the global Apache timeout is inherited there. Please suggest.
Also, please suggest the exact parameters that need to be tuned when huge numbers of concurrent requests are anticipated.
Thanks
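For reference, a hedged sketch of how these timeouts typically layer according to the mod_proxy documentation (values are only illustrative; the per-worker timeout is the most specific and wins for that member, an unset worker timeout falls back to ProxyTimeout, and an unset ProxyTimeout falls back to the global Timeout):
# core: global default for all connections
Timeout 500
# mod_proxy: default for proxied requests, overrides Timeout for proxy traffic
ProxyTimeout 500
<Proxy balancer://xxxxx>
# per-worker: overrides both of the above for this member
BalancerMember http://node1:port/ route=node1 timeout=500
</Proxy>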

mod_ajp_proxy configurations and session stickiness

All,
I have a JBoss and Apache setup hosting my .war file. I have enabled session stickiness for forwarding requests from Apache to JBoss. Assume I have 2 Apache and 2 JBoss instances.
Is the setting below correct? Currently session stickiness is not working, and each request is assigned a new JSESSIONID.
<Proxy balancer://cluster>
Order deny,allow
Allow from all
BalancerMember ajp://1.1.1.1:8010/testing keepalive=On loadfactor=1 ping=10 ttl=600
BalancerMember ajp://2.2.2.2:8010/testing keepalive=On loadfactor=1 ping=10 ttl=600
</Proxy>
ProxyPass /testing balancer://cluster timeout=60 stickysession=JSESSIONID nofailover=On
Do I need to add the route variable to the BalancerMember configuration? And do I need to enable the useJK flag in JBoss?
Yes, you need to add a route to each BalancerMember:
route=member1
route=member2
That is how Apache knows where to direct later requests. Look at the cookies in your browser.
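A minimal sketch of how that might look with the configuration from the question (the route names are illustrative and must match the jvmRoute configured on each back-end instance so that it is appended to the JSESSIONID):
<Proxy balancer://cluster>
Order deny,allow
Allow from all
BalancerMember ajp://1.1.1.1:8010/testing route=member1 keepalive=On loadfactor=1 ping=10 ttl=600
BalancerMember ajp://2.2.2.2:8010/testing route=member2 keepalive=On loadfactor=1 ping=10 ttl=600
</Proxy>
ProxyPass /testing balancer://cluster timeout=60 stickysession=JSESSIONID nofailover=On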