Apache Timeout Configuration

We are using Apache 2.2 and have three servers configured for load balancing, like below:
BalancerMember http://node1:port/ route=node1
BalancerMember xxxx://node2:xxxx/ route=node2
BalancerMember xxxx://node3:xxxx/ route=node3
However, the backend application nodes configured as balancer members require a lot of processing time, and hence we were facing timeout issues like the one below:
"The timeout specified has expired: proxy: error reading status line from remote server"
As I had a customised .conf file, I had to add the lines below explicitly to avoid picking up the default timeout value from the default http-default.conf file:
<VirtualHost server:port>
Timeout 500
<Proxy balancer://xxxxx>
BalancerMember http://node1:port/ route=node1 timeout=500
</Proxy>
</VirtualHost>
So now my questions are:
Do I need to explicitly configure the timeout value at both levels, as shown above?
a) Timeout 500 outside the Proxy block.
b) timeout=500 at the BalancerMember level.
I read on the internet that if the timeout of an Apache BalancerMember is not configured, the global Apache timeout is inherited. Please suggest.
Also, please suggest which parameters need to be tuned when a huge number of concurrent requests is anticipated.
Thanks
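For reference, here is a minimal sketch of how the two timeout levels relate (values and names are illustrative; per the mod_proxy documentation, a BalancerMember without its own timeout falls back to ProxyTimeout, which in turn falls back to the server-wide Timeout):
# server-wide default; also the final fallback for proxied I/O
Timeout 300
# optional: overrides Timeout for all proxied requests
ProxyTimeout 500
<Proxy balancer://mycluster>
# timeout= here overrides ProxyTimeout/Timeout for this member only
BalancerMember http://node1:8080/ route=node1 timeout=500
</Proxy>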

Related

Apache load balancer dropping the HTTP request body

I have configured an Apache HTTP server with mod_proxy to load balance between two Jetty servers (sticky sessions).
Everything works fine and as expected while the two servers are up and running. But if I take one of the servers down and then attempt to make an HTTP POST to that server, the Apache balancer redirects the POST to the running server but with an empty body, losing the original request.
After the request that triggered the redirect to the running server, all subsequent requests work fine.
Apache configuration:
<Proxy balancer://cluster>
BalancerMember http://localhost:9090 route=node1
BalancerMember http://localhost:9091 route=node2
ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPreserveHost On
ProxyPass "/" "balancer://cluster/"
ProxyPassReverse "/" "balancer://cluster/"
I'm using Apache Server 2.4 and Jetty 9.4.22
Any ideas on why this is happening?
Thanks.
It looks like you hit the bug introduced as a regression in 2.4.41. You can check out the details here: https://bz.apache.org/bugzilla/show_bug.cgi?id=63891
To remedy, you will need to upgrade to 2.4.42 or greater.

apache mod_proxy_balancer randomly stops sending traffic to backend server, but no errors

I am using mod_proxy_balancer to load balance two back-end IIS servers. When monitoring the balancer-manager gui, I noticed that occasionally apache will stop sending traffic to one of the members. However, there are no errors present in the logs, and nothing to indicate that the server is unavailable. I have tried various lbmethods (bytraffic, bybusyness) and see the same result. I need to determine why traffic stops going to a member that is seemingly in good health and not returning errors. This generally happens under heavy load, which results in performance issues as one server is handling all requests.
Relevant config:
<Proxy balancer://cluster1>
BalancerMember http://iis1:80 route=2 timeout=45 keepAlive=On
BalancerMember http://iis2:80 route=1 timeout=45 keepAlive=On
ProxySet stickysession=ROUTEID
</Proxy>
Figured this out: it's because we are using sticky sessions in both our hardware load balancer and Apache balancer configs. So when we run a load test using JMeter, all of the traffic goes to one server. I hope this helps.
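For context on the stickysession=ROUTEID setting above: the ROUTEID cookie itself has to be issued somewhere, typically along these lines (adapted from the mod_proxy_balancer documentation; addresses and routes are placeholders, and mod_headers must be loaded):
# set the routing cookie when the balancer picks or changes a route
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://cluster1>
BalancerMember http://192.0.2.10:80 route=1
BalancerMember http://192.0.2.11:80 route=2
ProxySet stickysession=ROUTEID
</Proxy>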

Memcached creates three keys every 60 seconds

I have something occurring I don't quite understand and it may or may not be related to Memcached. My current setup is three tomcat nodes all on the same machine load balanced by Apache HTTP 2.4 with Lucee 4.5 running on each node. I am using Memcached in a virtual machine as a session store.
Everything seems to be working great. However, I noticed when looking at the contents of the Memcached items that every 60 seconds, like clockwork, new keys are added based on the number of nodes I am running. So if I have all three nodes running, every 60 seconds three keys are added to the cache, with my initial session key still there like it should be. The very first stats cachedump 12 100 lists a single item, which is my initial session right after making my first request.
Because the number of keys added every minute matches the number of nodes running, it makes me wonder if it is a setting in Apache or Tomcat that I have wrong, or if what I am seeing is common for Memcached (doubtful). My guess would be Tomcat, but I have nothing to go on and all of my searches for this lead nowhere, so I am hoping someone here can point me in the right direction. It might not be that big of a deal, but it seems like unnecessary data is getting stored when it doesn't need to be.
Below is a snippet of the proxy area in my Apache configuration that deals with the load balancing
<Proxy balancer://nodes>
BalancerMember ajp://127.0.0.1:5009 loadfactor=5
BalancerMember ajp://127.0.0.1:6009 loadfactor=5
BalancerMember ajp://127.0.0.1:7009 loadfactor=5
# BalancerMember ajp://192.168.56.101:8009 route=node1 loadfactor=5
# BalancerMember ajp://192.168.56.102:8009 route=node2 loadfactor=5
# BalancerMember ajp://192.168.56.103:8009 route=node3 loadfactor=5
ProxySet lbmethod=byrequests
</Proxy>
ProxyPreserveHost On
ProxyPassMatch ^/(.*)$ balancer://nodes/$1

How to properly configure Apache Httpd as Load Balancer where some hosts may be unavailable

I am using an Apache Httpd instance as proxy in front of multiple Java Tomcat instances. Apache acts as load balancer for the Tomcat instances.
The Apache config basically looks as follows:
<Proxy balancer://mycluster>
BalancerMember ajp://host1:8280 route=jvmRoute-8280
BalancerMember ajp://host2:8280 route=jvmRoute-8280
BalancerMember ajp://host3:8280 route=jvmRoute-8280
</Proxy>
<VirtualHost *:80>
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
</VirtualHost>
This basically works when the AJP ports are configured in the Tomcat instances. Requests are sent to one of the hosts and the load is distributed across the Tomcat instances.
However, I see very long delays that seem to be caused inside httpd whenever one of the hosts is not available; it seems Apache does not remember that the host is down, and repeatedly tries to send requests to the missing host instead of sending them to one of the available hosts and retrying the failed host some time later.
Is there a way to configure mod_proxy et al. to support such a failover scenario, i.e. having multiple hosts without causing huge delays when one host fails? Preferably Apache should periodically check in the background which hosts are gone and not ask them to serve any requests.
I did find HAProxy which seems to be more suited for this kind of thing, but I would prefer to stick with Apache for a number of unrelated reasons.
Update
In the meantime I found out that part of my problem was caused by clients which kept the connection open endlessly and thus no more connections/threads were available.
Thus I am changing the question to:
What configuration options would you use to minimize the effect of something like this, i.e. allow many open connections, or close them quickly? Otherwise this sounds like a very easy DoS attack against my current config.
Clients will not keep the connection open endlessly. Check your Apache server-tuning.conf and look for the KeepAliveTimeout setting. Lower it to something sensible.
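A minimal sketch of the relevant directives (the file they live in varies by distribution; server-tuning.conf is a SUSE convention, and the values below are only a starting point):
KeepAlive On
# maximum requests a client may send over one persistent connection
MaxKeepAliveRequests 100
# seconds an idle keep-alive connection may hold a worker before being closed
KeepAliveTimeout 5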
Your changes to connectiontimeout and retry are indeed what you have to do. I'd lower connectiontimeout, though; 10 seconds is still ages. If the back end is in the same location, why not set it in milliseconds? connectiontimeout=200ms should leave plenty of time to set up the connection.
I think I found at least a sort of workaround, or simple solution. mod_proxy seems to have a very long connectiontimeout by default (300 seconds). If you do not set it differently, it will take a long time until offline nodes are detected as being in the "err" state.
By setting a short connectiontimeout and increasing the retry I could make it work better for me:
BalancerMember ajp://host1:8280 route=jvmRoute-8280 connectiontimeout=10 retry=600
This will ensure that failing connections are detected fairly quickly and that Apache does not retry failing servers too often. Unfortunately, it seems Apache uses actual requests for checking the balancer members, and thus from time to time single requests may be slow when it tries to reach a server previously put into the error state. It seems there is no heartbeat or watchdog feature; other load balancing solutions, notably HAProxy, bring such features.
Read up on mod_proxy and mod_proxy_balancer for more details.
Additionally, server-status via mod_status and the balancer-manager page provided by mod_proxy_balancer have been a great help in diagnosing this!
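If they are not already enabled, both pages are exposed with handler blocks along these lines (2.4-style access control shown; the admin network below is a placeholder, and mod_status plus mod_proxy_balancer must be loaded):
<Location /server-status>
SetHandler server-status
Require ip 192.0.2.0/24
</Location>
<Location /balancer-manager>
SetHandler balancer-manager
Require ip 192.0.2.0/24
</Location>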
It seems you have forgotten the ping parameter (for AJP it is implemented as CPING, for HTTP as 100-Continue).
Like so:
<Proxy "balancer://www">
BalancerMember "http://192.168.0.100:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.101:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.102:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.103:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.104:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.105:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.106:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
SetEnv proxy-nokeepalive 1
</Proxy>
ProxyPass "/www/" "balancer://www/"
ProxyPassReverse "/www/" "balancer://www/"

Apache proxy load balancing backend server failure detection

Here's my scenario (designed by my predecessor):
Two Apache servers serving reverse proxy duty for a number of mixed backend web servers (Apache, IIS, Tomcat, etc.). There are some sites for which we have multiple backend web servers, and in those cases, we do something like:
<Proxy balancer://www.example.com>
BalancerMember http://192.168.1.40:80
BalancerMember http://192.168.1.41:80
</Proxy>
<VirtualHost *:80>
ServerName www.example.com:80
CustomLog /var/log/apache2/www.example.com.log combined
<Location />
Order allow,deny
Allow from all
ProxyPass balancer://www.example.com/
ProxyPassReverse balancer://www.example.com/
</Location>
</VirtualHost>
So in this example, I've got one site (www.example.com) in the proxy servers' configs, and that site is proxied to one or the other of the two backend servers, 192.168.1.40 and .41.
I'm evaluating this to make sure that we are fault tolerant on all of our web services (I've already put the two reverse proxy servers into a shared IP cluster for this reason), and I want to make sure that the load-balanced backend servers are fault tolerant as well. But I'm having trouble figuring out if backend failure detection (and the logic to avoid the failed backend server) is built into the mod_proxy_balancer module...
So if 192.168.1.40 goes down, will Apache detect this (I'll understand if it takes a failed request first) and automatically route all requests to the other backend, 192.168.1.41? Or will it continue to balance requests between the failed backend and the operational backend?
I've found some clues in the Apache documentation for mod_proxy and mod_proxy_balancer that seem to indicate that failure can be detected ("maxattempts = Maximum number of failover attempts before giving up.", "failonstatus = A single or comma-separated list of HTTP status codes. If set this will force the worker into error state when the backend returns any status code in the list."), but after a few days of searching, I've found nothing conclusive saying for sure that it will (or at least "should") detect backend failure and recovery.
I will say that most of the search results reference using the AJP protocol to pass the traffic to the backend servers, and this apparently does support failure detection-- but my backends are a mixture of Apache, IIS, Tomcat and others, and I am fairly sure that many of them don't support AJP. They are also a mixture of Windows 2k3/2k8 and Linux (mostly Ubuntu Lucid) boxes running various different applications with various different requirements, so add-on modules like Backhand and LVS aren't an option for me.
I've also tried to empirically test this feature, by creating a new test site like this:
<Proxy balancer://test.example.com>
BalancerMember http://192.168.1.40:80
BalancerMember http://192.168.1.200:80
</Proxy>
<VirtualHost *:80>
ServerName test.example.com:80
CustomLog /var/log/apache2/test.example.com.log combined
LogLevel debug
<Location />
Order allow,deny
Allow from all
ProxyPass balancer://test.example.com/
ProxyPassReverse balancer://test.example.com/
</Location>
</VirtualHost>
Where 192.168.1.200 is a bogus address that isn't running any web server, to simulate a backend failure. The test site was served up without a problem for a bunch of different client machines, but even with the LogLevel set to debug, I didn't see anything logged to indicate that it detected that one of the backend servers was down... And I'd like to make 100% sure that I can take our load-balanced backends down for maintenance (one at a time, of course) without affecting production sites.
http://httpd.apache.org/docs/2.4/mod/mod_proxy.html Section "BalancerMember parameters", property=retry:
If the connection pool worker to the backend server is in the error state, Apache httpd will not forward any requests to that server until the timeout expires. This enables [one] to shut down the backend server for maintenance, and bring it back online later. A value of 0 means always retry workers in an error state with no timeout.
However, there are other failure conditions that wouldn't be caught by mod_proxy itself, for example an IIS backend running an application which is down. IIS is up, so a connection can be made and a page can be read; it's just that the page will always be a 500 Internal Server Error. Here you will have to use failonstatus to catch it and force the worker into an error state.
In all cases, once the worker is in an error state, traffic will not be directed to it. I've been trying different ways of consuming that first failure and retrying it, but there always seem to be cases where an error page makes it back to the client.
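A minimal sketch of that approach, assuming the broken application reliably answers with 500 or 503 (failonstatus is a balancer parameter, available in httpd 2.2.17 and later; addresses are placeholders):
<Proxy balancer://appcluster>
BalancerMember http://192.0.2.10:8080 retry=60
BalancerMember http://192.0.2.11:8080 retry=60
# force a member into the error state when its backend returns one of these codes
ProxySet failonstatus=500,503
</Proxy>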
There is a 'ping' property in the 'BalancerMember parameters'.
Reading the documentation, it sounds like 'ping' set to 500ms will send a request before mod_proxy directs you to a BalancerMember. mod_proxy will wait 500ms for a response from a BalancerMember, and if mod_proxy doesn't get a response it will put the BalancerMember into an error state.
I tried implementing this but it did not appear to help with directing traffic to a live BalancerMember.
<Proxy balancer://APICluster>
BalancerMember https://api01 route=qa-api1 ttl=5 ping=500ms
BalancerMember https://api02 route=qa-api2 ttl=5 ping=500ms
ProxySet lbmethod=bybusyness stickysession=ROUTEID
</Proxy>
http://httpd.apache.org/docs/2.4/mod/mod_proxy.html
Ping property tells the webserver to "test" the connection to the backend before forwarding the request. For AJP, it causes mod_proxy_ajp to send a CPING request on the ajp13 connection (implemented on Tomcat 3.3.2+, 4.1.28+ and 5.0.13+). For HTTP, it causes mod_proxy_http to send a 100-Continue to the backend (only valid for HTTP/1.1 - for non HTTP/1.1 backends, this property has no effect). In both cases, the parameter is the delay in seconds to wait for the reply. This feature has been added to avoid problems with hung and busy backends. This will increase the network traffic during the normal operation which could be an issue, but it will lower the traffic in case some of the cluster nodes are down or busy. By adding a postfix of ms, the delay can be also set in milliseconds.