Apache: delete/remove/undefine a proxy balancer definition

I have an Apache configuration that needs a balancer that uses a set of temporary upstream servers for a few months and then replaces them with a permanent set. I am trying to design an approach that lets me deliver both configurations at install time and makes it easy to switch between them programmatically later on. This needs to be done on about 40 servers that all have unique configurations.
What I've tried so far...
I added the following code to the httpd.conf file:
<Proxy balancer://upstream>
BalancerMember http://permanentserver1:80
BalancerMember http://permanentserver2:80 status=+H
BalancerMember http://permanentserver3:80 status=+H
</Proxy>
Include conf/temp_upstream.conf
…and then inside the temp_upstream.conf file, I try to overwrite the definition of the balancer:
<Proxy balancer://upstream>
BalancerMember http://temporaryserver1:80
BalancerMember http://temporaryserver2:80 status=+H
BalancerMember http://temporaryserver3:80 status=+H
</Proxy>
…but it doesn't seem to work. The second balancer definition appears to be ignored (although it may be merged - I can't easily tell).
The reason I'm using this approach is that I can simply replace the temp_upstream.conf file with an empty file when it's time to perform the switchover - and then restart Apache.
Is there any way I can make this configuration work? Is there a way to undefine/delete a balancer that was defined earlier in the configuration, so that the second definition is accepted? (I do know that I could pass a define on the startup line and use IfDefine to conditionally process the right definition, as sketched below - but that would mean modifying the Apache startup command, which I'd rather not do.)
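For context, the IfDefine approach I'd rather avoid would look something like the following (the USETEMP define name is just illustrative), with Apache started as httpd -D USETEMP during the temporary period:

<IfDefine USETEMP>
<Proxy balancer://upstream>
BalancerMember http://temporaryserver1:80
BalancerMember http://temporaryserver2:80 status=+H
BalancerMember http://temporaryserver3:80 status=+H
</Proxy>
</IfDefine>
<IfDefine !USETEMP>
<Proxy balancer://upstream>
BalancerMember http://permanentserver1:80
BalancerMember http://permanentserver2:80 status=+H
BalancerMember http://permanentserver3:80 status=+H
</Proxy>
</IfDefine>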

I recently came upon a perfect solution to my problem.
I confirmed that the two definitions are merged in memory into one larger definition.
I was able to make it work exactly as I wanted by adding lbset=0 (the default) to each BalancerMember definition in the temporary configuration in temp_upstream.conf, and lbset=1 to the BalancerMember definitions in the permanent configuration in httpd.conf. The lbset=1 members are only used after all lbset=0 members have failed.
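In other words, the merged configuration effectively becomes the following (a reconstruction using the server names from the question, with comments added):

In httpd.conf:

<Proxy balancer://upstream>
# Permanent members: lbset=1, only tried once every lbset=0 member has failed
BalancerMember http://permanentserver1:80 lbset=1
BalancerMember http://permanentserver2:80 lbset=1 status=+H
BalancerMember http://permanentserver3:80 lbset=1 status=+H
</Proxy>
Include conf/temp_upstream.conf

In temp_upstream.conf:

<Proxy balancer://upstream>
# Temporary members: lbset=0 is the default, so these are tried first
BalancerMember http://temporaryserver1:80 lbset=0
BalancerMember http://temporaryserver2:80 lbset=0 status=+H
BalancerMember http://temporaryserver3:80 lbset=0 status=+H
</Proxy>

At switchover time, temp_upstream.conf is replaced with an empty file and Apache is restarted; the lbset=1 members then take all traffic.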

Related

Memcached creates three keys every 60 seconds

I have something occurring that I don't quite understand, and it may or may not be related to Memcached. My current setup is three Tomcat nodes, all on the same machine, load balanced by Apache HTTP 2.4, with Lucee 4.5 running on each node. I am using Memcached in a virtual machine as a session store.
Everything seems to be working great. However, I noticed when looking at the contents of the Memcached items that every 60 seconds, like clockwork, new keys are added based on the number of nodes I am running. So if I have all three nodes running, every 60 seconds three keys are added to the cache, with my initial session key still there like it should be. The very first stats cachedump 12 100 listed a single item: my initial session, right after making my first request.
Because the number of keys added every minute matches the number of nodes running, it makes me wonder whether a setting in Apache or Tomcat is wrong, or whether what I am seeing is common for Memcached (doubtful). My guess would be Tomcat, but I have nothing to go on, and all of my searches for this lead nowhere, so I am hoping someone here can point me in the right direction. It might not be that big of a deal, but it seems like unnecessary data is getting stored when it doesn't need to be.
Below is a snippet of the proxy area in my Apache configuration that deals with the load balancing:
<Proxy balancer://nodes>
BalancerMember ajp://127.0.0.1:5009 loadfactor=5
BalancerMember ajp://127.0.0.1:6009 loadfactor=5
BalancerMember ajp://127.0.0.1:7009 loadfactor=5
# BalancerMember ajp://192.168.56.101:8009 route=node1 loadfactor=5
# BalancerMember ajp://192.168.56.102:8009 route=node2 loadfactor=5
# BalancerMember ajp://192.168.56.103:8009 route=node3 loadfactor=5
ProxySet lbmethod=byrequests
</Proxy>
ProxyPreserveHost On
ProxyPassMatch ^/(.*)$ balancer://nodes/$1

How to properly configure Apache Httpd as Load Balancer where some hosts may be unavailable

I am using an Apache Httpd instance as proxy in front of multiple Java Tomcat instances. Apache acts as load balancer for the Tomcat instances.
The Apache config basically looks as follows:
<Proxy balancer://mycluster>
BalancerMember ajp://host1:8280 route=jvmRoute-8280
BalancerMember ajp://host2:8280 route=jvmRoute-8280
BalancerMember ajp://host3:8280 route=jvmRoute-8280
</Proxy>
<VirtualHost *:80>
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
</VirtualHost>
This basically works when the AJP ports are configured in the Tomcat instances. Requests are sent to one of the hosts, and the load is distributed across the Tomcat instances.
However, I see very long delays that seem to be caused inside httpd whenever one of the hosts is not available. It seems Apache does not remember that a host is unavailable: it repeatedly tries to send requests to the missing host as well, instead of sending them to one of the available hosts and retrying the failed host some time later.
Is there a way to configure mod_proxy et al. in Apache httpd to support such a failover scenario, i.e. to have multiple hosts without incurring huge delays when one host fails? Preferably, Apache should periodically check in the background which hosts are gone and not send them any requests.
I did find HAProxy, which seems better suited for this kind of thing, but I would prefer to stick with Apache for a number of unrelated reasons.
Update
In the meantime I found out that part of my problem was caused by clients that kept connections open endlessly, so that no more connections/threads were available.
Thus I am changing the question to:
What configuration options would you use to minimize the effect of something like this? I.e., should I allow many open connections, or close them quickly in this case? Otherwise, my current config sounds vulnerable to a very easy DoS attack.
Clients will not keep the connection open endlessly. Check your Apache server-tuning.conf and look for the KeepAliveTimeout setting. Lower it to something sensible.
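For example (values are illustrative, not a recommendation for every workload):

# Keep persistent connections, but free worker threads quickly when clients go idle
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100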
Your changes to connectiontimeout and retry are indeed what you have to do. I'd lower connectiontimeout, though; 10 seconds is still ages. If the back end is in the same location, why not set it in milliseconds? connectiontimeout=200ms should leave plenty of time to set up the connection.
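Applied to the BalancerMember line from the accepted answer, that would look like this (the ms suffix is supported by mod_proxy):

BalancerMember ajp://host1:8280 route=jvmRoute-8280 connectiontimeout=200ms retry=600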
I think I found at least sort of a workaround, or a simple solution. mod_proxy seems to have a very long connectiontimeout by default (300 seconds). If you do not set it differently, it will take a long time until offline nodes are detected as being in "err" state.
By setting a short connectiontimeout and increasing the retry I could make it work better for me:
BalancerMember ajp://host1:8280 route=jvmRoute-8280 connectiontimeout=10 retry=600
This will ensure that failing connections are detected fairly quickly and Apache does not retry failing servers too often. Unfortunately, it seems Apache uses actual requests to check the balancer members, so from time to time single requests may be slow while it tries to reach a server previously put into "err" state. There seems to be no heartbeat or watchdog feature; other load-balancing solutions, notably HAProxy, provide such features.
Read up on mod_proxy and mod_proxy_balancer for more details.
Additionally, server-status via mod_status and the balancer-manager page via mod_proxy_balancer have been a great help in diagnosing this!
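For reference, enabling those two pages looks something like this (Apache 2.2-style access control; the allowed address is an assumption, so restrict it to whatever suits your network):

<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>

<Location /balancer-manager>
SetHandler balancer-manager
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>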
It seems you have forgotten the ping parameter. (For AJP back ends it is implemented as CPING/CPONG; for HTTP back ends it sends a 100-Continue check.)
Like so:
<Proxy "balancer://www">
BalancerMember "http://192.168.0.100:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.101:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.102:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.103:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.104:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.105:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.106:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
SetEnv proxy-nokeepalive 1
</Proxy>
ProxyPass "/www/" "balancer://www/"
ProxyPassReverse "/www/" "balancer://www/"

Providing unique directives to proxy block by virtual host

Say I have the following Proxy block in my main config:
<Proxy balancer://PrivateSSL/>
BalancerMember http://host:8080/ route=01 loadfactor=100
BalancerMember http://host:8080/ route=02 loadfactor=100
ProxySet stickysession=ROUTEID
</Proxy>
Now, in a VirtualHost can I "enhance/spice/modify" that block like so:
<VirtualHost ip:port>
...
<Proxy balancer://PrivateSSL/>
RequestHeader set Host reverse-proxy-host
</Proxy>
</VirtualHost>
without having all the previously defined Proxy elements repeated?
I'm actually going to play with this, but the community might have a pattern that works better (maybe saying that is verboten, but I think others will benefit from the answer).
Testing locally is going to be a dog. But some RTFM helped out:
The configuration sections are applied in a very particular order. Since this can have important effects on how configuration directives are interpreted, it is important to understand how this works.
The order of merging is:
1. <Directory> (except regular expressions) and .htaccess done simultaneously (with .htaccess, if allowed, overriding <Directory>)
2. <DirectoryMatch> (and <Directory ~>)
3. <Files> and <FilesMatch> done simultaneously
4. <Location> and <LocationMatch> done simultaneously
5. <If>
...
When the request is served by mod_proxy, the <Proxy> container takes the place of the <Directory> container in the processing order.
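Taken at face value, that means a <Proxy> block inside a virtual host should merge with (rather than replace) the global one, so a sketch like the following ought to work without repeating the members (untested):

# Main config: members and stickiness defined once
<Proxy balancer://PrivateSSL/>
BalancerMember http://host:8080/ route=01 loadfactor=100
BalancerMember http://host:8080/ route=02 loadfactor=100
ProxySet stickysession=ROUTEID
</Proxy>

# Virtual host: only the additional directive; members are inherited
<VirtualHost ip:port>
<Proxy balancer://PrivateSSL/>
RequestHeader set Host reverse-proxy-host
</Proxy>
</VirtualHost>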

Apache Timeout Configuration

We are using Apache 2.2, and we have three servers configured for load-balancing purposes, like below:
BalancerMember http://node1:port/ route=node1
BalancerMember xxxx://node2:xxxx/ route=node2
BalancerMember xxxx://node3:xxxx/ route=node3
However, the backend application nodes configured as balancer members require a lot of processing time, and hence we were facing timeout issues like the one below:
“The timeout specified has expired: proxy: error reading status line from remote server”
As I had a customised .conf file, I had to add the lines below explicitly to avoid picking up the default timeout value from the default http-default.conf file:
<VirtualHost server:port>
Timeout 500
<Proxy balancer://xxxxx>
BalancerMember http://node1:port/ route=node1 timeout=500
</Proxy>
</VirtualHost>
So now my questions are:
Do I need to explicitly configure the timeout value at both of the levels shown above?
a) Timeout 500 outside the Proxy block.
b) timeout=500 at the BalancerMember level.
I read on the internet that if the timeout of an Apache BalancerMember is not configured, the global Apache Timeout is inherited there. Please suggest.
Also, please suggest the exact parameters that need to be tuned when huge numbers of concurrent requests are anticipated.
Thanks

How do I enable sticky load balancing based on session identifiers using apache mod_proxy_balancer

Our proxy configuration (in httpd.conf) to send requests to two JBoss instances is given below; it is based on mod_proxy_balancer:
<Proxy balancer://mycluster>
Allow from all
BalancerMember http://192.168.1.2:9080
BalancerMember http://192.168.1.2:8080
</Proxy>
ProxyPass /app balancer://mycluster/app
ProxyPassReverse /app http://192.168.1.2:9080/app
ProxyPassReverse /app http://192.168.1.2:8080/app
How do I enable sticky load balancing based on session identifiers? Am I supposed to set the following flag as part of the Proxy declaration? It doesn't seem to have any effect. How would I debug whether this is working?
SetEnv BALANCER_SESSION_STICKY JSESSIONID
The PHP sticky sessions article was an interesting read, and that led me to look for a JBoss-specific solution. The key is having the route appended to the session value in the jsessionid param/cookie. JBoss (actually Tomcat) has built-in support for this.
Add a unique jvmRoute value to the Engine configuration in each server.xml. Then change <attribute name="UseJK">false</attribute> in jboss-service.xml to true.
The whole setup is described in Using mod_proxy with JBoss.
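Putting the Apache side together, a sketch would look like this (the node1/node2 route values are assumptions; they must match each instance's jvmRoute):

<Proxy balancer://mycluster>
Allow from all
BalancerMember http://192.168.1.2:9080 route=node1
BalancerMember http://192.168.1.2:8080 route=node2
# Check both the JSESSIONID cookie and the jsessionid URL parameter
ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass /app balancer://mycluster/app
ProxyPassReverse /app http://192.168.1.2:9080/app
ProxyPassReverse /app http://192.168.1.2:8080/app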