Connections from Apache to Tomcat using ProxyPass are not closed after each request - apache

I currently have apache2 configured so that requests on specific URLs such as /myapp are directed to my internal Tomcat server at tomcathost:8080/myapp.
All requests to /myapp through apache2 work as expected.
The problem I'm facing is that whenever a request is sent to /myapp through apache2, apache2 seems to keep the connection to Tomcat open, and after a while all the threads in Tomcat are taken by Apache and apparently never released.
Could somebody point me in the right direction to solve this issue?
ProxyPass /myapp balancer://apps/myapp
ProxyPassReverse /myapp balancer://apps/myapp
<Proxy balancer://apps>
BalancerMember http://appserver01:8080 route=Node01 loadfactor=1
BalancerMember http://appserver02:8080 route=Node02 loadfactor=1
ProxySet lbmethod=byrequests
ProxySet stickysession=JSESSIONID|jsessionid
ProxySet nofailover=On
</Proxy>

You can set connectionTimeout in Tomcat's server.xml so that any connection left open is timed out and its thread released.
connectionTimeout="120000"
You can also use JConsole, which is part of the JDK installation, to monitor your Tomcat's threads via JMX and see whether some loop is causing the thread count to grow.

Related

server failover test issue from apache web server to jboss application

I have used ProxyPass and ProxyPassReverse inside httpd.conf on the Apache server to point requests to a particular JVM on the JBoss server.
I have 2 JBoss servers. During a failover test, one JBoss server is stopped abruptly; Apache cannot detect that this JBoss server is stopped and hence cannot redirect requests to the other server.
Any help on this?
Try:
<Proxy balancer://BALANCER_NAME>
BalancerMember https://foo1.bar:443 ping=1 loadfactor=1
BalancerMember https://foo2.bar:443 status=+H ping=1
</Proxy>
ProxyPass / balancer://BALANCER_NAME/
ProxyPassReverse / balancer://BALANCER_NAME/
In this configuration, all traffic is sent to foo1 and foo2 is kept as a hot standby (status=+H).
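If you also want a member that went down to be probed again sooner than the default (60 seconds), the retry parameter sets how many seconds a member stays in the error state before Apache tries it again; a sketch with assumed values:
<Proxy balancer://BALANCER_NAME>
BalancerMember https://foo1.bar:443 ping=1 loadfactor=1 retry=30
BalancerMember https://foo2.bar:443 status=+H ping=1 retry=30
</Proxy>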

apache2 proxy balancer - passing server name

I'm using apache2 as a local proxy balancer between the web and a JBoss machine.
I've used the following configuration:
<Proxy balancer://mycluster>
BalancerMember http://localhost:8080
</Proxy>
ProxyPass /test balancer://mycluster
If I call my machine with www.mymachine.com/test the call is passed to JBoss, but the request looks as if it had been made to 'localhost'.
How can I make sure the correct server name is passed as well?
Aviad
All I needed to do was add:
ProxyPreserveHost On
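In context, the directive sits alongside the proxy definition; a sketch based on the configuration above:
ProxyPreserveHost On
<Proxy balancer://mycluster>
BalancerMember http://localhost:8080
</Proxy>
ProxyPass /test balancer://mycluster
With ProxyPreserveHost On, Apache forwards the original Host header (www.mymachine.com) to the backend instead of the host from the BalancerMember URL.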

Endeca cluster load balancing

I have an Endeca cluster set up with 3 Dgraphs (1 EAC Central Server and 2 EAC agent-only instances). I am trying to put an Apache mod_proxy load balancer in front of the MDEX engines for testing purposes (I am using the presentation API to hit the MDEX engine; we are also working on the assembler API). We shall have an F5 (or Nginx, which one would be better?) hardware load balancer when we do the actual deployment. My Apache server is listening on port 5555, and all my Dgraphs are running on port 15000 on three different hosts. I'm directing all my queries to the Apache load balancer.
MDEX_HOST = localhost
MDEX_PORT = 5555
private static ENEConnection createConnection() {...}
And here is my Apache load balancer configuration. The load balancer modules included in httpd.conf are mod_proxy, mod_proxy_balancer, mod_proxy_connect, mod_proxy_http and mod_negotiation. I have put the load balancer configuration in the httpd-vhosts.conf file.
NameVirtualHost *:5555
<VirtualHost *:5555>
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
ServerName localhost
ProxyPass / balancer://cluster/
<Proxy balancer://cluster>
BalancerMember http://172.16.26.129:15000 loadfactor=1 retry=0 route=1
BalancerMember http://172.16.26.210:15000 loadfactor=1 retry=0 route=2
BalancerMember http://172.16.27.87:15000 loadfactor=1 retry=0 route=3
Order Deny,Allow
Deny from none
Allow from all
ProxySet lbmethod=byrequests
ProxySet stickysession=ROUTEID
</Proxy>
</VirtualHost>
<Location /balancer-manager>
SetHandler balancer
</Location>
When I do a query (type-ahead service) I'm getting the following error:
** Error Fri Apr 10 20:05:53 IST 2015 1428676553858 /atg/rest/processor/RestActorManager Caused by (#6):com.endeca.navigation.ENEException: HTTP Error 404 - Navigation Engine not able to process request 'http://localhost:5555/search?terms=je&rank=0&offset=0&irversion=640'.
Can anyone please look at my load balancer configuration and tell me what I'm doing wrong? Thanks.

How to properly configure Apache Httpd as Load Balancer where some hosts may be unavailable

I am using an Apache Httpd instance as a proxy in front of multiple Java Tomcat instances. Apache acts as a load balancer for the Tomcat instances.
The Apache config basically looks as follows:
<Proxy balancer://mycluster>
BalancerMember ajp://host1:8280 route=jvmRoute-8280
BalancerMember ajp://host2:8280 route=jvmRoute-8280
BalancerMember ajp://host3:8280 route=jvmRoute-8280
</Proxy>
<VirtualHost *:80>
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
</VirtualHost>
This basically works when the AJP ports are configured in the Tomcat instances. Requests are sent to one of the hosts and the load is distributed across the Tomcat instances.
However, I see very long delays that seem to be caused inside Httpd whenever one of the hosts is not available, i.e. it seems Apache does not remember that the host is unavailable and repeatedly tries to send requests to the missing host as well, instead of sending them to one of the available hosts and retrying the failing host some time later.
Is there a way to configure mod_proxy et al. in Apache Httpd to support such a failover scenario, i.e. having multiple hosts without huge delays when one host fails? Preferably Apache should periodically check in the background which hosts are gone and not ask them to serve any requests.
I did find HAProxy, which seems better suited for this kind of thing, but I would prefer to stick with Apache for a number of unrelated reasons.
Update
In the meantime I found out that part of my problem was caused by clients that kept the connection open endlessly, so no more connections/threads were available.
Thus I am changing the question to:
What configuration options would you use to minimize the effect of something like this, i.e. allow many open connections or close them quickly in this case? Otherwise this sounds like a very easy DoS attack against my current config.
Clients will not keep the connection open endlessly. Check your Apache server-tuning.conf and look for the KeepAliveTimeout setting. Lower it to something sensible.
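For example, a conservative keep-alive setup might look like this (the values are assumptions to illustrate the directives, not recommendations for every site):
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100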
Your changes to connectiontimeout and retry are indeed what you have to do. I'd lower connectiontimeout though; 10 seconds is still ages. If the back end is in the same location, why not set it in milliseconds? connectiontimeout=200ms should leave plenty of time to set up the connection.
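The ms suffix is accepted directly on the worker parameter, so the BalancerMember line from the workaround below could become (values are assumptions):
BalancerMember ajp://host1:8280 route=jvmRoute-8280 connectiontimeout=200ms retry=600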
I think I found at least a workaround, or a simple solution. mod_proxy seems to have a very long connectiontimeout by default (300 seconds). If you do not set it differently, it will take a long time until offline nodes are detected as being in the "err" state.
By setting a short connectiontimeout and increasing the retry I could make it work better for me:
BalancerMember ajp://host1:8280 route=jvmRoute-8280 connectiontimeout=10 retry=600
This ensures that failing connections are detected fairly quickly and that Apache does not retry failing servers too often. Unfortunately, Apache seems to use actual requests to check the balancer members, so from time to time single requests may be slow while it tries to reach a server previously put into the err state. There seems to be no heartbeat or watchdog feature; other load-balancing solutions, notably HAProxy, do provide such features.
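Note: newer httpd releases (2.4.21 and later) ship an optional mod_proxy_hcheck module that can run such health checks in the background; a minimal sketch, assuming an HTTP member, a /health URI and a 10-second interval (all assumptions to adapt to your setup):
BalancerMember http://host1:8280 route=jvmRoute-8280 hcmethod=GET hcinterval=10 hcuri=/health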
Read up on mod_proxy and mod_proxy_balancer for more details.
Additionally, server-status via mod_status and the balancer-manager page provided by mod_proxy_balancer have been a great help in diagnosing this!
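To enable those pages, something like the following works (the access restriction shown is an assumption; adjust it to your network):
<Location "/server-status">
SetHandler server-status
Require ip 10.0.0.0/8
</Location>
<Location "/balancer-manager">
SetHandler balancer-manager
Require ip 10.0.0.0/8
</Location>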
It seems you have forgotten the ping parameter (for AJP this performs a CPING check; for HTTP it uses a 100-Continue probe before forwarding the request).
Like so:
<Proxy "balancer://www">
BalancerMember "http://192.168.0.100:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.101:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.102:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.103:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.104:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.105:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.106:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
SetEnv proxy-nokeepalive 1
</Proxy>
ProxyPass "/www/" "balancer://www/"
ProxyPassReverse "/www/" "balancer://www/"

mod_proxy_ajp configuration and session stickiness

All,
I have a JBoss and Apache setup hosting my .war file. I have enabled session stickiness to forward requests from Apache to JBoss. Assume I have 2 Apache and 2 JBoss instances.
Is the setting below correct? Currently session stickiness is not working and each request is assigned a new JSESSIONID.
<Proxy balancer://cluster>
Order deny,allow
Allow from all
BalancerMember ajp://1.1.1.1:8010/testing keepalive=On loadfactor=1 ping=10 ttl=600
BalancerMember ajp://2.2.2.2:8010/testing keepalive=On loadfactor=1 ping=10 ttl=600
</Proxy>
ProxyPass /testing balancer://cluster timeout=60 stickysession=JSESSIONID nofailover=On
Do I need to add the route variable to the BalancerMember configuration? And do I need to enable the useJK flag in JBoss?
YES, you need to add route to each balancer member.
route=member1
route=member2
That is how Apache knows which way to direct later requests. Look at your cookies in your browser.
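Applied to the configuration above, that would look roughly like this (the member names are placeholders, not values from your setup):
BalancerMember ajp://1.1.1.1:8010/testing route=member1 keepalive=On loadfactor=1 ping=10 ttl=600
BalancerMember ajp://2.2.2.2:8010/testing route=member2 keepalive=On loadfactor=1 ping=10 ttl=600
ProxyPass /testing balancer://cluster timeout=60 stickysession=JSESSIONID nofailover=On
On each backend instance, the same name must be set as jvmRoute on the Engine element in server.xml so it gets appended to the JSESSIONID (a sketch; the Engine name is an assumption about your JBoss version):
<Engine name="jboss.web" defaultHost="localhost" jvmRoute="member1">
The session cookie then ends in ".member1" or ".member2", which is what Apache matches against route to keep requests sticky.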