How to get Tomcat worker status from JkManager - Apache

We have 3 machines:
mod_jk with a load balancer
first worker on Tomcat 8
second worker on Tomcat 8
Everything works as expected, but when one of the Tomcat instances is shut down, the status page on the load balancer still shows the state of that worker as OK/IDLE.
Any ideas how to force the status page to check the real status of the worker?
Related Materials
worker.properties
### Define worker names
worker.list=status,loadbalancer
### Declare Tomcat server 1
worker.worker1.port=8409
worker.worker1.host=centureapp1
worker.worker1.type=ajp13
worker.worker1.lbfactor=1
### Declare Tomcat server 2
worker.worker2.port=8410
worker.worker2.host=centureapp2
worker.worker2.type=ajp13
worker.worker2.lbfactor=1
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=worker1,worker2
worker.loadbalancer.sticky_session=1
worker.status.type=status

By default, balancer maintenance runs every 60 seconds, so the status page only reflects the real state of a worker after the next maintenance cycle, i.e. up to 60 seconds later.
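If you need the status page to notice a dead worker sooner, the maintenance interval can be shortened via the global worker.maintain property. A minimal sketch, assuming a 10-second interval suits your environment:

# worker.properties: run internal maintenance (which includes the
# recovery/state checks) every 10 seconds instead of the default 60
worker.maintain=10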

Related

OpenShift Online v3 - Timeout when reading response headers from daemon process

I created a Python API on OpenShift Online with the Python image. If you request all the data, it takes more than 30 seconds to respond and the server returns a 504 Gateway Timeout HTTP response. How do you configure how long a response may take? I created an annotation on the route, which seems to set the proxy timeout:
haproxy.router.openshift.io/timeout: 600s
The problem remains, but I now have logging, and the message appears to come from mod_wsgi.
I want to try changing the configuration of the httpd (mod_wsgi-express) process from request-timeout 60 to request-timeout 600. Where do you configure this? I am using the base image https://github.com/sclorg/s2i-python-container/tree/master/2.7
Logging:
Timeout when reading response headers from daemon process 'localhost:8080':/tmp/mod_wsgi-localhost:8080:1000430000/htdocs
Does someone know how to fix this error on OpenShift Online?
Besides altering the HAProxy timeout of my app's route:
haproxy.router.openshift.io/timeout: 600s
I altered the request-timeout and socket-timeout in the app.sh of my Python application, so the mod_wsgi-express server is configured with a higher timeout:
ARGS="$ARGS --request-timeout 600"
ARGS="$ARGS --socket-timeout 600"
My application now waits 10 minutes before cancelling a request.
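For completeness, the route annotation can also be applied from the command line. A sketch, where myapp is a hypothetical route name:

# set the HAProxy timeout for the route to 10 minutes
oc annotate route myapp --overwrite haproxy.router.openshift.io/timeout=600s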

Apache JkManager Activation Status is not updating

I changed the activation status of node1 in JkManager from Active to Deactivated, but after accessing the application URL and logging in, the status in JkManager changes back to Active. I couldn't find any errors in the Apache logs. Is there any other configuration required?
My setup uses Server Version: Apache/2.2.15 (Win32) with mod_jk/1.2.265 and JBoss Application Server 6. Below is the configured worker.properties file:
worker.list=workerlist
# Set properties for node1
worker.node1.type=ajp13
worker.node1.host=xxxx
worker.node1.port=xx
worker.node1.lbfactor=4
# Set properties for node2
worker.node2.type=ajp13
worker.node2.host=xxxx
worker.node2.port=xx
worker.node2.lbfactor=4
# Set properties for workerlist(lb)
worker.workerlist.type=lb
worker.workerlist.balance_workers=node1,node2
worker.workerlist.sticky_session=1
worker.list=jkstatus
worker.jkstatus.type=status
The issue was that the JBoss application server (server.xml) and the Apache virtual host were configured with the same port. We changed the JBoss port in server.xml, which resolved the issue. Thanks.
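For reference, the port in question is set on the AJP connector in JBoss's server.xml; it must match the worker port in worker.properties and must not clash with any port Apache binds to. A sketch, assuming the conventional AJP port 8009:

<!-- server.xml: the AJP connector that mod_jk connects to -->
<Connector protocol="AJP/1.3" port="8009"
           address="${jboss.bind.address}"
           redirectPort="8443" />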

apache2 processes stuck in sending reply - W

I am hosting multiple sites on a server with 7.5 GB RAM, using Apache2 with mpm_prefork.
The following command gives me a value of 200-300 in production:
ps aux | grep -c 'apache2'
Using top, I see that only a few hundred megabytes of RAM are free. The error log shows nothing unusual. Is this many apache2 processes normal?
MaxRequestWorkers is set to 512.
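With mpm_prefork every connection is handled by a full process, so MaxRequestWorkers effectively caps memory use at roughly (number of workers) x (per-process RSS). A sketch of the relevant directive block; all values are assumptions to size against your ~7.5 GB:

# mpm_prefork configuration sketch
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers      512
    MaxConnectionsPerChild 1000
</IfModule>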
Update:
Now I am using mod_status to check Apache activity. I have a row like this:
Srv PID   Acc    M CPU  SS   Req Conn Child Slot Client VHost Request
0-0 29342 2/2/70 W 0.07 5702 0   3.0  0.00  1.67 XXX    XXX   /someurl
If I check again after some time, the PID does not change and SS (seconds since the start of the most recent request) keeps growing. The M column stays 'W' (sending reply). Does that mean the apache2 process is stuck on that request?
On my VPS and root servers the situation is partially similar. As far as I know, the OS tries to distribute most of the processing power/RAM to running processes and frees resources for other processes as the need arises.
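For reference, the scoreboard shown above typically comes from a mod_status configuration along these lines. A minimal sketch for Apache 2.4; restrict access as appropriate for your setup:

# httpd.conf: expose the status page with per-request detail
ExtendedStatus On
<Location "/server-status">
    SetHandler server-status
    Require local
</Location>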

HAProxy failover based on HTTP status

Is it possible to have HAProxy fail over when it encounters certain HTTP status codes?
I have the following generic HAProxy configuration that works fine if the Tomcat server itself stops/fails. However, I would also like to fail over when Tomcat returns 502 Bad Gateway or 500 Internal Server Error. The configuration below keeps sending traffic to a node even when it returns 500 or 404 status codes.
listen db01_replication
mode http
bind 192.168.0.1:80
server app1 10.0.0.19:8080 check inter 10s rise 2 fall 2
server app2 10.0.0.11:8080 check inter 10s rise 2 fall 2
server app3 10.0.0.13:8080 check inter 10s rise 2 fall 2
Thanks In Advance
I found that HAProxy's http-check expect directive handles failover based on HTTP status codes:
# Only accept status 200 as valid
http-check expect status 200
# Consider SQL errors as errors
http-check expect ! string SQL\ Error
# Consider all http status 5xx as errors
http-check expect ! rstatus ^5
In order to fail over when a 500 error is encountered, the HAProxy configuration would look like this (note that http-check expect only takes effect when option httpchk enables HTTP-level health checks):
listen App1_replication
mode http
bind 192.168.0.1:80
# http-check expect requires HTTP health checks; the '/' path is an assumption
option httpchk GET /
# Mark a server as failed when its health check returns any 5xx status
http-check expect ! rstatus ^5
server app1 10.0.0.19:8080 check inter 10s rise 2 fall 2
server app2 10.0.0.11:8080 check inter 10s rise 2 fall 2
server app3 10.0.0.13:8080 check inter 10s rise 2 fall 2
Source: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#http-check%20expect

Apache + Tomcat with mod_jk: maxThread setting upon load balancing

I have an Apache + Tomcat setup with mod_jk on 2 servers. Each server has its own Apache + Tomcat pair, and every request is served by Tomcat load-balancing workers on the 2 servers.
I have a question about how Apache's MaxClients and Tomcat's maxThreads should be set.
The defaults are:
Apache: MaxClients=150, Tomcat: maxThreads=200
With only 1 server, this works fine, since the Tomcat worker never receives more than 150 incoming connections at once. However, when load balancing between 2 servers, could the Tomcat worker receive 150 + (some number from the other server) connections and overflow maxThreads with SEVERE: All threads (200) are currently busy?
If so, should I set Tomcat's maxThreads=300 in this case?
Thanks
Setting maxThreads to 300 should be fine; there are no fixed rules. It depends on whether you see any connections being refused.
Increasing it too much causes high memory consumption, but production Tomcats are known to run with 750 threads. See also http://java-monitor.com/forum/showthread.php?t=235
Have you actually got the SEVERE error? I've tested on our Tomcat 6.0.20, and it throws an INFO message when maxThreads is crossed:
INFO: Maximum number of threads (200) created for connector with address null and port 8080
It does not refuse connections until the acceptCount value is crossed; the default is 100. From the Tomcat docs (http://tomcat.apache.org/tomcat-5.5-doc/config/http.html):
The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.
The way it works is:
1) As the number of simultaneous requests increases, threads are created up to the configured maximum (the value of the maxThreads attribute). So in your case, the message "Maximum number of threads (200) created" will appear at this point; however, requests will still be queued for service.
2) If still more simultaneous requests are received, they are queued up to the configured maximum (the value of the acceptCount attribute). Thus a total of 300 requests can be accepted without failure (assuming your acceptCount is at its default of 100).
3) Crossing this number causes Connection Refused errors until resources become available to process them.
So you should be fine until you hit step 3; both attributes are set on the connector, as sketched below.
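A minimal server.xml sketch of where maxThreads and acceptCount live; the values follow the numbers discussed above and should be tuned for your workload:

<!-- server.xml: HTTP connector with an explicit thread pool and accept queue -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="300"
           acceptCount="100"
           connectionTimeout="20000"
           redirectPort="8443" />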