Windows Server 2008 / GlassFish / Postgres: HTTP connection goes idle after 120 seconds (Apache proxy timeout)

Environment:
Windows 2008 Server Edition
Netbeans 6.7.1
Glassfish 2.1
Apache 2.2.15 for win32
Original problem (almost fixed):
Sending data with an HTTP/1.1 GET request fails if I wait for more than 30 seconds.
What I did:
I added these lines to Apache's httpd.conf file:
#
# Timeout: The number of seconds before receives and sends time out.
#
Timeout 9000
#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive On
I went to the GlassFish admin console (localhost:4848) and, under Configuration > HTTP Service, set:
Request timeout: 9000 seconds (it was 30)
Standby time: -1 (it was 30 seconds)
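(The same values can presumably also be set from the command line with asadmin; the dotted names below are my guess at the GlassFish v2 http-service tree, so verify them before use:)
# Verify the exact dotted names first, e.g.:
# asadmin get "server.http-service*"
asadmin set server.http-service.request-processing.request-timeout-in-seconds=9000
asadmin set server.http-service.keep-alive.timeout-in-seconds=-1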
Problem:
I am not able to set a timeout bigger than 2 minutes in GlassFish for a GET request.
I found an article about GlassFish settings, but I'm not able to find WHERE I should put those parameters, or whether they would work at all.
Can anybody help try to set this timeout to a higher limit?
New tried solution:
I went to the GlassFish admin console, opened Configuration > Subprocesses > "Thread-pool-name", and changed the idle timeout from 120 seconds to 1200 seconds. Then I restarted the GlassFish service (both from the administrative tools and with asadmin), but it still goes idle after 120 seconds.
I even tried restarting the whole server, still with no results. Maybe it is some setting in Postgres? Or the connection from NetBeans to Postgres through GlassFish?
New finding:
I've been searching on the internet and it may be a proxy timeout, but I don't really know how to change it. Can anybody help me, please?

In the end it was the ProxyTimeout directive in Apache's httpd.conf file.
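For reference, the directive looks like this in httpd.conf (the value is illustrative, and mod_proxy must be loaded):
# How long Apache waits for a response from the proxied backend
# (GlassFish here) before giving up with a gateway timeout
ProxyTimeout 9000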

Related

When will monit actually start or restart a service

Can someone please let me know on what basis monit decides that it is time to restart an application? For instance, if I want monit to monitor my web application, what information should I provide so that monit will restart it?
Thanks.
Update:
I was able to kind of make it work using the following monit config:
check host altamides with address web.dev1.ams
    if failed port 80 with protocol http
    then alert
However, I was wondering if I can use an absolute URL of my application, something like http://foo:5453/test/url/1.html/
Can someone help me with that, please?
Monit by itself will not restart any service, but you can give it the rules that tell it when to do so. You can do something like:
check process couchdb with pidfile /usr/local/var/run/couchdb/couchdb.pid
    start program = "/etc/init.d/couchdb start"
    stop program = "/etc/init.d/couchdb stop"
    if cpu > 60% for 2 cycles then alert
    if cpu > 80% for 5 cycles then restart
    if memory usage > 70% for 5 cycles then restart

check host mmonit.com with address mmonit.com
    if failed port 80 protocol http then alert
    if failed port 443 protocol https then alert
I figured out the answer from the monit help page:
if failed
    port 80
    protocol http
    request "/data/show?a=b&c=d"
then restart
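Combining that with the host check above answers the absolute-URL question: the request statement carries the URL path, while the host and port come from the address and port lines. A sketch using the values from the question (treat them as illustrative):
check host altamides with address web.dev1.ams
    if failed
        port 5453
        protocol http
        request "/test/url/1.html/"
    then alert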

Getting lots of 408 status codes in the Apache access log after migration from HTTP to HTTPS

We are getting lots of 408 status codes in the Apache access log, and they started appearing after our migration from HTTP to HTTPS.
Our web server is behind a load balancer, and we are using KeepAlive On with a KeepAliveTimeout of 15 seconds.
Can someone please help resolve this?
Same problem here, after migrating from HTTP to HTTPS. Do not panic, it is not a bug but a client feature ;)
I suppose that you find these log entries only in the logs of the default (or alphabetically first) Apache SSL conf, and that you have a low timeout (< 20 seconds).
From my tests, these are clients establishing pre-connected/speculative sockets to your web server for a fast next page/resource load.
Since they only establish the initial socket connection or handshake (150 bytes or a few thousand), they connect to the IP without specifying a vhost name, and so they get logged in the default/first Apache conf's log.
A few seconds after the initial connection, they either drop the socket if it is not needed, or use it for a faster subsequent request.
If your timeout is lower than those few seconds you get the 408; if it is higher, Apache doesn't bother.
So either ignore them (or add a separate default conf for Apache), or raise the timeout at the cost of more Apache processes sitting busy waiting for the client to drop or use the socket.
See https://bugs.chromium.org/p/chromium/issues/detail?id=85229 for some related discussion.
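If you take the separate-default-conf route, a minimal sketch of a catch-all SSL vhost (the hostname, certificate paths, and log path are placeholders) that keeps these speculative connections out of your real site's log:
<VirtualHost _default_:443>
    # Connections that never send a request (and thus no SNI/Host name)
    # are logged here instead of polluting the real site's access log
    ServerName default.invalid
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/placeholder.crt
    SSLCertificateKeyFile /etc/ssl/private/placeholder.key
    CustomLog /var/log/apache2/default_ssl_access.log combined
</VirtualHost>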

Why is Apache unable to serve static content with high concurrency on Windows?

When testing Apache 2.4.16 on Windows 7, 8, and 2012, there is a severe limitation when serving static content: Apache can't serve more than 700 concurrent requests for static content with KeepAlive off.
When you attempt to do that, one of two things will happen:
You will be able to serve a few thousand requests at first, and then the remaining requests will take up to 10 seconds to complete.
OR
You will receive a connection refused error.
Test method:
siege -b -c700 -t10s -v http://10.0.0.31/10k.txt (10KB file)
OR
ab -c 700 -n 40000 http://10.0.0.31/10k.txt
However, when testing with ApacheBench on localhost (bypassing the network), Apache works fine and can serve 1000 concurrent requests for the 10K static file.
Apache has ThreadsPerChild 7000 (increasing it to 14000 didn't make any difference)
MaxConnectionsPerChild 0
Stack parameters:
MaxUserPort = 65534
TcpTimedWaitDelay = 30
The server has over 60,000 ephemeral ports available, from port 5000 to port 65534.
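(For anyone reproducing the setup: these are the standard Windows TCP registry values under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters; a sketch of setting them from an elevated prompt, reboot required:)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f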
My load-testing client is a separate Linux server that sends requests to the Windows Apache server over a 10 Gb/s network.
There are no errors in the Apache log and nothing in the system logs. The tasklist output doesn't show anything unusual.
netstat shows a few thousand (~5,000) open TCP connections, and then Apache stops responding. However, when testing with a lower concurrency of 300, the OS can open 60,000 TCP connections and Apache works fine.
Potential Conclusions:
At first I thought this was an OS stack tuning problem, but serving a PHP file at the same concurrency works fine:
ab -c 700 -n 10000 http://10.0.0.31/phpinfo.php
Then I tried Nginx for Windows on the same machine, and Nginx served this without a problem:
ab -c 700 -n 10000 http://10.0.0.31/10k.txt
Nginx was able to serve a much higher concurrency, up to 2000 requests per second (static content), and the OS opened about 40,000 TCP connections.
So this looks to me like a bug or a limitation in the way Apache communicates with the TCP/IP stack on Windows.
When trying to duplicate this problem, make sure KeepAlive is off and test over the network (not on localhost).
Any answers or comments on this subject will be greatly appreciated.
Thanks to covener's suggestion, here is the answer.
KeepAlive was intentionally disabled to simulate a large number of users connecting from different IP addresses and spawning new TCP connections.
Setting AcceptFilter http to "none", together with turning off MultiViews, improved performance on static content and allowed Apache on Windows to serve at a concurrency of 2000 and beyond, until all ephemeral ports were exhausted.
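A minimal sketch of the corresponding httpd.conf changes (the directory path is illustrative):
# "none" makes Apache on Windows accept connections with plain accept()
# instead of the AcceptEx() optimizations, avoiding the stall described above
AcceptFilter http none
AcceptFilter https none
# Turning off MultiViews avoids a directory scan on every request
<Directory "C:/Apache24/htdocs">
    Options -MultiViews
</Directory>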

Plone taking a long time to respond to byte-range request

We have two recently upgraded Plone 4.3.2 instances behind a haproxy load balancer, which is itself behind Apache.
We limit each Plone instance to serving two concurrent requests using the haproxy configuration.
We recently encountered an issue whereby a client sent 4 byte-range requests in quick succession for a PDF, each of which took between 6 and 8 minutes to get a response. This locked up all available request slots for 6 minutes, so haproxy timed out the other requests in the queue. The PDF is stored as an ATFile object in Plone, which I believe should have been migrated to blob storage in our recent upgrade.
My question is what steps should we take to prevent a similar scenario in the future?
I'm also interested in:
how to debug why byte-range requests on an otherwise lightly loaded server take so long to answer
how plone.app.blob deals with byte-range requests
whether it is possible to configure Apache so that byte-range requests are served from its cache rather than from the back-end server
As requested, here is the haproxy.cfg with superfluous configuration stripped out:
global
    maxconn 450
    spread-checks 3

defaults
    log /dev/log local0
    mode http
    option http-server-close
    option abortonclose
    option redispatch
    option httplog
    timeout connect 7s
    timeout client 300s
    timeout queue 120s
    timeout server 300s

listen cms 127.0.0.1:18181
    id 3
    balance leastconn
    option httpchk
    http-check send-state
    timeout check 10s
    acl cms_edit url_dom xxx.xxx.xxx.xxx
    acl cms_not_ok nbsrv() lt 2
    block if cms_edit cms_not_ok
    server cms_instance1 app:18081 check downinter 10s maxconn 2 rise 1 slowstart 300s
    server cms_instance2 app:18082 check downinter 10s maxconn 2 rise 1 slowstart 300s
You can install https://pypi.python.org/pypi/Products.LongRequestLogger and check its log file to see where the request gets stuck.
I've opted to disable byte-range requests to the back-end Zope server by adding the following to the cms listen section in haproxy:
    reqidel ^Range:.*
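If you would rather strip the header on the Apache side before haproxy sees it, a sketch using mod_headers has the same effect (clients then simply get full 200 responses instead of 206 partial ones):
# Drop range headers so the backend always serves the whole file
RequestHeader unset Range
RequestHeader unset If-Range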

Idle socket connection to Apache server timeout period

I open a socket connection to an Apache server, but I don't send any requests, waiting for a specific time to do so. How long can I expect Apache to keep this idle socket connection alive?
The situation is that the Apache server has limited resources, and connections need to be allocated in advance before they are all gone.
After a request is sent, the server advertises its timeout policy:
Keep-Alive: timeout=15, max=50
If a subsequent request is sent more than 15 seconds later, it gets a 'server closed connection' error, so the policy is enforced.
However, it seems that if no requests at all are sent after the connection is opened, Apache will not close it, even for as long as 10 minutes.
Can someone shed some light on Apache's behavior in this situation?
According to Apache Core Features (the TimeOut directive), the default timeout is 300 seconds, but it's configurable.
For keep-alive connections (after the first request), the default timeout is 5 seconds (see Apache Core Features, the KeepAliveTimeout directive). In Apache 2.0 the default value was 15 seconds. It's also configurable.
Furthermore, there is the mod_reqtimeout Apache module, which provides some fine-tuning settings.
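For example, a minimal mod_reqtimeout sketch (values as in the module's documentation) that bounds how long a connected client may stay silent before sending its request:
# Give the client 20 seconds to start sending headers, extend the window
# by 1 second per additional 500 bytes received, up to 40 seconds total
RequestReadTimeout header=20-40,MinRate=500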
I don't think any of the mentioned values are exposed to HTTP clients via headers or in any other form (except the keep-alive value, of course).