I open a socket connection to an Apache server but don't send any requests yet, waiting for a specific time to do so. How long can I expect Apache to keep this idle socket connection alive?
The situation is that the Apache server has limited resources, and connections need to be allocated in advance before they are all gone.
After a request is sent, the server advertises its timeout policy:
Keep-Alive: timeout=15, max=50
If a subsequent request is sent after more than 15 seconds, it gets a 'server closed connection' error. So it does enforce the policy.
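For reference, this behavior can be probed with a few lines of Python's standard library (the host, port, and wait time below are placeholders):

import http.client, time

conn = http.client.HTTPConnection('example.com', 80, timeout=60)
conn.request('GET', '/')
resp = conn.getresponse()
print(resp.getheader('Keep-Alive'))  # e.g. "timeout=15, max=50"
resp.read()

time.sleep(20)  # exceed the advertised keep-alive timeout

try:
    conn.request('GET', '/')
    conn.getresponse().read()
except ConnectionError as e:  # covers RemoteDisconnected / BrokenPipeError
    print('server closed the idle connection:', e)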
However, it seems that if no requests are sent after the connection is opened, Apache will not close it for as long as 10 minutes.
Can someone shed some light on Apache's behavior in this situation?
According to Apache Core Features, TimeOut Directive, the default timeout is 300 seconds, but it's configurable.
For keep-alive connections (i.e., after the first request) the default timeout is 5 seconds (see Apache Core Features, KeepAliveTimeout Directive); in Apache 2.0 the default was 15 seconds. It's also configurable.
Furthermore, there is the mod_reqtimeout Apache module, which provides some fine-tuning settings.
I don't think any of these values are exposed to HTTP clients via HTTP headers or in any other form (except the keep-alive value, of course).
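For illustration, a hedged sketch of those knobs in httpd.conf (the first two values are just the documented defaults, and the mod_reqtimeout line is the example from its documentation, not a recommendation):

Timeout 300            # core timeout for individual network I/O operations
KeepAlive On
KeepAliveTimeout 5     # idle time allowed between requests on a persistent connection
# mod_reqtimeout: allow 20s (extendable to 40s) for the request headers
# to arrive, and require at least 500 bytes/s while reading the body
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500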
I have Apache (2.4.29) configured as a reverse proxy for WebSocket requests with mod_proxy_wstunnel:
ProxyPass "/myapp/ws" "ws://localhost:8080/myapp/ws"
For each active WebSocket client, I see an Apache worker that remains active (in "Sending Reply" status) for as long as the client stays connected, even if no data is being sent. In practice this means I cannot scale the number of WebSocket clients, because all available workers get consumed.
In /server-status there is one line like this for each client:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 10219 0/43/43 _ 1.04 1828 984237 0.0 0.09 0.09 ::1 http/1.1 butler.openbravo.com:443 GET /myapp/ws/helloWs HTTP/1.1
Using different MPM configurations (tested with event, worker, and prefork) has no effect on this.
I would need Apache to reuse these workers while they sit idle (not transferring any data) in order to scale. Is that possible?
No, it's not currently possible to multiplex WebSocket connections this way.
In httpd trunk (2.5.x) there are experimental options that allow these connections to go asynchronous after being idle for some time, but that is unlikely to be backported to 2.4.x, and there is no stable 2.6.x on the horizon at the moment.
We have a sync service (10.17.8.89) that syncs data from the replicator service (10.17.4.184) over HTTP long polling. Both services are written in Python using the requests library and the eventlet WSGI module.
The replicator service accepts requests on port 6601 and sits behind Apache httpd. A rewrite rule is used to proxy the requests.
The rewrite rule looks like this:
RewriteRule ^/sync.* http://localhost:6601%{REQUEST_URI}?%{QUERY_STRING} [P]
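For what it's worth, a roughly equivalent mod_proxy form, which also accepts an explicit per-backend timeout, would be a sketch like this (the timeout value is illustrative, not our actual config):

ProxyPass "/sync" "http://localhost:6601/sync" timeout=30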
We use chunked transfer encoding for the data, framed as length:data.
Every 2 seconds a heartbeat '1:0' is sent from the replicator to the sync service to keep the connection alive, as in the sketch below.
Apache's Timeout is configured to 20 seconds.
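A minimal sketch of that heartbeat stream (not the actual service code; assuming eventlet's WSGI server):

import eventlet
from eventlet import wsgi

def replicator_app(environ, start_response):
    # eventlet.wsgi switches to chunked transfer-encoding automatically
    # for HTTP/1.1 responses that carry no Content-Length header
    start_response('200 OK', [('Content-Type', 'text/plain')])
    def stream():
        while True:
            yield b'1:0'       # heartbeat framed as length:data
            eventlet.sleep(2)  # every 2 seconds
    return stream()

wsgi.server(eventlet.listen(('0.0.0.0', 6601)), replicator_app)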
The system looks like this: sync service (10.17.8.89) → Apache httpd → replicator service (10.17.4.184, port 6601).
Apache drops the connection by sending a FIN packet to both the replicator and the sync service at a seemingly random interval (say, after 4 minutes), even though a heartbeat is sent every 2 seconds.
[packet capture from client to server]
[packet capture from server to client]
[packet capture from Apache to replicator service]
Can anyone help us work out why Apache drops the connection?
We are getting lots of 408 status codes in the Apache access log, and they started appearing after migrating from HTTP to HTTPS.
Our web server is behind a load balancer; we have KeepAlive On and a KeepAliveTimeout of 15 seconds.
Can someone please help resolve this?
Same problem here after migrating from HTTP to HTTPS. Don't panic: it is not a bug but a client feature ;)
I suppose you find these log entries only in the logs of the default (or alphabetically first) Apache SSL conf, and that you have a low Timeout (<20).
From my tests, these are clients establishing pre-connected/speculative sockets to your web server for faster loading of the next page or resource.
Since they only perform the initial socket connection or handshake (150 bytes or a few thousand), they connect to the IP without specifying a vhost name, and so get logged by the default/first Apache conf.
A few seconds after the initial connection, they either drop the socket if it's not needed or use it for a faster subsequent request.
If your Timeout is lower than those few seconds, you get the 408; if it's higher, Apache doesn't bother.
So either ignore them or add a separate default conf for Apache (sketched below), or raise the Timeout at the cost of more Apache processes sitting busy, waiting for the client to drop or use the socket.
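That separate default conf could look something like this hedged sketch (names and paths are placeholders):

# Catch-all first/default SSL vhost: speculative connections that never
# name a vhost land here, keeping their 408s out of the real sites' logs.
<VirtualHost _default_:443>
    ServerName default.example.invalid
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/default.crt
    SSLCertificateKeyFile /etc/ssl/private/default.key
    CustomLog /var/log/apache2/speculative_ssl.log combined
</VirtualHost>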
See https://bugs.chromium.org/p/chromium/issues/detail?id=85229 for some related discussion.
We have two recently upgraded Plone 4.3.2 instances behind a haproxy load balancer which itself is behind Apache.
We limit each Plone instance to serving two concurrent requests using haproxy configuration.
We recently encountered an issue whereby a client sent four byte-range requests in quick succession for a PDF, each of which took between 6 and 8 minutes to get a response. This locked up all available request slots for 6 minutes, so haproxy timed out other requests in the queue. The PDF is stored as an ATFile object in Plone, which I believe should have been migrated to blob storage in our recent upgrade.
My question is what steps should we take to prevent a similar scenario in the future?
I'm also interested in:
how to debug why the byte-range requests on an otherwise lightly loaded server should take so long to respond
how plone.app.blob deals with byte-range requests
is it possible to configure Apache such that byte-range requests are served from its cache but not from the back-end server
As requested, here is the haproxy.cfg with superfluous configuration stripped out:
global
    maxconn 450
    spread-checks 3

defaults
    log /dev/log local0
    mode http
    option http-server-close
    option abortonclose
    option redispatch
    option httplog
    timeout connect 7s
    timeout client 300s
    timeout queue 120s
    timeout server 300s

listen cms 127.0.0.1:18181
    id 3
    balance leastconn
    option httpchk
    http-check send-state
    timeout check 10s
    acl cms_edit url_dom xxx.xxx.xxx.xxx
    acl cms_not_ok nbsrv() lt 2
    block if cms_edit cms_not_ok
    server cms_instance1 app:18081 check downinter 10s maxconn 2 rise 1 slowstart 300s
    server cms_instance2 app:18082 check downinter 10s maxconn 2 rise 1 slowstart 300s
You can install https://pypi.python.org/pypi/Products.LongRequestLogger and check its log file to see where the request gets stuck.
I've opted to disable byte-range requests to the back-end Zope server by adding the following to the cms listen section in haproxy:
reqidel ^Range:.*
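reqidel deletes every request header matching the regex (case-insensitively), so the Range header never reaches Zope and the backend returns the full resource instead. On newer haproxy versions, where the req* directives are deprecated and eventually removed, the equivalent should be:

http-request del-header Range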
Environment:
Windows 2008 Server Edition
Netbeans 6.7.1
Glassfish 2.1
Apache 2.2.15 for win32
Original problem (almost fixed):
Sending data via an HTTP/1.1 GET request fails if I wait for more than 30 seconds.
What I did:
I added these lines to Apache's httpd.conf file:
#
# Timeout: The number of seconds before receives and sends time out.
#
Timeout 9000
#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive On
I went to the Glassfish admin panel (localhost:4848) and, under Configuration > HTTP Service, set:
Timeout request: 9000 seconds (it was 30)
Standby time: -1 (it was 30 seconds)
Problem:
I am not able to set a timeout larger than 2 minutes in Glassfish for sending a GET request.
I found this article about Glassfish settings, but I can't find WHERE I should put those parameters, or whether they would work.
Can anybody help me set this timeout to a higher limit?
New attempted solution:
I went to the Glassfish control panel, under Configuration > Thread Pools > "thread-pool-name", and changed the idle timeout from 120 seconds to 1200 seconds. Then I restarted the Glassfish service (both from Administrative Tools and from asadmin), but it still goes idle after 120 seconds.
I even tried restarting the whole server; still no results. Maybe it's some setting in Postgres, or in the connection from NetBeans to Postgres through Glassfish?
New finding:
I've been searching the internet and it may be a proxy timeout, but I don't really know how to change it. Can anybody help me, please?
In the end, it was the ProxyTimeout directive in Apache's httpd.conf file.
Look at this.
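A hedged sketch of the fix (the value is illustrative, matching the Timeout used above):

# mod_proxy: how long Apache waits for a response from the proxied
# backend (Glassfish here); it defaults to the value of Timeout.
ProxyTimeout 9000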