We have a sync service (10.17.8.89) which syncs data from the replicator service (10.17.4.184) over HTTP long polling. Both the sync and replicator services are written using the Python requests library and the eventlet wsgi module.
The replicator service accepts requests on port 6601 and sits behind Apache httpd. A rewrite rule is used to proxy the requests.
The rewrite rule looks like this:
RewriteRule ^/sync.* http://localhost:6601%{REQUEST_URI}?%{QUERY_STRING} [P]
We use chunked encoding to transfer the data; each chunk is formatted as length:data.
Every 2 seconds a heartbeat chunk '1:0' is sent from the replicator to the sync service to keep the connection alive.
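For illustration, here is a minimal sketch of that framing on the replicator side, assuming a generator-style response body of the kind eventlet's wsgi server can stream; the function and variable names are ours, not from the actual services:

import time

HEARTBEAT = b"1:0"  # length 1, payload "0": the keep-alive chunk described above

def frame(payload: bytes) -> bytes:
    # Encode one chunk in the length:data format used between replicator and sync.
    return str(len(payload)).encode() + b":" + payload

def stream(next_payload, heartbeat_interval=2.0):
    """Yield data frames when data is pending, otherwise a '1:0' heartbeat
    every heartbeat_interval seconds to keep the long poll alive."""
    while True:
        payload = next_payload()  # returns bytes, or None if nothing is pending
        if payload:
            yield frame(payload)
        else:
            yield HEARTBEAT
            time.sleep(heartbeat_interval)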
Apache timeout is configured to 20 seconds.
The system looks like this:
Apache drops the connections by sending a FIN packet to both the replicator service and the sync service at a random interval (say, after 4 minutes), even though the heartbeat is sent every 2 seconds.
Packet capture from client to server:
Packet capture from server to client:
Packet capture from Apache to replicator service:
Can anyone help us resolve the issue of why Apache drops the connection?
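For reference, these are the directives that most directly bound how long Apache will hold an idle client or proxied backend connection; with the RewriteRule [P] flag the request goes through mod_proxy, so ProxyTimeout (falling back to Timeout) applies to the backend leg. Whether they explain the 4-minute drops depends on the actual configuration, so the values below are only illustrative:

Timeout 300
ProxyTimeout 300
KeepAlive On
KeepAliveTimeout 60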
Related
I would like to know how many TCP connections are created when a WebSocket call is made from a browser through an Apache HTTP server to a backend web service.
Does it create a separate TCP connection from the browser to the Apache HTTP server and another from Apache to the web service?
When Apache is proxying websockets, there is 1 TCP connection between the client and Apache and 1 TCP connection between Apache and the backend.
Apache watches both connections for activity and forwards reads from one onto the other.
This is the only way it can be in a layer 7 (Application Layer, HTTP) proxy. Something tunnelling at a much lower layer, like a NAT device or a MAC-forwarding IP sprayer, could tunnel a single connection -- but not on the basis of anything higher up in the stack, like headers.
The 2nd connection is observable with netstat.
The 2nd connection is opened when mod_proxy_wstunnel calls ap_proxy_connect_to_backend(), which calls apr_socket_create(), which calls the portable socket() routine. In recent releases, where mod_proxy_http handles this tunneling automatically, there is a similar flow through ap_proxy_acquire_connection().
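For example, both legs can be listed on the Apache host with netstat (run as root for the process column; the port numbers here are assumptions, matching the ws://localhost:8080 backend used in the next question): netstat -tnp | grep -E ':(443|8080)'. One ESTABLISHED entry belongs to the client-to-Apache leg, another to the Apache-to-backend leg.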
I have Apache (2.4.29) configured as a reverse proxy for WebSocket requests with mod_proxy_wstunnel:
ProxyPass "/myapp/ws" "ws://localhost:8080/myapp/ws"
For each active WebSocket client, I see an Apache worker that remains active (in "Sending Reply" status) for as long as that client is kept alive, even if no data is being sent. In practice this means I cannot scale WebSocket clients, because all available connections are consumed.
In /server-status there is one line like this for each client:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 10219 0/43/43 _ 1.04 1828 984237 0.0 0.09 0.09 ::1 http/1.1 butler.openbravo.com:443 GET /myapp/ws/helloWs HTTP/1.1
Using different mpm configurations (tested with event, worker and prefork) has no effect on this.
I would need Apache to be able to reuse these workers while they sit idle (not transferring any data) in order to be able to scale. Is that possible?
No, it's not currently possible to multiplex websockets connections this way.
In httpd trunk (2.5.x) there are experimental options to allow these connections to go asynchronous after being idle for some time, but it is unlikely to be backported to 2.4.x, and there is not really a stable 2.6.x on the horizon at the moment.
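In the meantime, a rough capacity workaround (not multiplexing; each WebSocket still pins a worker thread) is simply to raise the mpm_event limits so more connections can be held at once. The numbers below are purely illustrative; keep MaxRequestWorkers no larger than ServerLimit * ThreadsPerChild:

<IfModule mpm_event_module>
    ServerLimit         16
    ThreadLimit        100
    ThreadsPerChild    100
    MaxRequestWorkers 1600
</IfModule>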
We are getting lots of 408 status codes in the Apache access log, and these started appearing after migrating from HTTP to HTTPS.
Our web server is behind a load balancer, and we are using KeepAlive On with a KeepAliveTimeout of 15 seconds.
Can someone please help resolve this?
Same problem here, after migration from http to https. Do not panic, it is not a bug but a client feature ;)
I suppose you find these log entries only in the logs of the default (or alphabetically first) Apache SSL conf, and that you have a low timeout (<20).
From my tests, these are clients establishing pre-connected/speculative sockets to your web server for faster loading of the next page/resource.
Since they only establish the initial socket connection or handshake (150 bytes or a few thousand), they connect to the IP without specifying a vhost name, and so get logged in the default/first Apache conf's log.
A few seconds after the initial connection they drop the socket if it is not needed, or use it for a faster subsequent request.
If your timeout is lower than these few seconds you get the 408; if it is higher, Apache doesn't bother.
So either ignore them / add a different default conf for Apache, or raise the timeout, at the cost of more Apache processes sitting busy waiting for the client to drop or use the socket.
See https://bugs.chromium.org/p/chromium/issues/detail?id=85229 for some related discussion.
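As a sketch of the "different default conf" option above: a catch-all SSL vhost that absorbs these speculative connections and logs them separately. The server name, certificate paths and log paths are placeholders, not values from the original setup:

<VirtualHost _default_:443>
    ServerName default.invalid
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/placeholder.pem
    SSLCertificateKeyFile /etc/ssl/private/placeholder.key
    # Keep the 408 noise out of the real sites' logs
    ErrorLog  /var/log/apache2/default-ssl-error.log
    CustomLog /var/log/apache2/default-ssl-access.log combined
</VirtualHost>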
I have the following model that I drew below:
I have a number of processes running on the server. I want nginx or Apache to direct incoming clients on port 80 to one of the server processes to handle their requests. However, each connection also establishes a WebSocket connection to the same process. This is currently initiated from the client side in JavaScript: at the moment, for testing purposes, I pass the port within the HTML rendered on the client, and the client then takes this port and establishes a WebSocket connection to the same port that handled its request.
Moving forward to an nginx or Apache environment, would it be possible not to pass the port value to the client, and instead have nginx or Apache know where it directed the incoming client and use the same port for the WebSocket connection?
This would have the benefit of not opening all the server ports (8000, 8001, 8002 in the diagram below) to the public.
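One possible Apache sketch, assuming mod_proxy, mod_proxy_balancer, mod_proxy_wstunnel and mod_headers are loaded; the route cookie trick is the one documented for mod_proxy_balancer, the cookie name is illustrative, and the loopback ports follow the 8000-8002 processes in the diagram:

# Tag each client with the route of the backend that served its page
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy "balancer://app">
    BalancerMember "http://127.0.0.1:8000" route=1
    BalancerMember "http://127.0.0.1:8001" route=2
    BalancerMember "http://127.0.0.1:8002" route=3
    ProxySet stickysession=ROUTEID
</Proxy>

<Proxy "balancer://appws">
    BalancerMember "ws://127.0.0.1:8000" route=1
    BalancerMember "ws://127.0.0.1:8001" route=2
    BalancerMember "ws://127.0.0.1:8002" route=3
    ProxySet stickysession=ROUTEID
</Proxy>

# The WebSocket upgrade and the normal requests share the same ROUTEID cookie,
# so both land on the same backend process, and only port 80/443 is public.
ProxyPass "/ws" "balancer://appws/"
ProxyPass "/"   "balancer://app/"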
I open a socket connection to an Apache server, but I don't send any requests, waiting for a specific time to do so. How long can I expect Apache to keep this idle socket connection alive?
The situation is that the Apache server has limited resources, and connections need to be allocated in advance before they are all gone.
After a request is sent, the server advertises its timeout policy:
Keep-Alive: timeout=15, max=50
If a subsequent request is sent more than 15 seconds later, it gets a 'server closed connection' error. So the policy is enforced.
However, it seems that if no request is sent after the connection is opened, Apache will not close it, even for as long as 10 minutes.
Can someone shed some light on Apache's behavior in this situation?
According to Apache Core Features, TimeOut Directive the default timeout is 300 seconds but it's configurable.
For keep-alive connections (after the first request) the default timeout is 5 sec (see Apache Core Features, KeepAliveTimeout Directive). In Apache 2.0 the default value was 15 seconds. It's also configurable.
Furthermore, there is a mod_reqtimeout Apache Module which provides some fine-tuning settings.
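For example, a typical mod_reqtimeout configuration (the values commonly shipped by distributions) bounds how long a connection may sit without completing the request headers, which is exactly the pre-first-request idle case described above:

RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

With those values, a connection that never sends any header data would be closed after about 20 seconds instead of living for the full TimeOut.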
I don't think any of the mentioned values are available to HTTP clients via HTTP headers or in any other form. (Except the keep-alive value, of course.)