I'm using Amazon Elastic Beanstalk, which created an EC2 instance for my PHP application.
The application takes minutes to process some work, but I found that the HTTP connection gets closed after 60 seconds.
Although the PHP script keeps processing (maximum execution time set to 0), the HTTP connection is closed, so I receive no output.
For example, Google Chrome says:
Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
I edited the /etc/httpd/conf/httpd.conf file on the instance and changed the Timeout from 60 to 300,
then rebooted the instance, but the connection still gets closed at 60 seconds.
What may be the problem?
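For reference, the change described above amounts to adjusting Apache's Timeout directive in httpd.conf (300 being the value tried in the question); this is a sketch of that edit only, not a fix for the actual problem:

```apache
# /etc/httpd/conf/httpd.conf
# Seconds Apache waits on certain network I/O events before aborting a request
Timeout 300
```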
I found the problem.
It was not an Apache problem; it was the Elastic Load Balancer, which closes the HTTP connection after 60 seconds of no data transfer.
Yes, this is frustrating, and Amazon has not responded to requests to make this limit configurable:
https://forums.aws.amazon.com/thread.jspa?messageID=396594#396594
Related
I recently upgraded our Tomcat server from 7.0.85 to 9.0.70. I am using Apache 2.4.
My Java application runs in a cluster, and it is expected that if the master node fails during a command, the secondary node will take the master role and finish the action.
I have a test that starts an action, performs a failover, and ensures that the secondary node completes the action.
The client sends the request and loops up to 8 times trying to get an answer from the server.
Before the upgrade, the client gets a read-timeout for the first 3/4 tries and then the secondary finishes the action, sends a 200 response, and the test passes. I can see in the Apache access log that the server is trying to send a 500 (internal error) response for the first tries, but I guess it takes too long and I get a read timeout before that.
After the upgrade, I am getting a read-timeout for the first try, but after that, the client receives the internal error response and stops trying. I can see that on the second try the Apache response is way faster than the first try and from the other tries (the 2,3,4 tries) before the upgrade.
I can see in the tcpdump that in the first try (both before and after the upgrade) the connection between the Apache and the Tomcat reaches the timeout. In the following tries the Tomcat sends the Apache a reset connection. The difference is, after the upgrade the Tomcat sends the reset connection immediately after the request, and before the upgrade, it takes a few seconds to send it.
My socket timeout is 20 seconds and the AJP timeout is 10 seconds (the same as before the upgrade). I am using the same configuration files as before the upgrade (except for some refactoring changes I had to make because of Tomcat changes). I tried changing the AJP timeout to 20 seconds, but it didn't help.
Is this a configuration issue? Is there a way to “undo” this change?
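For context, the AJP settings under discussion live on the AJP connector in Tomcat's conf/server.xml. A sketch, with the port and address as assumptions and connectionTimeout matching the question's 10-second AJP timeout (note that Tomcat 9.0.31+ also requires the secret/secretRequired attributes to be addressed on upgraded AJP connectors):

```xml
<!-- conf/server.xml: AJP connector between the Apache front end and Tomcat.
     connectionTimeout is in milliseconds; 10000 = the 10 s AJP timeout
     mentioned in the question. -->
<Connector protocol="AJP/1.3" address="127.0.0.1" port="8009"
           connectionTimeout="10000"
           secretRequired="false" />
```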
HTTP2 has this multiplexing feature.
From this answer we get that:
Put simply, multiplexing allows your Browser to fire off multiple requests at once on the same connection and receive the requests back in any order.
Let's say I split my app into 50 small bundled files, to take advantage of the multiplex communication.
My server is an express app hosted in a Cloud Run instance.
Here is what Cloud Run says about concurrency:
By default Cloud Run container instances can receive many requests at the same time (up to a maximum of 250).
So, if 5 users hit my app at the same time, does it mean that my instance will be maxed out for a brief moment?
Each browser (from the 5 users) will make 50 requests (for the 50 small bundled files), resulting in a total of 250.
Does the fact that multiplexed traffic occurs over the same connection change anything? How does it work?
Does it mean that my Cloud Run instance will perceive 5 connections while my Express server perceives 250 requests? I think I'm confused about the term "request" from these two perspectives (the Cloud Run instance and the Express server).
A "request" is:
the establishment of the connection between the server and the client (the browser here),
the data transfer,
the connection close.
With the streaming capabilities of HTTP/2 and WebSocket, the connection can last minutes (and up to 1 hour), and you can send data through the channel as you want. 1 connection = 1 request; 5 connections = 5 requests.
But keep in mind that keeping this connection open and processing data in it consumes resources on your backend. You can't have dozens of connections actively sending and receiving data, or you will saturate your instance.
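To make the multiplexing side concrete, here is a minimal sketch using Java's built-in HTTP/2-capable client (the base URL and bundle file names are placeholders; the arithmetic mirrors the question's 5 users x 50 bundles = 250 concurrent requests):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.util.ArrayList;
import java.util.List;

public class MultiplexDemo {
    // One HTTP/2-capable client: all requests to the same origin are
    // multiplexed as streams over a single TCP connection.
    static final HttpClient CLIENT = HttpClient.newBuilder()
            .version(HttpClient.Version.HTTP_2)
            .build();

    // Build n GET requests for n bundled files (URLs are placeholders).
    static List<HttpRequest> buildBundleRequests(String baseUrl, int n) {
        List<HttpRequest> requests = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            requests.add(HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/bundle-" + i + ".js"))
                    .GET()
                    .build());
        }
        return requests;
    }

    public static void main(String[] args) {
        // 50 requests per user ride on 1 connection; 5 users => 5 connections
        // but 250 in-flight requests, matching Cloud Run's default limit.
        List<HttpRequest> requests = buildBundleRequests("https://example.com", 50);
        System.out.println(requests.size() + " requests over 1 connection per user");
        // To actually send them concurrently, you would call
        // CLIENT.sendAsync(r, HttpResponse.BodyHandlers.ofString()) for each.
    }
}
```

So from the instance's point of view, the 5 connections still carry 250 concurrently in-flight requests, which is what the concurrency limit counts.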
I am facing a critical issue. On a GlassFish 3.1.2 (Oracle GlassFish final) server, HTTP POST methods work only for the first few minutes after a GlassFish restart. After some period of time, an HTTP connection takes 2 minutes to initiate the request to the POST URL.
EDIT (from comments): Java: JDK 1.7; OS: CentOS 7 64-bit
After starting the GlassFish server, I can access any URL using HttpURLConnection. But after a few minutes, it takes 2 minutes to initiate a request and get a response.
For Example:
String msgURL = "someurl";
URL url = new URL(msgURL);
HttpsURLConnection httpsConn = (HttpsURLConnection) url.openConnection();
if (null != httpsConn) {
    System.out.println(httpsConn.getResponseCode());
}
If I restart the GlassFish server, I am able to get a response quickly again, but after a few minutes I face the same slowness in sending requests.
Can anyone help me with this?
I am not sure whether you always create new connections. It may be that all connections in GlassFish's connection pool are in use, and you only get a new connection after one of the existing connections times out. That timeout would explain the delay you are facing.
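If leaked connections are the cause, the client side should drain and release each connection explicitly. A self-contained sketch (the helper name and the tiny built-in JDK test server are illustrative, not part of the original code):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class DisconnectDemo {
    // Hypothetical helper: perform a GET and make sure the underlying
    // connection resources are released afterwards.
    static int fetchStatus(String urlString) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        try {
            int code = conn.getResponseCode();
            // Drain the body so the connection can be reused or closed cleanly.
            try (InputStream in = conn.getInputStream()) {
                while (in.read() != -1) { /* discard */ }
            }
            return code;
        } finally {
            conn.disconnect(); // release the connection instead of leaking it
        }
    }

    public static void main(String[] args) throws Exception {
        // Tiny local server (JDK built-in) so the sketch is runnable as-is.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();
        System.out.println(fetchStatus("http://localhost:" + port + "/"));
        server.stop(0);
    }
}
```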
We are struggling with the infamous 60-second timeout on ELB (https://forums.aws.amazon.com/thread.jspa?threadID=33427).
Our PHP application fails on a few AJAX requests.
We'd like to simulate ELB's behaviour on our development/test machines so that we don't have to wait for a deployment on EC2 to discover the bugs.
Does anyone know of a way to tune Apache so that it closes HTTP requests the way ELB does?
NB: this timeout only affects requests that send nothing for 60 seconds; it's not a maximum request time!
Thanks for any help!
This timeout applies to a request that takes more than 60 seconds without sending any data as a response. Long HTTP requests like this should really be avoided; even browsers may give up if they don't receive any response within a timeout period.
If you can't get rid of long HTTP requests and you need to simulate the behaviour, you can use an FCGI/php-fpm configuration of PHP and set a timeout of 60 seconds for which the FCGI process will wait for PHP to respond.
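One way to approximate this locally, assuming Apache with mod_proxy_fcgi in front of php-fpm (the pool address is an assumption), is to cap how long Apache waits on the backend:

```apache
# Forward PHP scripts to php-fpm and give up after 60 s with no response,
# roughly mimicking ELB's 60-second idle timeout
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>
ProxyTimeout 60
```

Note this is a total-wait cap on the backend rather than a strict idle timeout, so it is an approximation of ELB's behaviour, not an exact reproduction.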
I deployed a RoR3 application on Amazon EC2 using Rubber.
I have a slow request that takes about 1 minute, but it dies after 30 seconds with the error:
504 Gateway Time-out
The server didn't respond in time.
Does anyone know how to increase the timeout time?
Long-running requests are generally not a good idea, as you can see. Web servers will generally time out after 30 seconds of no response from an FCGI application.
You will need to look at the FCGI timeout variables for the web server your deployment is using.
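If the deployment's front end is nginx (a common choice with Rubber, but an assumption here), the relevant knobs are the proxy read/connect timeouts; a sketch with a placeholder upstream name:

```nginx
# nginx site config: allow the upstream app up to 120 s to respond
location / {
    proxy_pass http://app_upstream;   # upstream name is a placeholder
    proxy_connect_timeout 5s;
    proxy_read_timeout 120s;          # default is 60 s; 504 fires when exceeded
}
```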