I deployed a RoR3 application on Amazon EC2 using Rubber.
I have a slow request that takes about 1 minute, but it dies after 30 seconds with the error:
504 Gateway Time-out
The server didn't respond in time.
Does anyone know how to increase this timeout?
Long-running requests are generally a bad idea, as you can see. Web servers will typically time out after 30 seconds of no response from an FCGI application.
You will need to look at the FCGI timeout settings for the web server your deployment is using.
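For illustration, the knobs usually look something like the following (a sketch only: whether your Rubber-generated stack puts nginx, Apache, or haproxy in front, and where the config files live, are assumptions, and the two-minute value is just an example that should exceed your slowest request):

# nginx: wait up to 2 minutes for the backend/FCGI process to respond
proxy_read_timeout 120s;
fastcgi_read_timeout 120s;

# haproxy: if your Rubber stack includes one, its server timeout is a likely
# source of the 30-second cutoff; allow the backend 2 minutes instead
timeout server 120s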
Related
My application (Node.js) uses moleculer for microservices and Redis as the transporter. However, I find that the application logs Redis-pub client is disconnected every 10 minutes, then reconnects and logs Redis-pub client is connected a few seconds later. This is a problem because if a client sends a moleculer action during that window, it fails.
Any idea what is causing this? Let me know if more information is needed.
Azure Cache for Redis currently has a 10-minute idle timeout for connections, so the idle timeout setting in your client application should be less than 10 minutes. Most common client libraries have a configuration setting that lets them send Redis PING commands to the server automatically and periodically. However, when using client libraries without this type of setting, the application itself is responsible for keeping the connection alive.
More info: https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-best-practices-connection#idle-timeout
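The stack in the question is Node.js, but the keep-alive pattern is the same in any client: send a PING on each connection more often than the idle timeout so it never sits unused for 10 minutes. Below is a minimal sketch of that pattern, shown with the Jedis Java client purely for illustration; the host, port, and 4-minute interval are placeholders, and you should first check whether your actual client or the moleculer transporter already exposes a setting for this.

import redis.clients.jedis.Jedis;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RedisKeepAlive {
    public static void main(String[] args) {
        Jedis client = new Jedis("your-redis-host", 6379); // placeholder host/port
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Send a PING every 4 minutes, comfortably under the 10-minute idle timeout.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                client.ping();
            } catch (Exception e) {
                // PING failed, so the connection was dropped anyway; log it here.
                // Reconnect handling is out of scope for this sketch.
            }
        }, 0, 4, TimeUnit.MINUTES);
    }
}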
Consider the case where a load balancer is connected to 10 servers, each of which runs a single web server process that is single threaded. Handling a request takes 10 seconds.
Now the load balancer receives 10 requests in the span of 2 seconds. When an eleventh request comes in, what happens?
1. The load balancer realizes that all the servers are busy and waits until one of them completes its task before forwarding the request to that server.
2. The load balancer immediately forwards the request to one of the servers, despite it being busy.
I'm having trouble finding information on which of the two above scenarios is correct. Thanks for your help!
We are struggling with the infamous 60-second timeout on ELB (https://forums.aws.amazon.com/thread.jspa?threadID=33427).
Our PHP application fails on a few AJAX requests.
We'd like to simulate ELB's behaviour on our development/test machines so that we don't have to wait for a deployment to EC2 to discover the bugs...
Does anyone know if there is a way to tune Apache so that it closes HTTP requests the way ELB does?
NB: this timeout only affects requests that do not send anything for 60 seconds; it's not a maximum request time!
Thanks for any help!
This timeout has to do with a request taking more than 60 seconds without sending any data as a response. Long HTTP requests like this should really be avoided; even browsers may give up if they don't receive any response within a timeout period.
If you can't get rid of long HTTP requests and need to simulate the behaviour, you can use an FCGI/php-fpm configuration of PHP and set a timeout of 60 seconds for which the FCGI process will wait for PHP to respond.
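For example, with Apache and mod_fcgid the relevant knob would be something like this (a sketch under the assumption that your dev machines run PHP behind mod_fcgid; mod_fastcgi and php-fpm have their own equivalents, and the file path varies by distribution):

# e.g. /etc/apache2/mods-available/fcgid.conf (path is an assumption)
# Abort the request if the FastCGI (PHP) process sends no data for 60 seconds,
# roughly mimicking ELB's 60-second idle timeout.
FcgidIOTimeout 60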
I'm using Amazon Elastic Beanstalk, which created an EC2 instance for my PHP application.
The application takes minutes to process some work, but I found that the HTTP connection gets closed after 60 seconds.
Although the PHP script keeps processing (maximum execution time set to 0), the HTTP connection gets closed, so I receive no output.
For example, Google Chrome says:
Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
I edited the /etc/httpd/conf/httpd.conf file on the instance and changed the Timeout from 60 to 300,
then rebooted the instance, but the connection still gets closed at 60 seconds.
What might be the problem?
I found the problem.
It was not an Apache problem; it was the Elastic Load Balancer, which closes the HTTP connection after 60 seconds of no data transfer.
Yes, this is annoying, and Amazon doesn't listen to requests to make this limit higher.
https://forums.aws.amazon.com/thread.jspa?messageID=396594
Can I configure Glassfish to drop any request that takes longer than 10 seconds to process?
Example:
I'm using Glassfish to host my web service. The thread pool is configured with a maximum of 5 connections.
My service has a method that does this:
System.out.println("New request");
Thread.sleep(1000*1000);
I send 5 requests to the service and see 5 "New request" messages in the log. Then the server stops responding for a very long time.
In the live environment, all requests must be processed in under a second. If a request takes longer, there is a problem with it, and I want Glassfish to drop such requests but stay alive and keep serving other requests.
Currently I'm using a workaround in the code: at the beginning of my web method I launch a separate thread for the request processing, with a timeout, as suggested here: How to timeout a thread
I don't like this solution and still believe there must be a configuration setting in Glassfish that applies this logic to all requests, not just one method.
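For reference, the per-method workaround described above looks roughly like this (a sketch of the ExecutorService/Future pattern from the linked answer; handleRequest, processRequest, and the one-second limit are placeholders, not Glassfish API):

import java.util.concurrent.*;

// Inside the web service class: run the real work on its own thread and give up after one second.
public String handleRequest() throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<String> pending = executor.submit(this::processRequest); // processRequest() does the slow work
    try {
        return pending.get(1, TimeUnit.SECONDS);   // wait at most one second for a result
    } catch (TimeoutException e) {
        pending.cancel(true);  // interrupt the worker; frees the thread only if the work responds to interruption
        throw new RuntimeException("Request took too long");
    } finally {
        executor.shutdownNow();
    }
}

As the question says, a container-level request timeout would be cleaner, since this block has to be repeated in every web method.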