Zend Framework application times out with strange message - apache

I was hoping someone could help me figure out why my application keeps timing out.
It is built using Zend Framework.
I'm trying to run a script that takes a few minutes to finish.
It works fine on my local machine (MAMP) but times out on the prod server (Ubuntu).
The relevant php.ini settings on both servers are:
max_execution_time = 600
max_input_time = 600
memory_limit = 512M
post_max_size = 8M
It should run for 10 minutes before timing out, right?
On the Ubuntu server it'll only run for 1-2 minutes and then time out with this message printed in the middle of the browser:
"Backend server did not respond in time.
App server is too busy and cannot handle requests in time."
Pretty sure it's a Zend message but I can't find anything about it on the internet.
Thank you very much for any help.

That message looks like it's from mod_fastcgi.
When you run PHP under FastCGI (or you run anything under FastCGI), the PHP script is a separate entity from the web server. You've configured your PHP process to run for up to 10 minutes, but Apache/mod_fastcgi is configured to only wait some shorter period for your PHP script to start returning data.
The idea is to insulate the Apache process from external processes that go off into the weeds, never to return (eventually, Apache would run out of listeners).
If you have access to the FastCGI configuration section of httpd.conf, check out the values for -appConnTimeout or -idle-timeout.
Unfortunately (or fortunately, if you're a sysadmin at some hosting company), you can't override these settings via .htaccess or even per virtual host. At least, not according to the documentation.
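If you do have access to the main config, the relevant directives look something like this (paths and values here are made-up examples; which line applies depends on whether the FastCGI process is managed by Apache or runs externally):

```apache
# httpd.conf -- hypothetical example, adjust paths to your setup
# -idle-timeout: how long mod_fastcgi waits for data from the app before giving up
FastCgiServer /var/www/app/public/index.php -idle-timeout 600

# For an externally managed FastCGI process, -appConnTimeout governs the
# connection wait instead:
# FastCgiExternalServer /var/www/app/public/index.php -host 127.0.0.1:9000 -idle-timeout 600
```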

Turns out it was nginx running on the load balancer.
The error was coming from nginx, which was running under Scalr. Hence the "Backend server did not respond in time. App server is too busy and cannot handle requests in time." message - it's a Scalr error.
Once the proxy_read_timeout setting was raised in the nginx config the script stopped timing out.
Thanks for the scalr link above - pointed me in the right direction.
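For reference, the fix on the load balancer looked roughly like this (the 600s value matches max_execution_time in php.ini; the exact file and context block will differ per setup):

```nginx
# nginx config on the load balancer -- illustrative values
location / {
    proxy_pass http://app_servers;

    # Wait up to 10 minutes for the upstream app server to send data,
    # matching PHP's max_execution_time.
    proxy_read_timeout    600s;
    proxy_connect_timeout 60s;
    proxy_send_timeout    600s;
}
```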

Related

Apache Access Log File Strange

I'm a newbie with Linux/servers/Apache.
I have an Apache server running on my machine, and I have a monitoring app, "gotop", that I check from time to time. Even when there is a lot of traffic on my server, the 4 CPU cores stay between 50% and 70% (not all of them at the same time; the load moves between them).
Today I checked gotop and saw all of them were at about 100% with an 85° temperature!
I stopped Apache and the CPUs went down, of course. Then I went to access.log and saw a local IP, my router's (192.168.1.1), making endless requests!
As you can see in the picture.
What is happening? Why is my router making all these requests?
Thank you for your help =)
It turned out to be a local machine downloading from the server. I was expecting an IP other than the router's, though.
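To see which client is generating the traffic, a quick tally of the access log helps. A sketch (the log path and sample lines below are made up; point the awk command at your real access.log):

```shell
# Count requests per client IP in a combined-format access log.
LOG=/tmp/sample_access.log
cat > "$LOG" <<'EOF'
192.168.1.1 - - [10/Oct/2023:13:55:36 +0000] "GET /big.iso HTTP/1.1" 200 1024
192.168.1.1 - - [10/Oct/2023:13:55:37 +0000] "GET /big.iso HTTP/1.1" 206 1024
10.0.0.5 - - [10/Oct/2023:13:55:38 +0000] "GET / HTTP/1.1" 200 512
EOF

# The first field of the combined log format is the client IP.
awk '{ count[$1]++ } END { for (ip in count) print count[ip], ip }' "$LOG" \
    | sort -rn | head -5

# Grab the busiest client for further digging.
TOP_IP=$(awk '{ count[$1]++ } END { for (ip in count) print count[ip], ip }' "$LOG" \
    | sort -rn | head -1 | awk '{ print $2 }')
echo "Top client: $TOP_IP"
```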

Nginx is Slower than Apache downloading main.bundle.js

I have an Angular2 app that I've been developing for a bit now. Locally I run an Nginx server but the deployment server is using Apache. To unify things I worked to move the deployment server to Nginx but I am getting extremely slow results with Nginx.
Apache loads in ~5 seconds (1.1MB transferred)
Nginx loads in 16-20 seconds (5MB transferred)
These are both on the same server pointing to the exact same directory. The actual size of main.bundle.js is 4,470,365 bytes, so it seems Nginx is sending the entire file.
How is Apache able to transfer only 737K?
You can compare what each server is doing by clicking the exact file in the browser's Inspect Element → Network tab, then going to Headers and then Response Headers, as illustrated in the attached image.
Check whether gzip compression is enabled on one of the servers but not the other; that is the most likely reason for the smaller transfer size.
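If gzip does turn out to be the difference, enabling it in nginx looks roughly like this (illustrative values; Apache was presumably doing the equivalent via mod_deflate):

```nginx
# nginx vhost -- illustrative gzip settings
gzip on;
gzip_types application/javascript text/css application/json;
gzip_min_length 1024;   # don't bother compressing tiny responses
```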

Apache2: upload restarts over and over

We are using different upload scripts with Perl-Module CGI for our CMS and have not encountered such a problem for years.
Our customer's employees are not able to complete a successful upload.
No matter what kind or size of file, no matter which browser they use, no matter if they do it at work or log in from home.
If they try to use one of our system's upload pages the following happens:
The upload seems to work until approx. 94% is transferred. Suddenly, the upload restarts and the same procedure happens over and over again.
A look in the error log shows this:
Apache2::RequestIO::read: (70007) The timeout specified has expired at (eval 207) line 5
The weird thing is that if I log in to our customer's system using our VPN tunnel, I can never reproduce the error (I can from home, though).
I have googled without much success.
I checked the Apache timeout setting, which was at 300 seconds - more than generous.
I even checked the Content-Length field for a value of 0, because I found a forum entry referring to a CGI bug related to a Content-Length field of 0.
Now I am really stuck and running out of ideas.
Can you give me some new ones, please?
The Apache server is version 2.2.16; the Perl CGI module is version 3.43.
We are using mod_perl.
As far as we knew, our customer didn't use any kind of load balancing.
Without letting anyone know, our customer's infrastructure department had activated a load balancer. Requests therefore went to different servers and timed out.

Apache timeout in perl CGI tool

I am running a Perl CGI tool that executes a system command (Unix) which may run for a few seconds up to an hour.
After the script is finished, the tool should display the results log on the screen (in a browser).
The problem is that after about 5 minutes I get a timeout message, "Gateway Time-out" - the system command continues to run, but I'm unable to display the results of the run to the user.
In the Apache config file (httpd.conf): Timeout 300.
Is there a simple way to tell Apache to increase the timeout only for a specific run?
I don't really want to change the Apache timeout permanently (or should I?), and I don't want to dramatically rework the code (lots of regression tests).
Thanks in advance.
Mike
Make the script generate some output every once in a while. The timeout is not for running the program to completion, but is a timeout while Apache is waiting for data. So if you manage to get your program to output something regularly while running, you will be fine.
Note that HTTP clients, i.e. browsers, also have their own timeout. If your browser does not get any new data from the web server for five minutes (typically), it will declare a timeout and give up even if the server is still processing. If your long-running process produces some output every now and then, it will help against browser timeouts too!
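One way to sketch that idea - here in Python rather than the original Perl CGI, and with "still working..." as a placeholder message - is a small wrapper that runs the long command while emitting periodic output:

```python
import subprocess
import sys
import time

def run_with_keepalive(cmd, interval=30):
    """Run a long command, printing a line every `interval` seconds so
    that Apache's Timeout (and the browser) keep seeing fresh data."""
    proc = subprocess.Popen(cmd)
    while proc.poll() is None:
        print("still working...")  # any output resets the server's data timeout
        sys.stdout.flush()         # flush, or buffered output defeats the point
        time.sleep(interval)
    return proc.returncode
```

In a real CGI you would print the Content-Type header first and keep the keepalive lines somewhere harmless in the response body (HTML comments work well).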
For completeness:
Though the accepted answer is the best (the technique is variously known as keepalive packets in TCP/IP, or tickle packets back in the AppleTalk days), you did ask if you can do dynamic Apache config.
An Apache module could do this. Oh, but that's a pain to write in C.
Remember that mod_perl (and to some extent mod_python, though it's deprecated) provides not only handlers but also wraps the internal config in Perl. You could write something complicated to increase the timeout in certain situations. But this would be a bear to write and test, and you're better off doing what krisku says.
There doesn't seem to be any way to specify a timeout on the <!--#include virtual=... --> directive, but if you use mod_cgid instead of mod_cgi then starting with Apache 2.4.10 there's a configurable timeout parameter available which you can specify in httpd.conf or .htaccess:
CGIDScriptTimeout nnns
...where nnn is the number of seconds that Apache will allow a cogitating CGI script to continue to run.
Caveat: If you use PHP with Apache, your Apache is presumably configured in /etc/httpd/conf.modules.d/00-mpm.conf to use the "prefork" MPM (because PHP requires it unless built with thread-safe flags), and the default Apache installation uses mod_cgi with the prefork MPM, so you'll probably need to edit /etc/httpd/conf.modules.d/01-cgi.conf to tell Apache to use mod_cgid instead of mod_cgi.
Although the comment in 01-cgi.conf says, "mod_cgid should be used with a threaded MPM; mod_cgi with the prefork MPM," that doesn't seem to be correct, because mod_cgid seems to work fine with prefork MPM and PHP, for me, with Apache 2.4.46.
Although that doesn't give you complete control over server timeouts, you could specify a different CGIDScriptTimeout setting for a particular directory (e.g., put your slow .cgi files in the ./slowstuff/ folder).
(Of course, as krisku mentioned in the accepted answer, changing CGIDScriptTimeout won't solve the problem of the user's web browser timing out.)
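A per-directory override, as described above, could look like this (the directory name is just an example, and AllowOverride must permit the directive):

```apache
# ./slowstuff/.htaccess
# Allow CGI scripts in this directory up to an hour before mod_cgid kills them.
CGIDScriptTimeout 3600
```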

PassengerPoolIdleTime being ignored by Passenger

I've followed the instructions outlined in this answer to prevent Passenger from shutting down my app after not being used for a few minutes. However, none of this has worked.
If I refresh my website (which is just served locally on my Mac on Apache) after about 1 minute, it takes it about 6 seconds to load. After that long load, the site is now fast and everything is good. If I let it sit for another minute, refreshing again takes another 6 seconds.
Here is my /etc/apache2/other/Passenger.conf file:
LoadModule passenger_module /Users/maq/.rvm/gems/ruby-2.0.0-p247/gems/passenger-4.0.14/buildout/apache2/mod_passenger.so
PassengerRoot /Users/maq/.rvm/gems/ruby-2.0.0-p247/gems/passenger-4.0.14
PassengerDefaultRuby /Users/maq/.rvm/wrappers/ruby-2.0.0-p247/ruby
PassengerSpawnMethod smart
PassengerPoolIdleTime 1000
RailsAppSpawnerIdleTime 0
PassengerMaxRequests 5000
PassengerMaxPoolSize 30
PassengerMinInstances 1
PassengerEnabled on
And I have restarted Apache after changing all these settings.
Any ideas what else it could be?
Update:
I tried going the cron job route, where I run a cron job every minute to access the web page and make sure it stays alive. Interestingly enough, this does not work either.
It accesses the web page properly, and I see in my logs that the page is being accessed every minute. However, every time I try to access it in the browser after a minute or so without user-generated activity, there is that 6-second load. What can this be?
Note: I am using Rails 4.0.
It turns out the cause of my issues was not Passenger, but Apache and DNS.
It's a Mac OSX issue, and you can find out more about the problem/solution here:
http://clauswitt.com/fixing-slow-dns-queries-in-os-x-lion.html
Basically, if you have an entry in your /etc/hosts file called:
127.0.0.1 railsapp.local
you need to add its IPv6 counterpart so that the system doesn't perform a remote DNS query:
fe80::1%lo0 railsapp.local