Apache streaming timeout

I have a web server with Apache 2.4 VC11 (http://www.apachelounge.com/download/) running on Win 7.
On my server I have some MP3s. I can queue them up in Winamp and the first one will begin streaming. However, after it plays for about 20-30 minutes it stops streaming, almost as if some time limit has been reached. If I reselect the song and drag the position to where it stopped, it will continue to play for about 20-30 minutes and then stop again.
Is there a setting in the Apache configuration I need to add/change to increase this limit?
Thanks!

The files are anywhere from 50 to 120 minutes long.
I set EnableSendfile to On in httpd.conf and that appears to have fixed it.
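For reference, the fix amounts to a single directive in httpd.conf:

```apache
# httpd.conf – serve static files via the kernel's sendfile() support
EnableSendfile On
```

One caveat: the Apache documentation warns that sendfile can misbehave when the files live on a network-mounted filesystem, so keep that in mind if the MP3s are on a share.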

Related

Apache timeout in perl CGI tool

I am running a Perl CGI tool that executes a system command (Unix) which may run for a few seconds up to an hour.
After the script is finished, the tool should display the results log on the screen (in a browser).
The problem is that after about 5 minutes I get a "Gateway Time-out" message - the system command continues to run, but I'm unable to display the results of the run to the user.
In the Apache config file (httpd.conf): Timeout 300.
Is there a simple way to tell Apache to increase the timeout for a specific run only?
I don't really want to change the Apache timeout permanently (or should I?), and I'd rather not dramatically change the code (there are a lot of regression tests).
Thanks in advance.
Mike
Make the script generate some output every once in a while. The timeout is not for running the program to completion, but is a timeout while Apache is waiting for data. So if you manage to get your program to output something regularly while running, you will be fine.
Note that HTTP clients, i.e. browsers, also have their own timeouts. If your browser does not get any new data from the web server for five minutes (typically), it will declare a timeout and give up even if the server is still processing. If your long-running process produces some output every now and then, that will help against browser timeouts too!
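The advice above can be sketched as a small wrapper around the long-running command; Python is used here for illustration (the function name, command, and interval are all assumptions, not the asker's actual code). The key detail is flushing output on a schedule so Apache's Timeout clock keeps resetting.

```python
import subprocess
import sys
import time

def run_with_keepalive(cmd, interval=60.0):
    """Run `cmd`, emitting a byte of output while it runs so that
    neither Apache's Timeout nor the browser gives up waiting."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
    )
    while proc.poll() is None:
        sys.stdout.write(" ")  # harmless padding in the response body
        sys.stdout.flush()     # flushing is the crucial part
        time.sleep(interval)
    # Command finished: return its captured log for display.
    return proc.stdout.read().decode()
```

Note that a very chatty command could fill the pipe buffer and block; a production CGI would stream the command's log to the client incrementally instead of buffering it all.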
For completeness:
Though the accepted answer is the best (the technique is variously known as keepalive packets in TCP/IP, or tickle packets back in the AppleTalk days), you did ask whether you can do dynamic Apache configuration.
An Apache module could do this. But that's a pain to write in C.
Remember that mod_perl (and to some extent mod_python, though it's deprecated) not only provides handlers but also wraps the internal Apache configuration in Perl. You could write something complicated to increase the timeout in certain situations, but it would be a bear to write and test, and you're better off doing what krisku says.
There doesn't seem to be any way to specify a timeout on the <!--#include virtual=... --> directive, but if you use mod_cgid instead of mod_cgi then starting with Apache 2.4.10 there's a configurable timeout parameter available which you can specify in httpd.conf or .htaccess:
CGIDScriptTimeout nnns
...where nnn is the number of seconds that Apache will allow a cogitating CGI script to continue to run.
Caveat: if you use PHP with Apache, then Apache is presumably configured in /etc/httpd/conf.modules.d/00-mpm.conf to use the "prefork" MPM (because PHP requires it unless built with thread-safe flags), and the default Apache installation pairs mod_cgi with the prefork MPM. So you'll probably need to edit /etc/httpd/conf.modules.d/01-cgi.conf to tell Apache to use mod_cgid instead of mod_cgi.
Although the comment in 01-cgi.conf says, "mod_cgid should be used with a threaded MPM; mod_cgi with the prefork MPM," that doesn't seem to be a hard requirement: mod_cgid works fine for me with the prefork MPM and PHP on Apache 2.4.46.
Although that doesn't give you complete control over server timeouts, you could specify a different CGIDScriptTimeout setting for a particular directory (e.g., put your slow .cgi files in the ./slowstuff/ folder).
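A per-directory override might look like this (the path and the 1200-second value are illustrative):

```apache
# httpd.conf – allow scripts under ./slowstuff/ to run up to 20 minutes
<Directory "/var/www/cgi-bin/slowstuff">
    CGIDScriptTimeout 1200
</Directory>
```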
(Of course, as krisku mentioned in the accepted answer, changing CGIDScriptTimeout won't solve the problem of the user's web browser timing out.)

Automatically Kill an Apache Process if it Uses 100% CPU

I am using CentOS 5 + Webmin and the Apache server. Sometimes an Apache process leaves a connection open and uses 100% CPU. That raises the load average by about 1; if more than one such process appears, the load average climbs by 2 or 3.
Is it possible to automatically kill an Apache PID if it uses 100% CPU, or if it lives longer than some time limit?
After I manually terminate that PID everything goes back to normal; I would just like an automatic way to terminate it when this happens.
You should use monit or a similar solution that allows you to monitor a process and take an action when its CPU or memory usage rises above a threshold.
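A minimal monit stanza along these lines would do it (the pidfile path, init scripts, and thresholds are assumptions for a typical CentOS 5 box):

```
# /etc/monit.d/apache – restart Apache when it pegs the CPU
check process apache with pidfile /var/run/httpd.pid
    start program = "/etc/init.d/httpd start"
    stop program  = "/etc/init.d/httpd stop"
    if cpu usage > 95% for 5 cycles then restart
```

Restarting the whole service is heavier-handed than killing one child PID, but it is what monit can do reliably out of the box.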

PassengerPoolIdleTime being ignored by Passenger

I've followed the instructions outlined in this answer to prevent Passenger from shutting down my app after not being used for a few minutes. However, none of this has worked.
If I refresh my website (which is served locally on my Mac by Apache) after about 1 minute, it takes about 6 seconds to load. After that long load, the site is fast and everything is good. If I let it sit for another minute, refreshing again takes another 6 seconds.
Here is my /etc/apache2/other/Passenger.conf file:
LoadModule passenger_module /Users/maq/.rvm/gems/ruby-2.0.0-p247/gems/passenger-4.0.14/buildout/apache2/mod_passenger.so
PassengerRoot /Users/maq/.rvm/gems/ruby-2.0.0-p247/gems/passenger-4.0.14
PassengerDefaultRuby /Users/maq/.rvm/wrappers/ruby-2.0.0-p247/ruby
PassengerSpawnMethod smart
PassengerPoolIdleTime 1000
RailsAppSpawnerIdleTime 0
PassengerMaxRequests 5000
PassengerMaxPoolSize 30
PassengerMinInstances 1
PassengerEnabled on
And I have restarted Apache after changing all these settings.
Any ideas what else it could be?
Update:
I tried going the cron job route, where I run a cron job every minute to access the web page and make sure it stays alive. Interestingly enough, this does not work either.
It accesses the web page properly, and I see in my logs that the page is being hit every minute. However, every time I access it in the browser after a minute or so without user-generated activity, there is that 6-second load again. What can this be?
Note: I am using Rails 4.0.
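The keep-alive cron job described above was just a once-a-minute request; a typical crontab entry for it would look something like this (the URL matches the /etc/hosts entry below, but the exact command is illustrative):

```
# crontab -e – hit the app every minute so Passenger keeps an instance warm
* * * * * curl -s http://railsapp.local/ > /dev/null 2>&1
```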
It turns out the cause of my issues was not Passenger, but Apache and DNS.
It's a Mac OS X issue, and you can find out more about the problem and solution here:
http://clauswitt.com/fixing-slow-dns-queries-in-os-x-lion.html
Basically, if you have an entry in your /etc/hosts file called:
127.0.0.1 railsapp.local
you need to add its IPv6 counterpart so that the system doesn't perform a remote DNS query:
fe80::1%lo0 railsapp.local

WAMP: limit log file size

Is there any functionality in WAMP to limit the access and error logs to 20 MB each? i.e. they should be truncated after 20 MB.
Usually (on Linux) the logs are managed by a separate system (for example logrotate); it's not Apache that manages them itself. If you understand French, someone wrote a logrotate.bat and shared it on the WampServer forum, so you can use a log-rotation system on Windows too.
Hope this helps.
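Alternatively, Apache's bundled rotatelogs.exe can cap file size directly via piped logs, with no external script; something like this in httpd.conf (the paths are assumptions for a typical WAMP install):

```apache
# Start a new log file whenever the current one reaches 20 MB
CustomLog "|C:/wamp/bin/apache/apache2.4.9/bin/rotatelogs.exe C:/wamp/logs/access.%Y%m%d.log 20M" common
ErrorLog "|C:/wamp/bin/apache/apache2.4.9/bin/rotatelogs.exe C:/wamp/logs/error.%Y%m%d.log 20M"
```

Note this rotates rather than truncates: old files accumulate until you delete them, so total disk use is not capped at 20 MB.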

Zend Framework application times out with strange message

I was hoping someone could help me figure out why my application keeps timing out.
It is built using Zend Framework.
I'm trying to run a script that takes a few minutes to finish.
It works fine on my local machine (MAMP) but times out on the prod server (Ubuntu).
The relevant php.ini settings on both servers are:
max_execution_time = 600
max_input_time = 600
memory_limit = 512M
post_max_size = 8M
It should run for 10 minutes before timing out, right?
On the Ubuntu server it'll only run for 1-2 minutes and then time out with this message printed in the middle of the browser:
"Backend server did not respond in time.
App server is too busy and cannot handle requests in time."
Pretty sure it's a Zend message but I can't find anything about it on the internet.
Thank you very much for any help.
That message looks like it's from mod_fastcgi.
When you run PHP under FastCGI (or you run anything under FastCGI), the PHP script is a separate entity from the web server. You've configured your PHP process to run for up to 10 minutes, but Apache/mod_fastcgi is configured to only wait some shorter period for your PHP script to start returning data.
The idea is to insulate the Apache process from external processes that go off into the weeds, never to return (eventually, Apache would run out of listeners).
If you have access to the FastCGI configuration section of httpd.conf, check out the values for -appConnTimeout or -idle-timeout.
Unfortunately (or fortunately, if you're a sysadmin at some hosting company), you can't override these settings via .htaccess or even per virtual host. At least, not according to the documentation.
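For example, a mod_fastcgi entry in the server-wide config might look like this (the wrapper path and the 600-second value are illustrative):

```apache
# httpd.conf – let FastCGI PHP processes sit idle for up to 10 minutes
FastCgiServer /var/www/cgi-bin/php-wrapper -idle-timeout 600
```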
Turns out it was nginx running on the load balancer.
The error was coming from nginx, which runs under Scalr - hence the "Backend server did not respond in time. App server is too busy and cannot handle requests in time." message, a Scalr error.
Once the proxy_read_timeout setting was raised in the nginx config the script stopped timing out.
Thanks for the Scalr link above - it pointed me in the right direction.
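The nginx-side fix mentioned above looks something like this (the location block and upstream name are assumptions):

```nginx
# nginx.conf on the load balancer – wait up to 10 minutes for the backend
location / {
    proxy_pass http://app_backend;
    proxy_read_timeout 600s;
}
```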