Apache timeout in Perl CGI tool

I am running a Perl CGI tool that executes a Unix system command which may run for anywhere from a few seconds up to an hour.
After the script finishes, the tool should display the results log on the screen (in a browser).
The problem is that after about 5 minutes I get a "Gateway Time-out" message; the system command continues to run, but I'm unable to display the results of the run to the user.
In the Apache config file (httpd.conf): Timeout 300.
Is there a simple way to tell Apache to increase the timeout only for a specific run?
I don't really want to change the Apache timeout permanently (or should I?), and I'd rather not dramatically update the code (there are a lot of regression tests).
Thanks in advance.
Mike

Make the script generate some output every once in a while. The timeout is not for running the program to completion, but is a timeout while Apache is waiting for data. So if you manage to get your program to output something regularly while running, you will be fine.
Note that HTTP clients, i.e. browsers, also have their own timeouts. If your browser does not get any new data from the web server for five minutes (typically), it will declare a timeout and give up even if the server is still processing. If your long-running process produces some output every now and then, that will help against browser timeouts too!
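As a rough illustration of the idea, here's a minimal Perl CGI sketch (the command name and log path are hypothetical placeholders, not from the original post):

#!/usr/bin/perl
# Sketch: stream a periodic heartbeat so Apache's Timeout never trips.
# './long_tool.sh' and '/tmp/run.log' are illustrative placeholders.
use strict;
use warnings;
use POSIX ':sys_wait_h';

$| = 1;    # unbuffered: each print is flushed to Apache immediately
print "Content-Type: text/plain\n\n";

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    # Child: run the long command, capturing its output in a log file.
    exec('/bin/sh', '-c', './long_tool.sh > /tmp/run.log 2>&1')
        or die "exec failed: $!";
}
# Parent: emit a heartbeat until the child exits, so neither Apache's
# Timeout nor the browser gives up on the idle connection.
while (waitpid($pid, WNOHANG) == 0) {
    print "# still running...\n";
    sleep 30;
}
# The command is done; display the results log.
open(my $log, '<', '/tmp/run.log') or die "cannot open log: $!";
print while <$log>;
close($log);

One caveat: anything that buffers the response between Perl and the client (e.g. mod_deflate) can swallow the heartbeat, so you may need to disable output compression for this URL.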

For completeness:
Though the accepted answer is the best approach (the technique is variously known as keep-alive packets in TCP/IP, or tickle packets back in the AppleTalk days), you did ask whether you can do dynamic Apache configuration.
An Apache module could do this. Oh, but that's a pain to write in C.
Remember that mod_perl (and to some extent mod_python, though it's deprecated) does not only provide handlers but also wraps the internal Apache config in Perl. You could write something complicated to increase the timeout in certain situations, but it would be a bear to write and test, and you're better off doing what Krisku says.

There doesn't seem to be any way to specify a timeout on the <!--#include virtual=... --> directive, but if you use mod_cgid instead of mod_cgi then starting with Apache 2.4.10 there's a configurable timeout parameter available which you can specify in httpd.conf or .htaccess:
CGIDScriptTimeout nnn
...where nnn is the number of seconds that Apache will allow a cogitating CGI script to continue to run.
Caveat: If you use PHP with Apache, your Apache is presumably configured in /etc/httpd/conf.modules.d/00-mpm.conf to use the "prefork" MPM (because PHP requires it unless built with thread-safe flags), and the default Apache installation uses mod_cgi with the prefork MPM, so you'll probably need to edit /etc/httpd/conf.modules.d/01-cgi.conf to tell Apache to use mod_cgid instead of mod_cgi.
Although the comment in 01-cgi.conf says, "mod_cgid should be used with a threaded MPM; mod_cgi with the prefork MPM," that doesn't seem to be correct, because mod_cgid seems to work fine with prefork MPM and PHP, for me, with Apache 2.4.46.
Although that doesn't give you complete control over server timeouts, you could specify a different CGIDScriptTimeout setting for a particular directory (e.g., put your slow .cgi files in the ./slowstuff/ folder).
(Of course, as krisku mentioned in the accepted answer, changing CGIDScriptTimeout won't solve the problem of the user's web browser timing out.)
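As a concrete sketch, a per-directory override along the lines above might look like this (the paths and the one-hour value are illustrative, not from the original answer):

# httpd.conf: use mod_cgid (Apache 2.4.10 or later)
LoadModule cgid_module modules/mod_cgid.so
# Let the slow scripts in this directory run for up to an hour
<Directory "/var/www/cgi-bin/slowstuff">
    CGIDScriptTimeout 3600
</Directory>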

Solution to execute CGI scripts

I have some CGI scripts which are used internally in the organization. Due to some standards changes, the server/system team has commented out the line below in httpd.conf (the Apache config file) that supports the CGI scripts. Because of this change, the existing CGI scripts are affected and can no longer be executed from the browser.
### LoadModule cgid_module modules/mod_cgid.so
Is there a way to overcome this situation and make the scripts work as-is?
NOTE: The above commented line cannot be uncommented/enabled.
Ways of circumventing your company's policy, from best to worst:
Load mod_cgi anyway.
Load mod_fcgi, and convert your CGI script into a FastCGI daemon. It's a lot of work, but you can get faster code out of it!
Load your own module that does exactly the same thing as mod_cgi. mod_cgi is open source, so it should be easy to just rename it.
Load mod_fcgi, and write a FastCGI daemon that executes your script.
Install a second Apache web server with mod_cgi enabled. Link to it directly or use mod_proxy on the original server.
Write your own web server. Link to it directly or use mod_proxy on the original server.
Last time you asked this question, you talked about using mod_perl instead. The standard way to run CGI programs unchanged (for some value of "unchanged") under mod_perl is by using ModPerl::Registry. Did you try that? How did it go?
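For reference, the usual ModPerl::Registry setup is only a few lines of Apache configuration; a sketch, assuming your scripts live in /var/www/cgi-bin:

LoadModule perl_module modules/mod_perl.so
Alias /cgi-bin/ /var/www/cgi-bin/
<Location /cgi-bin/>
    SetHandler perl-script
    PerlResponseHandler ModPerl::Registry
    PerlOptions +ParseHeaders    # let scripts keep printing their own headers
    Options +ExecCGI
</Location>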
Another alternative would be to convert your programs to use PSGI. You could try Plack::App::WrapCGI or CGI::Emulate::PSGI. Using Plack would free you from any deployment restrictions: you could run the code under mod_perl or even as a separate service behind a proxy server.
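A minimal Plack wrapper can be a one-file app.psgi like this (the script path is illustrative; run it with plackup):

# app.psgi - expose an existing CGI script as a PSGI application
use Plack::App::WrapCGI;
Plack::App::WrapCGI->new(script => '/var/www/cgi-bin/report.cgi')->to_app;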
But I can't help marvelling at how ridiculous this whole situation is. Your company has CGI programs that it (presumably) relies on to run part of its business. And they've just decided to turn off support for them. You need to find out why this decision has been made and try to buy some time in order to convert to an alternative technology.

Configure Access-Control-Allow-Origin for monit

I am trying to grab JSON data from monit and display it on a status page for management to see the current status of a handful of processes. This info would be displayed in Confluence, running on the same machine, but since Confluence (Apache) and monit run on different ports, it is considered cross-domain.
I know I can write a server-side process to serve this data, but that seems to be overkill and would actually take longer than it took to set monit up in the first place :)
The simplest solution is to configure monit's headers (Access-Control-Allow-Origin) to allow the other server. Does anyone know how to do this? I suspect there is a way, since M/Monit would run into the same issue. I have tried some blind attempts on the "httpd... allow" lines, but it complains about the syntax with x.x.x.x:port or about using the keyword "port" in that location.
ok... going to answer my own question (sort of).
First, I think I may have asked the question wrong. I don't deal with a lot of cross domain issues. Sorry about that.
But here is what I did to get at the monit info from the other servers. It's pretty simple using proxies in Apache on the main server:
ProxyPass /monit http://localhost:2812
ProxyPassReverse /monit http://localhost:2812
ProxyPass /monit2 http://server2:2812
ProxyPassReverse /monit2 http://server2:2812
I did this for each of the servers and tested that I can get to either the monit web interface or to the _status?format=json sub pages. I can now call them using ajax on our main web page.
This also has the benefit that I can lock down the monit access control to just the main server but have the info show on a more visible page. :)
I don't think you would need a proxy just to display monit's API or HTTP info; it depends on how you have your network and DNS configured. If you'd like to use only localhost, then a proxy might be necessary. But monit does have a facility for global host/IP access control, using allow directives in its own monitrc config file.
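For example, monit's access control lives in the httpd section of its monitrc; a sketch with illustrative addresses:

set httpd port 2812 and
    use address 0.0.0.0      # listen on all interfaces, not just localhost
    allow localhost          # allow local access
    allow 10.0.0.5           # allow the main (Confluence/Apache) server
    allow admin:monit        # optional: require basic-auth credentials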

Zend Framework application times out with strange message

I was hoping someone could help me figure out why my application keeps timing out.
It is built using Zend Framework.
I'm trying to run a script that takes a few minutes to finish.
It works fine on my local machine (MAMP) but times out on the prod server (Ubuntu).
The relevant php.ini settings on both servers are:
max_execution_time = 600
max_input_time = 600
memory_limit = 512M
post_max_size = 8M
It should run for 10 minutes before timing out, right?
On the Ubuntu server it'll only run for 1-2 minutes and then time out with this message printed in the middle of the browser window:
"Backend server did not respond in time.
App server is too busy and cannot handle requests in time."
Pretty sure it's a Zend message but I can't find anything about it on the internet.
Thank you very much for any help.
That message looks like it's from mod_fastcgi.
When you run PHP under FastCGI (or run anything under FastCGI), the PHP script is a separate entity from the web server. You've configured your PHP process to run for up to 10 minutes, but Apache/mod_fastcgi is configured to wait only some shorter period for your PHP script to start returning data.
The idea is to insulate the Apache process from external processes that go off into the weeds, never to return (otherwise, Apache would eventually run out of listeners).
If you have access to the FastCGI configuration section of httpd.conf, check out the values for -appConnTimeout or -idle-timeout.
Unfortunately (or fortunately, if you're a sysadmin at some hosting company), you can't override these settings via .htaccess or even per virtual host. At least, not according to the documentation.
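If you do have access, the relevant knob looks something like this (the wrapper path and the value are illustrative):

# httpd.conf with mod_fastcgi: allow up to 10 minutes of application
# inactivity before aborting the request
FastCgiServer /var/www/cgi-bin/php-wrapper -idle-timeout 600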
Turns out it was nginx running on the load balancer.
The error was coming from nginx, which runs under Scalr; hence the "Backend server did not respond in time. App server is too busy and cannot handle requests in time." message: it's a Scalr error.
Once the proxy_read_timeout setting was raised in the nginx config, the script stopped timing out.
Thanks for the Scalr link above - it pointed me in the right direction.
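For anyone hitting the same error, the nginx side of the fix is a one-line change in the proxy configuration (the upstream name and the value are illustrative; nginx's default proxy_read_timeout is 60 seconds):

location / {
    proxy_pass http://app_servers;
    proxy_read_timeout 600s;    # wait up to 10 minutes for the backend
}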

LAMP with mod_fastcgi

I am building a CGI application, and I would like it to behave like a persistent application that handles each connection itself. That way, all session variables could be kept in memory instead of being saved somewhere (a file or any other place) and loaded again on each new connection.
I am using LAMP inside a Linux VMware image, but I can't seem to find out how to install the module to make this work, or what to change in httpd.conf. I tried to compile the module, but couldn't, because my Apache isn't a regular installation; it's a prebuilt LAMP stack, and it seems the module needs the Apache source directory to compile. I've seen some code examples out there, so I guess it's not that hard once it's running OK with Apache.
Can you help me with this please?
Thanks,
Joe
Using FastCGI just means that you spawn a number of processes which handle the requests they get from the actual web server, instead of the web server spawning a new process whenever a request arrives.
Use something like memcached if you want to keep stuff like sessions in memory.
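For instance, if the application is PHP, pointing the session handler at memcached is a two-line php.ini change (assuming the memcached extension is installed; host and port are illustrative):

session.save_handler = memcached
session.save_path = "localhost:11211"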

How can I test a comet ajax site on a single host and work around browser simultaneous connection limit?

I am using the comet long-polling technique with apache, php, jquery.
I've got a basic comet update running and it works great. I'm now attempting to build a more complex comet script, and I want a better way to debug.
My comet scripts use $.ajax() with a long timeout, and the server side just sleeps until it either runs up to the timeout or has an event to send to the client. The comet requests go to a different subdomain than the main ajax requests.
For normal pages I edit and test on a Linux laptop. I've got Apache, MySQL, and PHP with a test database and a mirror image of the site. I can edit, save, and see the changes with no upload step. For the comet stuff I've been having to upload to a server to test. This requires me to set up a few fake servers, but mostly it requires me to upload the changed files for each test. I've got a mostly automatic upload script, but it's still too slow.
The problem with testing locally is the long timeout. The browser won't open another connection to the same server while the comet request is still open. I don't have a subdomain locally, so all the requests go to the same server and they basically block each other.
I've tried a number of things to make this work, and none really does. First I tried changing my browser's setting for the number of simultaneous connections. This didn't work in Firefox on Linux, and I didn't find anything about changing this limit in other browsers.
I tried setting my hosts file to give me two names that map to my IP address, and then configuring VirtualHost directives in Apache, but that didn't work. I think that's because Apache looks to an actual DNS server for the hostname, not just my /etc/hosts file. Maybe I could run a local DNS server to fool Apache into thinking my box has two names, but that seems like a really long way around this problem.
So, does anyone have an idea of how to make this work on one ip address/host?
I'm new to the comet thing, so maybe I've just got the wrong idea about something. Maybe this isn't even possible. Either way, it's time to just ask if this is already a solved problem.
It really should be possible to use /etc/hosts to fool Apache. It certainly does work on Ubuntu Hardy with Apache 2.2.
Try giving different hostnames to your local address. Simply add a line like this to /etc/hosts:
127.0.0.1 a.example.com b.example.com c.example.com d.example.com
(Note: use a tab after the IP address.)
Validate this with a ping:
ping a.example.com
In your Apache configuration, you can use a wildcard alias together with a name-based virtual host:
<VirtualHost *:80>
ServerName example.com
ServerAlias *.example.com
## snip ##
</VirtualHost>
Instead of using example.com, you might want to use something that's under your control. I use a local subdomain of our company's domain (i.e. something.local.molindo.at).
Now you can use different subdomains for your test, each with its own limitation on concurrent connections.
You may need to restart your browser to get this working.
I have made something similar, and my hosting provider reports that my max-queries limit has been reached, which should not actually happen. But I have read that if PHP code sits in an infinite loop (i.e. sleeping, as in long polling), the host can detect it and count the database connection's user as using more queries than allowed. That is a lot to presume, but I have found a solution along the lines of the same speculation.