Automatically Kill Apache Process if it Uses 100% CPU - apache

I am using CentOS 5 + Webmin and an Apache server. Sometimes an Apache process leaves an open connection and uses 100% CPU. That can increase the load average by +1. If it happens to more than one process, the load average goes up by 1, 2, or 3.
Is it possible to automatically kill an Apache PID if it uses 100% CPU, or if it has been alive longer than some amount of time?
After I manually terminate that PID, everything goes back to normal. I would just like to find an automatic way to terminate it whenever this happens.

You should use monit or a similar solution that lets you monitor a process and take action when CPU or memory usage goes above a threshold.
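A minimal monit sketch of that idea, assuming the stock CentOS pidfile and init-script paths (adjust them for your system); note that monit watches the parent httpd process and restarts the service as a whole rather than killing a single child:
check process httpd with pidfile /var/run/httpd.pid
  start program = "/etc/init.d/httpd start"
  stop program  = "/etc/init.d/httpd stop"
  # restart when Apache and its children together stay above 90% CPU for 2 monitoring cycles
  if totalcpu > 90% for 2 cycles then restart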

Related

Apache2: upload restarts over and over

We are using various upload scripts based on the Perl CGI module for our CMS and have not encountered such a problem for years.
Our customer's employees are not able to get a successful upload.
No matter what kind or size of file, no matter which browser they use, no matter if they do it at work or log in from home.
If they try to use one of our system's upload pages the following happens:
The upload seems to work until approx. 94% is uploaded. Suddenly, the upload restarts and the same procedure happens over and over again.
A look in the error log shows this:
Apache2::RequestIO::read: (70007) The timeout specified has expired at (eval 207) line 5
The weird thing is that if I log in to our customer's system using our VPN tunnel I can never reproduce the error (I can from home, though).
I have googled without much success.
I checked the Apache timeout setting, which was at 300 seconds - more than generous.
I even checked the content-length field for a value of 0, because I found a forum entry referring to a CGI bug related to a content-length field of 0.
Now I am really stuck and running out of ideas.
Can you give me some new ones, please?
The Apache server is version 2.2.16; the Perl CGI module is version 3.43.
We are using mod_perl.
As far as we knew, our customer didn't use any kind of load balancing.
It turned out that, without letting anyone else know, our customer's infrastructure department had activated a load balancer. That way requests went to different servers and timed out.

Cannot allocate memory: fork: Unable to fork new process - Apache server, how to find the source and fix it?

I have this error in the error.log of my Apache server:
[error] (12)Cannot allocate memory: fork: Unable to fork new process
I don't know where to start to find the problem.
How can I know how many forked processes are started?
How can I know what script is running in each forked process?
How can I know the memory cost of each forked process?
Any other ideas for finding a solution?
This error occurs regularly. I restart the server and the problem is fixed, but it comes back shortly after, so I need to find a better solution.
Error : "Unable to fork: Cannot allocate memory" while loggin to VPS,
You usually get that error when your VPS runs out of resources, especially RAM.
At that point, what you can do is restart the VPS to bring RAM usage down so you can temporarily log in.
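To get a first look at how many Apache processes are running and how much memory each one uses, the standard tools are enough (the process is named httpd on CentOS; apache2 on Debian-style systems):
ps -C httpd --no-headers | wc -l          # number of running Apache processes
ps -C httpd -o pid,rss,cmd --sort=-rss    # resident memory (RSS, in KB) per process, largest first
free -m                                   # overall RAM and swap picture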
I had the same problem. To fix it there are a few options:
1. Move from micro instances to small - this was the change that solved the problem (micro instances on Amazon tend to have large CPU steal time).
2. Tune the MySQL database server configuration and the Apache configuration to use a lot less memory.
3. Some suggest it is caused by insufficient swap space. Without swap, the system has to refuse fork operations even if it has sufficient free RAM.
A tuning guide for a low-memory situation such as this one: http://www.narga.net/optimizing-apachephpmysql-low-memory-server/ (but don't follow its suggestion to use MyISAM tables - horrible...).
These changes will make the problem happen much less often. I am still looking for a better solution that closes the processes that are done and kills the ones that hang.
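For option 2, tuning Apache for low memory usually means shrinking the prefork pool; a sketch along these lines is typical for Apache 2.2, though the right numbers depend on how much RAM you have and how large each httpd process is:
<IfModule prefork.c>
    # cap the number of concurrent children so they cannot exhaust RAM,
    # and recycle them regularly to limit per-process memory growth
    StartServers        2
    MinSpareServers     2
    MaxSpareServers     4
    ServerLimit         20
    MaxClients          20
    MaxRequestsPerChild 500
</IfModule>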
In my case, it was that my Apache logs were too big and there was not enough free space on the disk...
I have to think about archiving the logs!

Apache timeout in Perl CGI tool

I am running a Perl CGI tool that executes a system command (Unix) which may run for a few seconds up to an hour.
After the script is finished, the tool should display the results log on the screen (in a browser).
The problem is that after about 5 minutes I get a "Gateway Time-out" message - the system command continues to run, but I'm unable to display the results of the run to the user.
In the Apache config file (httpd.conf): Timeout 300.
Is there a simple way to tell Apache to increase the timeout only for a specific run?
I don't really want to change the Apache timeout permanently (or should I?), and I don't want to dramatically change the code (there are a lot of regression tests).
Thanks in advance.
Mike
Make the script generate some output every once in a while. The timeout is not for running the program to completion, but is a timeout while Apache is waiting for data. So if you manage to get your program to output something regularly while running, you will be fine.
Note that HTTP clients, i.e. browsers, also have their own timeout. If your browser does not get any new data from the web server for five minutes (typically), it will declare a timeout and give up even if the server is still processing. If your long-running process gives some output every now and then, it will help against browser timeouts too!
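A minimal sketch of that idea as a Perl CGI script, assuming the long-running command lives at a hypothetical path /usr/local/bin/long_job.sh; the parent prints a heartbeat line every 30 seconds so Apache keeps receiving data:
#!/usr/bin/perl
use strict;
use warnings;
use POSIX ":sys_wait_h";    # for WNOHANG

$| = 1;                     # unbuffer STDOUT so each line reaches Apache immediately
print "Content-Type: text/plain\n\n";

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: run the long job (placeholder command path).
    exec('/usr/local/bin/long_job.sh') or die "exec failed: $!";
}

# Parent: emit a heartbeat until the child exits, so Apache's Timeout
# (time spent waiting for data) never elapses.
while (waitpid($pid, WNOHANG) == 0) {
    print "still working...\n";
    sleep 30;
}

# The job is done; read and print its results log here.
print "finished\n";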
For completeness:
Though the accepted answer is the best (the technique is variously known as keep-alive packets in TCP/IP, or tickle packets way back in the AppleTalk days), you did ask whether you can do dynamic Apache config.
An Apache module could do this. Oh, but that's a pain to write in C.
Remember that mod_perl (and to some extent mod_python, though it's deprecated) not only provides handlers but also wraps the internal config in Perl. You could write something complicated to increase the timeout in certain situations, but it would be a bear to write and test, and you're better off doing what Krisku says.
There doesn't seem to be any way to specify a timeout on the <!--#include virtual=... --> directive, but if you use mod_cgid instead of mod_cgi then starting with Apache 2.4.10 there's a configurable timeout parameter available which you can specify in httpd.conf or .htaccess:
CGIDScriptTimeout nnns
...where nnn is the number of seconds that Apache will allow a cogitating CGI script to continue to run.
Caveat: If you use PHP with Apache, then your Apache is presumably configured in /etc/httpd/conf.modules.d/00-mpm.conf to use the "prefork" MPM (because PHP requires it unless built with thread-safe flags), and the default Apache installation uses mod_cgi with the prefork MPM, so you'll probably need to edit /etc/httpd/conf.modules.d/01-cgi.conf to tell Apache to use mod_cgid instead of mod_cgi.
Although the comment in 01-cgi.conf says, "mod_cgid should be used with a threaded MPM; mod_cgi with the prefork MPM," that doesn't seem to be correct, because mod_cgid seems to work fine with prefork MPM and PHP, for me, with Apache 2.4.46.
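On a stock CentOS/RHEL-style layout, 01-cgi.conf picks the CGI module based on the MPM, roughly like this; the edit described above amounts to making the prefork branch load mod_cgid instead (verify against your own file before changing it):
<IfModule mpm_worker_module>
   LoadModule cgid_module modules/mod_cgid.so
</IfModule>
<IfModule mpm_event_module>
   LoadModule cgid_module modules/mod_cgid.so
</IfModule>
<IfModule mpm_prefork_module>
   # originally: LoadModule cgi_module modules/mod_cgi.so
   LoadModule cgid_module modules/mod_cgid.so
</IfModule>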
Although that doesn't give you complete control over server timeouts, you could specify a different CGIDScriptTimeout setting for a particular directory (e.g., put your slow .cgi files in the ./slowstuff/ folder).
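For example, a per-directory override might look like this (the path is just an illustration):
<Directory "/var/www/cgi-bin/slowstuff">
    CGIDScriptTimeout 3600
</Directory>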
(Of course, as krisku mentioned in the accepted answer, changing CGIDScriptTimeout won't solve the problem of the user's web browser timing out.)

Server timeout when re-assembling the uploaded file

I am running a simple server app to receive uploads from a fine-uploader web client. It is based on the fine-uploader Java example and is running in Tomcat6 with Apache sitting in front of it and using ProxyPass to route the requests. I am running into an occasional problem where the upload gets to 100% but ultimately fails. In the server logs, as well as on the client, I can see that Apache is timing out on the proxy with a 502 error.
After trying it and seeing this myself, I realized the problem occurs with really large files. The Java server app was taking longer than 30 seconds to reassemble the chunks into a single file, so Apache would kill the connection and stop waiting. I have increased the Apache Timeout to 300 seconds, which should largely correct the problem, but the potential remains.
Any ideas on other ways to handle this so that the connection between Apache and Tomcat is not killed while the app is assembling the chunks on the server? I am currently using 2 MB chunks and was thinking maybe I should use a larger chunk size. Perhaps with fewer chunks to assemble, the server code could do it faster. I could test that, but unless the speedup is dramatic it seems like the potential for problems remains, and we will just be waiting for a large enough upload to come along to trigger them.
It seems like you have two options:
1. Remove (or raise) the timeout in Apache (see the proxy timeout sketch after this answer).
2. Delegate the chunk-combination effort to a separate thread, and return a response to the request as soon as possible.
With the latter approach, you will not be able to let Fine Uploader know if the chunk combination operation failed, but perhaps you can perform a few quick sanity checks before responding, such as determining if all chunks are accessible.
There's nothing Fine Uploader can do here, the issue is server side. After Fine Uploader sends the request, its job is done until your server responds.
As you mentioned, it may be reasonable to increase the chunk size or make other changes to speed up the chunk combination operation to lessen the chance of a timeout (if #1 or #2 above are not desirable).
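For the first option, the relevant knobs are on the Apache side of the proxy; something like the following (path, port, and values are placeholders) raises the wait either globally or only for the upload endpoint:
# global default for proxied backends
ProxyTimeout 300
# or a longer allowance just for the upload path
ProxyPass /uploads http://localhost:8080/uploads timeout=600
ProxyPassReverse /uploads http://localhost:8080/uploads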

Apache: simultaneous connections to single script

How does Apache (the most popular version nowadays, I guess) handle a connection to a script when that script is already being executed for another connection?
My guess has always been: upon receipt of a request for a script, the script's contents are copied to memory / compiled / executed, and if during this process there's another request for the same script, the same thing happens (assuming Apache does not lock the script file, and simply gives another share of memory/CPU for another compilation/storage/execution).
Or is there a queuing/waiting mechanism involved?
Assuming this additional connection is afforded enough memory and CPU, and does not exceed the maximum connections setting.
The quick (and easy) answer is that every request is processed by a new process.
Apache listens on a port and, for each request, creates a new process that handles that request. That means no shared memory between them.
Also, take a look at the processes with the "ps" command; you will see one "httpd" process for each request.
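For example (the process is typically named httpd on Red Hat-style systems and apache2 on Debian-style ones):
ps -C httpd -o pid,ppid,etime,cmd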
Take a look here for the more complex behaviour of the worker MPM: http://httpd.apache.org/docs/2.0/mod/worker.html
and have a look at Google too :) http://docstore.mik.ua/orelly/weblinux2/apache/ch01_02.htm