Linux loop script: Cannot allocate memory - apache

I'm facing a big problem with a script on Linux.
I work on an Apache 2 server and I have to execute a PHP script every second to update the database (yes, every second, I'm sure).
To do that, I created this script:
#!/bin/bash
while [ -f "MONFICHIER" ]
do
php fichier.php >> log.txt
sleep 1
done
exit 0
This script runs for a while, then stops with the error message: "fork: Cannot allocate memory"
Everything works fine at first, but after a while a lot of defunct (zombie) processes pile up, and it is these processes that fill up the memory.
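For what it's worth, this is roughly how I see them (standard ps/awk, nothing specific to my setup):
# count the defunct (zombie) processes
ps -eo stat | grep -c '^Z'
# list them together with the parent PID that never reaped them
ps -eo pid,ppid,stat,cmd | awk '$3 ~ /^Z/'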
The PHP file is the index.php of a CodeIgniter application, called with parameters that select the function to run; that function checks some data and updates the database.
I'm sorry, but I can't provide the source code (it's confidential); the function itself is fast (less than a second).
Has anybody had this problem?
Thanks!

Either your PHP script takes more than one second to execute and you end up with a lot of parallel php invocations, or it uses a lot of memory. Either way, nobody can help you with only the source of the shell script.
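If the overlap theory is right (for instance if more than one copy of this loop, or an extra cron job, ends up invoking the PHP at the same time), a guard like the following at least keeps runs from piling up. This is only a sketch: the lock file path and the 55-second timeout are placeholders, not values from your setup.
#!/bin/bash
while [ -f "MONFICHIER" ]
do
    # flock -n skips this round if the previous run still holds the lock;
    # timeout kills an invocation that hangs instead of letting it pile up
    flock -n /tmp/fichier.lock timeout 55 php fichier.php >> log.txt 2>&1
    sleep 1
done
exit 0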

Related

Execute a shell command outside of a sandbox while in a sandbox

I'm using Singularity to run Python in an environment that has no Python installed. I'm also running a MySQL instance as explained by Iowa State University (start a MySQL instance, and close it when done).
For clarity: I'm using a bash script to start MySQL, do what I have to do (a Python script), and then close MySQL, and that works fine. But Python's only way to stop when an error occurs is sys.exit([value]), and this stops not only the Python script but also the bash script that ran it. That makes it impossible for me to handle the error and close the MySQL instance when the Python script exits.
My question is: is there a way to execute 'singularity instance stop mysql' from inside the Python sandbox? Something to tell Singularity "hey, this command must be run on the host!"?
I keep searching but can't find anything.
I tried executing it with subprocess like any other command, but it returned an error because that instance doesn't exist inside the Python sandbox; I don't even have Singularity in there.
If you need any clarification, just ask; I'm trying to be clear, but I'm pretty sure it isn't.
Thanks a lot!
Generally speaking, it would be a big security issue if a process could be initiated from inside a container (docker or singularity) but run in the host OS's namespace.
If the bash script is exiting on the python failure, it sounds like you're using set -e or #!/bin/bash -e. This causes the script to abort if any command returns non-zero. It's commonly recommended for safer processing, but can cause problems like this at times. To bypass that for the python step you can modify your script:
# start mysql, do some stuff
set +e # disable abort on non-zero return
python my_script.py
set -e # re-enable abort on non-zero
# shut down mysql, do other stuff
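If you'd rather not toggle set -e around every step, a trap is another option. This is just a sketch: the instance name mysql comes from your question, but the image name and start line are placeholders for however you actually start it.
#!/bin/bash -e
# Stop the instance on any exit, including a python failure under -e.
trap 'singularity instance stop mysql' EXIT

singularity instance start mysql_image.sif mysql   # placeholder start command
python my_script.py                                # may exit non-zero; the trap still runs
# ... any other steps ...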

cgi-bin returning internal server error due to compilation failure

I moved my script to a new server with an almost identical configuration (Apache/CentOS), but cgi-bin has been failing ever since. For the past week I have googled every possible solution and isolated the error by executing the script from the command line. The output I get for a simple test file is as follows:
[root /var/foo/public_html/cgi-bin]# perl -dd /var/foo/public_html/cgi-bin/test.cgi
Loading DB routines from perl5db.pl version 1.32
Editor support available.
Enter h or `h h' for help, or `man perldebug' for more help.
main::(/var/foo/public_html/cgi-bin/test.cgi:2):
2: print "Content-type: text/plain\n\n";
Unknown error
Compilation failed in require at /usr/local/share/perl5/Term/ReadLine/Perl.pm line 63.
at /usr/local/share/perl5/Term/ReadLine/Perl.pm line 63
Term::ReadLine::Perl::new('Term::ReadLine', 'perldb', 'GLOB(0x18ac160)', 'GLOB(0x182ce68)') called at /usr/share/perl5/perl5db.pl line 6073
DB::setterm called at /usr/share/perl5/perl5db.pl line 2237
DB::DB called at /var/foo/public_html/cgi-bin/test.cgi line 2
Attempt to reload Term/ReadLine/readline.pm aborted.
Compilation failed in require at /usr/local/share/perl5/Term/ReadLine/Perl.pm line 63.
END failed--call queue aborted at /var/foo/public_html/cgi-bin/test.cgi line 63.
at /var/foo/public_html/cgi-bin/test.cgi line 63
[root /var/foo/public_html/cgi-bin]#
The code of the test file I am using is:
#!/usr/local/bin/perl
print "Content-type: text/plain\n\n";
print "testing...\n";
I have checked the path to perl, the perl version, etc., and everything seems to be OK. However, the script does not execute and gives a 500 internal server error. I am running PHP 5 with the DSO handler and suEXEC on. The suEXEC log does not say anything except that the cgi script has been called. This problem is completely baffling me and my limited experience with cgi/perl is not helping. Can anyone point me in the right direction?
As someone already commented, check the permissions and also try to run the file from the console.
A likely problem is that when you switched servers the path to perl changed and your shebang line is wrong. A common technique to avoid this is to use #!/usr/bin/env perl instead.
When you receive a 500 error, Apache should also log something in the error log (your vhost config might define a custom error log instead of the default one, so check that if you're having trouble finding the message).
Also, there is no reason to run your script under the Perl debugger unless your goal is to experiment with the debugger (and with no variables defined, this script is pointless as an example). My advice is: don't use the Perl debugger. A lot of experienced Perl programmers (probably a big majority) rarely or never use it.
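A few quick checks from the shell usually narrow this down; the paths below are the ones from your example, and the error log location is the CentOS default, so adjust if your vhost's ErrorLog points elsewhere.
# syntax-check the script without involving the debugger
perl -c /var/foo/public_html/cgi-bin/test.cgi
# run it directly (assuming the execute bit is set); it should print the header and "testing..."
/var/foo/public_html/cgi-bin/test.cgi
# watch Apache's error log while you request the page in a browser
tail -f /var/log/httpd/error_log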
I solved this eventually; posting it for the sake of posterity:
When moving servers I mounted the filesystem on a different partition, because the home partition had run out of space. I forgot to give exec permissions to the new mount point, which is why the cgi scripts were not executing.
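In case someone else hits this: assuming the cause was the noexec mount option (my reading of "exec permissions" on a mount point), the check and fix look roughly like this, with an illustrative mount point:
# look for "noexec" in the mount options of the partition holding cgi-bin
mount | grep /var/foo
# remount with exec for the current session...
mount -o remount,exec /var/foo
# ...and make it permanent by adjusting the options for that partition in /etc/fstab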

Strange apache behaviour when launching an external binary called by a perl script

I am currently setting up a web service powered by Apache, running on CentOS 6.4.
This service uses Perl CGI scripts that launch, among other things, external home-made compiled Fortran binaries.
Here is the issue: when I boot my server, everything goes well except that one of my binaries systematically crashes (with a segfault reported by the kernel) when called by my Perl scripts.
If I manually restart the httpd service (service httpd restart at the command line), the issue is completely fixed.
I examined the Apache and system logs and found nothing suspicious.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup scripts. I tried changing the launch order of httpd (S85httpd by default) to other positions, without success.
To summarize, my web service is only functional (with no external binary crash) when httpd is launched from the command line once the server has fully booted up!
[EDIT] This issue is now resolved:
My Fortran binary handles very large arrays and complex functions that require an unlimited stack size.
Although the stack size limit was set system-wide (in /etc/security/limits.conf), for some reason the "apache/perl/fortran binary" chain did not pick it up (causing my binary to crash each time it was called).
By contrast, when I manually restarted Apache from the shell prompt, the stack size limit was inherited correctly (my .bashrc runs 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my Perl script, e.g. setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
This way the new stack size limit is passed directly from my Perl script to my binary.
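A shell-level alternative, sketched here on the assumption of a stock CentOS 6 init script that sources /etc/sysconfig/httpd, is to raise the limit where the boot-time start will pick it up:
# /etc/sysconfig/httpd -- sourced by the httpd init script before starting Apache
# Raise the stack limit for httpd and everything it executes,
# including the CGI perl scripts and the fortran binary.
ulimit -s unlimited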
I've run into similar problems before. One way to solve this might be to put the service on a 'delayed start', so that it starts after everything else on your system is running. One way to do this is to put an at job in your /etc/rc.local script to start it X minutes after boot.

jsch ChannelExec running a .sh script with nohup "loses" some commands

I have a .sh script that glues together many other scripts, and it is called via JSch ChannelExec from a Windows application.
Channel channel = session.openChannel("exec");
((ChannelExec) channel).setCommand("/foo/bar/foobar.sh");
channel.connect();
If I run the command as "nohup /foo/bar/foobar.sh >> /path/to/foo.log &", all the long-running jobs (database operations, file processing, etc.) seem to get lost.
Checking the log file, I only find the echo output (before and after a long-running operation, running-time calculations, etc.).
I checked the permissions and $PATH, and added source /etc/profile to my .sh, yet none of this works.
But when I run the command normally (a synchronous run, printing all echo output to my Java client on Windows), everything goes well.
It's a really specific problem.
Hope someone with experience can help me out.
Thanks in advance.
Han
Solved!
"A different issue that often arises in this situation is that ssh is refusing to log off ("hangs"), since it refuses to lose any data from/to the background job(s). This problem can also be overcome by redirecting all three I/O streams."
(from http://en.wikipedia.org/wiki/Nohup)
My problem was that psql and pg_bulkload print their output to the error stream (stderr).
In my script, I didn't redirect stderr.
Everything went fine once I also redirected stderr to the same log file:
nohup foo.sh > log.log 2>&1 &
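For completeness, redirecting stdin as well covers all three streams mentioned in the quote; the paths are the ones from the question:
# send stdout and stderr to the log and detach stdin entirely,
# so the ssh channel has nothing left to hold on to
nohup /foo/bar/foobar.sh > /path/to/foo.log 2>&1 < /dev/null &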
Thanks to Atsuhiko Yamanaka, who created the great JSch library, and to Paŭlo Ebermann for the documentation.

valgrind on server process

Hi, I am new to Valgrind. I know how to run Valgrind on executable files from the command line, but how do you run it on server processes like apache/mysqld/traffic server, etc.?
I want to run Valgrind on Traffic Server (http://incubator.apache.org/projects/trafficserver.html) to detect memory leaks in a plugin I have written.
Any suggestions?
Thanks,
pigol
You have to start the server under Valgrind's control. Simply take the server's normal start command, and prepend it with valgrind.
Valgrind will follow every process your main "server" process forks (add --trace-children=yes if it also execs helper programs). When each process ends, Valgrind outputs its analysis, so I'd recommend redirecting that to a file (by default it is written to stderr).
If your usual start command is /usr/local/mysql/bin/mysqld, start the server instead with valgrind /usr/local/mysql/bin/mysqld.
If you usually start the service with a script (like /etc/init.d/mysql start) you'll probably need to look inside the script for the actual command the script executes, and run that instead of the script.
Don't forget to pass the --leak-check=full option to valgrind to get the memory leak report.
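As a concrete sketch, reusing the mysqld path from above (substitute the Traffic Server binary and its usual options):
# run the server under valgrind, follow the child processes it spawns,
# and write one report per process (%p expands to the PID)
valgrind --leak-check=full --trace-children=yes \
         --log-file=valgrind.%p.log \
         /usr/local/mysql/bin/mysqld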