Are ERRORLEVEL values returned by executables reliable for error handling in batch scripts? - apache

This is a general question about the ERRORLEVEL values returned by executables on Windows. I am writing a script to install Apache HTTP Server as a Windows service.
httpd -k install
The httpd executable returns
ERRORLEVEL 1 when there is an error in config file.
ERRORLEVEL 2 when there is already a service installed.
ERRORLEVEL 0 for success.
Is it OK, in general, to handle errors based on the ERRORLEVEL returned by executables? Or do we need to rely on other means to handle the return codes of commands that we run in our batch scripts?

Every programmer is free to handle exit codes (surfaced in batch as %errorlevel%) as they see fit. There is a general agreement that 0 means "success" while any non-zero value means "an error of some sort", but you can't rely on that.
A good example is ping. If you ping a non-existent IP within your own network, it returns
Reply from <your own IP>: Destination host unreachable.
You might think that's unsuccessful and therefore expect a non-zero errorlevel, but obviously the programmer thought "well, there is an answer, so it's successful".
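A quick way to see this for yourself (192.168.1.250 stands in for any unused address on your own subnet; behaviour may vary between Windows versions):
ping -n 1 192.168.1.250
echo exit code: %ERRORLEVEL%
Despite the "unreachable" message, the echoed code is typically 0, because a reply was received.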
Some internal commands don't touch the errorlevel at all, some always return zero, and some even use 1 to signal success (robocopy, thanks aschipfl).
Executables do set the errorlevel (thanks Aacini), but how depends on the programmer: some always return zero, though most return actually usable errorlevels.
So, sorry to have to tell you: you can't rely on general rules for %errorlevel%. You have to check it for each and every single command/executable, either through its documentation (if you are very lucky) or by trying it out.
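For the httpd case in the question, the three codes listed above can be handled directly. A minimal sketch, assuming those codes hold for your httpd build (the echo messages are illustrative):
@echo off
httpd -k install
if %ERRORLEVEL% EQU 2 echo A service is already installed.& exit /b 2
if %ERRORLEVEL% EQU 1 echo Error in the config file.& exit /b 1
echo Service installed successfully.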

Related

Execute a shell command outside of a sandbox while in a sandbox

I'm using singularity to run Python in an environment deprived of Python. I'm also running a MySQL instance as explained by Iowa State University (running an instance of mysql, and closing it when done).
For clarity, I'm using a bash script to open mysql, then do what I have to do (a Python script), and close mysql, and it works fine. But Python's only way to stop when an error occurs is sys.exit([value]), and this not only stops the Python script but also the bash script that ran it. This makes it impossible for me to manage the errors and close the instance of mysql if the Python script exits.
My question is: is there a way for me to execute 'singularity instance stop mysql' while inside the Python sandbox? Something to tell singularity "hey, this command here must be run on the host!"?
I keep searching but can't find anything.
I only tried to execute it with subprocess like any other command, but it returned an error message because I don't have this instance inside the python sandbox. I don't even have singularity in this sandbox.
For any clarifications, just ask me, I'm trying to be clear but I'm pretty sure it's not very clear.
Thanks a lot !
Generally speaking, it would be a big security issue if a process could be initiated from inside a container (docker or singularity) but run in the host OS's namespace.
If the bash script is exiting on the python failure, it sounds like you're using set -e or #!/bin/bash -e. This causes the script to abort if any command returns non-zero. It's commonly recommended for safer processing, but can cause problems like this at times. To bypass that for the python step you can modify your script:
# start mysql, do some stuff
set +e # disable abort on non-zero return
python my_script.py
set -e # re-enable abort on non-zero
# shut down mysql, do other stuff
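A fuller sketch of the wrapper that also propagates the Python exit code after cleanup (the instance name mysql is taken from the question; the image file name is an assumption):
#!/bin/bash -e
singularity instance start mysql.simg mysql   # image name is hypothetical
set +e              # don't abort here: mysql still has to be stopped
python my_script.py
rc=$?               # remember the python exit code
set -e
singularity instance stop mysql
exit $rc            # report the python result to the caller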

SMPP check through MONIT

I am new to Linux and I need help checking, through MONIT, whether the SMPP binds of kannel are online or dead.
Currently, in a script file named XYZ.sh, I use (curl --silent http://localhost:xyz/status?password=abc | grep SMPP | grep -v online), and my Monit config is as below:
check file XYZ with path /root/script/XYZ.sh if match "dead" after 5 cycles then alert
It is not working; please guide me, as I am quite stuck.
You are using the wrong check type. You should use program instead of file when using custom scripts.
Also, a program check is not able to match content, so you will have to update your script to return an exit code that reflects the expected behaviour:
check program XYZ with path /root/script/XYZ.sh if status != 0 for 5 cycles then alert
Documentation: https://mmonit.com/monit/documentation/monit.html#Program
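The script itself then just needs to map "dead binds found" to a non-zero exit code. A sketch built from the curl/grep pipeline in the question (the port and password placeholders are kept as-is):
#!/bin/bash
# exit 1 if any SMPP bind is not online, 0 otherwise
if curl --silent "http://localhost:xyz/status?password=abc" | grep SMPP | grep -qv online; then
    exit 1
fi
exit 0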

cgi-bin returning internal server error due to compilation failure

I moved my script to a new server with an almost identical configuration (Apache/CentOS), but the cgi-bin has failed to work ever since. For the past week I have googled every possible solution and isolated the error by executing the script from the command line. The output I get is as follows for a simple test file:
[root /var/foo/public_html/cgi-bin]# perl -dd /var/foo/public_html/cgi-bin/test.cgi
Loading DB routines from perl5db.pl version 1.32
Editor support available.
Enter h or `h h' for help, or `man perldebug' for more help.
main::(/var/foo/public_html/cgi-bin/test.cgi:2):
2: print "Content-type: text/plain\n\n";
Unknown error
Compilation failed in require at /usr/local/share/perl5/Term/ReadLine/Perl.pm line 63.
at /usr/local/share/perl5/Term/ReadLine/Perl.pm line 63
Term::ReadLine::Perl::new('Term::ReadLine', 'perldb', 'GLOB(0x18ac160)', 'GLOB(0x182ce68)') called at /usr/share/perl5/perl5db.pl line 6073
DB::setterm called at /usr/share/perl5/perl5db.pl line 2237
DB::DB called at /var/foo/public_html/cgi-bin/test.cgi line 2
Attempt to reload Term/ReadLine/readline.pm aborted.
Compilation failed in require at /usr/local/share/perl5/Term/ReadLine/Perl.pm line 63.
END failed--call queue aborted at /var/foo/public_html/cgi-bin/test.cgi line 63.
at /var/foo/public_html/cgi-bin/test.cgi line 63
[root /var/foo/public_html/cgi-bin]#
The code of the test file I am using is:
#!/usr/local/bin/perl
print "Content-type: text/plain\n\n";
print "testing...\n";
I have checked the path to perl, the perl version, etc., and everything seems to be OK. However, the script is not executing and gives a 500 Internal Server Error. I am running PHP 5 with the DSO handler and suEXEC on. The suEXEC log does not say anything except that the CGI script has been called. This problem is completely baffling me, and my little experience with CGI/Perl is not helping. Can anyone point me in the right direction to solve this?
As someone already commented, check the permissions and also try to run the file from the console.
A likely problem is that when you switched servers the path to perl changed and your shebang line is wrong. A common technique to avoid this is to use #!/usr/bin/env perl instead.
When you receive a 500 error, Apache should also log something in the error log (your vhost config might define a custom error log instead of the default one, so check that if you're having trouble finding the error message).
Also, there is no reason to run your script under the Perl debugger, unless your goal is to experiment with the Perl debugger (and with no variables defined, it is pointless as an example). My advice is: don't use the Perl debugger. A lot of experienced Perl programmers (probably a big majority) never or rarely use it.
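Two quick checks along those lines (the log path is the CentOS default; yours may differ):
perl -c /var/foo/public_html/cgi-bin/test.cgi   # compile check, no debugger needed
tail -f /var/log/httpd/error_log                # watch this while requesting the page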
I solved this eventually, posting it for the sake of posterity:
On moving servers I mounted the filesystem on a different partition, as the home partition had run out of space. I forgot to give exec permissions to the new mount point; that's why the CGI scripts were not executing.
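For anyone hitting the same thing, the check and fix look roughly like this (the mount point /var/foo is inferred from the paths above; verify yours with mount):
mount | grep /var/foo            # look for 'noexec' in the mount options
mount -o remount,exec /var/foo   # re-enable execution on the mounted filesystem
Remember to also add exec to the corresponding /etc/fstab entry so the fix survives a reboot.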

Strange apache behaviour when launching an external binary called by a perl script

I am currently setting up a web service powered by Apache and running on CentOS 6.4.
This service uses Perl CGI scripts (cgi-bin) which in particular launch external homemade compiled Fortran binaries.
Here is the issue: when I boot my server, everything goes well except that one of my binaries systematically crashes (with a segfault) when called by my Perl scripts.
If I manually restart the httpd service (at the command line: service httpd restart), the issue is totally fixed.
I examined the apache/system logs and nothing suspicious could be found.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup directives. I tried changing the launch order of httpd (S85httpd by default) to other positions, without success.
To summarize, my web service is only functional (with no external binary crash) when httpd is launched at the command line once the server has fully booted up!
[EDIT] This issue is now resolved:
My fortran binary handles very large arrays and complex functions requiring an unlimited stack size.
Although the stack size limit was defined on a system-wide basis (in /etc/security/limits.conf), for some reason the "apache/perl/fortran binary" chain was not aware of it (causing my binary to crash each time it was called).
By contrast, when I manually restarted apache at the shell prompt, the stack size limit was correctly inherited (my .bashrc runs 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my perl script, e.g. setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
Thus, this new stack size limit is now directly passed from my perl script to my binary.
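For reference, the relevant part of the perl script would look roughly like this (the binary path and the use of unlimited limits are illustrative):
use BSD::Resource;
# mirror 'ulimit -s unlimited' before spawning the fortran binary
setrlimit(RLIMIT_STACK, RLIM_INFINITY, RLIM_INFINITY)
    or die "setrlimit failed: $!";
system('/path/to/fortran_binary');   # hypothetical path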
I've run into similar problems before. Maybe one way to solve this is to put the binary on a 'delayed start', so that it starts after everything else on your system is running. One way to do this is to put an at job in your /etc/rc.local script, to start the binary in X minutes.
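In /etc/rc.local, that could look something like this (the two-minute delay is an arbitrary choice):
echo "service httpd restart" | at now + 2 minutes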

Multiple jobs (-j3)

I am trying to run a GNU makefile with multiple jobs.
When I try executing 'make.exe -r -j3', I receive the following two errors:
make.exe: Do not specify -j or --jobs if sh.exe is not available.
make.exe: Resetting make for single job mode.
Do I have to add ' $(SH) -c' somewhere in the makefile? If so, where?
The error message suggests that make cannot find sh.exe. The file names indicate you are probably on CygWin. I would investigate setting the PATH to include the location of sh.exe, or defining the value of SHELL to the name (or, even, full path) of your shell.
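If you go the SHELL route, a minimal sketch (the sh.exe path is an assumption; point it at wherever your Cygwin or MinGW install keeps it):
SHELL := C:/cygwin/bin/sh.exe

all:
	@echo using $(SHELL)
With that in place, make.exe -r -j3 should no longer fall back to single-job mode.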
Are you running this on Windows (more specifically, in the Windows cmd shell)? If you are, you might want to read this:
http://www.gnu.org/software/make/manual/make.html#Parallel
more specifically:
On MS-DOS, the '-j' option has no effect, since that system doesn't support multi-processing.
Once again, assuming you're running on Windows, you should get MinGW or CygWin.