I am trying to build a simple web service that runs on an Ubuntu machine with Apache and mod_perl2. The service runs mpirun and returns the output of the call.
I invoke the Apache response handler via the web browser. The problem is that the mpirun command seems to hang.
Important:
This problem occurs on a server running Ubuntu (12.04.4) with Apache, mod_perl and Open MPI. When running it on my Mac (Mac OS X 10.9.3), it works fine and mpirun returns. On both machines, Open MPI is installed in the same version (1.6.5).
Here is my mod_perl handler:
package MyHandler;

use Apache2::Const '-compile' => 'OK';

sub handler {
    my $command = "mpirun -np 4 echo test";
    my $out = qx($command);
    print $out;
    return Apache2::Const::OK;
}
1;
The mpirun job does not seem to finish. A ps aux | grep mpirun gives me this:
www-data 24023 0.0 0.1 23600 2424 ? S 13:02 0:00 mpirun -np 4 echo test
When I do a killall -9 mpirun, the service comes back with the result.
No errors are written to the Apache error log.
Here is what I tried/tested:
made sure that the command mpirun -np 4 echo test generates the correct output when run as user www-data
tried to invoke mpirun in different ways: using IPC::Run and IPC::Run3 (as suggested by Sergei) and also using pipes, but every time mpirun does not finish.
tried to call the handler directly via a Perl script and not via the browser: mpirun finishes and the handler prints the desired output.
compared the outputs of ompi_info --param mpi all on both machines, mac and ubuntu, but found no differences
Any idea why mpirun would hang in my situation or any idea how I could debug this?
Edit
I tried to use Apache2::SubProcess as suggested by hrunting. Here is my code, following the simple example from the link:
package MyHandler;

use Apache2::SubProcess ();
use Apache2::Const '-compile' => 'OK';
use Apache2::Request;
use IO::Select ();

use Config;
use constant PERLIO_IS_ENABLED => $Config{useperlio};

sub handler {
    my $r = shift;
    my $command = "mpirun -np 4 echo test";
    my ($in_fh, $out_fh, $err_fh) = $r->spawn_proc_prog($command);
    $r->content_type('text/plain');
    my $output = read_data($out_fh);
    my $error  = read_data($err_fh);
    print "output : $output \n";
    print "error : $error \n";
    return Apache2::Const::OK;
}

# helper function to work w/ and w/o perlio-enabled Perl
sub read_data {
    my ($fh) = @_;
    my $data;
    if (PERLIO_IS_ENABLED || IO::Select->new($fh)->can_read(10)) {
        $data = <$fh>;
    }
    return defined $data ? $data : '';
}
1;
This does not work for me. When calling the handler from the browser, I get the output:
output :
error :
and ps aux tells me that mpirun is not running.
Any further ideas of how I could debug this and get mpirun to work with my configuration?
Look at Apache2::SubProcess. When you're running external processes within a mod_perl handler, Apache memory, I/O and process management come into play. Remember, your code is running within Apache itself and is subject to the Apache environment. The Apache2::SubProcess module is designed to make exec()- and system()-style calls work properly within Apache.
Note that the module documentation outlines caveats for dealing with different Perl configurations.
Try IPC::Run or IPC::Run3 to run your command.
Capture::Tiny works for me. I'm not sure it'll work well under mod_perl (it may interact badly with the file handles for the request and response) but it works fine as a regular script:
use Capture::Tiny 'capture';
my ( $stdout, $stderr, $exit ) = capture {
    system( qw(mpirun -np 4 echo test) );
};
print "stdout: $stdout\n";
print "stderr: $stderr\n";
print "exit: $exit\n";
Prints:
stdout: test
test
test
test
stderr:
exit: 0
Related
This is similar to Starting synergy automatically on RHEL/CentOS
However, this doesn't seem to be working anymore.
What I basically want to do is execute a program when the greeter is shown. This has been working before by adding it to the /etc/gdm/Init/Default script.
However, right now the script doesn't seem to be called anymore (tested with a 'logger' call).
SELinux is in permissive mode. The script is executable. synergyc is specified with the full path.
The following resolves the issue. To keep synergyc running at the GDM greeter, use the PostSession script below and put the /usr/share/gdm/greeter/autostart/synergyc.desktop file into place:
[Desktop Entry]
Type=Application
Name=Synergy Client
Exec=synergyc 192.168.1.110
X-GNOME-AutoRestart=true
/etc/gdm/PostSession/Default:
#!/bin/sh
# Kill old process
/usr/bin/killall synergyc
while pgrep -x synergyc > /dev/null; do sleep 0.1; done
# Get the xauthority file GDM uses, setup DISPLAY var and start synergyc again
xauthfile=$(ps aux |grep Xauth | grep '^gdm' | grep -oP '\-auth \K[\w/]+')
export DISPLAY=:0
export XAUTHORITY=${xauthfile}
/usr/bin/synergyc 192.168.1.110
exit 0
I have a CentOS 7 server with cPanel and I'm working on a Telegram bot for my business needs. The bot should be able to run a terminal command with os.system or subprocess.Popen; however, neither option works when configured through a webhook + WSGI process.
I tested both with the bot.polling method and they worked like a charm; however, after I switched to the webhook method served by Flask and WSGI, both stopped working for me. I have tried the following:
mycommand = "python3.6 GoReport.py --id 31-33 --format word"
os.chdir('dir_to_run_command_from')
os.system(mycommand)
and the following one:
mycommand = "python3.6 GoReport.py --id 31-33 --format word"
subprocess.Popen(mycommand, cwd="dir_to_run_command_from", shell=True)
Both options simply do nothing right now. I tried to print the return value of both and received 0 as a response. I wonder if the issue is caused by permissions or something.
I expect both options to work through webhook + wsgi as good as they work through bot.polling method.
I think I got it wrong. Your script writes a report to a specific directory. You do not need a result in your application route.
I wrote a small test application called tryout. It runs in a virtual environment.
$ mkdir tryout
$ cd tryout
$ python3 -m venv tryout
$ source tryout/bin/activate
$ export FLASK_APP=tryout/app
$ export FLASK_ENV=development
$ flask run
Directory structure:
/tryout
/app/*
/bin/*
/include/*
/lib/*
/subdir/*
Application:
# /tryout/app/__init__.py
import sys, os
from flask import Flask

def create_app(env=os.getenv('FLASK_ENV', 'development')):
    app = Flask(__name__)

    @app.route('/run-script')
    def run_script():
        import subprocess
        cmd = 'python script.py'
        cwd = 'subdir'
        ret = subprocess.check_output(cmd, cwd=cwd, shell=True)
        print(ret)
        return ret, 200

    return app

app = create_app()
Script:
# /subdir/script.py
import os, sys

def main():
    with open('report.txt', 'w+') as fp:
        fp.write('Info\n')
    sys.stdout.write('It works!')

if __name__ == '__main__':
    main()
It works! A new file named "report.txt" is written into the "subdir" directory, and "It works!" appears in the browser.
I hope this helps; if not, I may have misunderstood what you want to do.
If you want to run an external script from inside Flask, you could use subprocess to run the script from the command line. This is the right solution.
@app.route('/run-script')
def run_script():
    cmd = '<your command here!>'
    result = subprocess.check_output(cmd, cwd='<your workdir>', shell=True)
    return render_template('results.html', **locals())
Have fun!
@Bogdan Kozlowskyi
Is it possible to pipe on the command line? Do you need to return a result to the user?
cmd = 'first_cmd | tee report.log'
result = subprocess.check_output(cmd, cwd='<your workdir>', shell=True)
Perhaps you should look for shell commands like '>>', '>' and 'tee'.
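For example (your_command is just a placeholder here):
# overwrite the log file with the command's output
your_command > report.log
# append to the log file instead of overwriting it
your_command >> report.log
# write to the log file and still pass the output through to stdout
your_command | tee report.log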
Seems to be a user-groups permission problem (execute and write).
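One way to narrow that down (user names here are assumptions; adjust them to whatever user your WSGI process actually runs as) is to find out which user runs the worker and try the command as that user:
# find out which user the wsgi worker runs as
ps aux | grep -i wsgi
# check that this user can read/execute the script and write to the working directory
ls -l GoReport.py
ls -ld dir_to_run_command_from
# e.g. if the worker runs as the user nobody, try the command as that user
sudo -u nobody python3.6 GoReport.py --id 31-33 --format word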
I have grunt tasks setup to do some functional tests with CasperJS on my local machine. Everything is working fine.
I'd like to know if there is a way to keep running the tests until it fails? Or run through the tests a certain number of times?
In PowerShell you can "cd" to the directory and use the following one-liner:
do { grunt } while (1 -eq 1)
Here is the equivalent in Bash:
while [ 1 -eq 1 ]; do grunt; done
This should run grunt over and over in an infinite loop, and you can stop it with Ctrl + C. If you want it to stop as soon as it fails, you will need a script (I am providing Bash as I am not very familiar with PowerShell):
#! /bin/sh
result=0
while [ $result -eq 0 ]; do
    grunt || result=1
done
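If you instead want to run the tests a fixed number of times rather than forever, a simple counter loop works (10 runs here is just an example):
#! /bin/sh
for i in $(seq 1 10); do
    grunt || break    # drop the '|| break' if all 10 runs should happen even after a failure
done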
I have created a script to check whether my GlassFish server is running (installed on a FreeBSD system). If it isn't, the script attempts to kill the java process to ensure it's not hung, and then issues the asadmin start-domain command.
If this script runs from the command line it is successful 100% of the time. When it is run from the crontab, every line runs except the asadmin start-domain line - it does not seem to execute, or at least does not complete, i.e. the server is not running after this script runs.
For anyone not familiar with GlassFish or the asadmin utility used to start the server, it is my understanding that a forked process is used. Could this be causing a problem via cron?
Again, in all my tests today, the script runs to completion when run from the command line. Once it's executed through cron, it does not complete... what would be different running this from the crontab?
Thanks in advance for any help... I'm pulling my hair out trying to make this work!
#!/bin/bash
JAVA_HOME=/usr/local/diablo-jdk1.6.0/; export JAVA_HOME
timevar=`date +%d-%m-%Y_%H.%M.%S`
process_name='java'
get_contents=`cat urls.txt`

for i in $get_contents
do
    echo checking $i
    statuscode=$(curl --connect-timeout 10 --write-out %{http_code} --silent --output /dev/null $i)
    case $statuscode in
        200)
            echo "$timevar $i $statuscode okay" >> /usr/home/user1/logfile.txt
            ;;
        *)
            echo "$timevar $i $statuscode bad" >> /usr/home/user1/logfile.txt
            echo "Status $statuscode found" | mail -s "Check of $i failed" some.address@gmail.com
            process_id=`ps acx | grep -i $process_name | awk '{print $1}'`
            if [ -z "$process_id" ]
            then
                echo "java wasn't found in the process list"
            else
                echo "Killing java, currently process $process_id"
                kill -9 $process_id
            fi
            /usr/home/user1/glassfish3/bin/asadmin start-domain domain1
            ;;
    esac
done
Also, just for completeness, here is the entry in the cron tab:
*/2 * * * * /usr/home/user1/server.check.sh >> /usr/home/user1/cron.log
Ok... found the answer to this on another site, but I thought I'd add the answer in here for future reference.
The problem was the PATH! Even though JAVA_HOME was set, java itself wasn't in the PATH for the cron daemon.
For a quick test to see what PATH is available to your cron, add this line:
*/2 * * * * env > /usr/home/user1/env.output
From what I can gather, the PATH initially available to cron is pretty minimal. Since java was in /usr/local/bin, I added that to the PATH right in the crontab and, kaboom, it worked!
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/2 * * * * /usr/home/user1/server.check.sh >> /usr/home/user1/cron.log
I'm trying to make push notifications work on my Debian VPS (Apache2, MySQL).
I use a PHP script from this tutorial (http://www.raywenderlich.com/3525/apple-push-notification-services-tutorial-part-2).
Basically, the script is put in an infinite loop that checks a MySQL table for new records every couple of seconds. The tutorial says it should be run as a background process.
// This script should be run as a background process on the server. It checks
// every few seconds for new messages in the database table push_queue and
// sends them to the Apple Push Notification Service.
//
// Usage: php push.php development &
So I have four questions.
1 - How do I start the script from the terminal? What should I type? The script location on the server is:
/var/www/development_folder/scripts/push2/push.php
2 - How can I kill it if I need to (without having to restart Apache)?
3 - Since the push notification is essential, I need a way to check if the script is running.
The code (from the tutorial) calls a function if something goes wrong:
function fatalError($message)
{
    writeToLog('Exiting with fatal error: ' . $message);
    exit;
}
Maybe I can put something in there to restart the script? But it would also be nice to have a cron job or something that checks every 5 minutes or so whether the script is running, and starts it if it isn't.
4 - Can I make the script automatically start after an Apache or MySQL restart, or if the server crashes or something else happens that needs an Apache restart?
Thanks a lot in advance
You could run the script with the following command:
nohup php /var/www/development_folder/scripts/push2/push.php > /dev/null &
The nohup means that the command should not quit (it ignores the hangup signal) when you e.g. close your terminal window. If you don't care about this you could just start the process with "php /var/www/development_folder/scripts/push2/push.php &" instead. PS! nohup logs the script output to a file called nohup.out by default; if you do not want this, just add > /dev/null as I've done here. The & at the end means that the process will run in the background.
I would only recommend starting the push script like this while you test your code. The script should be run as a daemon at system startup instead (see 4.) if it's important that it runs all the time.
Just type
ps ax | grep push.php
and you will get the processid (pid). It will look something like this:
4530 pts/3 S 0:00 php /var/www/development_folder/scripts/push2/push.php
The pid is the first number you'll see. You can then run the following command to kill the script:
kill -9 4530
If you run ps ax | grep push.php again the process should now be gone.
I would recommend that you make a cronjob that checks if the php-script is running, and if not, starts it. You could do this with ps ax and grep checks inside your shell script. Something like this should do it:
if ! ps ax | grep -v grep | grep 'push.php' > /dev/null
then
    nohup php /var/www/development_folder/scripts/push2/push.php > /dev/null &
else
    echo "push-script is already running"
fi
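If you save that check as a small script (the path below is just an example), make it executable and have cron run it every five minutes:
chmod +x /usr/local/bin/check_push.sh
# crontab entry
*/5 * * * * /usr/local/bin/check_push.sh >> /var/log/check_push.log 2>&1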
If you want the script to start up after booting the system, you could make a file in /etc/init.d (e.g. /etc/init.d/mypushscript) with something like this inside:
php /var/www/development_folder/scripts/push2/push.php
(You should probably have a lot more in this file.)
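As a rough, untested sketch of what such a file could look like (the pid file location and the "development" argument from the tutorial's usage line are assumptions; adjust paths to your setup):
#!/bin/sh
# /etc/init.d/mypushscript -- very minimal SysV-style sketch, untested
SCRIPT=/var/www/development_folder/scripts/push2/push.php
PIDFILE=/var/run/mypushscript.pid   # assumed location for the pid file

case "$1" in
  start)
    nohup php "$SCRIPT" development > /dev/null 2>&1 &
    echo $! > "$PIDFILE"
    ;;
  stop)
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac

exit 0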
You would also need to run the following commands:
chmod +x /etc/init.d/mypushscript
update-rc.d mypushscript defaults
to make the script start at boot-time. I have not tested this so please do more research before making your own init script!