I have grunt tasks setup to do some functional tests with CasperJS on my local machine. Everything is working fine.
I'd like to know if there is a way to keep running the tests until it fails? Or run through the tests a certain number of times?
In PowerShell you can cd to the directory and use the following one-liner:
do { grunt } while (1 -eq 1)
Here is the equivalent in Bash:
while true; do grunt; done
This should run grunt over and over in an infinite loop and you can stop with ctrl + c. If you want it to stop as soon as it fails you will need a script (I am providing the Bash as I am not very familiar with powershell):
#!/bin/sh
result=0
while [ $result -eq 0 ]; do
    grunt || result=1
done
Related
I'd like to run a post script via npm run that would know whether the main script failed.
Right now, the behaviour of a post script is that it runs if the main script succeeded. So I can do something on success.
However, I want to also run the script on failure. How do I do that?
I too looked for an answer to this question, and the best I could do was use a bash script to always run the post command. Here is my npmbash script.
#!/usr/bin/env bash
#
# Use this to catch control-c and cleanup. Won't run on Windows except in git bash or similar.
#
# Pass args you'd normally pass to npm - eg. npm run dev
# - if the args contain 'dev' or 'test' then the mongodb will be started up and shut down
# after it completes or control-c is pressed.
isDevOrTest=false
setIsDevOrTest() {
    for i in "$@"
    do
        if [[ $i =~ dev|test ]]
        then
            echo "has dev or test"
            isDevOrTest=true
        fi
    done
}
# Trap control-c
trap 'ctrl_c' INT
function ctrl_c() {
    shutdown
    exit 0
}

function shutdown() {
    echo "shutdown isDevOrTest: $isDevOrTest"
    if [[ $isDevOrTest == true ]]
    then
        echo "It's dev or test, so shut down mongo"
        mongo admin --eval 'db.shutdownServer()'
    fi
}
function startup() {
    echo startup
}

function run() {
    echo npm "$@"
    npm "$@"
}

setIsDevOrTest "$@"
startup
run "$@"
shutdown
To keep as much as possible in npm, you could have shutdown execute the post script you've defined to run after the main script (eg. poststart for start):
function shutdown() {
npm run poststart
}
But you'd need to make sure that on success the post script only runs once (by default npm runs poststart after start, just as prestart runs before it - https://docs.npmjs.com/misc/scripts). That should be as simple as:
function shutdown() {
    if [[ "$?" != "0" ]]
    then
        # Only run if the previous command was not successful
        npm run poststart
    fi
}
Run with:
npmbash run dev
I have a Travis CI project which builds an iOS app then starts Appium and runs tests with Appium/Mocha.
The problem is that even though the Mocha tests fail and throw exceptions, the shell script that runs them via Gulp still exits with 0, and the build is deemed passing.
How can I make the build break/fail when the Mocha tests fail?
Here is how I managed to make this work:
Instead of running the Mocha tests via Gulp, run them directly from the shell script
Save the output to mocha.log besides displaying on stdout
./node_modules/.bin/mocha --reporter spec "appium/hybrid/*uat.js" 2>&1 | tee mocha.log
Check mocha.log for the string " failing" and exit with 1 if found
if grep -q " failing" mocha.log; then
    exit 1
fi
The exit 1 will make the Travis build fail.
I am trying to build a simple web service that runs on an Ubuntu machine with Apache and mod_perl2. The service runs mpirun and returns the output of the call.
I invoke a call of the apache response handler via the web browser. The problem is that the mpirun command seems to hang.
Important:
This problem occurs on a server running Ubuntu (12.04.4) with Apache, mod_perl and Open MPI. When running it on my Mac (OS X 10.9.3), it works fine and mpirun returns. On both machines, Open MPI is installed in the same version (1.6.5).
Here is my mod_perl handler:
package MyHandler;

use Apache2::Const '-compile' => 'OK';

sub handler {
    my $command = "mpirun -np 4 echo test";
    my $out = qx($command);
    print $out;
    return Apache2::Const::OK;
}

1;
The mpirun job does not seem to finish. A ps aux | grep mpirun gives me this:
www-data 24023 0.0 0.1 23600 2424 ? S 13:02 0:00 mpirun -np 4 echo test
When I do a killall -9 mpirun, the service comes back with the result.
No errors are written to the apache error log.
Here is what I tried/tested:
made sure that the command mpirun -np 4 echo test generates the correct output when run as user www-data
tried to invoke mpirun in different ways: using IPC::Run and IPC::Run3 as suggested by Sergei, and also using pipes, but every time mpirun fails to finish.
tried to call the handler directly via a perl script and not via the browser: mpirun finishes and the handler prints the desired output.
compared the outputs of ompi_info --param mpi all on both machines, mac and ubuntu, but found no differences
Any idea why mpirun would hang in my situation or any idea how I could debug this?
Edit
I tried to use Apache2::SubProcess as suggested by hrunting. Here is my code, following the simple example from the link:
package MyHandler;

use Apache2::SubProcess ();
use Apache2::Const '-compile' => 'OK';
use Apache2::Request;
use IO::Select ();
use Config;
use constant PERLIO_IS_ENABLED => $Config{useperlio};

sub handler {
    my $r = shift;
    my $command = "mpirun -np 4 echo test";
    my ($in_fh, $out_fh, $err_fh) = $r->spawn_proc_prog($command);
    $r->content_type('text/plain');
    my $output = read_data($out_fh);
    my $error = read_data($err_fh);
    print "output : $output \n";
    print "error : $error \n";
    return Apache2::Const::OK;
}

# helper function to work w/ and w/o perlio-enabled Perl
sub read_data {
    my ($fh) = @_;
    my $data;
    if (PERLIO_IS_ENABLED || IO::Select->new($fh)->can_read(10)) {
        $data = <$fh>;
    }
    return defined $data ? $data : '';
}

1;
This does not work for me. When calling the handler from the browser, I get the output:
output :
error :
and ps aux tells me that mpirun is not running.
Any further ideas of how I could debug this and get mpirun to work with my configuration?
Look at Apache2::SubProcess. When you're running external processes within a mod_perl handler, Apache memory, I/O and process management come into play. Remember, your code is running within Apache itself and is subject to the Apache environment. The Apache2::SubProcess module is designed to make exec()- and system()-style calls work properly within Apache.
Note that the module documentation outlines caveats for dealing with different Perl configurations.
Try IPC::Run or IPC::Run3 to run your command.
Capture::Tiny works for me. I'm not sure it'll work well under mod_perl (it may interact badly with the file handles for the request and response), but it works fine as a regular script:
use Capture::Tiny 'capture';

my ( $stdout, $stderr, $exit ) = capture {
    system( qw(mpirun -np 4 echo test) );
};

print "stdout: $stdout\n";
print "stderr: $stderr\n";
print "exit: $exit\n";
Prints:
stdout: test
test
test
test
stderr:
exit: 0
I have created a script to check whether my glassfish server is running (it's installed on a FreeBSD system); if it isn't, the script attempts to kill the java process to ensure it's not hung, and then issues the asadmin start-domain command.
If this script runs from the command line it succeeds 100% of the time. When it runs from the crontab, every line runs except the asadmin start-domain line - it does not seem to execute, or at least does not complete; i.e. the server is not running after this script runs.
For anyone not familiar with glassfish or the asadmin utility used to start the server, it is my understanding that a forked process is used. Could this be causing a problem via cron?
Again, in all my tests today, the script runs to completion when run from the command line. Once it's executed through cron, it does not complete... what would be different about running this from the crontab?
Thanks in advance for any help... I'm pulling my hair out trying to make this work!
#!/bin/bash
JAVA_HOME=/usr/local/diablo-jdk1.6.0/; export JAVA_HOME
timevar=`date +%d-%m-%Y_%H.%M.%S`
process_name='java'
get_contents=`cat urls.txt`
for i in $get_contents
do
    echo checking $i
    statuscode=$(curl --connect-timeout 10 --write-out %{http_code} --silent --output /dev/null $i)
    case $statuscode in
        200)
            echo "$timevar $i $statuscode okay" >> /usr/home/user1/logfile.txt
            ;;
        *)
            echo "$timevar $i $statuscode bad" >> /usr/home/user1/logfile.txt
            echo "Status $statuscode found" | mail -s "Check of $i failed" some.address@gmail.com
            process_id=`ps acx | grep -i $process_name | awk '{print $1}'`
            if [ -z "$process_id" ]
            then
                echo "java wasn't found in the process list"
            else
                echo "Killing java, currently process $process_id"
                kill -9 $process_id
            fi
            /usr/home/user1/glassfish3/bin/asadmin start-domain domain1
            ;;
    esac
done
Also, just for completeness, here is the entry in the cron tab:
*/2 * * * * /usr/home/user1/server.check.sh >> /usr/home/user1/cron.log
Ok... I found the answer to this on another site, but I thought I'd add it here for future reference.
The problem was the PATH! Even though JAVA_HOME was set, java itself wasn't in the PATH for the cron daemon.
A quick test to see what path is available to your cron, add this line:
*/2 * * * * env > /usr/home/user1/env.output
From what I can gather, the PATH initially available to cron is pretty minimal. Since java was in /usr/local/bin, I added that to the PATH right in the crontab and kaboom! It worked!
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/2 * * * * /usr/home/user1/server.check.sh >> /usr/home/user1/cron.log
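An equivalent fix, as a sketch, is to export a full PATH at the top of the script itself, so it no longer depends on the cron daemon's minimal environment:

```shell
#!/bin/bash
# Sketch: make the script self-contained by exporting a full PATH up front,
# instead of (or in addition to) setting PATH in the crontab.
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
command -v java || echo "java still not on PATH; adjust the directories above"
```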
Is there a way to automatically run tests when a file in the app is changed? In Rails there is a gem called guard. How can one achieve the same in Node.js?
Not sure if this would work for tests, but Nodemon (https://github.com/remy/nodemon) looks like what you want.
Install Jasmine and run
jasmine-node <dir> --autotest
Try this
touch /tmp/nt
while true; do
    if [ $(find . -newer /tmp/nt -type f | grep -v app/cache | wc -l) -gt 0 ]; then
        phpunit
        touch /tmp/nt
    fi
    sleep 5
done
I'm using it to autostart phpunit; replace phpunit with the command that runs your tests.
Replace sleep 5 with sleep 1 if you wish to check every second (depends on the size of your files).
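A sketch of an event-driven alternative to the polling loop, assuming the inotify-tools package is installed (Linux only): it reruns the test command whenever a watched file changes instead of waking up every few seconds.

```shell
#!/bin/bash
# Rerun the tests whenever a file changes, without polling.
# Requires inotifywait from inotify-tools; excludes app/cache as above.
while inotifywait -r -e modify,create,delete --exclude 'app/cache' .; do
    phpunit   # replace with the command that runs your tests
done
```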