At the moment if I run ./phpunit -c ../app I might get output like:
PHPUnit 3.7.88 by Sebastian Bergmann.
Configuration read from /var/www/site/app/Symfony/app/phpunit.xml
FFFSS....
Time: 7.9 seconds, Memory: 55.00Mb
There were 4 failures:
.. lists the failures
FAILURES!
Tests: 9, Assertions: 64, Failures: 4, Skipped: 2.
This is good in some cases, like when I want to run the tests myself. But in other cases (automated testing), I just want to run the tests and know whether they all passed (and maybe send an email if there were failures).
So my question is: is there a simple command I can use, like ./phpunit -c ../app --short, which will just report whether all the tests passed or not?
Thanks
Redirect the command output to /dev/null and check the command's exit code:
./phpunit -c ../app >/dev/null 2>&1
if [ $? -eq 0 ]; then
    echo "TESTS PASSED!"
fi
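If the goal is the email-on-failure case mentioned in the question, the same exit code can drive that directly. A minimal sketch, assuming a working mail command on the box; the recipient address and subject are placeholders:
if ! ./phpunit -c ../app >/dev/null 2>&1; then
    # Placeholder address; any notification mechanism works here
    echo "PHPUnit run failed on $(hostname)" | mail -s "Tests FAILED" team@example.com
fi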
In tests for ping from iputils, certain tests should fail for non-root but pass for root, so I need to detect whether the user running the tests is root or not. Current code:
run_as_root = false
r = run_command('id', '-u')
if r.stdout().strip().to_int() == 0
message('running as root')
run_as_root = true
else
message('running as normal user')
endif
...
test(name, cmd, args : args, should_fail : not run_as_root)
is not working, because the check is evaluated once, when meson configures the build:
$ meson builddir
The Meson build system
Version: 0.59.4
...
Program xsltproc found: YES (/usr/bin/xsltproc)
Message: running as normal user
and not when the tests actually run, so the root user is not properly detected:
# cd builddir/ && meson test
[21/21] Linking target ninfod/ninfod
1/36 arping -V OK 0.03s
...
32/36 ping -c1 -i0.001 127.0.0.1 UNEXPECTEDPASS 0.02s
>>> ./builddir/ping/ping -c1 -i0.001 127.0.0.1
33/36 ping -c1 -i0.001 ::1 UNEXPECTEDPASS 0.02s
What can I do to evaluate the user at the time the tests are run?
This is really a case for skipping rather than expected failure. It would be easy to wrap your tests in a small shell or Python script that checks the effective UID and returns the magic exit code 77 (which meson interprets as a skip).
Something like:
#!/bin/bash
# Skip (exit code 77, which meson reports as SKIP) unless running as root
if [ "$(id -u)" -ne 0 ]; then
    echo "User does not have root, cannot run"
    exit 77
fi
exec "$@"
This will cause meson test to return a status of SKIP if the tests are not run as root.
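You can sanity-check the wrapper from the shell before wiring it into the test() call; the script name and test binary below are illustrative:
$ ./skip-unless-root.sh ./builddir/ping/ping -c1 -i0.001 127.0.0.1
User does not have root, cannot run
$ echo $?
77
Exit code 77 is exactly what meson test translates into the SKIP status.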
I'm using a monitoring system called PRTG to monitor our environment. The PRTG KB suggested monitoring processes with a script sensor called SSH Script. The script needs to be stored in /var/prtg/scripts.
I found a script that someone used for PRTG:
#!/bin/sh
pgrep wrapper $1 2>&1 1>/dev/null
if [ $? -ne 0 ]; then
echo "1:$?:$1 Down"
else
echo "0:$?:OK"
fi
However, PRTG is returning the following error code within the Web GUI:
Response not well-formed: "pgrep: only one pattern can be provided Try `pgrep --help' for more information. 1:0:Wrapper Down "
However, when I run the script on the Linux server it prints out:
0:1:OK
So my question would be what would be the best script to use to tell PRTG that a process is "Down" or "UP"?
###################
Editing for further clarification:
I changed the script up and it works great on the command line... but it appears that the issue is with how PRTG is reading the output; apparently it's not in the correct format. So here's my script:
#!/bin/bash
SERVICE="wrapper"
if pgrep -x "$SERVICE" >/dev/null
then
echo "$SERVICE is running"
else
echo "$SERVICE stopped"
fi
This is what PRTG is erroring out with:
Response not well-formed: "wrapper is running "
So... PRTG is saying that the sensor I'm using wants the script output in this format:
The returned data for standard SSH Script sensors must be in the following
format:
returncode:value:message
Value has to be a 64-bit integer or float. It is used as the resulting value
for this sensor (for example, bytes, milliseconds) and stored in the
database. The message can be any string (maximum length: 2000 characters).
The SSH script returncode has to be one of the following values:
VALUE DESCRIPTION
0 OK
1 WARNING
2 System Error
3 Protocol Error
4 Content Error
So I guess now... the question is: how do I get that script to output what PRTG wants to see?
With the changes you made to the script, it should look like the example below, called like this: servicecheck.sh sshd
#!/bin/sh
# Save pgrep's status before the if-test overwrites $?
pgrep -x "$1" >/dev/null 2>&1
status=$?
if [ $status -ne 0 ]; then
    echo "1:$status:$1 Down"
else
    echo "0:$status:OK"
fi
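Run against a process that exists and one that does not, the output is in the returncode:value:message shape PRTG expects (the process names are just examples):
$ ./servicecheck.sh sshd
0:0:OK
$ ./servicecheck.sh no-such-daemon
1:1:no-such-daemon Down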
I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.
I tried adding this:
stages:
- logger
logger-commands:
stage: logger
allow_failure: true
script:
- echo 'Examining environment'
- echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
- git --version
- echo --------------------------------------------------------------------------------
- env
- echo --------------------------------------------------------------------------------
- npm --version
- node --version
- java -version
- mvn --version
- kaniko --version
- echo --------------------------------------------------------------------------------
The problem is that the Java command is failing because java isn't installed. The error says:
/bin/sh: eval: line 217: java: not found
I know I could remove the java -version line, but I'm trying to come up with a canned logger that I could use in all my CI pipelines, so it would include Java, Maven, Node, npm, Python, and whatever else I want to include, and I realize that some of those commands will fail because the tools are not installed.
Searching for a solution got me close:
GitLab CI: How to continue job even when script fails - This did help: by adding allow_failure: true I found that even if the logger job failed, the remaining stages would still run (which is desirable). The answer also suggests wrapping commands in this syntax:
./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
if [ $FAILED ]
then ./do_something.sh
fi
So that is helpful, but my question is this:
Is there anything built into GitLab's CI pipeline syntax (or bash syntax) that allows all commands in a given step to run even if one command fails?
Is it possible to allow for a script in a CI/CD job to fail? - suggests adding the UNIX bash OR syntax as shown below:
- npm --version || echo npm failed
- node --version || echo node failed
- java -version || echo java failed
That is a little cleaner syntactically, but I'm trying to make it simpler.
The answers already mentioned are good, but I was looking for something simpler, so I wrote the following shell script. The script always returns a zero exit code, so the CI pipeline always thinks the command was successful.
If the command did fail, the command is printed along with the non-zero exit code.
#!/bin/sh
# File: runit
# Run the given command; report any failure, but always exit 0
"$@"
EXITCODE=$?
if [ $EXITCODE -ne 0 ]
then
    echo "CMD: $@"
    echo "Ignored exit code ($EXITCODE)"
fi
exit 0
Testing it as follows:
./runit ls "/bad dir"
echo "ExitCode = $?"
Gives this output:
ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode = 0
Notice that even though the command failed, the zero exit code is what the CI pipeline will see.
To use it in the pipeline, I have to have that shell script available; I'll research how to include it, but it must be present in the CI runner's job. For example:
stages:
- logger-safe
logger-safe-commands:
stage: logger-safe
allow_failure: true
script:
- ./runit npm --version
- ./runit java -version
- ./runit mvn --version
I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in solution is:
- some_command || echo command failed $?
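One way to get the same effect without the extra file (a sketch, assuming the runner executes jobs with bash or a POSIX shell, where before_script and script run in the same shell session) is to define the helper as a function in before_script:
logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  before_script:
    # Same idea as the runit file, defined inline; single-quoted so YAML keeps the colon
    - 'runit() { "$@" || echo "CMD: $* ignored exit code $?"; }'
  script:
    - runit npm --version
    - runit java -version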
Dry runs are a super important feature of workflow languages. What I am mostly looking for is what would be executed if I ran the command, which is exactly what one sees when running make -n.
However, the analogous functionality, snakemake -n, prints something like:
Building DAG of jobs...
rule produce_output:
output: my_output
jobid: 0
wildcards: var=something
Job counts:
count jobs
1 produce_output
1
The log contains pretty much everything except the commands that get executed. Is there a way to get the commands from snakemake?
snakemake -p --quiet -n
-p to print the shell commands
-n for a dry run
--quiet to suppress the rest
EDIT 2019-Jan
This solution seems broken in recent versions of snakemake; use:
snakemake -p -n
and avoid the --quiet flag suggested in @eric-c's answer: at least in some situations the combination -p -n -q does not print the commands that would be executed.
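As a concrete illustration (the rule below is made up), given a Snakefile such as:
rule produce_output:
    output:
        "my_output"
    shell:
        "echo something > {output}"
snakemake -p -n prints the echo command that would run, in addition to the job summary shown in the question.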
I have created a script to check whether my glassfish server is running (installed on a FreeBSD system); if it isn't, the script attempts to kill the java process to ensure it's not hung, and then issues the asadmin start-domain command.
If this script runs from the command line, it is successful 100% of the time. When it is run from the crontab, every line runs except the asadmin start-domain line: it does not seem to execute, or at least does not complete, i.e. the server is not running after this script runs.
For anyone not familiar with glassfish or the asadmin utility used to start the server: it is my understanding that a forked process is used. Could this be causing a problem via cron?
Again, in all my tests today, the script runs to completion when run from the command line. Once it's executed through cron, it does not complete. What would be different running this from the crontab?
Thanks in advance for any help... I'm pulling my hair out trying to make this work!
#!/bin/bash
JAVA_HOME=/usr/local/diablo-jdk1.6.0/; export JAVA_HOME
timevar=`date +%d-%m-%Y_%H.%M.%S`
process_name='java'
get_contents=`cat urls.txt`
for i in $get_contents
do
echo checking $i
statuscode=$(curl --connect-timeout 10 --write-out %{http_code} --silent --output /dev/null $i)
case $statuscode in
200)
echo "$timevar $i $statuscode okay" >> /usr/home/user1/logfile.txt
;;
*)
echo "$timevar $i $statuscode bad" >> /usr/home/user1/logfile.txt
echo "Status $statuscode found" | mail -s "Check of $i failed" some.address#gmail.com
process_id=`ps acx | grep -i $process_name | awk {'print $1'}`
if [ -z "$process_id" ]
then
echo "java wasn't found in the process list"
else
echo "Killing java, currently process $process_id"
kill -9 $process_id
fi
/usr/home/user1/glassfish3/bin/asadmin start-domain domain1
;;
esac
done
Also, just for completeness, here is the entry in the cron tab:
*/2 * * * * /usr/home/user1/server.check.sh >> /usr/home/user1/cron.log
OK... I found the answer to this on another site, but I thought I'd add it here for future reference.
The problem was the PATH! Even though JAVA_HOME was set, java itself wasn't in the PATH for the cron daemon.
For a quick test to see what PATH is available to your cron jobs, add this line:
*/2 * * * * env > /usr/home/user1/env.output
From what I can gather, the PATH initially available to cron is pretty minimal. Since java was in /usr/local/bin, I added that to the PATH right in the crontab and kaboom! It worked!
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/2 * * * * /usr/home/user1/server.check.sh >> /usr/home/user1/cron.log
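An alternative is to keep the fix inside the script itself so the crontab entry stays minimal; a sketch reusing the same PATH and JAVA_HOME values as above:
#!/bin/bash
# cron starts with a minimal environment, so set up PATH explicitly
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export JAVA_HOME=/usr/local/diablo-jdk1.6.0/
# ... rest of server.check.sh unchanged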