meson: determine whether tests are run as root

In the test suite for ping from iputils, certain tests should fail for a non-root user but pass for root. I therefore need to detect whether the user running the tests is root. Current code:
run_as_root = false
r = run_command('id', '-u')
if r.stdout().strip().to_int() == 0
  message('running as root')
  run_as_root = true
else
  message('running as normal user')
endif
...
test(name, cmd, args : args, should_fail : not run_as_root)
is not working, because the check is evaluated during the configuration step:
$ meson builddir
The Meson build system
Version: 0.59.4
...
Program xsltproc found: YES (/usr/bin/xsltproc)
Message: running as normal user
and not when the tests are actually run, so the root user is not detected:
# cd builddir/ && meson test
[21/21] Linking target ninfod/ninfod
1/36 arping -V OK 0.03s
...
32/36 ping -c1 -i0.001 127.0.0.1 UNEXPECTEDPASS 0.02s
>>> ./builddir/ping/ping -c1 -i0.001 127.0.0.1
33/36 ping -c1 -i0.001 ::1 UNEXPECTEDPASS 0.02s
What can I do to evaluate the user at the time the tests are run?

This is really a case for skipping rather than expected failure. It would be easy to wrap your tests in a small shell or Python script that checks the effective UID and returns the magic exit code 77 (which meson interprets as a skip).
Something like:
#!/bin/bash
if [ "$(id -u)" -ne 0 ]; then
  echo "User does not have root, cannot run"
  exit 77
fi
exec "$@"
This will cause meson test to return a status of SKIP if the tests are not run as root.
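For example, assuming the wrapper above is saved as run_as_root.sh (the file name here is just for illustration) and made executable, invoking a test through it as a normal user exits with the skip code, while root executes the real command:
$ ./run_as_root.sh ping -c1 -i0.001 127.0.0.1
User does not have root, cannot run
$ echo $?
77
In meson.build the wrapper would then be passed as the test executable, with the original command and its arguments following it, and the should_fail flag can be dropped.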

Related

Running PMD in GitLab CI: script doesn't work unless an echo command is added after the script runs

This is an interesting issue. I have a GitLab project, and I've created a .gitlab-ci.yml to run PMD so that it scans my code after every commit. The .gitlab-ci.yml file looks like this:
image: "node:latest"

stages:
  - preliminary-testing

apex-code-scan:
  stage: preliminary-testing
  allow_failure: false
  script:
    - install_java
    - install_pmd
  artifacts:
    paths:
      - pmd-reports/

####################################################
# Helper Methods
####################################################
.sfdx_helpers: &sfdx_helpers |
  function install_java() {
    local JAVA_VERSION=11
    local JAVA_INSTALLATION=openjdk-$JAVA_VERSION-jdk
    echo "Installing ${JAVA_INSTALLATION}"
    apt update && apt -y install $JAVA_INSTALLATION
  }
  function install_pmd() {
    local PMD_VERSION=6.52.0
    local RULESET_PATH=ruleset.xml
    local OUTPUT_DIRECTORY=pmd-reports
    local SOURCE_DIRECTORY=force-app
    local URL=https://github.com/pmd/pmd/releases/download/pmd_releases%2F$PMD_VERSION/pmd-bin-$PMD_VERSION.zip
    # Here I would download and unzip the PMD source code. But for now I have the PMD source already in my project for testing purposes
    # apt update && apt -y install unzip
    # wget $URL
    # unzip -o pmd-bin-$PMD_VERSION.zip
    # rm pmd-bin-$PMD_VERSION.zip
    echo "Installed PMD!"
    mkdir -p $OUTPUT_DIRECTORY
    echo "Going to run PMD!"
    ls
    echo "Start"
    pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html
    echo "Done"
    rm -r pmd-bin-$PMD_VERSION
    echo "Remove pmd"
  }

before_script:
  - *sfdx_helpers
When I try to run this pipeline, it fails after starting PMD. However, if I make a small change to PMD's .sh file and add an echo command at the very end, then the pipeline succeeds:
PMD /bin/run.sh before (doesn't work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
PMD /bin/run.sh after (does work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
echo "Done1" # This is the last line in the file
I don't have the slightest idea why this is the case. Does anyone know why adding this echo command at the end of the .sh file causes the pipeline to succeed? I could keep it as is with the echo command, but I would like to understand why it behaves this way. I don't want to be that guy who just leaves a comment saying: Hey, don't touch this line of code, I don't know why, but without it the whole thing fails. Thank you!
PMD exits with a specific exit code depending on whether it found any violations; see https://pmd.github.io/latest/pmd_userdocs_cli_reference.html#exit-status
I guess your PMD run finds some violations, and PMD exits with exit code 4, which is not a success exit code.
In general, this is used to make the CI build fail when any PMD violations are present, forcing you to fix the violations before you get a green build.
If that is not what you want, e.g. you only want to report the violations but not fail the build, then you need to add the following command line option:
--fail-on-violation false
Then PMD will exit with exit code 0, even when there are violations.
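Applied to the install_pmd function from the question, that would mean adding the option to the run.sh invocation, for example (a sketch that keeps the rest of the command exactly as posted):
pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl --fail-on-violation false -r $OUTPUT_DIRECTORY/pmd-apex.html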
So it appears that the java command that PMD runs returns a non-zero exit code (even though the script itself completes successfully). Because I added an echo command at the end of that bash script, the last line of the script returned a success exit code, which is why the GitLab CI pipeline succeeded when the echo command was there.
To work around the non-zero exit code returned by the java PMD command, I changed this line in my .gitlab-ci.yml file to catch the non-zero exit code and proceed:
function install_pmd() {
  # ... For brevity I'm just including the line that was changed in this method
  pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html || echo "PMD Returned Exit Code"
  # ...
}
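If the intent is to tolerate violations (exit code 4, per the PMD documentation linked above) but still fail the job on genuine errors, a more selective variant of that line could inspect the exit code instead of discarding every failure. A sketch:
rc=0
pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html || rc=$?
if [ "$rc" -ne 0 ] && [ "$rc" -ne 4 ]; then
  echo "PMD failed with exit code $rc"
  exit "$rc"
fi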

Gitlab CI job fails even if the script/command is successful

I have a CI stage with the following command, which is executed remotely and checks whether the mentioned file exists; if it does, it creates a backup of it.
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)'
The issue is, this job always fails whether the file exists or not with the following output:
ssh user@hostname '([ -f /etc/file/path/test_1.txt ] && cp -v /etc/file/path/test_1.txt /etc/file/path/test_1_$CI_COMMIT_TIMESTAMP.txt)'
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Running the same command manually works just fine. So:
How can I make sure that this job succeeds as long as the command logic executes successfully, and only fails in case of genuine failures?
There is no way for the job to know if the command you ran remotely worked or not. It can only know if the ssh instruction worked or not. You can force it to always succeed by appending || true to any instruction.
However, if you want to see and save the output of your remote instruction, you can do something like this:
ssh user@host command 2>&1 | tee ssh-session.log
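Applied to the job from the question, the simplest form would be the following (keep in mind that || true also hides genuine ssh failures such as connection problems):
ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)' || true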

CI-pipeline ignore any commands that fail in a given step

I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.
I tried adding this:
stages:
  - logger

logger-commands:
  stage: logger
  allow_failure: true
  script:
    - echo 'Examining environment'
    - echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
    - git --version
    - echo --------------------------------------------------------------------------------
    - env
    - echo --------------------------------------------------------------------------------
    - npm --version
    - node --version
    - java -version
    - mvn --version
    - kaniko --version
    - echo --------------------------------------------------------------------------------
The problem is that the Java command is failing because java isn't installed. The error says:
/bin/sh: eval: line 217: java: not found
I know I could remove the line java -version, but I'm trying to come up with a canned logger that I could use in all my CI pipelines, so it would include Java, Maven, Node, npm, Python, and whatever else I want to include, and I realize that some of those commands will fail because the corresponding tools are not installed.
Searching for the above solution got me close.
GitLab CI: How to continue job even when script fails - which did help. By adding allow_failure: true, I found that even if the logger job fails, the remaining stages still run (which is desirable). The answer also suggests a syntax to wrap commands in:
./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
if [ $FAILED ]
then ./do_something.sh
fi
So that is helpful, but my question is this.
Is there anything built into gitlab's CI-pipeline syntax (or bash syntax) that allows all commands in a given step to run even if one command fails?
Is it possible to allow for a script in a CI/CD job to fail? - suggests using the bash OR (||) syntax, as shown below:
- npm --version || echo npm failed
- node --version || echo node failed
- java -version || echo java failed
That is a little cleaner (syntax) but I'm trying to make it simpler.
The answers already mentioned are good, but I was looking for something simpler, so I wrote the following shell script. The script always returns a zero exit code, so the CI pipeline always thinks the command was successful.
If the command did fail, the command is printed along with the non-zero exit code.
#!/bin/sh
# File: runit
"$@"
EXITCODE=$?
if [ $EXITCODE -ne 0 ]
then
  echo "CMD: $@"
  echo "Ignored exit code ($EXITCODE)"
fi
exit 0
Testing it as follows:
./runit ls "/bad dir"
echo "ExitCode = $?"
Gives this output:
ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode = 0
Notice that even though the command failed, the ExitCode = 0 is what the CI pipeline will see.
To use it in the pipeline, I have to make that shell script available. I'll research how to include it, but it must be present in the CI runner job. For example:
stages:
  - logger-safe

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  script:
    - ./runit npm --version
    - ./runit java -version
    - ./runit mvn --version
I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in solution is:
- some_command || echo command failed $?
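If the extra file in the repo is the main objection, the same logic can also be written as a shell function defined at the top of the script block, so nothing extra has to be committed (a sketch, assuming the job's shell supports function definitions):
# Same logic as the runit script above, defined inline so no extra file is needed
runit() {
  "$@"
  rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "CMD: $*"
    echo "Ignored exit code ($rc)"
  fi
  return 0
}

runit java -version   # reports the failure but still returns 0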

How to break Travis CI build if Appium/Mocha tests fail?

I have a Travis CI project which builds an iOS app then starts Appium and runs tests with Appium/Mocha.
The problem is that even though the Mocha tests fail and throw an exception, the shell script which runs them via Gulp still exits with 0 and the build is deemed passing.
How can I make the build break/fail when the Mocha tests fail?
Here is how I managed to make this work:
1. Instead of running the Mocha tests via Gulp, run them directly from the shell script.
2. Save the output to mocha.log, in addition to displaying it on stdout:
./node_modules/.bin/mocha --reporter spec "appium/hybrid/*uat.js" 2>&1 | tee mocha.log
3. Check mocha.log for the string " failing" and exit with 1 if it is found:
if grep -q " failing" mocha.log; then
  exit 1
fi
The exit 1 will make the Travis build fail.
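An alternative to grepping the log, assuming the script runs under bash, is to stop tee from masking Mocha's own exit status: with pipefail enabled, the pipeline fails whenever Mocha fails, so the script (and therefore the Travis build) fails too. A sketch:
#!/bin/bash
set -o pipefail
# Without pipefail the pipeline's status is tee's; with it, a Mocha failure propagates
./node_modules/.bin/mocha --reporter spec "appium/hybrid/*uat.js" 2>&1 | tee mocha.log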

PHPUnit: simple command to show fewer results

At the moment if I run ./phpunit -c ../app I might get output like:
PHPUnit 3.7.88 by Sebastian Bergmann.
Configuration read from /var/www/site/app/Symfony/app/phpunit.xml
FFFSS....
Time: 7.9 seconds, Memory: 55.00Mb
There were 4 failures:
.. lists the failures
FAILURES!
Tests: 9, Assertions: 64, Failures: 4, Skipped: 2.
This is good in some cases, like if I want to run the tests myself. But for some cases (automated testing), I just want to run the tests and know whether they all passed or not (maybe send an email if there were failures).
So my question: is there a simple command I can use, like ./phpunit -c ../app --short, which will just return whether all tests passed or not?
Thanks
Redirect the command output to /dev/null and check the command exit code:
./phpunit -c ../app >/dev/null 2>&1
if [ $? -eq 0 ]; then
  echo "TESTS PASSED!"
fi
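If the goal from the question is to send a notification when tests fail, the same exit-code check can branch both ways; a sketch, where the mail command and address are placeholders for whatever mailer is actually available:
./phpunit -c ../app >/dev/null 2>&1
if [ $? -eq 0 ]; then
  echo "TESTS PASSED!"
else
  # Placeholder notification; replace with the mailer available on the machine
  echo "TESTS FAILED!" | mail -s "PHPUnit failures" dev@example.com
fi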