CI pipeline: ignore any commands that fail in a given step - gitlab-ci

I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.
I tried adding this:
stages:
  - logger

logger-commands:
  stage: logger
  allow_failure: true
  script:
    - echo 'Examining environment'
    - echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
    - git --version
    - echo --------------------------------------------------------------------------------
    - env
    - echo --------------------------------------------------------------------------------
    - npm --version
    - node --version
    - java -version
    - mvn --version
    - kaniko --version
    - echo --------------------------------------------------------------------------------
The problem is that the Java command is failing because java isn't installed. The error says:
/bin/sh: eval: line 217: java: not found
I know I could remove the java -version line, but I'm trying to come up with a canned logger that I could use in all my CI pipelines, so it would include Java, Maven, Node, npm, Python, and whatever else I want to include. I realize that some of those commands will fail because the tools are not installed.
Searching for a solution to the above got me close.
GitLab CI: How to continue job even when script fails - which did help. By adding allow_failure: true I found that even if the logger job fails, the remaining stages still run (which is desirable). That answer also suggests wrapping each command like this:
./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
if [ $FAILED ]
then ./do_something.sh
fi
So that is helpful, but my question is this.
Is there anything built into GitLab's CI pipeline syntax (or bash syntax) that allows all commands in a given step to run even if one command fails?
Is it possible to allow for a script in a CI/CD job to fail? - suggests using the shell OR (||) syntax as shown below:
- npm --version || echo npm failed
- node --version || echo node failed
- java -version || echo java failed
That syntax is a little cleaner, but I'm trying to make it simpler.

The answers already mentioned are good, but I was looking for something simpler, so I wrote the following shell script. The script always returns a zero exit code, so the CI pipeline always thinks the command was successful.
If the command did fail, the command is printed along with its non-zero exit code.
#!/bin/sh
# File: runit
# Run the given command, report any failure, but always exit successfully.
"$@"
EXITCODE=$?
if [ $EXITCODE -ne 0 ]
then
  echo "CMD: $*"
  echo "Ignored exit code ($EXITCODE)"
fi
exit 0
Testing it as follows:
./runit ls "/bad dir"
echo "ExitCode = $?"
Gives this output:
ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode = 0
Notice that even though the command failed, the reported exit code is 0, which is what the CI pipeline will see.
To use it in the pipeline I have to have that shell script available; I still need to research how best to include it, but it must be present in the CI runner job. For example:
stages:
  - logger-safe

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  script:
    - ./runit npm --version
    - ./runit java -version
    - ./runit mvn --version
I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in solution is:
- some_command || echo command failed $?
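
A variant that avoids the extra file (just a sketch, assuming the runner uses a POSIX shell that supports functions) is to define the same helper inline through a YAML anchor and before_script, so nothing extra has to live in the repo:

# Sketch only: inline the runit helper so no separate file is needed.
.logger_helpers: &logger_helpers |
  runit() {
    "$@"
    EXITCODE=$?
    if [ $EXITCODE -ne 0 ]
    then
      echo "CMD: $*"
      echo "Ignored exit code ($EXITCODE)"
    fi
    return 0
  }

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  before_script:
    - *logger_helpers
  script:
    - runit npm --version
    - runit java -version
    - runit mvn --version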

Related

Running PMD in GitLab CI Script doesn't work unless echo command is added after the script runs

This is an interesting issue. I have a GitLab project, and I've created a .gitlab-ci.yml to run a PMD that will scan my code after every commit. The ci.yml file looks like this:
image: "node:latest"
stages:
- preliminary-testing
apex-code-scan:
stage: preliminary-testing
allow_failure: false
script:
- install_java
- install_pmd
artifacts:
paths:
- pmd-reports/
####################################################
# Helper Methods
####################################################
.sfdx_helpers: &sfdx_helpers |
function install_java() {
local JAVA_VERSION=11
local JAVA_INSTALLATION=openjdk-$JAVA_VERSION-jdk
echo "Installing ${JAVA_INSTALLATION}"
apt update && apt -y install $JAVA_INSTALLATION
}
function install_pmd() {
local PMD_VERSION=6.52.0
local RULESET_PATH=ruleset.xml
local OUTPUT_DIRECTORY=pmd-reports
local SOURCE_DIRECTORY=force-app
local URL=https://github.com/pmd/pmd/releases/download/pmd_releases%2F$PMD_VERSION/pmd-bin-$PMD_VERSION.zip
# Here I would download and unzip the PMD source code. But for now I have the PMD source already in my project for testing purposes
# apt update && apt -y install unzip
# wget $URL
# unzip -o pmd-bin-$PMD_VERSION.zip
# rm pmd-bin-$PMD_VERSION.zip
echo "Installed PMD!"
mkdir -p $OUTPUT_DIRECTORY
echo "Going to run PMD!"
ls
echo "Start"
pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html
echo "Done"
rm -r pmd-bin-$PMD_VERSION
echo "Remove pmd"
}
before_script:
- *sfdx_helpers
When I try to run this pipeline, it fails after starting PMD. However, if I make a small change to PMD's .sh file and add an echo command at the very end, then the pipeline succeeds:
PMD /bin/run.sh before (doesn't work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
PMD /bin/run.sh after (does work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
echo "Done1" # This is the last line in the file
I don't have the slightest idea why this is the case. Does anyone know why adding this echo command at the end of the .sh file would cause the pipeline to succeed? I could keep it as is with the echo command, but I would like to understand why it is behaving this way. I don't want to be that guy that just leaves a comment saying Hey don't touch this line of code, I don't know why, but without it the whole thing fails. Thank you!
PMD exits with a specific exit code depending on whether it found any violations or not, see https://pmd.github.io/latest/pmd_userdocs_cli_reference.html#exit-status
I guess your PMD run finds some violations, and PMD exits with exit code 4 - which is not a success exit code.
In general, this is used to make the CI build fail when any PMD violations are present - forcing you to fix the violations before you get a green build.
If that is not what you want, e.g. you only want to report the violations but not fail the build, then you need to add the following command line option:
--fail-on-violation false
Then PMD will exit with exit code 0, even when there are violations.
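Applied to the install_pmd function from the question, that would roughly look like this (a sketch; the only change is appending the --fail-on-violation option mentioned above to the existing command):

pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html --fail-on-violation false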
So it appears that the java command that PMD runs returns, for some reason, a non-zero exit code (even though the scan itself is successful). Because I was adding an echo command at the end of that bash script, the last line in the script returned a success exit code, which is why the GitLab CI pipeline succeeded when the echo command was there.
In order to work around the non-zero exit code being returned by the java PMD command, I have changed this line in my .gitlab-ci.yml file to catch the non-zero exit code and proceed.
function install_pmd() {
  # ... For brevity I'm just including the line that was changed in this method
  pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html || echo "PMD Returned Exit Code"
  # ...
}

Append the package.json version number to my build artifact in aws-codebuild

I really don't know if this is a simple (it must be), common, or complex task.
I have a buildspec.yml file in my CodeBuild project, and I am trying to append the version written in the package.json file to the output artifact.
I have already seen a lot of tutorials that teach how to append the date (not really useful to me), and others that tell me to execute a version.sh file with this:
echo $(sed -nr 's/^\s*"version": "([0-9]{1,}.[0-9]{1,}.*)",$/\1/p' package.json)
and set it in a variable (it doesn't work).
I'm ending up with a build folder called "my-project-$(version.sh)".
The CodeBuild environment uses Ubuntu and Node.js.
Update (solved):
My version.sh file:
#!/usr/bin/env bash
echo $(sed -nr 's/^\s*\"version": "([0-9]{1,}\.[0-9]{1,}.*)",$/\1/p' package.json)
Then I just found out 2 things:
Make the version.sh file executable in git:
git update-index --add --chmod=+x version.sh
Declare a variable in any phase of the buildspec; I did it in the build phase (just to make sure the repository is already copied into the environment):
TAG=$($CODEBUILD_SRC_DIR/version.sh)
Then reference it in the versioned artifact name:
artifacts:
  files:
    - '**/*'
  name: workover-frontend-$TAG
As a result, my build artifact's name is myproject-1.0.0.
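
Putting these pieces together, the relevant parts of the buildspec might look roughly like this (a sketch, assuming version.sh sits at the repository root and has already been made executable):

version: 0.2
phases:
  build:
    commands:
      # capture the package.json version once the source is available in the build environment
      - TAG=$($CODEBUILD_SRC_DIR/version.sh)
artifacts:
  files:
    - '**/*'
  name: workover-frontend-$TAG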
In my case this script did not want to fetch data from package.json. On my local machine it works great, but on AWS it doesn't. I also had to use chmod in a different way, because I got a message saying I don't have the right permissions. My buildspec:
version: 0.2
env:
  variables:
    latestTag: ""
phases:
  pre_build:
    commands:
      - "echo sed version"
      - sed --version
  build:
    commands:
      - chmod +x version.sh
      - latestTag=$($CODEBUILD_SRC_DIR/version.sh)
      - "echo $latestTag"
artifacts:
  files:
    - '**/*'
  discard-paths: yes
And the results in the console: (CodeBuild console screenshot)
I also have to note that when I paste, for example, only echo 222 into the version.sh file, I get the right answer in the CodeBuild console.

Setting Environment Variable in gitlab-ci.yml in script

Please help me convert the GitHub Actions step written below to a GitLab CI script. I am new to scripting in GitLab.
From the GitHub documentation I could understand that the line written below sets the value of an environment variable, but I couldn't find any resource on setting environment variables in GitLab.
run: >
  DISPLAY=:0 xvfb-run -s '-screen 0 1024x768x24' julia --project=monorepo -e 'using Pkg; Pkg.test("GLMakie", coverage=true)'
  && echo "TESTS_SUCCESSFUL=true" >> $GITHUB_ENV
There are multiple ways to set environment variables, and it depends on what you want to achieve:
use it within the same job
use it in another job
Use it within the same job
In Bash or other shells you can set an environment variable via export; in your case it would look like:
job:
  script:
    - DISPLAY=:0 xvfb-run -s '-screen 0 1024x768x24' julia --project=monorepo -e 'using Pkg; Pkg.test("GLMakie", coverage=true)' && export TESTS_SUCCESSFUL=true
    - echo $TESTS_SUCCESSFUL # verification that it is set and can be used within the same job
Use it within another job
To hand over variables to another job you need to define an artifacts:reports:dotenv. It is a file containing a list of key-value pairs which will be injected as environment variables in the follow-up jobs.
The structure of the file looks like:
KEY1=VALUE1
KEY2=VALUE2
and the definition in the .gitlab-ci.yml looks like
job:
  # ...
  artifacts:
    reports:
      dotenv: <path to file>
and in your case this would look like
job:
  script:
    - DISPLAY=:0 xvfb-run -s '-screen 0 1024x768x24' julia --project=monorepo -e 'using Pkg; Pkg.test("GLMakie", coverage=true)' && echo "TESTS_SUCCESSFUL=true" >> build.env
  artifacts:
    reports:
      dotenv: build.env

job2:
  needs: ["job"]
  script:
    - echo $TESTS_SUCCESSFUL
see https://docs.gitlab.com/ee/ci/variables/#pass-an-environment-variable-to-another-job for further information.

Gitlab CI exit 1 even if it is successful

I have a step in my GitLab CI that runs PHP code sniffing (phpcs). I use a custom base image for this step.
This step exits with code 1 and fails.
I checked this by starting a container from my Docker image; the phpcs command works like a charm inside the base image.
It seems like GitLab CI throws this exit code even though the job succeeded.
This is the output from GitLab CI.
I compared the row count of the artifact file with the row count from the CLI command (inside the Docker container); they are the same.
I could allow failure, but this error is strange.
if [[ -f "phpstan.txt" && -s "phpstan.txt" ]]; then echo "exist and not empty";
I tried to allow failure inside bash script. I write a small custom control as above and place it after phpcs command inside my .gitlab-ci.yml. But job is failed before this script.
GitLab version: v11.9.1
Docker image: custom, based on php:7.2
My GitLab CI step:
phpcs:
  stage: analysis
  script:
    - phpcs --standard=PSR2 --extensions=php --severity=5 -s src | tee phpcs.txt
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - phpcs.txt
I think this is not about phpcs. I have a similar step named phpstan, also an analysis mechanism, and it throws exactly the same error on the same line of the script.

Issues with gitlab-ci stages

I've been working on setting up an automated RPM build and I'd like to perform a simple test on the SPEC file before proceeding with any build steps. The problem I am having is that the job always seems to jump to the deploy stage. Here is the relevant snippet from my .gitlab-ci.yml:
stages:
  - test
  - build
  - deploy

job1:
  stage: test
  script:
    # Test the SPEC file
    - su - newbuild -c "rpmbuild --nobuild -vv ~/rpmbuild/SPECS/package.SPEC"
  stage: build
  script:
    # Install our required packages
    - yum -y install openssl-devel freetype-devel fontconfig-devel libicu-devel sqlite-devel libpng-devel libjpeg-devel ruby
    # Initialize the submodules to build
    - git submodule update --init
    # build the RPM
    - su - newbuild -c "rpmbuild -ba --target=`uname -m` -vv ~/rpmbuild/SPECS/package.SPEC"
  stage: deploy
  script:
    # move the RPM/SRPM
    - mkdir -pv $BUILD_DIR/$RELEASEVER/{SRPMS,x86_64}
    - 'for f in $WORK_DIR/rpmbuild/RPMS/x86_64/*; do cp -v "$f" $BUILD_DIR/$RELEASEVER/x86_64; done'
    - 'for f in $WORK_DIR/rpmbuild/SRPMS/*; do cp -v "$f" $BUILD_DIR/$RELEASEVER/SRPMS; done'
    # create the repo
    - createrepo -dvp $BUILD_DIR/$RELEASEVER
    # update latest
    - 'if [ $CI_BUILD_REF_NAME == "master" ]; then rm $PROJECT_DIR/latest; ln -sv $(basename $BUILD_DIR) $PROJECT_DIR/latest; fi'
    - 'if [ $CI_BUILD_REF_NAME == "devel" ]; then rm $PROJECT_DIR/latest-dev; ln -sv $(basename $BUILD_DIR) $PROJECT_DIR/latest-dev; fi'
  tags:
    - repos
I've not found any questions or online documentation to properly explain this to me so any help is appreciated!
You have all stages in one job which does not work. You need to split it up into individual jobs for the three different stages.
Quote from the documentation:
First all jobs of build are executed in parallel.
If all jobs of build succeed, the test jobs are executed in parallel.
If all jobs of test succeed, the deploy jobs are executed in parallel.
If all jobs of deploy succeed, the commit is marked as success.
If any of the previous jobs fails, the commit is marked as failed and no jobs of further stages are executed.
Something like this should work:
stages:
  - test
  - build
  - deploy

do_things_on_stage_test:
  script:
    - do things
  stage: test

do_things_on_stage_build:
  script:
    - do things
  stage: build

do_things_on_stage_deploy:
  script:
    - do things
  stage: deploy
I think you assume that the stages are built on top of each other, which is not the case. If one of your stages needs something like pre-installed packages, you have to add a before_script directive. Think of the stages as: test-if-build-succeeds, test-if-deploy-succeeds, etc.
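For example (a sketch using the commands from the question), the package installation for the build job could go into a before_script so that the main script only builds the RPM:

do_things_on_stage_build:
  stage: build
  before_script:
    # install the build dependencies before the main script runs
    - yum -y install openssl-devel freetype-devel fontconfig-devel libicu-devel sqlite-devel libpng-devel libjpeg-devel ruby
  script:
    - git submodule update --init
    - su - newbuild -c "rpmbuild -ba --target=`uname -m` -vv ~/rpmbuild/SPECS/package.SPEC"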