Continue running scripts even after exit code 1 - npm

I'm trying to run Cypress tests in GitLab. Below is the sample script. After executing npm run Cypress, if any test case fails, the command exits with exit code 1 and the next two commands won't run.
Is there a way I can execute the next two commands? They generate the consolidated JUnit and HTML reports.
script:
  - cd ./cypress
  - npm ci
  - npm run Cypress
  - npm run mochawesome
  - npm run junit:merge
I have tried the solutions mentioned below, but no luck.
script:
  - cd ./cypress
  - npm ci
  - npm run Cypress || exit 0
  - npm run mochawesome
  - npm run junit:merge
script:
  - cd ./cypress
  - npm ci
  - npm run Cypress
after_script:
  - npm run mochawesome
  - npm run junit:merge

One way would be, instead of checking for the exit code (which seems to be dynamic), to directly echo something after the || operator:
npm run Cypress || echo "The previous command has some errors. Continuing..."
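Applied to the pipeline from the question, that would look roughly like this (a sketch; note that the job will now pass even when tests fail, since the echo returns 0):

script:
  - cd ./cypress
  - npm ci
  - npm run Cypress || echo "Cypress tests failed. Continuing to generate reports"
  - npm run mochawesome
  - npm run junit:merge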

Using the after_script approach should actually work fine, as you can see from this minimal example:
# .gitlab-ci.yml
test:
  image: alpine
  script:
    - echo "Hello after_script!" > test.txt
    - exit 1
  after_script:
    - cat test.txt
Output:
$ echo "Hello after_script!" > test.txt
$ exit 1
Running after_script
Running after script...
$ cat test.txt
Hello after_script!
Cleaning up file based variables
ERROR: Job failed: exit code 1
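One detail worth checking in the after_script attempt from the question: after_script runs in a separate shell with the working directory reset to the project root, so the cd ./cypress from script does not carry over. A sketch with that accounted for:

script:
  - cd ./cypress
  - npm ci
  - npm run Cypress
after_script:
  # after_script starts a fresh shell at the project root,
  # so change into the cypress folder again
  - cd ./cypress
  - npm run mochawesome
  - npm run junit:merge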

Also, you can consider using set +e and set -e to disable/enable exit on error.
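A sketch of what that could look like, assuming the runner executes the whole script block in a single shell (the default behavior):

script:
  - cd ./cypress
  - npm ci
  - set +e            # stop aborting on the first failing command
  - npm run Cypress   # may exit non-zero without killing the job
  - set -e            # restore exit-on-error
  - npm run mochawesome
  - npm run junit:merge

As with the || echo approach, the job's final status is then determined by the last command, so the job can pass even when tests failed.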

Related

Running PMD in GitLab CI Script doesn't work unless echo command is added after the script runs

This is an interesting issue. I have a GitLab project, and I've created a .gitlab-ci.yml to run a PMD scan of my code after every commit. The ci.yml file looks like this:
image: "node:latest"
stages:
- preliminary-testing
apex-code-scan:
stage: preliminary-testing
allow_failure: false
script:
- install_java
- install_pmd
artifacts:
paths:
- pmd-reports/
####################################################
# Helper Methods
####################################################
.sfdx_helpers: &sfdx_helpers |
function install_java() {
local JAVA_VERSION=11
local JAVA_INSTALLATION=openjdk-$JAVA_VERSION-jdk
echo "Installing ${JAVA_INSTALLATION}"
apt update && apt -y install $JAVA_INSTALLATION
}
function install_pmd() {
local PMD_VERSION=6.52.0
local RULESET_PATH=ruleset.xml
local OUTPUT_DIRECTORY=pmd-reports
local SOURCE_DIRECTORY=force-app
local URL=https://github.com/pmd/pmd/releases/download/pmd_releases%2F$PMD_VERSION/pmd-bin-$PMD_VERSION.zip
# Here I would download and unzip the PMD source code. But for now I have the PMD source already in my project for testing purposes
# apt update && apt -y install unzip
# wget $URL
# unzip -o pmd-bin-$PMD_VERSION.zip
# rm pmd-bin-$PMD_VERSION.zip
echo "Installed PMD!"
mkdir -p $OUTPUT_DIRECTORY
echo "Going to run PMD!"
ls
echo "Start"
pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html
echo "Done"
rm -r pmd-bin-$PMD_VERSION
echo "Remove pmd"
}
before_script:
- *sfdx_helpers
When I try to run this pipeline, it fails after starting the PMD. However, if I make a small change to PMD's .sh file and add an echo command at the very end, the pipeline succeeds:
PMD /bin/run.sh before (doesn't work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
PMD /bin/run.sh after (does work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
echo "Done1" # This is the last line in the file
I don't have the slightest idea why this is the case. Does anyone know why adding this echo command at the end of the .sh file would cause the pipeline to succeed? I could keep it as is with the echo command, but I would like to understand why it behaves this way. I don't want to be that guy who just leaves a comment saying "Hey, don't touch this line of code. I don't know why, but without it the whole thing fails." Thank you!
PMD exits with a specific exit code depending on whether it found violations or not; see https://pmd.github.io/latest/pmd_userdocs_cli_reference.html#exit-status
I guess your PMD run finds some violations, and PMD exits with exit code 4, which is not a success exit code.
In general, this is used to make the CI build fail when PMD violations are present, forcing you to fix the violations before you get a green build.
If that is not what you want, e.g. you only want to report the violations but not fail the build, then you need to add the following command line option:
--fail-on-violation false
Then PMD will exit with exit code 0, even when there are violations.
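Applied to the install_pmd helper from the question, the call would then look like this (the same arguments as above, plus the new flag):

pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html --fail-on-violation false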
So it appears that the java command that PMD runs returns a non-zero exit code for some reason (even though the script is successful). Because I was adding an echo command at the end of that bash script, the last line in the script returned a success exit code, which is why the GitLab CI pipeline succeeded when the echo command was there.
In order to work around the non-zero exit code returned by the java PMD command, I changed this line in my .gitlab-ci.yml file to catch the non-zero exit code and proceed:
function install_pmd() {
  # ... For brevity I'm just including the line that was changed in this method
  pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html || echo "PMD Returned Exit Code"
  # ...
}
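Per the exit-status documentation linked above, exit code 4 specifically means "violations found", while other non-zero codes indicate real errors. If you only want to swallow the violation case and still fail on genuine errors, a sketch along these lines should work (the branch on code 4 is my addition, not part of the original workaround):

pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html || {
  PMD_EXIT=$?
  if [ "$PMD_EXIT" -eq 4 ]; then
    echo "PMD found violations (exit code 4) - report generated, continuing"
  else
    echo "PMD failed with exit code $PMD_EXIT"
    exit $PMD_EXIT   # a real error: fail the job
  fi
}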

Error in .gitlab-ci while trying to package

So I'm trying to add a condition to my .gitlab-ci.yml: if the package doesn't exist yet, run npm publish to publish the library to the GitLab registry.
I gave my pipeline permission to the registry along with the npm access token, but I still get an unauthorized error.
This is the part of the .gitlab-ci.yml where I create the .npmrc file and set the configuration:
script:
  - |
    if [[ ! -f .npmrc ]]; then
      echo 'No .npmrc found! Creating one now.'
      echo "@${CI_PROJECT_ROOT_NAMESPACE}:registry=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/npm/" > .npmrc
      echo "//${CI_SERVER_HOST}/api/v4/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=${CI_JOB_TOKEN}" >> .npmrc
      echo "//registry.npmjs.org/:_authToken=${NPM_ACCESS_TOKEN}" >> .npmrc
      echo "Created the following .npmrc:"; cat .npmrc
    fi
The pipeline gives me this when I try to check whether a package named $NPM_PACKAGE_NAME exists:
@scope:registry = "https://gitlab.example.com/api/v4/projects/project_id/packages/npm/"
//gitlab.example.com/api/v4/projects/project_id/packages/npm/:_authToken = (protected)
//registry.npmjs.org/:_authToken = (protected)
; "cli" config from command line options
long = true
$ npm config set always-auth true
$ echo $(npm view "${NPM_PACKAGE_NAME}" )
npm ERR! code E401
npm ERR! 401 Unauthorized - GET https://gitlab.example.com/api/v4/projects/project_id/packages/npm/@scope%2fmy-package
Where:
- NPM_PACKAGE_NAME=$(node -p "require('./my-package/package.json').name")
Instead of echoing to the .npmrc, you could try the npm config commands directly:
npm config set -- '//gitlab.example.com/api/v4/projects/<your_project_id>/packages/npm/:_authToken' "${NPM_TOKEN}"
npm config set -- '//gitlab.example.com/api/v4/packages/npm/:_authToken' "${NPM_TOKEN}"
That way, you are sure the .npmrc is properly updated.
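The scoped-registry mapping itself (the line that maps @scope to the GitLab registry) can be set the same way; this one follows the same pattern and is my addition, with placeholder scope and project id:

npm config set -- '@<scope>:registry' 'https://gitlab.example.com/api/v4/projects/<your_project_id>/packages/npm/'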
Could you try to create the .npmrc in your home folder instead of locally?
We do this in our pipeline and it works without any problem:
publish:
  stage: publish
  script:
    - echo "@<scope>:registry=https://${CI_SERVER_HOST}/api/v4/projects/${REGISTRY_PROJECT_ID}/packages/npm/" > ~/.npmrc
    - echo "//${CI_SERVER_HOST}/api/v4/projects/${REGISTRY_PROJECT_ID}/packages/npm/:_authToken=${CI_JOB_TOKEN}" >> ~/.npmrc
    - npm version --no-git-tag-version "$(<.next-version)" --allow-same-version
    - npm publish --tag ${NPM_TAG_NAME}
As you can see, other than the npm version and npm publish commands, the only difference is the .npmrc file location.

CI-pipeline ignore any commands that fail in a given step

I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.
I tried adding this:
stages:
  - logger

logger-commands:
  stage: logger
  allow_failure: true
  script:
    - echo 'Examining environment'
    - echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
    - git --version
    - echo --------------------------------------------------------------------------------
    - env
    - echo --------------------------------------------------------------------------------
    - npm --version
    - node --version
    - java -version
    - mvn --version
    - kaniko --version
    - echo --------------------------------------------------------------------------------
The problem is that the java command fails because Java isn't installed. The error says:
/bin/sh: eval: line 217: java: not found
I know I could remove the java -version line, but I'm trying to come up with a canned logger that I could use in all my CI pipelines, so it would include Java, Maven, Node, npm, Python, and whatever else I want to include, and I realize that some of those commands will fail because they are not installed.
Searching for a solution got me close:
GitLab CI: How to continue job even when script fails - which did help. By adding allow_failure: true, I found that even if the logger job failed, the remaining stages would run (which is desirable). The answer also suggests a syntax to wrap commands in:
./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
if [ $FAILED ]
then ./do_something.sh
fi
So that is helpful, but my question is this:
Is there anything built into GitLab's CI pipeline syntax (or bash syntax) that allows all commands in a given step to run even if one command fails?
Is it possible to allow for a script in a CI/CD job to fail? - suggests adding the UNIX bash OR syntax as shown below:
- npm --version || echo npm failed
- node --version || echo node failed
- java -version || echo java failed
That is a little cleaner (syntax) but I'm trying to make it simpler.
The answers already mentioned are good, but I was looking for something simpler, so I wrote the following bash script. The script always returns a zero exit code, so the CI pipeline always thinks the command was successful.
If the command did fail, the command is printed along with its non-zero exit code.
#!/bin/sh
# File: runit
"$@"
EXITCODE=$?
if [ $EXITCODE -ne 0 ]
then
  echo "CMD: $@"
  echo "Ignored exit code ($EXITCODE)"
fi
exit 0
Testing it as follows:
./runit ls "/bad dir"
echo "ExitCode = $?"
Gives this output:
ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode=0
Notice that even though the command failed, ExitCode=0 is what the CI pipeline will see.
To use it in the pipeline, I have to make that shell script available. I'll research how to include it, but it must be present in the CI runner job. For example:
stages:
  - logger-safe

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  script:
    - ./runit npm --version
    - ./runit java -version
    - ./runit mvn --version
I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in solution is:
- some_command || echo command failed $?
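If the extra file is the main objection, the same idea can be inlined as a shell function; this is a sketch of my own, relying on before_script and script being run in the same shell:

before_script:
  - |
    # run a command, report a failure, and always return success
    runit() {
      "$@" || echo "CMD: $* -- ignored exit code ($?)"
    }
script:
  - runit npm --version
  - runit java -version
  - runit mvn --version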

How to access log files created by GCP cloud build steps?

My Cloud Build fails with a timeout on npm test, and no useful information is sent to stdout. A complete log can be found in a file, but I couldn't find a way to SSH into the Cloud Build environment.
Already have image: node
> project-client#v3.0.14 test
> jest
ts-jest[versions] (WARN) Version 4.1.0-beta of typescript installed has not been tested with ts-jest. If you're experiencing issues, consider using a supported version (>=3.8.0 <5.0.0-0). Please do not report issues in ts-jest if you are using unsupported versions.
npm ERR! path /workspace/v3/client
npm ERR! command failed
npm ERR! signal SIGTERM
npm ERR! command sh -c jest
npm ERR! A complete log of this run can be found in:
npm ERR! /builder/home/.npm/_logs/2020-11-09T07_56_23_573Z-debug.log
Since I have no problem running the tests locally, I'd like to see the content of that 2020-11-09T07_56_23_573Z-debug.log file to hopefully get a hint at what might be wrong. Is there a way to:
- retrieve the file content?
- SSH into the Cloud Build environment?
- get npm to print the complete log to stdout?
- save the log file as an artifact to Cloud Storage?
I had a similar issue with error management on GitLab CI, and my workaround here is inspired by that.
The trick is to embed your command in something that exits with return code 0. Here's an example:
- name: node
  entrypoint: "bash"
  args:
    - "-c"
    - |
      RETURN_CODE=$$(npm test > log.stdout 2>log.stderr; echo $${?})
      cat log.stdout
      cat log.stderr
      if [ $${RETURN_CODE} -gt 0 ]
      then
        # Do what you want in case of error, like a cat of the files in the _logs dir.
        # Break the build:
        exit 1
      else
        # Do what you want in case of success. Nothing is needed to continue to the next step.
        exit 0
      fi
Some explanations:
- echo $${?}: the double $ tells Cloud Build not to treat this as a substitution variable, but to leave it for the shell to interpret. The $? gives you the exit code of the previous command.
- You then test the exit code; if it is greater than 0, you can perform actions, such as a cat of the files in the _logs directory. At the end, I recommend breaking the build so you don't continue with erroneous sources.
- You can parse the log.stderr file to get useful info from it (the path of the npm debug log file, for example).
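For the last option in the question (saving the log file to Cloud Storage), one approach, sketched here under the assumption that the npm logs live in the shared home volume and with a placeholder bucket name, is a follow-up step using the gsutil builder:

- name: gcr.io/cloud-builders/gsutil
  args: ['cp', '-r', '/builder/home/.npm/_logs', 'gs://my-build-logs-bucket/npm-logs']

/builder/home persists between build steps, but later steps only run when earlier ones succeed, so this pairs best with the exit-code trick above rather than with a step that breaks the build immediately.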

Gitlab-ci runner hangs after cypress tests

I am using GitLab CI to test a React application with Cypress.
The tests seem to pass, but the job hangs after executing the cypress run command.
Thus, the job fails because of the timeout.
My job is the following:
cypress:
  image: cypress/base:10
  script:
    - serve -s build -l 3000 & yarn wait-on http://localhost:3000
    - yarn cypress:run
And in my package.json:
{
  ...
  "scripts": {
    "cypress:run": "cypress run --spec 'cypress/integration/**/*spec.js' --record false --config video=false"
  },
  ...
}
This is the end of the gitlab-ci runner's log:
✔ All specs passed! 01:01 11 11 - - -
Done in 73.82s.
ERROR: Job failed: execution took longer than 20m0s seconds
This issue occurs when a background task is still running in the runner.
To fix it, I put an OR condition on the cypress:run step and kill the process if the result is not a success.
There is another kill statement added in the step below as well, in case --parallel is used and multiple steps are running.
Something like this:
script:
  # start the server in the background
  - npx serve -s build -p 3001 &
  # run the Cypress tests; if they fail, kill the serve process so the job can finish
  - yarn cypress:run || (ps -ef | grep [s]erve | awk '{print $2}' | xargs kill -9)
  # clean up the server process if it is still running
  - (ps -ef | grep [s]erve | awk '{print $2}' | xargs kill -9) || exit 0
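A variation on the same technique (my sketch, not from the original answer) is to remember the server's PID and also preserve the test result, so the job still goes red when tests fail:

script:
  # start the server in the background and remember its PID
  - npx serve -s build -p 3001 & SERVE_PID=$!
  # run the tests without aborting the script on failure
  - yarn cypress:run || TEST_EXIT=$?
  # kill the server so the job can finish
  - kill -9 $SERVE_PID || true
  # propagate the original test result
  - exit ${TEST_EXIT:-0}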
I don't know if this can help someone, but it is worth trying to update Node.
I had the same problem when using Cypress 11.1.0 and the node:16.17.1-slim Docker image. I no longer experience hangs with node:16.18.1-slim.
P.S. Along with the Node update I updated Chrome from 106 to 107, so I can't be sure what actually did the trick; I just wanted to share a possible solution.