Running PMD in a GitLab CI script doesn't work unless an echo command is added after the script runs

This is an interesting issue. I have a GitLab project, and I've created a .gitlab-ci.yml to run PMD to scan my code after every commit. The .gitlab-ci.yml file looks like this:
image: "node:latest"

stages:
  - preliminary-testing

apex-code-scan:
  stage: preliminary-testing
  allow_failure: false
  script:
    - install_java
    - install_pmd
  artifacts:
    paths:
      - pmd-reports/

####################################################
# Helper Methods
####################################################

.sfdx_helpers: &sfdx_helpers |

  function install_java() {
    local JAVA_VERSION=11
    local JAVA_INSTALLATION=openjdk-$JAVA_VERSION-jdk
    echo "Installing ${JAVA_INSTALLATION}"
    apt update && apt -y install $JAVA_INSTALLATION
  }

  function install_pmd() {
    local PMD_VERSION=6.52.0
    local RULESET_PATH=ruleset.xml
    local OUTPUT_DIRECTORY=pmd-reports
    local SOURCE_DIRECTORY=force-app
    local URL=https://github.com/pmd/pmd/releases/download/pmd_releases%2F$PMD_VERSION/pmd-bin-$PMD_VERSION.zip
    # Here I would download and unzip the PMD binary. But for now I have PMD
    # already in my project for testing purposes.
    # apt update && apt -y install unzip
    # wget $URL
    # unzip -o pmd-bin-$PMD_VERSION.zip
    # rm pmd-bin-$PMD_VERSION.zip
    echo "Installed PMD!"
    mkdir -p $OUTPUT_DIRECTORY
    echo "Going to run PMD!"
    ls
    echo "Start"
    pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html
    echo "Done"
    rm -r pmd-bin-$PMD_VERSION
    echo "Remove pmd"
  }

before_script:
  - *sfdx_helpers
When I try to run this pipeline, it fails after starting PMD.
However, if I make a small change to PMD's .sh file and add an echo command at the very end, then the pipeline succeeds:
PMD /bin/run.sh before (doesn't work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
PMD /bin/run.sh after (does work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
echo "Done1" # This is the last line in the file
I don't have the slightest idea why this is the case. Does anyone know why adding this echo command at the end of the .sh file would cause the pipeline to succeed? I could keep the echo command as is, but I would like to understand why it behaves this way. I don't want to be that guy who just leaves a comment saying "Hey, don't touch this line of code. I don't know why, but without it the whole thing fails." Thank you!

PMD exits with a specific exit code depending on whether it found violations or not; see https://pmd.github.io/latest/pmd_userdocs_cli_reference.html#exit-status
I guess your PMD run finds some violations, and PMD exits with exit code 4, which is not a success exit code.
In general, this is used to make the CI build fail when any PMD violations are present, forcing you to fix the violations before you get a green build.
If that is not what you want, e.g. you only want to report the violations but not fail the build, then you need to add the following command-line option:
--fail-on-violation false
Then PMD will exit with exit code 0, even when there are violations.
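Putting that together, here is a minimal sketch of a wrapper that reports violations without failing the job, while still failing on genuine PMD errors. The exit codes follow the documentation linked above; the function name and messages are illustrative, not part of PMD:

```shell
# Hypothetical wrapper: exit code 4 ("violations found") is reported but
# tolerated; any other non-zero exit code still fails the job.
run_pmd() {
  "$@" && status=0 || status=$?
  case $status in
    0) echo "PMD: no violations" ;;
    4) echo "PMD: violations found (not failing the build)" ;;
    *) echo "PMD: error (exit $status)" >&2; return "$status" ;;
  esac
  return 0
}
```

In the pipeline above it would wrap the run.sh invocation, e.g. `run_pmd pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH ...`.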

So it appears that the java command that PMD runs returns a non-zero exit code (even though the script itself ran successfully). Because I added an echo command at the end of that bash script, the last command in the script returned a success exit code, which is why the GitLab CI pipeline succeeded when the echo command was there.
To work around the non-zero exit code returned by the java PMD command, I changed this line in my .gitlab-ci.yml file to catch the non-zero exit code and proceed:
function install_pmd() {
  # ... For brevity, only the changed line in this method is shown
  pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html || echo "PMD Returned Exit Code"
  # ...
}

Related

Gitlab CI job fails even if the script/command is successful

I have a CI stage with the following command, which has to be executed remotely; it checks whether the mentioned file exists and, if so, creates a backup of it.
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)'
The issue is, this job always fails whether the file exists or not with the following output:
ssh user@hostname '([ -f /etc/file/path/test_1.txt ] && cp -v /etc/file/path/test_1.txt /etc/file/path/test_1_$CI_COMMIT_TIMESTAMP.txt)'
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Running the same command manually works just fine. So:
How can I make sure that this job succeeds as long as the command logic executes successfully, and only fails in case there are some genuine failures?
There is no way for the job to know if the command you ran remotely worked or not. It can only know if the ssh instruction worked or not. You can force it to always succeed by appending || true to any instruction.
However, if you want to see and save the output of your remote instruction, you can do something like this:
ssh user@host command 2>&1 | tee ssh-session.log
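Note that a plain pipe reports tee's exit status, not ssh's, so the job would go green even when the remote command fails. A possible sketch that logs the session and still returns the left-hand command's status (POSIX shell; in bash, `${PIPESTATUS[0]}` achieves the same; the helper name is illustrative):

```shell
# Hypothetical helper: run a command, tee its output to a log file, and
# return the command's own exit status instead of tee's.
run_logged() {
  rc_file=$(mktemp)
  { "$@" 2>&1 && echo 0 >"$rc_file" || echo "$?" >"$rc_file"; } | tee ssh-session.log
  rc=$(cat "$rc_file")
  rm -f "$rc_file"
  return "$rc"
}
# e.g. run_logged ssh user@host 'some-remote-command'
```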

CI-pipeline ignore any commands that fail in a given step

I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.
I tried adding this:
stages:
  - logger

logger-commands:
  stage: logger
  allow_failure: true
  script:
    - echo 'Examining environment'
    - echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
    - git --version
    - echo --------------------------------------------------------------------------------
    - env
    - echo --------------------------------------------------------------------------------
    - npm --version
    - node --version
    - echo java -version
    - mvn --version
    - kanico --version
    - echo --------------------------------------------------------------------------------
The problem is that the java command fails because Java isn't installed. The error says:
/bin/sh: eval: line 217: java: not found
I know I could remove the java -version line, but I'm trying to come up with a canned logger that I could use in all my CI pipelines, so it would include Java, Maven, Node, npm, Python, and whatever else I want to include, and I realize that some of those commands will fail because they are not found.
Searching for a solution got me close.
GitLab CI: How to continue job even when script fails - which did help. By adding allow_failure: true I found that even if the logger job failed, the remaining stages would run (which is desirable). That answer also suggests a syntax to wrap commands in:
./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
if [ $FAILED ]; then
  ./do_something.sh
fi
So that is helpful, but my question is this.
Is there anything built into gitlab's CI-pipeline syntax (or bash syntax) that allows all commands in a given step to run even if one command fails?
Is it possible to allow for a script in a CI/CD job to fail? - suggests adding the UNIX bash OR syntax as shown below:
- npm --version || echo npm failed
- node --version || echo node failed
- echo java -version || echo java failed
That is a little cleaner (syntax) but I'm trying to make it simpler.
The answers already mentioned are good, but I was looking for something simpler, so I wrote the following shell script. The script always returns a zero exit code, so the CI pipeline always thinks the command was successful.
If the command did fail, the command is printed along with the non-zero exit code.
#!/bin/sh
# File: runit
"$@"
EXITCODE=$?
if [ $EXITCODE -ne 0 ]; then
  echo "CMD: $@"
  echo "Ignored exit code ($EXITCODE)"
fi
exit 0
Testing it as follows:
./runit ls "/bad dir"
echo "ExitCode = $?"
Gives this output:
ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode = 0
Notice that even though the command failed, ExitCode = 0 is what the CI pipeline will see.
To use it in the pipeline, I have to make that shell script available. I'll research how to include it, but it must be present in the CI runner's job. For example:
stages:
  - logger-safe

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  script:
    - ./runit npm --version
    - ./runit java -version
    - ./runit mvn --version
I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in solution is:
- some_command || echo command failed $?
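If the extra file is the main objection, the same wrapper could be defined inline as a shell function in a YAML anchor loaded from before_script (the PMD question at the top of this page uses that pattern for its helpers). A sketch of the function, with illustrative names:

```shell
# Hypothetical inline version of runit: same behavior, but defined as a
# function so no extra file is needed in the repo.
runit() {
  "$@" && code=0 || code=$?
  if [ "$code" -ne 0 ]; then
    echo "CMD: $*"
    echo "Ignored exit code ($code)"
  fi
  return 0
}
# script:
#   - runit npm --version
#   - runit java -version
```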

Gitlab CI exit 1 even if it is successful

I have a step in my gitlab-ci that runs a PHP code sniff, using a custom base image for this step.
This step exits with code 1 and fails.
I checked this by starting a container from my Docker image; the phpcs command works like a charm inside the base image.
It seems like gitlab-ci throws this exit code even though the job succeeded.
This is the output from gitlab-ci.
I compared the artifact file's row count and the CLI command's row count (inside the Docker container); they are the same.
I could allow failure, but this error is strange.
if [[ -f "phpstan.txt" && -s "phpstan.txt" ]]; then echo "exist and not empty";
I tried to allow failure inside a bash script. I wrote a small check like the one above and placed it after the phpcs command in my .gitlab-ci.yml, but the job fails before this script runs.
Gitlab version : v11.9.1
Docker image : custom based on php:7.2
My gitlab CI step :
phpcs:
  stage: analysis
  script:
    - phpcs --standard=PSR2 --extensions=php --severity=5 -s src | tee phpcs.txt
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - phpcs.txt
I don't think this is specific to phpcs. I have a similar step named phpstan, also an analysis mechanism, and it throws exactly the same error on the same line of the script.

Exit code from docker-compose breaking while loop

I've got a case: there's a WordPress project where I'm supposed to create a script that updates plugins and commits the source changes to a separate branch. While doing this I ran into a strange issue.
Input variable:
akimset,4.0.3
all-in-one-wp-migration,6.71
What I wanted to do was iterate over each line of this variable:
while read -r line; do
  echo $line
done <<< "$variable"
and this piece of code worked perfectly fine, but when I added the docker-compose logic everything started to act weirdly:
while read -r line; do
  docker-compose run backend echo $line
done <<< "$variable"
now only one line was executed, and after that the script exited with 0 and stopped iterating. I found a workaround:
echo $variable > file.tmp
for line in $(cat file.tmp); do
  docker-compose run backend echo $line
done
and that works perfectly fine, iterating over each line. Now my question is: why? ZSH and shell scripting can be a bit mysterious, and running into edge cases like this one isn't anything new for me, but I'm wondering why a successfully executed command broke the input stream.
The problem with this
while read -r line; do
  docker-compose run backend echo $line
done <<< "$variable"
is that docker allocates a pseudo-TTY. After the first execution of docker-compose run (the first loop iteration), it attaches to the terminal, using up the remaining lines as input.
You have to pass the -T parameter to docker-compose run to keep docker from allocating a pseudo-TTY. A working version is:
while read -r line; do
  docker-compose run -T backend echo $line
done <<< "$variable"
Update
The above solution is for Docker version 18 and docker-compose version 1.17. For newer versions the -T parameter does not work, but you can try:
-d instead of -T, to run the container in background mode, BUT then you will not see stdout in the terminal.
If you have docker-compose v1.25.0, add the parameter stdin_open: false to the service in your docker-compose.yml.
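Another option, if -T is unavailable, is to stop the command inside the loop from consuming the loop's stdin at all, by redirecting its own stdin from /dev/null. A self-contained sketch, where the hypothetical `greedy` function stands in for `docker-compose run backend` (it reads stdin the way an allocated pseudo-TTY does):

```shell
#!/bin/sh
variable='akimset,4.0.3
all-in-one-wp-migration,6.71'

# Stand-in for a stdin-consuming command like "docker-compose run backend".
greedy() {
  cat >/dev/null              # swallows whatever stdin it is given
  echo "processed: $1"
}

printf '%s\n' "$variable" | while IFS= read -r line; do
  greedy "$line" </dev/null   # /dev/null keeps the loop's input intact
done
```

Dropping the `</dev/null` reproduces the symptom from the question: `greedy` eats the second line and the loop stops after one iteration.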
I was able to solve the same problem by using a different loop:
for line in $(echo "$variable"); do
  docker-compose run backend echo $line
done
I ran into a nearly identical problem about a year ago, though the shell was bash (the command/problem was also slightly different, but it applied to your issue). I ended up writing the script in zsh.
I'm not certain what's going on, but it's not actually the exit code (you can confirm by running the following):
variable=$'akimset,4.0.3\nall-in-one-wp-migration,6.71'
while read line; do docker-compose run backend print "$line"; print "$?"; done <<<($variable)
... which yielded ...
(akimset,4.0.3
0
(I'm not at all sure where the ( came from and perhaps solving that would answer why this problem happens)
Working Script
for line in "${(f)variable}"; do
  docker-compose run backend echo "$line"
done
The (f) flag tells zsh to split on newlines; "${(f)variable}" is quoted so that any blank lines aren't lost. If you're going to include escape sequences that you don't want converted to their corresponding values (something I often need when reading file contents from a variable), make the flags (fV).

pkgbuild postinstall script causes "Installation failed" on others' Macs

I have an issue in my custom installer that occurs when I append a postinstall script to the pkg. On my computer the installation works just fine, but on other users' systems the .app is installed yet the postinstall script fails without executing.
If I remove the --scripts argument on pkgbuild, the installer produces no issues. If I add it (and even if the postinstall script is empty) a "failed installation" message is shown. No logs are produced.
The pkg is built using a script similar to this:
pkgbuild --identifier $PKG_IDENTIFIER \
         --version $APP_VERSION \
         --root $APP_PATH \
         --scripts Scripts/ \
         --install-location /Applications/$APP_NAME.app $TMP_PKG_PATH

productbuild --sign "Developer ID Installer: $COMPANY_NAME" \
             --distribution Distribution.xml \
             --package-path $INSTALLER_BUILD_PATH $INSTALLER_PKG_PATH
On my system the app is installed into /Applications, and the postinstall script runs and does its business. On other systems the postinstall doesn't even seem to be executed at all.
It has been tested on OS X 10.8 and 10.7, and both get the same issue. The postinstall script was tested independently on all systems (using ./postinstall in Terminal) and works.
The script looks like this:
#!/usr/bin/env sh
set -e

# Install launch agent
LAUNCH_AGENT_SRC="/Applications/MyApp.app/Contents/Resources/launchd.plist"
LAUNCH_AGENT_DEST="$HOME/Library/LaunchAgents/com.company.myapp.agent.plist"

# Uninstall old launch agent
if [ -f "$LAUNCH_AGENT_DEST" ]; then
  launchctl unload "$LAUNCH_AGENT_DEST"
  rm -f "$LAUNCH_AGENT_DEST"
fi

cp "$LAUNCH_AGENT_SRC" "$LAUNCH_AGENT_DEST"
launchctl load "$LAUNCH_AGENT_DEST"

# Open application
open -a "MyApp"
exit 0
What could be causing this issue?
It seems the cause of the issue was the if statement. And even when it wasn't present, the contents of the if could cause the error to fire unless the launch agent was already installed.
I solved it by switching the code for:
#!/usr/bin/env sh
set -e
# Launch agent location
LAUNCH_AGENT_SRC="/Applications/MyApp.app/Contents/Resources/launchd.plist"
LAUNCH_AGENT_DEST="$HOME/Library/LaunchAgents/com.company.myapp.agent.plist"
# Uninstall old launch agent
launchctl unload "$LAUNCH_AGENT_DEST" || true
rm -f "$LAUNCH_AGENT_DEST" || true
# Install launch agent
cp "$LAUNCH_AGENT_SRC" "$LAUNCH_AGENT_DEST" || true
launchctl load "$LAUNCH_AGENT_DEST" || true
# Open application
open -a "MyApp"
exit 0
The mistake I made before, when testing an empty script, was not having exit 0 at the end. Once I got that working, I could activate different rows of the code and see what was causing an error.
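The set -e interaction behind those silent failures can be seen in a minimal sketch: under set -e, the first failing command aborts the script, so an unguarded launchctl unload on a machine where the agent was never loaded would kill the postinstall before it reaches exit 0. The || true guard tolerates the failure:

```shell
# Two tiny sub-shells illustrating the behavior (not part of the installer):
sh -c 'set -e; false; echo unreachable' || echo "aborted, as the failed installs did"
sh -c 'set -e; false || true; echo "reached the end"'   # the || true guard
```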
You might have found your answer already, and it's a bit hard to say without looking at the script, but can you make sure that you have "exit 0" at the end of your postinstall script?