GitLab CI: how to return command stdout to the code where it was called? - notifications

I have this piece of code in the YAML file:
.setUpEnvironment:
  stage: provision
  script:
    - |
      cd ansible
      ansible-playbook install_app.yml <...>
      <... n more lines here>
      echo 'rc = ' $(install_app.rc)
      echo 'stdout = ' $(install_app.stdout)
  allow_failure: true
I need to send a notification if the script fails. The problem is that neither install_app.rc nor install_app.stdout works for me; the job log shows:
bash: line 217: install_app_win.rc: command not found
rc =
bash: line 218: install_app_win.stdout: command not found
Do you have any ideas on how to fix this behavior?
Thanks in advance,
Irina
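A note on what is happening here: the script: block runs in plain bash, so install_app.rc and install_app.stdout (which look like Ansible registered-variable syntax) do not exist in the shell, and $(install_app.rc) tries to execute them as commands, which is where the "command not found" messages come from. In bash you have to capture the exit code from $? and the output yourself. A minimal sketch, assuming the playbook name from the question (the output/rc variable names and the final echo placeholder for the notification are mine):

cd ansible
output=$(ansible-playbook install_app.yml 2>&1)   # capture stdout and stderr of the playbook
rc=$?                                             # capture the exit code immediately, before anything overwrites $?
echo "rc = $rc"
echo "stdout = $output"
if [ "$rc" -ne 0 ]; then
  # placeholder: replace with your real notification command (webhook, mail, etc.)
  echo "Playbook failed with rc=$rc, sending notification"
fi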

Related

Running PMD in GitLab CI Script doesn't work unless echo command is added after the script runs

This is an interesting issue. I have a GitLab project, and I've created a .gitlab-ci.yml to run PMD so that it scans my code after every commit. The ci.yml file looks like this:
image: "node:latest"

stages:
  - preliminary-testing

apex-code-scan:
  stage: preliminary-testing
  allow_failure: false
  script:
    - install_java
    - install_pmd
  artifacts:
    paths:
      - pmd-reports/

####################################################
# Helper Methods
####################################################

.sfdx_helpers: &sfdx_helpers |

  function install_java() {
    local JAVA_VERSION=11
    local JAVA_INSTALLATION=openjdk-$JAVA_VERSION-jdk
    echo "Installing ${JAVA_INSTALLATION}"
    apt update && apt -y install $JAVA_INSTALLATION
  }

  function install_pmd() {
    local PMD_VERSION=6.52.0
    local RULESET_PATH=ruleset.xml
    local OUTPUT_DIRECTORY=pmd-reports
    local SOURCE_DIRECTORY=force-app
    local URL=https://github.com/pmd/pmd/releases/download/pmd_releases%2F$PMD_VERSION/pmd-bin-$PMD_VERSION.zip
    # Here I would download and unzip the PMD source code. But for now I have the PMD source already in my project for testing purposes
    # apt update && apt -y install unzip
    # wget $URL
    # unzip -o pmd-bin-$PMD_VERSION.zip
    # rm pmd-bin-$PMD_VERSION.zip
    echo "Installed PMD!"
    mkdir -p $OUTPUT_DIRECTORY
    echo "Going to run PMD!"
    ls
    echo "Start"
    pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html
    echo "Done"
    rm -r pmd-bin-$PMD_VERSION
    echo "Remove pmd"
  }

before_script:
  - *sfdx_helpers
When I try to run this pipeline, it will fail after starting the PMD:
However, if I make a small change to PMD's .sh file and add an echo command at the very end, then the pipeline succeeds:
PMD /bin/run.sh before (doesn't work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
PMD /bin/run.sh after (does work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
echo "Done1" # This is the last line in the file
I don't have the slightest idea why this is the case. Does anyone know why adding this echo command at the end of the .sh file would cause the pipeline to succeed? I could keep it as is with the echo command, but I would like to understand why it is behaving this way. I don't want to be that guy that just leaves a comment saying Hey don't touch this line of code, I don't know why, but without it the whole thing fails. Thank you!
PMD exits with a specific exit code depending on whether it found any violations or not; see https://pmd.github.io/latest/pmd_userdocs_cli_reference.html#exit-status
I guess your PMD run finds some violations, so PMD exits with exit code 4, which is not a success exit code.
In general, this is used to make the CI build fail whenever PMD violations are present, forcing you to fix the violations before you get a green build.
If that is not what you want, e.g. you only want to report the violations but not fail the build, then you need to add the following command line option:
--fail-on-violation false
Then PMD will exit with exit code 0, even when there are violations.
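For the job above, that would mean appending the option to the existing run.sh call inside install_pmd, roughly like this (a sketch only; the variables are the ones already defined in that function):

pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html --fail-on-violation false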
So it appears that the java command that PMD runs returns a non-zero exit code for some reason (even though the scan itself completes). Because I was adding an echo command at the end of that bash script, the last line in the script returned a success exit code, which is why the GitLab CI pipeline succeeded when the echo command was there.
In order to work around the non-zero exit code being returned by the java PMD command, I have changed this line in my .gitlab-ci.yml file to catch the non-zero exit code and proceed.
function install_pmd() {
  # ... For brevity I'm only including the line that was changed in this method
  pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html || echo "PMD Returned Exit Code"
  # ...
}
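One caveat with || echo ...: it also hides real failures, such as PMD not being installed or the ruleset being missing. If you only want to tolerate the "violations found" case (exit code 4, per the PMD documentation linked above), a hedged alternative would be something like:

pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html || rc=$?
# exit code 4 means "violations were found" - treat it as success, fail the job for anything else
if [ "${rc:-0}" -ne 0 ] && [ "${rc:-0}" -ne 4 ]; then
  exit "$rc"
fi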

Show remote command output in CI job results

I have a CI pipeline whose stages look like this. As it shows, most of the work here is done on a remote machine, which is working fine.
The only issue is that I am unable to see the command outputs. For example, scp is used with -v, which, if run manually on the machine, shows a lot of verbose information useful for debugging; the same goes for cp -v, but the job results show no such information.
So is there a way I can re-route the command output from the remote machine to the local GitLab job output?
my job 1/6:
  rules:
    - changes:
        - ${LOCA_FILE_PATH}
  stage: prepare
  allow_failure: true
  script: |
    ssh ${USER}@${HOST} '([ -f "${PATH}/test_conf_1.txt" ] && cp -v "${PATH}/test_conf_1.txt" ${PATH}/test_yaml_$CI_COMMIT_TIMESTAMP.txt)'

my job 2/6:
  rules:
    - changes:
        - ${LOCA_FILE_PATH}
  stage: scp
  script:
    scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/
Perhaps you can try something like this:
ssh user@host 2>&1 command | tee ssh-session.log
cat ssh-session.log
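Applied to job 1/6 from the question, that could look roughly like this (a sketch; ssh already forwards the remote stdout/stderr, and tee keeps a copy in ssh-session.log while still printing it into the job log):

script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_conf_1.txt" ] && cp -v "${PATH}/test_conf_1.txt" ${PATH}/test_yaml_$CI_COMMIT_TIMESTAMP.txt)' 2>&1 | tee ssh-session.log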
In the script part you can define a variable and hold there the result of your command and you can print this out.
script:
- RESULT=$(scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}#${HOST}:${PATH}/)
- echo $RESULT
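One caveat to check: scp -v writes its verbose/debug messages to stderr, not stdout, so $(...) on its own may still capture nothing useful. Redirecting stderr into the captured output should help, e.g.:

script:
  - RESULT=$(scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/ 2>&1)
  - echo "$RESULT"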

Unable to use `run` routine on complex bash command

Got this command: cd /some/dir; /usr/local/bin/git log --diff-filter=A --follow --format=%aI -- /some/dir/file | tail -1
I want to get the output from it.
Tried this:
my $proc2 = run 'cd', $dirname, ';', '/usr/local/bin/git', 'log', '--diff-filter=A', '--follow', '--format=%aI', '--', $output_file, '|', 'tail', '-1', :out, :err;
Nothing was output.
Tried this:
my $proc2 = run </usr/local/bin/git -C>, $dirname, <log --diff-filter=A --follow --format=%aI -->, $output_file, <| tail -1>, :out, :err;
Git throws an error:
fatal: --follow requires exactly one pathspec
The same git command runs fine when run directly from the command line.
I've confirmed both $dirname and $output_file are correct.
git log --help didn't shed any light on this for me; as noted, the command runs fine straight from the command line.
UPDATE: So if I take off the | tail -1 bit, I get output from the command in raku (a date). I also discovered if I take the pipe out when running on the command line, the output gets piped into more. I'm not knowledgeable enough about bash and how it might interact with raku's run command to know for sure what's going on.
You need to run a separate proc for piping:
my $p = run «git -C "$dirname" log --diff-filter=A --format=%aI», :out, :err;
my $p2 = run <tail -1>, :in($p.out), :out;
put .out.slurp: :close with $p2;
Also you don't need tail in this case, you can do:
put .out.lines(:close).tail with $p

CI-pipeline ignore any commands that fail in a given step

I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.
I tried adding this:
stages:
  - logger

logger-commands:
  stage: logger
  allow_failure: true
  script:
    - echo 'Examining environment'
    - echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
    - git --version
    - echo --------------------------------------------------------------------------------
    - env
    - echo --------------------------------------------------------------------------------
    - npm --version
    - node --version
    - echo java -version
    - mvn --version
    - kanico --version
    - echo --------------------------------------------------------------------------------
The problem is that the Java command is failing because java isn't installed. The error says:
/bin/sh: eval: line 217: java: not found
I know I could remove the java -version line, but I'm trying to come up with a canned logger stage that I could use in all my CI pipelines, so it would include Java, Maven, Node, npm, Python, and whatever else I want to include, and I realize that some of those commands will fail because they are not installed in every image.
Searching for the above solution got me close.
GitLab CI: How to continue job even when script fails - which did help. By adding allow_failure: true I found that even if the logger job fails, the remaining stages still run (which is desirable). The answer also suggests a syntax to wrap commands in:
./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
if [ $FAILED ]
then ./do_something.sh
fi
So that is helpful, but my question is this: is there anything built into GitLab's CI pipeline syntax (or bash syntax) that allows all commands in a given step to run even if one command fails?
Is it possible to allow for a script in a CI/CD job to fail? - suggests adding the UNIX bash OR syntax as shown below:
- npm --version || echo npm failed
- node --version || echo node failed
- echo java -version || echo java failed
That syntax is a little cleaner, but I'm trying to make it simpler.
The answers already mentioned are good, but I was looking for something simpler so I wrote the following bash script. The script always returns a zero exit code so the CI-pipeline always thinks the command was successful.
If the command did fail, the command is printed along with the non-zero exit code.
#!/bin/sh
# File: runit
"$@"
EXITCODE=$?
if [ $EXITCODE -ne 0 ]
then
  echo "CMD: $@"
  echo "Ignored exit code ($EXITCODE)"
fi
exit 0
Testing it as follows:
./runit ls "/bad dir"
echo "ExitCode = $?"
Gives this output:
ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode=0
Notice that even though the command failed, the exit code is 0, which is what the CI pipeline will see.
To use it in the pipeline, that shell script has to be available to the job. I still need to research the best way to include it, but it must be present where the CI runner job executes. For example:
stages:
  - logger-safe

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  script:
    - ./runit npm --version
    - ./runit java -version
    - ./runit mvn --version
I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in solution is:
- some_command || echo command failed $?
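If the extra file is the main objection, the same idea can be inlined as a shell function through a YAML anchor in before_script, the way the PMD question above defines its helpers. A sketch (the function body is the runit script from this answer, with exit replaced by return so the rest of the job keeps running):

.helpers: &helpers |
  function runit() {
    "$@"
    EXITCODE=$?
    if [ $EXITCODE -ne 0 ]; then
      echo "CMD: $@"
      echo "Ignored exit code ($EXITCODE)"
    fi
    return 0   # return, not exit, so later script lines still execute
  }

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  before_script:
    - *helpers
  script:
    - runit npm --version
    - runit java -version
    - runit mvn --version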

Executing BTEQ file via shell script (BTEQ: Command not found error)

I'm trying to set up an environment to execute a BTEQ script via a shell script on my local machine. On running the shell script I get the error BTEQ: Command not found. I'm not sure what I'm doing wrong.
I've created a separate .tdlogon file which contains the .LOGON credentials. The BTEQ script is a simple CREATE TABLE statement that I'm trying to execute.
My .tdlogon file is something like
.logon servername/uname,pwd
I'm calling the file like this
#!/bin/bash
server_path=/Users/xyz/xyz
log_path=/Users/xyz/xyz/logs

echo -e 'Starting the script' >> ${log_path}/test_log.log

cat ${server_path}/.tdlogon ${server_path}/code/temp_query.btq | bteq >> ${log_path}/test_log.log 2>&1

if [ ${rtn_code} -ne 0 ] ; then
  echo -e 'Script completed successfully' >> ${log_path}/test_log.log
  exit 0
else
  echo -e 'Error in the script' >> ${log_path}/test_log.log
  exit 1
fi
On executing the above code I get the below error in the log file:
line 10: bteq: command not found
Appreciate any guidance related to this.
It seems like your system is not finding bteq on the PATH. Update the PATH to include the bteq location:
export PATH=/usr/bin/bteq:$PATH
And in some cases the binary will be bteq32 instead of bteq; in that case set the path as:
export PATH=/usr/bin/bteq32:$PATH
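Two hedged notes on top of that. First, PATH entries are directories, not binary paths, so the export should point at the directory that actually contains the bteq (or bteq32) executable; the path below is only a placeholder. Second, the script in the question tests ${rtn_code} without ever setting it, and the success/error branches appear inverted. A sketch of both fixes:

# placeholder path - use whatever directory bteq is actually installed in
export PATH=/path/to/teradata/bin:$PATH

cat ${server_path}/.tdlogon ${server_path}/code/temp_query.btq | bteq >> ${log_path}/test_log.log 2>&1
rtn_code=$?   # exit status of bteq, captured right after the pipeline

if [ ${rtn_code} -eq 0 ] ; then
  echo -e 'Script completed successfully' >> ${log_path}/test_log.log
  exit 0
else
  echo -e 'Error in the script' >> ${log_path}/test_log.log
  exit 1
fi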