How to stop robocopy from exiting the build? - gitlab-ci

I'm using GitLab 8.15.4 and the latest runner for that version. Because of our firewall I can't run npm install, so I'm copying the node_modules from another location into the build folder. The runner is on a Windows 7 machine.
My first attempt:
(.gitlab-ci.yml)
before_script:
  - robocopy S:\Storage\GitLab-Runner\Assets\node_modules .\node_modules /s

build:
  stage: build
  script:
    - echo starting
    - gulp
    - echo done
  artifacts:
    paths:
      - deploy.zip
Fails the build with the error:
ERROR: Job failed: exit status 1
My second (nth) try puts the robocopy into a script file and executes it from there:
(.gitlab-ci.yml)
before_script:
  - S:\Storage\GitLab-Runner\Scripts\CopyAssets.bat

build:
  stage: build
  script:
    - echo starting
    - gulp
    - echo done
  artifacts:
    paths:
      - deploy.zip
(CopyAssets.bat)
robocopy S:\Storage\GitLab-Runner\Assets\node_modules .\node_modules /s
set /A errlev="%ERRORLEVEL% & 24"
exit /B %errlev%
Passes but does not execute any other steps.
How can I prevent RoboCopy from exiting the build when it finishes?

You and a lot of other people have encountered this issue with robocopy in CI deployments. As I found this question unanswered for some time, and the other answers are incompatible with continuing the script after robocopy, I want to share the solution here.
If you want robocopy to ignore all return codes under 8 (>= 8 means copy error), you need a condition that follows the command directly and changes the error level.
(robocopy src dst) ^& IF %ERRORLEVEL% LSS 8 SET ERRORLEVEL=0

For powershell users:
(robocopy src dst) ; if ($lastexitcode -lt 8) { $global:LASTEXITCODE = $null }
or
cmd /c (robocopy src dst) ^& IF %ERRORLEVEL% LEQ 1 exit 0
I have tested this on GitLab 13.12 with a PowerShell GitLab Runner and it worked well.
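The same "treat small exit codes as success" idea can be sketched in portable shell for runners that use a POSIX shell. Here `run_tolerant` is a hypothetical helper (not part of robocopy or GitLab), and `sh -c 'exit N'` stands in for robocopy's exit codes:

```shell
#!/bin/sh
# Hypothetical wrapper: run a command and treat exit codes below a
# threshold as success. Robocopy codes below 8 are informational
# ("files copied", "extra files present"); only >= 8 means a copy error.
run_tolerant() {
  threshold=$1
  shift
  "$@"
  code=$?
  if [ "$code" -lt "$threshold" ]; then
    return 0
  fi
  return "$code"
}

run_tolerant 8 sh -c 'exit 3'   # informational code, masked
echo "masked: $?"
run_tolerant 8 sh -c 'exit 9'   # real failure, propagated
echo "propagated: $?"
```

The wrapper succeeds for the first call and propagates code 9 for the second, which is exactly what the cmd/PowerShell one-liners above do for robocopy.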

Related

Running PMD in GitLab CI Script doesn't work unless echo command is added after the script runs

This is an interesting issue. I have a GitLab project, and I've created a .gitlab-ci.yml to run a PMD that will scan my code after every commit. The ci.yml file looks like this:
image: "node:latest"

stages:
  - preliminary-testing

apex-code-scan:
  stage: preliminary-testing
  allow_failure: false
  script:
    - install_java
    - install_pmd
  artifacts:
    paths:
      - pmd-reports/
####################################################
# Helper Methods
####################################################
.sfdx_helpers: &sfdx_helpers |
  function install_java() {
    local JAVA_VERSION=11
    local JAVA_INSTALLATION=openjdk-$JAVA_VERSION-jdk
    echo "Installing ${JAVA_INSTALLATION}"
    apt update && apt -y install $JAVA_INSTALLATION
  }

  function install_pmd() {
    local PMD_VERSION=6.52.0
    local RULESET_PATH=ruleset.xml
    local OUTPUT_DIRECTORY=pmd-reports
    local SOURCE_DIRECTORY=force-app
    local URL=https://github.com/pmd/pmd/releases/download/pmd_releases%2F$PMD_VERSION/pmd-bin-$PMD_VERSION.zip
    # Here I would download and unzip the PMD source code. But for now I have the PMD source already in my project for testing purposes
    # apt update && apt -y install unzip
    # wget $URL
    # unzip -o pmd-bin-$PMD_VERSION.zip
    # rm pmd-bin-$PMD_VERSION.zip
    echo "Installed PMD!"
    mkdir -p $OUTPUT_DIRECTORY
    echo "Going to run PMD!"
    ls
    echo "Start"
    pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html
    echo "Done"
    rm -r pmd-bin-$PMD_VERSION
    echo "Remove pmd"
  }

before_script:
  - *sfdx_helpers
When I try to run this pipeline, it will fail after starting the PMD:
However, if I make a small change to the PMD's .sh file and add an echo command at the very end, then the pipeline succeeds:
PMD /bin/run.sh before (doesn't work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
PMD /bin/run.sh after (does work):
...
java ${HEAPSIZE} ${PMD_JAVA_OPTS} $(jre_specific_vm_options) -cp "${classpath}" "${CLASSNAME}" "$@"
echo "Done1" # This is the last line in the file
I don't have the slightest idea why this is the case. Does anyone know why adding this echo command at the end of the .sh file would cause the pipeline to succeed? I could keep it as is with the echo command, but I would like to understand why it is behaving this way. I don't want to be that guy that just leaves a comment saying "Hey, don't touch this line of code. I don't know why, but without it the whole thing fails." Thank you!
PMD exits with a specific exit code depending on whether it found some violations or not; see https://pmd.github.io/latest/pmd_userdocs_cli_reference.html#exit-status
I guess your PMD run finds some violations, and PMD exits with exit code 4, which is not a success exit code.
In general, this is used to make the CI build fail, in case any PMD violations are present - forcing to fix the violations before you get a green build.
If that is not what you want, e.g. you only want to report the violations but not fail the build, then you need to add the following command line option:
--fail-on-violation false
Then PMD will exit with exit code 0, even when there are violations.
So it appears that the java command that PMD runs returns a non-zero exit code (even though the script itself is successful). Because I was adding an echo command at the end of that bash script, the last line in the script returned a success exit code, which is why the GitLab CI pipeline succeeded when the echo command was there.
In order to work around the non-zero exit code being returned by the java PMD command, I have changed this line in my .gitlab-ci.yml file to catch the non-zero exit code and proceed:
function install_pmd() {
  # ... For brevity I'm just including the line that was changed in this method
  pmd-bin-$PMD_VERSION/bin/run.sh pmd -d $SOURCE_DIRECTORY -R $RULESET_PATH -f xslt -P xsltFilename=pmd_report.xsl -r $OUTPUT_DIRECTORY/pmd-apex.html || echo "PMD Returned Exit Code"
  # ...
}
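The explanation above (a script's exit status is the status of its last command) can be checked with a quick experiment. The two scripts below are made-up stand-ins for PMD's run.sh, with `sh -c 'exit 4'` playing the role of the java command:

```shell
#!/bin/sh
# A script's exit status is the status of its LAST command unless it
# calls exit explicitly. "sh -c 'exit 4'" stands in for the java call.
cat > /tmp/without_echo.sh <<'EOF'
sh -c 'exit 4'
EOF
cat > /tmp/with_echo.sh <<'EOF'
sh -c 'exit 4'
echo "Done1"
EOF

sh /tmp/without_echo.sh
echo "without trailing echo, status: $?"   # status: 4 -> job fails
sh /tmp/with_echo.sh
echo "with trailing echo, status: $?"      # status: 0 -> job passes
```

That is the whole mystery: the trailing echo was masking the java command's exit code 4, just as `|| echo ...` does in the workaround.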

Append the package.json version number to my build artifact in aws-codebuild

I really don't know if this is a simple (it must be), common, or complex task.
I have a buildspec.yml file in my CodeBuild project, and I am trying to append the version written in the package.json file to the output artifact.
I have already seen a lot of tutorials that teach how to append the date (not really useful to me), and others that tell me to execute a version.sh file with this
echo $(sed -nr 's/^\s*"version": "([0-9]{1,}.[0-9]{1,}.*)",$/\1/p' package.json)
and set it in a variable (it doesn't work).
I'm ending up with a build folder called: "my-project-$(version.sh)"
The CodeBuild environment uses Ubuntu and Node.js.
Update (solved):
My version.sh file:
#!/usr/bin/env bash
echo $(sed -nr 's/^\s*"version": "([0-9]{1,}\.[0-9]{1,}.*)",$/\1/p' package.json)
Then I just found out 2 things:
Allow execute access to your version.sh file:
git update-index --add --chmod=+x version.sh
Declare a variable in any phase in buildspec; I did it in the build phase (just to make sure the repository is already copied into the environment):
TAG=$($CODEBUILD_SRC_DIR/version.sh)
Then reference it in the artifact's versioned name:
artifacts:
  files:
    - '**/*'
  name: workover-frontend-$TAG
As a result, my build artifact's name is: myproject-1.0.0
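The extraction step can be sketched in isolation. The package.json below is a made-up sample, and the regex is lightly adjusted from the question (dot escaped, trailing comma made optional); GNU sed is assumed, since -r is its extended-regex flag:

```shell
#!/bin/sh
# Extract the "version" field from a sample package.json with GNU sed.
PKG=$(mktemp)
cat > "$PKG" <<'EOF'
{
  "name": "my-project",
  "version": "1.2.3",
  "private": true
}
EOF
VERSION=$(sed -nr 's/^\s*"version": "([0-9]+\.[0-9]+.*)",?$/\1/p' "$PKG")
echo "my-project-$VERSION"   # -> my-project-1.2.3
rm -f "$PKG"
```

Running this locally is a quick way to confirm the regex before wiring it into buildspec.yml.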
In my case this script did not want to fetch data from package.json. On my local machine it worked great, but on AWS it didn't. I had to use chmod in a different way, because I got a message saying I didn't have the right permissions. My buildspec:
version: 0.2
env:
  variables:
    latestTag: ""
phases:
  pre_build:
    commands:
      - "echo sed version"
      - sed --version
  build:
    commands:
      - chmod +x version.sh
      - latestTag=$($CODEBUILD_SRC_DIR/version.sh)
      - "echo $latestTag"
artifacts:
  files:
    - '**/*'
  discard-paths: yes
And the results show up in the CodeBuild console.
I should also mention that when I put only, for example, echo 222 into the version.sh file, I got the right answer in the CodeBuild console.

CI-pipeline ignore any commands that fail in a given step

I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.
I tried adding this:
stages:
  - logger

logger-commands:
  stage: logger
  allow_failure: true
  script:
    - echo 'Examining environment'
    - echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
    - git --version
    - echo --------------------------------------------------------------------------------
    - env
    - echo --------------------------------------------------------------------------------
    - npm --version
    - node --version
    - echo java -version
    - mvn --version
    - kanico --version
    - echo --------------------------------------------------------------------------------
The problem is that the Java command is failing because java isn't installed. The error says:
/bin/sh: eval: line 217: java: not found
I know I could remove the line java -version, but I'm trying to come up with a canned logger that I could use in all my CI-Pipelines, so it would include: Java, Maven, Node, npm, python, and whatever else I want to include and I realize that some of those commands will fail because some of the commands are not found.
Searching for the above solution got me close.
GitLab CI: How to continue job even when script fails - which did help. By adding allow_failure: true I found that even if the logger job failed, the remaining stages would run (which is desirable). The answer also suggests a syntax to wrap commands in, which is:
./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
if [ $FAILED ]
then ./do_something.sh
fi
So that is helpful, but my question is this.
Is there anything built into gitlab's CI-pipeline syntax (or bash syntax) that allows all commands in a given step to run even if one command fails?
Is it possible to allow for a script in a CI/CD job to fail? - suggests adding the UNIX bash OR syntax as shown below:
- npm --version || echo npm failed
- node --version || echo node failed
- echo java -version || echo java failed
That is a little cleaner (syntax) but I'm trying to make it simpler.
The answers already mentioned are good, but I was looking for something simpler, so I wrote the following shell script. The script always returns a zero exit code, so the CI pipeline always thinks the command was successful.
If the command did fail, the command is printed along with the non-zero exit code.
#!/bin/sh
# File: runit
"$@"
EXITCODE=$?
if [ $EXITCODE -ne 0 ]
then
  echo "CMD: $@"
  echo "Ignored exit code ($EXITCODE)"
fi
exit 0
Testing it as follows:
./runit ls "/bad dir"
echo "ExitCode = $?"
Gives this output:
ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode = 0
Notice that even though the command failed, the exit code of 0 is what the CI pipeline will see.
To use it in the pipeline, I have to have that shell script available to the CI runner job; I'll research how best to include it. For example:
stages:
  - logger-safe

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  script:
    - ./runit npm --version
    - ./runit java -version
    - ./runit mvn --version
I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in solution is:
- some_command || echo command failed $?
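For the record, the effect of that `||` fallback can be verified outside CI; here `sh -c 'exit 7'` stands in for any failing command:

```shell
#!/bin/sh
# When the left-hand command fails, the echo runs; $? expands to the
# failed command's exit code inside the echo's arguments, and the line
# as a whole succeeds, which is all the CI runner looks at.
sh -c 'exit 7' || echo "command failed $?"
echo "pipeline sees: $?"   # -> pipeline sees: 0
```

So the one-liner both reports the real exit code and keeps the job green.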

Publish ASP.NET Core to IIS with GITLAB CI/CD

I can run the web application by using dotnet run in the .gitlab-ci.yml script.
stages:
  - build

build:
  stage: build
  before_script:
    - 'dotnet restore'
  script:
    - echo "Building the My Application"
    - "dotnet publish Eitms.Decoder.sln -c release"
    - "cd C:\\MyFolderContaints\\Eitms.Decoder.Backend"
    - "dotnet run"
  only:
    - release
But how can I publish it to IIS? Can anyone show the steps?
Thanks
UPDATE
After viewing the script from HERE, it's still not successful. Did I do something wrong here?
stages:
  - build
  - deploy

build:
  stage: build
  before_script:
    - 'dotnet restore'
  script:
    - echo "Building the app"
    - "dotnet publish Eitms.Decoder.sln -c release"
  artifacts:
    untracked: true
  only:
    - release

deploy_staging:
  stage: deploy
  script:
    - echo "Deploy to IIS"
    - "dotnet publish Eitms.Decoder.Backend\\Eitms.Decoder.Backend.csproj -c release -o C:\\Secret Path\\PRODUCTION\\Secret Project"
  dependencies:
    - build
  only:
    - release
I don't know if this is still relevant, and I have never used GitLab CI, but looking at the provided scripts I think you just need to copy the files into the IIS folder after publishing (using CMD commands like xcopy), the same way you would with a CMD .bat file.
Steps:
1. publish project
2. stop appPool
3. copy files
4. start appPool
e.g. (just for example):
dotnet publish "Eitms.Decoder.Backend\\Eitms.Decoder.Backend.csproj" -c release -o "C:\\Secret Path\\PRODUCTION\\Secret Project"
appcmd stop apppool /apppool.name:"Secret Project APP POOL"
xcopy "C:\\Secret Path\\PRODUCTION\\Secret Project" "C:\\inetpub\\wwwroot\\Secret Project" /s /e
appcmd start apppool /apppool.name:"Secret Project APP POOL"
I had the same requirement, and @Simon's response helped me a lot. But when trying to apply it, I was confronted with some other problems, which I have handled with an improved script:
deploy_staging:
  stage: deploy
  script:
    - 'dotnet publish WebProjectFolder\\WebProject.csproj -c release -o E:\\TemporaryLocation'
    # I used PowerShell to stop the apppool.
    # I also added a test to verify that the apppool is actually started. (If you try to stop a stopped apppool, your job will fail.)
    - 'if ((Get-WebAppPoolState -Name pointeuse).Value -eq "Started") { Stop-WebAppPool -Name pointeuse }'
    # I used /exclude so I don't copy config files (in my case web.config and appsettings.json).
    # These config files already exist under my IIS because they contain specific configuration (so I copied them manually once because they do not change).
    # deployignore.txt is a file that I added to the root of my solution (next to SolutionName.sln). It contains the config files that I don't want to copy (in my case it contains 2 lines: web.config and appsettings.json).
    # E:\\DeployLocation is the folder where your IIS site points to.
    - 'xcopy /s /e /y /exclude:deployignore.txt E:\\TemporaryLocation E:\\DeployLocation'
    # Added the sleep because the app pool couldn't be started (maybe I can lower the sleep delay).
    - sleep 10
    - 'Start-WebAppPool -Name pointeuse'
  only:
    - master
Many thanks, Simon.

Issues with gitlab-ci stages

I've been working on setting up an automated RPM build and I'd like to perform a simple test on the SPEC file before proceeding with any build steps. The problem I am having is that the job always seems to jump to the deploy stage. Here is the relevant snippet from my .gitlab-ci.yml:
stages:
  - test
  - build
  - deploy

job1:
  stage: test
  script:
    # Test the SPEC file
    - su - newbuild -c "rpmbuild --nobuild -vv ~/rpmbuild/SPECS/package.SPEC"
  stage: build
  script:
    # Install our required packages
    - yum -y install openssl-devel freetype-devel fontconfig-devel libicu-devel sqlite-devel libpng-devel libjpeg-devel ruby
    # Initialize the submodules to build
    - git submodule update --init
    # build the RPM
    - su - newbuild -c "rpmbuild -ba --target=`uname -m` -vv ~/rpmbuild/SPECS/package.SPEC"
  stage: deploy
  script:
    # move the RPM/SRPM
    - mkdir -pv $BUILD_DIR/$RELEASEVER/{SRPMS,x86_64}
    - 'for f in $WORK_DIR/rpmbuild/RPMS/x86_64/*; do cp -v "$f" $BUILD_DIR/$RELEASEVER/x86_64; done'
    - 'for f in $WORK_DIR/rpmbuild/SRPMS/*; do cp -v "$f" $BUILD_DIR/$RELEASEVER/SRPMS; done'
    # create the repo
    - createrepo -dvp $BUILD_DIR/$RELEASEVER
    # update latest
    - 'if [ $CI_BUILD_REF_NAME == "master" ]; then rm $PROJECT_DIR/latest; ln -sv $(basename $BUILD_DIR) $PROJECT_DIR/latest; fi'
    - 'if [ $CI_BUILD_REF_NAME == "devel" ]; then rm $PROJECT_DIR/latest-dev; ln -sv $(basename $BUILD_DIR) $PROJECT_DIR/latest-dev; fi'
  tags:
    - repos
I've not found any questions or online documentation to properly explain this to me so any help is appreciated!
You have all stages in one job, which does not work. You need to split it up into individual jobs for the three different stages.
Quote from the documentation:
First, all jobs of build are executed in parallel.
If all jobs of build succeed, the test jobs are executed in parallel.
If all jobs of test succeed, the deploy jobs are executed in parallel.
If all jobs of deploy succeed, the commit is marked as success.
If any of the previous jobs fails, the commit is marked as failed and no jobs of further stages are executed.
Something like this should work:
stages:
  - test
  - build
  - deploy

do_things_on_stage_test:
  script:
    - do things
  stage: test

do_things_on_stage_build:
  script:
    - do things
  stage: build

do_things_on_stage_deploy:
  script:
    - do things
  stage: deploy
I think you assume that the stages build on top of each other, which is not the case. If one of your stages needs something like pre-installed packages, you have to add a before_script directive. Think of the stages as in: test-if-build-succeeds, test-if-deploy-succeeds, etc.
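Since jobs do not share an installed environment, each job has to do its own setup in before_script. A minimal sketch for the build job above (package list shortened from the question; job name is illustrative):

```yaml
build_rpm:
  stage: build
  before_script:
    # Each job starts from a fresh environment, so install build
    # dependencies here rather than relying on an earlier stage.
    - yum -y install openssl-devel freetype-devel fontconfig-devel
  script:
    - git submodule update --init
    - su - newbuild -c "rpmbuild -ba -vv ~/rpmbuild/SPECS/package.SPEC"
```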