I'm using a Jenkins pipeline to trigger AWS CodeBuild, and in my buildspec I run some tests whose output I want to publish as artefacts so that they can be downloaded and read by Jenkins.
When all of my tests pass, this works just fine. However, when one or more tests fail, the artifacts phase seems to be skipped, so there are no artefacts for Jenkins to download.
Though it's not what I require, I have also tried the reports section, but that behaves in exactly the same way, which I find confusing: it seems crazy to fail on a test and then not publish the reports.
Is it possible to make CodeBuild execute the artifacts phase regardless of success or failure?
version: 0.2

env:
  shell: bash

phases:
  install:
    runtime-versions:
      python: latest
    commands:
      - pip install cfn-lint checkov
      - ...
  pre_build:
    commands:
      - cd myproj
      - cfn-lint --template cloudformation/template.cfn.yaml --format junit > cfn-lint.xml
      - checkov --directory cloudformation --framework cloudformation secrets --output=junitxml > checkov.xml
  build:
    commands:
      - ...
  post_build:
    commands:
      - ...

artifacts:
  base-directory: myproj
  files:
    - cfn-lint.xml
    - checkov.xml
The answer here is that the artifacts (and reports) upload is skipped if the pre_build phase fails; it does still run if only the build phase fails.
https://docs.aws.amazon.com/codebuild/latest/userguide/view-build-details.html#view-build-details-phases
While I've technically answered my question, this means that I've had to move the tests into the build phase, which feels wrong because the post_build phase runs regardless of success or failure, so publishing of my artefact (outside of AWS) also fails.
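For reference, a minimal sketch of that workaround based on the buildspec above: the lint/test commands move from pre_build into build, so a failing check no longer prevents the artifacts upload.
version: 0.2
env:
  shell: bash
phases:
  install:
    runtime-versions:
      python: latest
    commands:
      - pip install cfn-lint checkov
  build:
    commands:
      # running the checks here (instead of pre_build) means the
      # artifacts/reports are still uploaded when a check fails
      - cd myproj
      - cfn-lint --template cloudformation/template.cfn.yaml --format junit > cfn-lint.xml
      - checkov --directory cloudformation --framework cloudformation secrets --output=junitxml > checkov.xml
artifacts:
  base-directory: myproj
  files:
    - cfn-lint.xml
    - checkov.xml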
I have 7 stages in my pipeline. I need ruby for 3 of the stages.
I have tried two different options:
Install ruby in each of the stages that require it
Install ruby as part of the before_script section
Using before_script takes up too much time, because it installs ruby in the 4 other stages that do not require it.
Is there a way to install dependencies as part of one stage and carry them forward for the rest of the stages?
Example yml:
image: ubuntu:21.10

before_script:
  - apt update
  - apt install ruby-full
  - apt install python3.8

stages:
  - s1
  - s2
  - s3
  - s4

s1:
  stage: s1
  script: ruby s1.rb

s2:
  stage: s2
  script: ruby s2.rb

s3:
  stage: s3
  script: python3 s3.py

s4:
  stage: s4
  script: python3 s4.py
There are a few elements here to understand. Generally, every job starts with the same fresh environment. The only exceptions to this are files passed through artifacts: or files restored from cache: configurations. Actions performed in one job otherwise generally have no effect on any other job.
Using before_script takes up too much time, because it installs ruby in the 4 other stages that do not require it.
It's also important to know that before_script can be set for each job independently. If one job doesn't need it, just override the before_script: key in that job.
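For example, a minimal sketch based on the yml above (only two of the jobs shown; treat the details as assumptions about your setup):
image: ubuntu:21.10

before_script:           # global default: install ruby
  - apt update
  - apt install -y ruby-full

stages: [s1, s3]

s1:
  stage: s1
  script: ruby s1.rb     # inherits the global before_script, so ruby is available

s3:
  stage: s3
  before_script:         # override: this job skips the ruby install entirely
    - apt update
    - apt install -y python3.8
  script: python3 s3.py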
Anyhow. There are a few ways you might optimize your build speed with respect to dependencies:
Docker image containing your dependencies
Typically, you would just use a ruby image as your image: for jobs requiring ruby. Usually an official image from dockerhub will work, like ruby:3.1-alpine.
some_ruby_job:
  image: "ruby:3.1-alpine"
  script: # ruby is already available by default
    - echo "hello ruby"
    - ruby -v

some_other_job:
  image: alpine:latest
  script:
    - echo "this job does not need ruby"
Making a custom docker image
If your dependencies are very complex, you may even choose to create your own docker images and push them to the project's container registry so you can use the custom image with all your dependencies as your image:.
You could even build an image in one stage and use it as the image: in subsequent stages. This example uses docker caching with --cache-from to further speed up that process.
build:
  image: docker:19.03.12
  stage: .pre
  services:
    - docker:19.03.12-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME || true
    - docker build --cache-from $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME -t $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME .
    - docker push $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME

some_ruby_job:
  stage: test
  # This is the image that was built in the previous stage!
  image: $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME
  script:
    - echo "all my dependencies are here!"
    - ruby -v
Caching
To further speed things along, you may also choose to cache your ruby dependencies (say, if you install gems as part of your job)
Something like:
some_ruby_job:
  stage: one
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
  # ...
That way the vendor/ruby directory is cached, which avoids having to download the gems again in every stage.
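Note that bundler has to actually install the gems into vendor/ruby inside the project directory for anything to land in the cached path. A sketch of what that might look like (the bundler config step is an assumption, since the gem install step isn't shown above):
some_ruby_job:
  stage: one
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
  before_script:
    # install gems into ./vendor/ruby so they end up inside the cached path
    - bundle config set --local path 'vendor/ruby'
    - bundle install
  script:
    - bundle exec ruby s1.rb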
Cache policy
You can also speed up caching behavior in subsequent stages by setting the cache policy to pull (to avoid time spent uploading the cache after the job). In other words, only one job is responsible for generating the cache, the other jobs reuse the same cache.
ruby_jobs_in_future_stages:
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
    policy: pull # only download the cache, don't upload it
I'm learning GitLab CI/CD. When the build has finished, I want to take the files saved as artifacts and send them to a server. Is that possible?
image: maven:3.8.1-jdk-11

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - mvn clean install
  artifacts:
    paths:
      - "*/target/*.jar"

deploy:
  stage: deploy
  script:
    - scp -r <artifacts_path> root@test.com:~/Deploy
Could I get the artifacts' real path on the runner and then send the files with scp?
Generally speaking, no. You must rely on the artifact restoration process. Keep in mind that (1) artifacts are generally not stored on the runner and (2) docker runners execute jobs inside a docker container and typically would not have access to files on the runner host, even if artifacts were stored there.
When jobs start, artifacts from previous stages are restored into the workspace.
So, as an alternative solution, you can simply start with an empty workspace (don't check out the repo), then upload all the files in the workspace, which should be only the restored artifacts, assuming there are no file-based variables.
deploy:
  variables:
    GIT_STRATEGY: none  # prevent checkout of repository
  stage: deploy
  script:
    - ls -laht  # list files, which should be just the restored artifacts
    - scp -r ./ root@test.com:~/Deploy
Another way might be to just use the same glob pattern used in the artifacts:paths: to find the files and upload them.
variables:
  ARTIFACTS_PATTERN: "*/target/*.jar"

build:
  # ...
  artifacts:
    paths:
      - $ARTIFACTS_PATTERN

deploy:
  script:
    # something like this. Not sure if scp supports glob patterns,
    # so rsync filter rules are used here to pick out just the jars
    - rsync -a -m --include="*/" --include="$ARTIFACTS_PATTERN" --exclude="*" ./ user@remote:~/Deploy
Currently I have a pipeline that builds a C++ program like this:
build:
  stage: build
  script:
    - rm -rf .git/modules/docs .git/modules/libraries/fc ./docs ./libraries/fc
    - git submodule sync
    - git submodule update --init --recursive
    - rm -rf build
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release ..
    - make -j$(nproc)
This build must remain as-is, but I would also like to run a second build in parallel with a different cmake invocation:
cmake -DBOOST_ROOT="$BOOST_ROOT" -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTNET=1
I have read about the parallel option that's included in gitlab-ci, but haven't had success incorporating it.
Any insight is greatly appreciated! Will update if solved prior to answers.
You need to have two jobs. This article has some good ideas of how to set it up.
By default, GitLab infers dependencies from the stage order and runs jobs stage by stage; adding a needs: list lets it build a dependency graph instead. If you want two jobs to run at the same time, you remove their dependencies.
If you have something before this build, like a test or prepare stage, you can use needs: ["test"] or needs: ["prepare"] or whatever jobs need to run before this build step, but you can use [] to tell the CI that no dependencies are needed and that the job should run as soon as possible.
build:
  stage: build
  needs: []
  script:
    - .. common stuff
    - cmake -DCMAKE_BUILD_TYPE=Release ..
    - make  # I'd probably remove -j$(nproc) in a CI situation

build2:
  stage: build
  needs: []
  script:
    - .. common stuff
    - cmake -DCMAKE_BUILD_TYPE=Release AND OTHER OPTIONS ..
    - make  # I'd probably remove -j$(nproc) in a CI situation
You can make use of parallel:matrix jobs. This feature runs one job multiple times, with a different set of variables for each run.
In your case it would look similar to this:
build:
  stage: build
  script:
    - rm -rf .git/modules/docs .git/modules/libraries/fc ./docs ./libraries/fc
    - git submodule sync
    - git submodule update --init --recursive
    - rm -rf build
    - mkdir build
    - cd build
    - cmake $CMAKE_OPTIONS ..
    - make -j$(nproc)
  parallel:
    matrix:
      # CMAKE_OPTIONS is just an arbitrary variable name holding the full
      # set of cmake flags for each variant
      # Initial state of your job
      - CMAKE_OPTIONS: ["-DCMAKE_BUILD_TYPE=Release"]
      # The testnet variant with the extra options
      - CMAKE_OPTIONS: ["-DBOOST_ROOT=$BOOST_ROOT -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTNET=1"]
This technique assumes that your jobs share the same set of variables, just with different values for each execution.
You can find more info in the official docs, and here is another example (Docker builds in that scenario), but the principle should be the same.
I have a code base using Fortran modules. I can build it with no problems under normal circumstances; CMake takes care of the ordering of the module files.
However, using a GitLab runner, it SOMETIMES happens that cmake does NOT order the Fortran modules by dependency, but alphabetically instead, which then leads to a build failure.
The problem seems to occur at random. I have a branch that built in the CI; after adding a commit that modified a utility script not involved in the build in any way, I ran into this problem. There is no difference in the output of the cmake configure step.
I use a matrix configuration for the CI to test different configurations. I found that I could trigger this by adding another MPI version (e.g. openmpi/4.1.6). Without that version, it built; with it added to the matrix, ALL configurations showed the problem.
stages:
  - configure
  - build
  - test

.basic_config:
  tags:
    - hpc_runner
  variables:
    # load submodules
    GIT_SUBMODULE_STRATEGY: recursive

.config_matrix:
  extends: .basic_config
  # define job matrix
  parallel:
    matrix:
      - COMPILER: [gcc/9.4.0]
        PARALLELIZATION: [serial, openmpi/3.1.6]
        TYPE: [option1, option2]
        BUILD_TYPE: [debug, release]
      - COMPILER: [gcc/10.3.0, intel/19.0.5]
        PARALLELIZATION: [serial]
        TYPE: [option2]
        BUILD_TYPE: [debug]

###############################################################################
# setup script
# These commands will run before each job.
before_script:
  - set -e
  - uname -a
  - |
    if [[ "$(uname)" = "Linux" ]]; then
      export THREADS=$(nproc --all)
    elif [[ "$(uname)" = "Darwin" ]]; then
      export THREADS=$(sysctl -n hw.ncpu)
    else
      echo "Unknown platform. Setting THREADS to 1."
      export THREADS=1
    fi
  # load environment
  - source scripts/build/load_environment $COMPILER $BUILD_TYPE $TYPE $PARALLELIZATION
  # set path for build folder
  - build_path=build/$COMPILER/$PARALLELIZATION/$TYPE/$BUILD_TYPE

configure:
  stage: configure
  extends: .config_matrix
  script:
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
  artifacts:
    paths:
      - build
    expire_in: 1 days

###############################################################################
# build script
build:
  stage: build
  extends: .config_matrix
  script:
    - cd $build_path
    - make
  artifacts:
    paths:
      - build
    expire_in: 1 days
  needs:
    - configure

###############################################################################
# test
test:
  stage: test
  extends: .config_matrix
  script:
    - cd $build_path
    - ctest --output-on-failure
  needs:
    - build
The runner runs on an HPC machine with a complex setup, and I am not too familiar with the exact configuration. I contacted the admin about this problem, but wanted to see if anybody else has run into this before and has solutions or hints on what is going on.
With the help of our admin I figured it out.
The problem comes from cmake using absolute paths. The machine actually hosts several runners for parallel jobs, each using a different prefix path, e.g. /runner/001/ or /runner/012/. So when the configure job runs on a specific runner, cmake saves that prefix path into the generated configuration.
In the build stage, there is no guarantee that the same configuration runs on the same runner. However, since the makefiles contain absolute paths, make tries to access the folders under the configure runner's prefix. Those paths can contain anything from nothing at all, to stale files from previous pipelines, to the correct files downloaded for another matrix case.
The only fix I currently see is to run everything on the same runner in one stage, to avoid the roulette of prefix paths. If anybody has a different idea, or if there is a way to pin a specific matrix case to a specific runner prefix, please comment.
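For reference, a rough sketch of that workaround with the config above: collapse configure, build and test into a single job so every step sees the same runner prefix. The job name is made up, and the reuse of $build_path and $CMAKE_COMMAND is an assumption based on the snippets shown earlier.
build_and_test:
  stage: build
  extends: .config_matrix
  script:
    # configure, build and test in one job, so the absolute paths that
    # cmake writes into the generated makefiles stay valid
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
    - make
    - ctest --output-on-failure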
When I run mvn javadoc:javadoc locally, it gives me a bunch of warnings (empty @return tags, unknown tags, etc.) but eventually builds the Javadoc tree.
On GitLab CI, I have the following in .gitlab-ci.yml:
variables:
  MAVEN_OPTS: "-Dhttps.protocols=TLSv1.2 -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"

# [...]

deploy:jdk8:
  stage: deploy
  script:
    - if [ ! -f ci_settings.xml ];
        then echo "CI settings missing\! If deploying to GitLab Maven Repository, please see https://docs.gitlab.com/ee/user/project/packages/maven_repository.html#creating-maven-packages-with-gitlab-cicd for instructions.";
      fi
    - 'mvn $MAVEN_CLI_OPTS deploy -s ci_settings.xml'
    - 'mvn javadoc:javadoc'
    - 'cp -r target/site/apidocs public/javadoc/dev'
  artifacts:
    paths:
      - public
  only:
    - master
    - dev
Here, Javadoc generation fails, with the previously mentioned warnings being reported as errors.
I am ultimately going to fix these things, but in the meantime, I would like Javadoc on CI to behave like its local counterpart. Where is the setting to accomplish that?
Changing the mvn javadoc:javadoc line as follows did the trick for me:
- 'mvn javadoc:javadoc -DadditionalJOption=-Xdoclint:none'
Still not sure why this seems to be the default behavior on my local Maven installation but not on GitLab, but at least this solves my issue for now.