Enabling parallel builds in Gitlab-CI - cmake

I currently have a pipeline that builds a C++ program like this:
build:
  stage: build
  script:
    - rm -rf .git/modules/docs .git/modules/libraries/fc ./docs ./libraries/fc
    - git submodule sync
    - git submodule update --init --recursive
    - rm -rf build
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release ..
    - make -j$(nproc)
This build must remain as-is, but I would also like to run a second build in parallel with different CMake options:
cmake -DBOOST_ROOT="$BOOST_ROOT" -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTNET=1
I have read about the parallel option that's included in GitLab CI, but haven't had success incorporating it.
Any insight is greatly appreciated! I will update if I solve this before an answer arrives.

You need two jobs. This article has some good ideas on how to set it up.
GitLab infers dependencies between jobs and assumes you want to run them in order; adding a needs: list helps it build the dependency graph. If you want two jobs to run at the same time, remove their dependencies.
If something runs before this build, like a test or prepare job, you can use needs: ["test"] or needs: ["prepare"] (or whatever jobs should run before this build step), but an empty list [] tells the CI that no dependencies are needed and the jobs should run as soon as possible.
build:
  stage: build
  needs: []
  script:
    - .. common stuff
    - cmake -DCMAKE_BUILD_TYPE=Release ..
    - make -j$(nproc) # I'd probably remove the -j$(nproc) in a CI situation

build2:
  stage: build
  needs: []
  script:
    - .. common stuff
    - cmake -DCMAKE_BUILD_TYPE=Release <other options> ..
    - make -j$(nproc) # I'd probably remove the -j$(nproc) in a CI situation
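If the ".. common stuff" gets long, you could factor it into a hidden job and pull it in with extends:. A minimal sketch of that idea, reusing the submodule and build-dir steps from the question (the .common-build name is ours, not from the post):

.common-build:
  stage: build
  needs: []
  before_script:
    # shared setup; before_script and script run in the same shell,
    # so the cd carries over into each job's script
    - git submodule sync
    - git submodule update --init --recursive
    - rm -rf build && mkdir build && cd build

build:
  extends: .common-build
  script:
    - cmake -DCMAKE_BUILD_TYPE=Release ..
    - make

build2:
  extends: .common-build
  script:
    - cmake -DBOOST_ROOT="$BOOST_ROOT" -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTNET=1 ..
    - make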

You can make use of parallel:matrix jobs. This feature runs one job multiple times, with a different set of variables for each run.
In your case it would look similar to this:
build:
  stage: build
  script:
    - rm -rf .git/modules/docs .git/modules/libraries/fc ./docs ./libraries/fc
    - git submodule sync
    - git submodule update --init --recursive
    - rm -rf build
    - mkdir build
    - cd build
    - cmake $CMAKE_OPTIONS ..
    - make -j$(nproc)
  parallel:
    matrix:
      # Initial state of your job
      - CMAKE_OPTIONS: "-DCMAKE_BUILD_TYPE=Release"
      # Other options...
      - CMAKE_OPTIONS: "-DBOOST_ROOT=$BOOST_ROOT -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTNET=1"
This technique assumes that your jobs use the same variables, but with different values for each execution. You can find more info in the official docs, and here is another example (Docker builds in that scenario), but the principle should be clear.

Related

CodeBuild: Always run the 'artifacts' phase regardless of success or failure

I'm using a Jenkins pipeline to trigger AWS CodeBuild, and in my buildspec I run some tests that I wish to publish as artefacts so that they can be downloaded and read by Jenkins.
When all of my tests pass, this works just fine. However, when one or more tests fail, it seems as though the artifacts phase is ignored, so there are no artefacts for Jenkins to download.
Though it's not what I require, I have also attempted to use the reports phase, but that behaves in exactly the same way, which I find confusing: it seems crazy to fail a test and then not publish the reports.
Is it possible to make CodeBuild execute the artifacts phase regardless of success or failure?
version: 0.2
env:
  shell: bash
phases:
  install:
    runtime-versions:
      python: latest
    commands:
      - pip install cfn-lint checkov
      - ...
  pre_build:
    commands:
      - cd myproj
      - cfn-lint --template cloudformation/template.cfn.yaml --format junit > cfn-lint.xml
      - checkov --directory cloudformation --framework cloudformation secrets --output=junitxml > checkov.xml
  build:
    commands:
      - ...
  post_build:
    commands:
      - ...
artifacts:
  base-directory: myproj
  files:
    - cfn-lint.xml
    - checkov.xml
The answer here is that the artefacts (and reports) phase is not run if the pre_build phase fails.
https://docs.aws.amazon.com/codebuild/latest/userguide/view-build-details.html#view-build-details-phases
While I've technically answered my question, this means I've had to move the tests into the build phase, which feels wrong, because the post_build phase is run regardless of success or failure, so the publishing of my artefact (outside of AWS) also fails.
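For reference, a minimal sketch of that workaround, with the two check commands moved from pre_build into build so that a failing check no longer skips artifact upload (same commands as above, just rearranged):

version: 0.2
env:
  shell: bash
phases:
  install:
    runtime-versions:
      python: latest
    commands:
      - pip install cfn-lint checkov
  pre_build:
    commands:
      - cd myproj
  build:
    commands:
      # failing here still lets CodeBuild attempt artifact upload,
      # because every phase before build succeeded
      - cfn-lint --template cloudformation/template.cfn.yaml --format junit > cfn-lint.xml
      - checkov --directory cloudformation --framework cloudformation secrets --output=junitxml > checkov.xml
artifacts:
  base-directory: myproj
  files:
    - cfn-lint.xml
    - checkov.xml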

Execute GitLab job only if files have changed in a subdirectory, otherwise use cached artefact in following job

I have a simple pipeline, comparable to this one:
image: docker:20

variables:
  GIT_STRATEGY: clone

stages:
  - Building - Frontend
  - Building - Backend

include:
  - local: /.ci/extensions/ci-variables.yml
  - local: /.ci/extensions/docker-login.yml

Build Management:
  stage: Building - Frontend
  image: node:14-buster
  script:
    # Install needed dependencies for building
    - apt-get update
    - apt-get -y upgrade
    - apt-get install -y build-essential
    - yarn global add @quasar/cli
    - yarn global add @vue/cli
    # Install required modules
    - cd ${CI_PROJECT_DIR}/resources/js/management
    - npm ci --cache .npm --prefer-offline
    # Build project
    - npm run build
    # Create archive
    - tar czf ${CI_PROJECT_DIR}/dist-resources-js-management.tar.gz *
  cache:
    policy: pull-push
    key:
      files:
        - ./resources/js/management/package-lock.json
    paths:
      - ./resources/js/management/.npm/
  artifacts:
    paths:
      - dist-resources-js-management.tar.gz

Build Docker:
  stage: Building - Backend
  needs: [Build Management, Build Administration]
  dependencies:
    - Build Management
    - Build Administration
  variables:
    CI_REGISTRY_IMAGE_COMMIT_SHA: !reference [.ci-variables, variables, CI_REGISTRY_IMAGE_COMMIT_SHA]
    CI_REGISTRY_IMAGE_REF_NAME: !reference [.ci-variables, variables, CI_REGISTRY_IMAGE_REF_NAME]
  before_script:
    - !reference [.docker-login, before_script]
  script:
    - mkdir -p ${CI_PROJECT_DIR}/public/static/management
    - tar xzf ${CI_PROJECT_DIR}/dist-resources-js-management.tar.gz --directory ${CI_PROJECT_DIR}/public/static/management
    - docker build
        --pull
        --label "org.opencontainers.image.title=$CI_PROJECT_TITLE"
        --label "org.opencontainers.image.url=$CI_PROJECT_URL"
        --label "org.opencontainers.image.created=$CI_JOB_STARTED_AT"
        --label "org.opencontainers.image.revision=$CI_COMMIT_SHA"
        --label "org.opencontainers.image.version=$CI_COMMIT_REF_NAME"
        --tag "$CI_REGISTRY_IMAGE_COMMIT_SHA"
        -f .build/Dockerfile
        .
I now want the first job to be executed only under the following conditions:
Something has changed in the directory ${CI_PROJECT_DIR}/resources/js/management.
This job has not yet created an artifact.
The last job should therefore always be able to access an artifact. If nothing has changed in the directory, it does not have to be recreated each time; if it did not exist before, it must of course be created.
Is there a way to map this in GitLab CI?
If I specify the dependencies and then work with only:changes: for the first job, GitLab complains when the job is not executed. Likewise with needs:.
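For orientation, one direction would be GitLab's rules:changes combined with an optional needs entry, so the Docker job stops complaining when the frontend job is skipped; a sketch under that assumption (it does not yet cover reusing the previously built artifact):

Build Management:
  stage: Building - Frontend
  rules:
    # run only when something under the management sources changed
    - changes:
        - resources/js/management/**/*
  # ... script, cache and artifacts as above

Build Docker:
  stage: Building - Backend
  needs:
    - job: Build Management
      optional: true  # do not fail the pipeline when the job was skipped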

cmake does not (always) order Fortran modules correctly

I have a code base using Fortran modules. I can build it with no problems under normal circumstances; CMake takes care of the ordering of the module files.
However, using a GitLab runner, it SOMETIMES happens that cmake does NOT order the Fortran modules by dependency but alphabetically instead, which then leads to a build failure.
The problem seems to occur at random. I have a branch that built in the CI; after adding a commit that modified a utility script not involved in the build in any way, I ran into this problem. There is no difference in the output of the cmake configure step.
I use a matrix configuration for the CI to test different configurations, and I found that I could trigger the problem by adding another MPI version (e.g. openmpi/4.1.6). Without that version it built; with it added to the matrix, ALL configurations showed the problem.
stages:
  - configure
  - build
  - test

.basic_config:
  tags:
    - hpc_runner
  variables:
    # load submodules
    GIT_SUBMODULE_STRATEGY: recursive

.config_matrix:
  extends: .basic_config
  # define job matrix
  parallel:
    matrix:
      - COMPILER: [gcc/9.4.0]
        PARALLELIZATION: [serial, openmpi/3.1.6]
        TYPE: [option1, option2]
        BUILD_TYPE: [debug, release]
      - COMPILER: [gcc/10.3.0, intel/19.0.5]
        PARALLELIZATION: [serial]
        TYPE: [option2]
        BUILD_TYPE: [debug]

###############################################################################
# setup script
# These commands will run before each job.
before_script:
  - set -e
  - uname -a
  - |
    if [[ "$(uname)" = "Linux" ]]; then
      export THREADS=$(nproc --all)
    elif [[ "$(uname)" = "Darwin" ]]; then
      export THREADS=$(sysctl -n hw.ncpu)
    else
      echo "Unknown platform. Setting THREADS to 1."
      export THREADS=1
    fi
  # load environment
  - source scripts/build/load_environment $COMPILER $BUILD_TYPE $TYPE $PARALLELIZATION
  # set path for build folder
  - build_path=build/$COMPILER/$PARALLELIZATION/$TYPE/$BUILD_TYPE

configure:
  stage: configure
  extends: .config_matrix
  script:
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
  artifacts:
    paths:
      - build
    expire_in: 1 days

###############################################################################
# build script
build:
  stage: build
  extends: .config_matrix
  script:
    - cd $build_path
    - make
  artifacts:
    paths:
      - build
    expire_in: 1 days
  needs:
    - configure

###############################################################################
# test
test:
  stage: test
  extends: .config_matrix
  script:
    - cd $build_path
    - ctest --output-on-failure
  needs:
    - build
The runner runs on an HPC machine with a complex setup, and I am not too familiar with the exact configuration. I have contacted the admin about this problem, but wanted to see if anybody else has run into it before and has solutions or hints on what is going on.
With the help of our admin I figured it out.
The problem comes from cmake using absolute paths. The machine actually hosts several runners for parallel jobs, each using a different prefix path, e.g. /runner/001/ or /runner/012/. So when the configure stage runs on a specific runner, cmake saves that runner's prefix path into the configuration.
In the build stage there is no guarantee that the same configuration runs on the same runner. Since the makefiles contain absolute paths, make tries to access folders under the configure runner's prefix, which can hold anything from nothing at all, to stale files from previous pipelines, to the correct files downloaded by another matrix case.
The only fix I currently see is to run everything on the same runner in one stage, to avoid this roulette of prefix paths. If anybody has a different idea, or if there is a way to pin a specific matrix case to a specific runner prefix, please comment.
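For illustration, a minimal sketch of that workaround, collapsing configure, build, and test into one job so every step sees the same runner prefix (the job name is hypothetical; the commands are taken from the jobs above):

build_and_test:
  stage: build
  extends: .config_matrix
  script:
    # everything in one job, so configure/build/test share one runner
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
    - make -j$THREADS
    - ctest --output-on-failure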

How to configure lfs.fetchinclude in GitLab CI

I want git lfs fetch to download only certain directories in GitLab CI, but it failed.
The gitlab-runner version is 11.8.0~beta.1077.
I configured it like this:
variables:
  # Please edit to your GitLab project
  GIT_STRATEGY: clone
  GIT_CHECKOUT: "false"
script:
  - git config lfs.fetchinclude "xxx/xxx/, test/"
but CI errors with:
root config contains unknown keys: script
How can I fix it?
The first part is your syntax error: the script key must be part of a job, e.g. like this:
build:
  stage: build
  script:
    - git config ...
However, I think the CI runner fetches LFS files automatically, and the script only runs after cloning.
So I think you have to disable the automatic fetching first, and then this might work:
build:
  stage: build
  before_script:
    - git config lfs.fetchinclude "xxx/xxx/, test/"
    - git lfs pull
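One way to do that disabling, assuming the runner passes CI variables through to the clone step, is git-lfs's GIT_LFS_SKIP_SMUDGE switch; a sketch (this variable is our suggestion, not part of the original answer):

variables:
  GIT_STRATEGY: clone
  GIT_LFS_SKIP_SMUDGE: "1"  # skip the automatic LFS download during clone

build:
  stage: build
  before_script:
    - git config lfs.fetchinclude "xxx/xxx/, test/"
    - git lfs pull  # now fetches only the configured paths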

Issues with gitlab-ci stages

I've been working on setting up an automated RPM build and I'd like to perform a simple test on the SPEC file before proceeding with any build steps. The problem I am having is that the job always seems to jump to the deploy stage. Here is the relevant snippet from my .gitlab-ci.yml:
stages:
  - test
  - build
  - deploy

job1:
  stage: test
  script:
    # Test the SPEC file
    - su - newbuild -c "rpmbuild --nobuild -vv ~/rpmbuild/SPECS/package.SPEC"
  stage: build
  script:
    # Install our required packages
    - yum -y install openssl-devel freetype-devel fontconfig-devel libicu-devel sqlite-devel libpng-devel libjpeg-devel ruby
    # Initialize the submodules to build
    - git submodule update --init
    # build the RPM
    - su - newbuild -c "rpmbuild -ba --target=`uname -m` -vv ~/rpmbuild/SPECS/package.SPEC"
  stage: deploy
  script:
    # move the RPM/SRPM
    - mkdir -pv $BUILD_DIR/$RELEASEVER/{SRPMS,x86_64}
    - 'for f in $WORK_DIR/rpmbuild/RPMS/x86_64/*; do cp -v "$f" $BUILD_DIR/$RELEASEVER/x86_64; done'
    - 'for f in $WORK_DIR/rpmbuild/SRPMS/*; do cp -v "$f" $BUILD_DIR/$RELEASEVER/SRPMS; done'
    # create the repo
    - createrepo -dvp $BUILD_DIR/$RELEASEVER
    # update latest
    - 'if [ $CI_BUILD_REF_NAME == "master" ]; then rm $PROJECT_DIR/latest; ln -sv $(basename $BUILD_DIR) $PROJECT_DIR/latest; fi'
    - 'if [ $CI_BUILD_REF_NAME == "devel" ]; then rm $PROJECT_DIR/latest-dev; ln -sv $(basename $BUILD_DIR) $PROJECT_DIR/latest-dev; fi'
  tags:
    - repos
I've not found any questions or online documentation to properly explain this to me so any help is appreciated!
You have all three stages in one job, which does not work: in YAML the repeated stage: and script: keys override each other, so only the last pair (the deploy one) takes effect. You need to split it up into individual jobs for the three different stages.
Quote from the documentation:
First all jobs of build are executed in parallel.
If all jobs of build succeeds, the test jobs are executed in parallel.
If all jobs of test succeeds, the deploy jobs are executed in parallel.
If all jobs of deploy succeeds, the commit is marked as success.
If any of the previous jobs fails, the commit is marked as failed and no jobs of further stage are executed.
Something like this should work:
stages:
- test
- build
- deploy
do_things_on_stage_test:
script:
- do things
stage: test
do_things_on_stage_build:
script:
- do things
stage: build
do_things_on_stage_deploy:
script:
- do things
stage: deploy
I think you assume that the stages build on top of each other, which is not the case. Each job starts from a fresh environment, so if one of your stages needs something like pre-installed packages, you have to add a before_script directive. Think of the stages as: test-if-build-succeeds, test-if-deploy-succeeds, etc.
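For example, a global before_script runs before every job in every stage, so packages needed by more than one stage could be installed there (a sketch reusing the question's own yum line):

# runs before each job, regardless of stage
before_script:
  - yum -y install openssl-devel freetype-devel fontconfig-devel libicu-devel sqlite-devel libpng-devel libjpeg-devel ruby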