How to run specific tests for each pull request? - testing

Let's assume we have more than 1,500 tests, and before a pull request gets merged to the master branch, a test pipeline always runs all of them, which takes more than 45 minutes. Some of these tests are not necessary to run again and again, while others should run every time.
Is there a way to specify which tests should run for a given pull request and which shouldn't? Or, to put it differently: can we somehow define a filter to specify which tests get run for a particular pull request?

You should be able to configure phpunit to run specific tests for a particular branch. For example, in the configuration below, phpunit runs all tests for the master branch, and only a subset of tests for any branch matching the feature/* glob pattern:
pipelines:
  default:
    - step:
        name: Run all tests
        script:
          - phpunit mydir
  branches:
    master:
      - step:
          name: Run all tests
          script:
            - phpunit mydir
    'feature/*':
      - step:
          name: Run only MyTest test class
          script:
            - phpunit --filter MyTest
Alternatively, you should be able to decide which tests to run based on the BITBUCKET_BRANCH environment variable:
pipelines:
  default:
    - step:
        name: Run tests that match the name of the branch
        script:
          - phpunit --filter "${BITBUCKET_BRANCH}"
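If the branch names don't map directly onto test names, the variable can be massaged in a script step before being passed to --filter. A minimal sketch, assuming a hypothetical feature/<TestClass> branch naming convention (the prefix and naming scheme are assumptions, not something Bitbucket enforces):

```shell
# Hypothetical sketch: derive a phpunit filter from the branch name.
# Assumes branches are named like "feature/MyTest" after the test class to run.
BITBUCKET_BRANCH="feature/MyTest"       # set automatically by Bitbucket Pipelines in CI
FILTER="${BITBUCKET_BRANCH#feature/}"   # strip the "feature/" prefix
echo "phpunit --filter ${FILTER}"
```

In the pipeline itself, the echo would be replaced by the real phpunit invocation.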

Related

cmake does not (always) order Fortran modules correctly

I have a code base using Fortran modules. I can build it with no problems under normal circumstances; CMake takes care of the ordering of the module files.
However, using a GitLab runner, it SOMETIMES happens that CMake does NOT order the Fortran modules by dependencies, but alphabetically instead, which then leads to a build failure.
The problem seems to occur at random. I have a branch that built in the CI; after adding a commit that modified a utility script not involved in the build in any way, I ran into this problem. There is no difference in the output of the CMake configure step.
I use a matrix configuration for the CI to test different setups. I found that I could trigger the problem by adding another MPI version (e.g. openmpi/4.1.6) to the matrix. Without that version, everything built; with it added, ALL configurations showed the problem.
stages:
  - configure
  - build
  - test

.basic_config:
  tags:
    - hpc_runner
  variables:
    # load submodules
    GIT_SUBMODULE_STRATEGY: recursive

.config_matrix:
  extends: .basic_config
  # define job matrix
  parallel:
    matrix:
      - COMPILER: [gcc/9.4.0]
        PARALLELIZATION: [serial, openmpi/3.1.6]
        TYPE: [option1, option2]
        BUILD_TYPE: [debug, release]
      - COMPILER: [gcc/10.3.0, intel/19.0.5]
        PARALLELIZATION: [serial]
        TYPE: [option2]
        BUILD_TYPE: [debug]

###############################################################################
# setup script
# These commands will run before each job.
before_script:
  - set -e
  - uname -a
  - |
    if [[ "$(uname)" = "Linux" ]]; then
      export THREADS=$(nproc --all)
    elif [[ "$(uname)" = "Darwin" ]]; then
      export THREADS=$(sysctl -n hw.ncpu)
    else
      echo "Unknown platform. Setting THREADS to 1."
      export THREADS=1
    fi
  # load environment
  - source scripts/build/load_environment $COMPILER $BUILD_TYPE $TYPE $PARALLELIZATION
  # set path for build folder
  - build_path=build/$COMPILER/$PARALLELIZATION/$TYPE/$BUILD_TYPE

configure:
  stage: configure
  extends: .config_matrix
  script:
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
  artifacts:
    paths:
      - build
    expire_in: 1 days

###############################################################################
# build script
build:
  stage: build
  extends: .config_matrix
  script:
    - cd $build_path
    - make
  artifacts:
    paths:
      - build
    expire_in: 1 days
  needs:
    - configure

###############################################################################
# test
test:
  stage: test
  extends: .config_matrix
  script:
    - cd $build_path
    - ctest --output-on-failure
  needs:
    - build
The runner runs on an HPC machine with a complex setup, and I am not too familiar with the exact configuration. I contacted the admin about this problem, but wanted to see if anybody else has run into this before and has solutions or hints on what is going on.
With the help of our admin I figured it out.
The problem comes from CMake using absolute paths. The machine actually hosts several runners for parallel jobs, each using a different prefix path, e.g. /runner/001/ or /runner/012/. So when the configure stage runs on a specific runner, CMake saves that runner's prefix path into the configuration.
In the build stage, there is no guarantee that the same configuration runs on the same runner. However, since the makefiles contain absolute paths, make tries to access folders under the configure runner's prefix. What it finds there can be anything from nothing at all, to stale files from previous pipelines, to the correct files downloaded by another job.
The only fix I currently see is to run everything on the same runner in a single stage, to avoid this roulette of prefix paths. If anybody has a different idea, or if there is a way to pin a specific matrix case to a specific runner prefix, please comment.
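A minimal sketch of that workaround, reusing the .config_matrix template and $build_path from the pipeline above: collapsing configure, build, and test into a single job guarantees all three steps execute on the same runner, and therefore under the same prefix path (the -j $THREADS flag reuses the variable set in before_script):

```yaml
# Sketch: one job per matrix case, so configure/build/test share a runner prefix.
build_and_test:
  stage: build
  extends: .config_matrix
  script:
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
    - make -j $THREADS
    - ctest --output-on-failure
```

The price is losing per-stage artifacts and the ability to retry only the failed stage.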

How to run gitlab-ci script steps in parallel

I have a job in gitlab-ci that looks like this:
job_name:
  script:
    - someExe.exe --auto-exit 120
    - script.py
  needs:
    - some_needs
  stage: stage
  tags:
    - tags
someExe.exe is an executable that runs for 120 seconds. I want to start this executable and, while it is running, start script.py. The problem is that GitLab waits until someExe.exe stops running, and only then runs script.py.
Is there any way to do what I want? Preferably in only one job (having two jobs, one that starts the .exe and one that starts script.py, is not good).
Do your requirements allow for two different jobs with the same stage name? If so, gitlab-ci will run them in parallel:
stages:
  - my-stage

some-exe:
  script:
    - someExe.exe --auto-exit 120
  needs:
    - some_needs
  stage: my-stage
  tags:
    - tags

py-script:
  script:
    - script.py
  needs:
    - some_needs
  stage: my-stage
  tags:
    - tags
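If a single job is a hard requirement and the runner executes scripts in a shell, another option is to background the first command and wait for it at the end of the job. A minimal sketch, where the sleep and echo commands are stand-ins for someExe.exe and script.py:

```shell
tmp=$(mktemp)
( sleep 1; echo "someExe.exe finished" >> "$tmp" ) &  # background: stand-in for someExe.exe --auto-exit 120
echo "script.py finished" >> "$tmp"                   # foreground: runs immediately, stand-in for script.py
wait                                                  # keep the job alive until the background task exits
cat "$tmp"
```

Note that a non-zero exit status from the backgrounded command is easy to lose this way; to fail the job on it, you would need to `wait` on its specific PID and check the result.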

Azure DevOps build doesn't fail when coverage is below target with dotnet test

I have a build pipeline in Azure DevOps for an ASP.NET Core app, and I want to use it as a criterion for approving pull requests.
steps:
  - script: dotnet restore
    displayName: 'Run command: dotnet restore'
  - script: >
      dotnet test
      /p:CollectCoverage=true
      /p:CoverletOutputFormat=cobertura
      /p:Threshold=80
      /p:ThresholdStat=total
      /p:Exclude="[*xunit.*]*"
    displayName: 'Run command: dotnet test'
I want the build to fail when code coverage (measured with coverlet) doesn't meet the threshold. But even though the acceptance criteria are not met, and an error message is logged, the step still completes successfully:
coverlet.msbuild.targets(41,5): error : The total line coverage is below the specified 80
coverlet.msbuild.targets(41,5): error : The total branch coverage is below the specified 80
coverlet.msbuild.targets(41,5): error : The total method coverage is below the specified 80
Is it possible to force a failure in this case?
Try running the tests with the DotNetCoreCLI@2 task instead of a plain script step:
- task: DotNetCoreCLI@2
  displayName: 'dotnet test'
  inputs:
    command: test
    projects: 'path/to/tests/projects'
    arguments: '/p:CollectCoverage=true
      /p:CoverletOutputFormat=cobertura /p:Threshold=80
      /p:ThresholdStat=total /p:Exclude="[*xunit.*]*"'

How to bind Jenkins build output with tests result?

I'm setting up automated protractor tests to run in a docker container with the help of Jenkins, but I have not been able to make the Jenkins build result reflect the testing outcome (if some test fails, the build should fail too).
Importantly, all tests should run, even if the first one fails.
The tests are initiated with docker-compose up --abort-on-container-exit and my docker-compose file looks like:
version: '2'
services:
  selenium:
    image: selenium/standalone-chrome
    ports:
      - 4444:4444
    volumes:
      - /dev/shm:/dev/shm
  protractor:
    volumes:
      - ./reporting:/assets/reporting
    image: protractor-test
    command: "dockerize -wait http://selenium:4444 -timeout 60m protractor /assets/conf.js"
It looks like your docker-compose command returns exit code 0 no matter what.
How about using a Jasmine xunit reporter to generate a test report, copying the generated XML report out of the container (using docker cp), and then publishing it with Jenkins' JUnit post-build action?
The job will be marked as failed if the XML is not present, which means there was an error during the test run, or marked as unstable if any of the test asserts failed.
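Alternatively, assuming a reasonably recent docker-compose, the --exit-code-from flag (which implies --abort-on-container-exit) propagates the exit status of one chosen service to the docker-compose command itself, so a failing protractor run would fail the Jenkins shell step directly:

```shell
# Return protractor's exit code instead of always 0
docker-compose up --exit-code-from protractor
```

With the ./reporting:/assets/reporting volume already in the compose file above, a report written to /assets/reporting lands on the host for the JUnit publisher without needing docker cp.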

Gitlab CI Different executor per stage

Is it possible to have two stages in .gitlab-ci.yml, one run with a Docker runner and the other run with a shell runner?
Imagine I want to run the tests in a docker container but run the deploy stage locally in a shell.
Not exactly per stage, but you can have different jobs run by different runners using the tags configuration option, which should give you exactly what you want.
Add the tag docker to the Docker runner and the tag shell to the shell runner (either during runner creation or later in Project settings -> Runners). Then set the tags in your .gitlab-ci.yml file:
stages:
  - test
  - deploy

tests:
  stage: test
  tags:
    - docker
  script:
    - [test routine]

deployment:
  stage: deploy
  tags:
    - shell
  script:
    - [deployment routine]
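For completeness, the tags can also be assigned at registration time via gitlab-runner's --tag-list option. A hypothetical sketch (URL, token, and image are placeholders to adapt):

```shell
# Register a Docker-executor runner tagged "docker" and a shell-executor runner tagged "shell"
gitlab-runner register --executor docker --docker-image alpine:latest \
  --tag-list docker --url https://gitlab.example.com --registration-token TOKEN
gitlab-runner register --executor shell \
  --tag-list shell --url https://gitlab.example.com --registration-token TOKEN
```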