What GitLab tool is used for code coverage reports? - kotlin

Instead of using JaCoCo, I was told that there is an internal GitLab tool with which I can create test coverage reports.
I do not want to use JaCoCo.
I am not interested in any visualization plugin within GitLab.
I would like to generate XML/HTML file(s) with e.g. bar graphs that can be emailed and opened externally.
I couldn't find anything in the GitLab dashboard menu. The project is an Android app written in Kotlin.

The question is what part of coverage you want to see/have:
just a number within the MR - for this, GitLab parses the log output of the jobs
coverage visualization within the MR - for this, you need to provide a report.
Coverage in Overview
For the coverage in the overview, and just to get a percentage, you need to configure your job with a regex that tells GitLab how to parse it, like:
job1:
  # ....
  coverage: '/Code coverage: \d+\.\d+/'
https://docs.gitlab.com/ee/ci/yaml/#coverage
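In practice this just means the job has to print a line that the regex can match. A rough sketch for the Gradle/JaCoCo setup used below (the job name and the echoed value are only illustrative; in a real job the number would be extracted from the coverage report):
unit-tests:
  stage: test
  image: gradle:6.6.1-jdk11
  script:
    - ./gradlew test jacocoTestReport
    # print a line matching the regex; in practice, parse the value from the report
    - echo "Code coverage: 78.5"
  coverage: '/Code coverage: \d+\.\d+/'
GitLab then shows the captured percentage on the job and in the MR overview.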
Visualization
We are actually using JaCoCo, but to make the coverage visible and to have the information in Merge Requests you have to convert everything into Cobertura Reports.
There are different approaches to achieve this:
with a Gradle plugin like https://github.com/kageiit/gradle-jacobo-plugin
the configuration is pretty neat, and if you already have a Gradle build it is easy to integrate
with an own step within the CI Pipeline - see https://docs.gitlab.com/ee/user/project/merge_requests/test_coverage_visualization.html
test-jdk11:
  stage: test
  image: gradle:6.6.1-jdk11
  script:
    - 'gradle test jacocoTestReport' # jacoco must be configured to create an xml report
  artifacts:
    paths:
      - build/jacoco/jacoco.xml

coverage-jdk11:
  # Must be in a stage later than test-jdk11's stage.
  # The `visualize` stage does not exist by default.
  # Please define it first, or choose an existing stage like `deploy`.
  stage: visualize
  image: registry.gitlab.com/haynes/jacoco2cobertura:1.0.7
  script:
    # convert report from jacoco to cobertura, using relative project path
    - python /opt/cover2cover.py build/jacoco/jacoco.xml $CI_PROJECT_DIR/src/main/java/ > build/cobertura.xml
  needs: ["test-jdk11"]
  artifacts:
    reports:
      cobertura: build/cobertura.xml
Important to note: you will always have to tell GitLab CI the path to the Cobertura artifact with
job:
  #...
  artifacts:
    reports:
      cobertura: build/cobertura.xml
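Note that newer GitLab versions deprecate the cobertura report key in favour of coverage_report; if you are on such a version, the equivalent declaration should look roughly like this:
job:
  #...
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: build/cobertura.xml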

Our approach is the following.
You have to tell GitLab where your coverage report is; for example, we have this setup for a Java unit test report "jacoco.xml":
Unit Test:
  stage: pruebas
  script:
    - echo "Iniciar Pruebas"
    - mvn $MAVEN_CLI_OPTS test
  artifacts:
    when: always
    reports:
      junit:
        - target/surefire-reports/*Test.xml
        - target/failsafe-reports/*Test.xml
      cobertura: target/site/jacoco/jacoco.xml
Our summary in GitLab:
Unit Test Details:
The key is your "jacoco.xml".

Related

How to generate the report of API changes on the pipeline?

I have manually generated the report of my API changes using swagger-diff.
I can automate it on a local machine using a makefile or a script, but how can I implement it in the GitLab pipeline so that the report is generated whenever someone pushes changes to the API endpoints?
java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
Note: all the project code is also containerized.
Configure a job in the pipeline to run when there are changes to your API routes. Save the output as an artifact. If you also need the diff published, you could either do the publishing in that job or create a dependent job which uses the artifact to publish the diff to GitLab Pages or an external provider.
If you have automated the process locally, then most of the work is already done, provided it is in a shell script or something similar.
Example:
This example assumes that your api routes are defined in customer/api/routes/ and internal/api/routes and that you want to generate the diff when a commit or MR is pushed to the dev branch.
ApiDiff:
  stage: build
  image: java:<some-tag>
  script:
    - java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
  artifacts:
    expire_in: 1 day
    name: api-diff
    when: on_success
    paths:
      - changes.html
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never
And then the job to publish the diff if you want one. This could also be done in the same job that generates the diff.
PublishDiff:
  stage: deploy
  needs:
    - job: "ApiDiff"
      optional: false
      artifacts: true
  image: someimage:latest
  script:
    - <some script to publish the report>
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never
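If the target is GitLab Pages, a minimal sketch of such a publish job could look like the following (this assumes Pages is enabled for the project; the copy step is only illustrative):
pages:
  stage: deploy
  needs:
    - job: "ApiDiff"
      artifacts: true
  script:
    - mkdir -p public
    # serve the generated diff as the index of the Pages site
    - cp changes.html public/index.html
  artifacts:
    paths:
      - public
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never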

Gitlab CI does not support variable expansion in needs keyword, is there any solution?

I'm creating a template for all the deploy jobs, and I need to be able to use the needs keyword with different values for each deploy job, but GitLab CI, as far as I know, does not support using variables in the needs keyword. Is there any workaround?
This is what I need to do:
# Deploy template
.deploy:
  stage: deploy
  only:
    - develop
  tags:
    - deploy
  needs: ["build:$PROJECT_NAME"]

# Deploy jobs
deploy:package1:
  extends: .deploy
  variables:
    PROJECT_NAME: 'package1'
  #needs: ['build:package1']

deploy:package2:
  extends: .deploy
  variables:
    PROJECT_NAME: 'package2'
  #needs: ['build:package2']
You can't do this. needs: does not support variables.
However, if the template you're making does not contain the job it depends on, the best approach is probably not to use needs: at all; otherwise you greatly increase the likelihood that including your template will produce an invalid YAML file.
So, your options would be either to (1) include the jobs you depend on in the same template and designate needs: explicitly, or (2) rely on users to provide the needs: key in the deploy job if they want it.
For example, a user can do this:
include:
  - "your template"

# job originates in the project configuration
my_project_job:
  script: "..."

your_deploy_template_job:
  needs: ["my_project_job"] # add the key to the included template job
Or if you provide both jobs in your pipeline configuration, you can use some rules: to keep the jobs from running, and let users enable them and override their script configurations to implement builds.
# your template yaml
your_template_build_job:package1:
  rules:
    - if: '$PACKAGE1_ENABLED'
      when: on_success
    - when: never

your_template_deploy_job:package1:
  rules:
    - if: '$PACKAGE1_ENABLED'
  needs: ["your_template_build_job:package1"]
  # ...
Then a user might just do this:
# user project yaml
include:
- "your template"
variables:
PACKAGE1_ENABLED: true
your_template_build_job:package1
script: "my project build script"
When the user doesn't explicitly enable a job, neither the build nor deploy job will be in the pipeline configuration. However, they only need to enable the build job (by variable) and the needs: configuration for the deploy job will already be in place.
Neither of these approaches is particularly perfect for very flexible use of templates, unfortunately. But there may be another option...
Workaround: Dynamic child pipelines
As a possible workaround, users could use dynamic child pipelines to generate an entire pipeline configuration with correct needs: based on a minimal configuration. Almost anything is possible with dynamic child pipelines because you can generate the YAML programmatically on the fly, though it may be more trouble than it's worth.
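A rough sketch of that idea (all job names, package names, and the generator script here are illustrative): one job writes the child pipeline YAML as an artifact, and a trigger job runs it with the needs: already resolved:
generate-deploy-pipeline:
  stage: build
  script:
    # emit one build/deploy pair per package, with needs: filled in
    - |
      for pkg in package1 package2; do
        cat >> child-pipeline.yml <<EOF
      build:$pkg:
        stage: build
        script:
          - echo "building $pkg"
      deploy:$pkg:
        stage: deploy
        needs: ["build:$pkg"]
        script:
          - echo "deploying $pkg"
      EOF
      done
  artifacts:
    paths:
      - child-pipeline.yml

run-deploys:
  stage: deploy
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-deploy-pipeline
    strategy: depend
strategy: depend makes the parent pipeline wait for the generated child pipeline and mirror its status.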

Variables in gitlab CI

I just began implementing CI jobs using gitlab-ci, and I'm trying to create a job template. Basically the jobs use the same image, tags, and script, where I use variables:
.job_e2e_template: &job_e2e
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
And here is one of the jobs using this anchor:
test-chrome:
  <<: *job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/
I'd like this template to be more generic and I was wondering if I could also use variables in the services and artifacts section, so I could add a few more lines in my template like this:
services:
  - selenium-hub
  - node-$JOB_BROWSER
artifacts:
  paths:
    - tests/
    - out_$JOB_BROWSER/
However, I cannot find any example of that, and the docs only talk about using variables in scripts. I know that variables are like environment variables for jobs, but I'm not sure whether they can be used for other purposes.
Any suggestions?
Short answer: yes, you can. As described in this blog post, GitLab does a deep merge based on the keys.
You can see what your merged pipeline file looks like under CI/CD -> Editor -> View merged YAML.
If you want to modularize your pipeline even further I would recommend using include instead of yaml anchors, so you can reuse your templates in different pipelines.
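A rough sketch of that approach (the file name e2e-template.yml is only an example, and whether variables expand in artifacts:paths should be verified on your GitLab version):
# e2e-template.yml
.job_e2e_template:
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$JOB_BROWSER docker/selenium/tests/*.side
  artifacts:
    paths:
      - tests/
      - out_$JOB_BROWSER/

# .gitlab-ci.yml
include:
  - local: e2e-template.yml

test-chrome:
  extends: .job_e2e_template
  variables:
    JOB_BROWSER: "chrome"
  services:
    - selenium-hub
    - node-chrome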

GitLab CI: Is it possible to run parallel jobs on different runners

I'm looking for a way to run parallel jobs on different runners. I have several powerful runners set up for GitLab CI. In general, it's OK to run jobs on the same runner because they're executed in Docker containers.
However, I now have a pipeline whose jobs are executed in parallel, and each job consumes lots of CPU and memory (that's by design, not an issue). If GitLab CI happens to schedule those jobs to the same runner, the jobs fail.
Also, I want this limitation to apply to this project ONLY, as my runners have 30+ CPUs and 120GB+ memory.
Thanks in advance.
It is possible, if you have set up, say, two runners (either specific, shared, or group runners) with tags.
Say runner1 has tags runner1-ci, my-runner1.
Similarly, runner2 has tags runner2-ci, my-runner2.
Now, in your .gitlab-ci.yml file, you can use the tags like below, so that each job is picked up and executed by that particular runner.
image: maven:latest

stages:
  - build
  - test
  - deploy

install_dependencies:
  stage: build
  tags:
    - runner1-ci
  script:
    - pwd
    - echo "Build"

test:
  stage: test
  tags:
    - runner2-ci
  script:
    - echo "Testing"

deploy:
  stage: deploy
  tags:
    - runner1-ci
  script:
    - echo "Deploy to nexus"
Note: This is just an example .gitlab-ci.yml to demonstrate the use of tags in pipeline.

Gitlab run different deployment scripts on merge depending on Labels

How can I run different CI deployment scripts on merge to master depending on the labels attached to the merge request?
I have a repository from which I build different versions of my software. I keep it in one repository as the systems share 90% of the code, but there are differences that definitely need code modifications. On merge requests all versions are built and a suite of tests is run. Usually I want to deploy on accepting the merge request.
As the changes are not always relevant for all systems, I would like to attach labels to the merge request that decide which deployment scripts are run on accepting the merge request. I already tried to decide automatically based on the changed code parts, but this is not possible, as often I extend a shared library and the change is only relevant for one of the systems.
I am aware of variables, but I don't know how to apply them on merge accept in YAML like this:
deploy:
  stage: deploy
  script:
    ...
  only:
    - master
Update on strategy:
As CI_MERGE_REQUEST_LABELS is not available with only:master, I will try to do a beta deployment depending on merge request labels in only:merge_requests. In only:master I will deploy the betas that have changed. This most likely will fit my needs. I will add it as a solution once it works.
I finally solved it this way:
My YML script has three stages:
stages:
  - buildtest
  - createbeta
  - deploy

buildtest:
  stage: buildtest
  script:
    - ... run unit tests
    - ... build all systems
    - ... run scripted tests on all systems
  only:
    refs:
      - merge_requests

createbeta:
  stage: createbeta
  script:
    - ... run setup and update package creation with parameter $CI_MERGE_REQUEST_LABELS
    - ... run update package tests with parameter $CI_MERGE_REQUEST_LABELS
    - ... run beta deployment scripts with parameter $CI_MERGE_REQUEST_LABELS (see text)
  only:
    refs:
      - merge_requests

deploy:
  stage: deploy
  script:
    - ... run production deployment scripts (see text)
  only:
    refs:
      - master
The first stages are run on merge request creation.
As changes to shared libraries might affect all systems, all builds and tests are run in stage "buildtest".
The scripts in stage "createbeta" check for the existence of the merge request label for the corresponding system and are skipped if the system is not affected by the labels.
The script for beta deployment creates a signal file "deploy_me" in the beta folder (important) if it runs.
When the request is merged, the deployment script runs in stage "deploy". It checks for the existence of the "deploy_me" file and only deploys and informs via mail if the file exists.
This way I can easily decide which system I want to deploy by applying a label to the merge request. I can thoroughly test the new feature with the beta version and make sure that changes do not break the other systems, as unit tests and system tests are run for all systems.
As the GitLab runner runs in a Windows environment (yes, this makes sense as I work with Delphi), here is how I find the system label in a Windows cmd file, for those who are interested. I use %* as the labels are separated by spaces and treated as individual command line parameters.
echo %* | findstr /i /c:"MyCoolSystem" > nul
if %ERRORLEVEL% EQU 0 goto runit
rem If the label is not supplied with the merge request, do nothing
goto ok
:runit
... content
:ok
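As a side note, depending on the GitLab version, the label check could probably also be expressed directly in a rules: clause instead of inside the script; a rough sketch (not the setup described above, and the job name is only illustrative):
createbeta:MyCoolSystem:
  stage: createbeta
  rules:
    - if: '$CI_MERGE_REQUEST_LABELS =~ /MyCoolSystem/'
  script:
    - ... run beta deployment for MyCoolSystem
Note that rules: and only: cannot be combined in the same job, so this would replace the only: block for that job.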
Perhaps this helps someone with a similar environment and similar workflow.