Connecting SonarCloud with Bitbucket Pipelines

I am new to SonarCloud and would like to automate my code quality tests in Bitbucket. I am trying to use Bitbucket Pipelines to do it (I am new to Pipelines too). My end goal is to do a branch analysis of the code and display the results in the Bitbucket UI itself. While trying to follow the steps in SonarCloud I came across these lines of code, and I am unclear what to add to enable the automated code check:
image: **************************  # Choose an image matching your project needs

clone:
  depth: full  # SonarCloud scanner needs the full history to assign issues properly

definitions:
  caches:
    sonar: ~/.sonar/cache  # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - **************************  # See https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html
          - sonar
        script:
          - **************************  # Build your project and run
          - pipe: sonarsource/sonarcloud-scan:1.2.0
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.4

pipelines:  # More info here: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
  branches:
    master:
      - step: *build-test-sonarcloud
      - step: *check-quality-gate-sonarcloud
  pull-requests:
    '**':
      - step: *build-test-sonarcloud
      - step: *check-quality-gate-sonarcloud
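For context, a hedged sketch of how those masked placeholders are commonly filled in, assuming a Maven project (the image name, cache, and build command below are illustrative assumptions, not part of the original snippet):

image: maven:3.8-openjdk-11  # assumption: a Maven / JDK 11 project

clone:
  depth: full

definitions:
  caches:
    sonar: ~/.sonar/cache
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - maven   # Bitbucket's predefined Maven dependency cache
          - sonar
        script:
          - mvn verify  # build the project and run the tests
          - pipe: sonarsource/sonarcloud-scan:1.2.0

The sonarcloud-scan pipe also expects a SONAR_TOKEN repository variable to be configured in Bitbucket so it can authenticate against SonarCloud.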

Related

How to generate the report of API changes on the pipeline?

I have manually generated the report of my API changes using swagger-diff.
I can automate it on a local machine using a makefile or script, but what if I want to implement it in the GitLab pipeline? How can I generate the report whenever someone pushes changes to the API endpoints?
java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
Note: all the project code is also being containerized.
Configure a job in the pipeline to run when there are changes to your API routes. Save the output as an artifact. If you also need the diff published, you could either do the publishing in that job or create a dependent job which uses the artifact to publish the diff to a GitLab page or external provider.
If you have automated the process locally, then most of the work is done already if it is in a shell script or something similar.
Example:
This example assumes that your api routes are defined in customer/api/routes/ and internal/api/routes and that you want to generate the diff when a commit or MR is pushed to the dev branch.
ApiDiff:
  stage: build
  image: java:<some-tag>
  script:
    - java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
  artifacts:
    expire_in: 1 day
    name: api-diff
    when: on_success
    paths:
      - changes.html
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never
And then the job to publish the diff if you want one. This could also be done in the same job that generates the diff.
PublishDiff:
  stage: deploy
  needs:
    - job: "ApiDiff"
      optional: false
      artifacts: true
  image: someimage:latest
  script:
    - <some script to publish the report>
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never

GitLab CI job stuck because the runner tag value hasn't been assigned

I have a CICD configuration that looks something like this:
.rule_template: &rule_configuration
  rules:
    - changes:
        - file/dev/script1.txt
      variables:
        DESTINATION_HOST: somehost1
        RUNNER_TAG: somerunner1
    - changes:
        - file/test/script1.txt
      variables:
        DESTINATION_HOST: somehost2
        RUNNER_TAG: somerunner2

default:
  tags:
    - scripts

stages:
  - lint

deploy scripts 1/6:
  <<: *rule_configuration
  tags:
    - ${RUNNER_TAG}
  stage: lint
  script: |
    echo "Add linting here!"
....
In short, which runner to choose depends on which file was changed, so the runner tag has to be decided conditionally. However, these jobs never execute and the value never gets assigned, as I always get:
This job is stuck because you don't have any active runners online or available with any of these tags assigned to them: ${RUNNER_TAG}
Any idea why it is this way and what can I do to resolve this?
gitlab-runner --version
Version: 14.7.0
Git revision: 98daeee0
Git branch: 14-7-stable
GO version: go1.17.5
Built: 2022-01-19T17:11:48+0000
OS/Arch: linux/amd64
Tags map jobs to runners. I tag my runners with the type of executor they use, e.g. shell, docker.
Based on the error message, you do not have any runners with the tag ${RUNNER_TAG}, which means that it is not resolving the variable the way you want it to.
Instead of combining rules like this, make separate jobs for each, and a rule for each to say when to trigger it.
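For example, a minimal sketch of such separate jobs with static tags, reusing the values from the question (the job names are made up for illustration):

lint dev script:
  stage: lint
  tags:
    - somerunner1
  variables:
    DESTINATION_HOST: somehost1
  rules:
    - changes:
        - file/dev/script1.txt
  script: |
    echo "Add linting here!"

lint test script:
  stage: lint
  tags:
    - somerunner2
  variables:
    DESTINATION_HOST: somehost2
  rules:
    - changes:
        - file/test/script1.txt
  script: |
    echo "Add linting here!"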
I have faced this issue, and similar issues, many times while trying to build dynamic pipelines for a multi-client environment.
The config you have above should work for your purposes to the best of my knowledge, but since it does not, there is another way to accomplish this with trigger jobs.
Create a trigger job for each possible runner tag. You can use extends to reduce the total code required for this.
.gitlab-ci.yml
stages:
  - trigger
  - lint

.trigger:
  stage: trigger
  trigger:
    include:
      - local: ./lint-job.yml
    strategy: depend

trigger-lint-script1:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner1
  rules:
    - changes:
        - file/dev/script1.txt

trigger-lint-script2:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner2
  rules:
    - changes:
        - file/dev/script2.txt
Create a trigger job with associated rules for each possible tag. This way you can change more than one of the specified files in a single commit with no issues. Define the triggered job in lint-job.yml:
lint-job.yml
deploy scripts 1/6:
  tags: [$RUNNER_TAG]
  stage: lint
  script: |
    echo "Add linting here!"
There are other ways to accomplish this, but this method is by far the simplest and cleanest for this particular use.

What GitLab tool is used for code coverage reports?

Instead of using JaCoCo, I was told that there is an internal GitLab tool with which I can create test coverage reports?
I do not want to use JaCoCo.
I am not interested in any visualization plugin within GitLab.
I would like to generate XML/HTML file(s) with e.g. bar graphs, which can be emailed and opened externally.
I couldn't find anything in the GitLab dashboard menu. The project is an Android app written in Kotlin.
The question is what part of coverage you want to see/have:
- just a number within the MR - for this, GitLab parses the log output of the jobs
- coverage visualization within the MR - for this, you need to provide a report.
Coverage in Overview
For the coverage in the overview, and just to get a percentage, you need to configure your job with a regex describing how the number can be parsed, like:
job1:
  # ....
  coverage: '/Code coverage: \d+\.\d+/'
https://docs.gitlab.com/ee/ci/yaml/#coverage
Visualization
We are actually using JaCoCo, but to make the coverage visible and to have the information in merge requests, you have to convert everything into Cobertura reports.
There are different approaches to achieve this:
- with a Gradle plugin like https://github.com/kageiit/gradle-jacobo-plugin - the configuration is pretty neat, and if you already have a Gradle build it is easy to integrate
- with your own step within the CI pipeline - see https://docs.gitlab.com/ee/user/project/merge_requests/test_coverage_visualization.html
test-jdk11:
  stage: test
  image: gradle:6.6.1-jdk11
  script:
    - 'gradle test jacocoTestReport'  # jacoco must be configured to create an xml report
  artifacts:
    paths:
      - build/jacoco/jacoco.xml

coverage-jdk11:
  # Must be in a stage later than test-jdk11's stage.
  # The `visualize` stage does not exist by default.
  # Please define it first, or choose an existing stage like `deploy`.
  stage: visualize
  image: registry.gitlab.com/haynes/jacoco2cobertura:1.0.7
  script:
    # convert report from jacoco to cobertura, using relative project path
    - python /opt/cover2cover.py build/jacoco/jacoco.xml $CI_PROJECT_DIR/src/main/java/ > build/cobertura.xml
  needs: ["test-jdk11"]
  artifacts:
    reports:
      cobertura: build/cobertura.xml
It is important to note that you will always have to tell GitLab CI the path to the Cobertura artifact with:
job:
  # ...
  artifacts:
    reports:
      cobertura: build/cobertura.xml
Our approach is the following: you have to tell GitLab where your coverage report is. For example, we have this setup for a Java unit test report "jacoco.xml":
Unit Test:
  stage: pruebas
  script:
    - echo "Iniciar Pruebas"
    - mvn $MAVEN_CLI_OPTS test
  artifacts:
    when: always
    reports:
      junit:
        - target/surefire-reports/*Test.xml
        - target/failsafe-reports/*Test.xml
      cobertura: target/site/jacoco/jacoco.xml
Our summary in GitLab and the unit test details are then shown in the UI (screenshots omitted).
The key is your "jacoco.xml".

Passing variables between Tekton Steps

I am trying to implement a basic Tekton CI pipeline. All the pipeline does is 1) get the source code 2) build an image with a new version and push it to an image registry.
The image version is generated by a Tekton Step. Images are built by another Tekton step that uses Kaniko as described here.
I am aware of using workspaces to pass variables between Tekton steps, meaning I can write the version to a file in the workspace. But I can't figure out a way to read this version from the file in the Kaniko build step below:
steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:latest
    # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
    env:
      - name: "DOCKER_CONFIG"
        value: "/tekton/home/.docker/"
    command:
      - /kaniko/executor
    args:
      - --dockerfile=$(params.pathToDockerFile)
      - --destination=$(resources.outputs.builtImage.url):<IMAGE-VERSION-NEEDED-HERE>
      - --context=$(params.pathToContext)
      - --build-arg=BASE=alpine:3
There should be a common pattern to resolve this but I am not sure if I am looking at the right places in Tekton documentation for this.
Can anyone offer some pointers?
This is to confirm that I managed to resolve the issue by redesigning the steps as Tasks, as suggested by @Jonas.
Tekton Tasks can have outputs (results) which can be referred to in other Tasks. At the time of writing, Tekton Steps don't seem to have this feature.
For more details, refer to the links in @Jonas' comments above.
All steps in a Task share the same Pod and thus have access to a shared workspace implemented as an emptyDir volume:
Volumes:
  tekton-internal-workspace:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
A common way to share data between steps is to write to a file in /workspace and read it in the next step.
Alternatively, as suggested by #Jonas, if you use different Tasks you can write a result in the first Task and feed it into a parameter of the second Task in the Pipeline definition.
Using results this way implicitly creates a dependency between the two Tasks, so the Tekton controller won't schedule the second Task until the first one has successfully completed and results are available.
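A minimal sketch of that Task/result pattern, with made-up Task names and a hard-coded version purely for illustration:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: generate-version
spec:
  results:
    - name: image-version
      description: Version tag for the built image
  steps:
    - name: compute-version
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        # in a real Task this would be derived from git, a file, etc.
        echo -n "1.2.3" > $(results.image-version.path)
---
# In the Pipeline definition, feed the result into the build Task as a parameter:
# tasks:
#   - name: build-and-push
#     taskRef:
#       name: kaniko-build        # hypothetical Task wrapping the kaniko step
#     params:
#       - name: imageVersion
#         value: $(tasks.generate-version.results.image-version)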
You can use the gcr.io/kaniko-project/executor:debug image, which has a shell at /busybox/sh.
Then create something like this (passing the kaniko command via script):
steps:
  - name: write-to-workspace
    image: ubuntu
    script: |
      #!/usr/bin/env bash
      echo "IMAGE_VERSION" > /workspace/FOO
  - name: read-from-workspace
    image: gcr.io/kaniko-project/executor:debug
    script: |
      #!/busybox/sh
      export IMAGE_VERSION=$(cat /workspace/FOO)
      echo "$IMAGE_VERSION"
      /kaniko/executor \
        --dockerfile=$(params.pathToDockerFile) \
        --destination=$(resources.outputs.builtImage.url):"${IMAGE_VERSION}" \
        --context=$(params.pathToContext) \
        --build-arg=BASE=alpine:3
You can refer to this discussion: https://github.com/tektoncd/pipeline/issues/1476

Variables in GitLab CI

I just started implementing CI jobs with gitlab-ci and I'm trying to create a job template. Basically the job uses the same image, tags and script, where I use variables:
.job_e2e_template: &job_e2e
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
And here is one of the jobs using this anchor:
test-chrome:
  <<: *job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/
I'd like this template to be more generic, and I was wondering if I could also use variables in the services and artifacts sections, so I could add a few more lines to my template like this:
services:
  - selenium-hub
  - node-$JOB_BROWSER
artifacts:
  paths:
    - tests/
    - out_$JOB_BROWSER/
However, I cannot find any example of that, and the docs only talk about using variables in scripts. I know that variables are like environment variables for jobs, but I'm not sure if they can be used for other purposes.
Any suggestions?
Short answer: yes, you can. As described in this blog post, GitLab does a deep merge based on the keys.
You can see what your merged pipeline file looks like under CI/CD -> Editor -> View merged YAML.
If you want to modularize your pipeline even further I would recommend using include instead of yaml anchors, so you can reuse your templates in different pipelines.
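For illustration, a minimal sketch of that include-based variant, using the variables in services and artifacts as the question proposes (the file name e2e-template.yml and the extends-based job layout are assumptions, not from the original answer):

# e2e-template.yml
.job_e2e_template:
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  services:
    - selenium-hub
    - node-$JOB_BROWSER
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$JOB_BROWSER docker/selenium/tests/*.side;
  artifacts:
    paths:
      - tests/
      - out_$JOB_BROWSER/

# .gitlab-ci.yml
include:
  - local: e2e-template.yml

test-chrome:
  extends: .job_e2e_template
  variables:
    JOB_BROWSER: "chrome"

extends gives the same merge behaviour as the anchor, but it also works across included files, which anchors do not.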