GitLab CI configuration of 2 mutually exclusive jobs

I have a pretty common (I guess?) use case for GitLab CI.
I want to create the next semantic version by choosing one of two possibilities: patch or minor.
Then I want to build an image with this version.
So my pipeline looks like this:
stages:
  - tag
  - build_image

tag-minor:
  stage: tag
  when: manual
  allow_failure: false
  script:
    - echo "tagging minor..."

tag-patch:
  stage: tag
  when: manual
  allow_failure: false
  script:
    - echo "tagging patch..."

build_image:
  stage: build_image
  script:
    - echo "building..."
tag-minor and tag-patch are in the same stage and are both manual jobs. I want the user to select which version to create, and then the pipeline should automatically move on to the next stage, which is build. However, the build_image job does not start unless both tag-minor and tag-patch are completed. How do I change this behavior so it waits for only one of them? It would be perfect (though not a must-have) if, on top of that, running tag-patch prevented the user from running tag-minor in the same pipeline.
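For what it's worth, one direction sometimes used for this kind of either/or gate is GitLab's needs keyword (DAG pipelines): split the build into one job per tag job, each needing exactly one of them, so a build can start as soon as its own tag job finishes instead of waiting for the whole tag stage. This is only a sketch built on the job names above, not a verified answer, and it does not make tag-minor and tag-patch mutually exclusive:

stages:
  - tag
  - build_image

tag-minor:
  stage: tag
  when: manual
  allow_failure: false
  script:
    - echo "tagging minor..."

tag-patch:
  stage: tag
  when: manual
  allow_failure: false
  script:
    - echo "tagging patch..."

build_image_minor:
  stage: build_image
  needs: ["tag-minor"]   # starts once tag-minor succeeds, without waiting for tag-patch
  script:
    - echo "building..."

build_image_patch:
  stage: build_image
  needs: ["tag-patch"]   # starts once tag-patch succeeds; stays waiting if it is never triggered
  script:
    - echo "building..."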

Related

GitLab CI/CD pipeline no tags supplied means SANITY module should run automatically

I am trying to run my test suite using GitLab with annotations. Suppose I have 4 scenarios defined, of which two are for regression and two are for sanity. When I pass a tag named regression, it runs my regression hook. Now I want a solution so that if I don't pass any tag, it runs the sanity hook.
stages:
  - build

cucumber_test:
  stage: build
  tags: [regression, sanity]
  allow_failure: false
  script:
    - mvn "clean" "test" "-Dcucumber.filter.tags=#%Tag%"
  rules:
    - if: '$Tag == "reg"'
      allow_failure: true
  artifacts:
    paths:
      - Report
    when: always
Because your only rule requires a specific value for Tag, the job will only be present in that circumstance.
I want a solution if I don't pass any tag it should run sanity hook
What you probably want to do here is set a default value defined in variables:. Additionally, you need to add a default rule to make sure the job runs even when the Tag value is not reg.
variables:
  Tag: "sanity" # the default if none is provided manually

rules:
  - if: '$Tag == "reg"'
    allow_failure: true
  - when: on_success # run this job normally, even when $Tag is not "reg"
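Putting the two pieces together with the job from the question, the whole thing might look roughly like this. It is only a sketch: the @$Tag expression assumes the Cucumber scenarios are tagged @sanity and @reg and replaces the Windows-style %Tag% reference from the original command.

variables:
  Tag: "sanity"  # default used when no value is passed manually

cucumber_test:
  stage: build
  tags: [regression, sanity]  # GitLab runner tags kept from the question, not Cucumber tags
  script:
    # assumes scenarios are tagged @sanity / @reg and the runner shell expands $Tag
    - mvn clean test "-Dcucumber.filter.tags=@$Tag"
  rules:
    - if: '$Tag == "reg"'
      allow_failure: true
    - when: on_success  # run normally even when $Tag is not "reg"
  artifacts:
    paths:
      - Report
    when: always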

How to generate the report of API changes on the pipeline?

I have manually generated the report of my API changes using swagger-diff
I can automate it on a local machine using a makefile or a script, but what if I want to implement it in the GitLab pipeline? How can I generate the report whenever someone pushes changes to the API endpoints?
java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
Note that all the project code is also being containerized.
Configure a job in the pipeline to run when there are changes to your api routes. Save the output as an artifact. If you also need the diff published, you could either do the publishing in that job or create a dependent job which uses the artifact to publish the diff to a Gitlab page or external provider.
If you have automated the process locally, then most of the work is done already if it is in a shell script or something similar.
Example:
This example assumes that your api routes are defined in customer/api/routes/ and internal/api/routes and that you want to generate the diff when a commit or MR is pushed to the dev branch.
ApiDiff:
  stage: build
  image: java:<some-tag>
  script:
    - java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
  artifacts:
    expire_in: 1 day
    name: api-diff
    when: on_success
    paths:
      - changes.html
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never
And then the job to publish the diff, if you want one. This could also be done in the same job that generates the diff, as sketched after the PublishDiff job below.
PublishDiff:
  stage: deploy
  needs:
    - job: "ApiDiff"
      optional: false
      artifacts: true
  image: someimage:latest
  script:
    - <some script to publish the report>
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never
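If publishing in the same job is preferred, a rough combined version might look like this (same placeholders as above for the image tag and the publish command; the chosen image would also need whatever tooling the publish step requires):

ApiDiffAndPublish:
  stage: build
  image: java:<some-tag>
  script:
    # generate the diff, then publish it from the same job
    - java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
    - <some script to publish the report>
  artifacts:
    expire_in: 1 day
    name: api-diff
    when: on_success
    paths:
      - changes.html
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never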

GitLab CI job stuck because the runner tag value hasn't been assigned

I have a CICD configuration that looks something like this:
.rule_template: &rule_configuration
  rules:
    - changes:
        - file/dev/script1.txt
      variables:
        DESTINATION_HOST: somehost1
        RUNNER_TAG: somerunner1
    - changes:
        - file/test/script1.txt
      variables:
        DESTINATION_HOST: somehost2
        RUNNER_TAG: somerunner2

default:
  tags:
    - scripts

stages:
  - lint

deploy scripts 1/6:
  <<: *rule_configuration
  tags:
    - ${RUNNER_TAG}
  stage: lint
  script: |
    echo "Add linting here!"

....
In short, which runner to choose depends on which file was changed, hence the runner tag has to be decided conditionally. However, these jobs never execute and the variable never gets a value assigned, as I always get:
This job is stuck because you don't have any active runners online or available with any of these tags assigned to them: ${RUNNER_TAG}
Any idea why it is this way, and what can I do to resolve this?
gitlab-runner --version
Version: 14.7.0
Git revision: 98daeee0
Git branch: 14-7-stable
GO version: go1.17.5
Built: 2022-01-19T17:11:48+0000
OS/Arch: linux/amd64
Tags map jobs to runners. I tag my runners with the type of executor they use, e.g. shell or docker.
Based on the error message, you do not have any runners with the tag ${RUNNER_TAG}, which means that it is not resolving the variable the way you want it to.
Instead of combining the rules like this, make a separate job for each runner, with a rule on each saying when to trigger it, as sketched below.
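A rough sketch of that suggestion, reusing the file paths, hosts, and runner tags from the question (the job names are arbitrary):

stages:
  - lint

lint dev script1:
  stage: lint
  tags:
    - somerunner1          # hard-coded runner tag instead of ${RUNNER_TAG}
  variables:
    DESTINATION_HOST: somehost1
  rules:
    - changes:
        - file/dev/script1.txt
  script: |
    echo "Add linting here!"

lint test script1:
  stage: lint
  tags:
    - somerunner2
  variables:
    DESTINATION_HOST: somehost2
  rules:
    - changes:
        - file/test/script1.txt
  script: |
    echo "Add linting here!"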
I have faced this issue, and similar issues, many times while trying to build dynamic pipelines for a multi-client environment.
The config you have above should work for your purposes to the best of my knowledge, but since it does not, there is another way to accomplish this with trigger jobs.
Create a trigger job for each possible runner tag. You can use extends to reduce the total code required for this.
gitlab-ci.yml
stages:
  - trigger
  - lint

.trigger:
  stage: trigger
  trigger:
    include:
      - local: ./lint-job.yml
    strategy: depend

trigger-lint-script1:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner1
  rules:
    - changes:
        - file/dev/script1.txt

trigger-lint-script2:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner2
  rules:
    - changes:
        - file/dev/script2.txt
Create a trigger job with associated rules for each possible tag. This way you can change more than one of the specified files in a single commit with no issues. Define the triggered job in lint-job.yml
lint-job.yml
deploy scripts 1/6:
  tags: [$RUNNER_TAG]
  stage: lint
  script: |
    echo "Add linting here!"
There are other ways to accomplish this, but this method is by far the simplest and cleanest for this particular use.

How to run job on a specific branch using rules in GitLab CI/CD

It seems rules replaces the only/except functionality in the latest GitLab versions.
Before, specifying that a job had to be executed only for master branch, for example, was very straightforward.
How would that be done with rules?
I'm guessing GitLab provides some variable that specifies the current branch's name, but I cannot find that. The only examples I see are regarding merge requests.
In other words, if I have the following job, how to restrict it to run only in potato branch?
unit_tests:
  stage: test
  script: dotnet vstest test/*UnitTests/bin/Release/**/*UnitTests.dll --Blame
  rules:
    - exists:
        - test/*UnitTests/*UnitTests.csproj
I guess this would be it:
unit_tests:
  stage: test
  script: dotnet vstest test/*UnitTests/bin/Release/**/*UnitTests.dll --Blame
  rules:
    - if: $CI_COMMIT_BRANCH == "potato"
Here are the variable references:
https://docs.gitlab.com/ee/ci/variables/predefined_variables.html
Here is an example from the gitlab-runner project's source code itself:
https://gitlab.com/gitlab-org/gitlab-runner/-/blob/main/.gitlab/ci/test.gitlab-ci.yml
job-name:
  script:
    - echo "i am potato"
  rules:
    - if: '$CI_COMMIT_BRANCH == "potato"'

gitlab-ci.yml - variables not evaluated

My gitlab-ci.yml is configured to deploy to a staging server on push to a staging branch. Each developer has their own staging server for testing. The way I have it now doesn't seem very scalable, in that I would have to duplicate each job for each user.
I have now:
deploy_to_staging_sf:
  image: debian:jessie
  stage: deploy
  only:
    - staging_sf
  tags:
    - staging_sf
  script:
    - ./deploy.sh

deploy_to_staging_ay:
  image: debian:jessie
  stage: deploy
  only:
    - staging_ay
  tags:
    - staging_ay
  script:
    - ./deploy.sh
I was wondering if it was possible to do some kind of regex or pattern matching to keep it DRY and scalable, and I came up with this...
deploy_to_staging:
  image: debian:jessie
  stage: deploy
  only:
    - /^staging_.*$/
  tags:
    - $CI_COMMIT_REF_NAME
  script:
    - ./deploy.sh
I have the tag for the runner configured to match the branch name. However, $CI_COMMIT_REF_NAME is not evaluated for tags, and I just get the error
This job is stuck, because you don't have any active runners online
with any of these tags assigned to them: $CI_COMMIT_REF_NAME
Is this actually possible and have I just done something wrong, or is it just not possible to evaluate variables here at all?
Thanks for any help.