I just started implementing CI jobs with gitlab-ci and I'm trying to create a job template. Basically the jobs all use the same image, tags and script, in which I use variables:
.job_e2e_template: &job_e2e
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
And here is one of the jobs using this anchor:
test-chrome:
  <<: *job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/
I'd like this template to be more generic and I was wondering if I could also use variables in the services and artifacts section, so I could add a few more lines in my template like this:
services:
  - selenium-hub
  - node-$JOB_BROWSER
artifacts:
  paths:
    - tests/
    - out_$JOB_BROWSER/
However, I cannot find any example of this, and the docs only talk about using variables in scripts. I know that variables act like environment variables for jobs, but I'm not sure whether they can be used for other purposes.
Any suggestions?
Short answer: yes, you can. As described in this blog post, GitLab does a deep merge based on the keys.
You can see what your merged pipeline file looks like under CI/CD -> Editor -> View merged YAML.
If you want to modularize your pipeline even further, I would recommend using include instead of YAML anchors, so you can reuse your templates across different pipelines.
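As a minimal sketch of what that could look like with include, assuming the template is moved into its own file (the file name e2e-template.yml and the include setup below are illustrative, not from the original post), variables expand in service names and artifact paths just as they do in the script:

# e2e-template.yml (hypothetical file name)
.job_e2e_template:
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  services:
    - selenium-hub
    - node-$JOB_BROWSER            # variable used in a service name
  artifacts:
    paths:
      - tests/
      - out_$JOB_BROWSER/          # variable used in an artifact path
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side

# .gitlab-ci.yml in the consuming project
include:
  - local: e2e-template.yml

test-chrome:
  extends: .job_e2e_template
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"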
Related
I am trying to generalize the CI/CD of our GitLab projects.
I am planning to create a cicd-templates repo containing general jobs that I run in multiple projects.
For example, I have a Terraform template that accepts input variables and runs an init, validate, plan and apply job.
I am now trying to create a similar template for our python-nox sessions. The issue is that, for our integration tests, we need two services.
I would prefer not to include the services in the template, since they are not needed for the integration tests of other projects (but other services might).
So I was wondering how I could include a CI template (from another project) and pass the needed images from the parent pipeline.
What is not working:
Parent/project pipeline:
trigger-nox-template:
  variables:
    IMAGE: "registry.gitlab.com/path/to/my/image:latest"
  trigger:
    include:
      - project: cicd-templates
        file: /nox_tests.yml
    strategy: depend
  services:
    - name: mcr.microsoft.com/mssql/server:2017-latest
      alias: db
    - name: mcr.microsoft.com/azure-storage/azurite:3.13.1
      alias: storage
cicd-templates/nox_tests.yml:
variables:
  IMAGE: "registry.gitlab.com/path/to/a/default/image:latest"

integration:
  image: '$IMAGE'
  script:
    - python -m nox -s integration
As I said, I could hardcode the services in the template as well, but they might vary based on the parent pipeline, so I'm looking for a more dynamic solution.
PS: The way I implemented the image does work, but if there is a more elegant way, that would be appreciated as well.
Thanks in advance!
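For what it's worth, a sketch of one alternative (not from the original thread): drop the trigger job, let the template define a hidden job, and have each consuming project extend it and attach its own services. The hidden job name .nox-integration below is an assumption:

# cicd-templates/nox_tests.yml
.nox-integration:
  image: '$IMAGE'
  script:
    - python -m nox -s integration

# consuming project's .gitlab-ci.yml
include:
  - project: cicd-templates
    file: /nox_tests.yml

integration:
  extends: .nox-integration
  variables:
    IMAGE: "registry.gitlab.com/path/to/my/image:latest"
  services:
    - name: mcr.microsoft.com/mssql/server:2017-latest
      alias: db
    - name: mcr.microsoft.com/azure-storage/azurite:3.13.1
      alias: storage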
I have manually generated a report of my API changes using swagger-diff.
I can automate it on a local machine using a Makefile or a script, but what if I want to implement it in the GitLab pipeline? How can I generate the report whenever someone pushes changes to the API endpoints?
java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
Note that all the project code is also containerized.
Configure a job in the pipeline to run when there are changes to your API routes and save the output as an artifact. If you also need the diff published, you can either do the publishing in that job or create a dependent job that uses the artifact to publish the diff to GitLab Pages or an external provider.
If you have automated the process locally, then most of the work is done already if it is in a shell script or something similar.
Example:
This example assumes that your API routes are defined in customer/api/routes/ and internal/api/routes and that you want to generate the diff when a commit or MR is pushed to the dev branch.
ApiDiff:
  stage: build
  image: java:<some-tag>
  script:
    - java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
  artifacts:
    expire_in: 1 day
    name: api-diff
    when: on_success
    paths:
      - changes.html
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never
And then the job to publish the diff if you want one. This could also be done in the same job that generates the diff.
PublishDiff:
  stage: deploy
  needs:
    - job: "ApiDiff"
      optional: false
      artifacts: true
  image: someimage:latest
  script:
    - <some script to publish the report>
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never
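If GitLab Pages is the publishing target, a minimal sketch could look like the job below (this assumes the default Pages convention, where a job named pages publishes whatever ends up in public/):

pages:
  stage: deploy
  needs:
    - job: "ApiDiff"
      artifacts: true
  script:
    - mkdir -p public
    - cp changes.html public/index.html    # the artifact produced by ApiDiff
  artifacts:
    paths:
      - public
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*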
I have a CICD configuration that looks something like this:
.rule_template: &rule_configuration
  rules:
    - changes:
        - file/dev/script1.txt
      variables:
        DESTINATION_HOST: somehost1
        RUNNER_TAG: somerunner1
    - changes:
        - file/test/script1.txt
      variables:
        DESTINATION_HOST: somehost2
        RUNNER_TAG: somerunner2

default:
  tags:
    - scripts

stages:
  - lint

deploy scripts 1/6:
  <<: *rule_configuration
  tags:
    - ${RUNNER_TAG}
  stage: lint
  script: |
    echo "Add linting here!"

....
In short, which runner to choose depends on which file was changed, so the runner tag has to be decided conditionally. However, these jobs never execute because the tag variable is never resolved, and I always get:
This job is stuck because you don't have any active runners online or available with any of these tags assigned to them: ${RUNNER_TAG}
Any idea why it behaves this way and what I can do to resolve it?
gitlab-runner --version
Version: 14.7.0
Git revision: 98daeee0
Git branch: 14-7-stable
GO version: go1.17.5
Built: 2022-01-19T17:11:48+0000
OS/Arch: linux/amd64
Tags map jobs to runners. I tag my runners with the type of executor they use, e.g. shell or docker.
Based on the error message, you do not have any runners with the literal tag ${RUNNER_TAG}, which means the variable is not being resolved the way you want it to be.
Instead of combining rules like this, create a separate job for each case, with a rule on each that says when to trigger it.
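A rough sketch of that split, reusing the values from the question (the job names below are illustrative):

deploy scripts dev:
  stage: lint
  tags:
    - somerunner1
  variables:
    DESTINATION_HOST: somehost1
  rules:
    - changes:
        - file/dev/script1.txt
  script: |
    echo "Add linting here!"

deploy scripts test:
  stage: lint
  tags:
    - somerunner2
  variables:
    DESTINATION_HOST: somehost2
  rules:
    - changes:
        - file/test/script1.txt
  script: |
    echo "Add linting here!"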
I have faced this issue, and similar ones, many times while trying to build dynamic pipelines for a multi-client environment.
To the best of my knowledge the config you have above should work for your purposes, but since it does not, there is another way to accomplish this with trigger jobs.
Create a trigger job for each possible runner tag. You can use extends to reduce the total code required for this.
gitlab-ci.yml
stages:
  - trigger
  - lint

.trigger:
  stage: trigger
  trigger:
    include:
      - local: ./lint-job.yml
    strategy: depend

trigger-lint-script1:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner1
  rules:
    - changes:
        - file/dev/script1.txt

trigger-lint-script2:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner2
  rules:
    - changes:
        - file/dev/script2.txt
Create a trigger job with associated rules for each possible tag. This way you can change more than one of the specified files in a single commit with no issues. Define the triggered job in lint-job.yml
lint-job.yml
deploy scripts 1/6:
  tags: [$RUNNER_TAG]
  stage: lint
  script: |
    echo "Add linting here!"
There are other ways to accomplish this, but this method is by far the simplest and cleanest for this particular use.
I'm trying to implement automation inside my GitLab project.
In order to perform a security scan, I would like to use ZAP to go through all the URLs present in the project and scan them. Passing all the URLs manually is clearly not an option, so I'm trying to make the test as automated as possible.
The problem is: how do I reach all the URLs present in the application?
One way I thought of is to pass them as a variable in the YML file and use them as a parameter in the ZAP command, something like the snippet below.
Is this a reasonable solution? Is there any other way to perform an automated scan inside a repository (without passing the URLs manually)?
Thanks
variables:
  OWASP_CONTAINER: $APP_NAME-$BUILD_ID-OWASP
  OWASP_IMAGE: "owasp/zap2docker-stable"
  OWASP_REPORT_DIR: "owasp-data"
  ZAP_API_PORT: "8090"
  PENTEST_IP: 'application:8080'

run penetration tests:
  stage: pen-tests
  image: docker:stable
  script:
    - docker exec $OWASP_CONTAINER zap-cli -v -p $ZAP_API_PORT active-scan http://$PENTEST_IP/html
You need to turn on a feature flag (FF_NETWORK_PER_BUILD) to enable a network per build, so that services can also reach each other (available since GitLab Runner 12.9). For more information see: https://docs.gitlab.com/runner/executors/docker.html#networking
A working example of an OWASP ZAP job in GitLab CI:
owasp-zap:
  variables:
    FF_NETWORK_PER_BUILD: 1
  image: maven
  services:
    - selenium/standalone-chrome
    - name: owasp/zap2docker-weekly
      entrypoint: ['zap.sh', '-daemon', '-host', '0.0.0.0', '-port', '8080',
                   '-config', 'api.addrs.addr.name=.*', '-config', 'api.addrs.addr.regex=true', '-config', 'api.key=1234567890']
  script:
    - sleep 5
    - mvn clean test -Dbrowser=chrome -Dgrid_url=http://selenium-standalone-chrome:4444/wd/hub -Dproxy=http://owasp-zap2docker-weekly:8080
    - curl http://owasp-zap2docker-weekly:8080/OTHER/core/other/htmlreport/?apikey=1234567890 -o report.html
  artifacts:
    paths:
      - report.html
My gitlab-ci.yml is configured to deploy to a staging server on push to a staging branch. Each developer has their own staging server for testing. The way I have it now doesn't seem very scalable, in that I would have to duplicate each job for each user.
What I have now:
deploy_to_staging_sf:
  image: debian:jessie
  stage: deploy
  only:
    - staging_sf
  tags:
    - staging_sf
  script:
    - ./deploy.sh

deploy_to_staging_ay:
  image: debian:jessie
  stage: deploy
  only:
    - staging_ay
  tags:
    - staging_ay
  script:
    - ./deploy.sh
I was wondering if it was possible to do some kind of regex or pattern matching to keep it DRY and scalable, and I came up with this...
deploy_to_staging:
  image: debian:jessie
  stage: deploy
  only:
    - /^staging_.*$/
  tags:
    - $CI_COMMIT_REF_NAME
  script:
    - ./deploy.sh
I have the tag for the runner configured to match the branch name. However, $CI_COMMIT_REF_NAME is not evaluated for tags, and I just get the error
This job is stuck, because you don't have any active runners online
with any of these tags assigned to them: $CI_COMMIT_REF_NAME
Is this actually possible and have I just done something wrong, or is it just not possible to evaluate variables here at all?
Thanks for any help.
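For completeness, a sketch of one DRY-ish alternative that does not rely on variables in tags at all: a hidden template job plus a thin stub per developer via extends. (Newer GitLab releases, 14.1 and later, do allow CI/CD variables in tags, which would make the attempt above work; verify against your GitLab version.)

.deploy_to_staging:
  image: debian:jessie
  stage: deploy
  script:
    - ./deploy.sh

deploy_to_staging_sf:
  extends: .deploy_to_staging
  only:
    - staging_sf
  tags:
    - staging_sf

deploy_to_staging_ay:
  extends: .deploy_to_staging
  only:
    - staging_ay
  tags:
    - staging_ay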