Pass services to child pipeline in GitLab

I am trying to generalize the CI/CD of our GitLab projects.
I am planning to create a cicd-templates repo containing general jobs that I run in multiple projects.
For example, I have a Terraform template that accepts input variables and runs an init, validate, plan, and apply job.
I am now trying to create a similar template for our python-nox sessions. The issue is that our integration tests need two services.
I would prefer not to include the services in the template, since they are not needed for the integration tests of other projects (though other projects might need different services).
So I was wondering how I could include a CI template (from another project) and pass the needed service images from the parent pipeline.
What is not working:
Parent/project pipeline:
trigger-nox-template:
  variables:
    IMAGE: "registry.gitlab.com/path/to/my/image:latest"
  trigger:
    include:
      - project: cicd-templates
        file: /nox_tests.yml
    strategy: depend
  services:
    - name: mcr.microsoft.com/mssql/server:2017-latest
      alias: db
    - name: mcr.microsoft.com/azure-storage/azurite:3.13.1
      alias: storage
cicd-templates/nox_tests.yml:
variables:
  IMAGE: "registry.gitlab.com/path/to/a/default/image:latest"

integration:
  image: '$IMAGE'
  script:
    - python -m nox -s integration
As I said, I could hardcode the services in the template as well, but they might vary based on the parent pipeline, so I'm looking for a more dynamic solution (one possible direction is sketched below).
P.S. The way I implemented the image does work, but if there is a more elegant way, that would be appreciated as well.
Thanks in advance!
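One direction that might work, sketched below: GitLab Runner expands variables in services:name and services:alias, and variables set on a trigger job are passed down to the child pipeline, so the template could declare its services through variables and let the parent override them. This is an untested sketch; the variable names SERVICE_DB_IMAGE and SERVICE_STORAGE_IMAGE are hypothetical:

# cicd-templates/nox_tests.yml -- sketch, variable names hypothetical
variables:
  IMAGE: "registry.gitlab.com/path/to/a/default/image:latest"

integration:
  image: '$IMAGE'
  services:
    # service images are supplied by the parent pipeline's trigger job
    - name: '$SERVICE_DB_IMAGE'
      alias: db
    - name: '$SERVICE_STORAGE_IMAGE'
      alias: storage
  script:
    - python -m nox -s integration

# parent pipeline: pass the service images alongside IMAGE
trigger-nox-template:
  variables:
    IMAGE: "registry.gitlab.com/path/to/my/image:latest"
    SERVICE_DB_IMAGE: "mcr.microsoft.com/mssql/server:2017-latest"
    SERVICE_STORAGE_IMAGE: "mcr.microsoft.com/azure-storage/azurite:3.13.1"
  trigger:
    include:
      - project: cicd-templates
        file: /nox_tests.yml
    strategy: depend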

Related

Is it possible to bind a workspace from within a Pipeline (and not a PipelineRun) or create a PipelineRun with bound workspaces using GUI?

I'd like to be able to start my pipeline, which uses two workspaces, from the Tekton dashboard (GUI), but from what I can see, there is no option to provide workspaces there.
The only two options for creating a PipelineRun with bound workspaces I can think of are to either:
- create it programmatically and apply it using either kubectl or tekton-cli, or
- create a TriggerTemplate with bound workspaces and run the pipeline via a webhook to an EventListener.
My main problem is that both of those options require the developer to go through a very non-user-friendly process. Ideally, I'd like to run pipelines with bound workspaces from the Tekton GUI. Can this somehow be achieved?
I've tried providing the binding in the workspaces section of the Pipeline definition as below:
workspaces:
  - name: source-dir
    persistentVolumeClaim:
      claimName: gradle-cache-pvc
  - name: ssh-creds
    secret:
      secretName: ssh-key-secret
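For reference, workspace bindings belong on the PipelineRun rather than the Pipeline (a Pipeline only declares the names of the workspaces it expects), which is why the binding above is not accepted. A minimal sketch of the programmatic option, assuming a hypothetical Pipeline named my-pipeline:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: my-pipeline-run-
spec:
  pipelineRef:
    name: my-pipeline   # hypothetical Pipeline name
  workspaces:
    # the bindings the Pipeline definition cannot carry
    - name: source-dir
      persistentVolumeClaim:
        claimName: gradle-cache-pvc
    - name: ssh-creds
      secret:
        secretName: ssh-key-secret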

Invoke GitLab CI jobs from inside other jobs

I have many different GitLab CI jobs in my repository, and depending on variables that are set by a user in a config file, I want to execute different sequences of jobs. My approach is to create a scheduler job that analyzes the config file and executes the jobs accordingly. However, I cannot figure out how to execute another job from within a job.
Any help is appreciated!
This would be a good use case for dynamic child pipelines. This is pretty much the only way to customize a pipeline based on the outcome of another job.
From the docs:
generate-config:
  stage: build
  script: generate-ci-config > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml

child-pipeline:
  stage: test
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate-config
In your case, the script generate-ci-config would perform the analysis of your config file and conditionally create a job configuration based on its contents (a sketch follows below).
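For illustration, a hedged sketch of what generate-ci-config might do, inlined as a shell script; the config file name ci-config.yml, the run_lint flag, and the job bodies are all hypothetical:

generate-config:
  stage: build
  script:
    # hypothetical: emit a lint job only if the user's config file requests it
    - |
      if grep -q "run_lint: true" ci-config.yml; then
        cat > generated-config.yml <<EOF
      lint:
        script: make lint
      EOF
      else
        cat > generated-config.yml <<EOF
      noop:
        script: echo "nothing enabled"
      EOF
      fi
  artifacts:
    paths:
      - generated-config.yml

The child-pipeline job from the docs snippet above then triggers whatever this writes to generated-config.yml.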

GitLab CI does not support variable expansion in needs keyword, is there any solution?

I'm creating a template for all the deploy jobs, and I need to be able to use the needs keyword with different values for each deploy job, but GitLab CI, as far as I know, does not support using variables in the needs keyword. Is there any workaround?
This is what I need to do:
# Deploy template
.deploy:
  stage: deploy
  only:
    - develop
  tags:
    - deploy
  needs: ["build:$PROJECT_NAME"]

# Deploy jobs
deploy:package1:
  extends: .deploy
  variables:
    PROJECT_NAME: 'package1'
  #needs: ['build:package1']

deploy:package2:
  extends: .deploy
  variables:
    PROJECT_NAME: 'package2'
  #needs: ['build:package2']
You can't do this. needs: does not support variables.
However, if the template you're making does not contain the job it depends on, the best approach is probably not to use needs: at all; otherwise you greatly increase the likelihood that including your template will produce an invalid configuration.
So your options would be either to (1) include the jobs you depend on in the same template and designate needs: explicitly, or (2) rely on users to provide the needs: key in the deploy job if they want it.
For example, a user can do this:
include:
  - "your template"

# job originates in the project configuration
my_project_job:
  script: "..."

your_deploy_template_job:
  needs: ["my_project_job"] # add the key to the included template job
Or if you provide both jobs in your pipeline configuration, you can use some rules: to keep the jobs from running, and let users enable them and override their script configurations to implement builds.
# your template yaml
your_template_build_job:package1:
  rules:
    - if: '$PACKAGE1_ENABLED'
      when: on_success
    - when: never

your_template_deploy_job:package1:
  rules:
    - if: '$PACKAGE1_ENABLED'
  needs: ["your_template_build_job:package1"]
# ...
Then a user might just do this:
# user project yaml
include:
  - "your template"

variables:
  PACKAGE1_ENABLED: 'true'

your_template_build_job:package1:
  script: "my project build script"
When the user doesn't explicitly enable a job, neither the build nor the deploy job will be in the pipeline configuration. However, they only need to enable the build job (by variable), and the needs: configuration for the deploy job will already be in place.
Neither of these approaches is particularly perfect for very flexible use of templates, unfortunately. But there may be another option...
Workaround: Dynamic child pipelines
As a possible workaround, users could use dynamic child pipelines to generate an entire pipeline configuration with correct needs: based on a minimal configuration. Almost anything is possible with dynamic child pipelines because you can generate the YAML programmatically on the fly, though it may be more trouble than it's worth. A rough sketch follows.
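Everything below is illustrative rather than a tested pipeline; the package names and the build.sh/deploy.sh scripts are hypothetical. The generator writes one build/deploy pair per package, so the needs: entries in the generated YAML are concrete values rather than variables:

generate-pipeline:
  stage: build
  script:
    # hypothetical: emit a build and deploy job per package, with needs: wired up
    - |
      for pkg in package1 package2; do
        cat >> generated-config.yml <<EOF
      build:$pkg:
        stage: build
        script: ./build.sh $pkg
      deploy:$pkg:
        stage: deploy
        needs: ["build:$pkg"]
        script: ./deploy.sh $pkg
      EOF
      done
  artifacts:
    paths:
      - generated-config.yml

trigger-generated:
  stage: deploy
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate-pipeline
    strategy: depend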

Passing variables between Tekton Steps

I am trying to implement a basic Tekton CI pipeline. All the pipeline does is 1) get the source code and 2) build an image with a new version and push it to an image registry.
The image version is generated by a Tekton Step. Images are built by another Tekton Step that uses Kaniko, as described here.
I am aware of using workspaces to pass variables between Tekton Steps, meaning I can write the version to a file in the workspace. But I can't figure out a way to read this version from the file in the Kaniko build step below:
steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:latest
    # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
    env:
      - name: "DOCKER_CONFIG"
        value: "/tekton/home/.docker/"
    command:
      - /kaniko/executor
    args:
      - --dockerfile=$(params.pathToDockerFile)
      - --destination=$(resources.outputs.builtImage.url):<IMAGE-VERSION-NEEDED-HERE>
      - --context=$(params.pathToContext)
      - --build-arg=BASE=alpine:3
There should be a common pattern for resolving this, but I am not sure if I am looking in the right places in the Tekton documentation.
Can anyone offer some pointers?
This is to confirm that I managed to resolve the issue by redesigning the steps as Tasks, as suggested by @Jonas.
Tekton Tasks can have outputs which can be referenced in other Tasks. At the time of writing, Tekton Steps don't seem to have this feature.
For more details, refer to the links in @Jonas's comments above.
All Steps in a Task share the same Pod and thus have access to a shared workspace implemented as an emptyDir volume:
Volumes:
  tekton-internal-workspace:
    Type:      EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit: <unset>
A common way to share data between Steps is to write a file to /workspace and read it in the next Step.
Alternatively, as suggested by @Jonas, if you use different Tasks, you can write a result in the first Task and feed it into a parameter of the second Task in the Pipeline definition.
Using results this way implicitly creates a dependency between the two Tasks, so the Tekton controller won't schedule the second Task until the first one has successfully completed and its results are available.
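A minimal sketch of the results approach; the Task name generate-version, the result name image-version, and the build-image Task are hypothetical:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: generate-version
spec:
  results:
    - name: image-version
      description: the generated image version
  steps:
    - name: generate
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        # writing to the result path publishes the result
        echo -n "1.0.1" > $(results.image-version.path)

The Pipeline then feeds the result into a parameter of the build Task:

tasks:
  - name: get-version
    taskRef:
      name: generate-version
  - name: build-and-push
    taskRef:
      name: build-image   # hypothetical Task taking an imageVersion param
    params:
      - name: imageVersion
        value: $(tasks.get-version.results.image-version)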
You can use the gcr.io/kaniko-project/executor:debug image, which has a shell at /busybox/sh.
And create something like this (pass the kaniko commands via script):
steps:
  - name: write-to-workspace
    image: ubuntu
    script: |
      #!/usr/bin/env bash
      echo "IMAGE_VERSION" > /workspace/FOO
  - name: read-from-workspace
    image: gcr.io/kaniko-project/executor:debug
    script: |
      #!/busybox/sh
      export IMAGE_VERSION=$(cat /workspace/FOO)
      echo "$IMAGE_VERSION"
      /kaniko/executor \
        --dockerfile=$(params.pathToDockerFile) \
        --destination=$(resources.outputs.builtImage.url):"${IMAGE_VERSION}" \
        --context=$(params.pathToContext) \
        --build-arg=BASE=alpine:3
You can refer to this discussion: https://github.com/tektoncd/pipeline/issues/1476

Variables in GitLab CI

I just began with the implementation of CI jobs using gitlab-ci, and I'm trying to create a job template. Basically, the job uses the same image, tags, and script, where I use variables:
.job_e2e_template: &job_e2e
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
And here is one of the jobs using this anchor:
test-chrome:
  <<: *job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/
I'd like this template to be more generic, and I was wondering if I could also use variables in the services and artifacts sections, so I could add a few more lines to my template like this:
services:
  - selenium-hub
  - node-$JOB_BROWSER
artifacts:
  paths:
    - tests/
    - out_$JOB_BROWSER/
However, I cannot find any example of that, and the docs only talk about using variables in scripts. I know that variables are like environment variables for jobs, but I'm not sure if they can be used for other purposes.
Any suggestions?
Short answer: yes, you can. As described in this blog post, GitLab does a deep merge based on the keys.
You can see what your merged pipeline file looks like under CI/CD -> Editor -> View merged YAML.
If you want to modularize your pipeline even further, I would recommend using include instead of YAML anchors, so you can reuse your templates in different pipelines.
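A minimal sketch of the include approach; the project path my-group/ci-templates and the file name e2e-template.yml are hypothetical, extends replaces the anchor, and the template reuses $JOB_BROWSER for the output folder as the question proposes:

# ci-templates project: e2e-template.yml
.job_e2e:
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$JOB_BROWSER docker/selenium/tests/*.side

# consuming project: .gitlab-ci.yml
include:
  - project: 'my-group/ci-templates'
    file: '/e2e-template.yml'

test-chrome:
  extends: .job_e2e
  variables:
    JOB_BROWSER: "chrome"
  services:
    - selenium-hub
    - node-chrome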