Pass ENV vars from one Tekton Task step to the next?

So Tekton Pipelines lets you create individual tasks and connect them into cloud-native CI/CD pipelines. It's pretty cool. But as you can imagine, some things you would think are easy turn out to be pretty tricky. For example, I'm trying to run the Kaniko executor as a task, but that executor needs specific arguments. I can hard-code those args, but that makes the task less reusable, so I'd like a previous task to simply read a config file from source and output or set environment variables for the subsequent task. I'm not sure how to do this. In the case of Kaniko it's especially tricky, because the image doesn't ship a shell. Any suggestions?
Here's a sample task from their documentation that I've tweaked to show roughly what I'm trying to do.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: example-task-name
spec:
  inputs:
    resources:
      - name: workspace
        type: git
    params:
      - name: pathToDockerFile
        type: string
        description: The path to the dockerfile to build
        default: /workspace/workspace/Dockerfile
  outputs:
    resources:
      - name: builtImage
        type: image
  steps:
    - name: ubuntu-example
      image: ubuntu
      args: ["export FOO=bar"]
    - image: gcr.io/example-builders/build-example
      command: ["echo $FOO"]

I tried to achieve the same and found a (hacky) solution:
You can make use of the concept of workspaces and write your variable's value to a file. You can also use this to mix different types of scripts (Python and sh in this example):
steps:
  - name: python-write-value
    image: python:3.7-alpine
    script: |
      #!/usr/bin/env python3
      value = "my_value_to_share"
      f = open("/workspace/value.txt", "w+")
      f.write(value)
  - name: sh-read-value
    image: ubuntu
    script: |
      value=$(cat /workspace/value.txt)
      echo $value

There is no easy way to do this within a single Task for now, but you can use Task params and Task results.
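As a sketch of the results approach (the Task and result names here are illustrative, not from the question): a first Task declares a result and writes its value to the path Tekton provides, and a Pipeline can then feed that result into a parameter of a subsequent Task:

```yaml
# Hypothetical Task that emits a value as a Tekton result.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: read-config
spec:
  results:
    - name: foo
      description: Value read from the config file
  steps:
    - name: set-foo
      image: ubuntu
      script: |
        # Write the value (without a trailing newline) to the result file.
        echo -n "bar" > $(results.foo.path)
```

The consuming Task then receives the value via `$(tasks.read-config.results.foo)` in one of its params. Because Tekton substitutes the value before the step container starts, this works even for shell-less images like the Kaniko executor.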

Related

Azure DevOps yaml trigger pipeline

Is there a way to trigger a pipeline if a particular variables YAML file is updated in Azure Source Control, and use exactly that updated variables YAML file's values? I know how to gather values from a variables YAML, but I cannot figure out how to get the main pipeline to use exactly that particular variables YAML file. Example folder hierarchy in source control:
Variables
  var1.yaml
  var2.yaml
  var3.yaml
My main pipeline (just a draft):
trigger:
  batch: true
  branches:
    include:
      - master
  paths:
    include:
      - variables/var*
pool:
  vmImage: ubuntu-latest
stages:
  - stage: stage1
    displayName: 'My stage'
    jobs:
      - job: Job1
        displayName: 'Run something'
        variables:
          - template: ../variables/...(not sure about this part yet)
        steps:
          - task: PowerShell@2
            displayName: 'Run PoSH command'
            inputs:
              targetType: 'inline'
              script: |
                write-host '${{ variables.varx }}' (again not sure here yet)
Basically: is there a way to make the main pipeline use the variables YAML template that triggered it? There will be many variables YAML files in source control, and once a particular one, for example var2.yaml, is updated, the main pipeline should get triggered and use var2.yaml's values.
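For reference, this is what a statically referenced variables template looks like (file and variable names here are placeholders). The open part of the question is selecting the template dynamically: `${{ }}` template expressions are resolved at compile time, before the triggering path is known.

```yaml
# Hypothetical static reference to one variables file.
variables:
  - template: Variables/var2.yaml   # e.g. contains "variables: { varx: 'some value' }"

steps:
  - task: PowerShell@2
    displayName: 'Print a templated variable'
    inputs:
      targetType: 'inline'
      script: |
        Write-Host '${{ variables.varx }}'
```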

Serverless.yml - Epilogue

One magical day I found a reference to an 'epilogue' key to be used in the Serverless.yml file. It's the best. We use it to clean up after testing that occurs inside our CI/CD pipeline.
- name: Test Integration
  dependencies:
    - Deploy Dev
  task:
    jobs:
      - name: Test endpoints
        commands:
          - cache restore
          - checkout
          - sem-version python 3.8
          - cd integration_tests
          - pip install -r requirements.txt
          # our various testing scripts...
    epilogue:
      always:  # This runs, no matter what. There are other options!!
        commands:
          - python3 99_cleanup.py
    secrets:
      - name: secret_things_go_here
Today, I don't want epilogue: always:, but rather epilogue: only when it doesn't fail:. I cannot find one shred of documentation about this option, nor anything to explain how I got here in the first place.
Oh, internet: How do I run something only when my tests have passed?
WOO!
I was barking up the wrong tree. The solution is within SemaphoreCI, not Serverless.
https://docs.semaphoreci.com/reference/pipeline-yaml-reference/#the-epilogue-property
Options include: on_pass and on_fail.
Whew.
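Applied to the snippet above, the cleanup-only-on-success version would look something like this (a sketch based on the Semaphore epilogue reference linked above):

```yaml
epilogue:
  on_pass:   # runs only if all commands in the job passed
    commands:
      - python3 99_cleanup.py
  on_fail:   # optional: different handling when something failed
    commands:
      - echo "tests failed, skipping cleanup"
```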

Passing variables between Tekton Steps

I am trying to implement a basic Tekton CI pipeline. All the pipeline does is 1) get the source code 2) build an image with a new version and push it to an image registry.
The image version is generated by a Tekton Step. Images are built by another Tekton step that uses Kaniko as described here.
I am aware of using workspaces to pass variables between Tekton steps, meaning I can write the version to a file in the workspace. But I can't figure out a way to read this version from the file in the Kaniko build step below:
steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:latest
    # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
    env:
      - name: "DOCKER_CONFIG"
        value: "/tekton/home/.docker/"
    command:
      - /kaniko/executor
    args:
      - --dockerfile=$(params.pathToDockerFile)
      - --destination=$(resources.outputs.builtImage.url):<IMAGE-VERSION-NEEDED-HERE>
      - --context=$(params.pathToContext)
      - --build-arg=BASE=alpine:3
There should be a common pattern to resolve this but I am not sure if I am looking at the right places in Tekton documentation for this.
Can anyone offer some pointers?
This is to confirm that I managed to resolve the issue by redesigning the steps into tasks, as suggested by @Jonas.
Tekton Tasks can have outputs (results) which can be referred to in other Tasks. At the time of writing, Tekton steps don't seem to have this feature.
For more details, refer to the links in @Jonas' comments above.
All steps in a Task share the same Pod and thus have access to a shared workspace implemented as an emptyDir volume:
Volumes:
  tekton-internal-workspace:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
A common way to share data between steps is to write a file to /workspace in one step and read it in the next step.
Alternatively, as suggested by #Jonas, if you use different Tasks you can write a result in the first Task and feed it into a parameter of the second Task in the Pipeline definition.
Using results this way implicitly creates a dependency between the two Tasks, so the Tekton controller won't schedule the second Task until the first one has successfully completed and results are available.
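A sketch of that Pipeline-level wiring, with hypothetical Task, param, and result names:

```yaml
# Hypothetical Pipeline feeding one Task's result into another Task's param.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-pipeline
spec:
  tasks:
    - name: generate-version
      taskRef:
        name: generate-version   # writes the version to $(results.version.path)
    - name: build-and-push
      taskRef:
        name: kaniko-build
      params:
        - name: imageVersion
          # Referencing the result also orders this task after generate-version.
          value: $(tasks.generate-version.results.version)
```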
You can use the gcr.io/kaniko-project/executor:debug image, which has a shell at /busybox/sh, and create something like this (passing the kaniko command via script):
steps:
  - name: write-to-workspace
    image: ubuntu
    script: |
      #!/usr/bin/env bash
      echo "IMAGE_VERSION" > /workspace/FOO
  - name: read-from-workspace
    image: gcr.io/kaniko-project/executor:debug
    script: |
      #!/busybox/sh
      export IMAGE_VERSION=$(cat /workspace/FOO)
      echo "$IMAGE_VERSION"
      /kaniko/executor \
        --dockerfile=$(params.pathToDockerFile) \
        --destination=$(resources.outputs.builtImage.url):"${IMAGE_VERSION}" \
        --context=$(params.pathToContext) \
        --build-arg=BASE=alpine:3
You can refer to this discussion: https://github.com/tektoncd/pipeline/issues/1476

Variables in gitlab CI

I just began implementing CI jobs using gitlab-ci and I'm trying to create a job template. Basically the jobs use the same image, tags, and script, where I use variables:
.job_e2e_template: &job_e2e
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
And here is one of the jobs using this anchor:
test-chrome:
  <<: *job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/
I'd like this template to be more generic, and I was wondering if I could also use variables in the services and artifacts sections, so I could add a few more lines to my template like this:
services:
  - selenium-hub
  - node-$JOB_BROWSER
artifacts:
  paths:
    - tests/
    - out_$JOB_BROWSER/
However, I cannot find any example of that, and the docs only talk about using variables in scripts. I know that variables are like environment variables for jobs, but I'm not sure if they can be used for other purposes.
Any suggestions?
Short answer: yes, you can. As described in this blog post, GitLab does a deep merge based on the keys.
You can see what your merged pipeline file looks like under CI/CD -> Editor -> View merged YAML.
If you want to modularize your pipeline even further, I would recommend using include instead of YAML anchors, so you can reuse your templates in different pipelines.
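A minimal sketch of the include approach (the file paths are hypothetical; `extends` replaces the anchor/merge-key pattern):

```yaml
# .gitlab/ci/e2e-template.yml -- reusable template in its own file
.job_e2e_template:
  stage: e2e-test
  image: my_image_repo/siderunner

# .gitlab-ci.yml -- pulls the template in and extends it
include:
  - local: '.gitlab/ci/e2e-template.yml'

test-chrome:
  extends: .job_e2e_template
  variables:
    JOB_BROWSER: "chrome"
```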

How can I use Image PipelineResource for input in tekton task

As documented in https://github.com/tektoncd/pipeline/blob/master/docs/resources.md I have configured an Image PipelineResource:
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-data-image
spec:
  type: image
  params:
    - name: url
      value: image-registry.openshift-image-registry.svc:5000/default/my-data
Now when I am using the above PipelineResource as an input to a task:
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: my-task
spec:
  inputs:
    resources:
      - name: my-data-image
        type: image
  steps:
    - name: print-info
      image: image-registry.openshift-image-registry.svc:5000/default/my-task-runner-image:latest
      imagePullPolicy: Always
      command: ["/bin/sh"]
      args:
        - "-c"
        - >
          echo "List the contents of the input image" &&
          ls -R "$(inputs.resources.my-data-image.path)"
I am not able to list the contents of the image, as I get the error:
[test : print-info] List the contents of the input image
[test : print-info] ls: cannot access '/workspace/my-data-image': No such file or directory
The documentation (https://github.com/tektoncd/pipeline/blob/master/docs/resources.md) states that an Image PipelineResource is usually used as a Task output for Tasks that build images.
How can I access the contents of my container data image from within the tekton task?
Currently Tekton does not support Image inputs in the way that OpenShift's build configs support them: https://docs.openshift.com/container-platform/4.2/builds/creating-build-inputs.html#image-source_creating-build-inputs
Image inputs are only useful for variable interpolation, for example "$(inputs.resources.my-image.url)", while ls "$(inputs.resources.my-image.path)" will always show empty content.
There are several ways to access the contents of the image, though, including:
Export the image to a tar file: podman export $(podman create $(inputs.resources.my-image.url) --tls-verify=false) > contents.tar
Copy files from the image: docker cp $(docker create $(inputs.resources.my-image.url)):/my/image/files ./local/copy. The skopeo tool can also copy files, but does not seem to offer sub-directory copy capabilities.
Copy a pod directory to a local directory (https://docs.openshift.com/container-platform/4.2/nodes/containers/nodes-containers-copying-files.html): oc rsync $(inputs.resources.my-image.url):/src /home/user/source
Having said the above, I decided to simply use OpenShift's built-in BuildConfig resources to create a chained build for my pipeline. The variety of build strategies supported by OpenShift out of the box is sufficient for my pipeline scenarios, and the fact that image inputs are supported makes it much easier compared to Tekton pipelines (https://docs.openshift.com/container-platform/4.2/builds/creating-build-inputs.html#image-source_creating-build-inputs). The only advantage Tekton pipelines seem to have is the ability to easily reuse tasks; however, the equivalent can be achieved by creating Operators for OpenShift resources.