Tekton task fails with `more than one PersistentVolumeClaim is bound`

I'm trying to run a Tekton pipeline with a task that needs to access multiple PersistentVolumeClaim workspaces. When I run the pipeline, the task fails with the message "more than one PersistentVolumeClaim is bound". As far as I can tell, there's nothing that forbids having more than one PersistentVolumeClaim bound in the same task, so why am I getting this error and how can I fix it?

Have you tried disabling the Tekton affinity assistant? By default, the affinity assistant co-schedules all pods of a PipelineRun that share a workspace, and while it is enabled a TaskRun may bind at most one PVC-backed workspace; that is the restriction behind this error message.
$ kubectl edit configmap feature-flags -n tekton-pipelines
Look for disable-affinity-assistant and change its value to true.
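For reference, after the edit the relevant part of the ConfigMap should look roughly like this (other feature flags omitted):
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  disable-affinity-assistant: "true"  # the default is "false"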
See:
https://github.com/tektoncd/pipeline/issues/3480
https://github.com/tektoncd/pipeline/issues/3085
Also: make sure your Tekton stack is relatively up to date, as there may have been some regression (unconfirmed) in 0.14.3.


Configure allowed_pull_policies on shared GitLab runner

I'm using GitLab.com's managed CI runners, and I'd like to run my CI jobs using the if-not-present pull policy to avoid the extra minutes it takes to pull the image for each job. Trying to set that value in the .gitlab-ci.yml file gives me this error:
pull_policy ([if-not-present]) defined in GitLab pipeline config is not one of the allowed_pull_policies ([always])
This led me to the config.toml settings for restricting Docker pull policies, so I created a config.toml file at the root of my repository and tried that. However, I still get the same error.
Is config.toml only available for manual/self-hosted runners? Is there any other way to get past this?
Context
Image selection in .gitlab-ci.yml:
default:
  image:
    name: registry.gitlab.com/myorg/myrepo/ci/builder:latest
    pull_policy: if-not-present
Contents of config.toml:
[[runners]]
  executor = "docker"
  [runners.docker]
    pull_policy = ["if-not-present"]
    allowed_pull_policies = ["always", "if-not-present"]
First of all, the config.toml file is not meant to live in your repository; it is the runner's own configuration file on the runner machine (or in the runner container).
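If you administer your own runner, that file typically lives at /etc/gitlab-runner/config.toml on the runner host, and (assuming GitLab Runner 15.1 or later, where allowed_pull_policies is supported) the relevant section would look something like this:
[[runners]]
  executor = "docker"
  [runners.docker]
    # pull policies that jobs may request in .gitlab-ci.yml
    allowed_pull_policies = ["always", "if-not-present"]
On GitLab.com's shared runners you cannot edit this file, which is why the restriction cannot be changed from the repository side.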
In any case, the always pull policy should not cause image pulls to take minutes if the layers are already cached locally: it just ensures you have the latest version by checking the image metadata. If the pulls take minutes, it means that either the layers are not available locally, or the image was actually updated (or the connection to your container registry is so slow that merely checking the metadata takes minutes, which is unlikely).
It is quite possible that GitLab's managed runners have no way to cache layers locally, in which case there would be no practical difference between the always and if-not-present policies. For instance, if you use GitLab SaaS:
A dedicated temporary runner VM hosts and runs each CI job.
(see https://docs.gitlab.com/ee/ci/runners/index.html)
Thus the downloaded layers are discarded as soon as the job finishes.

Passing variables between Tekton Steps

I am trying to implement a basic Tekton CI pipeline. All the pipeline does is 1) get the source code, 2) build an image with a new version and push it to an image registry.
The image version is generated by a Tekton Step. Images are built by another Tekton Step that uses Kaniko, as described here.
I am aware of using workspaces to pass variables between Tekton Steps, meaning I can write the version to a file in the workspace. But I can't figure out a way to read this version from the file in the Kaniko build step below:
steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:latest
    # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
    env:
      - name: "DOCKER_CONFIG"
        value: "/tekton/home/.docker/"
    command:
      - /kaniko/executor
    args:
      - --dockerfile=$(params.pathToDockerFile)
      - --destination=$(resources.outputs.builtImage.url):<IMAGE-VERSION-NEEDED-HERE>
      - --context=$(params.pathToContext)
      - --build-arg=BASE=alpine:3
There should be a common pattern for this, but I am not sure whether I am looking in the right places in the Tekton documentation.
Can anyone offer some pointers?
This is to confirm that I managed to resolve the issue by redesigning the steps into separate Tasks, as suggested by @Jonas.
Tekton Tasks can produce results which can be referenced in other Tasks. At the time of writing, Tekton Steps don't seem to have this feature.
For more details, refer to the links in @Jonas's comments above.
All Steps in a Task share the same Pod and thus have access to a shared workspace implemented as an emptyDir volume:
Volumes:
  tekton-internal-workspace:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
A common way to share data between Steps is to write to a file under /workspace in one Step and read it in the next.
Alternatively, as suggested by @Jonas, if you use separate Tasks you can write a result in the first Task and feed it into a parameter of the second Task in the Pipeline definition.
Using results this way implicitly creates a dependency between the two Tasks, so the Tekton controller won't schedule the second Task until the first one has successfully completed and results are available.
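A minimal sketch of that pattern (the Task names generate-version and kaniko-build are hypothetical, and the timestamp-based tag is just an illustration):
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: generate-version
spec:
  results:
    - name: image-version
      description: Tag to apply to the built image
  steps:
    - name: generate
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        # write the generated tag to the result file Tekton provides
        echo -n "1.0.$(date +%s)" > $(results.image-version.path)
In the Pipeline definition, the build Task then consumes the result through a parameter, which is also what creates the ordering dependency described above:
tasks:
  - name: get-version
    taskRef:
      name: generate-version
  - name: build-and-push
    taskRef:
      name: kaniko-build  # hypothetical Task wrapping the Kaniko step above
    params:
      - name: imageVersion
        value: $(tasks.get-version.results.image-version)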
You can use the gcr.io/kaniko-project/executor:debug image, which has a shell at /busybox/sh.
And create something like this (pass kaniko commands via script):
steps:
  - name: write-to-workspace
    image: ubuntu
    script: |
      #!/usr/bin/env bash
      echo "IMAGE_VERSION" > /workspace/FOO
  - name: read-from-workspace
    image: gcr.io/kaniko-project/executor:debug
    script: |
      #!/busybox/sh
      export IMAGE_VERSION=$(cat /workspace/FOO)
      echo "$IMAGE_VERSION"
      /kaniko/executor \
        --dockerfile=$(params.pathToDockerFile) \
        --destination=$(resources.outputs.builtImage.url):"${IMAGE_VERSION}" \
        --context=$(params.pathToContext) \
        --build-arg=BASE=alpine:3
You can refer to this discussion: https://github.com/tektoncd/pipeline/issues/1476

In what order is the serverless file evaluated?

I have tried to find out in what order the statements of the serverless file are evaluated (it is perhaps more common to say that 'variables are resolved').
I haven't been able to find any information about this, and to some extent it makes working with Serverless feel like a guessing game to me.
As an example, the latest surprise I got was when I tried to run:
$ sls deploy
serverless.yaml:
useDotenv: true
provider:
  stage: ${env:stage}
  region: ${env:region}
.env:
region=us-west-1
stage=dev
I got an error message stating that env is not available at the time when stage is resolved. This was surprising to me since I have been able to use env to resolve other variables in the provider section, and there is nothing in the syntax to indicate that stage is resolved earlier.
In what order is the serverless file evaluated?
In effect you've created a circular dependency: stage is special because it is needed to identify which .env file to load. ${env:stage} would have to be resolved from ${stage}.env, but Serverless needs to know what ${stage} is in order to find ${stage}.env in the first place.
This is why stage is evaluated before the rest of the file.
Stage (and region, actually) are both optional CLI parameters. In your serverless.yml file what you're setting is a default, with the CLI parameter overriding it where different.
Example:
provider:
  stage: staging
  region: ca-central-1
Running serverless deploy --stage prod --region us-west-2 will result in prod and us-west-2 being used as the stage and region (respectively) for that deployment.
I'd suggest removing the variable interpolation for stage, setting a default instead, and overriding it via the CLI when needed.
Then dotenv will know which environment file to use and can resolve the rest of the template.
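Putting that together, a sketch (assuming stage-specific dotenv files; the file names and values here are illustrative):
serverless.yaml:
useDotenv: true
provider:
  stage: dev               # default; override with: sls deploy --stage prod
  region: ${env:region}    # resolvable now, because the stage is known first
.env.dev:
region=us-west-1
.env.prod:
region=us-west-2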

Is there a better way to disable/skip a job in a GitLab CI pipeline than commenting everything out?

I have a job in a GitLab CI pipeline that I want to temporarily disable because it is not working. My test suites run locally but not inside Docker, so until I figure that out I want to skip the test job or the test stage.
I tried commenting out the stage, but then I get an error message from the CI validation: test job: chosen stage does not exist; available stages are .pre, build, deploy, .post
So I can simply comment out the entire job, but I was wondering if there was a better way?
Turns out, there is! It's near the end of the very thorough GitLab CI documentation:
https://docs.gitlab.com/ee/ci/jobs/index.html#hide-jobs
Instead of commenting out the job or stage, simply prefix the job name with a dot (.).
Example from the official documentation:
.hidden_job:
  script:
    - run test
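Applied to the scenario from the question, assuming the job is named test (the script line is a placeholder):
# temporarily disabled: remove the leading dot to re-enable the job
.test:
  stage: test
  script:
    - ./run-tests.sh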

Drone.io secrets not populating in yml appropriately and documentation seems inaccurate

I am running version 0.8.4 as a container in my lab. The CLI is also at version 0.8.4.
I am trying to use a secret in a command that one of my containers runs.
Following the documentation has me needing to sign a repo to allow the job to consume the secret, but the Drone CLI does not seem to have a drone sign command for me to run. So I created the secret with the --skip-verify=true flag. This creates the secret, but when I run the job it errors out. The output in the UI shows a blank space where the secret should be injected.
Here is an excerpt of my .drone.yml where I am trying to inject secrets: -s production -u ${cf_user} -p ${cf_password} --s
I have tried all the following ways to create a secret:
drone secret add <repo_name> --name <key> --value <value> --skip-verify=true
drone secret add <repo_name> --name <key> --value <value>
GUI Creation
I notice that when I create a secret with an all-capital name, the UI shows it in all lowercase while the CLI shows it in capitals.
I also notice that if I include hyphens in the name and try to use it in my .drone.yml, the job errors out immediately with a bad substitution error.
Any help understanding what I am doing wrong would be much appreciated!
I got lost in the different documentation available. I should have been looking here rather than at the secret-guide.
In case I am not alone: I needed to add a secrets block to my pipeline step.
I also needed to access the secrets with $SECRET_KEY rather than ${SECRET_KEY}.
pipeline:
  publish:
    image: governmentpaas/cf-cli
    secrets: [ cf_user, cf_password ]
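For illustration, the listed secrets are injected into the step as upper-cased environment variables, so a command would reference them like this (the API endpoint is a placeholder):
pipeline:
  publish:
    image: governmentpaas/cf-cli
    secrets: [ cf_user, cf_password ]
    commands:
      # each secret arrives as an upper-case environment variable
      - cf login -a https://api.example.com -u $CF_USER -p $CF_PASSWORD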
Just a little update on this one: I stumbled over it as well because the docs are inconsistent.
In version 0.8.5, the only things I had to do were:
add the secrets via CLI or UI
add a secrets array to the step that uses them
There was no need to pass the variables to environment.