How to attach a volume to Docker running in Tekton Pipelines

I have a problem attaching a volume to the Docker image running inside Tekton Pipelines. I have used the task below:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: distributor-base
  namespace: cicd
  labels:
    app.kubernetes.io/version: "0.1"
  annotations:
    tekton.dev/pipelines.minVersion: "0.12.1"
    tekton.dev/platforms: "linux/amd64"
spec:
  params:
    - name: builder_image
      description: The location of the docker builder image.
      default: docker:stable
    - name: dind_image
      description: The location of the docker-in-docker image.
      default: docker:dind
    - name: context
      description: Path to the directory to use as context.
      default: .
  workspaces:
    - name: source
  steps:
    - name: docker-build
      image: docker
      env:
        # Connect to the sidecar over TCP, with TLS.
        - name: DOCKER_HOST
          value: tcp://localhost:2376
        # Verify TLS.
        - name: DOCKER_TLS_VERIFY
          value: '1'
        # Use the certs generated by the sidecar daemon.
        - name: DOCKER_CERT_PATH
          value: /certs/client
        - name: DOCKER_USER
          valueFrom:
            secretKeyRef:
              key: username
              name: docker-auth
        - name: DOCKER_TOKEN
          valueFrom:
            secretKeyRef:
              key: password
              name: docker-auth
        - name: DIND_CONFIG
          valueFrom:
            configMapKeyRef:
              key: file
              name: dind-env
      workingDir: $(workspaces.source.path)
      args:
        - --storage-driver=vfs
        - --debug
      securityContext:
        privileged: true
      script: |
        #!/usr/bin/env sh
        set -e
        pwd
        ls -ltr /workspace/source
        docker run --privileged -v "/workspace/source:/workspace" busybox ls -ltr /workspace
      volumeMounts:
        - mountPath: /certs/client
          name: dind-certs
  sidecars:
    - image: $(params.dind_image)
      name: server
      args:
        - --storage-driver=vfs
        - --debug
        - --userland-proxy=false
      resources:
        requests:
          memory: "512Mi"
      securityContext:
        privileged: true
      env:
        # Write generated certs to the path shared with the client.
        - name: DOCKER_TLS_CERTDIR
          value: /certs
      volumeMounts:
        - mountPath: /certs/client
          name: dind-certs
      # Wait for the dind daemon to generate the certs it will share with the
      # client.
      readinessProbe:
        periodSeconds: 1
        exec:
          command: ['ls', '/certs/client/ca.pem']
  volumes:
    - name: dind-certs
      emptyDir: {}
In the above task, the workspace comes from another git-clone task:

workspaces:
  - name: source
In this task, I am trying to run a Docker image that has access to the workspace folder, because I have to modify some files there. Looking at the script:

pwd
ls -ltr /workspace/source
docker run --privileged -v "/workspace/source:/workspace" busybox ls -ltr /workspace

Below is the console log of the above three commands:
/workspace/source
total 84
-rwxr-xr-x 1 50381 50381 3206 Jun 1 10:13 README.md
-rwxr-xr-x 1 50381 50381 10751 Jun 1 10:13 Jenkinsfile.next
-rwxr-xr-x 1 50381 50381 5302 Jun 1 10:13 wait-for-it.sh
drwxr-xr-x 4 50381 50381 6144 Jun 1 10:13 overlays
-rwxr-xr-x 1 50381 50381 2750 Jun 1 10:13 example-distributor.yaml
drwxr-xr-x 5 50381 50381 6144 Jun 1 10:13 bases
-rw-r--r-- 1 50381 50381 0 Jun 1 10:13 semantic.out
-rw-r--r-- 1 50381 50381 44672 Jun 1 10:13 final.yaml
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
462eb288b104: Pulling fs layer
462eb288b104: Verifying Checksum
462eb288b104: Download complete
462eb288b104: Pull complete
Digest: sha256:ebadf81a7f2146e95f8c850ad7af8cf9755d31cdba380a8ffd5930fba5996095
Status: Downloaded newer image for busybox:latest
total 0
Basically, the pwd command gives me results, and the ls -ltr command also gives me results. But when I attach the /workspace/source folder as a volume to the busybox container, I am not able to see its contents. Since I mounted the volume at /workspace, I would expect the contents of the local folder /workspace/source to show up there, yet the log above shows 0 results. The volume is not getting attached properly.
Can anyone please help me fix this issue?
Below is my PipelineRun, triggered by Tekton Triggers:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: github-gitops-template
  namespace: cicd
spec:
  params:
    - name: gitRevision
      description: The git revision (SHA)
      default: master
    - name: gitRepoUrl
      description: The git repository url ("https://github.com/foo/bar.git")
    - name: gitRepoName
      description: The git repository name
    - name: branchUrl
      description: The git repository branch url
    - name: repoFullName
      description: The git repository full name
    - name: commitSha
      description: The git commit sha
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: $(tt.params.gitRepoName)-
      spec:
        timeout: 0h10m
        pipelineRef:
          name: gitops-pipeline
        serviceAccountName: github-service-account
        params:
          - name: url
            value: $(tt.params.gitRepoUrl)
          - name: branch
            value: $(tt.params.gitRevision)
          - name: repoName
            value: $(tt.params.gitRepoName)
          - name: branchUrl
            value: $(tt.params.branchUrl)
          - name: repoFullName
            value: $(tt.params.repoFullName)
          - name: commitSha
            value: $(tt.params.commitSha)
        workspaces:
          - name: ws
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 50Mi
Below is my TaskRun status:
completionTime: '2022-06-01T10:13:47Z'
conditions:
  - lastTransitionTime: '2022-06-01T10:13:47Z'
    message: All Steps have completed executing
    reason: Succeeded
    status: 'True'
    type: Succeeded
podName: gitops-core-business-tzb7f-distributor-base-pod
sidecars:
  - container: sidecar-server
    imageID: 'docker-pullable://gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/nop@sha256:1d65a20cd5fbc79dc10e48ce9d2f7251736dac13b302b49a1c9a8717c5f2b5c5'
    name: server
    terminated:
      containerID: 'docker://d5e96143812bb4912c6297f7706f141b9036c6ee77efbffe2bcb7edb656755a5'
      exitCode: 0
      finishedAt: '2022-06-01T10:13:49Z'
      message: Sidecar container successfully stopped by nop image
      reason: Completed
      startedAt: '2022-06-01T10:13:37Z'
startTime: '2022-06-01T10:13:30Z'
steps:
  - container: step-docker-build
    imageID: 'docker-pullable://docker@sha256:5bc07a93c9b28e57a58d57fbcf437d1551ff80ae33b4274fb60a1ade2d6c9da4'
    name: docker-build
    terminated:
      containerID: 'docker://18aa9111f180f9cfc6b9d86d5ef1da9f8dbe83375bb282bba2776b5bbbcaabfb'
      exitCode: 0
      finishedAt: '2022-06-01T10:13:46Z'
      reason: Completed
      startedAt: '2022-06-01T10:13:42Z'
taskSpec:
  params:
    - default: 'docker:stable'
      description: The location of the docker builder image.
      name: builder_image
      type: string
    - default: 'docker:dind'
      description: The location of the docker-in-docker image.
      name: dind_image
      type: string
    - default: .
      description: Path to the directory to use as context.
      name: context
      type: string
  sidecars:
    - args:
        - '--storage-driver=vfs'
        - '--debug'
        - '--userland-proxy=false'
      env:
        - name: DOCKER_TLS_CERTDIR
          value: /certs
      image: $(params.dind_image)
      name: server
      readinessProbe:
        exec:
          command:
            - ls
            - /certs/client/ca.pem
        periodSeconds: 1
      resources:
        requests:
          memory: 512Mi
      securityContext:
        privileged: true
      volumeMounts:
        - mountPath: /certs/client
          name: dind-certs
  steps:
    - args:
        - '--storage-driver=vfs'
        - '--debug'
      env:
        - name: DOCKER_HOST
          value: 'tcp://localhost:2376'
        - name: DOCKER_TLS_VERIFY
          value: '1'
        - name: DOCKER_CERT_PATH
          value: /certs/client
        - name: DOCKER_USER
          valueFrom:
            secretKeyRef:
              key: username
              name: docker-auth
        - name: DOCKER_TOKEN
          valueFrom:
            secretKeyRef:
              key: password
              name: docker-auth
        - name: DIND_CONFIG
          valueFrom:
            configMapKeyRef:
              key: file
              name: dind-env
      image: docker
      name: docker-build
      resources: {}
      script: |
        #!/usr/bin/env sh
        set -e
        pwd
        ls -ltr /workspace/source
        docker run --privileged -v "/workspace/source:/workspace" busybox ls -ltr /workspace
      securityContext:
        privileged: true
      volumeMounts:
        - mountPath: /certs/client
          name: dind-certs
      workingDir: $(workspaces.source.path)
  volumes:
    - emptyDir: {}
      name: dind-certs
  workspaces:
    - name: source

Basically, we have to attach the workspace volume to the sidecar as well, since docker run happens inside the sidecar: the Docker daemon lives there, so the host path passed to -v is resolved against the sidecar's filesystem, not the step's.

volumeMounts:
  - mountPath: /certs/client
    name: dind-certs
  - name: $(workspaces.source.volume)
    mountPath: $(workspaces.source.path)
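Putting it together, the sidecar section of the task would look something like the sketch below (assembled from the task above; $(workspaces.source.volume) is Tekton's variable for the name of the volume backing the workspace):

sidecars:
  - image: $(params.dind_image)
    name: server
    args:
      - --storage-driver=vfs
      - --debug
      - --userland-proxy=false
    securityContext:
      privileged: true
    env:
      # Write generated certs to the path shared with the client.
      - name: DOCKER_TLS_CERTDIR
        value: /certs
    volumeMounts:
      - mountPath: /certs/client
        name: dind-certs
      # Mount the workspace into the sidecar at the same path the step sees,
      # so that host paths passed to "docker run -v" resolve inside the daemon.
      - name: $(workspaces.source.volume)
        mountPath: $(workspaces.source.path)
    readinessProbe:
      periodSeconds: 1
      exec:
        command: ['ls', '/certs/client/ca.pem']

With this in place, /workspace/source exists in the daemon's filesystem, so the bind mount into the busybox container is populated.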

Related

Tekton build Docker image with Kaniko - please provide a valid path to a Dockerfile within the build context with --dockerfile

I am new to Tekton (https://tekton.dev/) and I am trying to:
1. Clone the repository
2. Build a Docker image with the Dockerfile
I have a Tekton pipeline and when I try to execute it, I get the following error:
Error: error resolving dockerfile path: please provide a valid path to a Dockerfile within the build context with --dockerfile
Please find the Tekton manifests below:
1. Pipeline.yml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: clone-read
spec:
  description: |
    This pipeline clones a git repo, then echoes the README file to stdout.
  params:
    - name: repo-url
      type: string
      description: The git repo URL to clone from.
    - name: image-name
      type: string
      description: for Kaniko
    - name: image-path
      type: string
      description: path of Dockerfile for Kaniko
  workspaces:
    - name: shared-data
      description: |
        This workspace contains the cloned repo files, so they can be read by the
        next task.
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: shared-data
      params:
        - name: url
          value: $(params.repo-url)
    - name: show-readme
      runAfter: ["fetch-source"]
      taskRef:
        name: show-readme
      workspaces:
        - name: source
          workspace: shared-data
    - name: build-push
      runAfter: ["show-readme"]
      taskRef:
        name: kaniko
      workspaces:
        - name: source
          workspace: shared-data
      params:
        - name: IMAGE
          value: $(params.image-name)
        - name: CONTEXT
          value: $(params.image-path)
2. PipelineRun.yml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: clone-read-run
spec:
  pipelineRef:
    name: clone-read
  podTemplate:
    securityContext:
      fsGroup: 65532
  workspaces:
    - name: shared-data
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
    # - name: git-credentials
    #   secret:
    #     secretName: git-credentials
  params:
    - name: repo-url
      value: https://github.com/iamdempa/tekton-demos.git
    - name: image-name
      value: "python-test"
    - name: image-path
      value: $(workspaces.shared-data.path)/BuildDockerImage2
And here's my repository structure:
. . .
.
├── BuildDockerImage2
│   ├── 1.show-readme.yml
│   ├── 2. Pipeline.yml
│   ├── 3. PipelineRun.yml
│   └── Dockerfile
├── README.md
. . .
7 directories, 25 files
Could someone help me figure out what is wrong here? Thank you.
I was able to find the issue. It was with the way I had provided the path.
In the kaniko task, the CONTEXT param determines the path to the Dockerfile. Its default value is ./, and the workspace path is prepended to it as below:
$(workspaces.source.path)/$(params.CONTEXT)
That means the workspace path is already being prepended, so I don't need to repeat it, as I had done in the image-path value below:
$(workspaces.shared-data.path)/BuildDockerImage2
Instead, I had to pass just the folder name:

- name: image-path
  value: BuildDockerImage2
This fixed the problem I had.
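For reference, the build step of the catalog kaniko task composes its context roughly like this (a paraphrased sketch, not the exact catalog source), which shows why the workspace prefix must not be passed in again:

steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:latest
    args:
      - --dockerfile=$(params.DOCKERFILE)
      # The workspace path is already prepended here, so CONTEXT should be
      # a path relative to the workspace root, e.g. "BuildDockerImage2".
      - --context=$(workspaces.source.path)/$(params.CONTEXT)
      - --destination=$(params.IMAGE)

With CONTEXT=BuildDockerImage2, the resolved context becomes /workspace/source/BuildDockerImage2, which is where the Dockerfile lives.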

task hello-world has failed: declared workspace "output" is required but has not been bound

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-world
spec:
  workspaces:
    - name: output
      description: folder where output goes
  steps:
    - name: hello-world1
      image: ubuntu
      command: ["/bin/bash"]
      args: ["-c", "echo Hello World 1! > $(workspaces.output.path)/message1.txt"]
    - name: hello-world2
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        set -xe
        echo Hello World 2! > $(workspaces.output.path)/message2.txt
From your error message, we can guess that the TaskRun (and PipelineRun) trying to run this task does not define a workspace to be used with your Task.
Say I would like to call your Task: I would write a Pipeline, which should include something like:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-world
spec:
  tasks:
    - name: hello-world-task
      taskRef:
        name: hello-world
      workspaces:
        - name: output
          workspace: my-workspace
  workspaces:
    - name: my-workspace
      optional: true
And then, start this pipeline with the following PipelineRun:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: hello-world-0
spec:
  pipelineRef:
    name: hello-world
  workspaces:
    - name: my-workspace
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
See Tekton Pipelines Workspaces docs.
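Alternatively, to run the Task on its own, a TaskRun can bind the workspace directly. A minimal sketch, using an emptyDir (fine here, since nothing needs to outlive the run):

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: hello-world-run-0
spec:
  taskRef:
    name: hello-world
  workspaces:
    # Binding the declared "output" workspace is exactly what the
    # "declared workspace ... has not been bound" error says is missing.
    - name: output
      emptyDir: {}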

Tekton - mount path workspace issue - Error of path

Currently, I am trying to deploy tutum-hello-world. I have written a script for it, but it does not work as it is supposed to.
I am certain that this issue is related to the workspace.
UPDATE
Here is my code for task-tutum-deploy.yaml-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      script: |
        kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/
Error -
root@master1:~/tekton-scripts# tkn taskrun logs tutum-deploy-run-8sq8s -f -n default
[tutum-deploy] + kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
[tutum-deploy] error: the path "/root/tekton-scripts/tutum-deploy.yaml" cannot be accessed: stat /root/tekton-scripts/tutum-deploy.yaml: permission denied
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-14T12:54:01.096Z","type":"InternalTektonResult"}]
PS - I have placed my script on the master node at - /root/tekton-scripts/tutum-deploy.yaml
root@master1:~/tekton-scripts# ls -l tutum-deploy.yaml
-rwxrwxrwx 1 root root 626 Jun 11 11:31 tutum-deploy.yaml
OLD SCRIPT
Here is my code for task-tutum-deploy.yaml-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/tutum-deploy.yaml
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "./tutum-deploy.yaml"
Here is my code for tutum-deploy.yaml, which is present on the master node of the Kubernetes cluster with read, write, and execute permissions -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-tutum
  labels:
    service: hello-world-tutum
spec:
  replicas: 1
  selector:
    matchLabels:
      service: hello-world-tutum
  template:
    metadata:
      labels:
        service: hello-world-tutum
    spec:
      containers:
        - name: tutum-hello-world
          image: tutum/hello-world:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-tutum
spec:
  type: NodePort
  selector:
    service: hello-world-tutum
  ports:
    - name: "80"
      port: 80
      targetPort: 80
      nodePort: 30050
I ran the following commands from the master node of my Kubernetes cluster -
1. kubectl apply -f task-tutum-deploy.yaml
2. tkn task start tutum-deploy
Error -
Using tekton command - $ tkn taskrun logs tutum-deploy-run-tvlll -f -n default
task tutum-deploy has failed: "step-tutum-deploy" exited with code 1 (image: "docker-pullable://bitnami/kubectl@sha256:b83299ee1d8657ab30fb7b7925b42a12c613e37609d2b4493b4b27b057c21d0f"); for logs run: kubectl -n default logs tutum-deploy-run-tvlll-pod-vbl5g -c step-tutum-deploy
[tutum-deploy] error: the path "./tutum-deploy.yaml" does not exist
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-11T14:01:49.786Z","type":"InternalTektonResult"}]
The error is from this part of your YAML:
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/tutum-deploy.yaml
spec.workspaces.mountPath expects a directory, rather than a file, as you have specified here. You probably meant /root/tekton-scripts/ instead, but I am unfamiliar with tutum-hello-world.
If you look at the documentation you will see that all references to mountPath are directories rather than files.
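Applying that to the OLD SCRIPT version gives something like the sketch below (which matches the UPDATE section above): the mountPath names the directory, and the file is addressed inside the step:

spec:
  workspaces:
    - name: messages
      optional: true
      # mountPath must be a directory, not a file.
      mountPath: /root/tekton-scripts/
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      command: ["kubectl"]
      args: ["apply", "-f", "/root/tekton-scripts/tutum-deploy.yaml"]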

K3s Vault Cluster -- http: server gave HTTP response to HTTPS client

I am trying to set up a 3-node Vault cluster with raft storage enabled. I am currently at a loss as to why the readiness probe (and also the liveness probe) is returning:
Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
I am using Helm 3: helm install vault hashicorp/vault --namespace vault -f override-values.yaml
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: false

server:
  image:
    repository: "hashicorp/vault"
    tag: "1.5.5"

  resources:
    requests:
      memory: 1Gi
      cpu: 2000m
    limits:
      memory: 2Gi
      cpu: 2000m

  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  extraVolumes:
    # holds the cert file and the key file
    - type: secret
      name: tls-server
    # holds the ca certificate
    - type: secret
      name: tls-ca

  auditStorage:
    enabled: true

  standalone:
    enabled: false

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
          tls_key_file = "/vault/userconfig/tls-server/tls.key"
          tls_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
        }
        storage "raft" {
          path = "/vault/data"
          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
        }
        service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200
Output from kubectl describe pod vault-0:
Name:         vault-0
Namespace:    vault
Priority:     0
Node:         node4/10.211.55.7
Start Time:   Wed, 11 Nov 2020 15:06:47 +0700
Labels:       app.kubernetes.io/instance=vault
              app.kubernetes.io/name=vault
              component=server
              controller-revision-hash=vault-5c4b47bdc4
              helm.sh/chart=vault-0.8.0
              statefulset.kubernetes.io/pod-name=vault-0
              vault-active=false
              vault-initialized=false
              vault-perf-standby=false
              vault-sealed=true
              vault-version=1.5.5
Annotations:  <none>
Status:       Running
IP:           10.42.4.82
IPs:
  IP:           10.42.4.82
Controlled By:  StatefulSet/vault
Containers:
  vault:
    Container ID:  containerd://6dfde76051f44c22003cc02a880593792d304e74c56d717eef982e0e799672f2
    Image:         hashicorp/vault:1.5.5
    Image ID:      docker.io/hashicorp/vault@sha256:90cfeead29ef89fdf04383df9991754f4a54c43b2fb49ba9ff3feb713e5ef1be
    Ports:         8200/TCP, 8201/TCP, 8202/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      /bin/sh
      -ec
    Args:
      cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
      [ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
      [ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
      [ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
      [ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
      [ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
      [ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
      /usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
    State:          Running
      Started:      Wed, 11 Nov 2020 15:25:21 +0700
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 11 Nov 2020 15:19:10 +0700
      Finished:     Wed, 11 Nov 2020 15:20:20 +0700
    Ready:          False
    Restart Count:  8
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     2
      memory:  1Gi
    Liveness:   http-get https://:8200/v1/sys/health%3Fstandbyok=true delay=60s timeout=3s period=5s #success=1 #failure=2
    Readiness:  http-get https://:8200/v1/sys/health%3Fstandbyok=true&sealedcode=204&uninitcode=204 delay=5s timeout=3s period=5s #success=1 #failure=2
    Environment:
      HOST_IP:              (v1:status.hostIP)
      POD_IP:               (v1:status.podIP)
      VAULT_K8S_POD_NAME:   vault-0 (v1:metadata.name)
      VAULT_K8S_NAMESPACE:  vault (v1:metadata.namespace)
      VAULT_ADDR:           https://127.0.0.1:8200
      VAULT_API_ADDR:       https://$(POD_IP):8200
      SKIP_CHOWN:           true
      SKIP_SETCAP:          true
      HOSTNAME:             vault-0 (v1:metadata.name)
      VAULT_CLUSTER_ADDR:   https://$(HOSTNAME).vault-internal:8201
      VAULT_RAFT_NODE_ID:   vault-0 (v1:metadata.name)
      HOME:                 /home/vault
      VAULT_CACERT:         /vault/userconfig/tls-ca/ca.crt
    Mounts:
      /home/vault from home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from vault-token-lfgnj (ro)
      /vault/audit from audit (rw)
      /vault/config from config (rw)
      /vault/data from data (rw)
      /vault/userconfig/tls-ca from userconfig-tls-ca (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-vault-0
    ReadOnly:   false
  audit:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  audit-vault-0
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      vault-config
    Optional:  false
  userconfig-tls-ca:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tls-ca
    Optional:    false
  home:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  vault-token-lfgnj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  vault-token-lfgnj
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned vault/vault-0 to node4
  Warning  Unhealthy  17m (x2 over 17m)     kubelet            Liveness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true": http: server gave HTTP response to HTTPS client
  Normal   Killing    17m                   kubelet            Container vault failed liveness probe, will be restarted
  Normal   Pulled     17m (x2 over 18m)     kubelet            Container image "hashicorp/vault:1.5.5" already present on machine
  Normal   Created    17m (x2 over 18m)     kubelet            Created container vault
  Normal   Started    17m (x2 over 18m)     kubelet            Started container vault
  Warning  Unhealthy  13m (x56 over 18m)    kubelet            Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
  Warning  BackOff    3m41s (x31 over 11m)  kubelet            Back-off restarting failed container
Logs from vault-0
2020-11-12T05:50:43.554426582Z ==> Vault server configuration:
2020-11-12T05:50:43.554524646Z
2020-11-12T05:50:43.554574639Z Api Address: https://10.42.4.85:8200
2020-11-12T05:50:43.554586234Z Cgo: disabled
2020-11-12T05:50:43.554596948Z Cluster Address: https://vault-0.vault-internal:8201
2020-11-12T05:50:43.554608637Z Go Version: go1.14.7
2020-11-12T05:50:43.554678454Z Listener 1: tcp (addr: "[::]:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
2020-11-12T05:50:43.554693734Z Log Level: info
2020-11-12T05:50:43.554703897Z Mlock: supported: true, enabled: false
2020-11-12T05:50:43.554713272Z Recovery Mode: false
2020-11-12T05:50:43.554722579Z Storage: raft (HA available)
2020-11-12T05:50:43.554732788Z Version: Vault v1.5.5
2020-11-12T05:50:43.554769315Z Version Sha: f5d1ddb3750e7c28e25036e1ef26a4c02379fc01
2020-11-12T05:50:43.554780425Z
2020-11-12T05:50:43.672225223Z ==> Vault server started! Log data will stream in below:
2020-11-12T05:50:43.672519986Z
2020-11-12T05:50:43.673078706Z 2020-11-12T05:50:43.543Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
2020-11-12T05:51:57.838970945Z ==> Vault shutdown triggered
I am running a 6-node Rancher k3s cluster (v1.19.3+k3s2) on my Mac.
Any help would be appreciated

Sql script file is not getting copied to docker-entrypoint-initdb.d folder of mysql container?

My init.sql (script) is not getting copied to docker-entrypoint-initdb.d.
Note that the problem doesn't occur when I run it locally or on my server. It happens only when using Azure DevOps, with build and release pipelines.
There seems to be a mistake in the hostPath (containing the SQL script) in the PersistentVolume YAML file in cases where the file is placed in Azure Repos.
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    # main project folder in azure repos which contains all files including sql script
    path: "/devops-sample"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: "IfNotPresent"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_PASSWORD
              value: kovaion
            - name: MYSQL_USER
              value: vignesh
            - name: MYSQL_DATABASE
              value: data-core
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
Currently, the docker-entrypoint-initdb.d folder seems to be empty (nothing is getting copied).
How do I set the full host path in the MySQL PersistentVolume if the SQL script is placed in Azure Repos inside the devops-sample folder?
The MySQL data directory storage location is wrong. You should mount the persistent storage to /var/lib/mysql/data.
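A sketch of what that change could look like in the Deployment, assuming the init script is delivered separately through a hypothetical ConfigMap named mysql-initdb-config that holds init.sql:

          volumeMounts:
            # Persistent storage backs MySQL's data directory.
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            # The init script is mounted on its own into the init folder.
            - name: mysql-initdb
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
        - name: mysql-initdb
          configMap:
            name: mysql-initdb-config  # hypothetical ConfigMap holding init.sql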