Tekton - mount path workspace issue - Error of path

Currently, I am trying to deploy tutum-hello-world. I have written a script for it, but it does not work as expected.
I am fairly certain that this issue is related to the workspace.
UPDATE
Here is my code for task-tutum-deploy.yaml:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: tutum-deploy
spec:
steps:
- name: tutum-deploy
image: bitnami/kubectl
script: |
kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
workspaces:
- name: messages
optional: true
mountPath: /root/tekton-scripts/
Error -
root@master1:~/tekton-scripts# tkn taskrun logs tutum-deploy-run-8sq8s -f -n default
[tutum-deploy] + kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
[tutum-deploy] error: the path "/root/tekton-scripts/tutum-deploy.yaml" cannot be accessed: stat /root/tekton-scripts/tutum-deploy.yaml: permission denied
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-14T12:54:01.096Z","type":"InternalTektonResult"}]
PS: I have placed my script on the master node at /root/tekton-scripts/tutum-deploy.yaml
root@master1:~/tekton-scripts# ls -l tutum-deploy.yaml
-rwxrwxrwx 1 root root 626 Jun 11 11:31 tutum-deploy.yaml
OLD SCRIPT
Here is my code for task-tutum-deploy.yaml:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: tutum-deploy
spec:
workspaces:
- name: messages
optional: true
mountPath: /root/tekton-scripts/tutum-deploy.yaml
steps:
- name: tutum-deploy
image: bitnami/kubectl
command: ["kubectl"]
args:
- "apply"
- "-f"
- "./tutum-deploy.yaml"
Here is my code for tutum-deploy.yaml, which is present on the master node of the Kubernetes cluster with read, write and execute permissions:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world-tutum
labels:
service: hello-world-tutum
spec:
replicas: 1
selector:
matchLabels:
service: hello-world-tutum
template:
metadata:
labels:
service: hello-world-tutum
spec:
containers:
- name: tutum-hello-world
image: tutum/hello-world:latest
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: hello-world-tutum
spec:
type: NodePort
selector:
service: hello-world-tutum
ports:
- name: "80"
port: 80
targetPort: 80
nodePort: 30050
I ran the following commands from the master node of my Kubernetes cluster -
1. kubectl apply -f task-tutum-deploy.yaml
2. tkn task start tutum-deploy
Error -
Using the tkn command: $ tkn taskrun logs tutum-deploy-run-tvlll -f -n default
task tutum-deploy has failed: "step-tutum-deploy" exited with code 1 (image: "docker-pullable://bitnami/kubectl@sha256:b83299ee1d8657ab30fb7b7925b42a12c613e37609d2b4493b4b27b057c21d0f"); for logs run: kubectl -n default logs tutum-deploy-run-tvlll-pod-vbl5g -c step-tutum-deploy
[tutum-deploy] error: the path "./tutum-deploy.yaml" does not exist
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-11T14:01:49.786Z","type":"InternalTektonResult"}]

The error is from this part of your YAML:
spec:
workspaces:
- name: messages
optional: true
mountPath: /root/tekton-scripts/tutum-deploy.yaml
spec.workspaces.mountPath expects a directory rather than a file, as you have specified here. You probably mean /root/tekton-scripts/ instead, but I am unfamiliar with tutum-hello-world.
If you look at the documentation, you will see that all references to mountPath are directories rather than files.
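For illustration, here is a minimal sketch of a Task that references the file through the workspace path variable rather than a hard-coded path; it assumes the file is supplied via the bound messages workspace:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  workspaces:
    - name: messages
      mountPath: /root/tekton-scripts/   # a directory, not a file
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      script: |
        # $(workspaces.messages.path) resolves to the workspace's mount directory
        kubectl apply -f $(workspaces.messages.path)/tutum-deploy.yaml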

Related

I tried to resize a PersistentVolumeClaim with kubectl patch pvc to increase storage from 10Mi to 70Mi, but it gives this error:

$ k patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'
Error from server (Forbidden): persistentvolumeclaims "pv-volume" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Question: Create a PVC
name: pv-volume, class: csi-hostpath-sc, capacity: 10Mi
Create a pod which mounts the PVC as a volume
name: web-server, image: nginx, mountPath: /usr/share/nginx/html
Configure the new pod to have ReadWriteOnce access.
Finally, use kubectl edit or kubectl patch to expand the PVC to a capacity of 70Mi and record that change.
Please give me a solution to patch the PVC and record the change.
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv
spec:
capacity:
storage: 70Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: csi-hostpath-sc
hostPath:
path: /usr/share/nginx/html
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-volume
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 10Mi
storageClassName: csi-hostpath-sc
pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: web-server
spec:
containers:
- name: web-server
image: nginx
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: pv-volume
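For context on the Forbidden error above: a PVC can only be expanded when it was dynamically provisioned and its StorageClass allows expansion. A hedged sketch, assuming the csi-hostpath-sc StorageClass may be modified and its provisioner supports resize:
# Allow expansion on the StorageClass (allowVolumeExpansion is a top-level field):
kubectl patch storageclass csi-hostpath-sc -p '{"allowVolumeExpansion": true}'
# Patch the claim and record the command in the kubernetes.io/change-cause annotation:
kubectl patch pvc pv-volume --record -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'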

task hello-world has failed: declared workspace "output" is required but has not been bound

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: hello-world
spec:
workspaces:
- name: output
description: folder where output goes
steps:
- name: hello-world1
image: ubuntu
command: ["/bin/bash"]
args: ["-c", "echo Hello World 1! > $(workspaces.output.path)<200b>/message1.txt"]
- name: hello-world2
image: ubuntu
script: |
#!/usr/bin/env bash
set -xe
echo Hello World 2! > $(workspaces.output.path)/message2.txt
From your error message, we can guess that the TaskRun (and PipelineRun) trying to run this task does not define a workspace to be used with your Task.
Say I would like to call your Task: I would write a Pipeline, which should include something like:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: hello-world
spec:
tasks:
- name: hello-world-task
taskRef:
name: hello-world
workspaces:
- name: output
workspace: my-workspace
workspaces:
- name: my-workspace
optional: true
And then, start this pipeline with the following PipelineRun:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: hello-world-0
spec:
pipelineRef: hello-world
workspaces:
- name: my-workspace
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
See Tekton Pipelines Workspaces docs.
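Alternatively, if you only want to run the Task on its own, a TaskRun can bind the workspace directly. A minimal sketch using an emptyDir volume (the TaskRun name here is only an example):
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: hello-world-run-0
spec:
  taskRef:
    name: hello-world
  workspaces:
    - name: output      # must match the workspace declared in the Task
      emptyDir: {}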

SQL script file is not getting copied to the docker-entrypoint-initdb.d folder of the MySQL container?

My init.sql script is not getting copied to docker-entrypoint-initdb.d.
Note that the problem doesn't occur when I run it locally or on my server. It happens only when using Azure DevOps with a build and release pipeline.
There seems to be a mistake in the hostPath (containing the SQL script) in the PersistentVolume YAML file when the file is placed in Azure Repos.
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-initdb-pv-volume
labels:
type: local
app: mysql
spec:
storageClassName: manual
capacity:
storage: 1Mi
accessModes:
- ReadOnlyMany
hostPath:
path: "/devops-sample" // main project folder in azure repos which
contains all files including sql script.
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mysql-initdb-pv-claim
labels:
app: mysql
spec:
storageClassName: manual
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1Mi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
protocol: TCP
targetPort: 3306
selector:
app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
imagePullPolicy: "IfNotPresent"
env:
- name: MYSQL_ROOT_PASSWORD
value: root
- name: MYSQL_PASSWORD
value: kovaion
- name: MYSQL_USER
value: vignesh
- name: MYSQL_DATABASE
value: data-core
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /docker-entrypoint-initdb.d
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-initdb-pv-claim
Currently the docker-entrypoint-initdb.d folder seems to be empty (nothing is getting copied).
How do I set the full host path in the MySQL PersistentVolume if the SQL script is placed in Azure Repos inside the devops-sample folder?
The MySQL data directory storage location is wrong. You should mount the persistent storage at /var/lib/mysql/data.
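For illustration, a hedged sketch of how the two mounts are often separated; the official mysql image stores its data under /var/lib/mysql and runs *.sql files from /docker-entrypoint-initdb.d, and the mysql-initdb volume name below is hypothetical:
volumeMounts:
  - name: mysql-initdb                      # volume backed by the PVC that holds init.sql
    mountPath: /docker-entrypoint-initdb.d
  - name: mysql-persistent-storage          # data directory of the official mysql image
    mountPath: /var/lib/mysql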

Zalenium: 504 Gateway Time-out in OpenShift environment

My Zalenium installation on an OpenShift environment is far from stable. The web UI (admin view with VNC, dashboard, Selenium console) works about 50% of the time, and connecting with a RemoteWebDriver doesn't work at all.
Error:
504 Gateway Time-out
The server didn't respond in time.
WebDriver error:
org.openqa.selenium.WebDriverException: Unable to parse remote response: <html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:115)
oc version:
oc v3.9.0+191fece
kubernetes v1.9.1+a0ce1bc657
Zalenium template:
apiVersion: v1
kind: Template
metadata:
name: zalenium
annotations:
"openshift.io/display-name": "Zalenium"
"description": "Disposable Selenium Grid for use in OpenShift"
message: |-
A Zalenium grid has been created in your project. Continue to overview to verify that it exists and start the deployment.
parameters:
- name: PROJECTNAME
description: The namespace / project name of this project
displayName: Namespace
required: true
- name: HOSTNAME
description: hostname used for route creation
displayName: route hostname
required: true
- name: "VOLUME_CAPACITY"
displayName: "Volume capacity for the disk that contains the test results."
description: "The volume is used to store all the test results, including logs and video recordings of the tests."
value: "10Gi"
required: true
objects:
- apiVersion: v1
kind: DeploymentConfig
metadata:
generation: 1
labels:
app: zalenium
role: hub
name: zalenium
spec:
replicas: 1
selector:
app: zalenium
role: hub
strategy:
activeDeadlineSeconds: 21600
resources: {}
type: Rolling
template:
metadata:
labels:
app: zalenium
role: hub
spec:
containers:
- args:
- start
- --seleniumImageName
- "elgalu/selenium:latest"
- --sendAnonymousUsageInfo
- "false"
image: dosel/zalenium:latest
imagePullPolicy: Always
name: zalenium
ports:
- containerPort: 4444
protocol: TCP
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /home/seluser/videos
name: zalenium-volume
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
serviceAccount: deployer
serviceAccountName: deployer
volumes:
- name: zalenium-volume
persistentVolumeClaim:
claimName: zalenium-pvc
test: false
triggers:
- type: ConfigChange
- apiVersion: v1
kind: Route
metadata:
labels:
app: zalenium
annotations:
openshift.io/host.generated: 'true'
haproxy.router.openshift.io/timeout: "60"
name: zalenium
spec:
host: zalenium-4444-${PROJECTNAME}.${HOSTNAME}
to:
kind: Service
name: zalenium
port:
targetPort: selenium-4444
- apiVersion: v1
kind: Route
metadata:
labels:
app: zalenium
annotations:
openshift.io/host.generated: 'true'
haproxy.router.openshift.io/timeout: "60"
name: zalenium-4445
spec:
host: zalenium-4445-${PROJECTNAME}.${HOSTNAME}
to:
kind: Service
name: zalenium
port:
targetPort: selenium-4445
- apiVersion: v1
kind: Service
metadata:
labels:
app: zalenium
name: zalenium
spec:
ports:
- name: selenium-4444
port: 4444
protocol: TCP
targetPort: 4444
- name: selenium-4445
port: 4445
protocol: TCP
targetPort: 4445
selector:
app: zalenium
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: zalenium
name: zalenium-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: ${VOLUME_CAPACITY}
Errors in main pod:
I get about 2-3 errors in 30 minutes.
[OkHttp https://172.17.0.1/ ...] ERROR i.f.k.c.d.i.ExecWebSocketListener - Exec Failure: HTTP:403. Message:pods "zalenium-40000-wvpjb" is forbidden: User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in the namespace "PROJECT": User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in project "PROJECT"
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
[OkHttp https://172.17.0.1/ ...] ERROR d.z.e.z.c.k.KubernetesContainerClient - zalenium-40000-wvpjb Failed to execute command [bash, -c, notify 'Zalenium', 'TEST COMPLETED', --icon=/home/seluser/images/completed.png]
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
With my own service account:
YAML template of the service account (with its Role and RoleBinding):
- apiVersion: v1
kind: Role
metadata:
name: zalenium-role
labels:
app: zalenium
rules:
- apiGroups:
- ""
attributeRestrictions: null
resources:
- pods
verbs:
- create
- delete
- deletecollection
- get
- list
- watch
- apiGroups:
- ""
attributeRestrictions: null
resources:
- pods/exec
verbs:
- create
- delete
- list
- get
- apiGroups:
- ""
attributeRestrictions: null
resources:
- services
verbs:
- create
- delete
- get
- list
- apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: zalenium
name: zalenium-sa
- apiVersion: v1
kind: RoleBinding
metadata:
labels:
app: zalenium
name: zalenium-rolebinding
roleRef:
kind: Role
name: zalenium-role
namespace: ${PROJECTNAME}
subjects:
- kind: ServiceAccount
name: zalenium-sa
namespace: ${PROJECTNAME}
userNames:
- zalenium-sa
Result:
--WARN 10:22:28:182931026 We don't have sudo
Kubernetes service account found.
Copying files for Dashboard...
Starting Nginx reverse proxy...
Starting Selenium Hub...
.....10:22:29.626 [main] INFO o.o.grid.selenium.GridLauncherV3 - Selenium server version: 3.141.59, revision: unknown
.10:22:29.771 [main] INFO o.o.grid.selenium.GridLauncherV3 - Launching Selenium Grid hub on port 4445
..10:22:30.292 [main] INFO d.z.e.z.c.k.KubernetesContainerClient - Initialising Kubernetes support
..10:22:30.700 [main] WARN d.z.e.z.c.k.KubernetesContainerClient - Error initialising Kubernetes support.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.30.0.1/api/v1/namespaces/PROJECT/pods/zalenium-1-j6s4q . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "zalenium-1-j6s4q" is forbidden: User "system:serviceaccount:PROJECT:zalenium-sa" cannot get pods in the namespace "PROJECT": User "system:serviceaccount:PROJECT:zalenium-sa" cannot get pods in project "PROJECT".
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:476)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:413)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:313)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:296)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:794)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:210)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:177)
at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.<init>(KubernetesContainerClient.java:91)
at de.zalando.ep.zalenium.container.ContainerFactory.createKubernetesContainerClient(ContainerFactory.java:43)
at de.zalando.ep.zalenium.container.ContainerFactory.getContainerClient(ContainerFactory.java:22)
at de.zalando.ep.zalenium.proxy.DockeredSeleniumStarter.<clinit>(DockeredSeleniumStarter.java:63)
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:97)
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:83)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.openqa.grid.web.Hub.<init>(Hub.java:94)
at org.openqa.grid.selenium.GridLauncherV3.lambda$buildLaunchers$5(GridLauncherV3.java:264)
at org.openqa.grid.selenium.GridLauncherV3.lambda$launch$0(GridLauncherV3.java:86)
at java.util.Optional.map(Optional.java:215)
at org.openqa.grid.selenium.GridLauncherV3.launch(GridLauncherV3.java:86)
at org.openqa.grid.selenium.GridLauncherV3.main(GridLauncherV3.java:70)
10:22:30.701 [main] INFO d.z.e.z.c.k.KubernetesContainerClient - About to clean up any left over docker-selenium pods created by Zalenium
Exception in thread "main" org.openqa.grid.common.exception.GridConfigurationException: Error creating class with de.zalando.ep.zalenium.registry.ZaleniumRegistry : null
at org.openqa.grid.web.Hub.<init>(Hub.java:99)
at org.openqa.grid.selenium.GridLauncherV3.lambda$buildLaunchers$5(GridLauncherV3.java:264)
at org.openqa.grid.selenium.GridLauncherV3.lambda$launch$0(GridLauncherV3.java:86)
at java.util.Optional.map(Optional.java:215)
at org.openqa.grid.selenium.GridLauncherV3.launch(GridLauncherV3.java:86)
at org.openqa.grid.selenium.GridLauncherV3.main(GridLauncherV3.java:70)
Caused by: java.lang.ExceptionInInitializerError
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:97)
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:83)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.openqa.grid.web.Hub.<init>(Hub.java:94)
... 5 more
Caused by: java.lang.NullPointerException
at java.util.TreeMap.putAll(TreeMap.java:313)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.withLabels(BaseOperation.java:426)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.withLabels(BaseOperation.java:63)
at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.deleteSeleniumPods(KubernetesContainerClient.java:402)
at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.initialiseContainerEnvironment(KubernetesContainerClient.java:348)
at de.zalando.ep.zalenium.container.ContainerFactory.createKubernetesContainerClient(ContainerFactory.java:46)
at de.zalando.ep.zalenium.container.ContainerFactory.getContainerClient(ContainerFactory.java:22)
at de.zalando.ep.zalenium.proxy.DockeredSeleniumStarter.<clinit>(DockeredSeleniumStarter.java:63)
... 13 more
[OkHttp https://172.17.0.1/ ...] ERROR i.f.k.c.d.i.ExecWebSocketListener - Exec Failure: HTTP:403. Message:pods "zalenium-40000-wvpjb" is forbidden: User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in the namespace "PROJECT": User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in project "PROJECT"
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
Usually this means that the service account does not have enough rights; perhaps start by checking that.
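For example, a hedged sketch of granting the dedicated service account broader rights and running the DeploymentConfig with it (PROJECT is a placeholder, as in the logs above):
# Give zalenium-sa the edit role in the project (it includes pods, pods/exec and services):
oc adm policy add-role-to-user edit -z zalenium-sa -n PROJECT
# Run the Zalenium DeploymentConfig with that service account instead of deployer:
oc set serviceaccount dc/zalenium zalenium-sa -n PROJECT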

Expose every pod in Redis cluster in Kubernetes

I'm trying to set up a Redis cluster in Kubernetes. The major requirement is that all nodes of the Redis cluster have to be reachable from outside Kubernetes, so clients can connect to every node directly. But I have no idea how to configure the Service that way.
Here is the basic config of the cluster right now. It's OK for services inside k8s, but there is no full access from outside.
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-cluster
labels:
app: redis-cluster
data:
redis.conf: |+
cluster-enabled yes
cluster-require-full-coverage no
cluster-node-timeout 15000
cluster-config-file /data/nodes.conf
cluster-migration-barrier 1
appendonly no
protected-mode no
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "false"
name: redis-cluster
labels:
app: redis-cluster
spec:
type: NodePort
ports:
- port: 6379
targetPort: 6379
name: client
- port: 16379
targetPort: 16379
name: gossip
selector:
app: redis-cluster
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: redis-cluster
labels:
app: redis-cluster
spec:
serviceName: redis-cluster
replicas: 6
template:
metadata:
labels:
app: redis-cluster
spec:
hostNetwork: true
containers:
- name: redis-cluster
image: redis:4.0.10
ports:
- containerPort: 6379
name: client
- containerPort: 16379
name: gossip
command: ["redis-server"]
args: ["/conf/redis.conf"]
readinessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 15
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 20
periodSeconds: 3
volumeMounts:
- name: conf
mountPath: /conf
readOnly: false
volumes:
- name: conf
configMap:
name: redis-cluster
items:
- key: redis.conf
path: redis.conf
Given:
spec:
hostNetwork: true
containers:
- name: redis-cluster
ports:
- containerPort: 6379
name: client
It appears that your StatefulSet is misconfigured: since hostNetwork is true, you have to provide hostPort, and that value should match containerPort, according to the PodSpec docs:
hostPort integer - Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#containerport-v1-core
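For illustration, a minimal sketch of the ports section with matching hostPort values; this assumes hostNetwork stays enabled and that ports 6379/16379 are free on every node:
ports:
  - containerPort: 6379
    hostPort: 6379
    name: client
  - containerPort: 16379
    hostPort: 16379
    name: gossip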