Zalenium: 504 Gateway Time-out in OpenShift environment - selenium

My Zalenium installation in an OpenShift environment is far from stable. The web UI (admin view with VNC, dashboard, Selenium console) works about 50% of the time, and connecting with a RemoteWebDriver doesn't work at all.
Error:
504 Gateway Time-out
The server didn't respond in time.
WebDriver error:
org.openqa.selenium.WebDriverException: Unable to parse remote response: <html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:115)
oc version:
oc v3.9.0+191fece
kubernetes v1.9.1+a0ce1bc657
Zalenium template:
apiVersion: v1
kind: Template
metadata:
  name: zalenium
  annotations:
    "openshift.io/display-name": "Zalenium"
    "description": "Disposable Selenium Grid for use in OpenShift"
message: |-
  A Zalenium grid has been created in your project. Continue to overview to verify that it exists and start the deployment.
parameters:
- name: PROJECTNAME
  description: The namespace / project name of this project
  displayName: Namespace
  required: true
- name: HOSTNAME
  description: hostname used for route creation
  displayName: route hostname
  required: true
- name: "VOLUME_CAPACITY"
  displayName: "Volume capacity for the disk that contains the test results."
  description: "The volume is used to store all the test results, including logs and video recordings of the tests."
  value: "10Gi"
  required: true
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    generation: 1
    labels:
      app: zalenium
      role: hub
    name: zalenium
  spec:
    replicas: 1
    selector:
      app: zalenium
      role: hub
    strategy:
      activeDeadlineSeconds: 21600
      resources: {}
      type: Rolling
    template:
      metadata:
        labels:
          app: zalenium
          role: hub
      spec:
        containers:
        - args:
          - start
          - --seleniumImageName
          - "elgalu/selenium:latest"
          - --sendAnonymousUsageInfo
          - "false"
          image: dosel/zalenium:latest
          imagePullPolicy: Always
          name: zalenium
          ports:
          - containerPort: 4444
            protocol: TCP
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /home/seluser/videos
            name: zalenium-volume
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        securityContext: {}
        serviceAccount: deployer
        serviceAccountName: deployer
        volumes:
        - name: zalenium-volume
          persistentVolumeClaim:
            claimName: zalenium-pvc
    test: false
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: Route
  metadata:
    labels:
      app: zalenium
    annotations:
      openshift.io/host.generated: 'true'
      haproxy.router.openshift.io/timeout: "60"
    name: zalenium
  spec:
    host: zalenium-4444-${PROJECTNAME}.${HOSTNAME}
    to:
      kind: Service
      name: zalenium
    port:
      targetPort: selenium-4444
- apiVersion: v1
  kind: Route
  metadata:
    labels:
      app: zalenium
    annotations:
      openshift.io/host.generated: 'true'
      haproxy.router.openshift.io/timeout: "60"
    name: zalenium-4445
  spec:
    host: zalenium-4445-${PROJECTNAME}.${HOSTNAME}
    to:
      kind: Service
      name: zalenium
    port:
      targetPort: selenium-4445
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: zalenium
    name: zalenium
  spec:
    ports:
    - name: selenium-4444
      port: 4444
      protocol: TCP
      targetPort: 4444
    - name: selenium-4445
      port: 4445
      protocol: TCP
      targetPort: 4445
    selector:
      app: zalenium
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    labels:
      app: zalenium
    name: zalenium-pvc
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: ${VOLUME_CAPACITY}
Errors in the main pod:
I get about 2-3 of these errors in 30 minutes.
[OkHttp https://172.17.0.1/ ...] ERROR i.f.k.c.d.i.ExecWebSocketListener - Exec Failure: HTTP:403. Message:pods "zalenium-40000-wvpjb" is forbidden: User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in the namespace "PROJECT": User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in project "PROJECT"
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
[OkHttp https://172.17.0.1/ ...] ERROR d.z.e.z.c.k.KubernetesContainerClient - zalenium-40000-wvpjb Failed to execute command [bash, -c, notify 'Zalenium', 'TEST COMPLETED', --icon=/home/seluser/images/completed.png]
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
With my own service account:
YAML template of the service account:
- apiVersion: v1
  kind: Role
  metadata:
    name: zalenium-role
    labels:
      app: zalenium
  rules:
  - apiGroups:
    - ""
    attributeRestrictions: null
    resources:
    - pods
    verbs:
    - create
    - delete
    - deletecollection
    - get
    - list
    - watch
  - apiGroups:
    - ""
    attributeRestrictions: null
    resources:
    - pods/exec
    verbs:
    - create
    - delete
    - list
    - get
  - apiGroups:
    - ""
    attributeRestrictions: null
    resources:
    - services
    verbs:
    - create
    - delete
    - get
    - list
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      app: zalenium
    name: zalenium-sa
- apiVersion: v1
  kind: RoleBinding
  metadata:
    labels:
      app: zalenium
    name: zalenium-rolebinding
  roleRef:
    kind: Role
    name: zalenium-role
    namespace: ${PROJECTNAME}
  subjects:
  - kind: ServiceAccount
    name: zalenium-sa
    namespace: ${PROJECTNAME}
  userNames:
  - zalenium-sa
Result:
--WARN 10:22:28:182931026 We don't have sudo
Kubernetes service account found.
Copying files for Dashboard...
Starting Nginx reverse proxy...
Starting Selenium Hub...
.....10:22:29.626 [main] INFO o.o.grid.selenium.GridLauncherV3 - Selenium server version: 3.141.59, revision: unknown
.10:22:29.771 [main] INFO o.o.grid.selenium.GridLauncherV3 - Launching Selenium Grid hub on port 4445
..10:22:30.292 [main] INFO d.z.e.z.c.k.KubernetesContainerClient - Initialising Kubernetes support
..10:22:30.700 [main] WARN d.z.e.z.c.k.KubernetesContainerClient - Error initialising Kubernetes support.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.30.0.1/api/v1/namespaces/PROJECT/pods/zalenium-1-j6s4q . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "zalenium-1-j6s4q" is forbidden: User "system:serviceaccount:PROJECT:zalenium-sa" cannot get pods in the namespace "PROJECT": User "system:serviceaccount:PROJECT:zalenium-sa" cannot get pods in project "PROJECT".
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:476)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:413)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:313)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:296)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:794)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:210)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:177)
at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.<init>(KubernetesContainerClient.java:91)
at de.zalando.ep.zalenium.container.ContainerFactory.createKubernetesContainerClient(ContainerFactory.java:43)
at de.zalando.ep.zalenium.container.ContainerFactory.getContainerClient(ContainerFactory.java:22)
at de.zalando.ep.zalenium.proxy.DockeredSeleniumStarter.<clinit>(DockeredSeleniumStarter.java:63)
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:97)
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:83)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.openqa.grid.web.Hub.<init>(Hub.java:94)
at org.openqa.grid.selenium.GridLauncherV3.lambda$buildLaunchers$5(GridLauncherV3.java:264)
at org.openqa.grid.selenium.GridLauncherV3.lambda$launch$0(GridLauncherV3.java:86)
at java.util.Optional.map(Optional.java:215)
at org.openqa.grid.selenium.GridLauncherV3.launch(GridLauncherV3.java:86)
at org.openqa.grid.selenium.GridLauncherV3.main(GridLauncherV3.java:70)
10:22:30.701 [main] INFO d.z.e.z.c.k.KubernetesContainerClient - About to clean up any left over docker-selenium pods created by Zalenium
Exception in thread "main" org.openqa.grid.common.exception.GridConfigurationException: Error creating class with de.zalando.ep.zalenium.registry.ZaleniumRegistry : null
at org.openqa.grid.web.Hub.<init>(Hub.java:99)
at org.openqa.grid.selenium.GridLauncherV3.lambda$buildLaunchers$5(GridLauncherV3.java:264)
at org.openqa.grid.selenium.GridLauncherV3.lambda$launch$0(GridLauncherV3.java:86)
at java.util.Optional.map(Optional.java:215)
at org.openqa.grid.selenium.GridLauncherV3.launch(GridLauncherV3.java:86)
at org.openqa.grid.selenium.GridLauncherV3.main(GridLauncherV3.java:70)
Caused by: java.lang.ExceptionInInitializerError
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:97)
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:83)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.openqa.grid.web.Hub.<init>(Hub.java:94)
... 5 more
Caused by: java.lang.NullPointerException
at java.util.TreeMap.putAll(TreeMap.java:313)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.withLabels(BaseOperation.java:426)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.withLabels(BaseOperation.java:63)
at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.deleteSeleniumPods(KubernetesContainerClient.java:402)
at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.initialiseContainerEnvironment(KubernetesContainerClient.java:348)
at de.zalando.ep.zalenium.container.ContainerFactory.createKubernetesContainerClient(ContainerFactory.java:46)
at de.zalando.ep.zalenium.container.ContainerFactory.getContainerClient(ContainerFactory.java:22)
at de.zalando.ep.zalenium.proxy.DockeredSeleniumStarter.<clinit>(DockeredSeleniumStarter.java:63)
... 13 more

[OkHttp https://172.17.0.1/ ...] ERROR i.f.k.c.d.i.ExecWebSocketListener - Exec Failure: HTTP:403. Message:pods "zalenium-40000-wvpjb" is forbidden: User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in the namespace "PROJECT": User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in project "PROJECT"
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
Usually this means that the service account does not have enough rights; perhaps start by checking that.
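For reference, one quick way to rule RBAC in or out as the cause is to bind a broader, built-in role to the service account the Zalenium pod actually runs as. A minimal sketch, assuming the pod runs as zalenium-sa in the project namespace (the built-in edit cluster role includes pods and pods/exec):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: zalenium-edit          # illustrative name
  namespace: PROJECT           # replace with the actual project/namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                   # built-in role covering pods and pods/exec
subjects:
- kind: ServiceAccount
  name: zalenium-sa            # or "deployer" if the pod still uses that account
  namespace: PROJECT
If the errors disappear with this binding, the custom Role/RoleBinding above is what needs fixing.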

Related

Selenium 4: Chrome Node does not register correctly to the hub

I have an OpenShift 3 cluster containing the following two containers: selenium-hub and selenium-node-chrome. Please see the deployment and service YAML files attached below.
Hub Deployment:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: selenium-hub
    selenium-hub: master
  name: selenium-hub
spec:
  replicas: 1
  selector:
    type: selenium-hub
  template:
    metadata:
      labels:
        type: selenium-hub
      name: selenium-hub
    spec:
      containers:
      - image: 'selenium/hub:latest'
        imagePullPolicy: IfNotPresent
        name: master
        ports:
        - containerPort: 4444
          protocol: TCP
        - containerPort: 4442
          protocol: TCP
        - containerPort: 4443
          protocol: TCP
  triggers:
  - type: ConfigChange
Hub Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: selenium-hub
    selenium-hub: master
  name: selenium-hub
spec:
  ports:
  - name: selenium-hub
    port: 4444
    protocol: TCP
    targetPort: 4444
  - name: publish
    port: 4442
    protocol: TCP
    targetPort: 4442
  - name: subscribe
    port: 4443
    protocol: TCP
    targetPort: 4443
  selector:
    type: selenium-hub
  type: ClusterIP
Node Deployment:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: selenium-node-chrome
  name: selenium-node-chrome
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    browser: chrome
  template:
    metadata:
      labels:
        app: node-chrome
        browser: chrome
      name: selenium-node-chrome-master
    spec:
      containers:
      - env:
        - name: SE_EVENT_BUS_HOST
          value: selenium-hub
        - name: SE_EVENT_BUS_PUBLISH_PORT
          value: '4442'
        - name: SE_EVENT_BUS_SUBSCRIBE_PORT
          value: '4443'
        - name: SE_NODE_HOST
          value: node-chrome
        - name: SE_NODE_PORT
          value: '5555'
        image: 'selenium/node-chrome:4.0.0-20211102'
        imagePullPolicy: IfNotPresent
        name: master
        ports:
        - containerPort: 5555
          protocol: TCP
  triggers:
  - type: ConfigChange
Node Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: selenium-node-chrome
  name: selenium-node-chrome
spec:
  ports:
  - name: node-port
    port: 5555
    protocol: TCP
    targetPort: 5555
  - name: node-port-grid
    port: 4444
    protocol: TCP
    targetPort: 4444
  selector:
    browser: chrome
  type: ClusterIP
My Issue:
The hub and the node are starting, but the node just keeps sending the registration event, and the hub is logging some info messages which I don't really understand. Please see the logs attached below.
Node Log:
Setting up SE_NODE_GRID_URL...
Selenium Grid Node configuration:
[events]
publish = "tcp://selenium-hub:4442"
subscribe = "tcp://selenium-hub:4443"
[server]
host = "node-chrome"
port = "5555"
[node]
session-timeout = "300"
override-max-sessions = false
detect-drivers = false
max-sessions = 1
[[node.driver-configuration]]
display-name = "chrome"
stereotype = '{"browserName": "chrome", "browserVersion": "95.0", "platformName": "Linux"}'
max-sessions = 1
Starting Selenium Grid Node...
11:34:31.635 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
11:34:31.643 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
11:34:31.774 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://selenium-hub:4442 and tcp://selenium-hub:4443
11:34:31.843 INFO [UnboundZmqEventBus.<init>] - Sockets created
11:34:32.854 INFO [UnboundZmqEventBus.<init>] - Event bus ready
11:34:33.018 INFO [NodeServer.createHandlers] - Reporting self as: http://node-chrome:5555
11:34:33.044 INFO [NodeOptions.getSessionFactories] - Detected 1 available processors
11:34:33.115 INFO [NodeOptions.report] - Adding chrome for {"browserVersion": "95.0","browserName": "chrome","platformName": "Linux","se:vncEnabled": true} 1 times
11:34:33.130 INFO [Node.<init>] - Binding additional locator mechanisms: name, relative, id
11:34:33.471 INFO [NodeServer$1.start] - Starting registration process for node id 2832e819-cf31-4bd9-afcc-cd2b27578d58
11:34:33.473 INFO [NodeServer.execute] - Started Selenium node 4.0.0 (revision 3a21814679): http://node-chrome:5555
11:34:33.476 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
11:34:43.479 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
11:34:53.481 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
Hub Log:
2021-12-07 11:14:22,663 INFO spawned: 'selenium-grid-hub' with pid 11
2021-12-07 11:14:23,664 INFO success: selenium-grid-hub entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
11:14:23.953 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
11:14:23.961 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
11:14:24.136 INFO [BoundZmqEventBus.<init>] - XPUB binding to [binding to tcp://*:4442, advertising as tcp://XXXXXXX:4442], XSUB binding to [binding to tcp://*:4443, advertising as tcp://XXXXXX:4443]
11:14:24.246 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://XXXXXX:4442 and tcp://XXXXXXX:4443
11:14:24.275 INFO [UnboundZmqEventBus.<init>] - Sockets created
11:14:25.278 INFO [UnboundZmqEventBus.<init>] - Event bus ready
11:14:26.232 INFO [Hub.execute] - Started Selenium Hub 4.1.0 (revision 87802e897b): http://XXXXXXX:4444
11:14:46.965 INFO [Node.<init>] - Binding additional locator mechanisms: name, relative, id
11:15:46.916 INFO [Node.<init>] - Binding additional locator mechanisms: relative, name, id
11:17:52.377 INFO [Node.<init>] - Binding additional locator mechanisms: relative, id, name
Can anyone tell me why the hub won't register the node?
If you need any further information, let me know.
Thanks a lot.
So, a bit late, but I had this same issue: the docker-compose example gave me selenium-hub as the host, which is correct in that scenario because it points to the container defined by the selenium-hub service.
However, in Kubernetes, inter-pod communication needs to go through a Service. There are multiple kinds of Service, but to reach it from inside the cluster it's easiest in this case to use a ClusterIP (see the docs for more info).
The way I resolved it was to have a Service for both of the ports that the event bus uses:
bus-publisher (port 4442)
bus-subscription (port 4443)
In manifest YAML, this looks like:
apiVersion: v1
kind: Service
metadata:
  labels:
    app-name: selenium
  name: bus-sub
  namespace: selenium
spec:
  ports:
  - port: 4443
    protocol: TCP
    targetPort: 4443
  selector:
    app: selenium-hub
  type: ClusterIP
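The Service for the publish port follows the same pattern; the name bus-pub below is illustrative:
apiVersion: v1
kind: Service
metadata:
  labels:
    app-name: selenium
  name: bus-pub              # illustrative counterpart for the publish port
  namespace: selenium
spec:
  ports:
  - port: 4442
    protocol: TCP
    targetPort: 4442
  selector:
    app: selenium-hub
  type: ClusterIP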
You didn't expose ports 4442 and 4443 from the hub container (see the ports section of spec.containers).
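For illustration, the ports section of the hub container would then need to list all three ports, roughly like this (a fragment sketch based on the hub deployment above):
      containers:
      - image: 'selenium/hub:latest'
        name: master
        ports:
        - containerPort: 4444   # grid / WebDriver endpoint
          protocol: TCP
        - containerPort: 4442   # event bus publish
          protocol: TCP
        - containerPort: 4443   # event bus subscribe
          protocol: TCP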
You are on the same machine, so I think you don't need the SE_NODE_HOST environment variable in the node deployment; only use these variables:
SE_EVENT_BUS_HOST=selenium-hub
SE_EVENT_BUS_PUBLISH_PORT=4442
SE_EVENT_BUS_SUBSCRIBE_PORT=4443
If you aren't on the same VM, you need to configure the node deployment correctly by using these environment variables:
SE_EVENT_BUS_HOST=<ip-of-hub-machine>
SE_EVENT_BUS_PUBLISH_PORT=4442
SE_EVENT_BUS_SUBSCRIBE_PORT=4443
SE_NODE_HOST=<ip-of-node-machine>
Please don't add unused environment variables like SE_NODE_PORT, because the Selenium image doesn't support environment variables other than the ones documented in the 'docker-selenium' project on GitHub: https://github.com/SeleniumHQ/docker-selenium.
If you really want to use your own variables, build your own Selenium image (I don't recommend that). I had success with what I describe above.
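As a sketch, the env section of the node container for the same-cluster case would then reduce to the three event-bus variables (assuming the hub Service is named selenium-hub):
        env:
        - name: SE_EVENT_BUS_HOST
          value: selenium-hub
        - name: SE_EVENT_BUS_PUBLISH_PORT
          value: '4442'
        - name: SE_EVENT_BUS_SUBSCRIBE_PORT
          value: '4443'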

Tekton - mount path workspace issue - path error

Currently, I am trying to deploy tutum-hello-world. I have written a script for this, but it does not work as it is supposed to.
I am certain that this issue is related to the workspace.
UPDATE
Here is my code for task-tutum-deploy.yaml:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  steps:
  - name: tutum-deploy
    image: bitnami/kubectl
    script: |
      kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
  workspaces:
  - name: messages
    optional: true
    mountPath: /root/tekton-scripts/
Error:
root@master1:~/tekton-scripts# tkn taskrun logs tutum-deploy-run-8sq8s -f -n default
[tutum-deploy] + kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
[tutum-deploy] error: the path "/root/tekton-scripts/tutum-deploy.yaml" cannot be accessed: stat /root/tekton-scripts/tutum-deploy.yaml: permission denied
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-14T12:54:01.096Z","type":"InternalTektonResult"}]
PS: I have placed my script on the master node at /root/tekton-scripts/tutum-deploy.yaml
root@master1:~/tekton-scripts# ls -l tutum-deploy.yaml
-rwxrwxrwx 1 root root 626 Jun 11 11:31 tutum-deploy.yaml
OLD SCRIPT
Here is my code for task-tutum-deploy.yaml:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  workspaces:
  - name: messages
    optional: true
    mountPath: /root/tekton-scripts/tutum-deploy.yaml
  steps:
  - name: tutum-deploy
    image: bitnami/kubectl
    command: ["kubectl"]
    args:
    - "apply"
    - "-f"
    - "./tutum-deploy.yaml"
Here is my code for tutum-deploy.yaml, which is present on the machine (master node) of the Kubernetes cluster with read, write and execute permissions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-tutum
  labels:
    service: hello-world-tutum
spec:
  replicas: 1
  selector:
    matchLabels:
      service: hello-world-tutum
  template:
    metadata:
      labels:
        service: hello-world-tutum
    spec:
      containers:
      - name: tutum-hello-world
        image: tutum/hello-world:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-tutum
spec:
  type: NodePort
  selector:
    service: hello-world-tutum
  ports:
  - name: "80"
    port: 80
    targetPort: 80
    nodePort: 30050
I ran the following commands from the master node of my Kubernetes cluster:
1. kubectl apply -f task-tutum-deploy.yaml
2. tkn task start tutum-deploy
Error:
Using the Tekton command: $ tkn taskrun logs tutum-deploy-run-tvlll -f -n default
task tutum-deploy has failed: "step-tutum-deploy" exited with code 1 (image: "docker-pullable://bitnami/kubectl@sha256:b83299ee1d8657ab30fb7b7925b42a12c613e37609d2b4493b4b27b057c21d0f"); for logs run: kubectl -n default logs tutum-deploy-run-tvlll-pod-vbl5g -c step-tutum-deploy
[tutum-deploy] error: the path "./tutum-deploy.yaml" does not exist
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-11T14:01:49.786Z","type":"InternalTektonResult"}]
The error is from this part of your YAML:
spec:
  workspaces:
  - name: messages
    optional: true
    mountPath: /root/tekton-scripts/tutum-deploy.yaml
spec.workspaces.mountPath expects a directory, rather than a file as you have specified here. You may mean /root/tekton-scripts/ instead, but I am unfamiliar with tutum-hello-world.
If you look at the documentation you will see that all references to mountPath are directories rather than files.
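A minimal sketch of the corrected workspace declaration, assuming the directory is what you intend to mount:
  workspaces:
  - name: messages
    optional: true
    mountPath: /root/tekton-scripts/   # a directory, not a file path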

Canary Deployment Strategy using Argocd rollout and Service Mesh Interface (Traefik Mesh)

I'm working on a canary deployment strategy.
I use the Service Mesh Interface (SMI), after installing Traefik Mesh.
When starting the application for the first time with the command
kubectl apply -f applications.yaml
it should deploy the entire application, i.e. 4 replicas, but it deploys only 20% (1 replica) of the application,
and it goes into a progressing state with an error:
TrafficRoutingErro: the server could not find the requested resource (post trafficsplits.splits.smi-spec.io)
TrafficSplitNotCreated: Unable to create traffic Split 'demo-traefficsplit'
Here is my manifest:
argocd-rollout.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo
  labels:
    app: demo
spec:
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause:
          duration: "1m"
      - setWeight: 50
      - pause:
          duration: "2m"
      canaryService: demo-canary
      stableService: demo
      trafficRouting:
        smi:
          rootService: demo-smi
          trafficSplitName: demo-trafficsplit
  replicas: 4
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: demo
      version: blue
  template:
    metadata:
      labels:
        app: demo
        version: blue
    spec:
      containers:
      - name: demo
        image: argoproj/rollouts-demo:blue
        imagePullPolicy: Always
        ports:
        - name: web
          containerPort: 8080
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "140m"
---
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: demo-trafficsplit
spec:
  service: demo-smi # controller uses the stableService if Rollout does not specify the rootService field
  backends:
  - service: demo
    weight: 10
  - service: demo-canary
    weight: 90
---
apiVersion: v1
kind: Service
metadata:
  name: demo-smi
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo-canary
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: rollout-ing
spec:
  entryPoints:
  - websecure
  routes:
  - kind: Rule
    match: Host(`mycompagny.com`)
    services:
    - name: demo-smi
      port: 80
  tls:
    certResolver: myresolver
applications.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: net
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rollout
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:telemaqueHQ/DevOps.git
    targetRevision: master
    path: gitOps/test/argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: net
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

SQL script file is not getting copied to the docker-entrypoint-initdb.d folder of the MySQL container

My init.sql script is not getting copied to docker-entrypoint-initdb.d.
Note that the problem doesn't occur when I try to run it locally or on my server. It happens only when using Azure DevOps build and release pipelines.
There seems to be a mistake in the hostPath (containing the SQL script) in the persistent volume YAML file when the file is placed in Azure Repos.
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: "/devops-sample"  # main project folder in azure repos which contains all files including sql script.
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        imagePullPolicy: "IfNotPresent"
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_PASSWORD
          value: kovaion
        - name: MYSQL_USER
          value: vignesh
        - name: MYSQL_DATABASE
          value: data-core
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /docker-entrypoint-initdb.d
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-initdb-pv-claim
Currently the folder docker-entrypoint-initdb.d seems to be empty (nothing is getting copied).
How do I set the full host path in the MySQL persistent volume if the SQL script is placed in Azure Repos inside the devops-sample folder?
The MySQL data directory storage location is wrong. You should mount persistent storage at /var/lib/mysql/data.
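For illustration only, that suggestion would look roughly like this in the deployment's volumeMounts; the volume names mysql-data and mysql-initdb are hypothetical, and the data-directory path is taken from the answer above (verify it against your MySQL image):
        volumeMounts:
        - name: mysql-data                       # hypothetical volume for the data directory
          mountPath: /var/lib/mysql/data
        - name: mysql-initdb                     # hypothetical volume holding init.sql
          mountPath: /docker-entrypoint-initdb.d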

I have a problem with a Kubernetes deployment. Can anybody help? I always get this error when trying to connect to the cluster IP

I have problems with Kubernetes. I have been trying to deploy my service for two days now, but I'm doing something wrong.
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
Does anybody know what the problem could be?
Here is also my YAML file:
# Certificate
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${APP_NAME}
spec:
  secretName: ${APP_NAME}-cert
  dnsNames:
  - ${URL}
  - www.${URL}
  acme:
    config:
    - domains:
      - ${URL}
      - www.${URL}
      http01:
        ingressClass: nginx
  issuerRef:
    name: ${CERT_ISSUER}
    kind: ClusterIssuer
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${APP_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  tls:
  - secretName: ${APP_NAME}-cert
    hosts:
    - ${URL}
    - www.${URL}
  rules:
  - host: ${URL}
    http:
      paths:
      - backend:
          serviceName: ${APP_NAME}-service
          servicePort: 80
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: ${APP_NAME}-service
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  selector:
    name: ${APP_NAME}
    app: ${CI_PROJECT_NAME}
  ports:
  - name: http
    port: 80
    targetPort: http
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_NAME}
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  replicas: ${REPLICAS}
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: ${CI_PROJECT_NAME}
  template:
    metadata:
      labels:
        name: ${APP_NAME}
        app: ${CI_PROJECT_NAME}
    spec:
      containers:
      - name: webapp
        image: eu.gcr.io/my-site/my-site.com:latest
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
        env:
        - name: COMMIT_SHA
          value: ${CI_COMMIT_SHA}
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1
        resources:
          requests:
            memory: '16Mi'
          limits:
            memory: '64Mi'
      imagePullSecrets:
      - name: ${REGISTRY_PULL_SECRET}
Can anybody help me with this? I'm stuck and I have no idea what the problem could be. This is also my first Kubernetes project.
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
.. means just what it says: your request to the Kubernetes API was not authenticated (that's the system:anonymous part), and your RBAC configuration does not allow the anonymous user to make any requests to the API.
No one here is going to be able to help you straighten out that problem, because fixing that depends on a horrific number of variables. Perhaps ask your cluster administrator to provide you with the correct credentials.
I have explained it in this post. You will need a ServiceAccount, a ClusterRole and a RoleBinding. You can find an explanation in this article, or, as Matthew L Daniel mentioned, in the Kubernetes documentation.
If you still have problems, provide the method/tutorial you used to deploy the cluster (as "Gitlab Kubernetes integration" does not say much about the method you used).
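As a rough sketch of the three objects mentioned above (the names are illustrative and the rules should be scoped to whatever your client actually needs to do):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-deployer            # illustrative name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployer-role              # illustrative; restrict the rules as needed
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deployer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployer-role
subjects:
- kind: ServiceAccount
  name: gitlab-deployer
  namespace: default
The client (GitLab, kubectl, etc.) then has to authenticate with that service account's token instead of hitting the API anonymously.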