I tried to resize a PersistentVolumeClaim with kubectl patch pvc to increase the storage from 10Mi to 70Mi, but it gives this error:

$ k patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'
Error from server (Forbidden): persistentvolumeclaims "pv-volume" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Question: Create a PVC
name: pv-volume, class: csi-hostpath-sc, capacity: 10Mi
Create a pod which mounts the PVC as a volume.
name: web-server, image: nginx, mountPath: /usr/share/nginx/html
Configure the new pod to have ReadWriteOnce access.
Finally, using kubectl edit or kubectl patch, expand the PVC to a capacity of 70Mi and record that change.
Please give me a solution to patch the PVC and record the change.
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 70Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: csi-hostpath-sc
  hostPath:
    path: /usr/share/nginx/html
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pv-volume
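For reference, a minimal sketch of the resize step, assuming the csi-hostpath-sc StorageClass has (or can be given) allowVolumeExpansion: true; without that flag the Forbidden error above is expected:

# check whether the StorageClass allows expansion
kubectl get storageclass csi-hostpath-sc -o jsonpath='{.allowVolumeExpansion}'

# if it prints "true", patch the claim and record the change
kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}' --record

# verify the new requested size
kubectl get pvc pv-volume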

Related

Why do I get the error "backend - 404 error" when trying to deploy a TLS Ingress in Kubernetes, with no errors in events?

I'm trying to deploy a simple Ingress service. It works as a plain Ingress without the secure part (TLS), but when I include the TLS cert it always returns "backend - 404 error".
I already installed cert-manager and ingress-nginx and checked that those installs are OK.
EDIT: I explained all the steps I'm doing.
EDIT2: I updated cert-manager to v1.5.4.
These were the steps:
1.- Install the nginx controller for my IP
helm install bitnami/nginx-ingress-controller --set controller.service.loadBalancerIP="[MY-STATIC-IP]",rbac.create=true --generate-name
2.- Apply deployment and service (app.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taxisbahiadeploy
  labels:
    type: endpoints-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: taxisbahiadeploy
  template:
    metadata:
      labels:
        app: taxisbahiadeploy
    spec:
      containers:
        - name: taxisbahiadeploy
          image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: taxisbahia
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: taxisbahiadeploy
3.- Configure Let's Encrypt
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
cert-manager \
--namespace cert-manager \
--version v1.5.4 \
jetstack/cert-manager
4.- Apply the Issuers (issuer.yaml)
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: 'fco@ggggg.com'
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: 'fco@ggggg.com'
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
5.- Final Step, this is the Ingress where it fails (ingress-tls.yaml)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: esp-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: "letsencrypt-staging"
spec:
  tls:
    - hosts:
        - domain.com
      secretName: esp-tls
  rules:
    - host: domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: taxisbahia
                port:
                  number: 8080
I think your TLS domain part should be something like the following; check your host:
spec:
  tls:
    - hosts:
        - example.example.com
      secretName: quickstart-example-tls
Reference : https://cert-manager.io/docs/tutorials/acme/ingress/
First of all, make sure that you are actually visiting https://yourapp.com.
I had the same issue, but then I realized I was actually trying HTTP, which is no longer available after TLS is added.
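If it helps, here is a hedged way to check that the certificate and backend are actually wired up, using the names from the manifests above (esp-tls, esp-ingress):

# confirm cert-manager issued the certificate into the esp-tls secret
kubectl describe certificate esp-tls
kubectl get secret esp-tls

# confirm the ingress points at the taxisbahia service
kubectl describe ingress esp-ingress

# test over HTTPS (the staging issuer produces an untrusted cert, hence -k)
curl -vk https://domain.com/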

Tekton - mount path workspace issue - path error

Currently, I am trying to deploy tutum-hello-world. I have written a script for the same, but it does not work as it is supposed to.
I am certain that this issue is related to workspace.
UPDATE
Here is my code for task-tutum-deploy.yaml-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      script: |
        kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/
Error -
root@master1:~/tekton-scripts# tkn taskrun logs tutum-deploy-run-8sq8s -f -n default
[tutum-deploy] + kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
[tutum-deploy] error: the path "/root/tekton-scripts/tutum-deploy.yaml" cannot be accessed: stat /root/tekton-scripts/tutum-deploy.yaml: permission denied
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-14T12:54:01.096Z","type":"InternalTektonResult"}]
PS - I have placed my script on the master node at - /root/tekton-scripts/tutum-deploy.yaml
root@master1:~/tekton-scripts# ls -l tutum-deploy.yaml
-rwxrwxrwx 1 root root 626 Jun 11 11:31 tutum-deploy.yaml
OLD SCRIPT
Here is my code for task-tutum-deploy.yaml-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/tutum-deploy.yaml
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "./tutum-deploy.yaml"
Here is my code for tutum-deploy.yaml, which is present on the master node of the Kubernetes cluster with read, write and execute permissions -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-tutum
  labels:
    service: hello-world-tutum
spec:
  replicas: 1
  selector:
    matchLabels:
      service: hello-world-tutum
  template:
    metadata:
      labels:
        service: hello-world-tutum
    spec:
      containers:
        - name: tutum-hello-world
          image: tutum/hello-world:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-tutum
spec:
  type: NodePort
  selector:
    service: hello-world-tutum
  ports:
    - name: "80"
      port: 80
      targetPort: 80
      nodePort: 30050
I ran the following commands from the master node of my Kubernetes cluster -
1. kubectl apply -f task-tutum-deploy.yaml
2. tkn task start tutum-deploy
Error -
Using tekton command - $ tkn taskrun logs tutum-deploy-run-tvlll -f -n default
task tutum-deploy has failed: "step-tutum-deploy" exited with code 1 (image: "docker-pullable://bitnami/kubectl@sha256:b83299ee1d8657ab30fb7b7925b42a12c613e37609d2b4493b4b27b057c21d0f"); for logs run: kubectl -n default logs tutum-deploy-run-tvlll-pod-vbl5g -c step-tutum-deploy
[tutum-deploy] error: the path "./tutum-deploy.yaml" does not exist
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-11T14:01:49.786Z","type":"InternalTektonResult"}]
The error is from this part of your YAML:
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/tutum-deploy.yaml
spec.workspaces.mountPath expects a directory rather than a file, which is what you have specified here. You probably mean /root/tekton-scripts/ instead, but I am unfamiliar with tutum-hello-world.
If you look at the documentation you will see that all references to mountPath are directories rather than files.
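As a sketch of that suggestion, the workspace stanza would point at the directory, and the step would reference the file relative to it (same names as the UPDATE above):

workspaces:
  - name: messages
    optional: true
    mountPath: /root/tekton-scripts/   # a directory, not a file; the step then reads /root/tekton-scripts/tutum-deploy.yaml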

Canary Deployment Strategy using Argocd rollout and Service Mesh Interface (Traefik Mesh)

I'm working on the canary deployment strategy.
I use the Service Mesh Interface (SMI), after installing Traefik Mesh.
When starting the application for the first time with the command
kubectl apply -f applications.yaml
it should deploy the entire application, i.e. 4 replicas, but it deploys only 20% (1 replica) of the application,
and it goes into a progressing state with an error:
TrafficRoutingErro: the server could not find the requested resource (post trafficsplits.splits.smi-spec.io)
TrafficSplitNotCreated: Unable to create traffic Split 'demo-traefficsplit'
Here is my manifest:
argocd-rollout.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo
  labels:
    app: demo
spec:
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause:
            duration: "1m"
        - setWeight: 50
        - pause:
            duration: "2m"
      canaryService: demo-canary
      stableService: demo
      trafficRouting:
        smi:
          rootService: demo-smi
          trafficSplitName: demo-trafficsplit
  replicas: 4
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: demo
      version: blue
  template:
    metadata:
      labels:
        app: demo
        version: blue
    spec:
      containers:
        - name: demo
          image: argoproj/rollouts-demo:blue
          imagePullPolicy: Always
          ports:
            - name: web
              containerPort: 8080
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "140m"
---
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: demo-trafficsplit
spec:
  service: demo-smi # controller uses the stableService if the Rollout does not specify the rootService field
  backends:
    - service: demo
      weight: 10
    - service: demo-canary
      weight: 90
---
apiVersion: v1
kind: Service
metadata:
  name: demo-smi
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: demo-canary
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: demo
    version: blue
  type: ClusterIP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: rollout-ing
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`mycompagny.com`)
      services:
        - name: demo-smi
          port: 80
  tls:
    certResolver: myresolver
applications.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: net
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rollout
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:telemaqueHQ/DevOps.git
    targetRevision: master
    path: gitOps/test/argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: net
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
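The error suggests the rollout controller is posting to a TrafficSplit API version that the cluster does not serve (Traefik Mesh may install a different SMI version than the one used in the manifest). A hedged way to check what is actually available in the cluster:

# list any SMI TrafficSplit CRDs installed in the cluster
kubectl get crd | grep -i trafficsplit

# show which API groups/versions are actually served
kubectl api-resources | grep -i trafficsplit
kubectl api-versions | grep -i smi-spec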

Populating AWS ALB Ingress Annotations from a ConfigMap

I am creating an 'alb.ingress' resource as part of my Helm chart.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $cert_arn
    alb.ingress.kubernetes.io/security-groups: $sg
...
The values required in the 'alb.ingress' resource's annotations section are available in my ConfigMap.
env:
  - name: cert_arn
    valueFrom:
      configMapKeyRef:
        name: environmental-variables
        key: certification_arn
  - name: sg
    valueFrom:
      configMapKeyRef:
        name: environmental-variables
        key: security-groups
...
Is there a way to populate the annotations from the ConfigMap?
The way I solved this challenge was to first create the Ingress resource using Helm and the variables I already had prior to creating the resource, such as the name of the application, the namespace, etc.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "{{ .Values.application.name }}-ingress"
  namespace: "{{ .Values.env.name }}"
  labels:
    app: "{{ .Values.application.name }}"
spec:
  rules:
    - host: "{{ .Values.environment.name }}.{{ .Values.application.name }}.{{ .Values.domain.name }}"
      http:
        ....
I used a pod (a job is also an option) to annotate the newly created ingress resource using the environmental values from the configmap.
apiVersion: v1
kind: Pod
metadata:
  name: annotate-ingress-alb
spec:
  serviceAccountName: internal-kubectl
  containers:
    - name: modify-alb-ingress-controller
      image: "{{ .Values.images.varion }}"
      command: ["sh", "-c"]
      args:
        - '...
          kubectl annotate ingress -n {{ .Values.env.name }} {{ .Values.application.name }}-ingress alb.ingress.kubernetes.io/certificate-arn=$CERT_ARN;'
      env:
        - name: CERT_ARN
          valueFrom:
            configMapKeyRef:
              name: environmental-variables
              key: certification_arn
Note that the pod should have the right service account, with the right permission Role attached to it. For instance, in this case, for the pod to be able to annotate the Ingress, the Role had to include the extensions apiGroup and the ingresses resource in its list of permissions (I have not restricted the verbs yet).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-account-role
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - ingresses
    verbs: ["*"]
Hope this helps someone in the future.
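For completeness, a sketch of the ServiceAccount and RoleBinding that would tie the pod to that Role; internal-kubectl and service-account-role come from the snippets above, while the binding name is a placeholder:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-account-role-binding   # hypothetical name
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
roleRef:
  kind: Role
  name: service-account-role
  apiGroup: rbac.authorization.k8s.io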

SQL script file is not getting copied to the docker-entrypoint-initdb.d folder of the MySQL container

My init.sql (script) is not getting copied to docker-entrypoint-initdb.d.
Note that the problem doesn't occur when I run it locally or on my server. It happens only when deploying through Azure DevOps build and release pipelines.
There seems to be a mistake in the hostPath (containing the SQL script) in the persistent volume YAML file when the file is placed in Azure Repos.
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/devops-sample"   # main project folder in Azure Repos, which contains all files including the SQL script
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: "IfNotPresent"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_PASSWORD
              value: kovaion
            - name: MYSQL_USER
              value: vignesh
            - name: MYSQL_DATABASE
              value: data-core
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
Currently the docker-entrypoint-initdb.d folder seems to be empty (nothing is getting copied).
How do I set the full host path in the MySQL persistent volume if the SQL script is placed in Azure Repos inside the devops-sample folder?
The MySQL data directory storage location is wrong. You should mount persistent storage at /var/lib/mysql/data.
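A hedged sketch of what that could look like in the Deployment above: keep the init-script mount and add a separate volume for the data directory (the official mysql image keeps its data under /var/lib/mysql; the mysql-data names below are placeholders):

volumeMounts:
  - name: mysql-initdb
    mountPath: /docker-entrypoint-initdb.d
  - name: mysql-data
    mountPath: /var/lib/mysql            # MySQL data directory
volumes:
  - name: mysql-initdb
    persistentVolumeClaim:
      claimName: mysql-initdb-pv-claim
  - name: mysql-data
    persistentVolumeClaim:
      claimName: mysql-data-pv-claim     # hypothetical claim for the data directory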