Kubernetes PersistentVolume and PersistentVolumeClaim may be causing issues for my pod, which crashes while copying logs - selenium

I have a PersistentVolume that I specified as the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv-shared
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/mypv-shared/
Then I created a PersistentVolumeClaim with the following specifications:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypv-shared-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
But when I create the PVC, running kubectl get pv shows that it is bound to a randomly generated PV
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
pvc-38c77920-a223-11e7-89cc-08002719b642   5Gi        RWX           Delete          Bound    default/mypv-shared   standard                16m
I believe this is causing issues for my pods when running tests, because I am not sure whether the pod is correctly mounting the specified directory. My pods crash when trying to copy over the test logs at the end of the run.
Is the cause really the PersistentVolume/Claim, or should I be looking into something else? Thanks!

Creating the PVC dynamically provisioned a PV instead of using the one you created manually with the hostPath. On the PVC, simply set .spec.storageClassName to an empty string ("").
From the documentation:
A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same ...
So create something like this (I've also added labels and a selector to make sure that the intended PV is paired up with the PVC; you might not need that constraint):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv-shared
  labels:
    name: mypv-shared
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/mypv-shared/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypv-shared-claim
spec:
  storageClassName: ""
  selector:
    matchLabels:
      name: mypv-shared
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
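Once the claim binds to your hostPath PV, the pod just references the claim by name. A minimal sketch of how the test pod might mount it (the pod name, image, and mount path here are illustrative, not taken from your setup):
apiVersion: v1
kind: Pod
metadata:
  name: selenium-tests                      # illustrative name
spec:
  containers:
    - name: tests
      image: selenium/standalone-chrome     # assumption: any test image works here
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/tests         # illustrative path where the test logs are written
  volumes:
    - name: shared-logs
      persistentVolumeClaim:
        claimName: mypv-shared-claim
With the storageClassName: "" and selector in place, kubectl get pv should show mypv-shared as Bound to default/mypv-shared-claim rather than a dynamically provisioned pvc-... volume.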

Related

AWS EKS ingress - Entity too large

I am running a Laravel 8 API in the cluster and I have this Ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internet-facing","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"labels":{"app":"voterapi"},"name":"rapp-ingress","namespace":"voterapp"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"app-service","servicePort":80},"path":"/*"}]}}]}}
    kubernetes.io/ingress.class: alb
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
  creationTimestamp: "2022-05-26T08:25:50Z"
  finalizers:
    - ingress.k8s.aws/resources
  generation: 1
  labels:
    app: appapi
  name: app-ingress
  namespace: app
  resourceVersion: "94262558"
  uid: ec29661a-f4be-4ae1-a0e0-29c3d8bff0e5
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: app-service
                port:
                  number: 80
            path: /*
            pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
      - hostname: XXX
I am trying to upload a file using the API and I am getting
413 Request Entity Too Large
I don't see this error in my PHP log, so it looks like the request is not even reaching the app.
Can anyone help me solve the issue?
Update: try updating your Ingress by adding nginx.org/client-max-body-size:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.org/proxy-read-timeout: "40s"
    nginx.org/proxy-connect-timeout: "40s"
    nginx.org/client-max-body-size: "100m"
In some cases, you might also need to increase the maximum size for POST body data and file uploads.
Try updating the post_max_size and upload_max_filesize values in the php.ini configuration:
post_max_size = 100M
upload_max_filesize = 100M
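If the app runs from the official php/php-fpm image (an assumption; that image loads extra .ini files from /usr/local/etc/php/conf.d/), one way to apply those overrides without rebuilding the image is to mount them from a ConfigMap. The names below are illustrative:
apiVersion: v1
kind: ConfigMap
metadata:
  name: php-upload-limits              # illustrative name
data:
  uploads.ini: |
    post_max_size = 100M
    upload_max_filesize = 100M
---
# Fragment of the Laravel Deployment's pod template (names are illustrative):
spec:
  containers:
    - name: php
      image: php:8.1-fpm               # assumption: official php-fpm image layout
      volumeMounts:
        - name: php-ini
          mountPath: /usr/local/etc/php/conf.d/uploads.ini
          subPath: uploads.ini
  volumes:
    - name: php-ini
      configMap:
        name: php-upload-limits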
Reference:
NGINX Ingress Controller to increase the client request body
NGINX Ingress Controller: Advanced Configuration with Annotations
https://laracasts.com/discuss/channels/laravel/increase-file-upload-size
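Once both the Ingress annotation and the PHP limits are in place, a quick way to check which layer is still rejecting the upload is a verbose curl against the upload endpoint (the URL, header, and form field below are placeholders):
# A 413 returned with no Laravel/PHP error body means the proxy layer is still
# rejecting the request before it reaches the app; a Laravel validation
# response means the request got through.
curl -v -X POST https://api.example.com/upload \
     -H "Authorization: Bearer <token>" \
     -F "file=@./large-test-file.bin"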

MetalLB L2Advertisement/IPAddressPool assignments behave strangely

I'm using MetalLB 0.13.4 in L2 mode, with the IP advertisements and pools below. Nginx grabs the right IP addresses and the MetalLB speakers announce them properly, so the IP addresses are correctly assigned.
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - external-pool
  nodeSelectors:
    - matchLabels:
        kubernetes.io/os: linux
        kubernetes.io/arch: amd64
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: internal-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - internal-pool
  nodeSelectors:
    - matchLabels:
        kubernetes.io/os: linux
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external-pool
  namespace: metallb-system
spec:
  addresses:
    - x.x.x.204/32
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internal-pool
  namespace: metallb-system
spec:
  addresses:
    - x.x.x.203/32
Nginx configs:
....
controller:
  annotations:
    metallb.universe.tf/address-pool: external-pool
....
---
....
controller:
  annotations:
    metallb.universe.tf/address-pool: internal-pool
....
and from the nginx controller events:
Events:
  Type    Reason        Age                    From             Message
  ----    ------        ----                   ----             -------
  Normal  nodeAssigned  4m6s (x1173 over 19h)  metallb-speaker  announcing from node [redacted] with protocol "layer2"
See the (x1173 over 19h)? That seems weird. And when I look at the Ingresses, their IP addresses change constantly, even though they are assigned to either the internal or the external nginx class.
$ kl get ingressclass
NAME             CONTROLLER             PARAMETERS   AGE
nginx            k8s.io/ingress-nginx   <none>       5d6h
nginx-internal   k8s.io/ingress-nginx   <none>       5d6h
Although the Ingress IPs constantly change between x.x.x.203 and x.x.x.204, they always respond on the assigned IP address! This definitely looks very strange.
Note: I wasn't sure whether to ask for help in the MetalLB project, which is why I'm asking here.
The problem was the annotations on the controller; they should be under controller.service. Here is the working configuration:
controller:
  service:
    externalTrafficPolicy: Local
    type: LoadBalancer
    loadBalancerIP: x.x.x.203
    annotations:
      metallb.universe.tf/address-pool: "internal-pool"
Additionally, the service must be of type LoadBalancer and the IP must be specified.
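For completeness, the external controller would get the same treatment; this is a sketch under the same assumptions (Helm values for the second ingress-nginx install, with the annotation under controller.service):
controller:
  service:
    externalTrafficPolicy: Local
    type: LoadBalancer
    loadBalancerIP: x.x.x.204
    annotations:
      metallb.universe.tf/address-pool: "external-pool"
With the annotation on the Service object itself, MetalLB can see it and each ingress controller keeps a stable IP from its own pool.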

Finding the apiVersion for EKS in a region

I am learning EKS and have been provided the following example YAML to create a cluster.
Where does the apiVersion in the example YAML come from? Is it from eksctl or from EKS?
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-demo-cluster
  region: us-west-2
nodeGroups:
  - name: my-demo-workers
    instanceType: t3.medium
    desiredCapacity: 4
    minSize: 1
    maxSize: 4
It's a standard approach to versioning an API, and here it definitely comes from eksctl: https://github.com/weaveworks/eksctl/tree/master/pkg/apis/eksctl.io
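In other words, eksctl.io/v1alpha5 is the API group/version of eksctl's own ClusterConfig schema, not a Kubernetes or EKS API; you consume it by passing the file to eksctl. For example, assuming the YAML above is saved as cluster.yaml:
eksctl create cluster -f cluster.yaml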

How to create a limited access token for Kubernetes

I want a token to use in a piece of code that has limited access to my k8s cluster: it should only be able to read the replica counts of StatefulSets and, if required, scale them. I don't want the person who uses that code to be able to launch new workloads or delete running ones.
Is this possible? If yes how I can do it?
You need RBAC (Role Based Access Control) for this job.
Sample pod-reader role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
Bind this role to a user:
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read pods in the "default" namespace.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane # Name is case sensitive
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
Change the apiGroups, resources, and verbs based on your requirements.
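For the StatefulSet case in the question, a sketch of a more targeted Role could look like this (the name is illustrative; granting the scale subresource is what allows changing replicas without allowing create or delete):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: statefulset-scaler            # illustrative name
rules:
  - apiGroups: ["apps"]
    resources: ["statefulsets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["statefulsets/scale"]
    verbs: ["get", "patch", "update"]
Bind it to a ServiceAccount rather than a user and mint a token for the code to use (on Kubernetes 1.24+, kubectl create token <serviceaccount-name>). You can then verify the limits with kubectl auth can-i delete statefulsets --as=system:serviceaccount:default:<serviceaccount-name>, which should answer no.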

How to create service account for Spinnaker

I want to automate pipeline triggers by using a Fiat service account, so I am following the Spinnaker doc: https://www.spinnaker.io/setup/security/authorization/service-accounts/ However, I have trouble running the curl command. Where should I run it? I tried running it on my local machine, which has Halyard installed, and in the Fiat pod in Kubernetes, but I got "cannot resolve http://front50.url:8080".
Create a Role for Spinnaker with the role name spinnaker-role; you can edit the role as per your needs:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spinnaker-role
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["namespaces", "configmaps", "events", "replicationcontrollers", "serviceaccounts", "pods/logs"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods", "services", "secrets"]
    verbs: ["*"]
  - apiGroups: ["autoscaling"]
    resources: ["horizontalpodautoscalers"]
    verbs: ["list", "get"]
  - apiGroups: ["apps"]
    resources: ["controllerrevisions", "statefulsets"]
    verbs: ["list"]
  - apiGroups: ["extensions", "apps"]
    resources: ["deployments", "replicasets", "ingresses"]
    verbs: ["*"]
Service account for Spinnaker:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinnaker-service-account
  namespace: default
The main part, the role binding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinnaker-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: spinnaker-role
subjects:
  - namespace: default
    kind: ServiceAccount
    name: spinnaker-service-account
You can edit it as per your needs, for example changing the statefulsets entry or adding deployments.
That URL is just an example and won't work. You need to access front50 using the service that exposes it. If you installed using Halyard, the service is probably exposed as spin-front50:8080.
I ran it on the Halyard machine and used that URL.
(I know it's a really long time after your question :) I just happened to see this, and better late than never.)
You have to port-forward into the pod and then curl localhost on the port created for that pod during port-forwarding.
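A sketch of what that looks like, assuming a Halyard install where the front50 service is named spin-front50 in the spinnaker namespace (adjust the names to your deployment; the request body follows the service-accounts doc linked in the question, with placeholder account and role names):
# Forward the front50 port to localhost
kubectl -n spinnaker port-forward svc/spin-front50 8080:8080

# In another terminal, create the Fiat service account via front50
curl -X POST -H "Content-Type: application/json" \
     -d '{"name": "example-service-account", "memberOf": ["example-role"]}' \
     http://localhost:8080/serviceAccounts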