I have created a PVC; why can't my OpenPAI dashboard see any storage? - OpenPAI

Here is my pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
  namespace: pai-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-s3
And here is my dashboard:
Is my Authority setting wrong?

You need to call the group API to link this PVC to an OpenPAI group. Please refer to: https://openpai.readthedocs.io/en/latest/manual/cluster-admin/how-to-set-up-storage.html#assign-storage-to-pai-groups
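If it helps, here is a rough, illustrative sketch of the kind of REST call used to assign a storage config to a group. The endpoint, token, payload shape and storage name below are assumptions, so take the exact request from the documentation linked above:
# Illustrative only: attach a storage config to the "default" group via the OpenPAI rest-server.
# Replace <pai-master-ip>, <admin-token> and the storage name with your own values.
curl -X PUT https://<pai-master-ip>/rest-server/api/v2/groups \
  -H "Authorization: Bearer <admin-token>" \
  -H "Content-Type: application/json" \
  -d '{"data": {"groupname": "default", "extension": {"acls": {"storageConfigs": ["csi-s3-storage"]}}}, "patch": true}'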

Related

I tried to resize a PersistentVolumeClaim with the help of kubectl patch pvc to increase storage from 10Mi to 70Mi, but it gives this error:

$ k patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'
Error from server (Forbidden): persistentvolumeclaims "pv-volume" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Question: Create a PVC.
name: pv-volume, class: csi-hostpath-sc, capacity: 10Mi
Create a pod which mounts the PVC as a volume.
name: web-server, image: nginx, mount path: /usr/share/nginx/html
Configure the new pod to have ReadWriteOnce access.
Finally, use kubectl edit or kubectl patch on the PVC to reach a capacity of 70Mi and record that change.
Please give me a solution to patch the PVC and record the change.
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 70Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: csi-hostpath-sc
  hostPath:
    path: /usr/share/nginx/html
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pv-volume
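For reference, the resize error above appears because the claim is bound to a statically created PV and the csi-hostpath-sc class does not allow expansion. A minimal sketch of one way to proceed, assuming you are permitted to modify the StorageClass (with the static hostPath PV above, the claim may also need to be re-created so it is dynamically provisioned):
# Allow volume expansion on the StorageClass (assumes you may edit it)
kubectl patch storageclass csi-hostpath-sc -p '{"allowVolumeExpansion": true}'
# Resize the claim and record the change in the object's annotations
kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}' --record
# Alternatively, edit interactively: kubectl edit pvc pv-volume --record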

Cert-Manager Certificate creation stuck at Created new CertificateRequest resource

I am using cert-manager v1.0.0 on GKE. I tried the staging environment for ACME and it worked fine, but after switching to production the created certificate is stuck at "Created new CertificateRequest resource" and nothing changes after that.
I expect the certificate creation to succeed and the certificate status to change from False to True, as happens in staging.
Environment details:
Kubernetes version: v1.18.9
Cloud provider/provisioner: GKE
cert-manager version: v1.0.0
Install method: helm
Here is my ClusterIssuer YAML file:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: i-storage-ca-issuer-prod
  namespace: default
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: MY_EMAIL_HERE
    privateKeySecretRef:
      name: i-storage-ca-issuer-prod
    solvers:
      - http01:
          ingress:
            class: gce
And here is my Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: i-storage-core
  namespace: i-storage
  annotations:
    kubernetes.io/ingress.global-static-ip-name: i-storage-core-ip
    cert-manager.io/cluster-issuer: i-storage-ca-issuer-prod
  labels:
    app: i-storage-core
spec:
  tls:
    - hosts:
        - i-storage.net
      secretName: i-storage-core-prod-cert
  rules:
    - host: i-storage.net
      http:
        paths:
          - path: /*
            backend:
              serviceName: i-storage-core-service
              servicePort: 80
Output of kubectl describe certificaterequest:
Name: i-storage-core-prod-cert-stb6l
Namespace: i-storage
Labels: app=i-storage-core
Annotations: cert-manager.io/certificate-name: i-storage-core-prod-cert
cert-manager.io/certificate-revision: 1
cert-manager.io/private-key-secret-name: i-storage-core-prod-cert-2pw26
API Version: cert-manager.io/v1
Kind: CertificateRequest
Metadata:
Creation Timestamp: 2020-10-31T15:44:57Z
Generate Name: i-storage-core-prod-cert-
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:cert-manager.io/certificate-name:
f:cert-manager.io/certificate-revision:
f:cert-manager.io/private-key-secret-name:
f:generateName:
f:labels:
.:
f:app:
f:ownerReferences:
.:
k:{"uid":"f3442651-3941-49af-81de-dcb937e8ba40"}:
.:
f:apiVersion:
f:blockOwnerDeletion:
f:controller:
f:kind:
f:name:
f:uid:
f:spec:
.:
f:issuerRef:
.:
f:group:
f:kind:
f:name:
f:request:
f:status:
.:
f:conditions:
Manager: controller
Operation: Update
Time: 2020-10-31T15:44:57Z
Owner References:
API Version: cert-manager.io/v1
Block Owner Deletion: true
Controller: true
Kind: Certificate
Name: i-storage-core-prod-cert
UID: f3442651-3941-49af-81de-dcb937e8ba40
Resource Version: 18351251
Self Link: /apis/cert-manager.io/v1/namespaces/i-storage/certificaterequests/i-storage-core-prod-cert-stb6l
UID: 83412862-903f-4fff-a736-f170e840748e
Spec:
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: i-storage-ca-issuer-prod
Request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2ZUQ0NBV1VDQVFBd0FEQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5HcQovRDRVZlRhV0xFa01GUzdsdVN1RmRlR0NNVjJ4czREcG5Pem1HbjJxSlRUTlBnS2hHbGVEd0p2TkZIaTc5WWxHCmpYcjhjNDFHU1JUT2U4UDdUS3AvWXpBSUtxSXpPMllIeHY5VzA5bEZDWWQ4MTByMUNsOG5jb2NYa3BGZlAxMzAKZURlczZ6SUkwZW9ZTW1uRXQ3cmRUNk52dHhuZ1ZZVmlnai9VcXpxSkZ4NmlLa0R6V1VHK3lNcWtQM1ZKa1lYeApZUFNTNWZsWXlTdkI4emdxb3pnNUNJUndra09KTU1aRlNoWHVxYkpNZnJvQmR2YW9nQWtEYmZYSWs0SVRIaXlYCkV4UDFBaFdieGhPbndDd2h5bXpGWmgzSkZUZHhzeFdtRDZJMmp3MzV1SXZ1WWlIWEJ4VTBCMG50K3FYMVVWaWwKSkRlOFdNcTdjT3AzWmtlT2FHa0NBd0VBQWFBNE1EWUdDU3FHU0liM0RRRUpEakVwTUNjd0dBWURWUjBSQkJFdwpENElOYVMxemRHOXlZV2RsTG01bGREQUxCZ05WSFE4RUJBTUNCYUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCCkFLMkhhSEQxd3dDZVFqS1diU1N0SFkxMm1Da1A1amQ0RnFmZFFYRG5XR3grK3FCWExGY0F4TVZhbVF2cStQK0gKLzExQjhvdlUydU9icGRHRktoak9aNDJsdjNNMVllRWk5UG5nS0RFdndCbER0Q0Vsa0lHQzV4T1ZENCtheVlmaApEMUI2L20vdEJsdlhYNS8zRDlyejJsTWNRSzRnSTNVQ3Mxd0Y0bmduQ3JYMEhoSDJEendheXI5d2QvY1V1clZlClloYS9HZjcyaEFCcGQxSmkrR2hKaGxzVDlGbTNVZVNUTi9OYkpVWmk4NkM1S1dTRW1DblNjV3dzWGNoVW1vVisKVHpGQmNhOEhqOUxsVFdJVVBSYVl0bFQ2TEhrUjVLUW1EL2tJRTZDajlidTNXMG9oWDZ2UC9CQ012SWdaTVZEUgoyeFVwY3lhUmJad2ttWTQ2MktNZ25wUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
Status:
Conditions:
Last Transition Time: 2020-10-31T15:44:57Z
Message: Waiting on certificate issuance from order i-storage/i-storage-core-prod-cert-stb6l-177980933: "pending"
Reason: Pending
Status: False
Type: Ready
Events: <none>
Output of kubectl describe order:
Name: i-storage-core-prod-cert-stb6l-177980933
Namespace: i-storage
Labels: app=i-storage-core
Annotations: cert-manager.io/certificate-name: i-storage-core-prod-cert
cert-manager.io/certificate-revision: 1
cert-manager.io/private-key-secret-name: i-storage-core-prod-cert-2pw26
API Version: acme.cert-manager.io/v1
Kind: Order
Metadata:
Creation Timestamp: 2020-10-31T15:44:57Z
Generation: 1
Managed Fields:
API Version: acme.cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:cert-manager.io/certificate-name:
f:cert-manager.io/certificate-revision:
f:cert-manager.io/private-key-secret-name:
f:labels:
.:
f:app:
f:ownerReferences:
.:
k:{"uid":"83412862-903f-4fff-a736-f170e840748e"}:
.:
f:apiVersion:
f:blockOwnerDeletion:
f:controller:
f:kind:
f:name:
f:uid:
f:spec:
.:
f:dnsNames:
f:issuerRef:
.:
f:group:
f:kind:
f:name:
f:request:
f:status:
.:
f:authorizations:
f:finalizeURL:
f:state:
f:url:
Manager: controller
Operation: Update
Time: 2020-10-31T15:44:57Z
Owner References:
API Version: cert-manager.io/v1
Block Owner Deletion: true
Controller: true
Kind: CertificateRequest
Name: i-storage-core-prod-cert-stb6l
UID: 83412862-903f-4fff-a736-f170e840748e
Resource Version: 18351252
Self Link: /apis/acme.cert-manager.io/v1/namespaces/i-storage/orders/i-storage-core-prod-cert-stb6l-177980933
UID: 92165d9c-e57e-4d6e-803d-5d28e8f3033a
Spec:
Dns Names:
i-storage.net
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: i-storage-ca-issuer-prod
Request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2ZUQ0NBV1VDQVFBd0FEQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5HcQovRDRVZlRhV0xFa01GUzdsdVN1RmRlR0NNVjJ4czREcG5Pem1HbjJxSlRUTlBnS2hHbGVEd0p2TkZIaTc5WWxHCmpYcjhjNDFHU1JUT2U4UDdUS3AvWXpBSUtxSXpPMllIeHY5VzA5bEZDWWQ4MTByMUNsOG5jb2NYa3BGZlAxMzAKZURlczZ6SUkwZW9ZTW1uRXQ3cmRUNk52dHhuZ1ZZVmlnai9VcXpxSkZ4NmlLa0R6V1VHK3lNcWtQM1ZKa1lYeApZUFNTNWZsWXlTdkI4emdxb3pnNUNJUndra09KTU1aRlNoWHVxYkpNZnJvQmR2YW9nQWtEYmZYSWs0SVRIaXlYCkV4UDFBaFdieGhPbndDd2h5bXpGWmgzSkZUZHhzeFdtRDZJMmp3MzV1SXZ1WWlIWEJ4VTBCMG50K3FYMVVWaWwKSkRlOFdNcTdjT3AzWmtlT2FHa0NBd0VBQWFBNE1EWUdDU3FHU0liM0RRRUpEakVwTUNjd0dBWURWUjBSQkJFdwpENElOYVMxemRHOXlZV2RsTG01bGREQUxCZ05WSFE4RUJBTUNCYUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCCkFLMkhhSEQxd3dDZVFqS1diU1N0SFkxMm1Da1A1amQ0RnFmZFFYRG5XR3grK3FCWExGY0F4TVZhbVF2cStQK0gKLzExQjhvdlUydU9icGRHRktoak9aNDJsdjNNMVllRWk5UG5nS0RFdndCbER0Q0Vsa0lHQzV4T1ZENCtheVlmaApEMUI2L20vdEJsdlhYNS8zRDlyejJsTWNRSzRnSTNVQ3Mxd0Y0bmduQ3JYMEhoSDJEendheXI5d2QvY1V1clZlClloYS9HZjcyaEFCcGQxSmkrR2hKaGxzVDlGbTNVZVNUTi9OYkpVWmk4NkM1S1dTRW1DblNjV3dzWGNoVW1vVisKVHpGQmNhOEhqOUxsVFdJVVBSYVl0bFQ2TEhrUjVLUW1EL2tJRTZDajlidTNXMG9oWDZ2UC9CQ012SWdaTVZEUgoyeFVwY3lhUmJad2ttWTQ2MktNZ25wUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
Status:
Authorizations:
Challenges:
Token: EMTpMo_Jt5YkITiwk_lOuL66Xu_Q38scNMf1o0LPgvs
Type: http-01
URL: https://acme-v02.api.letsencrypt.org/acme/chall-v3/8230128790/0EcdqA
Token: EMTpMo_Jt5YkITiwk_lOuL66Xu_Q38scNMf1o0LPgvs
Type: dns-01
URL: https://acme-v02.api.letsencrypt.org/acme/chall-v3/8230128790/9chkYQ
Token: EMTpMo_Jt5YkITiwk_lOuL66Xu_Q38scNMf1o0LPgvs
Type: tls-alpn-01
URL: https://acme-v02.api.letsencrypt.org/acme/chall-v3/8230128790/BaReZw
Identifier: i-storage.net
Initial State: pending
URL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/8230128790
Wildcard: false
Finalize URL: https://acme-v02.api.letsencrypt.org/acme/finalize/100748195/5939190036
State: pending
URL: https://acme-v02.api.letsencrypt.org/acme/order/100748195/5939190036
Events: <none>
List all certificates that you have:
kubectl get certificate --all-namespaces
Try to figure out the problem using the describe command:
kubectl describe certificate CERTIFICATE_NAME -n YOUR_NAMESPACE
The output of the above command contains the name of the associated CertificateRequest. Dig into more details using the describe command once again:
kubectl describe certificaterequest CERTIFICATE_REQUEST_NAME -n YOUR_NAMESPACE
You may also want to troubleshoot challenges with the following command:
kubectl describe challenges --all-namespaces
In my case, to make it work, I had to replace ClusterIssuer with just Issuer for reasons explained in the comment.
Here is my Issuer manifest:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cert-manager-staging
  namespace: YOUR_NAMESPACE
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: example@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: cert-manager-staging-private-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: nginx
Here is my simple Ingress manifest:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: cert-manager-staging
  name: YOUR_NAME
  namespace: YOUR_NAMESPACE
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-com-staging-certificate
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example.com
                port:
                  number: 80
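Once the Issuer and Ingress are applied and the certificate is issued, the Certificate's Ready condition should flip from False to True. Assuming ingress-shim names the Certificate after the TLS secret, as it normally does, you can watch for that with something like:
kubectl get certificate example-com-staging-certificate -n YOUR_NAMESPACE -w
# or block until it becomes ready
kubectl wait --for=condition=Ready certificate/example-com-staging-certificate -n YOUR_NAMESPACE --timeout=5m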

SQL script file is not getting copied to the docker-entrypoint-initdb.d folder of the MySQL container?

My init.sql script is not getting copied to docker-entrypoint-initdb.d.
Note that the problem doesn't occur when I run it locally or on my server. It happens only when using Azure DevOps with the build and release pipelines.
There seems to be a mistake in the hostPath (containing the SQL script) in the persistent volume YAML file, since the file is placed in Azure Repos.
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    # main project folder in Azure Repos, which contains all files including the SQL script
    path: "/devops-sample"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: "IfNotPresent"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_PASSWORD
              value: kovaion
            - name: MYSQL_USER
              value: vignesh
            - name: MYSQL_DATABASE
              value: data-core
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
Currently the docker-entrypoint-initdb.d folder is empty (nothing is getting copied).
How do I set the full host path in the MySQL persistent volume if the SQL script is placed in Azure Repos inside the devops-sample folder?
The MySQL data directory storage location is wrong. You should mount persistent storage to /var/lib/mysql/data.
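If it helps, a minimal sketch of keeping the two mounts separate in the Deployment above: one volume for the init script and one for the data directory. The volume names below are illustrative, and the official mysql image stores its data under /var/lib/mysql:
          volumeMounts:
            - name: mysql-initdb
              mountPath: /docker-entrypoint-initdb.d   # init scripts only
            - name: mysql-data
              mountPath: /var/lib/mysql                # data directory of the official image
      volumes:
        - name: mysql-initdb
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
        - name: mysql-data
          emptyDir: {}   # placeholder; use a real PVC here for persistence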

I have a problem with a Kubernetes deployment. Can anybody help? I always get this error when trying to connect to the cluster IP

I have problems with Kubernetes. I have been trying to deploy my service for two days now, but I'm doing something wrong.
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
Does anybody know what the problem could be?
Here is also my YAML file:
# Certificate
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${APP_NAME}
spec:
  secretName: ${APP_NAME}-cert
  dnsNames:
    - ${URL}
    - www.${URL}
  acme:
    config:
      - domains:
          - ${URL}
          - www.${URL}
        http01:
          ingressClass: nginx
  issuerRef:
    name: ${CERT_ISSUER}
    kind: ClusterIssuer
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${APP_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  tls:
    - secretName: ${APP_NAME}-cert
      hosts:
        - ${URL}
        - www.${URL}
  rules:
    - host: ${URL}
      http:
        paths:
          - backend:
              serviceName: ${APP_NAME}-service
              servicePort: 80
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: ${APP_NAME}-service
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  selector:
    name: ${APP_NAME}
    app: ${CI_PROJECT_NAME}
  ports:
    - name: http
      port: 80
      targetPort: http
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_NAME}
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  replicas: ${REPLICAS}
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: ${CI_PROJECT_NAME}
  template:
    metadata:
      labels:
        name: ${APP_NAME}
        app: ${CI_PROJECT_NAME}
    spec:
      containers:
        - name: webapp
          image: eu.gcr.io/my-site/my-site.com:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
          env:
            - name: COMMIT_SHA
              value: ${CI_COMMIT_SHA}
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 1
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            requests:
              memory: '16Mi'
            limits:
              memory: '64Mi'
      imagePullSecrets:
        - name: ${REGISTRY_PULL_SECRET}
Can anybody help me with this? I'm stuck and I have no idea what the problem could be. This is also my first Kubernetes project.
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
...means just what it says: your request to the Kubernetes API was not authenticated (that's the system:anonymous part), and your RBAC configuration does not tolerate the anonymous user making any requests to the API.
No one here is going to be able to help you straighten out that problem, because fixing that depends on a horrific number of variables. Perhaps ask your cluster administrator to provide you with the correct credentials.
I have explained it in this post. You will need a ServiceAccount, ClusterRole and RoleBinding. You can find an explanation in this article, or, as Matthew L Daniel mentioned, in the Kubernetes documentation.
If you still have problems, provide the method/tutorial you used to deploy the cluster (as "Gitlab Kubernetes integration" does not say much about the method you used).
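For illustration only, a minimal ServiceAccount plus ClusterRoleBinding of the kind mentioned above. The names and the bound role are placeholders; bind the narrowest role that fits what your pipeline actually needs:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deployer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view            # placeholder role; pick one that matches the access you need
subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: default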

Failed to discover supported resources

I am trying to create a user with limited namespace access. I created a namespace named test and also created Group: programmers and User: frontend. I generated credentials for the frontend user with the help of the following guide: http://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
I created a role. Here is my role.yml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: test
  name: frontend-developer
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["deployments", "replicasets", "pods"]
    verbs: ["get", "list", "watch", "create", "patch"]
I created a RoleBinding. Here is my role-binding.yml:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: frontend-deploy
  namespace: test
subjects:
  - kind: User
    name: frontend
    namespace: test
roleRef:
  kind: Role
  name: frontend-developer
  apiGroup: rbac.authorization.k8s.io
Here is my deployment file (node-deployment.yml):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodefrontend
  namespace: test
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: bookstore
    spec:
      containers:
        - name: nodeweb
          image: balipalligayathri/devops-develop
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
I used the following commands to create the Role and RoleBinding:
$ kubectl create -f role.yml
$ kubectl create -f role-binding.yml
The frontend-developer Role and the frontend-deploy RoleBinding were created.
Then I used the command kubectl create -f node-deployment.yml to create the deployment. The Deployment was created and deleted successfully; here I didn't mention any user while creating it. So I tried to create the deployment as a specific user with the command below:
kubectl create -f node-deployment.yml --as=frontend --context=frontend-context
I get the following error:
Error from server (Forbidden):
<html><head><meta http-equiv='refresh' content='1;url=/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s'/><script>window.location.replace('/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
which is implied by: hudson.security.Permission.GenericRead
which is implied by: hudson.model.Hudson.Administer </body></html>
My doubt is: is it necessary to mention the user in the deployment.yml file?
You need to create a ServiceAccount. Take a look at this snippet:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myAccount
bind it to your role:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: myBinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: frontend-developer
subjects:
  - kind: ServiceAccount
    name: myAccount
and use it in your Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodefrontend
  namespace: test
spec:
  template:
    metadata:
      labels:
        ...
    spec:
      serviceAccountName: myAccount
Ref:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
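As a quick sanity check of the Role and RoleBinding above, impersonation with kubectl auth can-i (run as a cluster admin) shows whether the frontend user is actually allowed to act in the test namespace, for example:
kubectl auth can-i create deployments --namespace=test --as=frontend
kubectl auth can-i list pods --namespace=test --as=frontend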