Populating AWS ALB Ingress Annotations from a ConfigMap - amazon-eks

I am creating an 'alb.ingress' resource as part of my Helm chart.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $cert_arn
    alb.ingress.kubernetes.io/security-groups: $sg
...
The values required in the annotations section of the 'alb.ingress' resource are available in my ConfigMap.
env:
  - name: cert_arn
    valueFrom:
      configMapKeyRef:
        name: environmental-variables
        key: certification_arn
  - name: sg
    valueFrom:
      configMapKeyRef:
        name: environmental-variables
        key: security-groups
...
Is there a way to populate the annotations from the ConfigMap?

The way I solved this challenge was to first create the Ingress resource using Helm and the values I had available prior to creating the resource, such as the application name, namespace, etc.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "{{ .Values.application.name }}-ingress"
  namespace: "{{ .Values.env.name }}"
  labels:
    app: "{{ .Values.application.name }}"
spec:
  rules:
    - host: "{{ .Values.environment.name }}.{{ .Values.application.name }}.{{ .Values.domain.name }}"
      http:
....
I then used a pod (a Job is also an option) to annotate the newly created Ingress resource with the values from the ConfigMap.
apiVersion: v1
kind: Pod
metadata:
  name: annotate-ingress-alb
spec:
  serviceAccountName: internal-kubectl
  containers:
    - name: modify-alb-ingress-controller
      image: "{{ .Values.images.varion }}"
      command: ["sh", "-c"]
      args:
        - '...
          kubectl annotate ingress -n {{ .Values.env.name }} {{ .Values.application.name }}-ingress alb.ingress.kubernetes.io/certificate-arn=$CERT_ARN;
      env:
        - name: CERT_ARN
          valueFrom:
            configMapKeyRef:
              name: environmental-variables
              key: certification_arn
Note that the pod needs a service account with the right roles attached to it. For instance, in this case, for the pod to be able to annotate the Ingress, the role had to include the extensions apiGroup and the ingresses resource in its list of permissions (I have not restricted the verbs yet).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-account-role
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - ingresses
    verbs: ["*"]
Hope this helps someone in the future.

Related

I tried to resize a PersistentVolumeClaim with the help of kubectl patch pvc, to increase storage from 10Mi to 70Mi, but it's giving an error:

$ k patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'
Error from server (Forbidden): persistentvolumeclaims "pv-volume" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Question: Create a PVC
- name: pv-volume, class: csi-hostpath-sc, capacity: 10Mi
Create a pod which mounts the PVC as a volume.
- name: web-server, image: nginx, mountPath: /usr/share/nginx/html
Configure the new pod to have ReadWriteOnce access.
Finally, use kubectl edit or kubectl patch to expand the PVC to a capacity of 70Mi and record that change.
Please give me a solution to patch the PVC and record the change.
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 70Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: csi-hostpath-sc
  hostPath:
    path: /usr/share/nginx/html
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pv-volume
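The error itself points at the fix: the StorageClass must allow expansion before any PVC it provisioned can be resized. A minimal sketch of the remaining steps, assuming you are permitted to edit the csi-hostpath-sc StorageClass and that the hostpath CSI driver supports resize (the --record flag is deprecated in newer kubectl but still accepted; kubectl edit pvc pv-volume --record works as well):
# allow expansion on the StorageClass first
kubectl patch storageclass csi-hostpath-sc -p '{"allowVolumeExpansion": true}'
# then resize the claim and record the change
kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}' --record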

Why do I get a "backend - 404 error" when trying to deploy a TLS Ingress in Kubernetes, with no errors in the events

I'm trying to deploy a simple Ingress, and it works as a plain Ingress without the secure (TLS) part, but when I include the TLS certificate it always returns a "backend - 404 error".
I have already installed cert-manager and ingress-nginx and checked that this install is OK.
EDIT: I have laid out all the steps I'm doing.
EDIT 2: I updated cert-manager to v1.5.4.
These were the steps:
1.- Install the NGINX ingress controller for my static IP
helm install bitnami/nginx-ingress-controller --set controller.service.loadBalancerIP="[MY-STATIC-IP]",rbac.create=true --generate-name
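To confirm the controller actually picked up the static IP before moving on (the exact service name depends on the release name Helm generates, hence the grep):
kubectl get svc -A | grep nginx-ingress-controller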
2.- Apply the Deployment and Service (app.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taxisbahiadeploy
  labels:
    type: endpoints-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: taxisbahiadeploy
  template:
    metadata:
      labels:
        app: taxisbahiadeploy
    spec:
      containers:
        - name: taxisbahiadeploy
          image: gcr.io/google-samples/hello-app:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: taxisbahia
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: taxisbahiadeploy
3.- Configure Let's Encrypt
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager \
  --namespace cert-manager \
  --version v1.5.4 \
  jetstack/cert-manager
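A quick sanity check that the cert-manager pods came up before applying any Issuer:
kubectl get pods --namespace cert-manager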
4.- Apply the Issuers (issuer.yaml)
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: 'fco@ggggg.com'
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: 'fco@ggggg.com'
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
5.- Final step: this is the Ingress where it fails (ingress-tls.yaml)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: esp-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: "letsencrypt-staging"
spec:
  tls:
    - hosts:
        - domain.com
      secretName: esp-tls
  rules:
    - host: domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: taxisbahia
                port:
                  number: 8080
I think the hosts entry in your TLS section should match your actual host; check your host:
spec:
  tls:
    - hosts:
        - example.example.com
      secretName: quickstart-example-tls
Reference: https://cert-manager.io/docs/tutorials/acme/ingress/
First of all, make sure that you are actually visiting https://yourapp.com.
I had the same issue, but then I realized I was actually trying HTTP, which is no longer available after TLS is added.
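If the host matches and HTTPS still returns 404, it is worth confirming the certificate was actually issued; a short check using the names from the manifests above (cert-manager's ingress-shim names the Certificate after the secretName):
kubectl get certificate                  # READY should be True
kubectl describe certificate esp-tls
kubectl describe ingress esp-ingress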

Cert-Manager Certificate creation stuck at Created new CertificateRequest resource

I am using cert-manager v1.0.0 on GKE. I tried the staging environment for ACME and it worked fine, but after shifting to production the created certificate is stuck at "Created new CertificateRequest resource" and nothing changes after that.
I expect the creation of the certificate to succeed and the status of the certificate to change from False to True, as happens in staging.
Environment details:
- Kubernetes version: v1.18.9
- Cloud provider/provisioner: GKE
- cert-manager version: v1.0.0
- Install method: helm
Here is my ClusterIssuer YAML file:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: i-storage-ca-issuer-prod
  namespace: default
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: MY_EMAIL_HERE
    privateKeySecretRef:
      name: i-storage-ca-issuer-prod
    solvers:
      - http01:
          ingress:
            class: gce
And here is my Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: i-storage-core
  namespace: i-storage
  annotations:
    kubernetes.io/ingress.global-static-ip-name: i-storage-core-ip
    cert-manager.io/cluster-issuer: i-storage-ca-issuer-prod
  labels:
    app: i-storage-core
spec:
  tls:
    - hosts:
        - i-storage.net
      secretName: i-storage-core-prod-cert
  rules:
    - host: i-storage.net
      http:
        paths:
          - path: /*
            backend:
              serviceName: i-storage-core-service
              servicePort: 80
describe certificateRequest output:
Name:         i-storage-core-prod-cert-stb6l
Namespace:    i-storage
Labels:       app=i-storage-core
Annotations:  cert-manager.io/certificate-name: i-storage-core-prod-cert
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: i-storage-core-prod-cert-2pw26
API Version:  cert-manager.io/v1
Kind:         CertificateRequest
Metadata:
  Creation Timestamp:  2020-10-31T15:44:57Z
  Generate Name:       i-storage-core-prod-cert-
  Generation:          1
  Managed Fields:
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:cert-manager.io/certificate-name:
          f:cert-manager.io/certificate-revision:
          f:cert-manager.io/private-key-secret-name:
        f:generateName:
        f:labels:
          .:
          f:app:
        f:ownerReferences:
          .:
          k:{"uid":"f3442651-3941-49af-81de-dcb937e8ba40"}:
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:issuerRef:
          .:
          f:group:
          f:kind:
          f:name:
        f:request:
      f:status:
        .:
        f:conditions:
    Manager:    controller
    Operation:  Update
    Time:       2020-10-31T15:44:57Z
  Owner References:
    API Version:           cert-manager.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Certificate
    Name:                  i-storage-core-prod-cert
    UID:                   f3442651-3941-49af-81de-dcb937e8ba40
  Resource Version:  18351251
  Self Link:         /apis/cert-manager.io/v1/namespaces/i-storage/certificaterequests/i-storage-core-prod-cert-stb6l
  UID:               83412862-903f-4fff-a736-f170e840748e
Spec:
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   i-storage-ca-issuer-prod
  Request:  LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2ZUQ0NBV1VDQVFBd0FEQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5HcQovRDRVZlRhV0xFa01GUzdsdVN1RmRlR0NNVjJ4czREcG5Pem1HbjJxSlRUTlBnS2hHbGVEd0p2TkZIaTc5WWxHCmpYcjhjNDFHU1JUT2U4UDdUS3AvWXpBSUtxSXpPMllIeHY5VzA5bEZDWWQ4MTByMUNsOG5jb2NYa3BGZlAxMzAKZURlczZ6SUkwZW9ZTW1uRXQ3cmRUNk52dHhuZ1ZZVmlnai9VcXpxSkZ4NmlLa0R6V1VHK3lNcWtQM1ZKa1lYeApZUFNTNWZsWXlTdkI4emdxb3pnNUNJUndra09KTU1aRlNoWHVxYkpNZnJvQmR2YW9nQWtEYmZYSWs0SVRIaXlYCkV4UDFBaFdieGhPbndDd2h5bXpGWmgzSkZUZHhzeFdtRDZJMmp3MzV1SXZ1WWlIWEJ4VTBCMG50K3FYMVVWaWwKSkRlOFdNcTdjT3AzWmtlT2FHa0NBd0VBQWFBNE1EWUdDU3FHU0liM0RRRUpEakVwTUNjd0dBWURWUjBSQkJFdwpENElOYVMxemRHOXlZV2RsTG01bGREQUxCZ05WSFE4RUJBTUNCYUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCCkFLMkhhSEQxd3dDZVFqS1diU1N0SFkxMm1Da1A1amQ0RnFmZFFYRG5XR3grK3FCWExGY0F4TVZhbVF2cStQK0gKLzExQjhvdlUydU9icGRHRktoak9aNDJsdjNNMVllRWk5UG5nS0RFdndCbER0Q0Vsa0lHQzV4T1ZENCtheVlmaApEMUI2L20vdEJsdlhYNS8zRDlyejJsTWNRSzRnSTNVQ3Mxd0Y0bmduQ3JYMEhoSDJEendheXI5d2QvY1V1clZlClloYS9HZjcyaEFCcGQxSmkrR2hKaGxzVDlGbTNVZVNUTi9OYkpVWmk4NkM1S1dTRW1DblNjV3dzWGNoVW1vVisKVHpGQmNhOEhqOUxsVFdJVVBSYVl0bFQ2TEhrUjVLUW1EL2tJRTZDajlidTNXMG9oWDZ2UC9CQ012SWdaTVZEUgoyeFVwY3lhUmJad2ttWTQ2MktNZ25wUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
Status:
  Conditions:
    Last Transition Time:  2020-10-31T15:44:57Z
    Message:               Waiting on certificate issuance from order i-storage/i-storage-core-prod-cert-stb6l-177980933: "pending"
    Reason:                Pending
    Status:                False
    Type:                  Ready
Events:  <none>
describe order output:
Name:         i-storage-core-prod-cert-stb6l-177980933
Namespace:    i-storage
Labels:       app=i-storage-core
Annotations:  cert-manager.io/certificate-name: i-storage-core-prod-cert
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: i-storage-core-prod-cert-2pw26
API Version:  acme.cert-manager.io/v1
Kind:         Order
Metadata:
  Creation Timestamp:  2020-10-31T15:44:57Z
  Generation:          1
  Managed Fields:
    API Version:  acme.cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:cert-manager.io/certificate-name:
          f:cert-manager.io/certificate-revision:
          f:cert-manager.io/private-key-secret-name:
        f:labels:
          .:
          f:app:
        f:ownerReferences:
          .:
          k:{"uid":"83412862-903f-4fff-a736-f170e840748e"}:
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:dnsNames:
        f:issuerRef:
          .:
          f:group:
          f:kind:
          f:name:
        f:request:
      f:status:
        .:
        f:authorizations:
        f:finalizeURL:
        f:state:
        f:url:
    Manager:    controller
    Operation:  Update
    Time:       2020-10-31T15:44:57Z
  Owner References:
    API Version:           cert-manager.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  CertificateRequest
    Name:                  i-storage-core-prod-cert-stb6l
    UID:                   83412862-903f-4fff-a736-f170e840748e
  Resource Version:  18351252
  Self Link:         /apis/acme.cert-manager.io/v1/namespaces/i-storage/orders/i-storage-core-prod-cert-stb6l-177980933
  UID:               92165d9c-e57e-4d6e-803d-5d28e8f3033a
Spec:
  Dns Names:
    i-storage.net
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   i-storage-ca-issuer-prod
  Request:  LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2ZUQ0NBV1VDQVFBd0FEQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5HcQovRDRVZlRhV0xFa01GUzdsdVN1RmRlR0NNVjJ4czREcG5Pem1HbjJxSlRUTlBnS2hHbGVEd0p2TkZIaTc5WWxHCmpYcjhjNDFHU1JUT2U4UDdUS3AvWXpBSUtxSXpPMllIeHY5VzA5bEZDWWQ4MTByMUNsOG5jb2NYa3BGZlAxMzAKZURlczZ6SUkwZW9ZTW1uRXQ3cmRUNk52dHhuZ1ZZVmlnai9VcXpxSkZ4NmlLa0R6V1VHK3lNcWtQM1ZKa1lYeApZUFNTNWZsWXlTdkI4emdxb3pnNUNJUndra09KTU1aRlNoWHVxYkpNZnJvQmR2YW9nQWtEYmZYSWs0SVRIaXlYCkV4UDFBaFdieGhPbndDd2h5bXpGWmgzSkZUZHhzeFdtRDZJMmp3MzV1SXZ1WWlIWEJ4VTBCMG50K3FYMVVWaWwKSkRlOFdNcTdjT3AzWmtlT2FHa0NBd0VBQWFBNE1EWUdDU3FHU0liM0RRRUpEakVwTUNjd0dBWURWUjBSQkJFdwpENElOYVMxemRHOXlZV2RsTG01bGREQUxCZ05WSFE4RUJBTUNCYUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCCkFLMkhhSEQxd3dDZVFqS1diU1N0SFkxMm1Da1A1amQ0RnFmZFFYRG5XR3grK3FCWExGY0F4TVZhbVF2cStQK0gKLzExQjhvdlUydU9icGRHRktoak9aNDJsdjNNMVllRWk5UG5nS0RFdndCbER0Q0Vsa0lHQzV4T1ZENCtheVlmaApEMUI2L20vdEJsdlhYNS8zRDlyejJsTWNRSzRnSTNVQ3Mxd0Y0bmduQ3JYMEhoSDJEendheXI5d2QvY1V1clZlClloYS9HZjcyaEFCcGQxSmkrR2hKaGxzVDlGbTNVZVNUTi9OYkpVWmk4NkM1S1dTRW1DblNjV3dzWGNoVW1vVisKVHpGQmNhOEhqOUxsVFdJVVBSYVl0bFQ2TEhrUjVLUW1EL2tJRTZDajlidTNXMG9oWDZ2UC9CQ012SWdaTVZEUgoyeFVwY3lhUmJad2ttWTQ2MktNZ25wUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
Status:
  Authorizations:
    Challenges:
      Token:        EMTpMo_Jt5YkITiwk_lOuL66Xu_Q38scNMf1o0LPgvs
      Type:         http-01
      URL:          https://acme-v02.api.letsencrypt.org/acme/chall-v3/8230128790/0EcdqA
      Token:        EMTpMo_Jt5YkITiwk_lOuL66Xu_Q38scNMf1o0LPgvs
      Type:         dns-01
      URL:          https://acme-v02.api.letsencrypt.org/acme/chall-v3/8230128790/9chkYQ
      Token:        EMTpMo_Jt5YkITiwk_lOuL66Xu_Q38scNMf1o0LPgvs
      Type:         tls-alpn-01
      URL:          https://acme-v02.api.letsencrypt.org/acme/chall-v3/8230128790/BaReZw
    Identifier:     i-storage.net
    Initial State:  pending
    URL:            https://acme-v02.api.letsencrypt.org/acme/authz-v3/8230128790
    Wildcard:       false
  Finalize URL:  https://acme-v02.api.letsencrypt.org/acme/finalize/100748195/5939190036
  State:         pending
  URL:           https://acme-v02.api.letsencrypt.org/acme/order/100748195/5939190036
Events:  <none>
List all certificates that you have:
kubectl get certificate --all-namespaces
Try to figure out the problem using the describe command:
kubectl describe certificate CERTIFICATE_NAME -n YOUR_NAMESPACE
The output of the above command contains the name of the associated certificate request. Dig into more detail using the describe command once again:
kubectl describe certificaterequest CERTIFICATE_REQUEST_NAME -n YOUR_NAMESPACE
You may also want to troubleshoot challenges with the following command:
kubectl describe challenges --all-namespaces
In my case, to make it work, I had to replace ClusterIssuer with just Issuer for reasons explained in the comment.
Here is my Issuer manifest:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cert-manager-staging
  namespace: YOUR_NAMESPACE
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: example@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: cert-manager-staging-private-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: nginx
Here is my simple Ingress manifest:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: cert-manager-staging
  name: YOUR_NAME
  namespace: YOUR_NAMESPACE
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-com-staging-certificate
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example.com
                port:
                  number: 80
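Once the Issuer is in place, issuance can be confirmed with standard commands (the secret name comes from secretName above):
kubectl get certificate -n YOUR_NAMESPACE                              # READY should flip to True
kubectl get secret example-com-staging-certificate -n YOUR_NAMESPACE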

SQL script file is not getting copied to the docker-entrypoint-initdb.d folder of the MySQL container

My init.sql script is not getting copied to docker-entrypoint-initdb.d.
Note that the problem doesn't occur when I run it locally or on my server. It happens only when using Azure DevOps, via the build and release pipeline.
There seems to be a mistake in the hostPath (containing the SQL script) in the PersistentVolume YAML file for the case where the file is placed in Azure Repos.
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/devops-sample"  # main project folder in Azure Repos which contains all files including the SQL script
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: "IfNotPresent"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_PASSWORD
              value: kovaion
            - name: MYSQL_USER
              value: vignesh
            - name: MYSQL_DATABASE
              value: data-core
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
Currently the docker-entrypoint-initdb.d folder appears to be empty (nothing is getting copied).
How do I set the full host path in the MySQL PersistentVolume if the SQL script is placed in Azure Repos inside the devops-sample folder?
The MySQL data directory storage location is wrong. You should mount the persistent storage to /var/lib/mysql/data, and keep the init script on a separate volume mounted at /docker-entrypoint-initdb.d.
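A minimal sketch of that separation, assuming the init script is shipped in a ConfigMap (the mysql-initdb-config name and the ConfigMap approach are mine; the rest comes from the manifests above):
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql              # database files (the stock mysql image keeps its data here)
            - name: mysql-initdb
              mountPath: /docker-entrypoint-initdb.d  # init scripts, read only on first start
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
        - name: mysql-initdb
          configMap:
            name: mysql-initdb-config                # hypothetical ConfigMap holding init.sql
Such a ConfigMap can be created with kubectl create configmap mysql-initdb-config --from-file=init.sql.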

Failed to discover supported resources

I am trying to create a user with limited namespace access. I created a namespace named test, and also created Group: programmers and User: frontend. I generated credentials for the user frontend with the help of the following: http://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
I created a role. Here is my role.yml:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: test
  name: frontend-developer
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["deployments", "replicasets", "pods"]
    verbs: ["get", "list", "watch", "create", "patch"]
I created a rolebinding. Here is role-binding.yml:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: frontend-deploy
  namespace: test
subjects:
  - kind: User
    name: frontend
    namespace: test
roleRef:
  kind: Role
  name: frontend-developer
  apiGroup: rbac.authorization.k8s.io
My deployment file is as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodefrontend
  namespace: test
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: bookstore
    spec:
      containers:
        - name: nodeweb
          image: balipalligayathri/devops-develop
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
I used the following commands while creating the role and rolebinding:
$ kubectl create -f role.yml
$ kubectl create -f role-binding.yml
The frontend-developer Role and frontend-deploy RoleBinding were created.
Again, I used the command kubectl create -f node-deployment.yml for the deployment creation. The deployment was created and deleted successfully. Here, I didn't mention any user while creating the deployment, so I then tried to create the deployment as the user with the command below:
kubectl create -f node-deployment.yml --as=frontend --context=frontend-context
I am facing an error like this:
Error from server (Forbidden):
<html><head><meta http-equiv='refresh' content='1;url=/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s'/><script>window.location.replace('/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
    which is implied by: hudson.security.Permission.GenericRead
    which is implied by: hudson.model.Hudson.Administer </body></html>
My doubt is: is it necessary to mention the user in the deployment.yml file?
You need to create a ServiceAccount; take a look at this snippet (note that Kubernetes object names must be lowercase, hence my-account rather than myAccount):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-account
Bind it to your role:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: my-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: frontend-developer
subjects:
  - kind: ServiceAccount
    name: my-account
And use it in your Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodefrontend
  namespace: test
spec:
  template:
    metadata:
      labels:
        ...
    spec:
      serviceAccountName: my-account
Ref:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
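To verify the binding without deploying anything, impersonation works well (standard kubectl; test is the namespace from the question):
kubectl auth can-i create deployments -n test --as=system:serviceaccount:test:my-account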