Kubernetes 403 Forbidden querying API with good RBAC credentials - api

Final update: Apparently there was some issue with the variables used in the curl command; after I redefined them (having closed and reopened the connection to the cluster), the command started working.
The setup is simple, in a learning environment.
I created a ServiceAccount, a Role and a RoleBinding.
When trying to query Pods or Services, I'm getting:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services is forbidden: User \"system:serviceaccount:default:myscript\" cannot list resource \"services\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "services"
  },
  "code": 403
}
I don't know where I'm failing.
Originally I had only the get, list and delete verbs, but even after using the wildcard '*' it keeps saying forbidden.
Here's some info from the cluster:
Query command: curl -X GET $SERVER/api/v1/namespaces/default/services --header "Authorization: Bearer $MYSCRIPT_TOKEN" --cacert /etc/kubernetes/pki/ca.crt
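For context, a sketch of how those two shell variables could be populated (the exact original setup isn't shown; the secret name myscript-token comes from the ServiceAccount output below):
# API server endpoint taken from the current kubeconfig context.
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
# ServiceAccount token, read from the token Secret listed under "Tokens:" below;
# on Kubernetes 1.24+ a short-lived token can be minted with "kubectl create token myscript" instead.
MYSCRIPT_TOKEN=$(kubectl get secret myscript-token -o jsonpath='{.data.token}' | base64 -d)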
ubuntu@master:~$ kubectl describe sa myscript
Name:                myscript
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              myscript-token
Events:              <none>
ubuntu@master:~$ kubectl get role script-role
NAME          CREATED AT
script-role   2022-09-04T10:44:22Z
ubuntu@master:~$ kubectl get rolebinding script-rb -o wide
NAME        ROLE               AGE   USERS   GROUPS   SERVICEACCOUNTS
script-rb   Role/script-role   57m                    default/myscript
ubuntu@master:~$ kubectl describe role script-role
Name:         script-role
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources         Non-Resource URLs  Resource Names  Verbs
  ---------         -----------------  --------------  -----
  pods              []                 []              [*]
  services          []                 []              [*]
  deployments.apps  []                 []              [get list delete]
Update:
A few kubectl auth can-i commands that suggest the RBAC setup should be fine.
ubuntu@master:~$ kubectl auth can-i get services --as system:serviceaccount:default:myscript
yes
ubuntu@master:~$ kubectl auth can-i list services --as system:serviceaccount:default:myscript
yes
ubuntu@master:~$ kubectl auth can-i delete services --as system:serviceaccount:default:myscript
yes
ubuntu@master:~$ kubectl auth can-i delete deploy --as system:serviceaccount:default:myscript
yes
ubuntu@master:~$ kubectl auth can-i update deploy --as system:serviceaccount:default:myscript
no
ServiceAccount manifest.
ubuntu@master:~$ kubectl get sa myscript -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-09-04T10:35:47Z"
  name: myscript
  namespace: default
  resourceVersion: "675592"
  uid: ab3b3c20-e3b9-405a-a9e9-e4f65ac13f5c
Role manifest
ubuntu@master:~$ kubectl get role script-role -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"script-role","namespace":"default"},"rules":[{"apiGroups":[""],"resources":["pods","services"],"verbs":["get","list","delete"]},{"apiGroups":["apps"],"resources":["deployments"],"verbs":["get","list","delete"]}]}
  creationTimestamp: "2022-09-04T10:44:22Z"
  name: script-role
  namespace: default
  resourceVersion: "681508"
  uid: a1b03864-081e-4d0a-bf54-9c69f6f6c17e
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - delete
RoleBinding manifest
ubuntu@master:~$ kubectl get rolebinding script-rb -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"creationTimestamp":null,"name":"script-rb","namespace":"default"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"script-role"},"subjects":[{"kind":"ServiceAccount","name":"myscript","namespace":"default"}]}
  creationTimestamp: "2022-09-04T10:46:05Z"
  name: script-rb
  namespace: default
  resourceVersion: "676627"
  uid: dbdcef8f-6a30-4cd3-8152-2626c2284c83
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: script-role
subjects:
- kind: ServiceAccount
  name: myscript
  namespace: default

Two questions:
Can you share the manifests of the Role, RoleBinding and ServiceAccount?
Are you able to verify that your Role & RoleBinding work with the ServiceAccount using the kubectl auth can-i command?
// kubectl auth can-i <verb> <resource> -n <namespace> --as system:serviceaccount:<namespace>:<service-account-name>
kubectl auth can-i get service --as system:serviceaccount:default:myscript
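If can-i answers yes but the raw API call still returns 403, the token being sent is the usual culprit. A sketch for re-checking with a freshly minted token (kubectl create token requires Kubernetes 1.24+; $SERVER is assumed to be set as in the question):
# Mint a short-lived token for the ServiceAccount.
TOKEN=$(kubectl create token myscript -n default)
# Call the API directly with that token; a working setup returns a ServiceList, not a 403.
curl --cacert /etc/kubernetes/pki/ca.crt \
  --header "Authorization: Bearer $TOKEN" \
  "$SERVER/api/v1/namespaces/default/services"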

Related

Registered Targets Disappear

I have a working EKS cluster. It is using an ALB for ingress.
When I apply a Service and then an Ingress, most of these work as expected. However, some target groups eventually have no registered targets. If I get the service IP address with kubectl describe svc my-service-name and manually register the endpoints in the target group, the pods are reachable again, but that's not a sustainable process.
Any ideas on what might be happening? Why doesn't EKS find the target groups as pods cycle?
Each service (secrets, deployment, service and ingress) consists of a set of .yaml files applied like:
deploy.sh
#!/bin/bash
set -e
kubectl apply -f ./secretsMap.yaml
kubectl apply -f ./configMap.yaml
kubectl apply -f ./deployment.yaml
kubectl apply -f ./service.yaml
kubectl apply -f ./ingress.yaml
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "site-bob"
  namespace: "next-sites"
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  type: NodePort
  selector:
    app: "site-bob"
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "site-bob"
  namespace: "next-sites"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/tags: Environment=Production,Group=api
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/ip-address-type: ipv4
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80},{"HTTPS":443}]'
    alb.ingress.kubernetes.io/load-balancer-name: eks-ingress-1
    alb.ingress.kubernetes.io/group.name: eks-ingress-1
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:402995436123:certificate/9db9dce3-055d-4655-842e-xxxxx
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '30'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '16'
    alb.ingress.kubernetes.io/success-codes: 200,201
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=60
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30
    alb.ingress.kubernetes.io/actions.ssl-redirect: >
      {
        "type": "redirect",
        "redirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}
      }
    alb.ingress.kubernetes.io/actions.svc-host: >
      {
        "type":"forward",
        "forwardConfig":{
          "targetGroups":[
            {"serviceName":"site-bob","servicePort": 80,"weight":20}
          ],
          "targetGroupStickinessConfig":{"enabled":true,"durationSeconds":200}
        }
      }
  labels:
    app: site-bob
spec:
  rules:
    - host: "staging-bob.imgeinc.net"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - backend:
              service:
                name: svc-host
                port:
                  name: use-annotation
            pathType: ImplementationSpecific
Something in my configuration had tagged two security groups as being owned by the cluster. When I checked the load balancer controller logs:
kubectl logs -n kube-system aws-load-balancer-controller-677c7998bb-l7mwb
I saw many lines like:
{"level":"error","ts":1641996465.6707578,"logger":"controller-runtime.manager.controller.targetGroupBinding","msg":"Reconciler error","reconciler group":"elbv2.k8s.aws","reconciler kind":"TargetGroupBinding","name":"k8s-nextsite-sitefest-89a6f0ff0a","namespace":"next-sites","error":"expect exactly one securityGroup tagged with kubernetes.io/cluster/imageinc-next-eks-4KN4v6EX for eni eni-0c5555fb9a87e93ad, got: [sg-04b2754f1c85ac8b9 sg-07b026b037dd4d6a4]"}
sg-07b026b037dd4d6a4 has description: EKS created security group applied to ENI that is attached to EKS Control Plane master nodes, as well as any managed workloads.
sg-04b2754f1c85ac8b9 has description: Security group for all nodes in the cluster.
I removed the tag:
{
  Key: 'kubernetes.io/cluster/_cluster name_',
  value: 'owned'
}
from sg-04b2754f1c85ac8b9
and the TargetGroups started to fill in and everything is now working. Both groups were created and tagged by Terraform. I suspect my worker group configuration is off.
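For reference, removing a single tag like that can also be done with the AWS CLI; here is a sketch using the security-group ID and cluster name from the log line above:
# Remove the kubernetes.io/cluster/<name> ownership tag from the extra security group,
# so the controller finds exactly one tagged security group per ENI.
aws ec2 delete-tags \
  --resources sg-04b2754f1c85ac8b9 \
  --tags Key=kubernetes.io/cluster/imageinc-next-eks-4KN4v6EX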
I was facing the same issue when creating the cluster with Terraform. It was solved by updating the AWS Load Balancer Controller from 2.3 to 2.4.4.
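If the controller was installed with Helm, the upgrade is roughly as follows (a sketch; the eks-charts repository and release name are the common defaults, not taken from the question):
# Upgrade the AWS Load Balancer Controller via its Helm chart.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
# Pulls the latest chart, which ships a v2.4.x controller at the time of writing.
helm upgrade aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<your-cluster-name>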

cert-manager + kubernetes wildcard problem [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 1 year ago.
I'm trying to create a wildcard cert on Rancher Kubernetes Engine behind a cloud load balancer.
After installing Rancher I have an Issuer:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  annotations:
    meta.helm.sh/release-name: rancher
    meta.helm.sh/release-namespace: cattle-system
  creationTimestamp: "2021-09-21T12:10:25Z"
  generation: 1
  labels:
    app: rancher
    app.kubernetes.io/managed-by: Helm
    chart: rancher-2.5.9
    heritage: Helm
    release: rancher
  name: rancher
  namespace: cattle-system
  resourceVersion: "1318"
  selfLink: /apis/cert-manager.io/v1/namespaces/cattle-system/issuers/rancher
  uid: #
spec:
  acme:
    email: #
    preferredChain: ""
    privateKeySecretRef:
      name: letsencrypt-production
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress: {}
status:
  acme:
    lastRegisteredEmail: #
    uri: https://acme-v02.api.letsencrypt.org/#
  conditions:
  - lastTransitionTime: "2021-09-21T12:10:27Z"
    message: The ACME account was registered with the ACME server
    reason: ACMEAccountRegistered
    status: "True"
    type: Ready
This is the Order:
kubectl describe order wildcard-dev-mctqj-4171528257 -n cattle-system
Name:         wildcard-dev-mctqj-4171528257
Namespace:    cattle-system
Labels:       <none>
Annotations:  cert-manager.io/certificate-name: wildcard-dev
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: wildcard-dev-2g4rc
API Version:  acme.cert-manager.io/v1
Kind:         Order
Metadata:
  Creation Timestamp:  2021-09-21T14:10:50Z
  Generation:          1
  Managed Fields:
    API Version:  acme.cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:cert-manager.io/certificate-name:
          f:cert-manager.io/certificate-revision:
          f:cert-manager.io/private-key-secret-name:
          f:kubectl.kubernetes.io/last-applied-configuration:
        f:ownerReferences:
          .:
          k:{"uid":"}
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:commonName:
        f:dnsNames:
        f:issuerRef:
          .:
          f:kind:
          f:name:
        f:request:
      f:status:
        .:
        f:authorizations:
        f:finalizeURL:
        f:state:
        f:url:
    Manager:    controller
    Operation:  Update
    Time:       2021-09-21T14:10:52Z
  Owner References:
    API Version:           cert-manager.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  CertificateRequest
    Name:                  wildcard-dev-mctqj
    UID:                   #
  Resource Version:        48930
  Self Link:               /apis/acme.cert-manager.io/v1/namespaces/cattle-system/orders/wildcard-dev-mctqj-4171528257
  UID:                     #
Spec:
  Common Name:  *.
  Dns Names:
    *.rancher-dev.com
  Issuer Ref:
    Kind:  Issuer
    Name:  rancher
  Request:
Status:
  Authorizations:
    Challenges:
      Token:        #######
      Type:         dns-01
      URL:          https://acme-v02.api.letsencrypt.org/acme/chall-v3/##
    Identifier:     rancher.dev.com
    Initial State:  pending
    URL:            https://acme-v02.api.letsencrypt.org/acme/authz-v3/##
    Wildcard:       true
  Finalize URL:     https://acme-v02.api.letsencrypt.org/acme/finalize/###
  State:            pending
  URL:              https://acme-v02.api.letsencrypt.org/acme/order/###
Events:
  Type     Reason  Age  From          Message
  ----     ------  ---  ----          -------
  Warning  Solver  49m  cert-manager  Failed to determine a valid solver configuration for the set of domains on the Order: no configured challenge solvers can be used for this challenge
The DNS names have been changed here, of course.
Certificate:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-dev
  namespace: cattle-system
spec:
  secretName: wildcard-dev
  issuerRef:
    kind: Issuer
    name: rancher
  commonName: '*.rancher.dev.com'
  dnsNames:
  - '*.rancher.dev.com'
I haven't created an Ingress yet.
I think the trouble is in the Order:
Type: dns-01
What am I doing wrong? Maybe create a second Issuer?
Actually, I want to create one wildcard certificate and clone it with kubed, because I need it in a lot of namespaces in the cluster. What can you advise, guys? :)
As described in serving-a-wildcard-to-ingress, the http01 solver does not support wildcards; you should use a dns01 solver for wildcard certificates instead.
See the documentation for the dns01 solver.
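For illustration, a second Issuer using a dns01 solver might look like the sketch below. The Cloudflare provider, the API-token Secret and the names are assumptions; use whichever DNS provider actually hosts the zone:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-dns01            # hypothetical name
  namespace: cattle-system
spec:
  acme:
    email: you@example.com           # hypothetical
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns01-key
    solvers:
    - dns01:
        cloudflare:                  # assumed DNS provider
          apiTokenSecretRef:
            name: cloudflare-api-token   # Secret holding the provider API token
            key: api-token
      selector:
        dnsZones:
        - rancher.dev.com
The Certificate above would then reference this Issuer via issuerRef.name: letsencrypt-dns01.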

How to create a limited access token for Kubernetes

I want to have a token to use in a piece of code that has limited access to my k8s cluster: it should only be able to read the replicas of StatefulSets and, if required, scale them. I don't want the person who uses that code to be able to launch new workloads or delete running ones.
Is this possible? If yes, how can I do it?
You need RBAC (Role Based Access Control) for this job.
Sample pod-reader role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
Bind this role to a user:
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read pods in the "default" namespace.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
Change the apiGroups, resources and verbs based on your requirements.
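For the StatefulSet use case in the question, a sketch bound to a ServiceAccount could look like the following. The names are hypothetical; the statefulsets/scale subresource is what allows scaling without granting broader write access:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: statefulset-scaler           # hypothetical name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: statefulset-scaler
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["statefulsets"]        # read replica counts
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["statefulsets/scale"]  # allow scaling only, no create/delete
  verbs: ["get", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: statefulset-scaler
  namespace: default
subjects:
- kind: ServiceAccount
  name: statefulset-scaler
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: statefulset-scaler
A token for your code can then be minted with kubectl create token statefulset-scaler (Kubernetes 1.24+), or read from a manually created ServiceAccount token Secret on older clusters.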

How to create service account for Spinnaker

I want to automate pipeline triggers by using a Fiat service account, so I'm following the Spinnaker doc: https://www.spinnaker.io/setup/security/authorization/service-accounts/ However, I'm having trouble running the curl command. Where should I run it? I tried running it on my local machine (which has Halyard installed) and in the Fiat pod in Kubernetes, but I get "cannot resolve http://front50.url:8080".
Create a Role for Spinnaker named spinnaker-role; you can edit the role as per your needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spinnaker-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["namespaces", "configmaps", "events", "replicationcontrollers", "serviceaccounts", "pods/logs"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods", "services", "secrets"]
  verbs: ["*"]
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["list", "get"]
- apiGroups: ["apps"]
  resources: ["controllerrevisions", "statefulsets"]
  verbs: ["list"]
- apiGroups: ["extensions", "apps"]
  resources: ["deployments", "replicasets", "ingresses"]
  verbs: ["*"]
ServiceAccount for Spinnaker:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinnaker-service-account
  namespace: default
The main part, the RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinnaker-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: spinnaker-role
subjects:
- namespace: default
  kind: ServiceAccount
  name: spinnaker-service-account
You can edit it as per your needs, for example changing statefulsets or adding deployments.
That URL is just an example and won't work. You need to access it using the service that exposes Front50. If you installed using Halyard, the service is probably exposed as spin-front50:8080.
I ran it in Halyard and used that URL.
(I know it's a really long time after your question :), I just happened to see this, and it's better late than never.)
You have to port-forward into the pod, and then curl your localhost on the port created for that pod during port-forwarding.
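A sketch of what that looks like, assuming a Halyard install where Front50 is exposed as spin-front50 in the spinnaker namespace:
# Forward local port 8080 to the Front50 service inside the cluster.
kubectl -n spinnaker port-forward svc/spin-front50 8080:8080
# In another terminal, run the curl from the Spinnaker guide against the forwarded port,
# e.g. the service-account endpoint:
curl http://localhost:8080/serviceAccounts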

Kubernetes PersistentVolume and PersistentVolumeClaim could be causing issues for my pod which crashes while copying logs

I have a PersistentVolume that I specified as the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv-shared
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/mypv-shared/
Then I created a PersistentVolumeClaim with the following specifications:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypv-shared-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
But when I create the PVC, running kubectl get pv shows that it is bound to a randomly generated PV
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
pvc-38c77920-a223-11e7-89cc-08002719b642   5Gi        RWX           Delete          Bound    default/mypv-shared   standard                16m
I believe this is causing issues for my pods when running tests, because I am not sure the pod is correctly mounting the specified directory. My pods crash when trying to copy over the test logs at the end of the run.
Is the cause really the persistentVolume/Claim or should I be looking into something else? Thanks!
Creating the PVC dynamically provisioned a PV instead of using the one you created manually with the hostPath. On the PVC, simply set .spec.storageClassName to an empty string ("").
From the documentation:
A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same ...
So create something like this (I've also added labels and a selector to make sure that the intended PV is paired up with the PVC; you might not need that constraint):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv-shared
  labels:
    name: mypv-shared
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/mypv-shared/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypv-shared-claim
spec:
  storageClassName: ""
  selector:
    matchLabels:
      name: mypv-shared
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
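After applying, a quick way to confirm the claim bound to the manually created PV rather than a dynamically provisioned one:
# The VOLUME column of the PVC should show mypv-shared, not a pvc-<uuid> name.
kubectl get pvc mypv-shared-claim
kubectl get pv mypv-shared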