Rancher v2.4.4 Istio end-user authentication error no matches for kind "RequestAuthentication" - authentication

I'm trying to use the Istio end-user authentication example with the latest Rancher, but I'm getting the error below
unable to recognize "STDIN": no matches for kind "RequestAuthentication" in version "security.istio.io/v1beta1"
when I run the command below:
kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: "jwt-example"
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/jwks.json"
EOF

According to the support matrix on the Rancher website, the Istio version it ships is 1.4.7.
The RequestAuthentication kind was introduced in Istio 1.5, so you are applying a resource that does not exist in the version you are running. See Istio's upgrade notes for 1.5. Since Rancher does not bundle the latest Istio, you will have to apply the old Policy resources instead. You can find the 1.4 docs at https://archive.istio.io/v1.4/docs/
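For example, a rough 1.4-style equivalent of the resource above uses the authentication.istio.io/v1alpha1 Policy kind (a sketch based on the archived 1.4 docs; the target name and the release-1.4 jwks path are assumptions):

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: jwt-example
  namespace: foo
spec:
  targets:
  - name: httpbin  # the target Service, assumed from the selector above
  origins:
  - jwt:
      issuer: "testing@secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.4/security/tools/jwt/samples/jwks.json"
  principalBinding: USE_ORIGIN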
Hope this helps.

Related

How can I configure the AdmissionConfiguration > PodSecurity > PodSecurityConfiguration in an EKS cluster?

If I understand right from Apply Pod Security Standards at the Cluster Level, in order to have a PSS (Pod Security Standard) as default for the whole cluster I need to create an AdmissionConfiguration in a file that the API server needs to consume during cluster creation.
I don't see any way to configure or provide the AdmissionConfiguration at CreateCluster, and I'm not sure how to provide this AdmissionConfiguration in a managed EKS cluster.
From the tutorials that use KinD or minikube it seems that the AdmissionConfiguration must be in a file that is referenced in the cluster-config.yaml, but if I'm not mistaken the EKS API server is managed and does not allow to change or even see this file.
The GitHub issue aws/container-roadmap "Allow Access to AdmissionConfiguration" seems to suggest that there is currently no possibility of providing an AdmissionConfiguration at creation, but on the other hand aws-eks-best-practices says "These exemptions are applied statically in the PSA admission controller configuration as part of the API server configuration".
So, is there a way to provide a PodSecurityConfiguration for the whole cluster in EKS? Or am I forced to just use per-namespace labels?
See also Enforce Pod Security Standards by Configuring the Built-in Admission Controller and EKS Best practices PSS and PSA.
I don't think there is any way currently in EKS to provide configuration for the built-in PSA controller (Pod Security Admission controller).
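For reference, what the built-in controller does let you do is per-namespace enforcement via labels (the namespace name below is a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace  # placeholder name
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest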
But if you want to implement a cluster-wide default for PSS (Pod Security Standards) you can do that by installing the official pod-security-webhook as a Dynamic Admission Controller in EKS.
git clone https://github.com/kubernetes/pod-security-admission
cd pod-security-admission/webhook
make certs
kubectl apply -k .
The default podsecurityconfiguration.yaml in pod-security-admission/webhook/manifests/020-configmap.yaml allows EVERYTHING, so you should edit it and write something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-security-webhook
  namespace: pod-security-webhook
data:
  podsecurityconfiguration.yaml: |
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      # Array of authenticated usernames to exempt.
      usernames: []
      # Array of runtime class names to exempt.
      runtimeClasses: []
      # Array of namespaces to exempt.
      namespaces: ["policy-test2"]
then
kubectl apply -k .
kubectl -n pod-security-webhook rollout restart deployment/pod-security-webhook # otherwise the pods won't reread the configuration changes
After those changes you can verify that the default forbids privileged pods with:
kubectl --context aihub-eks-terraform create ns policy-test1
kubectl --context aihub-eks-terraform -n policy-test1 run --image=ecerulm/ubuntu-tools:latest --rm -ti rubelagu-$RANDOM --privileged
Error from server (Forbidden): admission webhook "pod-security-webhook.kubernetes.io" denied the request: pods "rubelagu-32081" is forbidden: violates PodSecurity "restricted:latest": privileged (container "rubelagu-32081" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "rubelagu-32081" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "rubelagu-32081" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "rubelagu-32081" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "rubelagu-32081" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Note that you get the error forbidding privileged pods even though the namespace policy-test1 has no pod-security.kubernetes.io/enforce label, so you know that this rule comes from the pod-security-webhook that we just installed and configured.
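You can double-check that the namespace carries no PSA labels of its own with:

kubectl --context aihub-eks-terraform get ns policy-test1 --show-labels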
Now if you want to create a pod you will be forced to create it in a way that complies with the restricted PSS, by specifying runAsNonRoot, seccompProfile.type, capabilities, and so on. For example:
apiVersion: v1
kind: Pod
metadata:
  name: test-1
spec:
  restartPolicy: Never
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: test
    image: ecerulm/ubuntu-tools:latest
    imagePullPolicy: Always
    command: ["/bin/bash", "-c", "--", "sleep 900"]
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL

Secure mTLS communication within Istio-knative services + external requests

We are converting existing k8s services to use istio & knative. The services receive requests from external users as well as from within the cluster. We are trying to setup Istio AuthorizationPolicy to achieve the below requirements:
Certain paths (like docs/healthchecks) should not require any special header or anything and must be accessible from anywhere
Health and metric collection paths required by Knative must be accessible only to the Knative controllers
Any request coming from outside the cluster (through knative-serving/knative-ingress-gateway basically) must contain a key header matching a pre-shared key
Any request coming from any service within the cluster can access all the paths
Below is a sample of what I am trying. I am able to get the first 3 requirements working but not the last one...
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-svc
  namespace: my-ns
spec:
  selector:
    matchLabels:
      serving.knative.dev/service: my-svc
  action: "ALLOW"
  rules:
  - to:
    - operation:
        methods:
        - "GET"
        paths:
        - "/docs"
        - "/openapi.json"
        - "/redoc"
        - "/rest/v1/healthz"
  - to:
    - operation:
        methods:
        - "GET"
        paths:
        - "/healthz*"
        - "/metrics*"
    when:
    - key: "request.headers[User-Agent]"
      values:
      - "Knative-Activator-Probe"
      - "Go-http-client/1.1"
  - to:
    - operation:
        paths:
        - "/rest/v1/myapp*"
    when:
    - key: "request.headers[my-key]"
      values:
      - "asjhfhjgdhjsfgjhdgsfjh"
  - from:
    - source:
        namespaces:
        - "*"
We have made no changes to the mTLS configuration provided by default by the Istio-Knative setup, so assume that the mTLS mode is currently PERMISSIVE.
Details of the tech stack involved:
AWS EKS - Version 1.21
Knative Serving - Version 1.1 (with Istio 1.11.5)
I'm not an Istio expert, but you might be able to express the last policy based on either the ingress gateway (have one which is listening only on a ClusterIP address), or based on the SourceIP being within the cluster. For the latter, I'd want to test that Istio is using the actual SourceIP and not substituting in the Forwarded header's IP address (a different reasonable configuration).
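A sketch of the source-IP variant (the CIDR is an assumption; substitute your cluster's pod/VPC range) would replace the last rule with an ipBlocks match:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-svc-in-cluster
  namespace: my-ns
spec:
  selector:
    matchLabels:
      serving.knative.dev/service: my-svc
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks:
        - "10.0.0.0/8"  # assumed in-cluster CIDR, replace with your own

The caveat above about verifying the actual source address applies: traffic forwarded by the ingress gateway will also carry an in-cluster source IP, so this must be tested. Also note that source.namespaces (the namespaces: ["*"] rule in the policy) matches on the peer identity established by mTLS, so with PERMISSIVE mode an in-cluster caller sending plaintext has no identity and will not match that rule, which may be why the fourth requirement fails.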

cert-manager + kubernetes wildcard problem [closed]

I'm trying to create a wildcard cert on Rancher Kubernetes Engine behind a cloud load balancer.
After installing Rancher I have an Issuer:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  annotations:
    meta.helm.sh/release-name: rancher
    meta.helm.sh/release-namespace: cattle-system
  creationTimestamp: "2021-09-21T12:10:25Z"
  generation: 1
  labels:
    app: rancher
    app.kubernetes.io/managed-by: Helm
    chart: rancher-2.5.9
    heritage: Helm
    release: rancher
  name: rancher
  namespace: cattle-system
  resourceVersion: "1318"
  selfLink: /apis/cert-manager.io/v1/namespaces/cattle-system/issuers/rancher
  uid: #
spec:
  acme:
    email: #
    preferredChain: ""
    privateKeySecretRef:
      name: letsencrypt-production
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress: {}
status:
  acme:
    lastRegisteredEmail: #
    uri: https://acme-v02.api.letsencrypt.org/#
  conditions:
  - lastTransitionTime: "2021-09-21T12:10:27Z"
    message: The ACME account was registered with the ACME server
    reason: ACMEAccountRegistered
    status: "True"
    type: Ready
This is the order:
kubectl describe order wildcard-dev-mctqj-4171528257 -n cattle-system

Name:         wildcard-dev-mctqj-4171528257
Namespace:    cattle-system
Labels:       <none>
Annotations:  cert-manager.io/certificate-name: wildcard-dev
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: wildcard-dev-2g4rc
API Version:  acme.cert-manager.io/v1
Kind:         Order
Metadata:
  Creation Timestamp:  2021-09-21T14:10:50Z
  Generation:          1
  Managed Fields:
    API Version:  acme.cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:cert-manager.io/certificate-name:
          f:cert-manager.io/certificate-revision:
          f:cert-manager.io/private-key-secret-name:
          f:kubectl.kubernetes.io/last-applied-configuration:
        f:ownerReferences:
          .:
          k:{"uid":"}
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:commonName:
        f:dnsNames:
        f:issuerRef:
          .:
          f:kind:
          f:name:
        f:request:
      f:status:
        .:
        f:authorizations:
        f:finalizeURL:
        f:state:
        f:url:
    Manager:    controller
    Operation:  Update
    Time:       2021-09-21T14:10:52Z
  Owner References:
    API Version:           cert-manager.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  CertificateRequest
    Name:                  wildcard-dev-mctqj
    UID:                   #
  Resource Version:        48930
  Self Link:               /apis/acme.cert-manager.io/v1/namespaces/cattle-system/orders/wildcard-dev-mctqj-4171528257
  UID:                     #
Spec:
  Common Name:  *.
  Dns Names:
    *.rancher-dev.com
  Issuer Ref:
    Kind:  Issuer
    Name:  rancher
  Request:
Status:
  Authorizations:
    Challenges:
      Token:        #######
      Type:         dns-01
      URL:          https://acme-v02.api.letsencrypt.org/acme/chall-v3/##
    Identifier:     rancher.dev.com
    Initial State:  pending
    URL:            https://acme-v02.api.letsencrypt.org/acme/authz-v3/##
    Wildcard:       true
  Finalize URL:     https://acme-v02.api.letsencrypt.org/acme/finalize/###
  State:            pending
  URL:              https://acme-v02.api.letsencrypt.org/acme/order/###
Events:
  Type     Reason  Age  From          Message
  ----     ------  ---  ----          -------
  Warning  Solver  49m  cert-manager  Failed to determine a valid solver configuration for the set of domains on the Order: no configured challenge solvers can be used for this challenge
(The DNS names are changed, of course.)
Certificate:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-dev
  namespace: cattle-system
spec:
  secretName: wildcard-dev
  issuerRef:
    kind: Issuer
    name: rancher
  commonName: '*.rancher.dev.com'
  dnsNames:
  - '*.rancher.dev.com'
I haven't created an Ingress yet.
I think the trouble is in the order:
Type: dns-01
What am I doing wrong?
Maybe create a second Issuer?
Actually, I want to create one wildcard certificate and clone it with kubed, because I need the same wildcard cert in a lot of namespaces. What can you advise?
As written in serving-a-wildcard-to-ingress, the http01 solver does not support wildcards. Instead you should use dns01 for wildcard certificates.
See the documentation for the dns01 solver.
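A minimal sketch of a dns01 Issuer, assuming Cloudflare hosts the zone (the solver block differs per DNS provider, and the Secret with the API token must be created beforehand):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-dns01
  namespace: cattle-system
spec:
  acme:
    email: you@example.com  # placeholder
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns01
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token  # hypothetical pre-created Secret
            key: api-token

Then point the Certificate's issuerRef at this Issuer instead of the http01-only rancher one.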

502 Bad Gateway Error After Instituting AuthorizationPolicy from Istio Documentation

I'm using Istio 1.5.4 and trying to apply the example referenced here:
https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#end-user-authentication
Everything works as expected until defining the AuthorizationPolicy - the moment I introduce it I get a 502 Bad Gateway error, regardless of whether I provide a valid JWT token or not.
On a secondary note, I'm able to get the AuthorizationPolicy to work properly if I apply the example at my own service's namespace level. Then RequestAuthentication + AuthorizationPolicy work as expected, but I run into a different roadblock where internal services now also require a valid JWT token.
authentication/authorization internal service issue
I've discovered that the 502 is a result of my load balancer health check failing due to the AuthorizationPolicy applied. Adding a condition on the User-Agent header for my health check probe seems to do the trick, but then I get back the net effect where requests with no token are still getting through.
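One way to exempt the health check without the User-Agent condition (a sketch; the namespace, label, and path are assumptions based on the description) is a separate ALLOW policy for the probe path:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-healthcheck
  namespace: foo  # assumed namespace
spec:
  selector:
    matchLabels:
      app: httpbin  # assumed workload label
  action: ALLOW
  rules:
  - to:
    - operation:
        paths: ["/healthz"]  # assumed health-check path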
Requests with no token are getting through because that's how you configured your AuthorizationPolicy; that's how source: requestPrincipals: ["*"] works. Take a look at this example.
RequestAuthentication defines what request authentication methods are supported by a workload. It will reject a request if the request contains invalid authentication information, based on the configured authentication rules. A request that does not contain any authentication credentials will be accepted but will not have any authenticated identity. To restrict access to authenticated requests only, this should be accompanied by an authorization rule. Examples:
Require JWT for all requests to workloads that have the label app: httpbin:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
  - issuer: "issuer-foo"
    jwksUri: https://example.com/.well-known/jwks.json
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
Use requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"] instead, as mentioned here; then it will accept only requests with a token.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: frontend
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/jwks.json"
The second resource is an AuthorizationPolicy, which ensures that all requests have a JWT - and rejects requests that do not, returning a 403 error.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
Once we apply these resources, we can curl the Istio ingress gateway without a JWT, and see that the AuthorizationPolicy is rejecting our request because we did not supply a token:
$ curl ${INGRESS_IP}
RBAC: access denied
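For reference, the demo token from the Istio samples (matching the release-1.5 jwks.json above) can be fetched with:

VALID_JWT=$(curl -s https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/demo.jwt)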
Finally, if we curl with a valid JWT, we can successfully reach the frontend via the IngressGateway:
$ curl --header "Authorization: Bearer ${VALID_JWT}" ${INGRESS_IP}
Hello World! /

Istio Authorization with JWT

I am running Istio 1.0.2 and am unable to configure service authorization based on JWT claims against Azure AD.
I have successfully configured and validated Azure AD OIDC JWT end-user authentication and it works fine.
Now I'd like to configure RBAC Authorization using request.auth.claims["preferred_username"] attribute.
I've created a ServiceRole and ServiceRoleBinding like below:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
name: service-reader
namespace: default
spec:
rules:
- services: ["myservice.default.svc.cluster.local"]
methods: ["GET"]
paths: ["*/products"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
name: service-reader-binding
namespace: default
spec:
subjects:
- properties:
source.principal: "*"
request.auth.claims["preferred_username"]: "user#company.com"
roleRef:
kind: ServiceRole
name: "service-reader"
However, I keep getting 403 Forbidden from the service proxy, even though the preferred_username claim in the JWT from the Authorization header is correct.
If I comment out the request.auth.claims["preferred_username"]: "user@company.com" line, the request succeeds.
Can anyone point me in the right direction regarding configuring authorization based on oidc and jwt?
Never mind, I found the problem.
I was missing the user: "*" check to allow all users.
So under subjects it should say:
subjects:
- user: "*"
  properties:
    source.principal: "*"
    request.auth.claims["preferred_username"]: "user@company.com"
That fixes it.
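A quick way to verify (the host and token variable are placeholders):

# Expect 200 with a JWT whose preferred_username is user@company.com, 403 otherwise.
curl -H "Authorization: Bearer ${AZURE_AD_JWT}" http://myservice.default.svc.cluster.local/products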