Kubernetes ingress custom JWT authentication cache key - authentication

We are using Kubernetes ingress with external-service JWT authentication via auth-url as part of the ingress.
Now we want to use the auth-cache-key annotation to control caching of the JWT token. Currently our external auth service just responds with 200/401 after inspecting the token. All our components are backend microservices with REST APIs, so the incoming request may not be a UI request. How do we fill in the `auth-cache-key` for an incoming JWT token?
annotations:
  nginx.ingress.kubernetes.io/auth-url: http://auth-service/validate
  nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
  nginx.ingress.kubernetes.io/auth-cache-key: '$remote_user$http_authorization'
  nginx.ingress.kubernetes.io/auth-cache-duration: '1m'
  kubernetes.io/ingress.class: "nginx"
Looking at the example, $remote_user$http_authorization is what the Kubernetes documentation suggests, but I am not sure $remote_user will be set in our case, because this is not external basic auth. How do we decide on the auth cache key in this situation?
There is not much documentation or many examples around this.

Posting a general answer, as no further details or explanation were provided.
It's true that there is not much documentation around this, so I decided to dig into the NGINX Ingress Controller source code.
The value set in the nginx.ingress.kubernetes.io/auth-cache-key annotation ends up in the $externalAuth.AuthCacheKey template variable:
{{ if $externalAuth.AuthCacheKey }}
set $tmp_cache_key '{{ $server.Hostname }}{{ $authPath }}{{ $externalAuth.AuthCacheKey }}';
set $cache_key '';
As you can see, $externalAuth.AuthCacheKey is embedded in the $tmp_cache_key variable, which is then SHA-1 hashed, base64-encoded, and assigned to $cache_key using the NGINX Lua module:
rewrite_by_lua_block {
    ngx.var.cache_key = ngx.encode_base64(ngx.sha1_bin(ngx.var.tmp_cache_key))
}
Then $cache_key is used as the value of proxy_cache_key, which defines the key for caching:
proxy_cache_key "$cache_key";
Based on the above code, we can assume that any NGINX variable can be used in the nginx.ingress.kubernetes.io/auth-cache-key annotation. Please note that some variables are only available if the corresponding module is loaded.
Example: I set the following auth-cache-key annotation:
nginx.ingress.kubernetes.io/auth-cache-key: '$proxy_host$request_uri'
Then, on the NGINX Ingress controller pod, the file /etc/nginx/nginx.conf contains the following line:
set $tmp_cache_key '{my-host}/_external-auth-Lw-Prefix$proxy_host$request_uri';
If you set the auth-cache-key annotation to a nonexistent NGINX variable, NGINX will throw the following error:
nginx: [emerg] unknown "nonexistent_variable" variable
It's up to you which variables you need.
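For the JWT case from the question, a reasonable starting point (just a sketch following from the code above, not an official recommendation) is to key the cache on the Authorization header itself, since that is what your auth service inspects:
nginx.ingress.kubernetes.io/auth-cache-key: '$http_authorization'
nginx.ingress.kubernetes.io/auth-cache-duration: '1m'
Keep in mind that every distinct token then gets its own cache entry, and the cache duration should stay well below the token lifetime; otherwise a revoked or recently expired token could still be accepted from the cache.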
Please also check the following articles and topics:
A Guide to Caching with NGINX and NGINX Plus
external auth provider results in a lot of external auth requests

Related

RKE2 Authorized endpoint configuration help required

I have a Rancher 2.6.67 server and an RKE2 downstream cluster. The cluster was created without an authorized cluster endpoint. The article "How to add an authorized cluster endpoint to an RKE2 cluster created by Rancher" describes how to add it to an existing cluster; although the answer looks promising, I must still be missing some detail, because it does not work for me.
Here is what I did:
Created the file /var/lib/rancher/rke2/kube-api-authn-webhook.yaml with the following contents:
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:6440/v1/authenticate
users:
- name: Default
  user:
    insecure-skip-tls-verify: true
current-context: webhook
contexts:
- name: webhook
  context:
    user: Default
    cluster: Default
and added
"kube-apiserver-arg": [
"authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"
to the /etc/rancher/rke2/config.yaml.d/50-rancher.yaml file.
After restarting rke2-server I found the network configuration tab in Rancher and was able to enable the authorized endpoint. Here is where my success ends.
I tried creating a service account and retrieved its secret to use token authorization, but it failed when connecting directly to the API endpoint on the master.
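For reference, the kind of direct test I mean looks roughly like this (the service account name and API server address are placeholders, not the exact ones I used):
kubectl create serviceaccount test-sa
kubectl create token test-sa      # on clusters older than 1.24, read the token from the SA's secret instead
curl -k -H "Authorization: Bearer <token>" https://<rke2-master>:6443/api/v1/namespaces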
kube-api-auth pod logs this:
time="2022-10-06T08:42:27Z" level=error msg="found 1 parts of token"
time="2022-10-06T08:42:27Z" level=info msg="Processing v1Authenticate request..."
Also the log is full of messages like this:
E1006 09:04:07.868108 1 reflector.go:139] pkg/mod/github.com/rancher/client-go#v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:40.778350 1 reflector.go:139] pkg/mod/github.com/rancher/client-go#v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:45.171554 1 reflector.go:139] pkg/mod/github.com/rancher/client-go#v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterUserAttribute: failed to list *v3.ClusterUserAttribute: the server could not find the requested resource (get clusteruserattributes.meta.k8s.io)
I found that SA tokens will not work this way, so I tried to use a Rancher user token, but that fails as well:
time="2022-10-06T08:37:34Z" level=info msg=" ...looking up token for kubeconfig-user-qq9nrc86vv"
time="2022-10-06T08:37:34Z" level=error msg="clusterauthtokens.cluster.cattle.io \"cattle-system/kubeconfig-user-qq9nrc86vv\" not found"
Checking the cattle-system namespace, there are no ServiceAccount or Secret entries corresponding to the users created in Rancher; however, I found related ServiceAccount and Secret entries in cattle-impersonation-system.
I tried creating a new user, but that too only resulted in new entries in the cattle-impersonation-system namespace, so I presume kube-api-auth wrongly assumes the secrets are located in the cattle-system namespace.
Now the questions:
Can I authenticate with the downstream RKE2 cluster using normal SA tokens (not ones created through the Rancher server)? If so, how?
What did I do wrong when adding the webhook authentication configuration? How do I make it work?
I noticed that since I made the modifications described above, I cannot download the kubeconfig file from the Rancher UI for this cluster. What went wrong there?
Thanks in advance for any advice.

MLflow authorization with SPNEGO

I saw this topic about Kerberos authentication from 2020: https://github.com/mlflow/mlflow/issues/2678. Our team is trying to authenticate with Kerberos via SPNEGO. We set up SPNEGO on the nginx server and it works fine: we get a 200 response when we curl the MLflow HTTP URI. BUT we can't do it with the MLflow environment variables.
The question is: does MLflow have some feature for SPNEGO authentication, or does it only support these environment variables and authentication methods:
MLFLOW_TRACKING_USERNAME and MLFLOW_TRACKING_PASSWORD - username and password to use with HTTP Basic authentication. To use Basic authentication, you must set both environment variables .
MLFLOW_TRACKING_TOKEN - token to use with HTTP Bearer authentication. Basic authentication takes precedence if set.
MLFLOW_TRACKING_INSECURE_TLS - If set to the literal true, MLflow does not verify the TLS connection, meaning it does not validate certificates or hostnames for https:// tracking URIs. This flag is not recommended for production environments. If this is set to true then MLFLOW_TRACKING_SERVER_CERT_PATH must not be set.
MLFLOW_TRACKING_SERVER_CERT_PATH - Path to a CA bundle to use. Sets the verify param of the requests.request function (see https://requests.readthedocs.io/en/master/api/). When you use a self-signed server certificate you can use this to verify it on client side. If this is set MLFLOW_TRACKING_INSECURE_TLS must not be set (false).
MLFLOW_TRACKING_CLIENT_CERT_PATH - Path to ssl client cert file (.pem). Sets the cert param of the requests.request function (see https://requests.readthedocs.io/en/master/api/). This can be used to use a (self-signed) client certificate.
I looked at the source code. No, the mlflow.utils.rest_utils.http_request function doesn't support SPNEGO in any way – it can only send HTTP 'Basic' or 'Bearer' authorization headers.
However, it should be relatively easy to change it to generate a 'Negotiate' header using pyspnego, or even to use requests-gssapi given that it already uses Requests internally:
# For Linux:
import requests_gssapi
# For Windows:
#import requests_negotiate_sspi

def http_request(...):
    ...
    if not auth_str:
        # For Linux:
        kwargs["auth"] = requests_gssapi.HTTPSPNEGOAuth()
        # For Windows:
        #kwargs["auth"] = requests_negotiate_sspi.HttpNegotiateAuth()
    ...
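As a quick sanity check outside MLflow (mirroring the curl test from the question), Requests plus requests-gssapi can hit the tracking URI directly; the URL below is a placeholder for your nginx-fronted MLflow server:
import requests
import requests_gssapi

# Placeholder URL - substitute the SPNEGO-protected MLflow tracking URI
resp = requests.get(
    "https://mlflow.example.com/",
    auth=requests_gssapi.HTTPSPNEGOAuth(),
)
print(resp.status_code)  # expect 200 if the Negotiate handshake succeeds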

Kubernetes internal nginx ingress controller with SSL termination & ssl-passthrough

I am very new to using helm charts for deploying containers, and I have also never worked with nginx controllers or ingress controllers.
However, I am being asked to look into improving our internal nginx ingress controllers to allow for SSL-passthrough.
Right now we have external (public-facing) and internal controllers, where the public ones allow SSL passthrough and the internal ones do SSL termination.
I have also been told that nginx is a reverse proxy, and that it works based on headers in the URL.
I am hoping someone can help me out on this helm chart that I have for the internal ingress controllers.
Currently I am under the impression that having SSL termination as well as SSL-passthrough on the same ingress controllers would not be possible.
Answered this one myself: https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru
Our current (internal) ingress code:
---
rbac:
  create: true
controller:
  ingressClass: nginx-internal
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu:110:certificate/62-b3
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: !!str 443
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: !!str 3600
    targetPorts:
      https: 80
  replicaCount: 3
defaultBackend:
  replicaCount: 3
Can I simply add the following?
controller:
  extraArgs:
    enable-ssl-passthrough: ""
Note: The above piece of code is what we use on our external ingress controller.
Additionally, I found this:
Ingress and SSL Passthrough
Can I just go and mix the annotations? Or do annotations only care about the 'top-level domain' they come from?
e.g.:
service.beta.kubernetes.io
nginx.ingress.kubernetes.io
Both come from the domain kubernetes.io, or does the sub-domain make a difference?
I mean: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
That page doesn't show any of the service.beta annotations.
What's the difference between the extraArg ssl-passthrough configuration and the ssl-passthrough configuration in the annotations?
I'm looking mostly for an answer on how to get the SSL-passthrough working without breaking the SSL-termination on the internal ingress controllers.
However, any extra information to gain more insight and knowledge as far as my other questions go would also be very appreciated :)
So I found the answer to my own question(s):
The annotations appear to be 'configuration items'. I'm using quotes because I can't find a better term.
The extraArgs parameter is where you can pass any parameter to the controller as if it were a command-line parameter (see the sketch below).
And I think it is also safe to say that the annotations can come from any sub-domain of the same top-level domain; I have not found any that came from a domain other than kubernetes.io.
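To make the extraArgs point concrete, here is a sketch of how such an entry is commonly understood to end up on the controller (the rendered args below are illustrative, not copied from a real deployment):
controller:
  extraArgs:
    enable-ssl-passthrough: ""
# which the chart renders on the controller container roughly as:
#   args:
#     - /nginx-ingress-controller
#     - --enable-ssl-passthrough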
To get my SSL-passthrough ingress controller to work side by side with the SSL-termination controller, the helm chart looks as follows:
---
rbac:
  create: true
controller:
  ingressClass: nginx-internal-ssl-passthrough
  service:
    annotations:
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "tag3=value3, tag3=value3, tag3=value3, tag3=value3"
    targetPorts:
      https: 443
  replicaCount: 2
  extraArgs:
    enable-ssl-passthrough: ""
defaultBackend:
  replicaCount: 2
Took me about 2 days of researching/searching the web and 6 deployments to get the whole setup working with an AWS NLB, SSL passthrough enabled, cross-zone load balancing, etc. But after having found the following pages it went pretty fast:
https://kubernetes.github.io/ingress-nginx/deploy/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
https://kubernetes.io/docs/concepts/services-networking/service/
This last page helped me a lot. If someone else needs to deploy SSL termination and SSL passthrough for either public or private connections, I hope this helps too.
From here you can find out how to redirect the HTTPS traffic to the pod without SSL termination:
https://stackoverflow.com/a/66767691/1938507

How to set custom headers with Keycloak Gatekeeper?

I have Keycloak and Keycloak Gatekeeper set up in OpenShift, and Gatekeeper is acting as a proxy for a running application.
The application that Keycloak Gatekeeper is proxying requires a custom cookie to be set, so I figured I could use Gatekeeper's custom header configuration to set this; however, I'm running into issues.
Configuration looks like:
discovery-url: https://keycloak-url.com/auth/realms/MyRealm
client-id: MyClient
client-secret: MyClientSecret
cookie-access-name: my.token
encryption_key: MY_KEY
listen: :3000
redirection-url: https://gatekeeper-url.com
upstream-url: https://app-url.com
verbose: true
resources:
- uri: /home/*
  roles:
  - MyClient:general-access
headers:
  Set-Cookie: isLoggedIn=true
After re-deploying and running through the auth flow, the upstream URL/application is not receiving the custom header. I tried with multiple headers (key/value) but can't seem to get it working or find where that header is being injected in the flow.
I've also checked logs and haven't been able to find anything super useful.
Sample Gatekeeper Config
Gatekeeper Custom Headers Docs
Any suggestions/ideas on how to get this working?
Remove Set-Cookie. Simply add:
headers:
  isLoggedIn: true
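In context, the config from the question would then look roughly like this (isLoggedIn is just the example value from the question; as far as I can tell, Gatekeeper's headers option injects plain request headers into the upstream request, which is why a Set-Cookie entry does not behave like a response cookie):
resources:
- uri: /home/*
  roles:
  - MyClient:general-access
headers:
  isLoggedIn: true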

ERR_SSL_VERSION_OR_CIPHER_MISMATCH from AWS API Gateway into Lambda

I have set up a Lambda and attached an API Gateway deployment to it. The tests in the gateway console all work fine. I created an AWS certificate for *.hazeapp.net. I created a custom domain in API Gateway and attached that certificate. In the Route 53 zone, I created the alias record and used the target that came up under API Gateway (the only one available). I named the alias rest.hazeapp.net. My client gets the ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. Curl indicates that the TLS server handshake failed, which agrees with the SSL error. Curl indicates that the certificate CA checks out.
Am I doing something wrong?
I had this problem when my DNS entry pointed directly at the API Gateway deployment rather than at the domain name backing the custom domain.
To find the domain name to point to:
aws apigateway get-domain-name --domain-name "<YOUR DOMAIN>"
The response contains the domain name to use. In my case I had a Regional deployment so the result was:
{
    "domainName": "<DOMAIN_NAME>",
    "certificateUploadDate": 1553011117,
    "regionalDomainName": "<API_GATEWAY_ID>.execute-api.eu-west-1.amazonaws.com",
    "regionalHostedZoneId": "...",
    "regionalCertificateArn": "arn:aws:acm:eu-west-1:<ACCOUNT>:certificate/<CERT_ID>",
    "endpointConfiguration": {
        "types": [
            "REGIONAL"
        ]
    }
}
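In other words, the Route 53 alias should target regionalDomainName (with regionalHostedZoneId as the alias hosted zone), not the execute-api URL of the deployment itself. A minimal sketch of the record change, using the rest.hazeapp.net name from the question and placeholder IDs:
aws route53 change-resource-record-sets \
  --hosted-zone-id <YOUR_ROUTE53_ZONE_ID> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "rest.hazeapp.net",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<regionalHostedZoneId from the output above>",
          "DNSName": "<regionalDomainName from the output above>",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'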