How to assert an OAM token in Helidon using OIDC?
I was trying to assert an OAM token but I am getting the error shown below; asserting an IDCS token works fine.
Exception in thread "main" io.helidon.common.Errors$ErrorMessagesException: [
FATAL: Failed to load metadata: io.helidon.common.configurable.ResourceException: Failed to open stream to uri: https://{{OAM_host}}:{{port}}/.well-known/openid-configuration
at io.helidon.common.configurable.ResourceException: Failed to open stream to uri: https://{{OAM_host}}:{{port}}/.well-known/openid-configuration,
FATAL: When token_endpoint is not explicitly defined, the OIDC metadata must exist at class io.helidon.security.providers.oidc.common.OidcConfig$Builder,
FATAL: When authorization_endpoint is not explicitly defined, the OIDC metadata must exist at class io.helidon.security.providers.oidc.common.OidcConfig$Builder,
FATAL: When jwks_uri is not explicitly defined, the OIDC metadata must exist at class io.helidon.security.providers.oidc.common.OidcConfig$Builder]
And I added the OAM details to the application configuration:
providers:
  - abac:
  - oidc:
      client-id: "${ALIAS=security.properties.client-id}"
      client-secret: "${ALIAS=security.properties.client-secret}"
      identity-uri: "${ALIAS=security.properties.uri}"
      # A prefix used for custom scopes
      scope-audience: "${ALIAS=security.properties.scope-audience}"
      audience: "${ALIAS=security.properties.audience}"
      proxy-host: "${ALIAS=security.properties.proxy-host}"
      frontend-uri: "${ALIAS=security.properties.frontend-uri}"
      cookie-name: "OIDC_SESSION"
      cookie-same-site: "Lax"
      header-use: true
      redirect: false
Am I missing something here?
If you look at your exception, it points out that the endpoint is not valid:
https://{{OAM_host}}:{{port}}/.well-known/openid-configuration
This means your configuration contains {{OAM_host}} and {{port}} - such placeholders are not replaced by Helidon configuration.
In Helidon 1.x you can use ${ALIAS=key} to reference a key.
Since Helidon 2.0.0-M2 you can use ${key} to reference a key.
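So the value the alias resolves to must already be a concrete URI. A minimal sketch, assuming hypothetical host and port values for the OAM server, is to make the referenced key expand to something Helidon can actually reach:

security.properties.uri: "https://oam.example.com:14100"

With that in place, identity-uri resolves to https://oam.example.com:14100 and Helidon can load https://oam.example.com:14100/.well-known/openid-configuration at startup.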
I have a Rancher 2.6.67 server and an RKE2 downstream cluster. The cluster was created without an authorized cluster endpoint. The article "How to add an authorised cluster endpoint to a RKE2 cluster created by Rancher" describes how to add one to an existing cluster, but although the answer looks promising, I must still be missing some detail, because it does not work for me.
Here is what I did:
Created /var/lib/rancher/rke2/kube-api-authn-webhook.yaml file with contents:
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:6440/v1/authenticate
users:
- name: Default
  user:
    insecure-skip-tls-verify: true
current-context: webhook
contexts:
- name: webhook
  context:
    user: Default
    cluster: Default
and added
"kube-apiserver-arg": [
"authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"
to the /etc/rancher/rke2/config.yaml.d/50-rancher.yaml file.
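For reference, a sketch of the same setting in RKE2's usual YAML list form (assuming no other kube-apiserver args are present in that file):

# /etc/rancher/rke2/config.yaml.d/50-rancher.yaml
kube-apiserver-arg:
  - "authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"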
After restarting rke2-server I found the network configuration tab in Rancher and was able to enable the authorized endpoint. Here is where my success ends.
I tried to create a ServiceAccount and retrieved its secret to use token authorization, but it failed when connecting directly to the API endpoint on the master.
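Roughly the steps I used (a sketch; the ServiceAccount name and master address are placeholders):

kubectl create serviceaccount test-sa
# v1.24+; on older clusters, read the token from the SA's secret instead
TOKEN=$(kubectl create token test-sa)
kubectl --server=https://<master-ip>:6443 --token="$TOKEN" \
  --insecure-skip-tls-verify get pods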
kube-api-auth pod logs this:
time="2022-10-06T08:42:27Z" level=error msg="found 1 parts of token"
time="2022-10-06T08:42:27Z" level=info msg="Processing v1Authenticate request..."
Also the log is full of messages like this:
E1006 09:04:07.868108 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:40.778350 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:45.171554 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterUserAttribute: failed to list *v3.ClusterUserAttribute: the server could not find the requested resource (get clusteruserattributes.meta.k8s.io)
I found that SA tokens will not work this way, so I tried to use a Rancher user token, but that fails as well:
time="2022-10-06T08:37:34Z" level=info msg=" ...looking up token for kubeconfig-user-qq9nrc86vv"
time="2022-10-06T08:37:34Z" level=error msg="clusterauthtokens.cluster.cattle.io \"cattle-system/kubeconfig-user-qq9nrc86vv\" not found"
Checking the cattle-system namespace, there are no SA and secret entries corresponding to the users created in Rancher; however, I found related SA and secret entries in cattle-impersonation-system.
I tried creating a new user, but that too only resulted in new entries in the cattle-impersonation-system namespace, so I presume kube-api-auth wrongly assumes the secrets live in the cattle-system namespace.
Now the questions:
Can I authenticate with the downstream RKE2 cluster using normal SA tokens (not ones created through the Rancher server)? If so, how?
What did I do wrong when adding the webhook authentication configuration? How do I make it work?
I noticed that since I made the modifications described above, I cannot download the kubeconfig file from the Rancher UI for this cluster. What went wrong there?
Thanks in advance for any advice.
We are leveraging a Kubernetes ingress with external-service JWT authentication using auth-url as part of the ingress.
Now we want to use the auth-cache-key annotation to control caching of the JWT token. Currently our external auth service just responds with 200/401 by looking at the token. All our components are backend microservices with REST APIs, and an incoming request may not be a UI request. How do we fill in the `auth-cache-key` for an incoming JWT token?
annotations:
  nginx.ingress.kubernetes.io/auth-url: http://auth-service/validate
  nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
  nginx.ingress.kubernetes.io/auth-cache-key: '$remote_user$http_authorization'
  nginx.ingress.kubernetes.io/auth-cache-duration: '1m'
  kubernetes.io/ingress.class: "nginx"
Looking at the example, $remote_user$http_authorization is what the Kubernetes documentation specifies. However, we are not sure $remote_user will be set in our case, because this is not external basic auth. How do we decide on the auth cache key in this case?
Not enough examples or documentation exist around this.
Posting a general answer, as no further details or explanation were provided.
It's true that there is not much documentation around, so I decided to dig into the NGINX Ingress source code.
The value set in the annotation nginx.ingress.kubernetes.io/auth-cache-key is the variable $externalAuth.AuthCacheKey in the code:
{{ if $externalAuth.AuthCacheKey }}
set $tmp_cache_key '{{ $server.Hostname }}{{ $authPath }}{{ $externalAuth.AuthCacheKey }}';
set $cache_key '';
As you can see, $externalAuth.AuthCacheKey is used in the variable $tmp_cache_key, which is then SHA-1 hashed, base64-encoded, and set as the variable $cache_key using the NGINX Lua module:
rewrite_by_lua_block {
ngx.var.cache_key = ngx.encode_base64(ngx.sha1_bin(ngx.var.tmp_cache_key))
}
Then $cache_key is used to set the variable $proxy_cache_key, which defines the key for caching:
proxy_cache_key "$cache_key";
Based on the above code, we can assume that we can use any NGINX variable to set nginx.ingress.kubernetes.io/auth-cache-key annotation. Please note that some variables are only available if the corresponding module is loaded.
Example: I set the following auth-cache-key annotation:
nginx.ingress.kubernetes.io/auth-cache-key: '$proxy_host$request_uri'
Then, on the NGINX Ingress controller pod, the file /etc/nginx/nginx.conf contains the following line:
set $tmp_cache_key '{my-host}/_external-auth-Lw-Prefix$proxy_host$request_uri';
If you set the auth-cache-key annotation to a nonexistent NGINX variable, NGINX will throw the following error:
nginx: [emerg] unknown "nonexistent_variable" variable
It's up to you which variables you need.
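For the JWT case in the question, where there is no basic-auth user, a minimal sketch is to key the cache on the Authorization header alone (assuming the JWT always arrives in that header):

nginx.ingress.kubernetes.io/auth-cache-key: '$http_authorization'

$http_authorization is a standard NGINX variable holding the incoming Authorization header, so each distinct token gets its own cache entry; $remote_user can simply be dropped, since it is only populated by basic authentication.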
Please also check the following articles and topics:
A Guide to Caching with NGINX and NGINX Plus
external auth provider results in a lot of external auth requests
I have a Swagger 2.0 file that has an auth mechanism defined, but I am getting errors telling me that we aren't using it. The exact error message is "Security scheme was defined but never used".
How do I make sure my endpoints are protected using the authentication I created? I have tried a bunch of different things, but nothing seems to work.
I am not sure whether the actual security scheme is defined correctly; I think it is, because we are using it in production.
I would really love some help with this, as I am worried that our competitor might use this to their advantage and steal some of our data.
swagger: "2.0"
# basic info is basic
info:
version: 1.0.0
title: Das ERP
# host config info
# Added by API Auto Mocking Plugin
host: virtserver.swaggerhub.com
basePath: /rossja/whatchamacallit/1.0.0
#host: whatchamacallit.lebonboncroissant.com
#basePath: /v1
# always be schemin'
schemes:
- https
# we believe in security!
securityDefinitions:
api_key:
type: apiKey
name: api_key
in: header
description: API Key
# a maze of twisty passages all alike
paths:
/dt/invoicestatuses:
get:
tags:
- invoice
summary: Returns a list of invoice statuses
produces:
- application/json
operationId: listInvoiceStatuses
responses:
200:
description: OK
schema:
type: object
properties:
code:
type: integer
value:
type: string
securityDefinitions alone is not enough: this section defines the available security schemes but does not apply them.
To actually apply a security scheme to your API, you need to add a security requirement at the root level or on individual operations:
security:
- api_key: []
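Applied to the spec in the question, an operation-level version would look roughly like this (a sketch; the same two security lines can instead go at the root level, next to paths, to protect every operation at once):

paths:
  /dt/invoicestatuses:
    get:
      security:
        - api_key: []
      # ...rest of the operation (tags, summary, responses) unchanged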
See the API Keys guide for details.
I have Keycloak and Keycloak-Gatekeeper set up in OpenShift and it's acting as a proxy for an application that is running.
The application that Keycloak Gatekeeper is proxying requires a custom cookie to be set, so I figured I could use the Gatekeeper's custom header configuration to set it; however, I'm running into issues.
Configuration looks like:
discovery-url: https://keycloak-url.com/auth/realms/MyRealm
client-id: MyClient
client-secret: MyClientSecret
cookie-access-name: my.token
encryption_key: MY_KEY
listen: :3000
redirection-url: https://gatekeeper-url.com
upstream-url: https://app-url.com
verbose: true
resources:
  - uri: /home/*
    roles:
      - MyClient:general-access
headers:
  Set-Cookie: isLoggedIn=true
After re-deploying and running through the auth flow, the upstream URL/application is not receiving the custom header. I tried with multiple headers (key/value) but can't seem to get it working or find where that header is being injected in the flow.
I've also checked logs and haven't been able to find anything super useful.
Sample Gatekeeper Config
Gatekeeper Custom Headers Docs
Any suggestions/ideas on how to get this working?
Remove Set-Cookie. Simply add:
headers:
  isLoggedIn: true
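In context, the relevant part of the Gatekeeper config from the question becomes (a sketch, everything else unchanged):

resources:
  - uri: /home/*
    roles:
      - MyClient:general-access
headers:
  isLoggedIn: true

The headers map takes plain header-name/value pairs, and Gatekeeper injects each pair as a request header on the proxied upstream request; Set-Cookie is a response header, which is presumably why it never reached the upstream.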
What I am trying to do is:
For client-to-broker communication - use OAUTHBEARER authentication
For broker-to-broker communication - use PLAIN authentication
I have the following JAAS configuration:
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="inter"
  password="inter-secret"
  user_inter="inter-secret"
  user_admin="YvNzcbmqhA0DfxjP";

  org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
};

Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="zookeeper"
  password="zookeeper-secret";
};
And I have the following configs in server.properties:
sasl.enabled.mechanisms=PLAIN,OAUTHBEARER
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.server.callback.handler.class=br.com.jairsjunior.security.oauthbearer.OauthAuthenticateValidatorCallbackHandler
But when I start the Kafka service I see an error like the one below:
Caused by: java.lang.IllegalArgumentException: Must supply exactly 1 non-null JAAS mechanism configuration (size was 2)
at org.apache.kafka.common.security.oauthbearer.internals.unsecured.OAuthBearerUnsecuredValidatorCallbackHandler.configure(OAuthBearerUnsecuredValidatorCallbackHandler.java:114)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:122)
... 17 more
which indicates Kafka does not allow multiple JAAS mechanism configurations within a single login context.
So how can I specify multiple JAAS configs and set up the authentication mechanisms like below?
Client to Broker ----> OAUTHBEARER
Broker to Broker ----> PLAIN
Thanks!
I am currently also working on using PLAIN and OAUTHBEARER simultaneously. I have not fully solved that yet, but I did solve your specific question in the following way.
This is my JAAS configuration:
internal.KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret"
  user_test="test";
};

external.KafkaServer {
  org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
};

Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="username"
  password="pw";
};
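The JAAS file itself is typically passed to the broker JVM as a system property before starting Kafka; a minimal sketch (the file path is hypothetical):

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"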
Then I set the settings in server.properties the following way:
inter.broker.listener.name: INTERNAL
sasl.mechanism.inter.broker.protocol: PLAIN
listener.security.protocol.map: INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_SSL
listeners: "INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092"
sasl.enabled.mechanisms: PLAIN,OAUTHBEARER
listener.name.external.oauthbearer.sasl.server.callback.handler.class: my.module.kafka.security.oauthbearer.OauthAuthenticateValidatorCallbackHandler
listener.name.external.oauthbearer.sasl.login.callback.handler.class: my.module.kafka.security.oauthbearer.OauthAuthenticateLoginCallbackHandler
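The key detail is that Kafka resolves the JAAS context per listener as {lowercase-listener-name}.KafkaServer, so each listener ends up with exactly one mechanism configuration. As an alternative sketch, the same separation can be expressed inline in server.properties via the listener.name prefix instead of a JAAS file (values mirror the JAAS file above):

listener.name.internal.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" password="admin-secret" user_admin="admin-secret" user_test="test";
listener.name.external.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;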
When you do it this way you won't get your error. Sadly, I get another error when the broker wants to set up the external connection:
javax.security.auth.callback.UnsupportedCallbackException: Unrecognized SASL Login callback
at org.apache.kafka.common.security.authenticator.AbstractLogin$DefaultLoginCallbackHandler.handle(AbstractLogin.java:105)
at org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.identifyToken(OAuthBearerLoginModule.java:316)
at org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.login(OAuthBearerLoginModule.java:301)
... 32 more
It seems like the Kafka brokers are ignoring the oauthbearer callback handler. This is a bit strange, because the external listener works perfectly when I configure it as the only listener.
I hope it helps you with your problem!