IDX20803: Unable to obtain configuration - openiddict

I have an authorization server running on a different port, and I can access the configuration URL from a browser.
However, when I configure the auth issuer in a .NET Core Web API, I get the following error:
IDX20803: Unable to obtain configuration from: http://localhost:5001/auth/.well-known/openid-configuration
This only started happening after I changed the auth server's base path from http://localhost:5001 to http://localhost:5001/auth:
services.AddOpenIddict()
    .AddValidation(options =>
    {
        // The issuer now includes the /auth path base.
        options.SetIssuer("http://localhost:5001/auth");
        options.AddAudiences("resource_server");
        options.AddEncryptionKey(new SymmetricSecurityKey(
            Convert.FromBase64String("DRjd/GnduI3Efzen9V9BvbNUfc/VKgXltV7Kbk9sMkY=")));
        options.UseSystemNetHttp();
        options.UseAspNetCore();
    });
Is there any reason or solution to this issue?
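One thing worth checking (a hedged sketch, not a confirmed fix): per OpenID Connect discovery, an issuer of http://localhost:5001/auth is expected to serve its metadata at http://localhost:5001/auth/.well-known/openid-configuration. If the /auth prefix was introduced without making the server pipeline aware of it, ASP.NET Core's UsePathBase can apply it before routing:

// In the authorization server's Startup.Configure (assumption: the /auth
// prefix should be handled in-process rather than by a reverse proxy).
app.UsePathBase("/auth");   // must run before routing so the discovery
                            // document is reachable under /auth/...
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
// ... endpoint mapping as before

If a reverse proxy strips the prefix instead, the issuer configured in the validation handler still has to match the issuer the server advertises.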

Related

How to connect to a gRPC Server hosted in Kestrel as HTTPS from a gRPC C++ Client using default certificates (Windows 10)?

I am using ASP.NET Core (Grpc.Net) to create an HTTPS gRPC server hosted in Kestrel. Communication from a C# client to the server over HTTPS works fine without adding any certificate.
It looks like they are using the default certificates for communication.
Now I have a C++ gRPC client on Windows 10 and I'm trying to connect to the same server; the endpoint is https://localhost:50051.
This is my Kestrel configuration (server side):
webBuilder.ConfigureKestrel(serverOptions =>
{
    serverOptions.Listen(IPAddress.Any, 50051, listenOptions =>
    {
        listenOptions.Protocols = Microsoft.AspNetCore.Server.Kestrel.Core.HttpProtocols.Http2;
        listenOptions.UseHttps();
    });
}).UseStartup<Startup>();
As you can see, I am not specifying a certificate on the server (I wanted to use the default certificate), the same way I did with the C# gRPC client.
i.e., the connection to the server works using the C# client:
var channel = GrpcChannel.ForAddress("https://localhost:50051");
ecgDataClient = new Data.DataClient(channel);
But with the C++ gRPC client I am unable to connect (tried both insecure credentials and SslCredentials):
auto channel_creds = grpc::SslCredentials(grpc::SslCredentialsOptions());
DataGrpcClient grpcClient(grpc::CreateChannel("localhost:50051", channel_creds));
With grpc::SslCredentials(grpc::SslCredentialsOptions()) I get this error:
E0709 19:46:20.488000000 6724 ssl_utils.cc:570] load_file: {"created":"#1625840180.488000000","description":"Failed to load file","file":"D:\DEV\vcpkg\buildtrees\grpc\src\17cc203898-db2679e7f2.clean\src\core\lib\iomgr\load_file.cc","file_line":72,"filename":"/usr/share/grpc/roots.pem","referenced_errors":[{"created":"#1625840180.488000000","description":"No such file or directory","errno":2,"file":"D:\DEV\vcpkg\buildtrees\grpc\src\17cc203898-db2679e7f2.clean\src\core\lib\iomgr\load_file.cc","file_line":45,"os_error":"No such file or directory","syscall":"fopen"}]}
E0709 19:46:20.509000000 6724 ssl_security_connector.cc:413] Could not get default pem root certs.
E0709 19:46:20.512000000 6724 secure_channel_create.cc:108] Failed to create secure subchannel for secure name 'localhost:50051'
E0709 19:46:20.517000000 6724 secure_channel_create.cc:50] Failed to create channel args during subchannel creation.
E0709 19:46:20.521000000 6724 ssl_security_connector.cc:413] Could not get default pem root certs.
E0709 19:46:20.525000000 6724 secure_channel_create.cc:108] Failed to create secure subchannel for secure name 'localhost:50051'
E0709 19:46:20.529000000 6724 secure_channel_create.cc:50] Failed to create channel args during subchannel creation.
It looks like the client is unable to find the default root certificates.
I am running my C++ gRPC client on Windows 10; do I need to do anything so that the client picks up the default certificates?
Thanks
Basanth
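For reference, the load_file error above shows the client looking for /usr/share/grpc/roots.pem, a path that does not exist on Windows; grpc::SslCredentialsOptions lets you supply the trust root explicitly instead. A minimal sketch, assuming the ASP.NET Core development certificate has been exported to a hypothetical localhost.pem (e.g. via dotnet dev-certs https --export-path localhost.pem --format PEM on a recent SDK):

#include <fstream>
#include <sstream>
#include <string>
#include <grpcpp/grpcpp.h>

// Read a PEM file into a string for use as the channel's trust root.
static std::string ReadFile(const std::string& path) {
    std::ifstream in(path);
    std::stringstream buffer;
    buffer << in.rdbuf();
    return buffer.str();
}

int main() {
    grpc::SslCredentialsOptions ssl_opts;
    // Explicit trust root instead of the missing /usr/share/grpc/roots.pem.
    ssl_opts.pem_root_certs = ReadFile("localhost.pem");
    auto channel_creds = grpc::SslCredentials(ssl_opts);
    auto channel = grpc::CreateChannel("localhost:50051", channel_creds);
    // DataGrpcClient grpcClient(channel);  // as in the question
    return 0;
}

Alternatively, the GRPC_DEFAULT_SSL_ROOTS_FILE_PATH environment variable can point the client at a roots.pem bundle without code changes.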

Kestrel Fails TLS Handshake after Attempt to Download Intermediate Certificate Fails

Kestrel is timing out and responding with "connection closed" after loading a publicly signed SSL certificate.
Background: we have a Docker container that hosts a .NET Core 3.1 webapi/react app where the user can upload a custom SSL certificate. The PKCS#12 certificate is stored in our database and bound at startup using .ConfigureKestrel((context, options)) and options.ConfigureHttpsDefaults(listenOptions => { listenOptions.ServerCertificate = certFromDatabase; }). This has been working flawlessly.
However, a user is now attempting to run this app in a restrictively firewalled environment and is receiving HTTP "connection closed" errors when accessing Kestrel immediately after loading a new certificate and restarting the app.
Whenever Kestrel receives an incoming request, it begins attempting to download the intermediate certificate from the certificate CA's public CDN repository over HTTP on port 80, apparently using the URL from the certificate's Authority Information Access extension. Since the firewall blocks this, it retries repeatedly for about 20 seconds, during which time the client's TLS handshake sits waiting on a server response. When the server eventually fails to fetch the intermediate certificate, it cancels the TLS handshake and closes the connection.
I can't figure out why it's attempting to download this certificate, considering the same certificate is embedded in the PKCS#12 PFX bundle that is bound to Kestrel. Am I supposed to load either the root CA or the intermediate into the CA trust folder on the file system? (E.g. /usr/local/share/ca-certificates/ - but can I load the intermediate there, or only the CA?)
public static IWebHost BuildFullWebHost(string[] args)
{
    var webHostBuilder = GetBaseWebHostBuilder(args);
    return webHostBuilder
        .ConfigureAppConfiguration((context, builder) => { [...] })
        .ConfigureLogging((hostingContext, logging) => { [...] })
        .UseStartup<Startup>()
        .ConfigureKestrel((context, options) =>
        {
            var sp = options.ApplicationServices;
            using (var scope = sp.CreateScope())
            {
                var dbContext = scope.ServiceProvider.GetService<DbContext>();
                var cert = Example.Services.HttpsCertificateService.GetHttpsCert(dbContext);
                // This returns a new X509Certificate2(certificate.HttpsCertificate, certificate.Password);
                options.ConfigureHttpsDefaults(listenOptions =>
                {
                    listenOptions.ServerCertificate = cert;
                    listenOptions.CheckCertificateRevocation = false;
                });
            }
        })
        .Build();
}
Not a great solution, but upgrading to .NET 5.0 resolved the issue. It seems that in .NET 5.0, Kestrel attempts to fetch the certificate chain only during initial application startup (and fails). Subsequent incoming HTTP requests don't trigger the fetch, and requests are served as expected.
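For anyone stuck on 3.1, one possible workaround (a sketch, untested here; TrustIntermediates is a hypothetical helper name) is to import the PFX's intermediates into the user's intermediate CA store at startup, so the chain can be built without an AIA download:

using System.Security.Cryptography.X509Certificates;

// Import every non-leaf certificate from the PFX into the current user's
// intermediate CA store, so chain building succeeds without network access.
static void TrustIntermediates(byte[] pfxBytes, string password)
{
    var collection = new X509Certificate2Collection();
    collection.Import(pfxBytes, password, X509KeyStorageFlags.EphemeralKeySet);

    using var store = new X509Store(StoreName.CertificateAuthority, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadWrite);
    foreach (var cert in collection)
    {
        // The leaf carries the private key; intermediates in the bundle don't.
        if (!cert.HasPrivateKey)
            store.Add(cert);
    }
}

This assumes the uploaded PFX actually contains the intermediate, which the question suggests it does.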

How to Assert OAM token in helidon using OIDC?

I was trying to assert an OAM token but am getting the error shown below. Asserting an IDCS token works fine.
Exception in thread "main" io.helidon.common.Errors$ErrorMessagesException: [
FATAL: Failed to load metadata: io.helidon.common.configurable.ResourceException: Failed to open stream to uri: https://{{OAM_host}}:{{port}}/.well-known/openid-configuration at io.helidon.common.configurable.ResourceException: Failed to open stream to uri: https://{{OAM_host}}:{{port}}/.well-known/openid-configuration,
FATAL: When token_endpoint is not explicitly defined, the OIDC metadata must exist at class io.helidon.security.providers.oidc.common.OidcConfig$Builder,
FATAL: When authorization_endpoint is not explicitly defined, the OIDC metadata must exist at class io.helidon.security.providers.oidc.common.OidcConfig$Builder,
FATAL: When jwks_uri is not explicitly defined, the OIDC metadata must exist at class io.helidon.security.providers.oidc.common.OidcConfig$Builder]
And in application.properties I added the OAM details:
providers:
  - abac:
  - oidc:
      client-id: "${ALIAS=security.properties.client-id}"
      client-secret: "${ALIAS=security.properties.client-secret}"
      identity-uri: "${ALIAS=security.properties.uri}"
      # A prefix used for custom scopes
      scope-audience: "${ALIAS=security.properties.scope-audience}"
      audience: "${ALIAS=security.properties.audience}"
      proxy-host: "${ALIAS=security.properties.proxy-host}"
      frontend-uri: "${ALIAS=security.properties.frontend-uri}"
      cookie-name: "OIDC_SESSION"
      cookie-same-site: "Lax"
      header-use: true
      redirect: false
Am I missing something here?
If you look at your exception, it points out the endpoint is not valid:
https://{{OAM_host}}:{{port}}/.well-known/openid-configuration
This means your configuration contains {{OAM_host}} and {{port}} - such placeholders are not replaced by Helidon configuration.
In Helidon 1.x you can use ${ALIAS=key} to reference other keys.
Since Helidon 2.0.0-M2 you can use ${key} to reference a key.
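In other words, identity-uri must resolve to a concrete host before Helidon fetches the metadata; as the exception shows, Helidon appends /.well-known/openid-configuration to that base. A sketch with the placeholders resolved (oam.example.com and the values are hypothetical):

providers:
  - oidc:
      identity-uri: "https://oam.example.com:14443"   # concrete host, no {{...}} placeholders
      client-id: "my-client-id"
      client-secret: "my-client-secret"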

[Error: Unspecified GSS failure. Minor code may provide more information: No key table entry found matching

I have been working on implementing SSO in a NodeJS application using an AD hosted on an Azure VM. I am using the kerberos npm package in my application. Here is how I have configured everything:
I have created an SPN for the service
Generated a keytab with that SPN
Replicated the keytab on my Ubuntu server in /etc/
Installed the Kerberos client and configured krb5.conf accordingly
In my application I have installed the kerberos npm package.
The principalDetails method returns HTTP/enpast.com@REALM.COM, which is what I want.
checkPassword also works fine.
initializeServer fails using the SPN that I get from principalDetails.
Here is the code:
const service = 'HTTP/enpast.com@REALM.COM';
kerberos.initializeServer(service, (err, data) => {
    if (err) {
        console.log('Failed initialization ---->', err);
    } else {
        console.log('Successfully initialized server', data);
    }
});
Here is the error message I get:
[Error: Unspecified GSS failure. Minor code may provide more information: No key table entry found matching HTTP/enpast.com/realm.com@]
Any leads to the cause will be highly appreciated. Thank you
I had a similar issue, and to fix it I had to declare the service this way:
const service = 'HTTP@enpast.com'
note where the '@' character is.
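Putting it together, a sketch of the corrected call in the question's callback style:

const kerberos = require('kerberos');

// "service@host" form, without the realm suffix.
const service = 'HTTP@enpast.com';

kerberos.initializeServer(service, (err, data) => {
    if (err) {
        console.log('Failed initialization ---->', err);
    } else {
        console.log('Successfully initialized server', data);
    }
});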
The error log is self-explanatory. Double-check that you have the principal in the server's keytab: klist -k path_to_keytabfile

Kafka SASL: OAUTHBEARER and PLAIN simultaneously

What I am trying to do is:
For client-to-broker communication: use OAUTHBEARER authentication
For broker-to-broker communication: use PLAIN authentication
I have the following JAAS configuration:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="inter"
    password="inter-secret"
    user_inter="inter-secret"
    user_admin="YvNzcbmqhA0DfxjP";
    org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
};
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zookeeper"
    password="zookeeper-secret";
};
And I have the following configs in server.properties:
sasl.enabled.mechanisms=PLAIN,OAUTHBEARER
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.server.callback.handler.class=br.com.jairsjunior.security.oauthbearer.OauthAuthenticateValidatorCallbackHandler
But when I start the Kafka service I see an error like the one below:
Caused by: java.lang.IllegalArgumentException: Must supply exactly 1 non-null JAAS mechanism configuration (size was 2)
at org.apache.kafka.common.security.oauthbearer.internals.unsecured.OAuthBearerUnsecuredValidatorCallbackHandler.configure(OAuthBearerUnsecuredValidatorCallbackHandler.java:114)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:122)
... 17 more
which indicates Kafka does not allow specifying multiple JAAS mechanism configurations in one login context.
So how can I specify multiple JAAS configs and set up the authentication mechanisms like below?
Client to Broker ----> OAUTHBEARER
Broker to Broker ----> PLAIN
Thanks!
I am currently also working on the problem of using PLAIN and OAUTHBEARER simultaneously. I have not solved it completely yet, but I did solve your specific question in the following way.
This is my JAAS configuration:
internal.KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_test="test";
};
external.KafkaServer {
    org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
};
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="username"
    password="pw";
};
Then I set the settings in server.properties the following way:
inter.broker.listener.name: INTERNAL
sasl.mechanism.inter.broker.protocol: PLAIN
listener.security.protocol.map: INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_SSL
listeners: "INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092"
sasl.enabled.mechanisms: PLAIN,OAUTHBEARER
listener.name.external.oauthbearer.sasl.server.callback.handler.class: my.module.kafka.security.oauthbearer.OauthAuthenticateValidatorCallbackHandler
listener.name.external.oauthbearer.sasl.login.callback.handler.class: my.module.kafka.security.oauthbearer.OauthAuthenticateLoginCallbackHandler
When you set it up this way, you won't get your error. Sadly, I get another error when the broker wants to set up the external connection:
javax.security.auth.callback.UnsupportedCallbackException: Unrecognized SASL Login callback
at org.apache.kafka.common.security.authenticator.AbstractLogin$DefaultLoginCallbackHandler.handle(AbstractLogin.java:105)
at org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.identifyToken(OAuthBearerLoginModule.java:316)
at org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.login(OAuthBearerLoginModule.java:301)
... 32 more
It seems like the Kafka brokers are ignoring the OAUTHBEARER callback handlers. This is a bit strange, because the external listener works perfectly when I configure it as the only listener.
I hope this helps you with your problem!
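As a side note, the same per-listener split can also be written without a separate JAAS file, using listener-prefixed sasl.jaas.config properties in server.properties (a sketch following the listener.name.<listener>.<mechanism>.sasl.jaas.config convention; the values mirror the JAAS file above):

listener.name.internal.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="admin" password="admin-secret" user_admin="admin-secret" user_test="test";
listener.name.external.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;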