What is the Signing Credential in IdentityServer4? - asp.net-core

We are in the process of implementing Identity Server 4 with our .NET Core web app.
I went through the Identity Server documentation. When configuring the Identity Server (using DI) there is the line:
.AddTemporarySigningCredential
I'm trying to understand what this signing credential is, but I couldn't figure it out. Therefore I don't know if it's OK to use the built-in temporary one, or if I should provide a different one.
My question is, what is a signing credential and how should I use it?
In the Identity server documentation this is the definition:
Adds a signing key service that provides the specified key material to
the various token creation/validation services. You can pass in either
an X509Certificate2, a SigningCredential or a reference to a
certificate from the certificate store.
So it seems important :)

The Authorization Server will sign tokens with a key. Resource Server(s) should verify the token's integrity with a key. Together they form a (usually asymmetric, e.g. public/private) key (pair). By default IdentityServer will publish the public key for verifying tokens via the /.well-known/openid-configuration endpoint.
For development scenarios, you typically want to skip the fuss of managing secrets like these keys properly (which is really important to do in production!). For those development scenarios you have the option of using ad-hoc solutions like AddTemporarySigningCredential, which was used for .NET Core 1.x.
With .NET Core 2.x this will change and you will need the AddDeveloperSigningCredential() extension method.
That answers the question of what it is. On how to use it: you simply call the method you need depending on your .NET Core version inside the ConfigureServices(...) method of your application's Startup class.
Apart from that you don't need to do anything special, except of course take care that you use a proper key pair in production.
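For illustration, here is a minimal sketch of what that looks like (the Config helper with in-memory clients and resources is a placeholder for your own configuration, not something IdentityServer provides):

// Startup.cs of the IdentityServer host - a sketch, not a complete configuration
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddIdentityServer()
        // .NET Core 1.x: .AddTemporarySigningCredential()
        // .NET Core 2.x: .AddDeveloperSigningCredential()
        .AddDeveloperSigningCredential()
        .AddInMemoryApiResources(Config.GetApiResources())   // placeholder helper
        .AddInMemoryClients(Config.GetClients());            // placeholder helper
}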
See also the docs on Cryptography, Keys and HTTPS and the bit on Configuring Services for Keys. From the latter document, here's a relevant alternative for production cases:
AddSigningCredential
Adds a signing key service that provides the specified key material to the various token creation/validation services. You can pass in either an X509Certificate2, a SigningCredential or a reference to a certificate from the certificate store.
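A rough sketch of that production variant, assuming a certificate installed in the machine store (the thumbprint is a placeholder and error handling is omitted):

// Sketch: load a signing certificate by thumbprint and hand it to IdentityServer.
using System.Security.Cryptography.X509Certificates;

X509Certificate2 LoadSigningCertificate()
{
    using (var store = new X509Store(StoreName.My, StoreLocation.LocalMachine))
    {
        store.Open(OpenFlags.ReadOnly);
        var matches = store.Certificates.Find(
            X509FindType.FindByThumbprint, "YOUR-CERT-THUMBPRINT", validOnly: false);
        return matches[0];   // assumes exactly this certificate is installed
    }
}

// In ConfigureServices:
services.AddIdentityServer()
    .AddSigningCredential(LoadSigningCertificate());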

Related

Is Api Keys authentication sufficient for Google reCAPTCHA Enterprise?

There are primarily two ways to authenticate using Google's reCAPTCHA Enterprise in a non-Google cloud environment (we use AWS). Google recommends using Service Accounts along with their Java client. However, this approach strikes me as less preferable than the second way Google suggests we can authenticate, namely, using Api Keys. Authenticating via an Api Key is easy, and it’s similar to how we commonly integrate with other 3rd party services, except rather than a username and password that we must secure, we have an Api Key that we must secure. Authenticating using Service Accounts, however, requires that we store a Json file on each environment in which we run reCAPTCHA, create an environment variable (GOOGLE_APPLICATION_CREDENTIALS) to point to that file, and then use Google's provided Java client (which leverages the environment variable/Json file) to request reCAPTCHA resources.
If we opt to leverage Api Key authentication, we can use Google’s REST Api along with our preferred Http Client (Akka-Http). We merely include the Api Key in a header that is encrypted as part of TLS in transit. However, if we opt for the Service Accounts method, we must use Google’s Java client, which not only adds a new dependency to our service, but also requires its own execution context because it uses blocking i/o.
My view is that unless there is something that I’m overlooking--something that the Service Accounts approach provides that an encrypted Api Key does not--then we should just encrypt an Api Key per environment.
Our Site Key is locked down by our domain, and our Api Key would be encrypted in our source code. Obviously, if someone were to gain access to our unencrypted Api Key, they could perform assessments using our account, but not only would it be exceedingly difficult for an attacker to retrieve our Api Key, the scenario simply doesn't strike me as likely. Performing free reCAPTCHA assessments does not strike me as among the things that a sophisticated attacker would be inclined to do. Am I missing something? I suppose my question is why we would go through the trouble of creating a service account, using the (inferior) Java client, storing a Json file on each pod, creating an environment variable, etc. What does that provide us that the Api Key option does not? Does it open up some functionality that I'm overlooking?
I've successfully used Api Keys and it seems to work fine. I have not yet attempted to use Service Accounts as it requires a number of things that I'm disinclined to do. But my worry is that I'm neglecting some security vulnerability of the Api Keys.
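For context, the Api Key call is roughly the following (shown as a C# sketch rather than our actual Akka-Http code; the endpoint shape and field names are my recollection of the public REST docs, so treat them as assumptions):

// Sketch: create a reCAPTCHA Enterprise assessment using only an Api Key.
// PROJECT_ID, API_KEY and SITE_KEY are placeholders.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static async Task<string> CreateAssessmentAsync(HttpClient http, string recaptchaToken)
{
    var url = "https://recaptchaenterprise.googleapis.com/v1/projects/PROJECT_ID/assessments?key=API_KEY";
    var body = "{ \"event\": { \"token\": \"" + recaptchaToken + "\", \"siteKey\": \"SITE_KEY\", \"expectedAction\": \"login\" } }";
    var response = await http.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();   // JSON containing riskAnalysis.score
}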
After poring a bit more over the documentation, it would seem that there are only two reasons why you'd want to explicitly choose API key-based authentication over an OAuth flow with dedicated service accounts:
Your server's execution environment does not support OAuth flows
You're migrating from reCAPTCHA (non-enterprise) to reCAPTCHA Enterprise and want to ease migration
Beyond that, it seems the choice really comes down to considerations like your organization's security posture and approved authentication patterns. The choice does also materially affect things like how the credentials themselves are provisioned & managed, so if your org happens to already have a robust set of policies in place for the creation and maintenance of service accounts, it'd probably behoove you to go that route.
Google does mention sparingly in the docs that their preferred method for authentication for reCAPTCHA Enterprise is via service accounts, but they also don't give a concrete rationale anywhere I saw.

ASP.NET Core Data Protection with Azure Key Vault for containerized app deployment to Azure Kubernetes Service

I have an ASP.NET Core app that I deploy in a containerized manner to Azure Kubernetes Service (AKS) and when running just a single replica of the app - it is functional and works as expected.
However, when I run multiple replicas, I run into an error from the OIDC provider - “Unable to protect the message.State”.
Upon further research I have figured out that using ASP.NET Core Data Protection as depicted here is the solution -
https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview?view=aspnetcore-5.0#persisting-keys-when-hosting-in-a-docker-container
However, the above link does not expand on how to use this while storing the key in Azure Key Vault. Assuming I have protected my keys in AKV, how do I actually use them in my app? Is there a sample or guidance on this aspect?
First of all, I would recommend that the client instance that starts the sign-in (with AddOpenIdConnect(...)) is also the one that handles the callback from your Identity Provider (/signin-oidc). The state parameter that it sets when it first redirects you to the identity provider must match the returned response (for security reasons).
To make sure that the cookies issued to the user's browser are valid all the time, you need to make sure that:
All client instances use the same data protection encryption key
The key stays the same across redeployments.
You can for example store this key in Azure Key Vault, SQL-Server or somewhere else.
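For the Key Vault route, here's a sketch using the Azure.Extensions.AspNetCore.DataProtection.Blobs and Azure.Extensions.AspNetCore.DataProtection.Keys packages (the storage and vault URIs are placeholders):

// Sketch: keep the Data Protection key ring in blob storage, shared by all replicas,
// and protect (wrap) it with a key stored in Azure Key Vault.
using System;
using Azure.Identity;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddDataProtection()
        .SetApplicationName("my-app")   // must be identical across all replicas
        .PersistKeysToAzureBlobStorage(
            new Uri("https://mystorage.blob.core.windows.net/dataprotection/keys.xml"),
            new DefaultAzureCredential())
        .ProtectKeysWithAzureKeyVault(
            new Uri("https://mykeyvault.vault.azure.net/keys/dataprotection"),
            new DefaultAzureCredential());

    // ... the rest of your registrations (AddAuthentication().AddOpenIdConnect(...) etc.)
}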
btw, I did a blog post about the Data Protection API here and how you could store the key-ring in AKV as well.

Identity Server 4 Custom Scheme

I'm currently working on setting up Identity Server 4 as a centralized authn point for multiple products as well as a federation gateway. It's all pretty standard:
I have users that can authenticate into an SPA that uses the OIDC-Client js lib to interact with my identity server using the implicit flow. User stores are as follows:
a user store local to IDSRV (Asp.net identity). They'd enter their credentials into a form hosted in IDSRV, just as seen in the docs
an openid connect or oauth 2 store, either a social (google, linkedin, etc) integration or an IDP provided by one of our clients. Also working, just like in the docs.
a "destination key", described below
Destination key - the application in question has the ability to generate a unique link with a key (pretend it's a guid, for example purposes). This key maps to a specific destination in the app, and serves as de facto authentication. It's a lot like the resource owner password flow, except that the key is the sole component needed to authenticate. (I'm aware that this isn't the utmost in security, but it's a business decision, taking into account the lower levels of protection.)
Which brings me to my question: what is the proper "identity server" way of accomplishing this destination key authentication mechanism. Some things I've considered:
a custom authentication scheme configured in IDSRV. I added a generic scheme called "destkey", with accompanying AuthenticationHandler and AuthenticationSchemeOptions implementations. The HandleAuthenticateAsync method would use an injected service to validate the destination key. For some reason, it ignores this and continues to validate against Asp.Net identity
a custom grant type. I looked to create an implementation of IExtensionGrantValidator that would utilize the destination key service to validate the key. I haven't been able to get this working, at least in part because the OIDC lib doesn't allow the configuration of a grant type.
repurposing the "Login" method of the AccountController from the IDSRV Quickstart ([HttpGet] public async Task<IActionResult> Login(string returnUrl)). This would basically strip the destination key off the URL and call HttpContext.SignInAsync using the dest key as the subject. This isn't working, as it seems to check the database for the existence of the subject (which is how I ended up attempting to create a custom scheme as described above)
Any thoughts on the proper extensibility point to accomplish this would be most welcome...
Not sure if this is the best approach, but I ended up creating a custom implementation of IProfileService. It wraps an instance of IdentityServer4.AspNetIdentity.ProfileService, and checks for the existence of a "destination_key" claim. If the dest claim exists, it references the destination key service for validation - otherwise, it delegates the logic to the underlying ProfileService instance, which uses Asp.net identity.
In the Login method of the AccountController, I simply check the acr_values for a destination key passed from the client. This is set in the signinRedirect method of the OIDC-Client.js lib.
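Roughly, the wrapper looks like this (ApplicationUser is our ASP.NET Identity user type and IDestinationKeyService is our own key-validation service, both placeholders here); it gets registered with .AddProfileService<DestinationKeyProfileService>() on the IdentityServer builder:

// Sketch of the wrapping profile service described above.
using System.Security.Claims;
using System.Threading.Tasks;
using IdentityServer4.Models;
using IdentityServer4.Services;

public class DestinationKeyProfileService : IProfileService
{
    private readonly IdentityServer4.AspNetIdentity.ProfileService<ApplicationUser> _inner;
    private readonly IDestinationKeyService _destinationKeys;   // our own service (placeholder)

    public DestinationKeyProfileService(
        IdentityServer4.AspNetIdentity.ProfileService<ApplicationUser> inner,
        IDestinationKeyService destinationKeys)
    {
        _inner = inner;
        _destinationKeys = destinationKeys;
    }

    public Task GetProfileDataAsync(ProfileDataRequestContext context)
    {
        var destKey = context.Subject.FindFirst("destination_key")?.Value;
        if (destKey == null)
            return _inner.GetProfileDataAsync(context);   // normal ASP.NET Identity user

        context.IssuedClaims.Add(new Claim("destination_key", destKey));
        return Task.CompletedTask;
    }

    public async Task IsActiveAsync(IsActiveContext context)
    {
        var destKey = context.Subject.FindFirst("destination_key")?.Value;
        if (destKey == null)
        {
            await _inner.IsActiveAsync(context);
            return;
        }

        context.IsActive = await _destinationKeys.IsValidAsync(destKey);
    }
}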

Authentication of mobile apps using Identity server

What is the reference architecture for adding authentication and authorization to a mobile application? Do I need an access-token infrastructure, or can I just validate token data using a private/public key pair? Do I need a dedicated identity server (like WSO2 Identity Server) in case I also want to release a developer API?
Thanks in advance
Update
Things I have tried: I have worked on a project which uses PKI-based validation for every request (token data is encrypted at the client; the token and encrypted data are sent to the server with every request, and the server decrypts them to validate the client). This is a custom implementation, which I feel is not the best way to do it, so I have done some basic research to find the right way. I found OpenAM and WSO2 IS, which can connect against multiple user stores. They support token-based authentication and policy-based access control among other features.
What I'm looking for here: Am I on the right track? Should I go ahead and evaluate the two products, given that I also want to use the same platform for another, web-based part of the same application?

Windows Authentication / Encryption in WCF With NetTcpBinding

I'm trying to understand how windows authentication / encryption works with the NetTcpBinding in WCF. I need to know exactly what encryption algorithm is used to encrypt the data going across the wire (and some documentation to prove it). Will windows authentication / encryption still work if the client and or host is not on a domain?
The netTcpBinding using Windows Credentials requires the caller and the service to be on the same domain - or at least on mutually trusting domains. Otherwise, the server won't be able to verify the Windows credentials and will refuse the service call.
As for encryption: you can even pick and choose which one you'd like! :-) TripleDES, AES - you name it, with varying key lengths, too.
See the Fundamentals of WCF Security article - it talks about all aspects of security and encryption; also see the MSDN Docs on Securing Services which goes into some more detail; a good overview can be found here showing the properties of the basicHttp transport security element.
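To make that a bit more concrete, here is a small sketch of the binding setup in code; with transport security the cipher is negotiated by the transport (SSPI/TLS) for you, while message security is where you can explicitly pick an algorithm suite such as TripleDES or AES:

// Sketch: netTcpBinding with Windows credentials, in both security modes.
using System.Net.Security;
using System.ServiceModel;
using System.ServiceModel.Security;

// Transport security - encryption negotiated by the transport:
var transportBinding = new NetTcpBinding(SecurityMode.Transport);
transportBinding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;
transportBinding.Security.Transport.ProtectionLevel = ProtectionLevel.EncryptAndSign;

// Message security - here you can choose the algorithm suite explicitly
// (Basic256 is AES-256; TripleDes is also available):
var messageBinding = new NetTcpBinding(SecurityMode.Message);
messageBinding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
messageBinding.Security.Message.AlgorithmSuite = SecurityAlgorithmSuite.Basic256;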
Last year I had to implement a distributed system using wcf that required a mechanism both safe and performant across all layers of the system. We decided for creating our own security architecture by creating a binary encrypted token. The encrypted token contained all permissions a given user had.
So for example a user would log in into the system and if successfully authenticated it would receive an encrypted token back. This token was stored locally on the web client. All further requests by the user would contain that token. The token was used in several levels of the architecture. The web server would use it to decide what visual elements to enable or disable. Since the service layer was exposed to the internet, each open door would check the token for authentication and check if that token had the proper permission to execute a given task. The business layer could check again for a more specific right included in the token.
The advantages:
It didn't matter if we were using NetTcpBinding or any other type of binding (and we did use more than one type of binding).
We saved a lot of round trips to the database
We could use the same token on different platforms
I know it probably doesn't answer your specific questions, but it will maybe give you some food for thought while you're still deciding on the intra-layer architecture of your system.