ASP.NET Core Data Protection on Web Farm - asp.net-core

I have an ASP.NET Core application that uses cookie authentication and runs on a web farm. The data protection keys are stored in a database. My application implements IXmlRepository, and ASP.NET Core calls IXmlRepository.GetAllElements to load the key ring. So the application on every node uses the same key ring, and a cookie encrypted on Node1 can be decrypted on Node2. That works fine.
However, a data protection key eventually expires and ASP.NET Core generates a new one. ASP.NET Core also caches the key ring and only refreshes it every 18-24 hours. So when a key expires, Node1 may generate a new key while the other nodes haven't refreshed yet and don't know about it. A cookie encrypted by Node1 then cannot be decrypted on the other nodes.
How can this situation be handled by ASP.NET Core?
I read https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/web-farm?view=aspnetcore-2.2, which states:
The default configuration isn't generally appropriate for hosting apps in a web farm. An alternative to implementing a shared key ring is to always route user requests to the same node.
Is routing all requests to the node the user logged in on really the only option?
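For reference, the key ring persistence is wired up roughly like this - DbXmlRepository below is a simplified, in-memory stand-in for my actual implementation, which reads and writes the key XML in a database table:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using Microsoft.AspNetCore.DataProtection.KeyManagement;
using Microsoft.AspNetCore.DataProtection.Repositories;
using Microsoft.Extensions.DependencyInjection;

// Simplified stand-in: the real repository loads/saves the key XML from a database table.
public class DbXmlRepository : IXmlRepository
{
    private readonly List<XElement> _elements = new List<XElement>();

    // Called by ASP.NET Core whenever it (re)loads the key ring.
    public IReadOnlyCollection<XElement> GetAllElements() => _elements.ToList();

    // Called when a new key is generated and needs to be persisted.
    public void StoreElement(XElement element, string friendlyName) => _elements.Add(element);
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDataProtection();

        // Point key management at the custom repository.
        services.Configure<KeyManagementOptions>(options =>
        {
            options.XmlRepository = new DbXmlRepository();
        });
    }
}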

In principle, this is no different than any other shared configuration scenario. You need to do two things:
Persist the key ring to a common file system or network location that all the processes/apps can access:
services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(@"\\server\share\directory\"));
Ensure that all the processes/apps use the same application name:
services.AddDataProtection()
    .SetApplicationName("shared app name");
The second item is less important for a web farm scenario, since it's all the same app and will have the same app name by default. However, it's always better to be explicit, and then if you do need to share with an entirely different app, you're already set up and ready to go.
In short, you need to add the following code:
services.AddDataProtection()
    .SetApplicationName("shared app name")
    .PersistKeysToFileSystem(new DirectoryInfo(@"\\server\share\directory\"));
To @LGSon's point in the comments below your question, you'll eventually run into an issue with caching/sessions as well. Even though it's the same "app", you should think of each instance in the web farm as a separate app. Each is a separate process, which means it has its own separate memory allocation. If you use memory caching (which is also the default storage for sessions), that cache is available only to the individual process that created it. Each process will therefore end up with a separate cache and separate sessions. To share this information, you need to employ a distributed cache like Redis or SQL Server. See: https://learn.microsoft.com/en-us/aspnet/core/performance/caching/distributed?view=aspnetcore-2.2#establish-distributed-caching-services.
Note: Even though there's a "distributed" memory cache, it's not actually distributed. It's just an implementation of IDistributedCache that stores entries in memory, so it's still process-bound.
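For instance, here's a minimal sketch of swapping the default in-memory cache for a SQL Server-backed IDistributedCache, so cache entries and sessions are shared across all nodes (the connection string and table name are placeholders, and the cache table has to be created up front, e.g. with the dotnet sql-cache tool):

using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Requires the Microsoft.Extensions.Caching.SqlServer package.
        services.AddDistributedSqlServerCache(options =>
        {
            options.ConnectionString = "<connection string>"; // placeholder
            options.SchemaName = "dbo";
            options.TableName = "AppCache";
        });

        // Session state is stored through IDistributedCache, so it is now shared too.
        services.AddSession();
    }
}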

There's an interesting GitHub issue discussing key ring storage and key rotation here: https://github.com/dotnet/aspnetcore/issues/26786
Also, if you want to know more about how you can store the key ring in Azure Key Vault, I recently blogged about that here:
Storing the ASP.NET Core Data Protection Key Ring in Azure Key Vault
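The blog post covers storing the key ring itself in Key Vault; as a rough, related illustration, the official Azure packages also let you persist the keys to Blob Storage and protect (wrap) them with a Key Vault key - the URIs below are placeholders:

using System;
using Azure.Identity;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        var credential = new DefaultAzureCredential();

        services.AddDataProtection()
            // Requires Azure.Extensions.AspNetCore.DataProtection.Blobs.
            .PersistKeysToAzureBlobStorage(
                new Uri("https://<account>.blob.core.windows.net/<container>/keys.xml"),
                credential)
            // Requires Azure.Extensions.AspNetCore.DataProtection.Keys.
            .ProtectKeysWithAzureKeyVault(
                new Uri("https://<vault>.vault.azure.net/keys/<key-name>"),
                credential);
    }
}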

Related

.net core distributed apps key/secret storage

We are making an app that will be distributed to clients to host themselves, or with a provider, and we are wondering what the proper method of key storage is for a .NET Core web app.
We would like the database connection string and a few needed API keys to be included with the server, but not accessible to the clients. Most search results we could find relate to user secrets for development, but they don't explain what to do for releases/production.
I did find some information on data protection, but I also read that it is meant for keys that are changing, not keys that are static. Any help is much appreciated!
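For context, a minimal sketch of how configuration sources are typically layered, so that user secrets only matter in development while environment variables (set on the server or by the hosting provider) supply the real values in production - the key name is a placeholder:

using System;
using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main()
    {
        // Later sources override earlier ones, so environment variables set on the
        // production server win over anything shipped in appsettings.json.
        var config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            // Development-time secrets (requires Microsoft.Extensions.Configuration.UserSecrets
            // and a UserSecretsId in the project file).
            .AddUserSecrets<Program>(optional: true)
            // Production values live here.
            .AddEnvironmentVariables()
            .Build();

        Console.WriteLine(config["ConnectionStrings:Default"]); // placeholder key
    }
}

Host.CreateDefaultBuilder applies the same layering automatically, so in a typical ASP.NET Core app the main decision is simply where the production values come from.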

Sending central config to microservices in .net core

I am researching how to have a centralized data store for a multi-tenant application. This is a microservice application.
For example, each tenant will have different values for some keys. Some tenants will have their own unique keys, but let's assume they are all the same for now.
I have one microservice responsible for CRUD of the tenant configuration.
I need that configuration to be pushed to all of the other microservices so that each sub-service can have its own copy of the configuration and there is no reliance on the configuration service being up.
To me, this should be infrastructure rather than custom C# code, but I am new to the microservices scene, so I'm wondering if there is a known approach.
I considered using MassTransit/messages, but then wouldn't every microservice need a duplicated POCO/contract class? Also, if a new key is required, all microservices would need to be updated and deployed, which defeats the purpose of microservices.
In short, how can I have a config microservice sync tenant-specific data to multiple other microservices?

Sharing user login between Blazor WebServer and ASP.NET Core API

I am building a service-oriented system for personal use (plus a few friends may have limited access as well) - the aim is to have a dashboard for controlling my apps running on various machines such as Raspberry Pis (potentially expanding to a VPS or a few in the future).
The architecture itself is pretty simple. For authentication I want to use AWS Cognito. Services would communicate with the Web API (and potentially with each other) using gRPC within a VPN, and the dashboard would be served by Blazor Server (I may move to the Blazor WASM hosted model if I find a need for it). Each of the processes may or may not be on the same machine as any other (depending on the purpose). The Blazor server may or may not run within the VPN (I might want to move it to separate web hosting later).
I created a simple diagram to visualize it:
The problem comes with authentication. I want to have the Blazor server side and the API as separate processes (for now they're going to run on the same machine, but I may want to move them elsewhere eventually). Ideally authentication should be handled by the API, so that authentication is client-agnostic and the API can use it to verify whether the logged-in user can perform an action - which by itself is simple.
However, I want the Blazor server to use and validate that token as well in order to determine what to display to the user. I want to do this with the fewest calls possible - ideally avoiding querying the API for every 'should I display it or not?' decision.
I could easily do it by sacrificing the possibility of moving the API elsewhere and just merging the Blazor Server and API gateway into one project. For my current purposes that would be enough, but it's not an ideal solution, so first I want to look into how I could achieve my original vision.
How could I achieve this (with a minimal number of Blazor-server-to-API queries)?
I have been googling for a solution a lot, but so far I've only found either examples using Blazor Server and the API as one project, or client-side calls to the API directly.
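Something along these lines is what I have in mind for the token validation that both the API and the Blazor server process would share (the Cognito region and user pool ID are placeholders, and the exact validation settings depend on which Cognito token is being validated):

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.IdentityModel.Tokens;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                // Placeholder Cognito user pool; signing keys are discovered from its metadata.
                options.Authority = "https://cognito-idp.<region>.amazonaws.com/<user-pool-id>";
                options.TokenValidationParameters = new TokenValidationParameters
                {
                    // Cognito access tokens carry client_id rather than aud,
                    // so audience validation is often handled separately.
                    ValidateAudience = false
                };
            });
    }
}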
Thank you good folks in advance.

ASP.NET 5.0 Session State and Azure cache

I am trying to use the ASP.NET 5.0 session state middleware, and I want to use it with Azure Cache as the session store.
Can someone point out samples to implement this?
In ASP.NET 5 the session state system requires an implementation of IDistributedCache to be available in the service provider (via dependency injection). So, the session-state system should be usable as-is; you'll just need a Redis implementation of IDistributedCache.
The ASP.NET 5 Caching repo has a sample Redis distributed cache provider that uses Redis as the backing store.
There is also an accompanying sample app that shows direct usage of the distributed cache provider.
Plugging in Azure Cache (which is based on Redis) is left as an exercise to the reader (that's you!).
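With current package names, wiring Azure Cache for Redis in as the IDistributedCache behind session state looks roughly like this (the cache endpoint and access key are placeholders; the package/method names in the original ASP.NET 5 previews were slightly different):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Azure Cache for Redis is addressed like any other Redis server.
        // Requires the Microsoft.Extensions.Caching.StackExchangeRedis package.
        services.AddStackExchangeRedisCache(options =>
        {
            options.Configuration =
                "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False";
        });

        services.AddSession();
    }

    public void Configure(IApplicationBuilder app)
    {
        // The session middleware reads and writes session data through IDistributedCache.
        app.UseSession();

        app.Run(async context =>
        {
            context.Session.SetString("greeting", "hello"); // round-trips through Redis
            await context.Response.WriteAsync(context.Session.GetString("greeting"));
        });
    }
}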

API Level Caching in Ektron

I am working in Ektron 8.6.
Does anyone know how API-level caching is managed in Ektron?
Is there any config setting to manage API-level caching (web.config or any other config file)? Is API-level caching enabled by default? Is it different in the previous version (Ektron 8.5)?
Starting in version 8.5, Ektron introduced a caching layer that sits beneath its Framework APIs. It is configurable (enable, disable, set TTL, etc.) and extensible (provider-based, so you can implement providers for various cache servers like Redis, etc.).
It is not enabled by default. By default, each API call ultimately hits the database (or the search index). Since this is new in version 8.5+, older versions of Ektron do not have any sort of built-in API-level caching, though they can obviously take advantage of any standard .NET caching you'd want to create on your own.
Here's a technical webinar that goes into detail on the API level caching in v8.5+. The piece relevant to your question starts at 26:25, but I'd watch the whole thing if you haven't seen it already.
http://www.ektron.com/Webinars/Details/Optimize-Site-Performance-through-Caching/
The default Ektron cache provider uses in-memory / in-proc application-scope storage. Once you're using that, you may want to take a look at this open source project, which implements a third-party cache provider for Redis. You can use it as-is, use it as the stub for your own cache provider for another system, or just stick with the OOB in-proc cache provider.
https://github.com/ektron/EktronContrib/blob/master/README.md
Bill