ASP.NET 5.0 Session State and Azure cache - asp.net-core

I am trying to use the ASP.NET 5.0 session state middleware, and I want to use Azure Cache as the session store. Can someone point me to samples that implement this?
Can someone point out samples to implement this?

In ASP.NET 5 the session state system requires an implementation of IDistributedCache to be available in the service provider (via dependency injection). So, the session-state system should be usable as-is; you'll just need a Redis implementation of IDistributedCache.
The ASP.NET 5 Caching repo has a sample Redis distributed cache provider that uses Redis as the backing store.
There is also an accompanying sample app that shows direct usage of the distributed cache provider.
Plugging in Azure Cache (which is based on Redis) is left as an exercise to the reader (that's you!).
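For illustration, a minimal sketch of the wiring in Startup. The API names shifted between the early ASP.NET 5 betas and the released ASP.NET Core; the sketch below uses the current Microsoft.Extensions.Caching.StackExchangeRedis package, and the connection string is a placeholder:

// In ConfigureServices: register a Redis-backed IDistributedCache,
// which the session middleware will pick up automatically.
services.AddStackExchangeRedisCache(options =>
{
    // Placeholder: your Azure Cache for Redis connection string goes here.
    options.Configuration = "<your-cache>.redis.cache.windows.net:6380,password=<key>,ssl=True";
});
services.AddSession();

// In Configure: enable the middleware before anything that reads session state.
app.UseSession();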

Related

Create SignalR server to use across multiple applications

I am building a microservice-oriented .NET Core web application and now I want to add real-time communication. Is it possible to create a SignalR server and publish it on Azure? I want to use it in my microservices to send messages to users when a certain event occurs.
Yes, you can deploy your app to Azure and point your users to your hub endpoint with no problems. You have two options here:
Use SignalR and manually manage the connections and other SignalR plumbing yourself if you scale your application. For example, when you have two web apps and a client connects to one of them, you need to "tell" the other app that a new client has connected, using, for example, a Redis backplane.
Use Azure SignalR Service, and this kind of management is not needed; all you need to provide is one app with the hub logic. When a client connects to your hub, it is automatically redirected to Azure SignalR Service.
You can read more about these two options here:
https://learn.microsoft.com/pt-pt/azure/azure-signalr/signalr-concept-scale-aspnet-core
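To make the difference concrete, here is a rough sketch of what each option looks like in ConfigureServices (assuming the Microsoft.AspNetCore.SignalR.StackExchangeRedis and Microsoft.Azure.SignalR packages; the connection strings are placeholders):

// Option 1: self-hosted SignalR, scaled out across nodes with a Redis backplane
services.AddSignalR()
    .AddStackExchangeRedis("your-redis-connection-string");

// Option 2: Azure SignalR Service manages the connections and scale-out for you
services.AddSignalR()
    .AddAzureSignalR("your-azure-signalr-connection-string");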
Why not deploy SignalR myself?
It is still a valid approach to deploy your own Azure web app supporting ASP.NET Core SignalR as a backend component to your overall web application.
One of the key reasons to use the Azure SignalR Service is simplicity. With Azure SignalR Service, you don't need to handle problems like performance, scalability, and availability. These issues are handled for you with a 99.9% service-level agreement.
Also, WebSockets are typically the preferred technique to support real-time content updates. However, load balancing a large number of persistent WebSocket connections becomes a complicated problem to solve as you scale. Common solutions leverage DNS load balancing, hardware load balancers, and software load balancing. Azure SignalR Service handles this problem for you.
Another reason may be you have no requirements to actually host a web application at all. The logic of your web application may leverage Serverless computing. For example, maybe your code is only hosted and executed on demand with Azure Functions triggers. This scenario can be tricky because your code only runs on-demand and doesn't maintain long connections with clients. Azure SignalR Service can handle this situation since the service already manages connections for you. See the overview on how to use SignalR Service with Azure Functions for more details.
Yes, you can; here is the official quickstart sample:
https://learn.microsoft.com/en-us/azure/azure-signalr/signalr-quickstart-dotnet-core

ASP.NET Core Data Protection on Web Farm

I have an ASP.NET Core application that uses cookie authentication and runs on a web farm. The data protection keys are stored in a database. My application implements IXmlRepository, and ASP.NET Core calls IXmlRepository.GetAllElements to get the key ring. So the application on all nodes uses the same key ring, and a cookie encrypted on Node1 can be decrypted on Node2. It works fine.
However, data protection keys expire, and ASP.NET Core will then generate a new key. ASP.NET Core also caches the keys and refreshes them every 18-24 hours. So when the key expires, Node1 may generate a new key, but the other nodes may not refresh right away and pick it up. A cookie encrypted by Node1 then cannot be decrypted on the other nodes.
How can this situation be handled by ASP.NET Core?
I read https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/web-farm?view=aspnetcore-2.2, which states:
The default configuration isn't generally appropriate for hosting apps in a web farm. An alternative to implementing a shared key ring is to always route user requests to the same node.
Is routing all of a user's requests to the node they logged in on the only option?
In principle, this is no different than any other shared configuration scenario. You need to do two things:
Persist the key ring to a common filesystem or network path that all the processes/apps can access:
services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(@"\\server\share\directory\"));
Ensure that all the processes/apps are using the same application name:
services.AddDataProtection()
.SetApplicationName("shared app name");
The second item is less important for a web farm scenario, since it's all the same app and will have the same app name by default. However, it's always better to be explicit, and then if you do need to share with an entirely different app, you're already set up and ready to go.
In short, you need to add the following code:
services.AddDataProtection()
    .SetApplicationName("shared app name")
    .PersistKeysToFileSystem(new DirectoryInfo(@"\\server\share\directory\"));
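Since you already store the keys in a database, note there is also an EF Core-backed alternative to the file share (package Microsoft.AspNetCore.DataProtection.EntityFrameworkCore). As a sketch, with MyKeysContext being a hypothetical DbContext implementing IDataProtectionKeyContext:

services.AddDataProtection()
    .SetApplicationName("shared app name")
    // Persist the key ring to the database through EF Core instead of a file share.
    .PersistKeysToDbContext<MyKeysContext>();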
To @LGSon's point in the comments below your question, you'll also run into an issue with caching/sessions eventually as well. Regardless of being the same "app", you should think of each instance in the web farm as a separate app. Each is a separate process, which means it has its own separate memory allocation. If you use memory caching (which is also the default storage for sessions), then that cache is available only to the individual process that created it. Each process will therefore end up with a separate cache and separate sessions. To share this information, you need to employ a distributed cache like Redis or SQL Server. See: https://learn.microsoft.com/en-us/aspnet/core/performance/caching/distributed?view=aspnetcore-2.2#establish-distributed-caching-services.
Note: Even though there's a "distributed" memory cache, it's not actually distributed. This is just an implementation of the IDistributedCache that stores in memory. Since it stores in-memory, it's still process bound.
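As a sketch, registering a genuinely shared cache looks like this, using SQL Server via the Microsoft.Extensions.Caching.SqlServer package (the connection string name, schema, and table are placeholders):

services.AddDistributedSqlServerCache(options =>
{
    options.ConnectionString = Configuration.GetConnectionString("CacheDb");
    options.SchemaName = "dbo";
    options.TableName = "AppCache";
});
services.AddSession(); // sessions now live in the shared SQL Server cache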
There's an interesting GitHub issue discussing the key ring storage and key rotation here:
https://github.com/dotnet/aspnetcore/issues/26786
Also, if you want to know more about how you can store the key ring in Azure Key Vault, then I recently blogged about that here:
Storing the ASP.NET Core Data Protection Key Ring in Azure Key Vault

FF4J: REST endpoint as a feature store

I am currently looking at implementing feature toggles using ff4j for our application. We want to have a remote central config app which will hold all the features in it and the applications will talk to this central config app via REST to get the features. We will not be able to leverage Spring Cloud Config or Archaius for this purpose.
I went through the documentation, and it seems there is support for HttpClient (https://github.com/ff4j/ff4j/wiki/Store-Technologies#httpclient), but I couldn't find any sample for it. Can someone please let me know if I can leverage this method to build my feature store from a REST endpoint? I would also appreciate it if someone could point me to a sample.
This is a common pattern.
One component holds the administration UI (console) and the REST API; you can call it the "admin component". For security reasons, it may be the only component with access to the persistence unit (any of the 15 available DB implementations).
For the "admin component" HERE is sample using standAlone spring-bppt application using JDBC DB, and HERE you find a simple web application.
The REST API can be secured using credentials (user/password) and/or an API key. More information HERE.
All microservices access the REST API as clients to query the feature store. You will need the dependency ff4j-webapi-jersey2x or ff4j-webapi-jersey1x, which holds the HTTP client. Then you can define the store using:
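// Client-side feature store pointing at the admin component's REST endpoint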
FeatureStoreHttp storeHTT = new FeatureStoreHttp("http://localhost:9998/ff4j");
Warning: please consider using a cache to limit the overhead introduced by accessing the REST API on every feature check. More info on caching HERE.

Persistent session in ASP.NET Core

Is it possible to make the session persistent in ASP.NET Core? So far I can only find information about cookie expiration connected to ASP.NET Identity (which I am not using), or session idle timeout (which does not persist after the user closes the browser).
Where do I find options to make the session persistent?
ASP.NET Core Session is built on top of IDistributedCache, so I guess you are looking for a persistent implementation of it (Redis, SQL Server, etc.).
Working with a distributed cache.
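As a hedged sketch (option names from ASP.NET Core 2.x+): besides a persistent IDistributedCache, you also need to give the session cookie a MaxAge, because by default it is a browser-session cookie that disappears when the browser closes:

// Assumes a persistent IDistributedCache (Redis, SQL Server, etc.) is already registered.
services.AddSession(options =>
{
    // Server-side session entries survive 30 days of inactivity (placeholder value).
    options.IdleTimeout = TimeSpan.FromDays(30);
    // Make the cookie itself persistent so it outlives the browser session.
    options.Cookie.MaxAge = TimeSpan.FromDays(30);
});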

API Level Caching in Ektron

I am working in Ektron 8.6.
Does anyone know how API-level caching is managed in Ektron?
Are there any config settings to manage API-level caching (web.config or any other config files)? Is API-level caching enabled by default? Is it different in the previous version (Ektron 8.5)?
Starting in version 8.5, Ektron introduced a caching layer that sits beneath its Framework APIs. It is configurable (enable, disable, set TTL, etc.) and extensible (provider-based, so you can implement providers for various cache servers like Redis).
It is not enabled by default. By default, each API call ultimately hits the database (or the search index). Since this is new in version 8.5+, older versions of Ektron do not have any sort of built-in API-level caching, though they can obviously take advantage of any standard .NET caching you'd want to create on your own.
Here's a technical webinar that goes into detail on the API-level caching in v8.5+. The piece relevant to your question starts at 26:25, but I'd watch the whole thing if you haven't seen it already.
http://www.ektron.com/Webinars/Details/Optimize-Site-Performance-through-Caching/
The default Ektron cache provider uses in-memory/in-proc application-scope storage. If you need more than that, take a look at this open source project, which implements a third-party cache provider for Redis. You can use it as-is, use it as the stub for your own cache provider for another system, or just stick with the OOB in-proc cache provider.
https://github.com/ektron/EktronContrib/blob/master/README.md
Bill