What are the key differences between using Redis cache via ConnectionMultiplexer and AddStackExchangeRedisCache (IDistributedCache) in Startup.cs? - redis

I want to implement distributed caching (Redis) in an ASP.NET Core project. After a bit of research I found that there are two ways of creating a Redis connection: using AddStackExchangeRedisCache in Startup.cs, or using ConnectionMultiplexer directly.
AddStackExchangeRedisCache - This happens in Startup.cs.
Doubts about the above approach:
Does this work in a prod environment?
When and how is the connection initialized?
Is this a thread-safe way to create the connection?
By using ConnectionMultiplexer, we can initialize the DB instance ourselves. As per a few articles, lazy initialization will take care of thread safety as well.
Doubts:
Of the above, which is the better approach?
I tried both approaches on my local machine and both work fine, but I could not find the pros and cons of each approach.

With ConnectionMultiplexer, you have the full list of commands that you can execute on your Redis server. With distributed caching, you can only store and retrieve a byte array or a string, and you cannot execute any of the other commands that Redis provides. So if you just want to use it as a cache store, DistributedCaching provides a good abstraction layer. However, even the simplest increment/decrement command will not be available unless you use ConnectionMultiplexer.
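To illustrate the difference, a minimal sketch assuming an injected IDistributedCache and IConnectionMultiplexer, with a hypothetical "visits" key:

using Microsoft.Extensions.Caching.Distributed;
using StackExchange.Redis;
using System.Threading.Tasks;

public class VisitCounter
{
    private readonly IDistributedCache _cache;
    private readonly IConnectionMultiplexer _redis;

    public VisitCounter(IDistributedCache cache, IConnectionMultiplexer redis)
    {
        _cache = cache;
        _redis = redis;
    }

    public async Task CompareApisAsync()
    {
        // IDistributedCache: limited to storing/retrieving strings or byte arrays.
        await _cache.SetStringAsync("visits", "1");
        string stored = await _cache.GetStringAsync("visits");

        // ConnectionMultiplexer: full Redis command surface, e.g. an atomic increment.
        IDatabase db = _redis.GetDatabase();
        long newCount = await db.StringIncrementAsync("visits");
    }
}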

The extension method AddStackExchangeRedisCache uses a ConnectionMultiplexer under the hood (see here, and here for the extension method itself).
Does it work in prod? Yes, it works in prod either way.
When and how is the connection initialized? It is established lazily on first use, and the ConnectionMultiplexer instance is re-used (it is registered as a DI singleton).
Is it thread safe? Yes - see above and the linked source; a SemaphoreSlim is used to ensure the connection is only created once.
Pros and cons: since both use the ConnectionMultiplexer, they are pretty similar.
You can pick between the advantages of using the implementation agnostic IDistributedCache vs. direct use of the multiplexer and the StackExchange.Redis API (which has more specific functions than the interface).
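For reference, a minimal Startup.cs sketch of both registration styles (the connection string "localhost:6379" is just a placeholder):

using Microsoft.Extensions.DependencyInjection;
using StackExchange.Redis;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Option 1: IDistributedCache backed by Redis; the multiplexer is created lazily under the hood.
        services.AddStackExchangeRedisCache(options =>
        {
            options.Configuration = "localhost:6379";
        });

        // Option 2: register the multiplexer yourself and use the full StackExchange.Redis API.
        services.AddSingleton<IConnectionMultiplexer>(
            _ => ConnectionMultiplexer.Connect("localhost:6379"));
    }
}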

Wrappers like IDistributedCache and StackExchangeRedis.Extensions do not expose all of the functions available in the original library. In particular, I needed to delete all the keys in the Redis cache, which is not exposed by these wrappers.

Related

Common interface for Jedis and JedisCluster

I see that Jedis and JedisCluster don't implement a common Java interface, and I am wondering why. My software will be running in different environments where Redis may or may not run in cluster mode, so how do I implement a common piece of code using Jedis that will run in both environments?
The clients will be doing only basic operations and I want to hide the cluster operations within the library and not expose them. Any ideas on a modular design?
thanks.
Looks like redis.clients.jedis.JedisCommands may be your answer.
You can use this interface as argument to your methods and pass in either a Jedis or JedisCluster instance.

Is Kernel.Get<T>() threadsafe + good pattern to share the kernel among components

Is Kernel.Get() thread-safe? My goal is to share an instance of my kernel among all my components, and they may all very well call Kernel.Get() at the same time on different threads.
Is Kernel.Get() thread safe?
What is the best pattern for sharing the application kernel among all application components, which sit in different DLLs? I would prefer not to pass an instance of a factory to every component of my application, if that makes sense.
Get is threadsafe but creating new kernel instances (ctor) is currently not threadsafe.
Generally you should keep your access to the kernel to an absolute minimum. Accessing the kernel from everywhere is a very bad design and makes your code much less reusable. See the Service Locator anti-pattern.
The only situations where you access the kernel should be:
Once in the composite root of the application (e.g. Program.Main, App.xaml, MVC Controller creation)
Inside a factory if you don't know how many instances you need when the composite root is created
Inside a factory if you don't know which implementation is required when the composite root is created
Inside a factory if you need to create a component late due to memory/resource constraints.
In all cases, limit the access to the kernel to the composite root and inject factories (a class or Func<T>) into the classes where you need to create objects at runtime. The best way to give those factories access to the kernel is still constructor injection, even if you would prefer not to. Or use Func<T> (see Does Ninject support Func (auto generated factory)?).
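As a rough sketch of that idea (assuming the Ninject factory extension, which provides automatic Func<T> bindings; the IReport/ReportScheduler types are made up for illustration):

using System;
using Ninject.Modules;

// Hypothetical service used only for illustration.
public interface IReport { }
public class Report : IReport { }

// A class that needs to create objects at runtime gets a factory injected,
// instead of reaching for the kernel itself.
public class ReportScheduler
{
    private readonly Func<IReport> _reportFactory;

    public ReportScheduler(Func<IReport> reportFactory)
    {
        _reportFactory = reportFactory;
    }

    public IReport CreateNextReport()
    {
        // Each call resolves a fresh IReport through the container,
        // but ReportScheduler never sees the kernel.
        return _reportFactory();
    }
}

public class ReportModule : NinjectModule
{
    public override void Load()
    {
        Bind<IReport>().To<Report>();
        // With the factory extension loaded, Func<IReport> is resolvable automatically.
    }
}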
Yes, it is thread safe. The primary app I work on has a single kernel that serves a large SaaS app, so it gets pounded and does just fine. We also have a multi-threaded page generator test suite that exposed a threading issue in Ninject last fall, but it has been fixed and has been fine since then. So I know for sure that it's OK.
There are lots of different patterns for exposing the kernel. We use a ServiceLocator pattern (basically a static container for the container).
For the different DLLs, we have a NinjectModule in each DLL that does its own bindings, and then the app does an assembly scan for NinjectModules at startup when it sets up the ServiceLocator.
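A minimal sketch of that setup (the ServiceLocator wrapper and the module shown here are simplified placeholders):

using System;
using Ninject;
using Ninject.Modules;

// In each DLL: a module that owns that DLL's bindings.
public class BillingModule : NinjectModule
{
    public override void Load()
    {
        // Bind<IInvoiceService>().To<InvoiceService>(); etc.
    }
}

// In the app: a static wrapper around the kernel ("a static container for the container").
public static class ServiceLocator
{
    private static IKernel _kernel;

    public static void Initialize()
    {
        _kernel = new StandardKernel();
        // Scan the loaded assemblies and load every NinjectModule they contain.
        _kernel.Load(AppDomain.CurrentDomain.GetAssemblies());
    }

    public static T Get<T>()
    {
        return _kernel.Get<T>();
    }
}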

C# Task Parallel Library and NHibernate/Spring.NET

I have been using Spring.NET and NHibernate for some years and I am very satisfied. However, I have always been playing around with multi-threading, Reactive Extensions and eventually the Task Parallel Library, which is a great framework. Unfortunately all kinds of multi-threading approaches fail because of NHibernate's session, which is not thread safe.
I am asking you how I can benefit from parallel programming while still utilising NHibernate.
For instance: I have a CustomerRegistrationService class whose Register method performs several tasks:
ICustomer customer = this.CreateCustomerAndAddresses(parameters);
this.CreateMembership(customer);
this.CreateGeoLookups(customer.Address);
this.SendWelcomeMail(customer);
The last two methods would be ideal candidates to run in parallel: CreateGeoLookups calls some web services to determine geo locations of the customer's address, creates some new entities, and updates the customer itself. SendWelcomeMail does what it says.
Because CreateGeoLookups does use NHibernate (although through repository objects, so NHibernate is actually hidden via interfaces/dependency injection), it won't work with Task.Factory.StartNew(...) or other threading mechanisms.
My question is not about solving this particular issue; rather, I would like to hear from you about NHibernate, Spring.NET and parallel approaches.
Thank you very much
Max
In NH it's the ISession that isn't thread-safe, but the ISessionFactory is entirely thread-safe, easily supporting what it seems you are after. If you have designed your session-lifecycle-management (and the repositories that depend upon it) such that you assume one single consistent ISession across calls, then, yes, you will have this kind of trouble. But if you have designed your session-handling pattern to only assume a single ISessionFactory but not to make assumptions about ISession, then there is nothing inherently preventing you from interacting with NH in parallel.
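To illustrate, a minimal sketch assuming an injected ISessionFactory and a simple Customer entity: each parallel task opens its own ISession (and its own transaction) from the shared factory instead of sharing one ISession across threads:

using System.Threading.Tasks;
using NHibernate;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Address { get; set; }
}

public class LookupWorker
{
    private readonly ISessionFactory _sessionFactory; // thread-safe; share freely

    public LookupWorker(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void RunLookupsInParallel(int customerId)
    {
        // Each lambda runs on its own thread with its own session/transaction.
        Parallel.Invoke(
            () => DoLookup(customerId),
            () => DoLookup(customerId));
    }

    private void DoLookup(int customerId)
    {
        using (ISession session = _sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            var customer = session.Get<Customer>(customerId);
            // ... call web services, create/update entities ...
            tx.Commit();
        }
    }
}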
Although you don't specifically mention your use case as being for the web, it's important to note that in web-centric use cases (e.g., what is a pretty common case for Spring.NET users as well as many other NH-managing frameworks), the often-used 'Session-Per-Request' pattern of ISession management (often referred to in Spring.NET as 'Open Session In View' or just 'OSIV') will NOT work, and you will need to switch to a different duration for your ISession lifecycle. This is because (as the name suggests) the session-per-request/OSIV pattern makes the (now incorrect in your case) assumption that there is only a single ISession instance for the duration of each HttpRequest (and presumably you would want to be spawning these parallel NH calls all within the context of a single HttpRequest in the web use case).
Obviously in the non-web case, where there's rarely a similar concept to session-per-request, you wouldn't be as likely to run into this issue, as session-lifecycle management is rarely as fine-grained/short-lived as it is in web-based applications.
Hope this helps.
-Steve B.
This is a difficult thing you are asking for. The DTC has to be handled with care.
The only solution I know of is the use of reliable, transactional messaging (e.g. MSMQ + NServiceBus/MassTransit).
Such a design enables you to do this. It would look like this:
var customerUid = CreateCustomers();
Bus.Publish(new CustomerCreatedEvent() { CustomerUid = customerUid });
Then you could use two event handlers (Reactors) that handle the event and send an email or create the lookups.
This won't allow you to share the transaction either, but it will ensure that the Reactors are run (in a new transaction) once the creation of the customer has succeeded.
Also this has nothing to do with the TPL.
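A rough sketch of one such Reactor, assuming an NServiceBus-style handler (the event and handler types are placeholders, and the exact handler interface depends on the messaging library and version):

using System;
using NServiceBus;

// Published after the customer has been created and committed.
public class CustomerCreatedEvent : IEvent
{
    public Guid CustomerUid { get; set; }
}

// One Reactor per follow-up action; each handler runs in its own transaction.
public class SendWelcomeMailReactor : IHandleMessages<CustomerCreatedEvent>
{
    public void Handle(CustomerCreatedEvent message)
    {
        // Load the customer by message.CustomerUid and send the welcome mail.
        // If this fails, the message is retried without touching the original transaction.
    }
}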
Well, thank you for answering. I know that 'the ISession isn't thread-safe but the ISessionFactory is entirely thread-safe'. My problem in the above code, for example, is that the whole operation is wrapped in one transaction. So this.CreateCustomerAndAddresses(parameters) on main thread #1 will use, for instance, ISession #1 with transaction #1. Calling the other three in parallel will create three more threads and three more sessions and transactions, which leads to database timeouts in my case. My assumption is that transaction #1 is not successfully committed because it waits for the three concurrent tasks to complete, but the three concurrent tasks try to read from the database while that transaction is still active, leading to deadlocks/timeouts.
So is there some way to tell the other threads/sessions not to create a new transaction but to use the main transaction #1?
I am using the TxScopeTransactionManager from Spring.NET, which utilises DTC (System.Transactions). I have googled that maybe System.Transactions.DependentTransaction could work, but I do not have a clue how to integrate it into my Spring.NET transaction-managed scenario.
Thanks

Implementing repositories using NHibernate and Spring.Net

I'm trying to get to grips with NHibernate, Fluent NHibernate and Spring.
Following domain-driven design principles, I'm writing a standard tiered web application composed of:
a presentation tier (ASP.NET)
a business tier, comprising:
an application tier (basically a set of methods made visible to the UI tier)
repository interfaces and domain components (used by the application tier)
a persistence tier (basically the implementation of the repository interfaces defined in the business tier)
I would like help determining a way of instantiating an NHibernate ISession in such a way that it can be shared by multiple repositories over the lifetime of a single request to the business tier. Specifically, I would like to:
allow the ISession instance and any transaction to be controlled outwith the repository implementation (perhaps by some aspect of the IoC framework, an interceptor?)
allow the ISession instance to be available to the repositories in a test-friendly manner (perhaps via injection or through some shared 'context' abstraction)
avoid any unnecessary transactions being created (i.e. when only read-only operations have been executed)
allow me to write tests that use SQLite
allow me to use Fluent NHibernate
allow the repository implementation to remain ignorant of the host environment. I don't yet know if the business tier will run in-process with the presentation tier or will be hosted separately under WCF (in IIS), so I don't want to bind my code too closely to an HTTP context (for example).
My first attempt to solve this problem was to use the Registry pattern, storing the ISession instance in a ThreadStatic property. However, subsequent reading has suggested that isn't the best solution (as ASP.NET can switch threads within the page lifecycle, I believe).
Any thoughts, part solutions, pattern names, pointers to up-to-date samples (NHibernate 2) will be most gratefully received.
I have not used Spring.NET so I can't comment on that. However, the rest sounds remarkably (or perhaps not so remarkably; we're hardly the first to implement these things ;) similar to my own experience. I too had trouble finding a One True Best Practice so I just read as much as I could and came up with my own interpretation.
In my situation I wanted transaction/session management to be external to the repository, as well as keeping repository concerns from bubbling up out of them (i.e. the code using the repository should not need to know that it uses NHibernate internally and shouldn't need to know anything about NHibernate session management). In my case it was decided that transactions would be created by default, lest developers forget them, so I had to have a read-only escape mechanism. I went with the Unit of Work pattern with the NHibernate ISession instance stored inside. Calling code (I also created a DSL interface for the UoW) might look something like:
using (var uow = UoW.Start().ReadOnly().WithHttpContext()
    .InNewScope().WithScopeContext(ScopeContextProvider.For<CRMModel>()))
{
    // Repository access
}
In practice, that could be as short as UoW.Start(), depending on how much context is already available. The HttpContext part refers to the storage location for the UoW, which is, unsurprisingly, the HttpContext in this case. As you mentioned, for an ASP.NET application, HttpContext is the safest place to store things. ScopeContextProvider basically makes sure the right data context is provided for the UoW (the ISession instance for the appropriate database/server, other settings). The "ScopeContext" concept also makes it easy to insert a "test" scope context.
Going this route makes the repositories explicitly dependent on the UoW interface. Actually, you might be able to abstract it some but I'm not sure I see the benefit. What I mean is, each repository method retrieves the current UoW instance and then pulls out the ISession object (or simply a SqlConnection for those methods that don't use NHibernate) to run the NHibernate query/operation. This works for me though because it also seems like the ideal time to make sure that the current UoW is not read-only for methods that might need to run CRUD.
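To make that concrete, a rough sketch of a repository method under this scheme (UoW.Current, Session and AssertNotReadOnly are hypothetical members of the UoW abstraction described above, not a library API):

using NHibernate;

public class CustomerRepository
{
    public Customer GetById(int id)
    {
        // Grab the ambient unit of work and pull its ISession out.
        var uow = UoW.Current;                 // hypothetical accessor
        ISession session = uow.Session;        // hypothetical ISession exposed by the UoW
        return session.Get<Customer>(id);
    }

    public void Save(Customer customer)
    {
        var uow = UoW.Current;
        uow.AssertNotReadOnly();               // hypothetical guard for CRUD methods
        uow.Session.SaveOrUpdate(customer);
    }
}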
Overall, I think this is one approach that solves all your points:
Allows session management to be external to the repository
ISession context can be mocked or pointed at a context provider for a test environment
Avoids unnecessary transactions (well, you'd have to invert what I did and have a .Transactional() call or something)
I can't see why you couldn't test with SQLite since that's more of an NHibernate concern
I use Fluent NHibernate myself
Allows the repository to be ignorant of the host environment (that is, the repository caller controls the UoW storage context)
As for the UoW implementation, I'm partially kicking myself for not looking around more before I started. There's a project called machine.uow which I understand is fairly popular and works well with NHibernate. I haven't played with it much so I can't say if it solves all my requirements as neatly as the one I wrote myself, but it might have saved development time as well.
Perhaps we'll get some comments as to where I went wrong or how to improve things, but I hope this is at least helpful in some way.
For reference, the software stack I'm using is:
ASP.NET MVC
Fluent NHibernate on top of NHibernate
Ninject for dependency injection
What you are describing is supported by the Spring.NET framework almost out of the box. Only for Fluent NHibernate do you need to add a custom SessionFactory to Spring.NET (not a lot of code; see: Using Fluent NHibernate in Spring.NET).
Every repository can use the same ISession; just inject the SessionFactory into your repositories and use Spring.NET's transaction services.
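For example, a minimal sketch of that injection (assuming Spring.NET manages the contextual session, so GetCurrentSession returns the session bound to the current transaction; the Customer entity is a placeholder):

using NHibernate;

public class CustomerRepository
{
    private readonly ISessionFactory _sessionFactory;

    // The session factory is injected, e.g. from Spring.NET's object definitions.
    public CustomerRepository(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public Customer GetById(int id)
    {
        // Every repository taking part in the same transaction shares this session.
        return _sessionFactory.GetCurrentSession().Get<Customer>(id);
    }
}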
Just try it out, they have pretty thorough documentation imho.

NHibernate, DTOs and NonUniqueObjectException

We're using the DTO pattern to marshal our domain objects from the service layer into our repository, and then down to the database via NHibernate.
I've run into an issue whereby I pull a DTO out of the repository (e.g. CustomerDTO) and then convert it into the domain object (Customer) in my service layer. I then try and save a new object back (e.g. SalesOrder) which contains the same Customer object. This is in turn converted to a SalesOrderDTO (and CustomerDTO) for pushing into the repository.
NHibernate does not like this - it complains that the CustomerDTO is a duplicate record. I'm assuming that this is because it pulled out the first CustomerDTO in the same session and, because the returned object has been converted back and forth, it cannot recognise this as the same object.
Am I stuck here or is there a way around this?
Thanks
James
You can re-attach an object to a session in NHibernate by using Lock - e.g.
_session.Lock(myDetachedObject, NHibernate.LockMode.None);
which may or may not help depending on exactly what is happening here. On a side note, using DTOs with NHibernate is not the most common practice; the fact that NHibernate (mostly) supports persistence ignorance means that DTOs typically aren't as widely used as with some other ORM frameworks.
It's really about how the NHibernate session works. If you, within a session, pull an instance of your CustomerDTO and then, after a while, retrieve the same CustomerDTO again (say, by primary key), you will actually get a reference to the very same object as in your first retrieval.
So what you do is either merge the objects by calling session.Merge, or ask your session for the object by calling session.Get(primaryKey), do your updates, and flush the session.
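For illustration, a minimal sketch of both options (detachedCustomerDto and its properties stand in for the converted-back object from the question):

// Option 1: merge - the instance returned by Merge is the one the session tracks.
var managedCustomer = (CustomerDTO)session.Merge(detachedCustomerDto);

// Option 2: re-load through the session, copy the changes onto it, then flush.
var loaded = session.Get<CustomerDTO>(detachedCustomerDto.Id);
loaded.Name = detachedCustomerDto.Name;   // copy over whatever changed
session.Flush();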
However, as suggested by Steve, this is not usually what you do - you really want to get your domain object from the datastore and use DTOs (if needed) for transferring the data to the UI, a web service, whatever...
As others have noted, implementing Equals and GetHashCode is a step in the right direction. Also look into NHibernate's support for the "attach" OR/M idiom.
You also have the nosetter.camelcase option at your disposal: http://davybrion.com/blog/2009/03/entities-required-properties-and-properties-that-shouldnt-be-modified/
Furthermore, I'd like to encourage you not to be dissuaded by the lack of information out there online. It doesn't mean you're crazy, or doing things wrong. It just means you're working in an edge case. Unfortunately the biggest consumers of libraries like NHibernate are smallish in-house and/or web apps, where there exists the freedom to lean all your persistence needs against a single database. In reality, there are many exceptions to this rule.
For example, I'm currently working on a commercial desktop app where one of my domain objects has its data spread between a SQL CE database and image files on disk. Unfortunately NHibernate can only help me with the SQL CE persistence. I'm forced to use a sort of "Double Mapping" (see Martin Fowler's "Patterns of Enterprise Application Architecture") to map my domain model through a repository layer that knows what data goes to NHibernate and what goes to disk.
It happens. It's a real need. Sometimes an apparent lack in a tool indicates you're taking a bad approach. But sometimes the truth is that you just truly are in an edge case, and need to build out some of these patterns for yourself to get it done.
"I'm assuming that this is because it pulled out the first CustomerDTO in the same session and, because the returned object has been converted back and forth, it cannot recognise this as the same object."
You are right, NHibernate can't. Consider implementing Equals and GetHashCode to fix this. I think a re-attach may only work if you haven't already loaded the object within the session.
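As a rough sketch of the identifier-based Equals/GetHashCode idea (the CustomerDTO shape here is assumed):

public class CustomerDTO
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }

    // Two instances are "the same customer" if they carry the same identifier,
    // regardless of whether they are the same object reference.
    public override bool Equals(object obj)
    {
        var other = obj as CustomerDTO;
        if (other == null) return false;
        if (Id == 0 || other.Id == 0) return ReferenceEquals(this, other); // transient objects
        return Id == other.Id;
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}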