What is the best approach to share a UnitOfWork between two repositories using Unity?

I want to use Unity as an IoC besides the UnitOfWork and Repository patterns. I read various related articles and questions but none of them satisfied me completely.
I have a problem with all of the approaches. An example will explain my problem better:
We want to work with two repositories in two separate classes (perhaps business services), but the overall work should happen in a single unit of work.
The starting point is the LocalService1.Method1 method.
public class LocalService1
{
    public void Method1(int id)
    {
        var repository1 = Container.Current.Resolve<IRepository1>(); // Injects the IUnitOfWork for the repository.
        var entity1 = repository1.GetEntity1(id);

        var service2 = Container.Current.Resolve<LocalService2>(); // Maybe it's better not to use IoC for business logic. This is not my issue.
        service2.Method2(entity1);
    }
}
...
public class LocalService2
{
    public void Method2(Entity1 entity1)
    {
        var repository2 = Container.Current.Resolve<IRepository2>(); // Injects the IUnitOfWork for the repository.
        var count = repository2.GetEntity2sCount(entity1.Id);
        // Do some work with count and entity1
    }
}
The main question is: how can I share the UnitOfWork (here it could be the ObjectContext) between IRepository1 and IRepository2 while calling LocalService1.Method1?
Even more important: I want to be sure about the UnitOfWork's disposal.
I guess the answers would focus on these issues:
IoC configuration
Life Time configuration
Disposal time (How and when?)
If you recommend using “HttpContext”, please keep non-web environments in mind.
I know my question is mostly about “lifetime management”, but I'm looking for a comprehensive approach.

First: Don't use Unity as a ServiceLocator. This is considered an anti-pattern. Use constructor injection instead.
Unity's LifetimeManagers don't clean up after themselves. This feature is on the wish list for Unity vNext.
If you want your objects to be disposed, you should create your own LifetimeManager and a related BuilderStrategy that handle the cleanup.
There is a sample in the TecX project (inside TecX.Unity.Lifetime) which is taken from Mark Seemann's book Dependency Injection in .NET.
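A rough sketch of what the constructor-injection route can look like (the repository, service, and unit-of-work types below are assumptions based on the question, not a definitive implementation): register the unit of work with a HierarchicalLifetimeManager, take all dependencies through constructors, and resolve each logical operation from a child container that you dispose at the end.

// Sketch only. Assumes LocalService1/LocalService2 and the repositories take their
// dependencies via constructor injection, and that EntityFrameworkUnitOfWork wraps
// the ObjectContext and implements IDisposable.
// using Microsoft.Practices.Unity;   (or "using Unity;" in newer versions)

var container = new UnityContainer();

// One IUnitOfWork instance per (child) container: both repositories resolved from
// the same child container share it.
container.RegisterType<IUnitOfWork, EntityFrameworkUnitOfWork>(new HierarchicalLifetimeManager());
container.RegisterType<IRepository1, Repository1>();
container.RegisterType<IRepository2, Repository2>();
container.RegisterType<LocalService1>();
container.RegisterType<LocalService2>();

// One child container per logical operation; dispose it when the work is done.
// (As noted above, with plain Unity you still need a custom lifetime manager and
// builder strategy for the IUnitOfWork to be disposed along with the child.)
using (var child = container.CreateChildContainer())
{
    var service1 = child.Resolve<LocalService1>();
    service1.Method1(42);
}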

Related

ASP.Net Core Open Partial Generic Dependency Injection

I would like to register the following items for DI using an open generic implementation and interface. I know the following example will not work, as well as other combinations I've tried with MakeGenericType, or GetGenericArguments. I would like to simply call AddRepository<MyDbContext> and then be able to inject my implementation into classes without explicitly having to register the type I am using.
Interface
public interface IRepository<TEntity>
{
}
Implementation
public class Repository<TEntity, TContext> : IRepository<TEntity>
    where TEntity : class
    where TContext : DbContext
{
}
Registration
public static class RepositoryServiceCollectionExtensions
{
    public static IServiceCollection AddRepository<TContext>(
        this IServiceCollection services) where TContext : DbContext
    {
        services.TryAddScoped(
            typeof(IRepository<>),
            typeof(Repository< , TContext>));

        return services;
    }
}
The dependency injection container Microsoft.Extensions.DependencyInjection and its abstraction layer do not support open generic factories, so you generally cannot achieve what you would like to do there. There is also no support planned.
Unlike many other dependency-injection-related features, this is also not something you can patch by just providing the right wrapper or factory types. So you will actually have to change your design here.
Since you want to resolve IRepository<TEntity>, and the only way to do this is by registering an equivalent open generic type, you will have to have some type Repository<TEntity> that implements your repository. That makes it impossible to retrieve the database context type from the generic type argument, so you will have to get it another way.
You have different options to do that. For example, you could configure your Repository<TEntity> (e.g. using M.E.Options) with the context type and have it resolve the Repository<TEntity, TContext> dynamically. But since you have actual control over your database context, I would suggest either adding a marker interface or introducing another type for the context which you can then register with the container:
public class Repository<TEntity> : IRepository<TEntity>
{
    public Repository(IDbContext dbContext)
    { … }
}

public class MyDbContext : DbContext, IDbContext
{ … }
Then, your extension method could look like this:
public static IServiceCollection AddRepository<TContext>(this IServiceCollection services)
    where TContext : DbContext, IDbContext
{
    services.AddTransient(typeof(IDbContext), sp => sp.GetService<TContext>());
    services.TryAddScoped(typeof(IRepository<>), typeof(Repository<>));

    return services;
}
Of course, this changes how your Repository implementation works, but I assume you don't actually need to know the TContext type for anything other than injecting the database context, so this will probably still work for you.
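For illustration, wiring it up could look roughly like this (Order, the provider configuration, and the resolution code are placeholders/assumptions, not part of the answer above):

// Hypothetical usage; assumes MyDbContext : DbContext, IDbContext as sketched above
// and some Order entity in the model.
// using Microsoft.EntityFrameworkCore;
// using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddDbContext<MyDbContext>(options => { /* configure the provider here */ });
services.AddRepository<MyDbContext>();

using (var provider = services.BuildServiceProvider())
using (var scope = provider.CreateScope())
{
    // Resolves Repository<Order>, backed by the scoped MyDbContext.
    var orders = scope.ServiceProvider.GetRequiredService<IRepository<Order>>();
}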
That being said, I have to agree with Chris Pratt that you probably don't need this. You say that you want to introduce the repository because “coding stores and implementations for every entity is a time consuming task”, but you should really think about whether you actually need that. A generic repository is very limited in what it can do and mostly means that you are only doing CRUD operations. But that is exactly what DbContext and DbSet<T> already do:
C: DbContext.Add, DbSet<T>.Add
R: DbContext.Find, DbSet<T>.Find
U: DbContext.Update, DbSet<T>.Update
D: DbContext.Remove, DbSet<T>.Remove
In addition, DbContext is a “unit of work” and DbSet<T> is an IQueryable<T>, which gives you a lot more control and power than a generic repository could possibly give you.
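For instance, a typical read-modify-save needs nothing beyond the context itself (BlogContext and Post here are made-up example types):

// Minimal sketch of using DbContext/DbSet directly instead of a generic repository.
// using System.Threading.Tasks;
// using Microsoft.EntityFrameworkCore;

public static class PostEditor
{
    public static async Task RenameAsync(BlogContext db, int postId, string newTitle)
    {
        var post = await db.Posts.FindAsync(postId);   // R: Find
        if (post == null)
            return;

        post.Title = newTitle;                         // tracked change, no explicit Update needed
        await db.SaveChangesAsync();                   // DbContext acts as the unit of work
    }
}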
You cannot have a partially open generic reference. It's all or nothing. In other words, you can try:
services.TryAddScoped(
    typeof(IRepository<>),
    typeof(Repository<,>));
But, if that doesn't work, you'll likely need to add a type param to your AddRepository method:
public static IServiceCollection AddRepository<TEntity, TContext>(this IServiceCollection services)
    where TEntity : class
    where TContext : DbContext
{
    services.TryAddScoped(
        typeof(IRepository<TEntity>),
        typeof(Repository<TEntity, TContext>));

    return services;
}
Of course, I think that breaks what you're ultimately trying to achieve here: registering repositories for all the entity types in one go. You can always use a bit of reflection to find all the entities in your assembly (they would need to share something in common: a base class, an interface, etc.), then enumerate over them and use reflection to call AddScoped on your service collection for each, as sketched below.
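A rough sketch of that reflection approach (EntityBase is a hypothetical common base class; adjust the filter to whatever your entities actually share):

// using System;
// using System.Linq;
// using Microsoft.EntityFrameworkCore;
// using Microsoft.Extensions.DependencyInjection;
// using Microsoft.Extensions.DependencyInjection.Extensions;

public static class RepositoryReflectionExtensions
{
    public static IServiceCollection AddRepositories<TContext>(this IServiceCollection services)
        where TContext : DbContext
    {
        // Find every concrete entity type deriving from the (hypothetical) EntityBase.
        var entityTypes = typeof(EntityBase).Assembly
            .GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract && typeof(EntityBase).IsAssignableFrom(t));

        foreach (var entityType in entityTypes)
        {
            // Close IRepository<TEntity> -> Repository<TEntity, TContext> for each entity.
            var serviceType = typeof(IRepository<>).MakeGenericType(entityType);
            var implementationType = typeof(Repository<,>).MakeGenericType(entityType, typeof(TContext));
            services.TryAddScoped(serviceType, implementationType);
        }

        return services;
    }
}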
All that said, the best thing you can do here is to throw all of this away. You don't need the repositories. EF already implements the repository and unit of work patterns. When you use an ORM like EF, you're essentially making it your data layer instead of a custom class library you create. Putting your own custom wrapper around EF not only adds entropy to your code (more to maintain, more to test, and more that can break), but it can also mess up the way EF works in many cases, leading to less efficiency in the best cases and outright introducing bugs into your application in the worst cases.

Modular design and intermodule references

I'm not sure the title is a good match for the question I want to put on the table.
I'm planning to create a web MVC framework for my graduation dissertation, and in a previous conversation with my advisor about defining some goals, he convinced me that I should choose a modular design for this project.
I had already developed some things by then, and I stopped for a while to analyze how modular it really was, but I couldn't, because I don't know the real meaning of "modular".
Some things are not very clear to me. For example, does just referencing another module blow up the modularity of my system?
Let's say I have a Database Access module that can OPTIONALLY use a Cache module for storing the results of complex queries. As anyone can see, I will at least have a naming dependency on the Cache module.
In my conception of "modular design", I can distribute each component separately and make it interact with others developed by other people. In the case I showed, if someone wants to use my Database Access module, they will have to take the Cache module as well, even if they will not use it, just for referencing/naming purposes.
So I was wondering whether this is really a modular design.
I came up with an alternative: creating each component on its own, without it even knowing about the existence of other components that are not absolutely required for it to work. To extend functionality, I could create some structure based on Decorators and Adapters.
To clarify things a little bit, here is an example (in PHP):
Before
interface Cache {
    public function isValid();
    public function setValue();
    public function getValue();
}

interface CacheManager {
    public function get($name);
    public function put($name, $value);
}

// Some concrete implementations...

interface DbAccessInterface {
    public function doComplexOperation();
}

class DbAccess implements DbAccessInterface {
    private $cacheManager;

    public function __construct(..., CacheManager $cacheManager = null) {
        // ...
        $this->cacheManager = $cacheManager;
    }

    public function doComplexOperation() {
        if ($this->cacheManager !== null) {
            // return from cache if valid
        }
        // complex operation
    }
}
After
interface Cache {
    public function isValid();
    public function setValue();
    public function getValue();
}

interface CacheManager {
    public function get($name);
    public function put($name, $value);
}

// Some concrete implementations...

interface DbAccessInterface {
    public function doComplexOperation();
}

class DbAccess implements DbAccessInterface {
    public function __construct(...) {
        // ...
    }

    public function doComplexOperation() {
        // complex operation
    }
}

// And now the integration module
class CachedDbAccess implements DbAccessInterface {
    private $dbAccess;
    private $cacheManager;

    public function __construct(DbAccessInterface $dbAccess, CacheManager $cacheManager) {
        $this->dbAccess = $dbAccess;
        $this->cacheManager = $cacheManager;
    }

    public function doComplexOperation() {
        $cache = $this->cacheManager->get("Foo");
        if ($cache->isValid()) {
            return $cache->getValue();
        }
        // Delegate to the wrapped DbAccess, cache the result, and return it
        $result = $this->dbAccess->doComplexOperation();
        $this->cacheManager->put("Foo", $result);
        return $result;
    }
}
Now my question is:
Is this the best solution? Should I do this for all modules that are not required to work together but can work more efficiently when they do?
Would anyone do it in a different way?
I have some further questions about this, but I don't know whether this is an acceptable question for Stack Overflow.
P.S.: English is not my first language, so some parts may be a little confusing.
Some resources (not theoretical):
Nuclex Plugin Architecture
Python Plugin Application
C++ Plugin Architecture (use NoScript on that site, they have some weird login policies)
Other SO threads (design pattern for plugins in php)
Django Middleware concept
Does just referencing another module blow up the modularity of my system?
Not necessarily. It's a dependency, and having dependencies is perfectly normal. Without dependencies, modules can't interact with each other (unless the interaction is indirect, which is generally bad practice because it hides dependencies and complicates the code). Modular design means managing dependencies, not removing them.
One tool is interfaces. Referencing a module via an interface creates a so-called soft dependency. Such a module can accept any implementation of the interface as a dependency, so it is more independent and, as a result, more maintainable.
The other tool is designing modules (and their interfaces) with only a single responsibility. This also makes them more granular, independent and maintainable.
But there is a line you should not cross: blindly applying these tools may lead to an overly modular and overly generic design. Making things too granular makes the whole system more complex. You should not try to solve universal problems by making generic modules that all developers can use (unless that is your goal). First of all, your system should solve your domain tasks; make things generic enough, but no more than that.
I came up with an alternative: creating each component on its own, without it even knowing about the existence of other components that are not absolutely required for it to work.
It is great that you came up with this idea by yourself. The statement itself is key to modular programming.
A plugin architecture is the best in terms of extensibility, but IMHO it is hard to maintain, especially within a single application. And depending on the complexity of the plugin architecture, it can make your code more complex by adding plugin logic, etc.
Thus, for intra-application modular design, I choose an N-tier, interface-based architecture. Basically, the architecture relies on these tiers:
Domain / Entity
Interface [depends on 1]
Services [depends on 1 and 2]
Repository / DAL [depends on 1 and 2]
Presentation Layer [depends on 1, 2, 3, 4]
Unfortunately, I don't think this is neatly achievable in PHP projects, as it needs separate project/DLL references for each tier. However, following the architecture can still help to modularize the application.
For each module, we need to do interface-based design. This helps to enhance the modularity of your code, because you can change the implementation later while keeping the consumer the same.
I have provided an answer about a similar interface-based design at this Stack Overflow question.
Last but not least, if you want to make your application modular up to the UI, you can use a Service Oriented Architecture. This simply means making your application a bunch of services and then having the UI consume those services. This design helps to separate your UI from your logic. You can later use a different UI, such as a desktop app, while still using the same logic. Unfortunately, I don't have a reliable source for SOA.
EDIT:
I misunderstood the question. This is my point of view about a modular framework. Unfortunately, I don't know much about Zend, so I will give examples in C#:
It consists of modules, from smaller to larger ones. An example in C#: you can use Windows Forms (larger) in your application, and also the Graphics class (smaller) to draw custom shapes on the screen.
It is extensible or replaceable without changes to the base class. In C#, you can attach a handler to a Form's Load event (extensible), inherit from the Form or List class (extensible), or override the form's OnPaint method to create custom window graphics (replaceable).
(Optional) It is easy to use. In normal DI interface design, we usually inject smaller modules into a larger (higher-level) module. This requires an IoC container. Refer to my question for details.
It is easy to configure and does not involve any magical logic such as the Service Locator pattern. Search for "Service Locator is an anti-pattern" on Google.
I don't know much about Zend; however, I guess that modularity in Zend means that it can be extended without changing the core code inside the framework.
You said that:
if someone wants to use my Database Access module, they will have to take the Cache module as well, even if they will not use it, just for referencing/naming purposes.
Then it is not modular. It is integrated, meaning that your Database Access module will not work without the Cache. For reference, the C# class library chooses to offer List<T> and BindingList<T> to provide different functionality. In your case, IMHO it is better to provide CachedDataAccess and DataAccess, as sketched below.
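To make that concrete in C# (these type names are just illustrations of the idea, not framework types): the cached variant is a decorator that lives in an integration module, so consumers of the plain DataAccess never need to reference the cache module at all.

public interface IDataAccess
{
    int DoComplexOperation();
}

public interface ICacheManager
{
    bool TryGet(string key, out int value);
    void Put(string key, int value);
}

// Plain module: knows nothing about caching.
public class DataAccess : IDataAccess
{
    public int DoComplexOperation()
    {
        // run the expensive query and return its result
        return 42; // placeholder result
    }
}

// Integration module: the only place that references both modules.
public class CachedDataAccess : IDataAccess
{
    private readonly IDataAccess _inner;
    private readonly ICacheManager _cache;

    public CachedDataAccess(IDataAccess inner, ICacheManager cache)
    {
        _inner = inner;
        _cache = cache;
    }

    public int DoComplexOperation()
    {
        if (_cache.TryGet("complex-operation", out var cached))
            return cached;

        var result = _inner.DoComplexOperation();
        _cache.Put("complex-operation", result);
        return result;
    }
}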

What do you mean by "programming to interface" and "programming to implementation"?

In the Head First Design Patterns book, the author often says that one should program to an interface rather than an implementation.
What does that mean?
Let's illustrate it with the following code:
namespace ExperimentConsoleApp
{
    class Program
    {
        static void Main()
        {
            ILogger loggerA = new DatabaseLogger();
            ILogger loggerB = new FileLogger();

            loggerA.Log("My message");
            loggerB.Log("My message");
        }
    }

    public interface ILogger
    {
        void Log(string message);
    }

    public class DatabaseLogger : ILogger
    {
        public void Log(string message)
        {
            // Log to database
        }
    }

    public class FileLogger : ILogger
    {
        public void Log(string message)
        {
            // Log to file
        }
    }
}
Suppose you are the logger developer and the application developer needs a logger from you. You give the application developer your ILogger interface and tell him he can use it without having to worry about the implementation details.
After that, you start developing a FileLogger and a DatabaseLogger and make sure they follow the interface that you gave to the application developer.
The application developer is now developing against an interface, not an implementation. He doesn't know or care how the class is implemented; he only knows the interface. This reduces coupling in the code and gives you the ability to easily switch to another implementation (through configuration files, for example), as in the sketch below.
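A minimal sketch of the configuration-file idea (the appSettings key and the Activator-based wiring are my assumptions, not something from the book):

// using System;
// using System.Configuration;   (reference System.Configuration.dll)

public static class LoggerFactory
{
    public static ILogger Create()
    {
        // e.g. <add key="loggerType" value="ExperimentConsoleApp.FileLogger" /> in App.config
        var typeName = ConfigurationManager.AppSettings["loggerType"];
        var type = Type.GetType(typeName, throwOnError: true);
        return (ILogger)Activator.CreateInstance(type);
    }
}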
Worry more about what a class does rather than how it does it. The latter should be an implementation detail, encapsulated away from clients of your class.
If you start with an interface, you're free to inject a new implementation later without affecting clients. They only use references of the interface type.
It means that when working with a class, you should only program against the public interface and not make assumptions about how it was implemented, as it may change.
Normally this translates to using interfaces/abstract classes as variable types instead of concrete ones, allowing one to swap implementations if needed.
In the .NET world, one example is the use of the IEnumerable/IEnumerator interfaces: these allow you to iterate over a collection without worrying about how the collection is implemented.
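For example, a method that only asks for IEnumerable<int> works the same whether the caller passes an array, a List<int>, or a LINQ query (a trivial sketch):

// using System.Collections.Generic;

public static class Totals
{
    public static int Sum(IEnumerable<int> numbers)
    {
        var total = 0;
        foreach (var n in numbers)   // only the interface is used; the concrete collection doesn't matter
            total += n;
        return total;
    }
}

// Totals.Sum(new[] { 1, 2, 3 }) and Totals.Sum(new List<int> { 1, 2, 3 }) both work.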
It is all about coupling. Low coupling is a very important property of software architecture. The less you need to know about your dependency, the better.
Coupling can be measured by the number of assumptions you have to make in order to interact with or use your dependency (paraphrasing Martin Fowler here).
So when we use more generic types, we are more loosely coupled. We are, for example, decoupled from a particular implementation strategy of a collection: linked list, doubly linked list, array, tree, etc. Or, from the classic OO school: "what exact shape is it: rectangle, circle, triangle?", when we just want to depend on a shape (in old-school OO we apply polymorphism here).
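As a small illustration of that last point (purely illustrative types):

// using System;
// using System.Collections.Generic;
// using System.Linq;

public abstract class Shape
{
    public abstract double Area();
}

public class Circle : Shape
{
    private readonly double _radius;
    public Circle(double radius) { _radius = radius; }
    public override double Area() { return Math.PI * _radius * _radius; }
}

public class Rectangle : Shape
{
    private readonly double _width, _height;
    public Rectangle(double width, double height) { _width = width; _height = height; }
    public override double Area() { return _width * _height; }
}

public static class Geometry
{
    // Depends only on the Shape abstraction; never asks "what exact shape is it?"
    public static double TotalArea(IEnumerable<Shape> shapes)
    {
        return shapes.Sum(s => s.Area());
    }
}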

Repository pattern with Entity Framework 4

I used to use NHibernate with repository interfaces.
What is the proper way to use this pattern with EF?
How can I implement this repository interface in a RepositoryBase<T>?
public interface IRepository<T>
{
    T GetById(object id);
    void Save(T entity);
    T[] GetAll();
    void Delete(T entity);
}
For some reason, all of the examples given expose the collections as IQueryable or IEnumerable. EF4 has an interface for this very purpose: IObjectSet (or IDbSet if you're using the latest CTP).
Julie Lerman has a tremendous post on doing this, including creating a MockSet that implements IObjectSet, so you can do some disconnected unit testing:
http://thedatafarm.com/blog/data-access/agile-entity-framework-4-repository-part-6-mocks-amp-unit-tests/
It's not really a whole lot different than any other ORM. Here's an example: http://blogs.microsoft.co.il/blogs/gilf/archive/2010/01/20/using-repository-pattern-with-entity-framework.aspx
Have a look at the Entity Framework Repository & Unit of Work Template. There are some details here.
There are several approaches (most of them are quite similar and only differ slightly), so I would recommend doing some research and choosing which one suits you best.
With EF 4 it is possible to implement a generic repository by using ObjectSet<T>. Take a look at a few articles that might help:
http://devtalk.dk/2009/06/09/Entity+Framework+40+Beta+1+POCO+ObjectSet+Repository+And+UnitOfWork.aspx
http://www.forkcan.com/viewcode/166/Generic-Entity-Framework-40-Base-Repository
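A minimal sketch of such a repository (assuming EF 4 POCO entities, a single key property named "Id", and an ObjectContext passed in; error handling omitted):

// using System.Data;
// using System.Data.Objects;
// using System.Linq;

public class RepositoryBase<T> : IRepository<T> where T : class
{
    private readonly ObjectContext _context;
    private readonly ObjectSet<T> _objectSet;

    public RepositoryBase(ObjectContext context)
    {
        _context = context;
        _objectSet = context.CreateObjectSet<T>();
    }

    public T GetById(object id)
    {
        // Assumes a single key property called "Id".
        var entitySet = _context.DefaultContainerName + "." + _objectSet.EntitySet.Name;
        return (T)_context.GetObjectByKey(new EntityKey(entitySet, "Id", id));
    }

    public void Save(T entity)
    {
        _objectSet.AddObject(entity);
        _context.SaveChanges();
    }

    public T[] GetAll()
    {
        return _objectSet.ToArray();
    }

    public void Delete(T entity)
    {
        _objectSet.DeleteObject(entity);
        _context.SaveChanges();
    }
}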
You basically have your repositories talk to your object context. The only change I would make would be having GetAll return an IEnumerable instead. Something like:
public class SomeObjectRepo : IRepository<SomeObject>
{
    public SomeObject GetById(object id)
    {
        using (var context = new MyContext())
        {
            return context.SomeObjects.First(x => x.id.Equals(id));
        }
    }

    // etc...
}
This is my solution: http://www.necronet.org/archive/2010/04/10/generic-repository-for-entity-framework.aspx
I like it because it doesn't couple a repository instance to a specific object context instance, so with some DI framework I can have all my repositories be singletons.

How to manage IoC containers in tests?

I'm very new to testing and IoC containers and have two projects:
MySite.Website (MVC)
MySite.WebsiteTest
Currently I have an IoC container in my website. Should I create another IoC container for my tests, or is there a way to use the same one in both?
When you have an IoC container, hopefully you will also have some sort of dependency injection going on - whether through constructor or setter injection.
The point of a unit test is to test components in isolation, and doing DI goes a long way in aiding that. What you want to do is unit test each class by manually constructing it and passing it the required dependencies, not rely on the container to construct it.
The point of doing that is simple: you want to isolate the SUT (system under test) as much as possible. If your SUT relies on another class and on IoC to inject it, you are really testing three systems, not one.
Take the following example:
public class ApiController : ControllerBase {
    IRequestParser m_Parser;

    public ApiController(IRequestParser parser) {
        m_Parser = parser;
    }

    public ActionResult Posts(string request) {
        var postId = m_Parser.GetPostId(request);
        // ... build a result from postId
        return new EmptyResult();
    }
}
The ApiController constructor takes its dependency as a parameter and will be invoked by the IoC container at runtime. During a test, however, you want to mock the IRequestParser interface and construct the controller manually.
[Test]
public void PostsShouldCallGetPostId() {
    // use NMock for mocking
    var requestParser = m_Mocks.NewMock<IRequestParser>();

    // Set up an expectation that the Posts action calls GetPostId on IRequestParser
    Expect.Once.On(requestParser).Method("GetPostId").With("posts/12").Will(Return.Value(0));

    var controller = new ApiController(requestParser);
    controller.Posts("posts/12");
}
Unit testing is about testing the real implementation, so you normally should not use IoC in your unit tests. If you really feel you need it (one component depending on another), use an interface to isolate the interaction and use a mocking library (Moq is good) to mock it in the test.
The only case where I see an IoC container being necessary for testing is integration testing.
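For example, a Moq version of the earlier test (same hypothetical ApiController and IRequestParser types) needs no container at all:

// using Moq;
// using NUnit.Framework;

[TestFixture]
public class ApiControllerTests {
    [Test]
    public void PostsShouldCallGetPostId() {
        var requestParser = new Mock<IRequestParser>();
        requestParser.Setup(p => p.GetPostId("posts/12")).Returns(0);

        var controller = new ApiController(requestParser.Object);
        controller.Posts("posts/12");

        requestParser.Verify(p => p.GetPostId("posts/12"), Times.Once());
    }
}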