Ninject and NHibernate in a plugin-oriented architecture - nhibernate

According to the Composition Root pattern, I must construct the whole dependency graph as close as possible to the application's entry point.
My architecture is plugin oriented, so anyone who wants to extend my base system can do so.
For example, in my base system I have this structure:
View Layer
Services Layer
Data Access Layer
Model Layer
In the DAL, I expose some classes like:
IRepository
NHibernateRepository
ProductRepository
So, if a plugin wants to, it should be able to extend my base Product class to ExtendedProduct and then create an ExtendedProductRepository that inherits from NHibernateRepository.
The question is:
How can I instantiate an NHibernateRepository from my base system using Ninject?
I know the first thing to do is to construct the dependency graph:
using (var kernel = new StandardKernel())
{
kernel.Bind(b => b.FromAssembliesMatching("*")
.SelectAllClasses()
.InheritedFrom<IRepository>()
.BindAllInterfaces());
}
However, I'm finding that when I execute something like:
kernel.GetAll<IRepository>()
It returns a ProductRepository instance and an ExtendedProductRepository instance, both as IRepository objects.
So, how can I save an ExtendedProduct object from my base system?
Another question would be: how can I inject an object instance into my plugins, or how can plugins automatically get instances from the base system's assembly injected?
Thanks for all.
I'd really appreciate some help.

I use this pattern for my NHibernate based projects:
public interface IRepository<T> : IQueryable<T>
{
T Get(int id);
void Save(T item);
void Delete(T item);
}
public class NHibernateRepository<ModelType> : IRepository<ModelType>
where ModelType : class
{
// implementation
}
then...
public interface IProductRepository : IRepository<Product>
{
// product specific data access methods
}
public class ProductRepository : NHibernateRepository<Product>, IProductRepository
{
// implementation
}
... and in a Ninject module:
Bind(typeof(IRepository<>)).To(typeof(NHibernateRepository<>));
Bind<IProductRepository>().To<ProductRepository>();
then you can either request the base functionality like:
public Constructor(IRepository<Product> repo) { ... }
or specific product repository functionality:
public Constructor(IProductRepository repo) { ... }
your plugins can either get the base functionality and won't have to register anything:
public PluginConstructor(IRepository<ProductExtended> repo) { ... }
or create their own repositories and register them in a Ninject module.
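For instance, a plugin module might look roughly like this (a sketch only; the Extended* names are placeholders for whatever the plugin defines):
public class ExtendedProductPluginModule : Ninject.Modules.NinjectModule
{
    public override void Load()
    {
        // Only needed if the base system hasn't already bound the open generic IRepository<>.
        Bind<IRepository<ExtendedProduct>>().To<NHibernateRepository<ExtendedProduct>>();

        // Plugin-specific repository with extra methods.
        Bind<IExtendedProductRepository>().To<ExtendedProductRepository>();
    }
}
The base system can then pick such modules up with kernel.Load("*.dll") (or an equivalent module-loading call) without referencing the plugin assembly at compile time.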

Thanks Dave.
It's perfect. I'll try it.
However, how could I save, get, or update (any of the IRepository methods) an ExtendedProduct instance from my base system?
Consider the following:
public interface BasePlugin<T> {...}
In another assembly:
public class PluginExtendedProduct : BasePlugin<ExtendedProduct>
{
public PluginExtendedProduct(IRepository<ExtendedProduct> repo) { ... }
}
My headache is how to create an instance of PluginExtendedProduct (and so of ExtendedProduct) in my base system in order to call the PluginExtendedProduct methods that use an IRepository.
I don't know if I'm explaining myself well...
Thanks for all.
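A rough sketch of one way this is sometimes handled (this is an assumption, not part of the answer above): give BasePlugin<T> a non-generic base interface that the base system knows about, bind the plugin classes by convention, and let Ninject supply the IRepository<ExtendedProduct> underneath.
// Shared contract assembly (known to both the base system and the plugins).
// IPlugin is a hypothetical non-generic marker so the base system can resolve
// plugins without referencing plugin-only types such as ExtendedProduct.
public interface IPlugin
{
    void Execute();
}

public interface BasePlugin<T> : IPlugin { /* ... */ }

// Composition root in the base system:
kernel.Bind(x => x.FromAssembliesMatching("*")
                  .SelectAllClasses()
                  .InheritedFrom<IPlugin>()
                  .BindAllInterfaces());

// The base system never mentions ExtendedProduct; Ninject builds each plugin
// (and its IRepository<ExtendedProduct> dependency) behind the IPlugin interface.
foreach (var plugin in kernel.GetAll<IPlugin>())
{
    plugin.Execute();
}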

Related

Autofac Register closed types and retrieve them at run time

I have an interface that takes a generic type T:
internal interface IQuestion<T> where T : IWithOptionsId
{
Task<T> Provide(Guid id);
}
Following that, I implement this interface in multiple classes. For example:
public class SomeProvider : IQuestion<OptionsClass>
{
private readonly IRepository _repository;
public SomeProvider(IRepository repository)
{
_repository = repository;
}
public async Task<OptionsClass> Provide(Guid id)
...
}
To register this with Autofac I used this:
builder.RegisterAssemblyTypes(
Assembly.GetExecutingAssembly())
.AsImplementedInterfaces()
.AsClosedTypesOf(typeof(IQuestion<>));
My question is this: I have multiple implementations of this interface. How do I access a specific one at run time? If IQuestion<T> is closed over the Options class in one place and over the Answer class in another, how can I get an instance of each of those at run time?
I'm pretty sure you can just inject the instance itself. Not great practice, but it should work:
public SomeClass(SomeProvider provider)
You could also try creating a named instance when you register it, and inject that. See this and this.
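Another option consistent with the AsClosedTypesOf registration above is to depend on the closed interfaces directly; each closed IQuestion<T> resolves to its own implementation (AnswerProvider/AnswerClass are hypothetical names for a second implementation):
public class ConsumesQuestions
{
    private readonly IQuestion<OptionsClass> _options;
    private readonly IQuestion<AnswerClass> _answers;

    // Autofac supplies SomeProvider for IQuestion<OptionsClass> and the
    // hypothetical AnswerProvider for IQuestion<AnswerClass>.
    public ConsumesQuestions(IQuestion<OptionsClass> options, IQuestion<AnswerClass> answers)
    {
        _options = options;
        _answers = answers;
    }
}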

Do we need interfaces for dependency injection?

I have an ASP.NET Core application. The application has a few helper classes that do some work. Each class has methods with different signatures. I see a lot of .NET Core examples online that create an interface for each class and then register the types with the DI framework. For example:
public interface IStorage
{
Task Download(string file);
}
public class Storage : IStorage
{
public Task Download(string file)
{
}
}
public interface IOcr
{
Task Process();
}
public class Ocr : IOcr
{
public Task Process()
{
}
}
Basically, for each interface there is only one class. Then I register these types with DI as:
services.AddScoped<IStorage, Storage>();
services.AddScoped<IOcr,Ocr>();
But I can register a type without having an interface, so the interfaces here look redundant, e.g.:
services.AddScoped<Storage>();
services.AddScoped<Ocr>();
So do I really need interfaces?
No, you don't need interfaces for dependency injection. But dependency injection is much more useful with them!
As you noticed, you can register concrete types with the service collection and ASP.NET Core will inject them into your classes without problems. The benefit you get by injecting them over simply creating instances with new Storage() is service lifetime management (transient vs. scoped vs. singleton).
That's useful, but only part of the power of using DI. As @DavidG pointed out, the big reason why interfaces are so often paired with DI is testing. Making your consumer classes depend on interfaces (abstractions) instead of other concrete classes makes them much easier to test.
For example, you could create a MockStorage that implements IStorage for use during testing, and your consumer class shouldn't be able to tell the difference. Or, you can use a mocking framework to easily create a mocked IStorage on the fly. Doing the same thing with concrete classes is much harder. Interfaces make it easy to replace implementations without changing the abstraction.
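A minimal sketch of such a hand-rolled fake (DocumentProcessor is a hypothetical consumer, not a class from the question):
using System.Collections.Generic;
using System.Threading.Tasks;

// Test double that records calls instead of hitting real storage.
public class MockStorage : IStorage
{
    public List<string> DownloadedFiles { get; } = new List<string>();

    public Task Download(string file)
    {
        DownloadedFiles.Add(file);
        return Task.CompletedTask;
    }
}

// In a test, the consumer only sees IStorage:
// var storage = new MockStorage();
// var sut = new DocumentProcessor(storage);
// ... act, then assert on storage.DownloadedFiles ...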
Does it work? Yes. Should you do it? No.
Dependency Injection is a tool for the Dependency Inversion Principle: https://en.wikipedia.org/wiki/Dependency_inversion_principle
Or, as it's described in SOLID,
one should "depend upon abstractions, [not] concretions."
You can just inject concrete classes all over the place and it will work. But it's not what DI was designed to achieve.
No, we don't need interfaces. In addition to injecting classes or interfaces you can also inject delegates. It's comparable to injecting an interface with one method.
Example:
public delegate int DoMathFunction(int value1, int value2);
public class DependsOnMathFunction
{
private readonly DoMathFunction _doMath;
public DependsOnMathFunction(DoMathFunction doMath)
{
_doMath = doMath;
}
public int DoSomethingWithNumbers(int number1, int number2)
{
return _doMath(number1, number2);
}
}
You could do it without declaring a delegate, just injecting a Func<Something, Whatever>, and that will also work. I'd lean toward the delegate because it makes the DI registration less ambiguous: you might have two delegates with the same signature that serve unrelated purposes, and a named delegate type keeps them distinct.
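For example, the registration with the built-in ASP.NET Core container might look roughly like this (the addition lambda is just an arbitrary implementation):
// Register a concrete implementation of the delegate as the service itself.
DoMathFunction add = (value1, value2) => value1 + value2;
services.AddSingleton(add);

// Any class asking for DoMathFunction in its constructor now receives this lambda.
services.AddTransient<DependsOnMathFunction>();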
One benefit to this is that it steers the code toward interface segregation. Someone might be tempted to add a method to an interface (and its implementation) because it's already getting injected somewhere so it's convenient.
That means
The interface and implementation gain responsibility they possibly shouldn't have just because it's convenient for someone in the moment.
The class that depends on the interface can also grow in its responsibility but it's harder to identify because the number of its dependencies hasn't grown.
Other classes end up depending on the bloated, less-segregated interface.
I've seen cases where a single dependency eventually grows into what should really be two or three entirely separate classes, all because it was convenient to add to an existing interface and class instead of injecting something new. That in turn helped some classes on their way to becoming 2,500 lines long.
You can't prevent someone doing what they shouldn't. You can't stop someone from just making a class depend on 10 different delegates. But it can set a pattern that guides future growth in the right direction and provides some resistance to growing interfaces and classes out of control.
(This doesn't mean don't use interfaces. It means that you have options.)
I won't try to cover what others have already mentioned; using interfaces with DI will often be the best option. But it's worth mentioning that object inheritance may at times provide another useful option. So for example:
public class Storage
{
public virtual Task Download(string file)
{
}
}
public class DiskStorage: Storage
{
public override Task Download(string file)
{
}
}
and registering it like so:
services.AddScoped<Storage, DiskStorage>();
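Consumers then depend on the base class and the container supplies the derived implementation (FileService here is a hypothetical consumer):
using System.Threading.Tasks;

public class FileService
{
    private readonly Storage _storage;

    // At runtime the container injects DiskStorage for the Storage parameter.
    public FileService(Storage storage)
    {
        _storage = storage;
    }

    public Task DownloadReport() => _storage.Download("report.pdf");
}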
Without Interface
public class Benefits
{
public void BenefitForTeacher() { }
public void BenefitForStudent() { }
}
public class Teacher : Benefits
{
private readonly Benefits BT;
public Teacher(Benefits _BT)
{ BT = _BT; }
public void TeacherBenefit()
{
base.BenefitForTeacher();
base.BenefitForStudent();
}
}
public class Student : Benefits
{
private readonly Benefits BS;
public Student(Benefits _BS)
{ BS = _BS; }
public void StudentBenefit()
{
base.BenefitForTeacher();
base.BenefitForStudent();
}
}
Here you can see that the teacher benefits are accessible in the Student class and the student benefits are accessible in the Teacher class, which is wrong.
Let's see how we can resolve this problem using interfaces.
With Interface
public interface IBenefitForTeacher
{
void BenefitForTeacher();
}
public interface IBenefitForStudent
{
void BenefitForStudent();
}
public class Benefits : IBenefitForTeacher, IBenefitForStudent
{
public Benefits() { }
public void BenefitForTeacher() { }
public void BenefitForStudent() { }
}
public class Teacher : IBenefitForTeacher
{
private readonly IBenefitForTeacher BT;
public Teacher(IBenefitForTeacher _BT)
{ BT = _BT; }
public void BenefitForTeacher()
{
BT.BenefitForTeacher();
}
}
public class Student : IBenefitForStudent
{
private readonly IBenefitForStudent BS;
public Student(IBenefitForStudent _BS)
{ BS = _BS; }
public void BenefitForStudent()
{
BS.BenefitForStudent();
}
}
Here you can see there is no way to call the teacher benefits in the Student class or the student benefits in the Teacher class.
So the interface is used here as an abstraction layer.
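For completeness, a sketch of how both role interfaces could be registered against the same implementation with the ASP.NET Core container (each consumer still only sees its own interface):
// Both abstractions are backed by the single Benefits class.
services.AddScoped<IBenefitForTeacher, Benefits>();
services.AddScoped<IBenefitForStudent, Benefits>();
services.AddScoped<Teacher>();
services.AddScoped<Student>();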

Repository OO Design - Multiple Specifications

I have a pretty standard repository interface:
public interface IRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
{
TDomainEntity Find(Guid id);
void Add(TDomainEntity entity);
void Update(TDomainEntity entity);
}
We can use various infrastructure implementations in order to provide default functionality (e.g. Entity Framework, DocumentDb, Table Storage, etc.). This is what the Entity Framework implementation looks like (without any actual EF code, for simplicity's sake):
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
public TDomainEntity Find(Guid id)
{
// Find, map and return entity using Entity Framework
}
public void Add(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
// Insert entity using Entity Framework
}
public void Update(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
// Update entity using Entity Framework
}
}
There is a mapping between the TDomainEntity domain entity (aggregate) and the TDataEntity Entity Framework data entity (database table). I will not go into detail as to why there are separate domain and data entities. This is a philosophy of Domain Driven Design (read about aggregates). What's important to understand here is that the repository will only ever expose the domain entity.
To make a new repository for, let's say, "users", I could define the interface like this:
public interface IUserRepository : IRepository<User>
{
// I can add more methods over and above those in IRepository
}
And then use the Entity Framework implementation to provide the basic Find, Add and Update functionality for the aggregate:
public class UserRepository : EntityFrameworkRepository<User, UserEntity>, IUserRepository
{
// I can implement more methods over and above those in IUserRepository
}
The above solution has worked great. But now we want to implement deletion functionality. I have proposed the following interface (which is an IRepository):
public interface IDeleteableRepository<TDomainEntity>
: IRepository<TDomainEntity>
{
void Delete(TDomainEntity item);
}
The Entity Framework implementation class would now look something like this:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity, IDeleteableDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
// Find(), Add() and Update() ...
public void Delete(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
entity.IsDeleted = true;
entity.DeletedDate = DateTime.UtcNow;
// Update entity using Entity Framework
// ...
}
}
As defined in the class above, the TDataEntity generic now also needs to be of type IDeleteableDataEntity, which requires the following properties:
public interface IDeleteableDataEntity
{
bool IsDeleted { get; set; }
DateTime DeletedDate { get; set; }
}
These properties are set accordingly in the Delete() implementation.
This means that, IF required, I can define IUserRepository with "deletion" capabilities which would inherently be taken care of by the relevant implementation:
public interface IUserRepository : IDeleteableRepository<User>
{
}
Provided that the relevant Entity Framework data entity is an IDeleteableDataEntity, this would not be an issue.
The great thing about this design is that I can start granularising the repository model even further (IUpdateableRepository, IFindableRepository, IDeleteableRepository, IInsertableRepository), and aggregate repositories can now expose only the relevant functionality as per our specification (perhaps you should be allowed to insert into a UserRepository but NOT into a ClientRepository). Further to this, it specifies a standardised way in which certain repository actions are done (i.e. the updating of the IsDeleted and DeletedDate columns will be universal and is not left in the hands of the developer).
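For illustration, the granular split described above could look roughly like this (a sketch, not the exact interfaces from the project):
public interface IFindableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    TDomainEntity Find(Guid id);
}

public interface IInsertableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    void Add(TDomainEntity entity);
}

// A user repository could then include IInsertableRepository<User>,
// while a client repository simply leaves it out.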
PROBLEM
A problem with the above design arises when I want to create a repository for some aggregate WITHOUT deletion capabilities, e.g:
public interface IClientRepository : IRepository<Client>
{
}
The EntityFrameworkRepository implementation still requires TDataEntity to be of type IDeleteableDataEntity.
I can ensure that the client data entity model does implement IDeleteableDataEntity, but this is misleading and incorrect. There will be additional fields that are never updated.
The only solution I can think of is to remove the IDeleteableDataEntity generic condition from TDataEntity and then cast to the relevant type in the Delete() method:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
where TDataEntity : class, IDataEntity
{
protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }
// Find() and Update() ...
public void Delete(TDomainEntity item)
{
var entity = EntityMapper.CreateFrom(item);
var deleteableEntity = entity as IDeleteableDataEntity;
if (deleteableEntity != null)
{
// the cast references the same object, so these updates are visible through entity
deleteableEntity.IsDeleted = true;
deleteableEntity.DeletedDate = DateTime.UtcNow;
}
// Update entity using Entity Framework
// ...
}
}
Because ClientRepository does not implement IDeleteableRepository, there will be no Delete() method exposed, which is good.
QUESTION
Can anyone advise a better architecture that leverages the C# type system and does not involve the hacky cast?
Interestingly enough, I could do this if C# supported multiple inheritance (with separate concrete implementations for finding, adding, deleting, and updating).
I do think that you're complicating things a bit too much by trying to get the most generic solution of them all; however, I think there's a pretty easy solution to your current problem.
TDataEntity is a persistence data structure; it has no domain value and it's not known outside the persistence layer. So it can have fields it will never use; the repository is the only one that knows about them, and that's a persistence detail. You can afford to be 'sloppy' here; things aren't that important at this level.
Even the 'hacky' cast is a good solution because it's in one place and a private detail.
It's good to have clean and maintainable code everywhere, however we can't afford to waste time coming up with 'perfect' solutions at every layer. Personally, for view and persistence models I prefer the quickest and simplest solutions even if they're a bit smelly.
P.S.: As a rule of thumb, generic repository interfaces are good; generic abstract repositories not so much (you need to be careful), unless you're serializing things or using a document db.

Using Ninject to bind an interface to multiple implementations unknown at compile time

I just recently started using Ninject (v2.2.0.0) in my ASP.NET MVC 3 application. So far I'm thrilled with it, but I ran into a situation I can't seem to figure out.
What I'd like to do is bind an interface to concrete implementations and have Ninject be able to inject the concrete implementation into a constructor using a factory (that will also be registered with Ninject). The problem is that I'd like my constructor to reference the concrete type, not the interface.
Here is an example:
public class SomeInterfaceFactory<T> where T: ISomeInterface, new()
{
public T CreateInstance()
{
// Activation and initialization logic here
}
}
public interface ISomeInterface
{
}
public class SomeImplementationA : ISomeInterface
{
public string PropertyA { get; set; }
}
public class SomeImplementationB : ISomeInterface
{
public string PropertyB { get; set; }
}
public class Foo
{
public Foo(SomeImplementationA implA)
{
Console.WriteLine(implA.PropertyA);
}
}
public class Bar
{
public Bar(SomeImplementationB implB)
{
Console.WriteLine(implB.PropertyB);
}
}
Elsewhere, I'd like to bind using just the interface:
kernel.Bind<Foo>().ToSelf();
kernel.Bind<Bar>().ToSelf();
kernel.Bind(typeof(SomeInterfaceFactory<>)).ToSelf();
kernel.Bind<ISomeInterface>().To ...something that will create and use the factory
Then, when requesting an instance of Foo from Ninject, it would see that one of the constructor's parameters implements a bound interface, fetch the factory, instantiate the correct concrete type (SomeImplementationA), and pass it to Foo's constructor.
The reason behind this is that I will have many implementations of ISomeInterface and I'd prefer to avoid having to bind each one individually. Some of these implementations may not be known at compile time.
I tried using:
kernel.Bind<ISomeInterface>().ToProvider<SomeProvider>();
The provider retrieves the factory based on the requested service type then calls its CreateInstance method, returning the concrete type:
public class SomeProvider : Provider<ISomeInterface>
{
protected override ISomeInterface CreateInstance(IContext context)
{
var factory = context.Kernel.Get(typeof(SomeInterfaceFactory<>)
.MakeGenericType(context.Request.Service));
var method = factory.GetType().GetMethod("CreateInstance");
return (ISomeInterface)method.Invoke(factory, null);
}
}
However, my provider was never invoked.
I'm curious if Ninject can support this situation and, if so, how I might go about solving this problem.
I hope this is enough information to explain my situation. Please let me know if I should elaborate further.
Thank you!
It seems you have misunderstood how Ninject works. When you request a Foo, Ninject sees that it requires a SomeImplementationA and will try to create an instance of it. So you need to define a binding for SomeImplementationA, not for ISomeInterface.
Also, most likely your implementation breaks the Dependency Inversion Principle, because you rely upon concrete instances instead of abstractions.
The way to register all similar types at once (and the preferred way to configure IoC containers) is to use configuration by convention. See the Ninject.Extensions.Conventions extension.
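A minimal sketch with that extension, binding every ISomeInterface implementation to itself so the concrete constructor parameters of Foo and Bar can be resolved:
// Self-bindings for SomeImplementationA, SomeImplementationB, and any
// implementations added later, without listing them one by one.
kernel.Bind(x => x.FromThisAssembly()
                  .SelectAllClasses()
                  .InheritedFrom<ISomeInterface>()
                  .BindToSelf());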

NHibernate ITransaction and pure domain model

I'm trying to keep my domain model as persistence-ignorant as possible. The only thing I'm doing right now is marking every property and method virtual, as NHibernate requires that for lazy loading.
In my domain model assembly I define some repository interfaces:
public interface IRepository<TEntity> where TEntity : EntityBase {
TEntity Get(int id);
/* ... */
}
public interface IProductRepository : IRepository<Product> { ... }
Then I have a data assembly. This one references NHibernate; it knows about its existence. This is the assembly that implements those repository interfaces:
public abstract class Repository<TEntity> : IRepository<TEntity> where TEntity : EntityBase {
public TEntity Get(int id) { ... }
/* ... */
}
public class ProductRepository : Repository<Product>, IProductRepository {
/* ... */
}
and so on.
Now I want to add transaction functionality to my repositories. To do so, I would add a BeginTransaction method to my IRepository interface. However, I cannot define its return type as NHibernate.ITransaction, since I want to keep the domain model persistence-ignorant and not be forced to reference NHibernate's assembly from my domain model assembly.
What would you do?
Would you simply implement void BeginTransaction(), void Commit(), and void Rollback() methods on the interface, and let the repository implementation manage the ITransaction object internally?
Or would you find a way to expose the ITransaction object and let the client manage the transaction directly with it, instead of using the repository's methods?
Thanks!
You can take a look at Sharp Architecture, which has already implemented everything you're talking about, including generic repositories with transaction support. The solution there is that IRepository has a DbContext property which encapsulates transactions (it's actually an interface).
This is the first of the options that you described (a custom transaction interface which hides NHibernate). And it works well.
I guess you can even re-use the S#arp code regardless of whether you intend to use the full framework.
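A rough sketch of that first option, with IUnitOfWork as an illustrative name (not taken from S#arp Architecture): the domain assembly only sees the interface, and the data assembly wraps NHibernate.ITransaction behind it.
// Domain assembly: persistence-ignorant contract.
public interface IUnitOfWork : IDisposable
{
    void BeginTransaction();
    void Commit();
    void Rollback();
}

// Data assembly: the only place that references NHibernate.
public class NHibernateUnitOfWork : IUnitOfWork
{
    private readonly ISession _session;
    private ITransaction _transaction;

    public NHibernateUnitOfWork(ISession session)
    {
        _session = session;
    }

    public void BeginTransaction() { _transaction = _session.BeginTransaction(); }
    public void Commit() { _transaction.Commit(); }
    public void Rollback() { _transaction.Rollback(); }
    public void Dispose() { _transaction?.Dispose(); }
}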
IMO transactions should always start and end in the business logic; in other words, the transaction should start in the service layer, not the repository layer, and the repository should enlist itself in the transaction, ideally implicitly.
Now, if you're using NH and your service and repositories share the same session (which they should), then you can call BeginTransaction in the service layer and commit or roll back as required.
E.g., imagine this is a method on a service:
public void RegisterCustomer(Customer customer)
{
try
{
using(var transaction = _session.BeginTransaction())
{
_customerRepository.Save(customer);
_customerSurveyRepository.Save(customerSurvey);
// DO What ever else you want...
transaction.Commit();
}
}
catch (Exception exn)
{
throw new AMException(FAILED_REGISTRATION, exn);
}
}
How the repositories obtain a reference to the same session can be solved by injecting it into their constructors or by using the SessionFactory to obtain the current session...
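A hedged sketch of the constructor-injection variant (Ninject binding shown only for illustration; CustomerRepository/ICustomerRepository are assumed to match the repositories used above, and InRequestScope assumes the Ninject.Web.Common extension):
public class CustomerRepository : ICustomerRepository
{
    private readonly ISession _session;

    // The same ISession instance is injected into the service and its repositories.
    public CustomerRepository(ISession session)
    {
        _session = session;
    }

    public void Save(Customer customer)
    {
        _session.SaveOrUpdate(customer);
    }
}

// Composition root: one NHibernate session per web request, built from the factory.
kernel.Bind<ISession>()
      .ToMethod(ctx => ctx.Kernel.Get<ISessionFactory>().OpenSession())
      .InRequestScope();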