What is the correct way to use Unit of Work/Repositories within the business layer?

Having built a small application using the Unit of Work/Repository pattern, I am struggling to understand how to use this properly within my business layer. My application has a data access layer which can be either NHibernate or Entity Framework. I can switch between these easily.
I have a number of repositories, for example, Customer, Order etc. My unit of work will be either an ISession or an ObjectContext depending on which DAL I want to test with.
My business layer contains a single business method - CreateOrder(). What I am struggling to understand is where in the business layer I should be initialising my unit of work and my repositories.
Focusing on NHibernate, my DAL looks like this:
public class NHibernateDAL : IUnitOfWork
{
    log4net.ILog log = log4net.LogManager.GetLogger(typeof(NHibernateDAL));
    ISession context;

    public NHibernateDAL()
    {
        context = SessionProvider.OpenSession();
        this.Context.BeginTransaction();
        CurrentSessionContext.Bind(context);
    }

    public ISession Context
    {
        get { return context; }
    }

    public void Commit()
    {
        this.Context.Transaction.Commit();
        context.Close();
    }

    public void Dispose()
    {
        ISession session = CurrentSessionContext.Unbind(SessionProvider.SessionFactory);
        session.Close();
    }
}
Within my business layer, I want to know where I should be declaring my Unit of Work and repositories. Are they declared at class level or within the CreateOrder method?
For example:
public class BusinessLogic
{
    UnitOfWork _unitOfWork = new UnitOfWork(NHibernateDAL);
    NhRepository<Order> _orderRepository = new NhRepository<Order>(_unitOfWork);
    NhRepository<Customer> _customerRepository = new NhRepository<Customer>(_unitOfWork);
    ....

    public void CreateOrder(.....)
    {
        Order order = new Order();
        _orderRepository.Add(order);
        _unitOfWork.Commit();
    }
}
The above code works only the first time the CreateOrder() method is called, but not for subsequent calls, because the session is closed. I have tried removing the context.Close() call after committing the transaction, but this also fails. Although the above approach doesn't work, it seems more correct to me to declare my repositories and unit of work at this scope.
However, if I implement it as below instead, it works fine, but it seems unnatural to declare the repositories and unit of work within the scope of the method itself. If I had a tonne of business methods then I would be declaring repositories and units of work all over the place:
public class BusinessLogic
{
    public void CreateOrder(.....)
    {
        UnitOfWork _unitOfWork = new UnitOfWork(NHibernateDAL);
        var _orderRepository = new NhRepository<Order>(_unitOfWork);
        NhRepository<Customer> _customerRepository = null;

        Order order = new Order();
        _orderRepository.Add(order);
        _unitOfWork.Commit();
    }
}
If I were to implement this with class level declaration then I think I would need some means of re-opening the same unit of work at the start of the CreateOrder method.
What is the correct way to use the unit of work and repositories within the business layer?

Looks to me like you've almost got it. In our new server stack I have this setup:
WCF Service Layer --> just returns results from my Business Layer
My business layer is called, creates a unit of work, creates the repository
Calls the repository function
Uses AutoMapper to move returned results into a DTO
My repository gets the query results and populates a composite object.
Looks almost like what you've got there. Though we use Unity to locate what you call the business layer. (we just call it our function processor)
What I would highly suggest, though, is that you do NOT keep the UnitOfWork at the class level. After all, each discrete function is a unit of work. So mine is like this (the names have been changed to protect the innocent):
using ( UnitOfWorkScope scope = new UnitOfWorkScope( TransactionMode.Default ) )
{
    ProcessRepository repository = new ProcessRepository( );
    CompositionResultSet result = repository.Get( key );
    scope.Commit( );
    MapData( );
    return AutoMapper.Mapper.Map<ProcessSetDTO>( result );
}
We also had a long discussion on when to do a scope.Commit(), and while it isn't needed for queries, it establishes a consistent pattern for every function in the application layer. BTW, we are using NCommon for our repository/unit of work patterns and do not have to pass the UoW to the repository.
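Applied to the original CreateOrder() example, a per-method unit of work might look like the sketch below. It reuses the asker's NHibernateDAL and NhRepository<T> types and assumes IUnitOfWork is IDisposable, so treat the exact constructors as assumptions:
public class BusinessLogic
{
    public void CreateOrder(/* order details */)
    {
        // each business method owns one unit of work for exactly one transaction
        using (IUnitOfWork unitOfWork = new NHibernateDAL())
        {
            var orderRepository = new NhRepository<Order>(unitOfWork);

            var order = new Order();
            orderRepository.Add(order);

            unitOfWork.Commit();
        } // Dispose closes the session, so the next call starts with a fresh one
    }
}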

Your IUnitOfWork implementation contains all repositories.
Your IUnitOfWork is injected into your presentation layer, e.g. into an MVC controller.
Your IRepository implementations are injected into your UnitOfWork implementation.
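As a rough sketch of that wiring (the IRepository<T>, Order and Customer types and the controller are illustrative, not taken from the question):
public interface IUnitOfWork : IDisposable
{
    IRepository<Order> Orders { get; }
    IRepository<Customer> Customers { get; }
    void Commit();
}

public class OrdersController : Controller
{
    private readonly IUnitOfWork _unitOfWork;

    // the DI container supplies the unit of work (and, through it, the repositories)
    public OrdersController(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public ActionResult Create()
    {
        _unitOfWork.Orders.Add(new Order());
        _unitOfWork.Commit();
        return RedirectToAction("Index");
    }
}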

Related

When to instantiate the repository and what is its lifespan?

In DDD, it is the application layer that uses the repository to get the data from the database, calls the methods of the domain and then calls the repository to persist the data. Something like this:
public void MyApplicationService()
{
    Order myOrder = _orderRepository.Get(1);
    myOrder.Update(data);
    _orderRepository.Commit();
}
In this example the repository is a class variable that is instantiated in the constructor of the service, so its lifetime is the lifetime of the class.
But I am wondering whether it wouldn't be better to instantiate a repository for each action that I want to perform, so it has a shorter life; otherwise, if I use the class for many actions, the repository will hold many entities that it perhaps no longer needs.
So I was thinking of a solution like this:
public void MyApplicationService()
{
    OrderRepository myOrderRepository = new OrderRepository(_options);
    Order myOrder = myOrderRepository.GetOrder(1);
    myOrder.Update(data);
    myOrderRepository.Commit();
    myOrderRepository.Dispose();
}
So, a new instance each time I need to perform the action.
In summary, I would like to know about the different solutions and their advantages and disadvantages, in order to decide the lifespan of the repository.
Thanks.
The recommended lifespan of the repository is one business transaction.
Your second snippet of code is correct in that respect; however, it has one drawback: you have created a strong dependency between the ApplicationService and OrderRepository classes. With your code, you are not able to isolate both classes in order to unit test them separately. You also need to update the ApplicationService class whenever you change the constructor of OrderRepository. And if OrderRepository requires parameters to construct, you have to construct them yourself (which implies referencing their types and base types), even though they are an implementation detail of OrderRepository (needed for access to the data persistence store) and not needed by your application service layer.
For these reasons, most modern program development relies on a pattern called Dependency Injection (DI). With DI, you specify that your ApplicationService class depends on an instance of the OrderRepository class, or better, on an interface IOrderRepository which the OrderRepository class implements. The dependency is declared by adding a parameter to the ApplicationService constructor:
public interface IOrderRepository : IDisposable
{
    Order GetOrder(int id);
    void Commit();
}

public class ApplicationService
{
    private readonly IOrderRepository orderRepository;

    public ApplicationService(IOrderRepository orderRepository)
    {
        this.orderRepository = orderRepository ?? throw new ArgumentNullException(nameof(orderRepository));
    }

    public void Update(int id, string data)
    {
        Order myOrder = orderRepository.GetOrder(id);
        myOrder.Update(data);
        orderRepository.Commit();
    }
}
Now the DI library is responsible for constructing OrderRepository and injecting the instance into the ApplicationService class. If OrderRepository has its own dependencies, the library will resolve them first and construct the whole object graph, so you don't have to do that yourself. You simply need to tell your DI library which specific implementation you want for each referenced interface. For example, in C#:
public IServiceCollection AddServices(IServiceCollection services)
{
    return services.AddScoped<IOrderRepository, OrderRepository>();
}
When unit testing your code, you can replace the actual implementation of OrderRepository with a mock object, such as Mock<IOrderRepository> or your own MockOrderRepository implementation. The code under test is then exactly the code in production, all wiring being done by the DI framework.
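For instance, a test using Moq and xUnit might look like the sketch below (the data values are arbitrary, and it assumes Order can be constructed and updated directly):
using Moq;
using Xunit;

public class ApplicationServiceTests
{
    [Fact]
    public void Update_LoadsTheOrder_UpdatesIt_AndCommits()
    {
        // arrange: stand in for the real repository with a mock
        var order = new Order();
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.GetOrder(1)).Returns(order);

        var service = new ApplicationService(repository.Object);

        // act
        service.Update(1, "new data");

        // assert: the service talked to the repository as expected
        repository.Verify(r => r.GetOrder(1), Times.Once);
        repository.Verify(r => r.Commit(), Times.Once);
    }
}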
Most modern DI libraries have support for object lifetime management, including transient (always resolve a new object), singleton (always reuse the same object), and scoped (each scope has a single instance). The latter is what is used to isolate object instances per business transaction, using a singleton ScopeFactory to create a scope whenever you start a business transaction:
public class UpdateOrderUseCase : UseCase
{
    private readonly IScopeFactory scopeFactory;

    public UpdateOrderUseCase(IScopeFactory scopeFactory) // redacted

    public void UpdateOrder(int id, string data)
    {
        using var scope = scopeFactory.CreateScope();
        var orderRepository = scope.GetService<IOrderRepository>();
        var order = orderRepository.GetOrder(id);
        order.Update(data);
        orderRepository.Commit();
        // disposing the scope will also dispose the object graph
    }
}
When you implement a REST service, that transaction usually corresponds to one HTTP request. Modern frameworks, such as ASP.NET Core, will automatically create a scope per HTTP request and use it to resolve your dependency graph later in the framework internals. This means you don't even have to handle the ScopeFactory yourself.
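In ASP.NET Core that typically reduces to plain constructor injection; the controller below is only a sketch (the route, action and NoContent result are illustrative, reusing the IOrderRepository from above):
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("orders")]
public class OrdersController : ControllerBase
{
    // scoped registration: one repository instance per HTTP request
    private readonly IOrderRepository orderRepository;

    public OrdersController(IOrderRepository orderRepository)
    {
        this.orderRepository = orderRepository;
    }

    [HttpPut("{id}")]
    public IActionResult Update(int id, [FromBody] string data)
    {
        var order = orderRepository.GetOrder(id);
        order.Update(data);
        orderRepository.Commit();
        return NoContent();
    }
}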

Looking for a Ninject scope that behaves like InRequestScope

On my service layer I have injected a UnitOfWork and 2 repositories in the constructor. The unit of work and the repositories each hold an instance of a DbContext that I want to share between them. How can I do that with Ninject? Which scope should be considered?
I am not in a web application, so I can't use InRequestScope.
I am trying to do something similar... and I am using DI; however, I need my UoW to be created and disposed like this:
using (IUnitOfWork uow = UnitOfWorkFactory.Create())
{
    _testARepository.Insert(a);
    _testBRepository.Insert(b);

    uow.SaveChanges();
}
EDIT: I just want to be sure I understand. After looking at https://github.com/ninject/ninject.extensions.namedscope/wiki/InNamedScope I thought about my current console application architecture, which actually uses Ninject.
Let's say:
Class A is a service layer class.
Class B is a unit of work which takes an interface (IContextFactory) as a constructor parameter.
Class C is a repository which takes an interface (IContextFactory) as a constructor parameter.
The idea here is to be able to perform context operations on 2 or more repositories and use the unit of work to apply the changes.
Class D is a context factory (Entity Framework) which provides an instance (kept in a container) of the context that is shared between Class B and C (... and would be for other repositories as well).
The context factory keeps the instance in its container, but I don't want to reuse this instance all the time, since the context needs to be disposed at the end of the service operation. Is that actually the main purpose of InNamedScope?
The solution would be the following, but I am not at all sure I am doing it right; the service instances are going to be transient, which means they are never actually disposed?
Bind<IScsContextFactory>()
    .To<ScsContextFactory>()
    .InNamedScope("ServiceScope")
    .WithConstructorArgument(
        "connectionString",
        ConfigurationUtility.GetConnectionString());

Bind<IUnitOfWork>().To<ScsUnitOfWork>();
Bind<IAccountRepository>().To<AccountRepository>();
Bind<IBlockedIpRepository>().To<BlockedIpRepository>();

Bind<IAccountService>().To<AccountService>().DefinesNamedScope("ServiceScope");
Bind<IBlockedIpService>().To<BlockedIpService>().DefinesNamedScope("ServiceScope");
UPDATE: This approach works against the current NuGet release, but relies on an anomaly in the InCallScope implementation which has been fixed in the current Unstable NuGet packages. I'll be tweaking this answer in a few days to reflect the best approach after some mulling over. NB the high-level way of structuring stuff will stay pretty much identical; just the exact details of the Bind<DbContext>() scoping will change. (Hint: CreateNamedScope in unstable would work, or one could set up the Command Handler as DefinesNamedScope. The reason I don't just do that is that I want to have something that composes/plays well with InRequestScope.)
I highly recommend reading the Ninject.Extensions.NamedScope integration tests (seriously, find them and read and re-read them)
The DbContext is a Unit Of Work so no further wrapping is necessary.
As you want to be able to have multiple 'requests' in flight and want to have a single Unit of Work shared between them, you need to:
Bind<DbContext>()
    .ToMethod( ctx =>
        new DbContext(
            nameOrConnectionString: ConfigurationUtility.GetConnectionString() ))
    .InCallScope();
The InCallScope() means that:
for a given object graph composed for a single kernel.Get() call (hence In Call Scope), everyone that requires a DbContext will get the same instance.
IDisposable.Dispose() will be called when Kernel.Release() happens for the root object (or a Kernel.Components.Get<ICache>().Clear() happens for the root if it is not .InCallScope())
There should be no reason to use InNamedScope() and DefinesNamedScope(); You don't have long-lived objects you're trying to exclude from the default pooling / parenting / grouping.
If you do the above, you should be able to:
var command = kernel.Get<ICommand>();
try {
command.Execute();
} finally {
kernel.Components.Get<ICache>().Clear( command ); // Dispose of DbContext happens here
}
The Command implementation looks like:
class Command : ICommand
{
    readonly IAccountRepository _ar;
    readonly IBlockedIpRepository _br;
    readonly DbContext _ctx;

    public Command(IAccountRepository ar, IBlockedIpRepository br, DbContext ctx)
    {
        _ar = ar;
        _br = br;
        _ctx = ctx;
    }

    void ICommand.Execute()
    {
        _ar.Insert(a);
        _br.Insert(b);
        _ctx.SaveChanges();
    }
}
Note that in general, I avoid having an implicit Unit of Work in this way, and instead surface its creation and disposal. This makes a Command look like this:
class Command : ICommand
{
    readonly IAccountService _as;
    readonly IBlockedIpService _bs;
    readonly Func<DbContext> _createContext;

    public Command(IAccountService @as, IBlockedIpService bs, Func<DbContext> createContext)
    {
        _as = @as;
        _bs = bs;
        _createContext = createContext;
    }

    void ICommand.Execute()
    {
        using (var ctx = _createContext())
        {
            _as.InsertA(ctx);
            _bs.InsertB(ctx);
            ctx.SaveChanges();
        }
    }
}
This involves no usage of .InCallScope() on the Bind<DbContext>() (but does require the presence of Ninject.Extensions.Factory's FactoryModule to synthesize the Func<DbContext> from a straightforward Bind<DbContext>()).
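A minimal sketch of that composition, assuming the Ninject.Extensions.Factory package is referenced and auto-loaded (MyDbContext is an illustrative DbContext subclass):
// With the factory extension loaded, Ninject can inject a Func<DbContext>
// wherever a plain Bind<DbContext>() exists; each invocation of the factory
// delegate resolves a fresh context, which the Command disposes itself.
var kernel = new StandardKernel();

kernel.Bind<DbContext>()
      .ToMethod( ctx => new MyDbContext( ConfigurationUtility.GetConnectionString() ) );
kernel.Bind<ICommand>().To<Command>();

var command = kernel.Get<ICommand>(); // Command receives a Func<DbContext>
command.Execute();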
As discussed in the other answer, InCallScope is not a good approach to solving this problem.
For now I'm dumping some code that works against the latest NuGet Unstable / Include PreRelease / Install-Package -Pre editions of Ninject.Web.Common without a clear explanation. I have started to write a walkthrough of this technique in the Ninject.Extensions.NamedScope wiki's CreateNamedScope/GetScope article.
Possibly some bits will become Pull Request(s) at some stage too (hat tip to #Remo Gloor who supplied me the outline code). The associated tests and learning tests are in this gist for now, pending packaging in a proper released format TBD.
The exec summary is: you load the Module below into your Kernel and use .InRequestScope() on everything you want created/disposed per handler invocation, and then feed requests through via IHandlerComposer.ComposeCallDispose.
If you use the following Module:
public class Module : NinjectModule
{
    public override void Load()
    {
        Bind<IHandlerComposer>().To<NinjectRequestScopedHandlerComposer>();

        // Wire it up so InRequestScope will work for Handler scopes
        Bind<INinjectRequestHandlerScopeFactory>().To<NinjectRequestHandlerScopeFactory>();
        NinjectRequestHandlerScopeFactory.NinjectHttpApplicationPlugin.RegisterIn( Kernel );
    }
}
Which wires in a Factory[1] and NinjectHttpApplicationPlugin that exposes:
public interface INinjectRequestHandlerScopeFactory
{
    NamedScope CreateRequestHandlerScope();
}
Then you can use this Composer to Run a Request InRequestScope():
public interface IHandlerComposer
{
    void ComposeCallDispose( Type type, Action<object> callback );
}
Implemented as:
class NinjectRequestScopedHandlerComposer : IHandlerComposer
{
    readonly INinjectRequestHandlerScopeFactory _requestHandlerScopeFactory;

    public NinjectRequestScopedHandlerComposer( INinjectRequestHandlerScopeFactory requestHandlerScopeFactory )
    {
        _requestHandlerScopeFactory = requestHandlerScopeFactory;
    }

    void IHandlerComposer.ComposeCallDispose( Type handlerType, Action<object> callback )
    {
        using ( var resolutionRoot = _requestHandlerScopeFactory.CreateRequestHandlerScope() )
            foreach ( object handler in resolutionRoot.GetAll( handlerType ) )
                callback( handler );
    }
}
The Ninject Infrastructure stuff:
class NinjectRequestHandlerScopeFactory : INinjectRequestHandlerScopeFactory
{
    internal const string ScopeName = "Handler";

    readonly IKernel _kernel;

    public NinjectRequestHandlerScopeFactory( IKernel kernel )
    {
        _kernel = kernel;
    }

    NamedScope INinjectRequestHandlerScopeFactory.CreateRequestHandlerScope()
    {
        return _kernel.CreateNamedScope( ScopeName );
    }

    /// <summary>
    /// When plugged in as a Ninject Kernel Component via <c>RegisterIn(IKernel)</c>, makes the Named Scope generated during IHandlerFactory.RunAndDispose available for use via the Ninject.Web.Common's <c>.InRequestScope()</c> Binding extension.
    /// </summary>
    public class NinjectHttpApplicationPlugin : NinjectComponent, INinjectHttpApplicationPlugin
    {
        readonly IKernel kernel;

        public static void RegisterIn( IKernel kernel )
        {
            kernel.Components.Add<INinjectHttpApplicationPlugin, NinjectHttpApplicationPlugin>();
        }

        public NinjectHttpApplicationPlugin( IKernel kernel )
        {
            this.kernel = kernel;
        }

        object INinjectHttpApplicationPlugin.GetRequestScope( IContext context )
        {
            // TODO PR for TrgGetScope
            try
            {
                return NamedScopeExtensionMethods.GetScope( context, ScopeName );
            }
            catch ( UnknownScopeException )
            {
                return null;
            }
        }

        void INinjectHttpApplicationPlugin.Start()
        {
        }

        void INinjectHttpApplicationPlugin.Stop()
        {
        }
    }
}
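Usage then reduces to resolving the composer and feeding it a handler type; everything bound .InRequestScope() lives exactly as long as one ComposeCallDispose call. IPlaceOrderHandler below is a made-up handler interface:
var kernel = new StandardKernel( new Module() );
var composer = kernel.Get<IHandlerComposer>();

// the handler itself (and anything it depends on) is bound .InRequestScope();
// each call creates the named scope, resolves the handler(s) inside it,
// invokes the callback, then disposes the scope and everything in it
composer.ComposeCallDispose(
    typeof( IPlaceOrderHandler ),
    handler => ( (IPlaceOrderHandler)handler ).Handle() );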

MVC 4 How to process a url parameter on every page, base controller?

Looking for some guidance in designing my new MVC 4 app.
I would like to have a url parameter s=2011 on every page of the app to let me know what year of data I'm working with. Obviously, the user will have a way to change that parameter as needed.
I will need that parameter in every controller and am wondering about the best way to do this. I was thinking of creating a base controller that reads Request.QueryString and puts the year into a public property. However, considering all the extensibility points in MVC, I'm wondering if there's a better way to do this?
This very much depends on the design of your app, but just to give you two alternatives
IActionFilter
If you are doing data context per request you can use a global IActionFilter to hook pre-action execution and apply a query filter to your data context behind the scenes.
The major downside of this is that to test the controller you will need the full MVC pipeline set up so that the action filter gets applied properly.
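For that route, a rough sketch of such a filter might look like this (the filter name and the HttpContext.Items key are made up for illustration):
using System.Web.Mvc;

// Global filter that reads the "s" query-string parameter before every action
// and stashes the selected year where the rest of the request can reach it.
public class YearFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var year = filterContext.HttpContext.Request.QueryString["s"];
        if (!string.IsNullOrEmpty(year))
        {
            // e.g. hand it to the data context / repositories for this request
            filterContext.HttpContext.Items["DataYear"] = year;
        }

        base.OnActionExecuting(filterContext);
    }
}

// registered once in Global.asax.cs:
// GlobalFilters.Filters.Add(new YearFilterAttribute());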
Dependency Injection
Instead of sub-classing (the base controller you mention) you can use dependency injection. Keeping things looser will allow you to pull the filter from the query string, a cookie, a user setting in the database, or whatever else, without your controller knowing where it comes from.
Here is some pseudocode showing how I would do it if I were using something like Entity Framework or NHibernate (I am sure it is applicable to other technologies as well):
public class Car
{
    public string Year { get; set; }
}

public class CarsDataContext : DbContext
{
    private IQueryable<Car> _cars = null;
    private Expression<Func<Car, bool>> _carsFilter = null;

    public IQueryable<Car> Cars
    {
        get
        {
            if (_carsFilter != null)
                return _cars.Where(_carsFilter);
            return _cars;
        }
        set { _cars = value; }
    }

    public void ApplyCarsFilter(Expression<Func<Car, bool>> predicate)
    {
        _carsFilter = predicate;
    }
}
Assuming you already have dependency injection set up (Ninject or whichever other framework), you can configure how the context is initialized:
Bind<CarsDataContext>().ToMethod(ctx =>
{
    string yearFilter = GetYearFilter(); // can be coming from anywhere
    CarsDataContext dataContext = new CarsDataContext();
    dataContext.ApplyCarsFilter(car => car.Year == yearFilter);
    return dataContext;
}).InRequestScope();
Then my controller knows nothing about the data filtering and I can easily test it:
class MyController : Controller
{
    public MyController(CarsDataContext dataContext)
    {
    }

    ...
}
However, I would only do this if filtering the dataset cut across many controllers and was an important part of my software. Otherwise it's pure over-engineering.

Multiple UnitOfWorks, ISession and repositories

Do you know a good example somewhere for this?
What I would like is the following:
using (IUnitOfWork uow = UnitOfWork.Start())
{
    repositoryA.Add(instanceOfA);
    repositoryB.Add(instanceOfB);

    uow.Commit();
}
But this case is too ideal, because what I also want is multiple UnitOfWorks (per presenter in most cases; this is not for a web app). In that case, how will the repositories know which UnitOfWork to use if I don't give it to them explicitly, like this:
repository.UnitOfWork = uow;
My goal is to have every UnitOfWork hold an ISession behind it, so one UoW per presenter, one ISession per presenter. Note: I know that ISession is essentially a UoW, but I must proceed this way...
Any advice is appreciated.
What I do is pass the UnitOfWork that should be used to the repository.
To achieve this, you can require the UnitOfWork instance that must be used by the repository to be passed in the repository's constructor.
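For instance, a repository might look roughly like this (the IUnitOfWork interface with a Session property, and the RepositoryA/EntityA names, are assumptions for the sketch, not from the question):
using System;
using NHibernate;

public interface IUnitOfWork
{
    ISession Session { get; }
    void Commit();
}

public class RepositoryA : IRepositoryA
{
    private readonly IUnitOfWork _unitOfWork;

    // the repository is tied to exactly one unit of work for its whole lifetime
    public RepositoryA(IUnitOfWork unitOfWork)
    {
        if (unitOfWork == null) throw new ArgumentNullException("unitOfWork");
        _unitOfWork = unitOfWork;
    }

    public void Add(EntityA instance)
    {
        // always work against the ISession owned by the unit of work we were given
        _unitOfWork.Session.Save(instance);
    }
}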
To make it all a bit more user-friendly, you can create extension methods on your UnitOfWork that are specific to your project (supposing that your UnitOfWork is generic and used in multiple projects) that look like this:
public static class UnitOfWorkExtensions
{
    public static IRepositoryA GetRepositoryA( this UnitOfWork uow )
    {
        return new RepositoryA(uow);
    }

    public static IRepositoryB GetRepositoryB( this UnitOfWork uow )
    {
        return new RepositoryB(uow);
    }
}
Then, it enables you to do this:
using (IUnitOfWork uow = UnitOfWork.Start())
{
    var repositoryA = uow.GetRepositoryA();
    var repositoryB = uow.GetRepositoryB();

    repositoryA.Add(instanceOfA);
    repositoryB.Add(instanceOfB);

    uow.Commit();
}

Accessing more than one data provider in a data layer

I'm working on a business application which is being developed using DDD philosophy. Database is accessed through NHibernate and data layer is implemented using DAO pattern.
The UML class diagram is shown below.
UML Class Diagram: http://img266.imageshack.us/my.php?image=classdiagramhk0.png
I don't know whether the design is good or not. What do you think?
But the problem is not whether the design is good or not. The problem is that after starting up the application, an IDaoFactory is instantiated in the presentation layer and sent as a parameter to the presenter classes (which are designed using the MVC pattern), as below:
...
IDaoFactory daoFactory = new NHibernateDaoFactory(); //instantiation in main class
...
SamplePresenterClass s = new SamplePresenterClass(daoFactory);
...
Using just one data provider (which was just one database) was simple. But now we need to get data from XML too, and in the next phases of development we will have to connect to different web services and manipulate incoming and outgoing data.
The XML data is going to be retrieved using a key, which is an enum. We add a class named XMLLoader to the data layer and an interface ILoader to the domain. XMLLoader has a method whose signature is:
List<string> LoadData(LoaderEnum key)
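As a sketch of that abstraction (only the LoadData signature comes from the question; the one-file-per-key lookup and the <item> elements are placeholders):
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public interface ILoader
{
    List<string> LoadData(LoaderEnum key);
}

public class XMLLoader : ILoader
{
    public List<string> LoadData(LoaderEnum key)
    {
        // placeholder lookup: one XML file per enum value, flat list of <item> elements
        var document = XDocument.Load(key + ".xml");
        return document.Descendants("item")
                       .Select(e => (string)e)
                       .ToList();
    }
}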
If we instantiate ILoader with XMLLoader in the presentation layer as below, we have to pass it to the objects that are going to get XML data from the data layer.
ILoader loader = new XMLLoader();
SamplePresenterClass s = new SamplePresenterClass(daoFactory, xmlLoader);
After implementing web service access classes
SamplePresenterClass s = new SamplePresenterClass(daoFactory, xmlLoader, sampleWebServiceConnector1, sampleWebServiceConnector2, ...);
The parameter list is going to grow over time. I think I can hold all the instances of the data access objects in one class and pass it to the required presenters (maybe the singleton pattern can help too). In the domain layer there would be a class like this:
public class DataAccessHolder
{
    private IDaoFactory daoFactory;
    private ILoader loader;
    ...

    public IDaoFactory DaoFactory
    {
        get { return daoFactory; }
        set { daoFactory = value; }
    }
    ...
}
In the main class, the instantiation with this design can be done as follows:
DataAccessHolder dataAccessHolder = new DataAccessHolder();
dataAccessHolder.DaoFactory = new NHibernateDaoFactory();
dataAccessHolder.Loader = new XMLLoader();
...
SamplePresenterClass s = new SamplePresenterClass(dataAccessHolder);
What do you think about this design, or can you suggest a different one?
Thanks to all repliers...
IMO, it would be cleaner to use a "global" or static daoFactory and make it generic.
DaoFactory<SamplePresenterClass>.Create(); // or
DaoFactory<SamplePresenterClass>.Create(id); // etc
Then, you can define DaoFactory<T> to take only, say, IDao's
interface IDao
{
    IDaoProvider GetProvider();
}

interface IDaoProvider
{
    IDao Create(IDao instance);
    void Update(IDao instance);
    void Delete(IDao instance);
}
Basically, instead of passing your DaoFactory to every constructor, you use a static generic DaoFactory. Its T must implement IDao. The DaoFactory class can then look at T's provider at runtime:
static class DaoFactory<T> where T : IDao, new()
{
    public static T Create()
    {
        T instance = new T();
        IDaoProvider provider = instance.GetProvider();

        return (T)provider.Create(instance);
    }
}
Here IDaoProvider is a common interface that you would implement to load things using XML, NHibernate, Web Services, etc., depending on the class. (Each IDao object would know how to connect to its data provider.)
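An NHibernate-backed provider could then look roughly like this (SessionProvider.SessionFactory is an assumed helper exposing a configured ISessionFactory, and the session-per-call handling is only one possible choice):
using NHibernate;

class NHibernateDaoProvider : IDaoProvider
{
    // assumed helper that exposes a configured ISessionFactory
    private readonly ISessionFactory _sessionFactory = SessionProvider.SessionFactory;

    public IDao Create(IDao instance)
    {
        using (ISession session = _sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            session.Save(instance);
            tx.Commit();
            return instance;
        }
    }

    public void Update(IDao instance)
    {
        using (ISession session = _sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            session.Update(instance);
            tx.Commit();
        }
    }

    public void Delete(IDao instance)
    {
        using (ISession session = _sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            session.Delete(instance);
            tx.Commit();
        }
    }
}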
Overall, not a bad design though. Add a bit more OO and you will have a pretty slick design. For instance, each file for the XmlEnums could be implemented as an IDao:
class Cat : IDao
{
    public IDaoProvider GetProvider()
    {
        return new XmlLoader(YourEnum.Cat);
    }

    // ...
}