Whenever I have a stateful object with class properties, I resort to writing a factory for that class and injecting the factory into the dependent class, like this:
class Foo
{
    Dependency1 d1;
    Dependency2 d2;
    IdProcessorFactory idProcessorFactory;

    public Foo(Dependency1 d1, Dependency2 d2, IdProcessorFactory idProcessorFactory)
    {
        this.d1 = d1;
        this.d2 = d2;
        this.idProcessorFactory = idProcessorFactory;
    }

    private void DoSomething(List<int> ids)
    {
        foreach (var id in ids)
        {
            var idProcessor = idProcessorFactory.GetInstance();
            idProcessor.Process(id);
        }
    }
}
I wonder if this is the right approach. Since stateful objects are encouraged in an object-oriented language, it seems one has to write a factory for every dependency of which the dependent class needs multiple instances. Is this the way to go, or is there a better alternative?
(Please note that I am aware of the service locator pattern, which is discouraged.)
In DDD, it is the application layer that uses the repository to get the data from the database, calls the methods of the domain, and then calls the repository to persist the data. Something like this:
public void MyApplicationService()
{
    Order myOrder = _orderRepository.Get(1);
    myOrder.Update(data);
    _orderRepository.Commit();
}
In this example the repository is a class variable that is instantiated in the constructor of the service, so its lifetime is the lifetime of the class.
But I am wondering whether it would be better to instantiate a repository for each action that I want to do, to give it a shorter lifetime; otherwise, if I use the class for many actions, the repository will accumulate many entities that it may no longer need.
So I was thinking of a solution like this:
public void MyApplicationService()
{
    OrderRepository myOrderRepository = new OrderRepository(_options);
    Order myOrder = myOrderRepository.GetOrder(1);
    myOrder.Update(data);
    myOrderRepository.Commit();
    myOrderRepository.Dispose();
}
So a new instance each time I need to do the action.
So, in summary, I would like to know about the different solutions, and their advantages and disadvantages, for deciding the lifespan of the repository.
Thanks.
The recommended lifespan of the repository is one business transaction.
Your second piece of code is correct in that respect, but it has one drawback: you have created a strong dependency between the ApplicationService and OrderRepository classes. With your code, you cannot isolate the two classes in order to unit test them separately. You also need to update the ApplicationService class whenever the constructor of OrderRepository changes. If OrderRepository requires parameters to construct, you have to construct them yourself (which means referencing their types and base types), even though they are an implementation detail of OrderRepository (needed for persistence store access) and of no concern to your application service layer.
For these reasons, most modern software development relies on a pattern called Dependency Injection (DI). With DI, you declare that your ApplicationService class depends on an instance of the OrderRepository class, or better, on an interface IOrderRepository that the OrderRepository class implements. The dependency is declared by adding a parameter to the ApplicationService constructor:
public interface IOrderRepository : IDisposable
{
    Order GetOrder(int id);
    void Commit();
}

public class ApplicationService
{
    private readonly IOrderRepository orderRepository;

    public ApplicationService(IOrderRepository orderRepository)
    {
        this.orderRepository = orderRepository ?? throw new ArgumentNullException(nameof(orderRepository));
    }

    public void Update(int id, string data)
    {
        Order myOrder = orderRepository.GetOrder(id);
        myOrder.Update(data);
        orderRepository.Commit();
    }
}
Now the DI library is responsible for constructing OrderRepository and injecting the instance into the ApplicationService class. If OrderRepository has its own dependencies, the library will resolve them first and construct the whole object graph, so you don't have to do that yourself. You simply tell your DI library which concrete implementation you want for each referenced interface. For example, in C#:
public IServiceCollection AddServices(IServiceCollection services)
{
    return services.AddScoped<IOrderRepository, OrderRepository>();
}
When unit testing your code, you can replace the actual implementation of OrderRepository with a mock object, such as a Mock<IOrderRepository> or your own MockOrderRepository implementation. The code under test is then exactly the code that runs in production, all the wiring being done by the DI framework.
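For instance, a minimal test sketch using Moq (the test body is only an illustration, assuming Order can be constructed directly and its Update method is safe to call on a fresh instance):

[TestMethod]
public void Update_CommitsTheOrder()
{
    // arrange: a mocked repository that hands back a known order
    var order = new Order();
    var repository = new Mock<IOrderRepository>();
    repository.Setup(r => r.GetOrder(1)).Returns(order);

    var service = new ApplicationService(repository.Object);

    // act
    service.Update(1, "some data");

    // assert: the service committed through the repository
    repository.Verify(r => r.Commit(), Times.Once);
}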
Most modern DI libraries support object lifetime management, including transient (always resolve a new object), singleton (always reuse the same object), and scoped (each scope has a single instance). The last of these is what isolates object instances per business transaction: a singleton scope factory creates a scope whenever you start a business transaction:
public class UpdateOrderUseCase : UseCase
{
    private readonly IScopeFactory scopeFactory;

    public UpdateOrderUseCase(IScopeFactory scopeFactory) // redacted

    public void UpdateOrder(int id, string data)
    {
        using var scope = scopeFactory.CreateScope();
        var orderRepository = scope.GetService<IOrderRepository>();
        var order = orderRepository.GetOrder(id);
        order.Update(data);
        orderRepository.Commit();
        // disposing the scope will also dispose the object graph
    }
}
When you implement a REST service, that transaction usually corresponds to one HTTP request. Modern frameworks, such as asp.net core, automatically create a scope per HTTP request and use it to resolve your dependency graph inside the framework internals. This means you don't even have to handle the scope factory yourself.
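As a sketch of that wiring in ASP.NET Core (minimal hosting model; the endpoint shape is an assumption for illustration):

var builder = WebApplication.CreateBuilder(args);

// one OrderRepository instance per HTTP request
builder.Services.AddScoped<IOrderRepository, OrderRepository>();
builder.Services.AddScoped<ApplicationService>();

var app = builder.Build();

// the framework resolves ApplicationService from the current request's scope
app.MapPost("/orders/{id}", (int id, string data, ApplicationService service) =>
{
    service.Update(id, data);
    return Results.Ok();
});

app.Run();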
I'm trying to write a test for a class that has a constructor dependency on Func<T>. In order to complete successfully, the method under test needs to create a number of separate objects of type T.
When running in production, Autofac generates a new T every time factory() is called; however, when writing a test using AutoMock, the same object is returned on every call.
The test cases below show the difference in behaviour between Autofac and AutoMock. I'd expect both of them to pass, but the AutoMock one fails.
public class TestClass
{
    private readonly Func<TestDep> factory;

    public TestClass(Func<TestDep> factory)
    {
        this.factory = factory;
    }

    public TestDep Get()
    {
        return factory();
    }
}

public class TestDep
{
}
[TestMethod()]
public void TestIt()
{
    using var autoMock = AutoMock.GetStrict();
    var testClass = autoMock.Create<TestClass>();
    var obj1 = testClass.Get();
    var obj2 = testClass.Get();
    Assert.AreNotEqual(obj1, obj2);
}

[TestMethod()]
public void TestIt2()
{
    var builder = new ContainerBuilder();
    builder.RegisterSource(new AnyConcreteTypeNotAlreadyRegisteredSource());
    var container = builder.Build();
    var testClass = container.Resolve<TestClass>();
    var obj1 = testClass.Get();
    var obj2 = testClass.Get();
    Assert.AreNotEqual(obj1, obj2);
}
AutoMock (from the Autofac.Extras.Moq package) is primarily useful for setting up complex mocks. Which is to say, you have a single object with a lot of dependencies and it's really hard to set that object up because it doesn't have a parameterless constructor. Moq doesn't let you set up objects with constructor parameters by default, so having something that fills the gap is useful.
However, the mocks you get from it are treated like any other mock you might get from Moq. When you set up a mock instance with Moq, you're not getting a new one every time unless you also implement the factory logic yourself.
AutoMock is not for mocking Autofac behaviour. The Func<T> support, where Autofac runs a resolve operation on every call to the Func<T>, is Autofac behaviour, not Moq.
It makes sense for AutoMock to use InstancePerLifetimeScope because, just like setting up mocks with plain Moq, you need to be able to get the mock instance back to configure it and validate against it. That would be much harder if it were new every time.
Obviously there are ways to work around that, and with a non-trivial amount of breaking changes you could probably implement InstancePerDependency semantics in there, but there's really not much value in doing that at this point since that's not really what this is for... and you could always create two different AutoMock instances to get two different mocks.
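For instance, a minimal sketch of that kind of workaround, bypassing AutoMock for this one dependency and supplying the factory delegate yourself:

[TestMethod()]
public void TestIt3()
{
    // hand TestClass a real factory so each call produces a fresh instance
    var testClass = new TestClass(() => new TestDep());

    var obj1 = testClass.Get();
    var obj2 = testClass.Get();

    Assert.AreNotEqual(obj1, obj2);
}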
A much better way to go, in general, is to provide useful abstractions and use Autofac with mocks in the container.
For example, say you have something like...
public class ThingToTest
{
    public ThingToTest(PackageSender sender) { /* ... */ }
}

public class PackageSender
{
    public PackageSender(AddressChecker checker, DataContext context) { /* ... */ }
}

public class AddressChecker { }

public class DataContext { }
If you're trying to set up ThingToTest, you can see how also setting up a PackageSender is going to be complex, and you'd likely want something like AutoMock to handle that.
However, you can make your life easier by introducing an interface there.
public class ThingToTest
{
    public ThingToTest(IPackageSender sender) { /* ... */ }
}

public interface IPackageSender { }

public class PackageSender : IPackageSender { }
By hiding all the complexity behind the interface, you can now mock just IPackageSender using plain Moq (or whatever other mocking framework you like, or even a manual stub implementation). You wouldn't even need to include Autofac in the mix, because you could mock the dependency directly and pass it in.
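A sketch of what that looks like with Moq (the test name and body are illustrative only):

[TestMethod()]
public void ThingToTest_CanBeConstructedWithAMockedSender()
{
    // mock only the interface; no container, no AutoMock
    var sender = new Mock<IPackageSender>();
    var thing = new ThingToTest(sender.Object);

    // ... exercise thing, then verify expectations against sender ...
    sender.VerifyNoOtherCalls();
}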
Point being, you can design your way into making testing and setup easier, which is why, in the comments on your question, I asked why you were doing things that way (which, at the time of this writing, never did get answered). I would strongly recommend designing things to be easier to test if possible.
I have an interface, say IVehicle, which is implemented in hundreds of classes; some of them are varieties of 4-wheeler and some are 2-wheeler derived types.
I need to introduce a new method for all the 4-wheeler classes; let's say there are 50 of them. My challenge is to reduce the effort as much as I can.
I suggested introducing a new interface / abstract class with a method definition, but that requires changing every 4-wheeler class declaration to extend an extra parent.
Is there any possible way?
If you really want to avoid changing all those classes and want a solution that can be considered OO, one thing you can do is decorate those classes where they are used and need this extra behaviour.
I'll use C# for example code as you mentioned you're looking for C#/Java solution.
interface IVehicle
{
    void DoThisNormalThing();
    // ...
}

interface IBetterVehicle : IVehicle
{
    void DoThisNeatThing();
}

class FourWheelVehicle : IVehicle
{
    public void DoThisNormalThing()
    {
        // ...
    }
    // ...
}

class BetterFourWheelVehicle : IBetterVehicle
{
    private readonly IVehicle _vehicle;

    public BetterFourWheelVehicle(IVehicle vehicle)
    {
        _vehicle = vehicle;
    }

    public void DoThisNormalThing()
    {
        _vehicle.DoThisNormalThing();
    }

    public void DoThisNeatThing()
    {
        // ...
    }
    // ...
}
Then usage:
var vehicle = new FourWheelVehicle();
var betterVehicle = new BetterFourWheelVehicle(vehicle);
betterVehicle.DoThisNeatThing();
This can be done using extension methods as well (resulting in a little less code and fewer allocated objects, as sketched below), but as this question is tagged with [oop], I wouldn't call extension methods an OO construct. They're much more aligned with procedural style, as they turn your objects into bags of procedures.
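For completeness, a minimal sketch of the extension-method variant (the class name is arbitrary; note the caveat in the comment):

static class VehicleExtensions
{
    // caveat: this attaches DoThisNeatThing to every IVehicle, not just
    // 4-wheelers, unless you have a narrower interface to target
    public static void DoThisNeatThing(this IVehicle vehicle)
    {
        // ...
    }
}

// usage: FourWheelVehicle is an IVehicle, so the method appears on it
// var vehicle = new FourWheelVehicle();
// vehicle.DoThisNeatThing();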
On my service layer I have injected a UnitOfWork and 2 repositories in the constructor. The UnitOfWork and the repositories each hold an instance of a DbContext that I want to share between them. How can I do that with Ninject? Which scope should be used?
I am not in a web application, so I can't use InRequestScope.
I am trying to do something similar... I am using DI; however, I need my UoW to be created and disposed like this:
using (IUnitOfWork uow = UnitOfWorkFactory.Create())
{
    _testARepository.Insert(a);
    _testBRepository.Insert(b);
    uow.SaveChanges();
}
EDIT: I just want to be sure I understand... after looking at https://github.com/ninject/ninject.extensions.namedscope/wiki/InNamedScope I thought about my current console application architecture, which actually uses Ninject.
Let's say:
Class A is a service layer class.
Class B is a unit of work which takes an interface (IContextFactory) as a constructor parameter.
Class C is a repository which takes an interface (IContextFactory) as a constructor parameter.
The idea here is to be able to perform context operations on 2 or more repositories and use the unit of work to apply the changes.
Class D is a context factory (Entity Framework) which provides an instance (kept in a container) of the context that is shared between classes B and C (... and would be for other repositories as well).
The context factory keeps the instance in its container, so I don't want to reuse that instance all the time, since the context needs to be disposed at the end of the service operation. Is that the main purpose of InNamedScope, actually?
The solution would be the following, but I am not at all sure I am doing it right; the service instances are going to be transient, which means they are never actually disposed?
Bind<IScsContextFactory>()
    .To<ScsContextFactory>()
    .InNamedScope("ServiceScope")
    .WithConstructorArgument(
        "connectionString",
        ConfigurationUtility.GetConnectionString());

Bind<IUnitOfWork>().To<ScsUnitOfWork>();
Bind<IAccountRepository>().To<AccountRepository>();
Bind<IBlockedIpRepository>().To<BlockedIpRepository>();
Bind<IAccountService>().To<AccountService>().DefinesNamedScope("ServiceScope");
Bind<IBlockedIpService>().To<BlockedIpService>().DefinesNamedScope("ServiceScope");
UPDATE: This approach works against the current NuGet packages, but relies on an anomaly in the InCallScope implementation which has been fixed in the current unstable NuGet packages. I'll be tweaking this answer in a few days to reflect the best approach after some mulling over. NB the high-level way of structuring things will stay pretty much identical; just the exact details of the Bind<DbContext>() scoping will change. (Hint: CreateNamedScope in unstable would work, or one could set up the command handler as DefinesNamedScope. The reason I don't just do that is that I want something that composes/plays well with InRequestScope.)
I highly recommend reading the Ninject.Extensions.NamedScope integration tests (seriously, find them and read and re-read them)
The DbContext is a Unit Of Work so no further wrapping is necessary.
As you want to be able to have multiple 'requests' in flight and want to have a single Unit of Work shared between them, you need to:
Bind<DbContext>()
    .ToMethod( ctx =>
        new DbContext(
            connectionStringName: ConfigurationUtility.GetConnectionString() ) )
    .InCallScope();
The InCallScope() means that:
for a given object graph composed for a single kernel.Get() call (hence In Call Scope), everyone that requires a DbContext will get the same instance.
IDisposable.Dispose() will be called when a Kernel.Release() happens for the root object (or a Kernel.Components.Get<ICache>().Clear() happens for the root, if it is not .InCallScope()).
There should be no reason to use InNamedScope() and DefinesNamedScope(); you don't have long-lived objects you're trying to exclude from the default pooling / parenting / grouping.
If you do the above, you should be able to:
var command = kernel.Get<ICommand>();
try
{
    command.Execute();
}
finally
{
    kernel.Components.Get<ICache>().Clear( command ); // Dispose of DbContext happens here
}
The Command implementation looks like:
class Command : ICommand
{
    readonly IAccountRepository _ar;
    readonly IBlockedIpRepository _br;
    readonly DbContext _ctx;

    public Command(IAccountRepository ar, IBlockedIpRepository br, DbContext ctx)
    {
        _ar = ar;
        _br = br;
        _ctx = ctx;
    }

    void ICommand.Execute()
    {
        _ar.Insert(a);
        _br.Insert(b);
        _ctx.SaveChanges();
    }
}
Note that, in general, I avoid having an implicit Unit of Work in this way, and instead surface its creation and disposal. That makes a Command look like this:
class Command : ICommand
{
    readonly IAccountService _as;
    readonly IBlockedIpService _bs;
    readonly Func<DbContext> _createContext;

    public Command(IAccountService @as, IBlockedIpService bs, Func<DbContext> createContext)
    {
        _as = @as;
        _bs = bs;
        _createContext = createContext;
    }

    void ICommand.Execute()
    {
        using (var ctx = _createContext())
        {
            _as.InsertA(ctx);
            _bs.InsertB(ctx);
            ctx.SaveChanges();
        }
    }
}
This involves no usage of .InCallScope() on the Bind<DbContext>(), but it does require the presence of Ninject.Extensions.Factory's FactoryModule to synthesize the Func<DbContext> from a straightforward Bind<DbContext>().
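A sketch of that wiring (assuming FuncModule is the module that Ninject.Extensions.Factory ships for synthesizing Func<T> bindings):

// load the factory extension so Ninject can resolve Func<DbContext>
var kernel = new StandardKernel(new FuncModule());

// a plain transient binding; every call to the injected Func<DbContext>
// then resolves a brand new context for the Command to own and dispose
kernel.Bind<DbContext>()
      .ToMethod(ctx => new DbContext(
          connectionStringName: ConfigurationUtility.GetConnectionString()));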
As discussed in the other answer, InCallScope is not a good approach to solving this problem.
For now I'm dumping some code that works against the latest NuGet Unstable / Include PreRelease / Install-Package -Pre editions of Ninject.Web.Common without a clear explanation. I have started to write a walkthrough of this technique in the Ninject.Extensions.NamedScope wiki's CreateNamedScope/GetScope article.
Possibly some bits will become Pull Request(s) at some stage too (hat tip to @Remo Gloor, who supplied the outline code). The associated tests and learning tests are in this gist for now, pending packaging in a properly released format TBD.
The exec summary: you load the Module below into your Kernel, use .InRequestScope() on everything you want created / disposed per handler invocation, and then feed requests through via IHandlerComposer.ComposeCallDispose.
If you use the following Module:
public class Module : NinjectModule
{
    public override void Load()
    {
        Bind<IHandlerComposer>().To<NinjectRequestScopedHandlerComposer>();

        // Wire it up so InRequestScope will work for Handler scopes
        Bind<INinjectRequestHandlerScopeFactory>().To<NinjectRequestHandlerScopeFactory>();
        NinjectRequestHandlerScopeFactory.NinjectHttpApplicationPlugin.RegisterIn( Kernel );
    }
}
Which wires in a Factory[1] and NinjectHttpApplicationPlugin that exposes:
public interface INinjectRequestHandlerScopeFactory
{
    NamedScope CreateRequestHandlerScope();
}
Then you can use this Composer to Run a Request InRequestScope():
public interface IHandlerComposer
{
    void ComposeCallDispose( Type type, Action<object> callback );
}
Implemented as:
class NinjectRequestScopedHandlerComposer : IHandlerComposer
{
    readonly INinjectRequestHandlerScopeFactory _requestHandlerScopeFactory;

    public NinjectRequestScopedHandlerComposer( INinjectRequestHandlerScopeFactory requestHandlerScopeFactory )
    {
        _requestHandlerScopeFactory = requestHandlerScopeFactory;
    }

    void IHandlerComposer.ComposeCallDispose( Type handlerType, Action<object> callback )
    {
        using ( var resolutionRoot = _requestHandlerScopeFactory.CreateRequestHandlerScope() )
            foreach ( object handler in resolutionRoot.GetAll( handlerType ) )
                callback( handler );
    }
}
The Ninject Infrastructure stuff:
class NinjectRequestHandlerScopeFactory : INinjectRequestHandlerScopeFactory
{
    internal const string ScopeName = "Handler";

    readonly IKernel _kernel;

    public NinjectRequestHandlerScopeFactory( IKernel kernel )
    {
        _kernel = kernel;
    }

    NamedScope INinjectRequestHandlerScopeFactory.CreateRequestHandlerScope()
    {
        return _kernel.CreateNamedScope( ScopeName );
    }

    /// <summary>
    /// When plugged in as a Ninject Kernel Component via <c>RegisterIn(IKernel)</c>, makes the Named Scope generated during IHandlerFactory.RunAndDispose available for use via Ninject.Web.Common's <c>.InRequestScope()</c> Binding extension.
    /// </summary>
    public class NinjectHttpApplicationPlugin : NinjectComponent, INinjectHttpApplicationPlugin
    {
        readonly IKernel kernel;

        public static void RegisterIn( IKernel kernel )
        {
            kernel.Components.Add<INinjectHttpApplicationPlugin, NinjectHttpApplicationPlugin>();
        }

        public NinjectHttpApplicationPlugin( IKernel kernel )
        {
            this.kernel = kernel;
        }

        object INinjectHttpApplicationPlugin.GetRequestScope( IContext context )
        {
            // TODO PR for TryGetScope
            try
            {
                return NamedScopeExtensionMethods.GetScope( context, ScopeName );
            }
            catch ( UnknownScopeException )
            {
                return null;
            }
        }

        void INinjectHttpApplicationPlugin.Start()
        {
        }

        void INinjectHttpApplicationPlugin.Stop()
        {
        }
    }
}
Having built a small application using the Unit of Work/Repository pattern, I am struggling to understand how to use it properly within my business layer. My application has a data access layer which can be either NHibernate or the Entity Framework; I can switch between these easily.
I have a number of repositories, for example Customer, Order, etc. My unit of work will be either an ISession or an ObjectContext, depending on which DAL I want to test with.
My business layer contains a single business method, CreateOrder(). What I am struggling to understand is where in the business layer I should be initialising my unit of work and my repositories.
Focusing on NHibernate, my DAL looks like:
public class NHibernateDAL : IUnitOfWork
{
    log4net.ILog log = log4net.LogManager.GetLogger(typeof(NHibernateDAL));

    ISession context;

    public NHibernateDAL()
    {
        context = SessionProvider.OpenSession();
        this.Context.BeginTransaction();
        CurrentSessionContext.Bind(context);
    }

    public ISession Context
    {
        get { return context; }
    }

    public void Commit()
    {
        this.Context.Transaction.Commit();
        context.Close();
    }

    public void Dispose()
    {
        ISession session = CurrentSessionContext.Unbind(SessionProvider.SessionFactory);
        session.Close();
    }
}
Within my business layer, I want to know where I should be declaring my unit of work and repositories. Should they be declared at class level or within the CreateOrder method?
For example:
public class BusinessLogic
{
    IUnitOfWork _unitOfWork;
    NhRepository<Order> _orderRepository;
    NhRepository<Customer> _customerRepository;

    public BusinessLogic()
    {
        _unitOfWork = new NHibernateDAL();
        _orderRepository = new NhRepository<Order>(_unitOfWork);
        _customerRepository = new NhRepository<Customer>(_unitOfWork);
    }
    ....

    public void CreateOrder(.....)
    {
        Order order = new Order();
        _orderRepository.Add(order);
        _unitOfWork.Commit();
    }
}
The above code works only the first time the CreateOrder() method is called, but not for subsequent calls, because the session is closed. I have tried removing the context.Close() call after committing the transaction, but this also fails. Although the above approach doesn't work, it seems more correct to me to declare my repositories and unit of work with this scope.
However, if I implement it as below instead, it works fine, but it seems unnatural to declare the repositories and unit of work within the scope of the method itself. If I had a tonne of business methods, I would be declaring repositories and units of work all over the place:
public class BusinessLogic
{
    public void CreateOrder(.....)
    {
        IUnitOfWork _unitOfWork = new NHibernateDAL();
        var _orderRepository = new NhRepository<Order>(_unitOfWork);
        NhRepository<Customer> _customerRepository = null;

        Order order = new Order();
        _orderRepository.Add(order);
        _unitOfWork.Commit();
    }
}
If I were to implement this with class level declaration then I think I would need some means of re-opening the same unit of work at the start of the CreateOrder method.
What is the correct way to use the unit of work and repositories within the business layer?
Looks to me like you've almost got it. In our new server stack I have this setup:
WCF Service Layer --> just returns results from my Business Layer
My business layer is called, creates a unit of work, creates the repository
Calls the repository function
Uses AutoMapper to move the returned results into a DTO
My repository gets the query results and populates a composite object.
Looks almost like what you've got there, though we use Unity to locate what you call the business layer (we just call it our function processor).
What I would highly suggest, though, is that you do NOT keep the UnitOfWork at the class level. After all, each discrete function is a unit of work. So mine is like this (the names have been changed to protect the innocent):
using ( UnitOfWorkScope scope = new UnitOfWorkScope( TransactionMode.Default ) )
{
    ProcessRepository repository = new ProcessRepository( );
    CompositionResultSet result = repository.Get( key );
    scope.Commit( );
    MapData( );
    return AutoMapper.Mapper.Map<ProcessSetDTO>( result );
}
We also had a long discussion on when to do a scope.Commit() and, while it isn't needed for queries, it establishes a consistent pattern for every function in the application layer. BTW, we are using NCommon for our repository/unit-of-work patterns and do not have to pass the UoW to the repository.
Your IUnitOfWork implementation contains all repositories.
Your IUnitOfWork is injected into your presentation layer, for example an MVC controller.
Your IRepository is injected into your IUnitOfWork implementation.
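A minimal sketch of that shape (IRepository, EfUnitOfWork, and the property names here are hypothetical, not taken from the posts above):

public interface IRepository<T> where T : class
{
    T Get(int id);
    void Add(T entity);
}

public interface IUnitOfWork : IDisposable
{
    // the repositories hang off the unit of work and share its context
    IRepository<Order> Orders { get; }
    IRepository<Customer> Customers { get; }
    void Commit();
}

public class EfUnitOfWork : IUnitOfWork
{
    private readonly DbContext _context;

    // repositories are injected and share the same DbContext instance
    public EfUnitOfWork(DbContext context, IRepository<Order> orders, IRepository<Customer> customers)
    {
        _context = context;
        Orders = orders;
        Customers = customers;
    }

    public IRepository<Order> Orders { get; }
    public IRepository<Customer> Customers { get; }

    public void Commit() => _context.SaveChanges();
    public void Dispose() => _context.Dispose();
}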