I've been struggling with a DDD-related issue around Specifications, and I've read a lot about DDD, specifications and repositories.
However, there is an issue when trying to combine all three of these without breaking the domain-driven design. It boils down to how to apply filters with performance in mind.
First a few obvious facts:
Repositories go to the DataAccess/Infrastructure layer
Domain Models represent Business Logic and go to the Domain layer
Data Access Models represent the Persistence layer and go to the Persistence/Infrastructure/DataAccess layer
Business Logic goes to Domain Layer
Specifications are Business Logic, so they belong to the Domain layer too.
In all these examples, an ORM framework and SQL Server are used inside the Repository
Persistence Models may not leak into the Domain layer
So far, so easy. The problem arises when we try to apply Specifications to the Repository without breaking the DDD pattern or running into performance issues.
The possible ways to apply Specifications:
1) Classic way: Specifications using Domain Model in Domain Layer
Apply the traditional Specification pattern, with an IsSatisfiedBy method returning a bool, and Composite Specifications to combine multiple Specifications.
This lets us keep specifications in the Domain layer, but...
It has to work with Domain Models, while the repository uses Persistence Models which represent the data structure of the persistence layer. This part is easy to fix with a mapper such as AutoMapper.
However, there is a problem which can't be solved this way: all the specifications would have to be evaluated in memory. With a big table/database this means a huge impact, because you have to iterate through ALL entities only to filter out the ones that meet your specification.
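For reference, the classic shape looks roughly like this (a minimal sketch; AdultSpecification and the in-memory usage at the end are made up for illustration):
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

public class AndSpecification<T> : ISpecification<T>
{
    private readonly ISpecification<T> _left;
    private readonly ISpecification<T> _right;

    public AndSpecification(ISpecification<T> left, ISpecification<T> right)
    {
        _left = left;
        _right = right;
    }

    public bool IsSatisfiedBy(T candidate)
    {
        return _left.IsSatisfiedBy(candidate) && _right.IsSatisfiedBy(candidate);
    }
}

// Filtering happens in memory, which is exactly the performance problem described above:
// var adults = people.Where(p => new AdultSpecification().IsSatisfiedBy(p)).ToList();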
2) Specifications using Persistence Model
This is similar to 1), but uses Persistence Models in the specification. This allows direct use of the Specification inside our .Where predicate, which will be translated into a query (e.g. T-SQL) so that the filtering is performed by the persistence storage (e.g. SQL Server).
While this gives us good performance, it clearly violates the DDD pattern. Our Persistence model leaks into the Domain layer, making the Domain Layer depend on the Persistence Layer instead of the other way around.
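To make the trade-off concrete, such a specification might expose an expression over the persistence model (a sketch only; PersonDataModel and the DbContext usage are illustrative):
using System;
using System.Linq.Expressions;

// The specification is written against the persistence model, so the ORM can
// translate it to SQL - but it now depends on the persistence layer.
public class AdultPersonSpecification
{
    public Expression<Func<PersonDataModel, bool>> ToExpression()
    {
        return p => p.Age >= 18;
    }
}

// Inside the repository:
// var adults = dbContext.Persons.Where(new AdultPersonSpecification().ToExpression()).ToList();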
3) Like 2), but make Specifications Part of the Persistence Layer
This doesn't work, because the Domain layer needs to reference the Specifications; it would still depend on the Persistence layer.
We would also have business logic inside the Persistence layer, which violates the DDD pattern as well.
4) Like 3), but abstract the Specifications as interfaces
We would have Specification interfaces in our Domain layer, our concrete implementations of the Specifications in the Persistence Layer. Now our Domain Layer would only interact with the interfaces and not depend on the Persistence layer.
This still has the second problem from 3): we would have business logic in the Persistence layer, which is bad.
5) Translate the Expression Tree from Domain Model into Persistence Model
This certainly solves the problem, although it's a non-trivial task: it would keep the Specifications inside our Domain layer while still benefiting from SQL optimization, because the Specification becomes part of the repository's Where clause and is translated into T-SQL (a sketch of the simple case follows the issue list below).
I tried going this approach and there are several issues (from the implementation side):
We would need to know the configuration of the mapper (if we use one), or keep our own mapping system. This can partly be done (reading the mapper configuration) with e.g. AutoMapper, but further issues exist.
It's manageable where one property of Model A maps to one property of Model B. It becomes more difficult if the types are different (e.g. due to persistence types, for example Enums being saved as strings or as key/value pairs in another table) and we need to do conversions inside the resolver.
It gets pretty complicated if multiple fields get mapped into one destination field. I believe this is not an issue for Domain Model -> Persistence Model mappings.
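For the simple one-property-to-one-property case mentioned above, the translation can be sketched with an ExpressionVisitor that swaps the lambda parameter and re-binds members by name (TDomain/TPersistence are placeholders; real code would have to consult the mapper configuration for anything beyond a 1:1 mapping):
using System;
using System.Linq.Expressions;

// Sketch: rewrites Expression<Func<TDomain, bool>> into Expression<Func<TPersistence, bool>>
// by re-binding members with the same name. Only handles 1:1 property mappings.
public class SpecificationTranslator<TDomain, TPersistence> : ExpressionVisitor
{
    private readonly ParameterExpression _parameter =
        Expression.Parameter(typeof(TPersistence), "p");

    public Expression<Func<TPersistence, bool>> Translate(Expression<Func<TDomain, bool>> source)
    {
        return Expression.Lambda<Func<TPersistence, bool>>(Visit(source.Body), _parameter);
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        return node.Type == typeof(TDomain) ? _parameter : base.VisitParameter(node);
    }

    protected override Expression VisitMember(MemberExpression node)
    {
        if (node.Expression is ParameterExpression && node.Expression.Type == typeof(TDomain))
        {
            // Assumes a property with the same name exists on the persistence type;
            // type conversions and multi-field mappings are exactly the hard part described above.
            return Expression.Property(_parameter, node.Member.Name);
        }
        return base.VisitMember(node);
    }
}

// Usage inside a repository (illustrative):
// var predicate = new SpecificationTranslator<Person, PersonDataModel>().Translate(spec.ToExpression());
// var result = dbContext.Persons.Where(predicate).ToList();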
6) Query Builder like API
The last one is making some kind of query API which is passed into the specification, and from which the Repository/Persistence layer generates an Expression Tree that is passed to the .Where clause; an interface declares all filterable fields.
I did a few attempts in that direction too, but wasn't too happy about the results. Something like:
public interface IQuery<T>
{
IQuery<T> Where(Expression<Func<T, T>> predicate);
}
public interface IQueryFilter<TFilter>
{
TFilter And(TFilter other);
TFilter Or(TFilter other);
TFilter Not(TFilter other);
}
public interface IQueryField<TSource, IQueryFilter>
{
IQueryFilter Equal(TSource other);
IQueryFilter GreaterThan(TSource other);
IQueryFilter Greater(TSource other);
IQueryFilter LesserThan(TSource other);
IQueryFilter Lesser(TSource other);
}
public interface IPersonQueryFilter : IQueryFilter<IPersonQueryFilter>
{
IQueryField<int, IPersonQueryFilter> ID { get; }
IQueryField<string, IPersonQueryFilter> Name { get; }
IQueryField<int, IPersonQueryFilter> Age { get; }
}
and in the specification, we would pass an IQuery<IPersonQueryFilter> query to the specification's constructor and then apply the specification to it when using or combining it.
IQuery<IPersonQueryFilter> query = null;
query.Where(f => f.Name.Equal("Bob"));
I don't like this approach much, as it makes handling complex specifications somewhat hard (like and/or/if chaining), and I don't like the way the And/Or/Not would work, especially creating expression trees from this "API".
I have been looking for weeks all over the Internet and have read dozens of articles on DDD and Specifications, but they always handle only simple cases and don't take performance into consideration, or they violate the DDD pattern.
How do you solve this in a real-world application without doing in-memory filtering or leaking the Persistence Model into the Domain layer?
Are there any frameworks that solve the issues above with one of the two ways (Query Builder like syntax to Expression Trees or an Expression Tree translator)?
I think the Specification pattern is not designed for query criteria. Actually, the whole concept of DDD is not, either. Consider CQRS if there is a plethora of query requirements.
The Specification pattern helps develop the ubiquitous language; I think of it as a kind of DSL. It declares what to do rather than how to do it. For example, in an ordering context, an order is considered overdue if it was placed but not paid within 30 minutes. With the Specification pattern, your team can talk with a short but unambiguous term: OverdueOrderSpecification. Imagine the discussion below:
Case 1
Business people: I want to find out all overdue orders and ...
Developer: I can do that, it is easy to find all satisfying orders with an overdue order specification and...
Case 2
Business people: I want to find out all orders which were placed more than 30 minutes ago and are still unpaid...
Developer: I can do that, it is easy to filter orders from tbl_order where placed_at is less than 30 minutes before sysdate...
Which one do you prefer?
Usually we need a DSL handler to parse the DSL; in this case it may live in the persistence adapter and translate the specification into query criteria. This dependency (infrastructure.persistence => domain) does not violate the architecture principle.
class OrderMonitorApplication {
    public void alarm() {
        // The Specification pattern keeps the "overdue order" ubiquitous language in the domain
        List<Order> overdueOrders = orderRepository.findBy(new OverdueSpecification());
        for (Order order : overdueOrders) {
            // notify admin
        }
    }
}

class HibernateOrderRepository implements OrderRepository {
    public List<Order> findBy(OrderSpecification spec) {
        Criteria criteria = session.createCriteria(Order.class);
        criteria.add(Restrictions.le("whenPlaced", spec.placedBefore())); // returns sysdate - 30
        criteria.add(Restrictions.eq("status", spec.status()));           // returns WAIT_PAYMENT
        return criteria.list();
    }
}
I once implemented Specifications, but...
It was based on LINQ and IQueryable.
It used a single unified Repository (which to me isn't bad; I think it's the main reason to use Specifications).
It used a single model for domain and persistence needs (which I think is bad).
Repository:
public interface IRepository<TEntity> where TEntity : Entity, IAggregateRoot
{
TEntity Get<TKey>(TKey id);
TEntity TryGet<TKey>(TKey id);
void DeleteByKey<TKey>(TKey id);
void Delete(TEntity entity);
void Delete(IEnumerable<TEntity> entities);
IEnumerable<TEntity> List(FilterSpecification<TEntity> specification);
TEntity Single(FilterSpecification<TEntity> specification);
TEntity First(FilterSpecification<TEntity> specification);
TResult Compute<TResult>(ComputationSpecification<TEntity, TResult> specification);
IEnumerable<TEntity> ListAll();
//and some other methods
}
Filter specification:
public abstract class FilterSpecification<TAggregateRoot> where TAggregateRoot : Entity, IAggregateRoot
{
public abstract IQueryable<TAggregateRoot> Filter(IQueryable<TAggregateRoot> aggregateRoots);
public static FilterSpecification<TAggregateRoot> CreateByPredicate(Expression<Func<TAggregateRoot, bool>> predicate)
{
return new PredicateFilterSpecification<TAggregateRoot>(predicate);
}
public static FilterSpecification<TAggregateRoot> operator &(FilterSpecification<TAggregateRoot> op1, FilterSpecification<TAggregateRoot> op2)
{
return new CompositeFilterSpecification<TAggregateRoot>(op1, op2);
}
public static FilterSpecification<TAggregateRoot> CreateDummy()
{
return new DummyFilterSpecification<TAggregateRoot>();
}
}
public class CompositeFilterSpecification<TAggregateRoot> : FilterSpecification<TAggregateRoot> where TAggregateRoot : Entity, IAggregateRoot
{
private readonly FilterSpecification<TAggregateRoot> _firstOperand;
private readonly FilterSpecification<TAggregateRoot> _secondOperand;
public CompositeFilterSpecification(FilterSpecification<TAggregateRoot> firstOperand, FilterSpecification<TAggregateRoot> secondOperand)
{
_firstOperand = firstOperand;
_secondOperand = secondOperand;
}
public override IQueryable<TAggregateRoot> Filter(IQueryable<TAggregateRoot> aggregateRoots)
{
var operand1Results = _firstOperand.Filter(aggregateRoots);
return _secondOperand.Filter(operand1Results);
}
}
public class PredicateFilterSpecification<TAggregateRoot> : FilterSpecification<TAggregateRoot> where TAggregateRoot : Entity, IAggregateRoot
{
private readonly Expression<Func<TAggregateRoot, bool>> _predicate;
public PredicateFilterSpecification(Expression<Func<TAggregateRoot, bool>> predicate)
{
_predicate = predicate;
}
public override IQueryable<TAggregateRoot> Filter(IQueryable<TAggregateRoot> aggregateRoots)
{
return aggregateRoots.Where(_predicate);
}
}
Another kind of specification:
public abstract class ComputationSpecification<TAggregateRoot, TResult> where TAggregateRoot : Entity, IAggregateRoot
{
public abstract TResult Compute(IQueryable<TAggregateRoot> aggregateRoots);
public static CompositeComputationSpecification<TAggregateRoot, TResult> operator &(FilterSpecification<TAggregateRoot> op1, ComputationSpecification<TAggregateRoot, TResult> op2)
{
return new CompositeComputationSpecification<TAggregateRoot, TResult>(op1, op2);
}
}
and usages:
OrderRepository.Compute(new MaxInvoiceNumberComputationSpecification()) + 1
PlaceRepository.Single(FilterSpecification<Place>.CreateByPredicate(p => p.Name == placeName));
UnitRepository.Compute(new UnitsAreAvailableForPickingFilterSpecification() & new CheckStockContainsEnoughUnitsOfGivenProductComputationSpecification(count, product));
Custom implementations may look like
public class CheckUnitsOfGivenProductExistOnPlaceComputationSpecification : ComputationSpecification<Unit, bool>
{
private readonly Product _product;
private readonly Place _place;
public CheckUnitsOfGivenProductExistOnPlaceComputationSpecification(
Place place,
Product product)
{
_place = place;
_product = product;
}
public override bool Compute(IQueryable<Unit> aggregateRoots)
{
return aggregateRoots.Any(unit => unit.Product == _product && unit.Place == _place);
}
}
Finally, I have to say that a simple Specification implementation fits badly with DDD. You have done great research in this area and it's unlikely that someone will propose something new :). Also, take a look at the http://www.sapiensworks.com/blog/ blog.
I'm late to the party, but here are my 2 cents...
I also struggled with implementing the Specification pattern, for exactly the same reasons you described above. If you abandon the requirement for a separate model (Persistence / Domain), then your problem is greatly simplified. You could add another method to the specification to generate the expression tree for the ORM:
public interface ISpecification<T>
{
    bool IsSpecifiedBy(T item);
    Expression<Func<T, bool>> GetPredicate();
}
There is a post by Vladimir Khorikov describing how to do that in detail.
However, I really don't like having a single model. Like you, I find that the Persistence model should be kept in the infrastructure layer so that ORM limitations don't contaminate your domain.
Eventually I came up with a solution that uses a visitor to translate the domain-model specification into an expression tree over the persistence model.
I recently wrote a series of posts where I explain
How to create a Generic specification in C#
What is a Visitor Design Pattern and how to make it generic.
And my take on how to implement specification pattern in entity framework
The end result becomes very simple to use, actually. You'll need to make the specification visitable...
public interface IProductSpecification
{
bool IsSpecifiedBy(Product item);
TResult Accept<TResult>(IProductSpecificationVisitor<TResult> visitor);
}
Create a SpecificationVisitor to translate the specification to an expression:
public class ProductEFExpressionVisitor : IProductSpecificationVisitor<Expression<Func<EFProduct, bool>>>
{
public Expression<Func<EFProduct, bool>> Visit (ProductMatchesCategory spec)
{
var categoryName = spec.Category.CategoryName;
return ef => ef.Category == categoryName;
}
//other specification-specific visit methods
}
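For completeness, a repository could then apply the visitor along these lines (names and the mapping step are illustrative, not taken from the linked posts):
// Sketch: the EF-specific repository turns the domain specification into a
// predicate over the EF model and lets the provider translate it to SQL.
public class ProductRepository
{
    private readonly DbContext _context;

    public ProductRepository(DbContext context)
    {
        _context = context;
    }

    public List<Product> Find(IProductSpecification specification)
    {
        Expression<Func<EFProduct, bool>> predicate =
            specification.Accept(new ProductEFExpressionVisitor());

        return _context.Set<EFProduct>()
                       .Where(predicate)     // executed in the database
                       .AsEnumerable()
                       .Select(MapToDomain)  // map back to the domain model in memory
                       .ToList();
    }

    private static Product MapToDomain(EFProduct ef)
    {
        // mapping details omitted
        return new Product();
    }
}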
There is just some tweaking that needs to be done if you want to create a generic specification. It's all detailed in the posts referenced above.
I have been looking for weeks all over the Internet, read dozens of articles on DDD and Specification, but they always only handle simple cases and don't take the performance into consideration or they violate DDD pattern.
Someone will correct me if I'm wrong, but it seems to me that the concept of a "Persistence Model" didn't appear until very recently in the DDD space (by the way, where did you read about it?). I'm not sure it's described in the original blue book.
I personally don't see many advantages to it. My view is that you have a persisted (usually) relational model in your database and an in-memory domain model in your application. The gap between the two is bridged by an action, not a model. This action can be performed by an ORM. I have yet to be sold on the fact that a "Persistence object model" really makes sense semantically, let alone is mandatory to respect DDD principles (*).
Now there's the CQRS approach where you have a separate Read Model, but this is a totally different animal and I wouldn't see Specifications acting on Read Model objects instead of Entities as a DDD violation in this case. Specification is after all a very general pattern that nothing in DDD fundamentally restricts to Entities.
(*) Edit: AutoMapper creator Jimmy Bogard seems to find it overcomplicated as well - see How do I use automapper to map many-to-many relationships?
Related
I was reading about the Strategy pattern and trying to implement it, but I've got stuck on selecting the strategy implementation, which I feel violates the open-closed principle.
In the Strategy pattern we code to an interface, and based on client interaction we pass in the strategy implementation.
Now if we have a bunch of strategies, we need to decide with conditionals which strategy the client chooses, something like:
IStrategy str;
if (strategy1) {
    str = new Strategy1();
} else if (strategy2) {
    str = new Strategy2();
} // and so on...
str.run();
Now as per the open-closed principle, the above is open to extension but it is not closed to modification.
If I need to add another strategy (extension) in the future, I do need to alter this code.
Is there a way this could be avoided, or is this how we need to implement the Strategy pattern?
1) You must separate selecting/creating a concrete strategy from its use, i.e. use a selectStrategy function, pass it as a (constructor) parameter, etc.
2) There is no way to fully avoid conditional creation, but you can hide it (e.g. using some dictionary mapping state => strategy) and/or shift it into another level of the application. The last approach is very powerful and flexible, but depends on the task. In some cases you may put selecting/creating on the same level that uses it. In other cases you may even end up delegating selecting/creating to the highest/lowest level.
2.1) You can use the Registry pattern and more or less avoid modification of the "core" object when adding new strategies (a minimal sketch follows below).
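A minimal sketch of 2.1 in C# (the string key and the Strategy1/Strategy2 names are just placeholders): the selection code only consults the registry, so adding a strategy means registering it at the composition root rather than editing an if/else chain.
using System;
using System.Collections.Generic;

public interface IStrategy
{
    void Run();
}

// Sketch: a simple registry mapping a key to a strategy factory.
public static class StrategyRegistry
{
    private static readonly Dictionary<string, Func<IStrategy>> Factories =
        new Dictionary<string, Func<IStrategy>>();

    public static void Register(string key, Func<IStrategy> factory)
    {
        Factories[key] = factory;
    }

    public static IStrategy Resolve(string key)
    {
        return Factories[key]();
    }
}

// Composition root - the only place touched when a new strategy is added:
// StrategyRegistry.Register("strategy1", () => new Strategy1());
// StrategyRegistry.Register("strategy2", () => new Strategy2());
//
// Client code stays closed for modification:
// StrategyRegistry.Resolve(chosenKey).Run();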
This is indeed not closed to modification, but that is due to the way you initialize. You are using a value (an enum?) to determine which Strategy subclass should be used. As @bpjoshi points out in their comment, this is more of a Factory pattern.
Wikipedia discusses how a Strategy pattern can support the Open/Closed Principle, instead of hampering it.
In that example, they use a Car class with a Brake Strategy. Some cars brake with ABS, some don't. Different Car subclasses and instances can be given different Strategies for braking.
To get your code closed for modification, you need to select the Strategies differently. You want to select the Strategy in the place where new behavior or subclass is defined. You'd have to refactor your code so that the specific Strategy subclass is applied at the point where the code is extended.
I think there is a misunderstanding about Closed for Modification.
In 1988, Meyer said:
Software that works should when possible not be changed when your application is extended with new functionality.
and Robert C. Martin said:
This definition is obviously dated.
Think about that very carefully. If the behaviors of all the modules in your system could be extended, without modifying them, then you could add new features to that system without modifying any old code. The features would be added solely by writing new code.
https://8thlight.com/blog/uncle-bob/2014/05/12/TheOpenClosedPrinciple.html
Adding new code without modifying old code does not conflict with the Open-Closed Principle.
I think the decision you are referring to should be the responsibility of a factory class. The following is some example code:
public interface ISalary
{
decimal Calculate();
}
public class ManagerSalary : ISalary
{
public decimal Calculate()
{
return 0;
}
}
public class AdminSalary : ISalary
{
public decimal Calculate()
{
return 0;
}
}
public class Employee
{
private ISalary salary;
public Employee(ISalary salary)
{
this.salary = salary;
}
public string Name { get; set; }
public decimal CalculateSalary()
{
return this.salary.Calculate();
}
}
The Employee class uses the Strategy pattern and follows the Open/Closed principle, i.e. it is open to new strategy types (ISalary implementations) through injection via the constructor, but closed to modification.
The piece that is missing is the code that creates the Employee objects, something like:
public enum EmployeeType
{
Manager,
Admin
}
public class EmployeeFactory
{
public Employee CreateEmployee(EmployeeType type)
{
    if (type == EmployeeType.Manager)
        return new Employee(new ManagerSalary());
    else if (type == EmployeeType.Admin)
        return new Employee(new AdminSalary());
    // etc. (remaining employee types go here)

    throw new ArgumentOutOfRangeException(nameof(type));
}
}
This is a very simple factory pattern. There are better ways to do this but this is the simplest way to explain the concept.
I have a set of entities that I would like to persist via the repository pattern.
Vanilla SQL is pretty straightforward: write some methods with queries that take/return the entities.
Azure Table Storage is also pretty straightforward, except that most of the implementations I have seen want the entities to be descended from some common Azure base class (TableServiceEntity etc.).
EF works as well, but also wants to own a bit more of the entities.
Is there a good way to abstract away both the SQL and Azure table stuff so that the entities can be persisted either way?
Bidirectional support is not really needed; we are just going to have two different deployment types that need to be supported.
I would like the models to be as agnostic as possible of the repository they are being persisted in, with as few dependencies (none?!) as possible.
This is doable. I've helped architect this for a client on a decently large scale.
1) Do not inherit from TableServiceEntity; instead, apply the following attribute to your entities:
[DataServiceKey(new string[] { "PartitionKey", "RowKey" }), Serializable]
Also, implement some sort of interface on your entities that provides PartitionKey, RowKey, and Timestamp:
public interface ITableEntity
{
string PartitionKey { get; set; }
string RowKey { get; set; }
DateTime Timestamp { get; set; }
}
At least this approach will allow you to have your own inheritance strategy for your own entities and not be restricted by the lack of multiple inheritance. Try to have PartitionKey and RowKey simply provide a pass-through to the real key properties instead of duplicating the keys:
public string PartitionKey
{
    get
    {
        return this.Id;
    }
    set
    {
        this.Id = value;
    }
}
2) Do realize that you will have two types of repositories in your system: relational-specific and ATS-specific.
3) You can generate your entities via EDMX and use partial classes to inject them with the ITableEntity interface and the DataServiceKey attribute (see the sketch after this list).
4) At some point you will need your ATS-specific repositories to do some transformations of your entities for persistence's sake, because the way you'll be saving data into ATS is not the way you'll want it modeled in your domain (this especially applies to hierarchical or relational data).
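As a rough illustration of point 3 (the Customer entity and its Id property are hypothetical; the generated EDMX half of the class is not shown):
// Sketch: the ATS plumbing lives in a partial class, so the generated code stays untouched.
[DataServiceKey(new string[] { "PartitionKey", "RowKey" }), Serializable]
public partial class Customer : ITableEntity
{
    public string PartitionKey
    {
        get { return this.Id; }   // pass-through to the real key, as suggested above
        set { this.Id = value; }
    }

    public string RowKey { get; set; }

    public DateTime Timestamp { get; set; }
}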
HTH
In my domain I have something called a Project which basically holds a lot of simple configuration properties that describe what should happen when the project gets executed. When the Project gets executed it produces a huge amount of LogEntries. In my application I need to analyse these log entries for a given Project, so I need to be able to successively load portions (time frames) of log entries from the database (Oracle). How would you model this relationship as DB tables and as objects?
I could have a Project table and a ProjectLog table with a foreign key to the primary key of Project, and do the "same" thing at object level: have a class Project with a property
IEnumerable<LogEntry> LogEntries { get; }
and have NHibernate do all the mapping. But how would I design my ProjectRepository in this case? I could have a method
void FillLog(Project projectToFill, DateTime start, DateTime end);
How can I tell NHibernate that it should not load the LogEntries until someone calls this method, and how would I make NHibernate load a specific time frame within that method?
I am pretty new to ORMs; maybe that design is not optimal for NHibernate or in general? Maybe I should design it differently?
Instead of having the Project entity as an aggregate root, why not move the reference around and let LogEntry have a Project property and also act as an aggregate root:
public class LogEntry
{
    public virtual Project Project { get; set; }
    // ...other properties
}

public class Project
{
    // remove the LogEntries property from Project
    // public virtual IList<LogEntry> LogEntries { get; set; }
}
Now, since both of those entities are aggregate roots, you would have two different repositories: ProjectRepository and LogEntryRepository. LogEntryRepository could have a method GetByProjectAndTime:
IEnumerable<LogEntry> GetByProjectAndTime(Project project, DateTime start, DateTime end);
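An implementation sketch, assuming NHibernate's LINQ provider (the Timestamp property name and the injected ISession are assumptions):
using System;
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

// Sketch: query LogEntry directly so only the requested time frame is loaded,
// instead of hydrating a huge Project.LogEntries collection.
public class LogEntryRepository
{
    private readonly ISession _session;

    public LogEntryRepository(ISession session)
    {
        _session = session;
    }

    public IEnumerable<LogEntry> GetByProjectAndTime(Project project, DateTime start, DateTime end)
    {
        return _session.Query<LogEntry>()
                       .Where(e => e.Project == project
                                && e.Timestamp >= start
                                && e.Timestamp <= end)
                       .ToList();
    }
}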
The 'correct' way of loading partial / filtered / criteria-based lists under NHibernate is to use queries. There is lazy="extra" but it doesn't do what you want.
As you've already noted, that breaks the DDD model of Root Aggregate -> Children. I struggled with just this problem for an absolute age, because first of all I hated having what amounted to persistence concerns polluting my domain model, and I could never get the API surface to look 'right'. Filter methods on the owning entity class work but are far from pretty.
In the end I settled for extending my entity base class (all my entities inherit from it, which I know is slightly unfashionable these days but it does at least let me do this sort of thing consistently) with a protected method called Query<T>() that takes a LINQ expression defining the relationship and, under the hood in the repository, calls LINQ-to-NH and returns an IQueryable<T> that you can then query into as you require. I can then facade that call beneath a regular property.
The base class does this:
protected virtual IQueryable<TCollection> Query<TCollection>(Expression<Func<TCollection, bool>> selector)
where TCollection : class, IPersistent
{
return Repository.For<TCollection>().Where(selector);
}
(I should note here that my Repository implementation implements IQueryable<T> directly and then delegates the work down to the NH Session.Query<T>())
And the facading works like this:
public virtual IQueryable<Form> Forms
{
get
{
return Query<Form>(x => x.Account == this);
}
}
This defines the list relationship between Account and Form as the inverse of the actual mapped relationship (Form -> Account).
For 'infinite' collections - where there is a potentially unbounded number of objects in the set - this works OK, but it means you can't map the relationship directly in NHibernate and therefore can't use the property directly in NH queries, only indirectly.
What we really need is a replacement for NHibernate's generic bag, list and set implementations that knows how to use the LINQ provider to query into lists directly. One has been proposed as a patch (see https://nhibernate.jira.com/browse/NH-2319). As you can see the patch was not finished or accepted and from what I can see the proposer didn't re-package this as an extension - Diego Mijelshon is a user here on SO so perhaps he'll chime in... I have tested out his proposed code as a POC and it does work as advertised, but obviously it's not tested or guaranteed or necessarily complete, it might have side-effects, and without permission to use or publish it you couldn't use it anyway.
Until and unless the NH team get around to writing / accepting a patch that makes this happen, we'll have to keep resorting to workarounds. NH and DDD just have conflicting views of the world, here.
This is quite a common problem I run into. Let's hear your solutions. I'm going to use an Employee-managing application as an example:-
We've got some entity classes, some of which implement a particular interface.
public interface IEmployee { ... }
public interface IRecievesBonus { int Amount { get; } }
public class Manager : IEmployee, IRecievesBonus { ... }
public class Grunt : IEmployee /* This company sucks! */ { ... }
We've got a collection of Employees that we can iterate over. We need to grab all the objects that implement IRecievesBonus and pay the bonus.
The naive implementation goes something along the lines of:-
foreach(Employee employee in employees)
{
IRecievesBonus bonusReciever = employee as IRecievesBonus;
if(bonusReciever != null)
{
PayBonus(bonusReciever);
}
}
or alternatively, using LINQ:-
foreach(IRecievesBonus bonusReciever in employees.OfType<IRecievesBonus>())
{
PayBonus(bonusReciever);
}
We cannot modify the IEmployee interface to include details of the child type as we don't want to pollute the super-type with details that only the sub-type cares about.
We do not have an existing collection of only the subtype.
We cannot use the Visitor pattern because the element types are not stable. Also, we might have a type which implements both IRecievesBonus and IDrinksTea. Its Accept method would contain an ambiguous call to visitor.Visit(this).
Often we're forced down this route because we can't modify the super-type, nor the collection e.g. in .NET we may need to find all the Buttons on this Form via the child Controls collection. We may need to do something to the child types that depends on some aspect of the child type (e.g. the bonus amount in the example above).
Strikes me as odd that there isn't an "accepted" way to do this, given how often it comes up.
1) Is the type conversion worth avoiding?
2) Are there any alternatives I haven't thought of?
EDIT
Péter Török suggests composing Employee and pushing the type conversion further down the object tree:-
public interface IEmployee
{
    IList<IEmployeeProperty> Properties { get; }
}

public interface IEmployeeProperty { ... }

public class DrinksTeaProperty : IEmployeeProperty
{
    public int Sugars { get; set; }
    public bool Milk { get; set; }
}

foreach (IEmployee employee in employees)
{
    foreach (IEmployeeProperty property in employee.Properties)
    {
        // Handle duplicate properties if you need to.
        // Since this is just an example, we'll just
        // let the greedy ones have two cups of tea.
        DrinksTeaProperty tea = property as DrinksTeaProperty;
        if (tea != null)
        {
            MakeTea(tea.Sugars, tea.Milk);
        }
    }
}
In this example it's definitely worth pushing these traits out of the Employee type - particularly because some managers might drink tea and some might not - but we still have the same underlying problem of the type conversion.
Is it the case that it's "ok" so long as we do it at the right level? Or are we just moving the problem around?
The holy grail would be a variant on the Visitor pattern where:-
You can add element members without modifying all the visitors
Visitors should only visit types they're interested in visiting
The visitor can visit the member based on an interface type
Elements might implement multiple interfaces which are visited by different visitors
Doesn't involve casting or reflection
but I appreciate that's probably unrealistic.
I would definitely try to resolve this with composition instead of inheritance, by associating the needed properties/traits to Employee, instead of subclassing it.
I can give an example partly in Java, I think it's close enough to your language (C#) to be useful.
public enum EmployeeProperty {
RECEIVES_BONUS,
DRINKS_TEA,
...
}
public class Employee {
Set<EmployeeProperty> properties;
// methods to add/remove/query properties
...
}
And the modified loop would look like this:
foreach(Employee employee in employees) {
if (employee.getProperties().contains(EmployeeProperty.RECEIVES_BONUS)) {
PayBonus(employee);
}
}
This solution is much more flexible than subclassing:
it can trivially handle any combination of employee properties, while with subclassing you would experience a combinatorial explosion of subclasses as the number of properties grows,
it trivially allows you to change Employee properties runtime, while with subclassing this would require changing the concrete class of your object!
In Java, enums can have properties or (even virtual) methods themselves - I don't know whether this is possible in C#, but in the worst case, if you need more complex properties, you can implement them with a class hierarchy. (Even in this case, you are not back to square one, since you have an extra level of indirection which gives you the flexibility described above.)
Update
You are right that in the most general case (discussed in the last sentence above) the type conversion problem is not resolved, just pushed one level down on the object graph.
In general, I don't know a really satisfying solution to this problem. The typical way to handle it is using polymorphism: pull up the common interface and manipulate the objects via that, thus eliminating the need for downcasts. However, in cases when the objects in question do not have a common interface, what to do? It may help to realize that in these cases the design does not reflect reality well: practically, we created a marker interface solely to enable us to put a bunch of distinct objects into a common collection, but there is no semantical relationship between the objects.
So I believe in these cases the awkwardness of downcasts is a signal that there may be a deeper problem with our design.
You could implement a custom iterator that only iterates over the IRecievesBonus types.
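In C# that could be as small as a yield-returning extension method, which is essentially what OfType<T>() already does for you (a sketch, reusing the interfaces from the question):
// Sketch: a hand-rolled iterator that yields only the bonus receivers.
public static class EmployeeEnumerableExtensions
{
    public static IEnumerable<IRecievesBonus> BonusReceivers(this IEnumerable<IEmployee> employees)
    {
        foreach (IEmployee employee in employees)
        {
            IRecievesBonus bonusReciever = employee as IRecievesBonus;
            if (bonusReciever != null)
            {
                yield return bonusReciever;
            }
        }
    }
}

// Usage:
// foreach (IRecievesBonus bonusReciever in employees.BonusReceivers())
// {
//     PayBonus(bonusReciever);
// }
Note that the cast still happens inside the iterator; it is merely hidden from the calling code.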
Is it a violation of Persistence Ignorance to inject a repository interface into an Entity object like this? By not using an interface I clearly see a problem, but when using an interface is there really a problem? Is the code below a good or bad pattern, and why?
public class Contact
{
    private readonly IAddressRepository _addressRepository;

    public Contact(IAddressRepository addressRepository)
    {
        _addressRepository = addressRepository;
    }

    private IEnumerable<Address> _addressBook;

    public IEnumerable<Address> AddressBook
    {
        get
        {
            if (_addressBook == null)
            {
                _addressBook = _addressRepository.GetAddresses(this.Id);
            }
            return _addressBook;
        }
    }
}
It's not exactly a good idea, but it may be OK for some limited scenarios. I'm a little confused by your model, as I have a hard time believing that Address is your aggregate root, and therefore you wouldn't ordinarily have a full-blown Address repository. Based on your example, you are probably actually using a table data gateway or DAO rather than a repository.
I prefer to use a data mapper to solve this problem (an ORM or similar solution). Basically, I would take advantage of my ORM to treat address-book as a lazy loaded property of the aggregate root, "Contact". This has the advantage that your changes can be saved as long as the entity is bound to a session.
If I weren't using an ORM, I'd still prefer that the concrete Contact repository implementation set the property of the AddressBook backing store (list, or whatever). I might have the repository set that enumeration to a proxy object that does know about the other data store, and loads it on demand.
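To illustrate the ORM option mentioned above: with an ORM such as Entity Framework 6, marking the collection as a virtual navigation property is enough for proxy-based lazy loading (simplified sketch; the Contact/Address shapes are borrowed from the question):
// Sketch: the ORM proxy populates AddressBook on first access,
// so Contact knows nothing about repositories or data access.
public class Contact
{
    public int Id { get; set; }
    public virtual ICollection<Address> AddressBook { get; set; }
}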
You can inject the load function from outside. The new Lazy<T> type in .NET 4.0 comes in handy for that:
public Contact(Lazy<IEnumerable<Address>> addressBook)
{
    _addressBook = addressBook;
}

private Lazy<IEnumerable<Address>> _addressBook;

public IEnumerable<Address> AddressBook
{
    get { return this._addressBook.Value; }
}
Also note that IEnumerable<T>s might be intrinsically lazy anyhow when you get them from a query provider. But for any other type you can use the Lazy<T>.
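For illustration, the wiring would typically happen where the Contact is materialized, e.g. in the repository (addressRepository and contactId are assumed to be in scope):
// Sketch: the loading delegate is injected; nothing runs until AddressBook is first read.
var contact = new Contact(
    new Lazy<IEnumerable<Address>>(() => addressRepository.GetAddresses(contactId)));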
Normally when you follow DDD you always operate with the whole aggregate. The repository always returns you a fully loaded aggregate root.
It doesn't make much sense (in DDD at least) to write code as in your example. A Contact aggregate will always contain all the addresses (if it needs them for its behavior, which I doubt to be honest).
So typically the ContactRepository is supposed to construct the whole Contact aggregate, where Address is an entity or, most likely, a value object inside this aggregate.
Because Address is an entity/value object that belongs to (and is therefore managed by) the Contact aggregate, it will not have its own repository, as you are not supposed to manage entities that belong to an aggregate outside of that aggregate.
To summarize: always load the whole Contact and call its behavior methods to do something with its state.
Since it's been 2 years since I asked the question and the question was somewhat misunderstood, I will try to answer it myself.
Rephrased question:
"Should Business entity classes be fully persistance ignorant?"
I think entity classes should be fully persistance ignorant, because you will instanciate them many places in your code base so it will quickly become messy to always have to inject the Repository class into the entity constructor, neither does it look very clean. This becomes even more evident if you are in need of injecting several repositories. Therefore I always use a separate handler/service class to do the persistance jobs for the entities. These classes are instanciated far less frequently and you usually have more control over where and when this happens. Entity classes are kept as lightweight as possible.
I now always have 1 Repository pr aggregate root and if I have need for some extra business logic when entities are fetched from repositories I usually create 1 ServiceClass for the aggregate root.
By taking a tweaked example of the code in the question as it was a bad example I would do it like this now:
Instead of:
public class Contact
{
private readonly IContactRepository _contactRepository;
public Contact(IContactRepository contactRepository)
{
_contactRepository = contactRepository;
}
public void Save()
{
_contactRepository.Save(this);
}
}
I do it like this:
public class Contact
{
}
public class ContactService
{
private readonly IContactRepository _contactRepository;
public ContactService(IContactRepository contactRepository)
{
_contactRepository = contactRepository;
}
public void Save(Contact contact)
{
_contactRepository.Save(contact);
}
}