Configure behaviour of DDD aggregate with strategies - entity

I have an aggregate:
public class MyAggregate {
    private Country country;
    private List<SomeEntity> someEntities;

    public SomeEntity getBestMatchingSomeEntity();
}
My problem is that the behaviour of getBestMatchingSomeEntity() should be configurable depending on the country, so I'd like to use strategies (simple Spring beans).
I'm not sure how to "inject" the strategy into my entity. Normally I would use some kind of selector, but I don't want to inject a service into my entity. Or should I select the right strategy while creating the entity and inject the strategy into the entity? Or is a domain service the way to go?
Thank you!

First thing that pops up in my mind:
why not have the method accept a service, like this:
public class MyAggregate {
    private Country country;
    private List<SomeEntity> someEntities;

    public SomeEntity getBestMatchingSomeEntity(MatchingEntityService service);
}
Then you define the service as:
public interface MatchingEntityService {
    SearchEntityStrategy strategyFor(Country country);
}
This way your aggregate can use the service without the injection problem and the related serialization problems.
The MatchingEntityService will contain the logic to select the right strategy based on the Country, and you can keep all the different strategies separated.
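A minimal sketch of such a service (in C#; the country-to-strategy map and the default fallback are my assumptions, not part of the question):

using System.Collections.Generic;

public class CountryMatchingEntityService : MatchingEntityService
{
    // One strategy per country, selected outside the aggregate.
    private readonly IDictionary<Country, SearchEntityStrategy> strategies;
    private readonly SearchEntityStrategy defaultStrategy;

    public CountryMatchingEntityService(
        IDictionary<Country, SearchEntityStrategy> strategies,
        SearchEntityStrategy defaultStrategy)
    {
        this.strategies = strategies;
        this.defaultStrategy = defaultStrategy;
    }

    public SearchEntityStrategy StrategyFor(Country country)
    {
        SearchEntityStrategy strategy;
        return strategies.TryGetValue(country, out strategy)
            ? strategy
            : defaultStrategy;
    }
}

Each strategy stays a plain object (or Spring bean) registered in the map at configuration time, so the aggregate never sees the selection logic.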

In order to keep the Aggregate as clean as possible, it should only contain the behavior it uses itself to check its invariants. Complex query behavior should be contained in a domain service.
In your case, you should have a domain service that has the strategy injected or passed as a method parameter, depending on the desired configuration.
This separation is clearer if you use CQRS, where a read model would have this responsibility.

Strategy Pattern and Open-Closed Principle Conflict

I was reading through the strategy pattern and was trying to implement it, but I got stuck deciding on the strategy implementation, which I feel violates the open-closed principle.
In the strategy pattern we code to an interface, and based on client interaction we pass in the strategy implementation.
Now if we have a bunch of strategies, we need conditions to decide which strategy the client chooses, something like:
IStrategy str;
if (strategy1) {
    str = new Strategy1();
} else if (strategy2) {
    str = new Strategy2();
} // and so on...
str.run();
Now, as per the open-closed principle, the above is open to extension but it is not closed to modification.
If I need to add another strategy (an extension) in the future, I need to alter this code.
Is there a way this could be avoided, or is this how the strategy pattern needs to be implemented?
1) You must separate selecting/creating a concrete strategy from its use, i.e. use a selectStrategy function, pass it as a (constructor) parameter, etc.
2) There is no way to fully avoid conditional creation, but you can hide it (e.g. using some dictionary mapping state => strategy) and/or shift it into another level of the application. The latter approach is very powerful and flexible, but depends on the task. In some cases you may put selecting/creating on the same level that uses it. In other cases you may even end up delegating selecting/creating to the highest/lowest level.
2.1) You can use the Registry pattern and mostly avoid modifying the "core" object when adding new strategies.
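A rough sketch of 2) and 2.1) combined (the types and string keys here are illustrative, not from the question):

using System;
using System.Collections.Generic;

public interface IStrategy
{
    void Run();
}

// The conditional is replaced by a lookup table. New strategies are
// added by registering them at startup, so the selection code itself
// never changes.
public class StrategyRegistry
{
    private readonly Dictionary<string, Func<IStrategy>> factories =
        new Dictionary<string, Func<IStrategy>>();

    public void Register(string key, Func<IStrategy> factory)
    {
        factories[key] = factory;
    }

    public IStrategy Create(string key)
    {
        Func<IStrategy> factory;
        if (!factories.TryGetValue(key, out factory))
            throw new ArgumentException("No strategy registered for: " + key);
        return factory();
    }
}

Adding a Strategy3 then means writing the new class plus one extra Register call in the composition root; the selection code is never edited.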
This is indeed not closed to modification, but that is due to the way you initialize. You are using a value (enum?) to determine which Strategy subclass should be used. As #bpjoshi points out in their comment, this is more of a Factory pattern.
Wikipedia discusses how a Strategy pattern can support the Open/Closed Principle, instead of hampering it.
In that example, they use a Car class with a Brake Strategy. Some cars brake with ABS, some don't. Different Car subclasses and instances can be given different Strategies for braking.
To get your code closed for modification, you need to select the Strategies differently. You want to select the Strategy in the place where new behavior or subclass is defined. You'd have to refactor your code so that the specific Strategy subclass is applied at the point where the code is extended.
I think there is a misunderstanding about Closed for Modification.
In 1988, Bertrand Meyer said:
Software that works should, when possible, not be changed when your application is extended with new functionality.
and Robert C. Martin said:
This definition is obviously dated.
Think about that very carefully. If the behaviors of all the modules in your system could be extended, without modifying them, then you could add new features to that system without modifying any old code. The features would be added solely by writing new code.
https://8thlight.com/blog/uncle-bob/2014/05/12/TheOpenClosedPrinciple.html
Adding new code without modifying old code does not conflict with the Open-Closed Principle.
I think the decision you are referring to should be the responsibility of a factory class. The following is some example code:
public interface ISalary
{
    decimal Calculate();
}

public class ManagerSalary : ISalary
{
    public decimal Calculate()
    {
        return 0;
    }
}

public class AdminSalary : ISalary
{
    public decimal Calculate()
    {
        return 0;
    }
}

public class Employee
{
    private ISalary salary;

    public Employee(ISalary salary)
    {
        this.salary = salary;
    }

    public string Name { get; set; }

    public decimal CalculateSalary()
    {
        return this.salary.Calculate();
    }
}
The Employee class uses the Strategy pattern and follows the Open/Closed principle, i.e. it is open to new strategy types (ISalary implementations) through injection via the constructor, but closed to modification.
The piece that is missing is the code that creates the Employee objects, something like:
public enum EmployeeType
{
    Manager,
    Admin
}

public class EmployeeFactory
{
    public Employee CreateEmployee(EmployeeType type)
    {
        if (type == EmployeeType.Manager)
            return new Employee(new ManagerSalary());
        else if (type == EmployeeType.Admin)
            return new Employee(new AdminSalary());
        // etc.
    }
}
This is a very simple factory pattern. There are better ways to do this (one alternative is sketched below), but this is the simplest way to explain the concept.
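One such "better way", sketched with the same types as above (the lookup-table shape is my suggestion, not from the answer): replace the if/else chain with a dictionary, so the factory stays closed against new employee types.

using System;
using System.Collections.Generic;

public class EmployeeFactory
{
    // The branching becomes a lookup table; adding a new EmployeeType
    // means adding one entry here, not another else-if.
    private static readonly Dictionary<EmployeeType, Func<ISalary>> salaryFactories =
        new Dictionary<EmployeeType, Func<ISalary>>
        {
            { EmployeeType.Manager, () => new ManagerSalary() },
            { EmployeeType.Admin,   () => new AdminSalary() }
        };

    public Employee CreateEmployee(EmployeeType type)
    {
        Func<ISalary> createSalary;
        if (!salaryFactories.TryGetValue(type, out createSalary))
            throw new ArgumentException("Unknown employee type: " + type);
        return new Employee(createSalary());
    }
}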

domain design with nhibernate

In my domain I have something called Project, which basically holds a lot of simple configuration properties that describe what should happen when the project gets executed. When the Project gets executed, it produces a huge number of LogEntries. In my application I need to analyse these log entries for a given Project, so I need to be able to successively load portions (time frames) of the log entries from the database (Oracle). How would you model this relationship as DB tables and as objects?
I could have a Project table and a ProjectLog table with a foreign key to the primary key of Project, and do the "same" thing at the object level: have a class Project with a property
IEnumerable<LogEntry> LogEntries { get; }
and have NHibernate do all the mapping. But how would I design my ProjectRepository in this case? I could have a method
void FillLog(Project projectToFill, DateTime start, DateTime end);
How can I tell NHibernate that it should not load the LogEntries until someone calls this method, and how do I make NHibernate load a specific time frame within that method?
I am pretty new to ORM; maybe this design is not optimal for NHibernate, or in general? Maybe I should design it differently?
Instead of having the Project entity as an aggregate root, why not move the reference around: let LogEntry have a Project property and also act as an aggregate root.
public class LogEntry
{
    public virtual Project Project { get; set; }
    // ...other properties
}

public class Project
{
    // remove the LogEntries property from Project
    // public virtual IList<LogEntry> LogEntries { get; set; }
}
Now, since both of those entities are aggregate roots, you would have two different repositories: ProjectRepository and LogEntryRepository. LogEntryRepository could have a method GetByProjectAndTime:
IEnumerable<LogEntry> GetByProjectAndTime(Project project, DateTime start, DateTime end);
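A sketch of how that method could be implemented with LINQ-to-NHibernate (the Timestamp property on LogEntry and the injected ISession are assumptions):

using System;
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class LogEntryRepository
{
    private readonly ISession session;

    public LogEntryRepository(ISession session)
    {
        this.session = session;
    }

    public IEnumerable<LogEntry> GetByProjectAndTime(
        Project project, DateTime start, DateTime end)
    {
        // Only the requested time frame is materialized; the Project
        // never holds the (huge) LogEntries collection in memory.
        return session.Query<LogEntry>()
                      .Where(e => e.Project == project
                               && e.Timestamp >= start
                               && e.Timestamp < end)
                      .ToList();
    }
}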
The 'correct' way of loading partial / filtered / criteria-based lists in NHibernate is to use queries. There is lazy="extra", but it doesn't do what you want.
As you've already noted, that breaks the DDD model of Root Aggregate -> Children. I struggled with just this problem for an absolute age: first, I hated having what amounted to persistence concerns polluting my domain model, and second, I could never get the API surface to look 'right'. Filter methods on the owning entity class work but are far from pretty.
In the end I settled for extending my entity base class (all my entities inherit from it, which I know is slightly unfashionable these days but it does at least let me do this sort of thing consistently) with a protected method called Query<T>() that takes a LINQ expression defining the relationship and, under the hood in the repository, calls LINQ-to-NH and returns an IQueryable<T> that you can then query into as you require. I can then facade that call beneath a regular property.
The base class does this:
protected virtual IQueryable<TCollection> Query<TCollection>(Expression<Func<TCollection, bool>> selector)
    where TCollection : class, IPersistent
{
    return Repository.For<TCollection>().Where(selector);
}
(I should note here that my Repository implementation implements IQueryable<T> directly and then delegates the work down to the NH Session.Query<T>())
And the facading works like this:
public virtual IQueryable<Form> Forms
{
    get
    {
        return Query<Form>(x => x.Account == this);
    }
}
This defines the list relationship between Account and Form as the inverse of the actual mapped relationship (Form -> Account).
For 'infinite' collections - where there is a potentially unbounded number of objects in the set - this works OK, but it means you can't map the relationship directly in NHibernate and therefore can't use the property directly in NH queries, only indirectly.
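Consumers can then compose further criteria onto the property before anything executes against the database; for example, given an Account instance and a cutoff date (CreatedOn is an illustrative property):

// Both the filter and the paging are translated into the SQL query;
// only the matching Form rows are ever loaded.
var recentForms = account.Forms
                         .Where(f => f.CreatedOn >= cutoff)
                         .Take(50)
                         .ToList();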
What we really need is a replacement for NHibernate's generic bag, list and set implementations that knows how to use the LINQ provider to query into lists directly. One has been proposed as a patch (see https://nhibernate.jira.com/browse/NH-2319). As you can see, the patch was not finished or accepted, and from what I can tell the proposer didn't re-package it as an extension; Diego Mijelshon is a user here on SO, so perhaps he'll chime in. I have tested his proposed code as a POC and it does work as advertised, but obviously it's not tested or guaranteed or necessarily complete, it might have side-effects, and without permission to use or publish it you couldn't use it anyway.
Until and unless the NH team get around to writing / accepting a patch that makes this happen, we'll have to keep resorting to workarounds. NH and DDD just have conflicting views of the world, here.

An alternative way to use Azure Table Storage?

For table storage I'd like to use an entity like this:
public class MyEntity
{
    public String Text { get; private set; }
    public Int32 SomeValue { get; private set; }

    public MyEntity(String text, Int32 someValue)
    {
        Text = text;
        SomeValue = someValue;
    }
}
But it's not possible, because ATS needs:
1. a parameterless constructor;
2. all properties public and read/write;
3. inheriting from TableServiceEntity.
The first two are things I don't want to do. Why should I want anybody to be able to change data that should be read-only, or to create objects of this kind in an inconsistent way (what are .ctors for, then?), or, even worse, to alter the PartitionKey or the RowKey? Why are we still constrained by these deserialization requirements?
I don't like developing software that way. How can I use the table storage library in a way that lets me serialize and deserialize the objects myself? I think that as long as the objects inherit from TableServiceEntity it shouldn't be a problem.
So far I've managed to save an object, but I don't know how to retrieve it:
Message m = new Message("message XXXXXXXXXXXXX");
CloudTableClient tableClient = account.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("Messages");
TableServiceContext tcontext = new TableServiceContext(account.TableEndpoint.AbsoluteUri, account.Credentials);
var list = tableClient.ListTables().ToArray();
tcontext.AddObject("Messages", m);
tcontext.SaveChanges();
Is there any way to avoid those deserialization requirements or get the raw object?
Cheers.
If you want to use the Storage Client Library, then yes, there are restrictions on what you can and can't do with the objects you want to store. Point 1 is correct. I'd expand point 2 to say "all properties that you want to store must be public and read/write" (for integer properties you can get away with read-only properties and it won't try to save them), but you don't actually have to inherit from TableServiceEntity.
TableServiceEntity is just a very light class that has the properties PartitionKey, RowKey and Timestamp, and is decorated with the DataServiceKey attribute (take a look with Reflector). All of these things you can do on a class you create yourself that doesn't inherit from TableServiceEntity (note that the casing of these properties is important).
If this still doesn't give you enough control over how you build your classes, you can always ignore the Storage Client Library and use the REST API directly. This will give you the ability to serialize and deserialize the XML any way you like. You will lose all of the nice things that come with using the library, like the ability to create queries in LINQ.
The constraints around that ADO.NET wrapper for the Table Storage are indeed somewhat painful. You can also adopt a Fat Entity approach as implemented in Lokad.Cloud. This will give you much more flexibility concerning the serialization of your entities.
Just don't use inheritance.
If you want to use your own POCOs, create your class as you want it, and create a separate table-entity wrapper/container class that holds the PartitionKey and RowKey and carries your class as a serialized byte array.
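A sketch of such a container, using the same (old) Storage Client Library as the question; how you produce the byte array (DataContractSerializer, JSON, whatever) is up to you:

using Microsoft.WindowsAzure.StorageClient;

// Satisfies the table-storage requirements (parameterless constructor,
// public read/write properties) so MyEntity itself doesn't have to.
public class EntityContainer : TableServiceEntity
{
    public EntityContainer()
    {
    }

    public EntityContainer(string partitionKey, string rowKey, byte[] payload)
        : base(partitionKey, rowKey)
    {
        Payload = payload;
    }

    // The immutable POCO, serialized by your own code and stored as a
    // single binary column (subject to the per-property size limit).
    public byte[] Payload { get; set; }
}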
You can use composition to achieve what you want.
Create your Table Entities as you need to for storage and create your POCOs as wrappers on those providing the API you want the rest of your application code to see.
You can even mix in some interfaces for better code.
How about generating the POCO wrappers at runtime using System.Reflection.Emit? http://blog.kloud.com.au/2012/09/30/a-better-dynamic-tableserviceentity/

Is this a ddd anti-pattern?

Is it a violation of persistence ignorance to inject a repository interface into an entity object like this? Without an interface I clearly see a problem, but when an interface is used, is there really a problem? Is the code below a good or bad pattern, and why?
public class Contact
{
    private readonly IAddressRepository _addressRepository;

    public Contact(IAddressRepository addressRepository)
    {
        _addressRepository = addressRepository;
    }

    private IEnumerable<Address> _addressBook;

    public IEnumerable<Address> AddressBook
    {
        get
        {
            if (_addressBook == null)
            {
                _addressBook = _addressRepository.GetAddresses(this.Id);
            }
            return _addressBook;
        }
    }
}
It's not exactly a good idea, but it may be OK for some limited scenarios. I'm a little confused by your model, as I have a hard time believing that Address is your aggregate root, and therefore it wouldn't be ordinary to have a full-blown address repository. Based on your example, you are probably actually using a table data gateway or DAO rather than a repository.
I prefer to use a data mapper to solve this problem (an ORM or similar solution). Basically, I would take advantage of my ORM to treat address-book as a lazy loaded property of the aggregate root, "Contact". This has the advantage that your changes can be saved as long as the entity is bound to a session.
If I weren't using an ORM, I'd still prefer that the concrete Contact repository implementation set the property of the AddressBook backing store (list, or whatever). I might have the repository set that enumeration to a proxy object that does know about the other data store, and loads it on demand.
You can inject the load function from outside. The new Lazy<T> type in .NET 4.0 comes in handy for that:
public Contact(Lazy<IEnumerable<Address>> addressBook)
{
    _addressBook = addressBook;
}

private Lazy<IEnumerable<Address>> _addressBook;

public IEnumerable<Address> AddressBook
{
    get { return this._addressBook.Value; }
}
Also note that IEnumerable<T>s might be intrinsically lazy anyhow when you get them from a query provider. But for any other type you can use the Lazy<T>.
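The wiring then happens outside the entity, e.g. in a factory or repository that knows about IAddressRepository (a sketch reusing the names from the question; contactId stands in for the contact's identifier):

// Contact never sees the repository; it only sees a lazy collection.
var contact = new Contact(
    new Lazy<IEnumerable<Address>>(
        () => addressRepository.GetAddresses(contactId)));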
Normally when you follow DDD you always operate with the whole aggregate. The repository always returns you a fully loaded aggregate root.
It doesn't make much sense (in DDD at least) to write code as in your example. A Contact aggregate will always contain all the addresses (if it needs them for its behavior, which I doubt, to be honest).
So typically the ContactRepository is supposed to construct the whole Contact aggregate, where Address is an entity or, most likely, a value object inside this aggregate.
Because Address is an entity/value object that belongs to (and is therefore managed by) the Contact aggregate, it will not have its own repository, as you are not supposed to manage entities that belong to an aggregate outside that aggregate.
In summary: always load the whole Contact and call its behavior methods to do something with its state.
Since it's been 2 years since I asked the question, and the question was somewhat misunderstood, I will try to answer it myself.
Rephrased question:
"Should business entity classes be fully persistence ignorant?"
I think entity classes should be fully persistence ignorant, because you will instantiate them in many places in your code base, so it quickly becomes messy to always have to inject the repository class into the entity constructor; neither does it look very clean. This becomes even more evident if you need to inject several repositories. Therefore I always use a separate handler/service class to do the persistence jobs for the entities. These classes are instantiated far less frequently, and you usually have more control over where and when this happens. Entity classes are kept as lightweight as possible.
I now always have one repository per aggregate root, and if I need some extra business logic when entities are fetched from repositories, I usually create one service class for the aggregate root.
Taking a tweaked example of the code in the question (as it was a bad example), I would now do it like this:
Instead of:
public class Contact
{
    private readonly IContactRepository _contactRepository;

    public Contact(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public void Save()
    {
        _contactRepository.Save(this);
    }
}
I do it like this:
public class Contact
{
}

public class ContactService
{
    private readonly IContactRepository _contactRepository;

    public ContactService(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public void Save(Contact contact)
    {
        _contactRepository.Save(contact);
    }
}

WCF Data Contract and Reference Entity Data?

Soliciting feedback/options/comments regarding a "best" pattern to use for reference data in my services.
What do I mean by reference data?
Let's use Northwind as an example. An Order is related to a Customer in the database. When I implement my Orders service, in some cases I'll want to return a "full" Customer from an Order, and in other cases just a reference to the Customer (for example a Key/Value pair).
For example, if I were doing a GetAllOrders(), I wouldn't want to return fully filled-out Orders; I'd want lightweight versions of the Orders with only reference data for each order's Customer. If I did a GetOrder() method, though, I'd probably want to fill in the Customer details, because chances are a consumer of this method might need them. There might be other situations where I'd want the Customer details filled in during certain method calls and left out for others.
Here is what I've come up with:
[DataContract]
public class OrderDTO
{
    [DataMember(IsRequired = true)]
    public CustomerDTO Customer;
    // etc...
}

[DataContract]
public class CustomerDTO
{
    [DataMember(IsRequired = true)]
    public ReferenceInfo ReferenceInfo;

    [DataMember(IsRequired = false)]
    public CustomerInfo CustomerInfo;
}

[DataContract]
public class ReferenceInfo
{
    [DataMember(IsRequired = true)]
    public string Key;

    [DataMember(IsRequired = true)]
    public string Value;
}

[DataContract]
public class CustomerInfo
{
    [DataMember(IsRequired = true)]
    public string CustomerID;

    [DataMember(IsRequired = true)]
    public string Name;
    // etc...
}
The thinking here is that since ReferenceInfo (which is a generic Key/Value pair) is always required in CustomerDTO, I'll always have ReferenceInfo. It gives me enough information to obtain the Customer details later if needed. The downside to having CustomerDTO require ReferenceInfo is that it might be overkill when I am getting the full CustomerDTO (i.e. with CustomerInfo filled in), but at least I am guaranteed the reference info.
Is there some other pattern or framework piece I can use to make this scenario/implementation "cleaner"?
The reason I ask is that although we could simply say in Northwind to ALWAYS return a full CustomerDTO, that might work fine in the simplistic Northwind situation. In my case, I have an object with 25-50 fields that are reference/lookup-type data. Some are more important to load than others in different situations, but I'd like to have as few definitions of these reference types as possible (so that I don't get into "DTO maintenance hell").
Opinions? Feedback? Comments?
Thanks!
We're at the same decision point on our project. As of right now, we've decided to create three levels of DTOs to handle a Thing: SimpleThing, ComplexThing, and FullThing. We don't know how it'll work out for us, though, so this is not yet an answer grounded in reality.
One thing I'm wondering is if we might learn that our services are designed at the "wrong" level. For example, is there ever an instance where we should bust a FullThing apart and only pass a SimpleThing? If we do, does that imply we've inappropriately put some business logic at too high of a level?
Amazon Product Advertising API Web service is a good example of the same problem that you are experiencing.
They use different DTOs to provide callers with more or less detail depending on their circumstances. For example, there is the small response group, the large response group, and, in the middle, the medium response group.
Having different DTOs is a good technique if, as you say, you don't want a chatty interface.
It seems like a complicated solution to me. Why not just have a customer ID field in the OrderDTO class and let the application decide at runtime whether it needs the customer data? Since it has the customer ID, it can pull the data down when it decides to.
I've decided against the approach I was going to take. I think much of my initial concerns were a result of a lack of requirements. I sort of expected this to be the case, but was curious to see how others might have tackled this issue of determining when to load up certain data and when not to.
I am flattening my Data Contract to contain the most used fields of reference data elements. This should work for a majority of consumers. If the supplied data is not enough for a given consumer, they'll have the option to query a separate service to pull back the full details for a particular reference entity (for example a Currency, State, etc). For simple lookups that really are basically Key/Value pairs, we'll be handling them with a generic Key/Value pair Data Contract. I might even use the KnownType attribute for my more specialized Key/Value pairs.
[DataContract]
public class OrderDTO
{
    [DataMember(IsRequired = true)]
    public CustomerDTO Customer;

    // In this case I think consumers will need currency data,
    // so I pass back a full Currency item.
    [DataMember(IsRequired = true)]
    public Currency Currency;

    // In this case I think consumers are not likely to need full
    // StateRegion data, so I pass back a "reference" to it. Users can
    // call a separate service method to get full details if needed.
    [DataMember(IsRequired = true)]
    public KeyValuePair ShipToStateRegion;

    // etc...
}

[DataContract]
[KnownType(typeof(Currency))]
public class KeyValuePair
{
    [DataMember(IsRequired = true)]
    public string Key;

    [DataMember(IsRequired = true)]
    public string Value;

    // Enum consisting of all possible reference types,
    // such as "Currency", "StateRegion", "Country", etc.
    [DataMember(IsRequired = true)]
    public ReferenceType ReferenceType;
}

[DataContract]
public class Currency : KeyValuePair
{
    [DataMember(IsRequired = true)]
    public decimal ExchangeRate;

    [DataMember(IsRequired = true)]
    public DateTime ExchangeRateAsOfDate;
}

[DataContract]
public class CustomerDTO
{
    [DataMember(IsRequired = true)]
    public string CustomerID;

    [DataMember(IsRequired = true)]
    public string Name;
    // etc...
}
Thoughts? Opinions? Comments?
We've faced this problem in object-relational mapping as well. There are situations where we want the full object and others where we want a reference to it.
The difficulty is that by baking the serialization into the classes themselves, the datacontract pattern enforces the idea that there's only one right way to serialize an object. But there are lots of scenarios where you might want to partially serialize a class and/or its child objects.
This usually means that you have to have multiple DTOs for each class. For example, a FullCustomerDTO and a CustomerReferenceDTO. Then you have to create ways to map the different DTOs back to the Customer domain object.
As you can imagine, it's a ton of work, most of it very tedious.
One other possibility is to treat the objects as property bags. Specify the properties you want when querying, and get back exactly the properties you need.
Changing the properties to show in the "short" version then won't require multiple round trips, you can get all of the properties for a set at one time (avoiding chatty interfaces), and you don't have to modify your data or operation contracts if you decide you need different properties for the "short" version.
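A sketch of what such a property-bag contract might look like (the shape is illustrative, not a standard WCF facility):

using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class EntityRequest
{
    [DataMember(IsRequired = true)]
    public List<string> Ids;

    // The caller names exactly the fields it wants, e.g. "Name", "Currency".
    [DataMember(IsRequired = true)]
    public List<string> PropertyNames;
}

[DataContract]
public class EntityPropertyBag
{
    [DataMember(IsRequired = true)]
    public string Id;

    // Only the requested properties come back, so one contract serves
    // the "short", "medium" and "full" shapes alike.
    [DataMember(IsRequired = true)]
    public Dictionary<string, string> Properties;
}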
I typically build lazy loading into my complex web services (i.e. web services that send/receive entities). If a Person has a Father property (also a Person), I send just an identifier for the Father instead of the nested object, and I make sure my web service has an operation that accepts an identifier and responds with the corresponding Person entity. The client can then call the web service back if it wants to use the Father property.
I've also expanded on this so that batching can occur. If an operation sends back 5 Persons, then when the Father property is accessed on any one of those Persons, a request is made for all 5 Fathers using their identifiers. This helps reduce the chattiness of the web service.
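A sketch of that pattern (the contract and operation names are illustrative):

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class PersonDto
{
    [DataMember(IsRequired = true)]
    public int Id;

    [DataMember(IsRequired = true)]
    public string Name;

    // Reference only: the nested Person is not serialized. The client
    // resolves it on demand, batching the FatherIds of all Persons it
    // received into a single GetPersons call.
    [DataMember(IsRequired = false)]
    public int? FatherId;
}

[ServiceContract]
public interface IPersonService
{
    [OperationContract]
    List<PersonDto> GetPersons(List<int> ids);
}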