DDD - Invariant enforcement using instance methods and a factory method

I'm designing a system using Domain-Driven Design principles.
I have an aggregate named Album.
It contains a collection of Tracks.
Album instances are created using a factory method named create(props).
Rule 1: An Album must contain at least one Track.
This rule must be checked upon creation (in Album.create(props)).
Also, there must be a method named addTrack(track: Track) so that a new Track can be added after the instance has been created, which means addTrack(track: Track) must check the rule too.
How can I avoid duplicating this logic?

Well, if Album makes sure it has at least one Track upon instantiation, I don't see how addTrack could ever violate that rule. Did you perhaps mean removeTrack?
In that case you could go for something as simple as the following:
class Album {
    constructor(tracks) {
        this._tracks = [];
        this._assertWillHaveOneTrack(tracks.length);
        tracks.forEach(track => this._tracks.push(track));
    }

    removeTrack(trackId) {
        this._assertWillHaveOneTrack(-1);
        // assuming each Track exposes an id
        this._tracks = this._tracks.filter(track => track.id !== trackId);
    }

    _assertWillHaveOneTrack(change) {
        if (this._tracks.length + change <= 0) {
            throw new Error('Album must have a minimum of one track.');
        }
    }
}
Please note that you could also have mutated the state first and checked the rule afterwards, which seems simpler at first glance, but it's usually bad practice: if the exception is handled, the model is left in an invalid state unless it reverts the change, which gets even more complex.
Also note that if Track is an entity, it's probably a better idea not to let the client code create the Track (to preserve encapsulation), but rather to pass a TrackInfo value object or something similar.
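For instance (a rough C# sketch; TrackInfo, its fields, and the internal Track constructor are illustrative assumptions, not from the question):

using System;
using System.Collections.Generic;

// Hypothetical value object supplied by the client code.
public class TrackInfo
{
    public string Title { get; }
    public TrackInfo(string title) { Title = title; }
}

public class Track
{
    public Guid Id { get; }
    public string Title { get; }

    // Internal so only the aggregate can create Tracks.
    internal Track(Guid id, string title) { Id = id; Title = title; }
}

public class Album
{
    private readonly List<Track> _tracks = new List<Track>();

    public void AddTrack(TrackInfo info)
    {
        // The aggregate assigns the identity and owns the entity's lifecycle.
        _tracks.Add(new Track(Guid.NewGuid(), info.Title));
    }
}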

Value object in event sourcing

Is there a place for value objects in an event sourced domain model?
Let's define a value object as an object with immutable state that guards its invariants and has no particular identifier.
An event sourced domain model in this context is a domain model that is entirely or partially event sourced, meaning that its current state can be derived by applying all events that have occurred in the past. Events themselves are considered immutable, even over time.
Debate has taken place about the validity of using value objects within events - this question goes slightly further: Do value objects have a place in event sourced domains at all?
The (potential) problem with using value objects is that it becomes rather tricky to alter the domain in such a way that invariants are tightened.
An example of this scenario would be to have a Username value object, with the sole constraint that the name must be anywhere between 2 and 16 characters.
While this has been working well for some time, the business decides to allow only usernames of at least 5 characters.
A migration period begins and users with names of less than 5 characters are asked to update their names.
Let's say the process was successful, correction events are applied and everyone is happy.
We tighten the constraints on our Username value object to require at least 5 characters.
For a while everyone is happy, but then we discover a problem with the snapshots and replay all events.
We now face an exception from our Username object: by loading the historic data, we're breaking an invariant of our domain.
The rules of a value object apply retroactively - does this make them inherently unsuitable for event sourcing? Would it be worth applying versioning of value objects? Is there a simpler way of avoiding such problems?
I would say that the moment you redefine what Username means without somehow migrating the historical data, you've essentially created two different meanings of Username.
Because there are two different meanings of the word, you have to make that explicit in the code somehow. "Versioning" is one way, although I wouldn't use such a generic solution; there are different modeling options.
You could make it explicit that the history of a "username" is just that: a history. So, for example, create a HistoricUsername, which is the event-sourced object (even a value object if you want), and a Username, which is at all times a username under the most current rules, is never persisted, and is created from a HistoricUsername when it qualifies.
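A rough sketch of that split (names and rules are illustrative):

using System;

// Event-sourced object: only enforces rules that held for all time.
public class HistoricUsername
{
    public string Value { get; }

    public HistoricUsername(string value)
    {
        if (string.IsNullOrEmpty(value))
            throw new ArgumentException("Username must not be empty.");
        Value = value;
    }
}

// Never persisted; always reflects the current rules.
public class Username
{
    public string Value { get; }

    private Username(string value) { Value = value; }

    // Returns null when the historic name no longer satisfies today's rules.
    public static Username FromHistoric(HistoricUsername historic)
    {
        return historic.Value.Length >= 5 ? new Username(historic.Value) : null;
    }
}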
Some people suggest extracting the "rules" from the object and re-applying them later. That way the object itself is valid at all times, and you can ask it to validate itself against rules that might change. I don't really favor these kinds of solutions, but it's an option, and Username would still be a value object.
So the problem is not really that value objects don't fit into event sourcing; it's just that the modeling has to be more accurate.
Do value objects have a place in event sourced domains at all?
Yes.
Is there a simpler way of avoiding such problems?
"Don't do that."
The problem you are describing is really one about messaging - if we make backwards incompatible changes to our messages, then things break.
(More precisely, you have a "Username" message, and you are trying to re-use that message with a new set of constraints that reject some previously valid uses of the message).
The answer is that you don't introduce backwards incompatible changes - instead, you introduce new names that match the new requirements, and deprecate the old ones.
Which is to say, adding support for new messages, and removing support for the old messages, become two separately managed options.
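In code, the two messages simply coexist (a sketch; the event names are made up):

// Deprecated message: still understood when replaying old streams,
// but no longer produced.
public class UsernameChosen
{
    public string Name { get; set; } // was valid at 2..16 characters
}

// New message: produced from now on, carrying the tightened contract.
public class UsernameChosenV2
{
    public string Name { get; set; } // valid at 5..16 characters
}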
Greg Young's book Versioning in an Event Sourced System dedicates some chapters to this idea. Also, Rich Hickey ends up touching on these important ideas in most of his talks -- I'd suggest starting from Spec-ulation.
The "value object", meaning that the type that the current implementation of the domain model uses to move the information around, is a separate concern from the messages. The data structures we use in memory don't need to be coupled to our serialization formats.
The representation of the information on the wire is distinct from the representation of information in memory, and that in turn is distinct from the abstractions that manipulate the information in memory.
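A sketch of that layering (types are illustrative):

// Wire representation: a dumb serializable record, frozen once written.
public class UsernameChangedEvent
{
    public string Name { get; set; }
}

// In-memory abstraction: free to change as the model evolves.
public class Username
{
    public string Value { get; }
    public Username(string value) { Value = value; }
}

// The single coupling point: translation at the boundary.
public static class UsernameTranslator
{
    public static Username FromEvent(UsernameChangedEvent e)
    {
        return new Username(e.Name);
    }
}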
The challenging thing is that, at the beginning of a project, you have the least amount of information about when the different representations are going to diverge.
We've solved this in a slightly different way. By separating the public API of our value objects from the internal (domain only) API, we are able to evolve one without affecting the other.
For example:
public class Username
{
    private readonly string value;

    // Domain-only (internal) constructor.
    // Does not enforce constraints and can only be called within the domain.
    internal Username(string value)
    {
        this.value = value;
    }

    // Public factory method.
    // Enforces business constraints. Used by consumers of the domain
    // (application layer etc.) to create new instances of the value object.
    public static Username Create(string value)
    {
        // Business constraints. These will evolve and grow over time.
        if (value == null)
        {
            throw new ArgumentNullException(nameof(value));
        }

        if (value.Length < 2)
        {
            throw new ArgumentException("Username must be at least 2 characters.");
        }

        return new Username(value);
    }
}
Consumers of the domain must use the static Create method to create a new instance of the value object. This factory method contains all of our business constraints and prevents an instance being created in an invalid state.
Inside the domain, classes have access to the internal (constraint-less) constructor. Since this does not enforce any business constraints, an instance of the value object can always be created in this way (regardless of its value). By using this constructor when replaying events, we can ensure that loading historical data will always succeed.
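Usage then looks something like this (a sketch; the replay code is assumed to live inside the domain assembly):

// Application layer: must go through the guarded factory.
Username name = Username.Create("alice");

// Event replay inside the domain: bypasses the current constraints,
// so a historically valid two-character name still loads.
Username historic = new Username("al");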
The benefits of this design are:
A single class is used to represent the domain concept (no need for multiple classes, versioning etc.).
Business rules are free to evolve over time.
Historical data always works. A Username from a year ago is still a username, even if our rules have changed.
Although this has already been answered, I find it an interesting situation.
I agree with others that the event data should be record-based and, therefore, nothing more than a data container that may be used to reconstitute the aggregate.
That being said when the rules change so does the domain. A major portion of domain-driven design is to capture as much of the domain (rules/structure) as is required. If this is the case should the changes in the rules not also be kept?
For instance, if we have a Username value object and it starts out with the 2-to-16-character rule, that is coded as such:
public class Username
{
    public string Value { get; }

    public Username(string value)
    {
        if (value.Length < 2 || value.Length > 16)
        {
            throw new DomainException("Username must be between 2 and 16 characters");
        }

        Value = value;
    }
}
Now we get to 1 March 2018 and the rule changes. We can keep the rule around:
public class Username
{
    public string Value { get; }

    public Username(string value, DateTime registrationDate)
    {
        if (registrationDate < new DateTime(2018, 3, 1) &&
            (value.Length < 2 || value.Length > 16))
        {
            throw new DomainException("Username must be between 2 and 16 characters");
        }

        if (registrationDate >= new DateTime(2018, 3, 1) &&
            (value.Length < 5 || value.Length > 16))
        {
            throw new DomainException("Username must be between 5 and 16 characters");
        }

        Value = value;
    }
}
That is the basic idea. In this way we keep our "old" rules around as well. This may become quite a hassle, but I don't have enough experience with the approach to say. Changing rules retroactively may introduce some pretty tricky situations, so I guess one would need to evaluate it on a case-by-case basis.
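If the branching gets out of hand, the dated rules could be pulled into a single ordered list (a hypothetical sketch):

using System;
using System.Collections.Generic;
using System.Linq;

public static class UsernameRules
{
    // Each entry records when a minimum length came into force.
    private static readonly List<(DateTime From, int MinLength)> MinLengths =
        new List<(DateTime, int)>
        {
            (DateTime.MinValue, 2),
            (new DateTime(2018, 3, 1), 5),
        };

    // The rule in force at the given registration date.
    public static int MinLengthAt(DateTime registrationDate)
    {
        return MinLengths.Last(r => r.From <= registrationDate).MinLength;
    }
}

The Username constructor then checks value.Length against UsernameRules.MinLengthAt(registrationDate) instead of hard-coding each cutoff date.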
Just a thought.

Making decisions on designing classes interfaces

I would like to get some thoughts from others about the following problem.
Let's assume we have two classes, Products and Item. A Products object allows us to access any Item object. Here's an example.
$products = new Products();
// get existing item from products
$item = $products->get(123);
// create item
$item = $products->create();
$item->setName("Some new product");
$item->setPrice(2.50);
Now, what would be the best way to update/save the state of the item? I see two options:
$item->save();
or
$products->save($item);
The first approach seems very straightforward: once the attributes of the Item object are set, calling its save method will persist the changes.
On the other hand, I feel like the latter approach is better. We're separating the roles of the two objects: the Item object contains only state, and the Products object operates on that state. This solution may also be better for writing unit tests.
Any thoughts?
So, effectively the items are buffering the actual changes.
Clearly both approaches will work; it comes down to whether you want to adhere more closely to the underlying database's model or to the overlaid object model.
Viewed from the outside, $item->save() makes the most sense in terms of the model - as you point out, you update the item's properties and then save them down. Plus it is conceptually an action that is performed on the item.
However, $products->save($item) offers two noticeable advantages, and a drawback.
On the plus side, by moving save into Products, it can (potentially) handle batching/reordering of the updates in a smarter way, since it has visibility of all the items. It also allows the save code to be reused as ->add() (more or less).
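Roughly the batching idea, sketched in C# since the save mechanics weren't shown (all names here are hypothetical):

using System.Collections.Generic;

public class Products
{
    private readonly List<Item> _pending = new List<Item>();

    // Queues the item rather than writing it immediately.
    public void Save(Item item)
    {
        _pending.Add(item);
    }

    // Because Products sees every dirty item, it can reorder them and
    // flush them in one round trip instead of one write per Save call.
    public void Flush()
    {
        // e.g. build a single batched INSERT/UPDATE from _pending
        _pending.Clear();
    }
}

public class Item
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}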
A downside is that it is going to (from the object-model view) allow the following possible use, which you probably don't want:
$p1 = new Products();
$p2 = new Products();
$item = $p1->create();
// set $item values
$p2->save($item);
Obviously, you could just add an 'is this mine? no? then throw an error' test to Products::save, but that is extra code to block a use case the syntax implies could/should work - or at least one that would probably slip through a code review.
So, I'd say go with the approach that seems the simplest and binds tightest to the desired functionality ($item->save()), unless you need to do caching/batching/whatever that forces you to go with the other.

Business Entity - should lists be exposed only as ReadOnlyCollections?

In trying to centralize how items are added to or removed from my business entity classes, I have moved to a model in which all lists are exposed only as ReadOnlyCollections, and I provide Add and Remove methods to manipulate the objects in the list.
Here is an example:
public class Course
{
    public string Name { get; set; }
}

public class Student
{
    private List<Course> _courses = new List<Course>();

    public string Name { get; set; }

    public ReadOnlyCollection<Course> Courses
    {
        get { return _courses.AsReadOnly(); }
    }

    public void Add(Course course)
    {
        if (course != null && _courses.Count <= 3)
        {
            _courses.Add(course);
        }
    }

    public bool Remove(Course course)
    {
        bool removed = false;

        if (course != null && _courses.Count <= 3)
        {
            removed = _courses.Remove(course);
        }

        return removed;
    }
}
Part of my objective in doing the above is to avoid ending up with an anemic domain model (an anti-pattern) and also to avoid having the logic that adds and removes courses scattered all over the place.
Some background: the application I am working on is an ASP.NET application where the lists used to be exposed directly, which resulted in Courses being added to a Student in all kinds of ways (in some places a check was made and in others it was not).
But my question is: is the above a good idea?
Yes, this is a good approach. In my opinion you're not doing anything other than decorating your list, and it's better than implementing your own IList (you save many lines of code, even though you lose the more elegant way to iterate through your Course objects).
You may also consider receiving a validation strategy object, as in the future you might have a new requirement, for example a new kind of student that can have more than 3 courses.
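For example (a sketch; the policy names are hypothetical):

using System.Collections.Generic;

public interface IEnrollmentPolicy
{
    bool CanAdd(IReadOnlyCollection<Course> current, Course candidate);
}

public class MaxThreeCoursesPolicy : IEnrollmentPolicy
{
    public bool CanAdd(IReadOnlyCollection<Course> current, Course candidate)
    {
        return candidate != null && current.Count < 3;
    }
}

Student would then take an IEnrollmentPolicy in its constructor and delegate the check inside Add, so a new kind of student just gets a different policy.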
I'd say this is a good idea when adding/removing needs to be controlled in the manner you suggest, such as for business rule validation. Otherwise, as you know from previous code, there's really no way to ensure that the validation is performed.
The balance that you'll probably want to reach, however, is when to do this and when not to. Doing this for every collection of every kind seems like overkill. However, if you don't do this and then later need to add this kind of gate-keeping code then it would be a breaking change for the class, which may or may not be a headache at the time.
I suppose another approach could be to have a custom implementation of IList<T> with generic gate-keeping code in its Add() and Remove() methods that notifies the system of what's happening. Something like exposing an event which is raised before the internal logic of those methods runs. The Student class would then supply a delegate or something (sorry for being vague, I'm very coded-out today) when instantiating _courses, to apply business logic to the event and cancel the operation (throw an exception, I imagine) if the business validation fails.
That could be overkill as well, depending on the developer's disposition. But at least with something a little more engineered like this you get a single generic implementation for everything with the option to add/remove business validation as needed over time without breaking changes.
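Something along those lines (a rough sketch of the generic gate-keeping collection; the wiring is assumed):

using System;
using System.Collections.Generic;

// A list wrapper that consults a supplied rule before mutating.
public class GuardedList<T>
{
    private readonly List<T> _inner = new List<T>();
    private readonly Func<IReadOnlyList<T>, T, bool> _beforeAdd;

    public GuardedList(Func<IReadOnlyList<T>, T, bool> beforeAdd)
    {
        _beforeAdd = beforeAdd;
    }

    public void Add(T item)
    {
        if (!_beforeAdd(_inner, item))
            throw new InvalidOperationException("Business rule rejected the add.");
        _inner.Add(item);
    }
}

Student could then create its courses as new GuardedList<Course>((list, c) => c != null && list.Count < 3).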
I've done that in the past and regretted it: a better option is to use different classes to read domain objects than the ones you use to modify them.
For example, use a behavior-rich Student domain class that jealously guards its ownership of courses - it shouldn't expose them at all if student is responsible for them - and a StudentDataTransferObject (or ViewModel) that provides a simple list of strings of courses (or a dictionary when you need IDs) for populating interfaces.

BDD/DDD Where to put specifications for basic entity validation?

Alternatively, is basic entity validation considered a specification(s)?
In general, is it better to keep basic entity validation (name cannot be null or empty, date must be greater than xxx) in the actual entity, or outside of it in a specification?
If in a specification, what would that look like? Would you have a spec for each field, or wrap it all up in one EntityIsValid type spec?
It seems to me that once people have learned a little about DDD, they pick up the Specification pattern and look to apply it everywhere. That is really the Golden Hammer anti-pattern.
The way I see a place for the Specification pattern, and the way I understood Domain-Driven Design, is that it is a design pattern you can choose to apply when you need to vary a business rule independently of an Entity.
Remember that DDD is an iterative approach, so you don't have to get it 'right' in the first take. I would start out with putting basic validation inside Entities. This fits well with the basic idea about OOD because it lets the object that represents a concept know about the valid ranges of data.
In most cases, you shouldn't even need explicit validation because Entities should be designed so that constraints are represented as invariants, making it impossible to create an instance that violates a constraint.
If you have a rule that says that Name cannot be null or empty, you can actively enforce it directly in your Entity:
public class MyEntity
{
    private string name;

    public MyEntity(string name)
    {
        if (string.IsNullOrEmpty(name))
        {
            throw new ArgumentException();
        }
        this.name = name;
    }

    public string Name
    {
        get { return this.name; }
        set
        {
            if (string.IsNullOrEmpty(value))
            {
                throw new ArgumentException();
            }
            this.name = value;
        }
    }
}
The rule that name cannot be null is now an invariant for the class: it is now impossible for the MyEntity class to get into a state where that rule is broken.
If later on you discover that the rule is more complex, or shared between many different concepts, you can always extract it into a Specification.
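Such an extraction might look like this (a sketch):

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// The name rule, now reusable across entities.
public class NonEmptyNameSpecification : ISpecification<string>
{
    public bool IsSatisfiedBy(string candidate)
    {
        return !string.IsNullOrEmpty(candidate);
    }
}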
Entities have both data and behavior, so letting your entities ensure their own invariants is the way to go, IMHO. Otherwise, you might end up with an anemic domain model [Fowler].
If your context allows you to enforce the rules in the setters as Mark Seemann suggests, it would be great since you don't have all the "IsValid" and/or "BrokenRules" logic in your model.
I've been in two contexts where we found ourselves needing the aforementioned solution though:
A classic response/request web solution where the web page displays all the broken rules of an entity upon failing save.
The model is read from a database which is updated externally (hence it's not impossible for the entity to be invalid despite the setter logic, unless you let your ORM use the setters, but the whole point for us was to find out about the validity).

Databinding Silverlight Comboboxes with Lists of Objects - working but ugly

I'm developing a business application, using Silverlight for the UI and a WCF webservice for the back-end. In the database I have a number of lookup tables. When the WCF service returns a business object, one of the properties contains the entire row out of the lookup table instead of just the foreign key, so in the UI I can display things like the description from the lookup table without making another call to the service. What I am trying to do at the moment is provide a combobox bound to the entire list of lookup values and have it update properly. The business object I'm dealing with in this example is called Session and the lookup is called SessionType.
Below is the definition of the combobox. The DataContext is set to an instance of Session. I am setting an ItemTemplate because the combobox is displaying more than just a list of strings.
<ComboBox
    x:Name="SessionTypesComboBox"
    ItemTemplate="{StaticResource SessionTypeDataTemplate}"
    ItemsSource="{Binding Source={StaticResource AllSessionTypes}}"
    SelectedItem="{Binding Path=SessionType, Mode=TwoWay}"
/>
Both the business object and the lookup table are being loaded asynchronously via the web service. If I do nothing else, the combobox list will be populated with SessionTypes, but it will not show the initial SessionType value from the Session. However, the Session will be updated with the correct SessionType if the combobox selection is changed.
What seems to be happening is that the SelectedItem binding can't match the SessionType in Session to its equivalent in the SessionType list. The object values are the same but the references are not.
The workaround I have found is to load the Session and the SessionTypes list, then update the current SessionType of the Session with the corresponding one from the SessionTypes list. If I do that, the combobox displays correctly. However, to me this has a bad code smell. Because everything is loaded asynchronously, I have to determine when everything is available. Here's how I'm doing that:
In the code-behind of my Silverlight user control:
// Incremented every time we get data back during initial form load.
private volatile int m_LoadSequence = 0;

...

// Loaded event, called when the form is er... loaded.
private void UserControl_Loaded(object sender, RoutedEventArgs e)
{
    // Load session types.
    var sessionTypes = this.Resources["AllSessionTypes"] as Lookups.AllSessionTypes;
    if (sessionTypes != null)
    {
        sessionTypes.DataLoadCompleted += (s, ea) =>
        {
            IncrementLoadSequence();
        };
        sessionTypes.LoadAsync();
    }

    // Start loading another lookup table, same as above
    // (omitted for clarity).

    // Set our DataContext to our business object (passed in when the form was created).
    this.LayoutRoot.DataContext = this.m_Session;
    IncrementLoadSequence();
}

// This is the smelly part. This gets called by OnBlahCompleted events as web service calls return.
private void IncrementLoadSequence()
{
    // Check to see if we're expecting any more service calls to complete.
    if (++m_LoadSequence < 3)
        return;

    // Set lookup values on m_Session to the correct ones in the lookup lists.

    // Get the SessionType list from page resources.
    var sessionTypes = this.Resources["AllSessionTypes"] as Lookups.AllSessionTypes;

    // Find the matching SessionType based on ID.
    this.m_Session.SessionType = sessionTypes
        .Where(st => st.SessionTypeID == this.m_Session.SessionType.SessionTypeID)
        .First();

    // (Other lookup table omitted for clarity.)
}
So basically I have a counter that gets incremented each time I get data back from the webservice. Since I'm expecting 3 things (core business object + 2 lookup tables), when that counter gets to 3 I match up the references.
To me, this seems very hacky. I would rather see the combobox specify a ValueMemberPath and SelectedValue to match the selected item with one in the list.
Can anyone see a cleaner way of doing this? This situation is very common in business apps, so I'm sure there must be a nice way of doing it.
Geoff,
To confirm I understand your problem: the databinding infrastructure doesn't seem to recognise that two objects you consider 'equal' are actually equal, so the initial SelectedItem isn't set correctly, since the databinding doesn't find a reference-equal object in your StaticResource collection to match Session.SessionType.
You get around this by 'flattening' the references (i.e. you force Session.SessionType to be reference-equal in the Where((st)...First() code).
We have had a similar problem.
It does kind of make sense that Silverlight won't automatically 'equate' two objects from different 'sources' just because you know they represent the same data. Like you said, "The object values are the same but the references are not." But how can you MAKE the databinding equate them?
Things we thought of/tried:
implementing .Equals() on the class (SessionType in your case)
implementing operator == on the class (SessionType in your case)
implementing IEquatable on the class (SessionType in your case)
making the collection only Strings and binding to a string property
but in the end we gave up and used the same approach as you: 'extracting' the correct reference-equal object from the collection (after everything is loaded) and poking it into the SelectedItem-bound property.
I agree with you about the code smell and suspect there must be a better solution. So far all our debugging in the property accessors and no-op IValueConverters hasn't turned up one, but if we find it I'll post it back here too.
I'm not sure I'm fully understanding the problem (it's early :)), but can't you just transfer all the items you need in one call, even if you have to wrap the three in a new DTO class? Then you can just update the current session type in a single completed event. It's still not perfect, but at least you don't have to keep any counters.
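The wrapper could be as simple as this (a hypothetical DTO; Session and SessionType are the types from the question):

using System.Collections.Generic;

// One service call returns the session together with the lookup lists
// it depends on, so there is nothing left to count.
public class SessionEditData
{
    public Session Session { get; set; }
    public List<SessionType> SessionTypes { get; set; }
    // plus the other lookup table omitted in the question
}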
I'd also move all that logic to a ViewModel and just bind to that, but that's just me :)
You'd be better off binding to an ObservableCollection and then using some other code (a view model, as in MVVM, isn't a bad choice) to update it in the background. That way you get separation from the UI, and it's a lot easier to handle the updates, since the UI is just bound.
Thanks for the answers, all of the above were useful! I'm moving towards the MVVM way, as well as combining several service calls into a single one (which also reduces round-trip overhead). Looks like I'll stick with the lookup re-referencing for the time being; if I find a better way I'll post it as well.
geofftnz,
Have you found any nice solution for this?
CraigD,
I doubt that overriding Equals etc. is a good solution. First, it has to be done inside the generated proxy class SessionType, so the changes will be lost on each service reference update. Second, the notification in the SessionType setter (here SessionType is the same generated client proxy class) uses a ReferenceEquals call... so that's one more place to touch the generated code! OK, the first issue can be handled via a handmade partial class SessionType (so the code won't be lost after updates), but the second certainly cannot be handled the same way.
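For reference, the handmade partial-class half might look like this (a sketch; it only addresses Equals, not the generated setter's ReferenceEquals check):

// In a separate file, so it survives service-reference updates.
public partial class SessionType
{
    public override bool Equals(object obj)
    {
        var other = obj as SessionType;
        return other != null && other.SessionTypeID == this.SessionTypeID;
    }

    public override int GetHashCode()
    {
        return SessionTypeID.GetHashCode();
    }
}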