Simple domain object attribute values - OOP

Let's say you've got a domain model class for a Song. Songs have a tempo attribute (an int) that should always be a positive number. Should this requirement be part of the domain model, or live externally (such as in a SongManager / business logic layer class)?
Let's say you've chosen to implement it like so:
class Song {
    private int tempo;
    // ...

    public void setTempo(int tempo) {
        if (tempo > 0) {
            this.tempo = tempo;
        } else {
            // what?
        }
    }
}
Would you replace // what? above with:
Nothing. Given a Song instance s, s.setTempo(-10) would simply not modify the object's state.
Set tempo to some minimum value, e.g. 1.
Mark setTempo with a checked throws InvalidTempoException and throw it in the else clause. That way, a controller or other component is responsible for catching the invalid tempo value and making a decision about how to handle the exception.
Throw a runtime InvalidTempoException.
Extract the tempo attribute into a Tempo class with the "must be greater than 0" encapsulated there.
Something else.
I ask because I've been exploring the common "layered architecture" approach lately and in particular the domain layer.

Songs have a tempo attribute (an int) that should always be a positive number
If this is a requirement for the proper initialization of the Song, then you should throw a corresponding exception, e.g. an IllegalArgumentException.
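As a minimal sketch of that approach (using the Song class from the question; the exact message is illustrative):
class Song {
    private int tempo;

    public void setTempo(int tempo) {
        if (tempo <= 0) {
            throw new IllegalArgumentException("Tempo must be positive, got: " + tempo);
        }
        this.tempo = tempo;
    }
}
The same check can be repeated in the constructor so a Song can never be created with an invalid tempo in the first place.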

Should this requirement be part of the domain model or externally (such as in a SongManager / business logic layer class)?
Domain-related logic should live in the domain model. Calling something a Manager is often a sign that you don't know where to put the code or how to name it properly.
Nothing. Given a Song instance s, s.setTempo(-10) would simply not modify the object's state.
If you need to attempt setting the tempo and have little control over the argument that is going to be passed, silently ignoring the failure can be acceptable. It's like the UDP protocol: it's fine if it fails. I would only name the method accordingly, something like s.trySetTempo(-10).
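A sketch of what such a method might look like, assuming it reports success with a boolean (the return type is my assumption, not from the question):
public boolean trySetTempo(int tempo) {
    if (tempo <= 0) {
        return false; // silently refuse the invalid value, like a dropped UDP packet
    }
    this.tempo = tempo;
    return true;
}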
Set tempo to some minimum value, e.g. 1.
That is an extension of the first case. I just can't think of any business case where a song's tempo would need to default to a near-zero value.
Mark setTempo with a checked throws InvalidTempoException and throw it in the else clause. That way, a controller or other component is responsible for catching the invalid tempo value and making a decision about how to handle the exception.
Throw a runtime InvalidTempoException.
Throwing an exception means that anything trying to set the tempo is responsible for providing proper input, because an exception should never be the expected outcome. Usually you would also write a canSetTempo (or something like isTempoChangeable) indicator method, so the outside world can check before attempting the real operation.
Extract the tempo attribute into a Tempo class with the "must be greater than 0" encapsulated there.
This depends on the shape of your domain. It might be a good idea if anything else (like a SoundSample or MidiClip class) needs to use the concept of tempo. It can also be good to encapsulate tempo if it is almost unrelated to your business and is just polluting the more interesting logic in the Song class.
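A rough sketch of such an extraction, assuming a small immutable Tempo class (the class and field names are hypothetical):
final class Tempo {
    private final int beatsPerMinute;

    Tempo(int beatsPerMinute) {
        if (beatsPerMinute <= 0) {
            throw new IllegalArgumentException("Tempo must be greater than 0");
        }
        this.beatsPerMinute = beatsPerMinute;
    }

    int beatsPerMinute() {
        return beatsPerMinute;
    }
}
Song would then hold a Tempo field instead of a raw int, and the invariant lives in exactly one place.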
I personally like it when the domain model can answer whether an action is available, while the action itself throws an error if it is not. That allows for better UX too: fields and buttons responsible for changing tempo would simply not appear or be disabled - immediate feedback that something can't be done just yet.
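A sketch of that query-plus-throwing-action pattern; canSetTempo is a hypothetical name, and InvalidTempoException is assumed to be the unchecked exception from option 4 of the question:
class InvalidTempoException extends RuntimeException {
    InvalidTempoException(String message) { super(message); }
}

class Song {
    private int tempo;

    // Query: the UI can call this to decide whether to enable tempo controls.
    public boolean canSetTempo(int tempo) {
        return tempo > 0;
    }

    // Action: still throws, so the Song never silently accepts an invalid state.
    public void setTempo(int tempo) {
        if (!canSetTempo(tempo)) {
            throw new InvalidTempoException("Tempo must be positive: " + tempo);
        }
        this.tempo = tempo;
    }
}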

If a Song is not valid with a zero or negative tempo value, this certainly should be an invariant modelled within Song.
You should throw an exception if an invalid value is attempted - this makes certain you don't have a Song in an invalid state.

Related

DDD - Invariant enforcement using instance methods and a factory method

I'm designing a system using Domain-Driven Design principles.
I have an aggregate named Album.
It contains a collection of Tracks.
Album instances are created using a factory method named create(props).
Rule 1: An Album must contain at least one Track.
This rule must be checked upon creation (in Album.create(props)).
Also, there must be a method named addTrack(track: Track) so that a new Track can be added after the instance is created. That means addTrack(track: Track) must check the rule too.
How can I avoid this logic code duplication?
Well, if Album makes sure it has at least one Track upon instantiation, I don't see why addTrack would need to be concerned that the rule could ever be violated. Did you perhaps mean removeTrack?
In that case you could go for something as simple as the following:
class Album {
    constructor(tracks) {
        this._tracks = [];
        this._assertWillHaveOneTrack(tracks.length);
        this._tracks.push(...tracks); // add tracks
    }

    removeTrack(trackId) {
        this._assertWillHaveOneTrack(-1);
        // remove track (assumes each track exposes an id)
        this._tracks = this._tracks.filter(track => track.id !== trackId);
    }

    _assertWillHaveOneTrack(change) {
        if (this._tracks.length + change <= 0) throw new Error('Album must have a minimum of one track.');
    }
}
Please note that you could also have mutated the state first and checked the rule afterwards, which makes things simpler at first glance, but that's usually bad practice: the model could be left in an invalid state if the exception is handled, unless the model reverts the change, which gets even more complex.
Also note that if Track is an entity, it's probably a better idea not to let the client code create the Track to preserve encapsulation, but rather pass a TrackInfo value object or something similar.

Value object in event sourcing

Is there a place for value objects in an event sourced domain model?
Let's define a value object as an object with immutable state that guards its invariants and has no particular identifier.
An event sourced domain model in this context is a domain that is entirely or partially event sourced, meaning that its current state can be derived from applying all events that have occurred in the past. Events themselves are considered immutable, even over time.
Debate has taken place about the validity of using value objects within events - this question goes slightly further: Do value objects have a place in event sourced domains at all?
The (potential) problem with using value objects is that it becomes rather tricky to alter the domain in such a way that invariants are tightened.
An example of this scenario would be to have a Username value object, with the sole constraint that the name must be anywhere between 2 and 16 characters.
While this has been working well for some time, the business decides to only allow usernames of at least 5 characters.
A migration period begins and users with names of less than 5 characters are asked to update their names.
Let's say the process was successful, correction events are applied and everyone is happy.
We tighten the constraints on our Username value object to require at least 5 characters.
For a while everyone is happy, but then we discover a problem with the snapshots and replay all events.
We now face an exception from our Username object: by loading the historic data, we're breaking an invariant of our domain.
The rules of a value object apply retroactively - does this make them inherently unsuitable for event sourcing? Would it be worth versioning value objects? Is there a simpler way of avoiding such problems?
I would say that the moment you redefine what Username means without somehow migrating the historical data, you have essentially created two different meanings of Username.
Because there are two different meanings of the word, you have to make that explicit in the code somehow. "Versioning" is one way, although I wouldn't use such a generic solution; there are other modeling options.
You could make it explicit that the history of a "username" is just that: a history. For example, create a HistoricUsername, which is the event-sourced object - even a value object if you want - and a Username, which always follows the most current rules, is not persisted at all, and is created from a HistoricUsername when that is possible.
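A loose Java sketch of that split; HistoricUsername, Username and the Optional-based conversion are illustrative names, not an established API:
import java.util.Optional;

// Event-sourced value: accepts whatever was valid when the event was recorded.
final class HistoricUsername {
    private final String value;

    HistoricUsername(String value) {
        this.value = value; // no current-rules validation here
    }

    // Promote to a Username only if it satisfies today's rules.
    Optional<Username> toCurrentUsername() {
        return Username.isValid(value) ? Optional.of(new Username(value)) : Optional.empty();
    }
}

// Current value: always enforces the rules as they stand today.
final class Username {
    private final String value;

    Username(String value) {
        if (!isValid(value)) {
            throw new IllegalArgumentException("Username must be 5 to 16 characters");
        }
        this.value = value;
    }

    static boolean isValid(String value) {
        return value != null && value.length() >= 5 && value.length() <= 16;
    }
}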
Some people sometimes suggest extracting the "rules" from the object and re-applying them later. That way the object itself is valid at all times, and you can ask it to validate itself against rules that might change. I don't really prefer these kinds of solutions, but it's an option, and Username would still be a value object.
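For illustration, a minimal sketch of that rules-extraction idea (UsernameRules is a hypothetical name; the changing constraints live outside the value object):
final class Username {
    private final String value; // always constructible, never rejects historical data

    Username(String value) {
        this.value = value;
    }

    String value() {
        return value;
    }
}

final class UsernameRules {
    private final int minLength; // e.g. 2 before the policy change, 5 after it

    UsernameRules(int minLength) {
        this.minLength = minLength;
    }

    boolean isSatisfiedBy(Username username) {
        return username.value() != null && username.value().length() >= minLength;
    }
}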
So the problem is not really that value objects don't fit into event sourcing; it's just that the modeling has to be more accurate.
Do value objects have a place in event sourced domains at all?
Yes.
Is there a simpler way of avoiding such problems?
"Don't do that."
The problem you are describing is really one about messaging - if we make backwards incompatible changes to our messages, then things break.
(More precisely, you have a "Username" message, and you are trying to re-use that message with a new set of constraints that reject some previously valid uses of the message).
The answer is that you don't introduce backwards-incompatible changes - instead, introduce new names that match the new requirements, and deprecate the old ones.
Which is to say, adding support for new messages and removing support for the old messages become two separately managed concerns.
Greg Young's book Versioning in an Event Sourced System dedicates some chapters to this idea. Also, Rich Hickey ends up touching on these important ideas in most of his talks -- I'd suggest starting from Spec-ulation.
The "value object", meaning that the type that the current implementation of the domain model uses to move the information around, is a separate concern from the messages. The data structures we use in memory don't need to be coupled to our serialization formats.
The representation of the information on the wire is distinct from the representation of information in memory, and that in turn is distinct from the abstractions that manipulate the information in memory.
The challenging thing is that, at the beginning of a project, you have the least amount of information about when the different representations are going to diverge.
We've solved this in a slightly different way. By separating the public API of our value objects from the internal (domain only) API, we are able to evolve one without affecting the other.
For example:
public class Username
{
    private readonly string value;

    // Domain-only (internal) constructor.
    // Does not enforce constraints and can only be called within the domain.
    internal Username(string value)
    {
        this.value = value;
    }

    // Public factory method.
    // Enforces business constraints. Used by consumers of the domain (application layer etc.)
    // to create new instances of the value object.
    public static Username Create(string value)
    {
        // Business constraints. These will evolve and grow over time.
        if (value == null)
        {
            // throw exception etc.
        }

        if (value.Length < 2)
        {
            // throw exception etc.
        }

        return new Username(value);
    }
}
Consumers of the domain must use the static Create method to create a new instance of the value object. This factory method contains all of our business constraints and prevents an instance being created in an invalid state.
Inside the domain, classes have access to the internal (constraint-less) constructor. Since this does not enforce any business constraints, an instance of the value object can always be created in this way (regardless of its value). By using this constructor when replaying events we can ensure that historical data will always succeed.
The benefits of this design are:
A single class is used to represent the domain concept (no need for multiple classes, versioning etc.).
Business rules are free to evolve over time.
Historical data always works. A Username from a year ago is still a user name, even if our rules have changed.
Although this question has already been answered, I do find it an interesting situation.
I agree with others that the event data should be record-based and, therefore, nothing more than a data container that may be used to reconstitute the aggregate.
That being said, when the rules change, so does the domain. A major portion of domain-driven design is to capture as much of the domain (rules/structure) as is required. If that is the case, should the changes in the rules not also be kept?
For instance, if we have a Username value object and it starts out with the 2-to-16-character rule, then that is coded as such:
public class Username
{
    public string Value { get; }

    public Username(string value)
    {
        if (value.Length < 2 || value.Length > 16)
        {
            throw new DomainException("Username must be between 2 and 16 characters");
        }

        Value = value;
    }
}
Now we get to 1 March 2018 and the rule changes. We can keep the rule around:
public class Username
{
    public string Value { get; }

    public Username(string value, DateTime registrationDate)
    {
        if (registrationDate < new DateTime(2018, 3, 1) &&
            (value.Length < 2 || value.Length > 16))
        {
            throw new DomainException("Username must be between 2 and 16 characters");
        }

        if (registrationDate >= new DateTime(2018, 3, 1) &&
            (value.Length < 5 || value.Length > 16))
        {
            throw new DomainException("Username must be between 5 and 16 characters");
        }

        Value = value;
    }
}
That is the basic idea. In this way we keep our "old" rules around as well. This may become quite a hassle, but I don't have enough experience to say. Changing our rules retroactively may introduce some pretty tricky situations, so I guess one would need to evaluate this on a case-by-case basis.
Just a thought.

Unit testing value objects in isolation from its dependencies

TL;DR
How do you test a value object in isolation from its dependencies without stubbing or injecting them?
In Misko Hevery's blog post To “new” or not to “new”… he advocates the following (quoted from the blog post):
An Injectable class can ask for other Injectables in its constructor. (Sometimes I refer to Injectables as Service Objects, but that term is overloaded.) Injectable can never ask for a non-Injectable (Newable) in its constructor.
Newables can ask for other Newables in their constructor, but not for Injectables. (Sometimes I refer to Newables as Value Objects, but again, the term is overloaded.)
Now if I have a Quantity value object like this:
class Quantity {
    private $quantity = 0;

    public function __construct($quantity) {
        $intValidator = new Zend_Validate_Int();
        if (!$intValidator->isValid($quantity)) {
            throw new Exception("Quantity must be an integer.");
        }

        $gtValidator = new Zend_Validate_GreaterThan(0);
        if (!$gtValidator->isValid($quantity)) {
            throw new Exception("Quantity must be greater than zero.");
        }

        $this->quantity = $quantity;
    }
}
My Quantity value object depends on at least 2 validators for its proper construction. Normally I would have injected those validators through the constructor, so that I can stub them during testing.
However, according to Misko, a newable shouldn't ask for injectables in its constructor. Frankly, a Quantity object created like this:
$quantity = new Quantity(1, $intValidator, $gtValidator);
looks really awkward.
Using a dependency injection framework to create a value object is even more awkward. However, now my dependencies are hard-coded in the Quantity constructor and I have no way to alter them if the business logic changes.
How do you design the value object properly for testing and adherence to the separation between injectables and newables?
Notes:
This is just a very, very simplified example. My real object may have serious logic in it that may use other dependencies as well.
I used a PHP example just for illustration. Answers in other languages are appreciated.
A Value Object should only contain primitive values (integers, strings, boolean flags, other Value Objects, etc.).
Often, it would be best to let the Value Object itself protect its invariants. In the Quantity example you supply, it could easily do that by checking the incoming value without relying on external dependencies. However, I realize that you write
This is just a very, very simplified example. My real object may have serious logic in it that may use other dependencies as well.
So, while I'm going to outline a solution based on the Quantity example, keep in mind that it looks overly complex because the validation logic is so simple here.
Since you also write
I used a PHP example just for illustration. Answers in other languages are appreciated.
I'm going to answer in F#.
If you have external validation dependencies, but still want to retain Quantity as a Value Object, you'll need to decouple the validation logic from the Value Object.
One way to do that is to define an interface for validation:
type IQuantityValidator =
    abstract Validate : decimal -> unit
In this case, I patterned the Validate method on the OP example, which throws exceptions upon validation failures. This means that if the Validate method doesn't throw an exception, all is good. This is the reason the method returns unit.
(If I hadn't decided to pattern this interface on the OP, I'd have preferred using the Specification pattern instead; if so, I'd instead have declared the Validate method as decimal -> bool.)
The IQuantityValidator interface enables you to introduce a Composite:
type CompositeQuantityValidator(validators : IQuantityValidator list) =
    interface IQuantityValidator with
        member this.Validate value =
            validators
            |> List.iter (fun validator -> validator.Validate value)
This Composite simply iterates through other IQuantityValidator instances and invokes their Validate method. This enables you to compose arbitrarily complex validator graphs.
One leaf validator could be:
type IntegerValidator() =
    interface IQuantityValidator with
        member this.Validate value =
            if value % 1m <> 0m
            then
                raise(
                    ArgumentOutOfRangeException(
                        "value",
                        "Quantity must be an integer."))
Another one could be:
type GreaterThanValidator(boundary) =
    interface IQuantityValidator with
        member this.Validate value =
            if value <= boundary
            then
                raise(
                    ArgumentOutOfRangeException(
                        "value",
                        "Quantity must be greater than zero."))
Notice that the GreaterThanValidator class takes a dependency via its constructor. In this case, boundary is just a decimal, so it's a Primitive Dependency, but it could just as well have been a polymorphic dependency (a.k.a. a Service).
You can now compose your own validator from these building blocks:
let myValidator =
    CompositeQuantityValidator([IntegerValidator(); GreaterThanValidator(0m)])
When you invoke myValidator with e.g. 9m or 42m, it returns without errors, but if you invoke it with e.g. 9.8m, 0m or -1m it throws the appropriate exception.
If you want to build something a bit more complicated than a decimal, you can introduce a Factory, and compose the Factory with the appropriate validator.
Since Quantity is very simple here, we can just define it as a type alias on decimal:
type Quantity = decimal
A Factory might look like this:
type QuantityFactory(validator : IQuantityValidator) =
    member this.Create value : Quantity =
        validator.Validate value
        value
You can now compose a QuantityFactory instance with your validator of choice:
let factory = QuantityFactory(myValidator)
which will let you supply decimal values as input, and get (validated) Quantity values as output.
These calls succeed:
let x = factory.Create 9m
let y = factory.Create 42m
while these throw appropriate exceptions:
let a = factory.Create 9.8m
let b = factory.Create 0m
let c = factory.Create (-1m)
Now, all of this is very complex given the simple nature of the example domain, but as the problem domain grows more complex, complex is better than complicated.
Avoid value types with dependencies on non-value types. Also avoid constructors that perform validations and throw exceptions. In your example I'd have a factory type that validates and creates quantities.
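A rough Java sketch of that suggestion, with a hypothetical QuantityFactory owning the validation and a Quantity whose constructor never throws:
final class Quantity {
    private final int value;

    Quantity(int value) { // plain value type: no validation, no exceptions
        this.value = value;
    }

    int value() {
        return value;
    }
}

final class QuantityFactory {
    // The factory is the single place that validates and creates quantities.
    Quantity create(int value) {
        if (value <= 0) {
            throw new IllegalArgumentException("Quantity must be greater than zero.");
        }
        return new Quantity(value);
    }
}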
Your scenario can also be applied to entities. There are cases where an entity requires some dependency in order to perform some behaviour. As far as I can tell the most popular mechanism to use is double-dispatch.
I'll use C# for my examples.
In your case you could have something like this:
public void Validate(IQuantityValidator validator)
As other answers have noted a value object is typically simple enough to perform its invariant checking in the constructor. An e-mail value object would be a good example as an e-mail has a very specific structure.
Something a bit more complex could be an OrderLine where we need to determine, totally hypothetical, whether it is, say, taxable:
public bool IsTaxable(ITaxableService service)
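Sketched in Java (for consistency with the Song example earlier; the answer's own snippets are C#), the double-dispatch idea behind those signatures might look roughly like this:
interface QuantityValidator {
    void validate(int value); // throws if the value is not acceptable
}

final class Quantity {
    private final int value;

    Quantity(int value) {
        this.value = value;
    }

    // Double dispatch: the object hands its own state to the injected behaviour.
    void validate(QuantityValidator validator) {
        validator.validate(value);
    }
}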
In the article you reference, I would assert that the 'newable' relates quite a bit to the 'transient' life cycle that we find in DI containers, as we are interested in specific instances. However, when we need to inject specific values, the transient business does not really help. This is the case for entities, where each is a new instance but has very different state. A repository would hydrate the object, but it could just as well use a factory.
The 'true' dependencies typically have a 'singleton' life-cycle.
So for the 'newable' instances, a factory could be used if you would like to perform validation upon construction: have the factory call the relevant validation method on your value object using the injected validator dependency, as Mark Seemann has mentioned.
This gives you the freedom to still test in isolation without coupling to a specific implementation in your constructor.
Just a slightly different angle on what has already been answered. Hope it helps :)

Is getLastErrorCode() good API design for returning error codes?

What is the best practice regarding returning error codes?
Sometimes we meet situations where a class method operation is unsuccessful, but it is not exceptional. If the reasons why it fails are varied, then we need a way to tell the caller why it has failed.
For example, I have Actor::equipItem() method that equips an item to an RPG character object. The reasons for failure could be:
Character level is not high enough.
Character class cannot equip that item.
Character attribute is not sufficient (e.g. not enough strength).
The item is already broken.
The item is a two handed weapon and the character is already wielding a dagger.
etc.
The way I see it, the situations above are not exceptional. I can implement Actor::equipItem() in two ways.
The first is returning int codes: 0 for success, 1 for insufficient level, 2 for wrong character class, and so on.
The second is returning boolean TRUE or FALSE and implementing Actor::getLastErrorCode(), which the caller can inspect if it needs to provide feedback to the user.
Which of the two is the best practice in terms of OOP and API design? Are there alternatives? Is there a best practice for handling error codes that are not exceptional situations?
Like I said, I agree with cHao that throwing exceptions is the right way to handle this. However, I wanted to comment on how you might decide to process all of those rules. This scenario is a perfect situation for a rules engine, using good ol' polymorphism. (Checking out the chain of responsibility (CoR) design pattern would be good for this.)
You could use a bunch of if statements in your method. Or, better yet, have each if check be its own class that implements something like IEquipItemRule:
public interface IEquipItemRule
{
    string Message { get; } // reason reported when the rule fails
    bool CanEquip();
}
Then, instead of an if statement, your consuming code can process all of the rules like this:
List<IEquipItemRule> equipRules = GetEquipRules(); // This is where the CoR pattern comes in

foreach (IEquipItemRule rule in equipRules)
{
    // Note: Instead of throwing immediately, you could collect all of the
    // messages and return all of the failure reasons.
    if (!rule.CanEquip()) { throw new AppropriateException(rule.Message); }
}
The nice thing about this is that this check can be in its own method. So, if you want to check first to see if this method will succeed, the consumer can call the above code. And when the actual method runs, it can call this checking code as well.
Note: An example of an equipment rule might be something like this:
public class CharacterLevelRule : IEquipItemRule
{
    private readonly int characterLevel;
    private readonly int necessaryLevel;
    public CharacterLevelRule(int characterLevel, int necessaryLevel)
    {
        this.characterLevel = characterLevel;
        this.necessaryLevel = necessaryLevel;
    }
    public string Message => "Character level is not high enough.";
    public bool CanEquip()
    {
        return characterLevel > necessaryLevel;
    }
}

Modifying setter argument - is it hack or not?

Is it normal to modify setter arguments? Let's imagine that we have a setString method, and we really want to keep a trimmed form of the string. So a string with trailing spaces is invalid, but we don't want to throw an exception.
What's the best solution? To trim the value in the setter e.g.
public void setString(String string) {
    this.string = string.trim();
}
Or trim it in the caller (more than once) e.g.
object.setString(string.trim());
Or maybe something else?
Yes. After all, setters are designed for these kinds of things! To control and sanitize the values written to fields ;)
Totally. Here's an example: suppose you have an engineering program with different types of measurement units. You keep the internal values in one measurement system, but you convert from all others in the setter and convert back in the getter, e.g.:
public double UserTemperature
{
    get
    {
        return Units.Instance.ConvertFromSystem(UnitType.Temperature, temperature);
    }
    set
    {
        double v = Units.Instance.ConvertToSystem(UnitType.Temperature, value);
        if (temperature != v)
        {
            temperature = v;
            Changed("SystemTemperature");
            Changed("UserTemperature");
        }
    }
}
Yes, sure. Just be careful to check for null before applying any method (such as trim()).
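For instance, a null-safe version of the setter from the question might look like this (whether to store null unchanged or reject it is a separate design decision; storing it is just one option):
public void setString(String string) {
    this.string = (string == null) ? null : string.trim();
}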
There are two schools: one says it's OK to check the parameter in the setter (school style) and the other says beans should contain no logic, just data (enterprise style).
I believe more in the second one. How often do you look at the implementation of your beans? Should getUser throw an exception or just return null?
When you put logic in your setters and getters, you make it harder to understand what's going on, since many people will never look at the implementation. If you disagree, I urge you to look at every setter and getter implementation before you use it, just to check whether it's really just a bean.
At first glance it seems like it violates the principle of least astonishment. If I'm a user of your class, I'd expect a setter to do exactly what I tell it to. I'd throw an exception in the setter to force users to trim the input.
The other (better?) alternative is to change the name of the method to trimAndSetString. That way, it's not surprising behavior to trim the input.
Correct me if I'm wrong, but it looks logical to me that the setter should hold this kind of logic.
If the setter is just assigning some value to an internal var without checking it, then why not expose the var itself?
This is exactly why you use setters rather than exposing the object's fields to the whole wide world.
Consider a class that holds an integer angle that's expected to be between 0 and 359 inclusive.
If you expose the field, calling functions can set it to whatever they want and this would break the contract specified by your API. It's also likely to break your functionality somewhere down the track because your code is written to assume a certain range for that variable.
With a setter, there are a number of things you can do. One is to raise an exception to indicate an invalid value was passed, but that would be incorrect in my view (for this case). It's likely to be more useful if you modify the input value to something between 0 and 359, such as with:
actualVal = passedValue % 360;
As long as this is specified in your interface (API), it's perfectly valid. In fact, even if you don't specify it, you're still free to do whatever you want since the caller has violated the contract (by passing a value outside of range). I tend to follow the rule of "sanitize your input as soon as possible".
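A small Java sketch of that kind of normalization; Math.floorMod keeps negative inputs in range too, which a plain % would not (the field and method names are illustrative):
private int angle; // always kept in the range 0..359

public void setAngle(int degrees) {
    this.angle = Math.floorMod(degrees, 360); // e.g. setAngle(-30) stores 330, setAngle(500) stores 140
}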
In your specific case, as long as you specify that the string is stored in trimmed format, there's no reason for callers to complain (you've already stated that such a string is invalid). It's better in terms of code size (not speed) to do it in the setter rather than at every piece of code that calls the setter. It also guarantees that the string is stored as you expect it to be - there's no guarantee a caller won't accidentally (or purposefully) store a non-trimmed string.
Yes. It is a feature of object-oriented design that the caller can treat your class as a black box. What you do inside is your own business, as long as the behavior of the interface is documented and logical.
While different people have different philosophies, I would suggest that property setters are only appropriate where they set an aspect of the object's state to match the indicated value and perhaps notify anyone that cares about the change, but do not otherwise affect the object's state. (It is entirely proper for a property setter to change the value of a read-only property if that property is defined in terms of the state associated with the setter; for example, a control's read-only Right property may be defined in terms of its Bounds.) Property setters should throw an exception if they cannot perform the indicated operation.
If one wishes to allow a client to modify an object's state in some fashion not meeting the above description, one should use a method rather than a property. If one calls Foo.SetAngle(500), it would be reasonable to expect that the method will use the indicated parameter in setting the angle, but the Angle property might not return the angle in the same form as it was set (e.g. it might return 140). On the other hand, if Angle is a read-write property, one would expect that writing a value of 500 would either be forbidden or cause the value to read back as 500. If one wanted the object to store an angle in the range 0 to 359, the object could also have a read-only property called BaseAngle which always returns an angle in that form.
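Translated loosely into Java (which has no properties), that method-versus-property distinction might look like a mutator method plus a read-only accessor; the Foo class and member names simply mirror the ones above:
class Foo {
    private int baseAngle; // stored normalized to the range 0..359

    // A method is free to interpret its argument: setAngle(500) results in 140.
    public void setAngle(int degrees) {
        this.baseAngle = Math.floorMod(degrees, 360);
    }

    // Read-only accessor that always reports the normalized form.
    public int getBaseAngle() {
        return baseAngle;
    }
}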