Domain Model Snapshot for Mapping and Reconstitution using Factory - orm

I've read in the Patterns, Principles, and Practices of Domain-Driven Design book that if you want to fully encapsulate your domain model, you can make its properties private and use the Memento pattern to read them. There was also an example in which a Repository takes a Snapshot of the domain model, maps it to a database model, and saves the changes to the db. It also retrieves the database model from the db, maps it to the Snapshot, and then uses the Factory pattern to reconstitute the Domain Model from the Snapshot so it can be worked with.
I am a bit confused about how much work is involved here when you could just map the database model to the domain model directly, without using a factory or snapshots. Isn't that much easier?
If we are going to follow the book, is the correct approach to return the Domain Model's snapshot from the service layer to the presentation layer and then map it to a View Model? Or to create the snapshot in the presentation layer, pass it to the service layer, reconstitute it there using the factory, and then pass the domain model to the repository, which again takes its snapshot, maps it to the database model and saves it to the db?
Can you give some example when you need to use such complicated mapping architecture?
It really feels like you are writing complicated code when it can be done much simpler.
UPDATE
I can add code examples if it makes it easier to understand what I am asking. ;)

A Repository's job is just to save and rehydrate domain entities from a persistent store. Any design pattern beyond that is just a technical detail, usually a way to work around ORM flaws - but it is not part of the Repository (i.e. DDD's fundamental means of storage) per se.
I suppose the Memento pattern in that book is used to solve the "ORM / encapsulation conflict", i.e. an ORM needs write access to all of an entity's fields to be able to rehydrate it, which forces you to expose them and breaks encapsulation.
No, the Memento or Snapshot is for persistence purposes only. The Service (or Application) layer maps from the real Entities, or uses precomputed read-specific models if you're doing CQRS.
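To make the flow concrete, here is a minimal C# sketch of the snapshot/factory round trip being discussed; the class names (Order, OrderSnapshot, OrderRepository) are illustrative, not the book's own.

using System;

public class OrderSnapshot
{
    public Guid Id { get; set; }
    public decimal Total { get; set; }
}

public class Order
{
    private Guid _id;        // state stays private; no public setters exposed for an ORM
    private decimal _total;

    private Order() { }

    public void AddCharge(decimal amount)
    {
        _total += amount;    // behaviour lives on the entity
    }

    // Memento: hand out state as a dumb snapshot instead of exposing the fields
    public OrderSnapshot GetSnapshot()
    {
        return new OrderSnapshot { Id = _id, Total = _total };
    }

    // Factory-style reconstitution (could equally live in a separate OrderFactory)
    public static Order FromSnapshot(OrderSnapshot snapshot)
    {
        var order = new Order();
        order._id = snapshot.Id;
        order._total = snapshot.Total;
        return order;
    }
}

public class OrderRepository
{
    public void Save(Order order)
    {
        var snapshot = order.GetSnapshot();
        // map the snapshot to the database model here and persist it
    }

    public Order FindById(Guid id)
    {
        // load the database model, map it to a snapshot (stand-in values below),
        // then reconstitute the entity through the factory method
        var snapshot = new OrderSnapshot { Id = id, Total = 0m };
        return Order.FromSnapshot(snapshot);
    }
}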

Related

Strategy for Sharing Business and Data Access Entities

I'm designing a layered application where 90% of the business and data access entities have the same properties. Basically it doesn't make sense to create one set of classes for each layer (and map between them) with the same properties just for the sake of separation of concerns. I'm fully aware of automappers, but I'd rather not use one in this case as I think it's unnecessary. Is it OK to share the business entities between the business and data access layers in this scenario? We will manage the remaining 10% of the classes by creating ad-hoc/transformed classes within the same namespace.
Any other design approach?
I think sharing between layers is the whole point of having model classes backed by a data store. I would avoid adding unnecessary architecture unless the code really needs it. If you get to a point where you need to be agnostic of the data store, or some other similar situation arises, you might look into the Repository pattern. Simple code = maintainable code.

Is this an anemic domain model?

I'm trying to build my first CRUD application, and I don't understand whether I should use an object that contains only getters and setters, with the logic kept separate.
Considering that we have the Zend Framework Quick Start tutorial with a Model structure containing:
Gateway
DataMapper
Domain Object (model class)
If the Domain Object (as presented in the Zend Quick Start tutorial) consists of only getters and setters, is that an anti-pattern? In a sense, aren't we unnecessarily splitting the domain object's behaviour out into a transaction script?
Please advise.
The Anemic Domain Model is an anti-pattern ONLY IF you are trying to build a true Domain Model (i.e. a Domain Model in the Domain-Driven Design sense) and end up with entities that have only state and no behavior.
For a simple CRUD application an anemic domain model is probably a best practice, especially when you have a framework that makes your job very easy.
See Martin Fowler's article about Anemic Domain Model and also Greg Young's Article.
The domain objects are separated from the business logic of the software, which is essentially a procedural-programming idea.
However, this pattern is considered a candidate anti-pattern by some developers, which means it might be an ineffective practice.
In fact, you could consider these disadvantages:
your model is less expressive; getters and setters aren't really good at describing the model
code is harder to reuse; you get duplicated code among your transaction scripts
you have to use wrappers which hide the actual data structure (so maybe not really OOP)
there is always global access to the entities
I think the most interesting point to consider is that the domain model's objects cannot ensure their correctness at any given time, because their mutation takes place in separate layers.
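To make that last point concrete, here is a rough sketch of the difference (in C# rather than the PHP the Zend tutorial uses, and with made-up class names):

using System;

// Anemic: any layer can put the object into an invalid state through the public setter.
public class AnemicAccount
{
    public decimal Balance { get; set; }   // nothing stops a controller from doing Balance = -1000
}

// Rich: the entity guards its own invariant, so it is correct at all times.
public class Account
{
    public decimal Balance { get; private set; }

    public void Withdraw(decimal amount)
    {
        if (amount <= 0) throw new ArgumentException("Amount must be positive.", nameof(amount));
        if (amount > Balance) throw new InvalidOperationException("Insufficient funds.");
        Balance -= amount;   // the rule and the state live together
    }
}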
I worked on a CRUD application with Zend Framework too. The clear separation between logic and data is really great, but as you progress you realize that the number of layers and mappers keeps growing. Try to reuse your code as much as you can and avoid duplication.

DDD: Where to put persistence logic, and when to use ORM mapping

We are taking a long, hard look at our (Java) web application patterns. In the past, we've suffered from an overly anaemic object model and overly procedural separation between controllers, services and DAOs, with simple value objects (basically just bags of data) travelling between them. We've used declarative (XML) managed ORM (Hibernate) for persistence. All entity management has taken place in DAOs.
In trying to move to a richer domain model, we find ourselves struggling with how best to design the persistence layer. I've spent a lot of time reading and thinking about Domain Driven Design patterns. However, I'd like some advice.
First, the things I'm more confident about:
We'll have "thin" controllers at the front that deal only with HTTP and HTML - processing forms, validation, UI logic.
We'll have a layer of stateless business logic services that implements common algorithms or logic, unaware of the UI, but very much aware of (and delegating to) the domain model.
We'll have a richer domain model which contains state, relationships, and logic inherent to the objects in that domain model.
The question comes around persistence. Previously, our services would be injected (via Spring) with DAOs, and would use DAO methods like find() and save() to perform persistence. However, a richer domain model would seem to imply that objects should know how to save and delete themselves, and perhaps that higher level services should know how to locate (query for) domain objects.
Here, a few questions and uncertainties arise:
Do we want to inject DAOs into domain objects, so that they can do "this.someDao.save(this)" in a save() method? This is a little awkward since domain objects are not singletons, so we'll need factories or post-construction setting of DAOs. When loading entities from a database, this gets messy. I know Spring AOP can be used for this, but I couldn't get it to work (using Play! framework, another line of experimentation) and it seems quite messy and magical.
Do we instead keep DAOs (repositories?) completely separate, on par with stateless business logic services? This can make some sense, but it means that if "save" or "delete" are inherent operations of a domain object, the domain object can't express those.
Do we just dispense with DAOs entirely and use JPA to let entities manage themselves?
Herein lies the next subtlety: It's quite convenient to map entities using JPA. The Play! framework gives us a nice entity base class, too, with operations like save() and delete(). However, this means that our domain model entities are quite closely tied to the database structure, and we are passing objects around with a large amount of persistence logic, perhaps all the way up to the view layer. If nothing else, this will make the domain model less re-usable in other contexts.
If we want to avoid this, then we'd need some kind of mapping DAO - either using simple JDBC (or at least Spring's JdbcTemplate), or using a parallel hierarchy of database entities and "business" entities, with DAOs forever copying information from one hierarchy to another.
What is the appropriate design choice here?
Martin
Your questions and doubts ring an interesting alarm here; I think you went a bit too far in your interpretation of a "rich domain model". Richness doesn't go as far as implying that persistence logic must be handled by the domain objects. In other words, no, they shouldn't know how to save and delete themselves (at least not explicitly, though Hibernate actually adds some persistence logic transparently). This is often referred to as persistence ignorance.
I suggest that you keep the existing DAO injection system (a nice thing to have for unit testing) and leave the persistence layer as is, while trying to move some business logic into your entities where it fits. A good starting point is to identify your Aggregates and establish your Aggregate Roots. They'll often contain more business logic than the other entities.
However, this is not to say domain objects should contain all logic (especially not logic needed by many other objects across the application, which often belongs in Services).
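As a rough illustration of what such an Aggregate Root can look like (a C# sketch with made-up names; the idea translates directly to Java):

using System;
using System.Collections.Generic;

public class OrderLine
{
    public string Product { get; private set; }
    public int Quantity { get; private set; }

    public OrderLine(string product, int quantity)
    {
        Product = product;
        Quantity = quantity;
    }
}

public class Order   // aggregate root: outside code goes through it, never through the lines directly
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();
    public bool IsShipped { get; private set; }

    public void AddLine(string product, int quantity)
    {
        if (IsShipped) throw new InvalidOperationException("Cannot change a shipped order.");
        _lines.Add(new OrderLine(product, quantity));
    }

    public void Ship()
    {
        if (_lines.Count == 0) throw new InvalidOperationException("Cannot ship an empty order.");
        IsShipped = true;   // the invariant is enforced here, not in a service
    }
}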
I am not a Java expert, but I use NHibernate in my .NET code so my experience should be directly translatable to the Java world.
When using an ORM (like the Hibernate you mentioned) to build a Domain-Driven Design application, one of the good (I won't say best) practices is to create so-called application services between the UI and the Domain. They are similar to the stateless business logic services you mentioned, but should contain almost no logic. They should look like this:
public void sayHello(int id, String helloString)
{
    SomeDomainObject target = domainObjectRepository.findById(id); // this uses Hibernate to load the object
    target.sayHello(helloString); // there is a single domain object method invocation per application service method
    domainObjectRepository.save(target); // this one is optional - Hibernate already knows the object needs saving because it tracks changes
}
Any changes to objects contained by the domain object (including adding objects to collections) will be handled by Hibernate.
You will also need some kind of AOP to intercept application service method invocations, open Hibernate's session before the method executes, and save the changes after the method finishes without exceptions.
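If you don't have the AOP piece in place yet, the hand-written equivalent of that interception looks roughly like this (an NHibernate-flavoured sketch, since that's what I know; the session factory wiring is assumed, and in Spring the @Transactional annotation plays this role):

using System;
using NHibernate;

public class TransactionalInvoker
{
    private readonly ISessionFactory _sessionFactory;   // assumed to be configured elsewhere

    public TransactionalInvoker(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    // Wraps a single application service call in a session and transaction;
    // committing flushes the tracked changes only when no exception was thrown.
    public void Execute(Action<ISession> applicationServiceCall)
    {
        using (var session = _sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            try
            {
                applicationServiceCall(session);
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }
}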
There is a really good sample of how to do DDD in Java here. It is based on the sample problem from Eric Evans' "Blue Book". The application logic class sample code is here.

How can I gradually transition to NHibernate persistence logic from existing ADO.NET persistence logic?

The application uses ADO.NET to invoke sprocs for nearly every database operation. Some of these sprocs also contain a fair amount of domain logic. The data access logic for each domain entity resides in the domain class itself, i.e. there is no decoupling between domain logic and data access logic.
I'm looking to accomplish the following:
decouple the domain logic from the data access logic
make the domain model persistence ignorant
implement the transition to NHibernate gradually across releases, refactoring individual portions of the DAL (if you can call it that) at a time
Here's my approach for transitioning a single class to NHibernate persistence
create a mapping for the domain class
create a repository for the domain class (basic CRUD operations inherited from a generic base repository)
create a method in the repository for each sproc used by the old DAL (doing some refactoring along the way to pull out the domain logic)
modify consumers to use the repository rather than the data access logic in the class itself
remove the old data access logic and the sprocs
The issues I have are with #1 and #4.
(#1) How can I map properties of a type with no NHibernate mapping?
Consider a Person class with an Address property (Address being a domain object without an NH mapping and Person being the class I'm mapping). How can I include Address in the Person mapping without creating an entire mapping for Address?
(#4) How should I manage the dependencies on old data access logic during the transition?
Classes in the domain model utilize the old data access logic that I'm looking to remove. Consider an Order class with a CustomerId property. When the Order needs info on the Customer it invokes the ADO.NET data access logic that resides in the Customer class. What options are there other than maintaining the old data access logic until the dependent classes are mapped themselves?
I would approach it like this:
Refactor and move the data access logic out of the domain classes into a data layer.
Refactor and move the domain logic out of the sprocs into a data layer. (This step is optional, but doing it will definitely make the transition smoother and easier.)
You don't need a repository, but you can certainly create one if you want.
Create a NHibernate mapping for every domain class (there are tools that do this).
Create a NHibernate oriented data access API that slowly replaces the sproc data layer.
Steps 1 & 2 are the hardest part as it sounds like you have tight coupling that ideally never would have happened. Neither of these first two steps involve NHibernate at all. You are strictly moving to a more maintainable architecture before trying to swap out your data layer.
While it may be possible to create NHibernate mappings one by one and utilize them without the full object graph being available, that seems like asking for unnecessary pain. You need to proceed very cautiously if you choose that path and I just wouldn't recommend it. To do so, you may leave a foreign key mapped as a plain int/guid instead of as a relation to another domain class, but you have to be very careful you don't corrupt your data by half committing to NHibernate in that way. Automated unit/integration tests are your friend.
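For example, the "foreign key as a plain value" compromise could be written like this in Fluent NHibernate syntax (just one way to express the mapping; the table name and class shape are assumptions):

using FluentNHibernate.Mapping;

public class Order
{
    public virtual int Id { get; set; }
    public virtual int CustomerId { get; set; }   // plain FK value while Customer has no mapping yet
}

public class OrderMap : ClassMap<Order>
{
    public OrderMap()
    {
        Table("Orders");
        Id(x => x.Id);
        Map(x => x.CustomerId);
        // Once Customer gets its own mapping, replace the line above with a real relation:
        // References(x => x.Customer);
    }
}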
Swapping out a data layer is hard. It is easier if you have a solid lowest common denominator data layer architecture, but I wouldn't actually recommend creating an architecture using a lowest common denominator approach. Loose coupling is good, but you can go too far.
Search the internet for more NHibernate e-books.

Repository, Service or Domain object - where does logic belong?

Take this simple, contrived example:
UserRepository.GetAllUsers();
UserRepository.GetUserById();
Inevitably, I will have more complex "queries", such as:
//returns users where active=true, deleted=false, and confirmed = true
GetActiveUsers();
I'm having trouble determining where the responsibility of the repository ends. GetActiveUsers() represents a simple "query". Does it belong in the repository?
How about something that involves a bit of logic, such as:
//activate the user, set the activationCode to "used", etc.
ActivateUser(string activationCode);
Repositories are responsible for the application-specific handling of sets of objects. This naturally covers queries as well as set modifications (insert/delete).
ActivateUser operates on a single object. That object needs to be retrieved, then modified. The repository is responsible for retrieving the object from the set; another class would be responsible for invoking the query and using the object.
These are all excellent questions to be asking. Being able to determine which of these you should use comes down to your experience and the problem you are working on.
I would suggest reading a book such as Fowler's Patterns of Enterprise Application Architecture. In this book he discusses the patterns you mention. Most importantly, though, he assigns each pattern a responsibility. For instance, domain logic can be put in either the Service or the Domain layer. There are pros and cons associated with each.
If I decide to use a Service layer, I assign that layer the role of handling Transactions and Authorization. I like to keep it "thin" and have no domain logic in there. It becomes an API for my application. I keep all business logic with the domain objects; this includes algorithms and validation for the object. The repository retrieves and persists the domain objects. For simple systems this may be a one-to-one mapping between database columns and domain properties.
I think GetActiveUsers is fine for the Repository. You wouldn't want to retrieve all users from the database and then figure out which ones are active in the application, as this would lead to poor performance. If ActivateUser has business logic as you suggest, then that logic belongs in the domain object. Persisting the change is the responsibility of the Repository layer.
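In code, that split might look roughly like this (the repository method for finding a user by activation code is an assumption on my part):

using System;

public class User
{
    public bool IsActive { get; private set; }
    public bool ActivationCodeUsed { get; private set; }

    // The business rules live on the domain object.
    public void Activate()
    {
        if (IsActive) throw new InvalidOperationException("User is already active.");
        IsActive = true;
        ActivationCodeUsed = true;   // "set the activationCode to used"
    }
}

public interface IUserRepository
{
    User GetUserByActivationCode(string activationCode);   // assumed query method
    void Save(User user);
}

public class UserService
{
    private readonly IUserRepository _users;

    public UserService(IUserRepository users)
    {
        _users = users;
    }

    public void ActivateUser(string activationCode)
    {
        var user = _users.GetUserByActivationCode(activationCode);   // the repository retrieves
        user.Activate();                                             // the domain object applies the rules
        _users.Save(user);                                           // the repository persists the change
    }
}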
Hope this helps.
When building DDD projects I like to differentiate two responsibilities: a Repository and a Finder.
A Repository is responsible for storing aggregate roots and for retrieving them, but only for use in command processing. By command processing I mean executing any action the user has invoked.
A Finder is responsible for querying domain objects for purposes of UI, like grid views and details views.
I don't consider finders to be part of the domain model. The particular IXxxFinder interfaces are placed in the presentation layer, not in the domain layer. Implementations of both IXxxRepository and IXxxFinder are placed in the data access layer, possibly even in the same class.
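Sketched as interfaces (all of the names below are illustrative), the split looks like this:

using System.Collections.Generic;

public class User { /* aggregate root with behaviour */ }

public class UserGridRow          // read model shaped for the UI
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IUserRepository  // command side: used while processing user actions
{
    User GetById(int id);
    void Save(User user);
}

public interface IUserFinder      // query side: feeds grid and detail views
{
    IReadOnlyList<UserGridRow> FindActive();
    UserGridRow FindById(int id);
}

// A single data access class may implement both interfaces:
public class UserDao : IUserRepository, IUserFinder
{
    public User GetById(int id) { /* load the aggregate */ return new User(); }
    public void Save(User user) { /* persist the aggregate */ }
    public IReadOnlyList<UserGridRow> FindActive() { return new List<UserGridRow>(); }
    public UserGridRow FindById(int id) { return new UserGridRow { Id = id }; }
}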