I'm having some difficulty figuring out how the Aggregate Root will track changes on child entities.
Let's say I have an aggregate:
Order (root)
OrderLineItem
With the Order class being the aggregate root, how will I track the changes made to each OrderLineItem through the Order class?
When I implement a repository, e.g. an OrderRepository (because only the aggregate root can have a repository, right?), how will my OrderRepository track the changes to each OrderLineItem?
Example:
Newly added but not committed to DB
Edited and already committed to DB
Edited but not committed to DB
How do you guys deal with this?
With the Order class being the aggregate root, how will I track the changes made to each OrderLineItem through the Order class?
All changes to the Order aggregate, including OrderLineItem, should go through the aggregate root. This way, the aggregate can maintain its integrity. As far as tracking changes, that depends on your persistence implementation. If using an ORM such as EF or NHibernate, then the ORM will take care of tracking changes. If using event sourcing, then changes are tracked explicitly as a sequence of events, usually maintained by the aggregate in OOP implementations. If using SQL directly, you can also avoid tracking changes and update the entire aggregate upon each commit.
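To illustrate the first sentence, here is a minimal sketch (the member names are illustrative, not from the question): every mutation of a line item is expressed as a method on Order, which is what lets the aggregate guard its invariants no matter how changes are later persisted.

using System;
using System.Collections.Generic;

public class Order
{
    private readonly List<OrderLineItem> _lineItems = new List<OrderLineItem>();

    public IReadOnlyList<OrderLineItem> LineItems => _lineItems.AsReadOnly();

    public void AddLineItem(string product, int quantity)
    {
        if (quantity <= 0) throw new ArgumentException("Quantity must be positive.", nameof(quantity));
        _lineItems.Add(new OrderLineItem(product, quantity));
    }

    public void ChangeQuantity(int lineItemIndex, int newQuantity)
    {
        if (newQuantity <= 0) throw new ArgumentException("Quantity must be positive.", nameof(newQuantity));
        _lineItems[lineItemIndex].Quantity = newQuantity; // an ORM's dirty checking picks this up
    }
}

public class OrderLineItem
{
    internal OrderLineItem(string product, int quantity)
    {
        Product = product;
        Quantity = quantity;
    }

    public string Product { get; private set; }
    public int Quantity { get; internal set; } // only the Order root may change this
}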
And when I implement a repository, say an OrderRepository, because only the aggregate root can have a repository, right?
Yes, repository per aggregate.
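For illustration, the contract might look something like this (a hypothetical shape, not a required one). Note there is deliberately no OrderLineItemRepository; line items are loaded and saved with their Order:

using System;

public interface IOrderRepository
{
    Order GetById(Guid id);   // loads the whole aggregate, line items included
    void Add(Order order);
    void Remove(Order order);
    // With an ORM, an explicit Save is often unnecessary: flushing the
    // session/unit of work persists whatever changed inside the aggregate.
}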
Related
I have a User model which is an aggregate. I also plan to create a WorkingHours object, since every user will have his own working hours per day. There will also be a graphical user interface, separate from User, for adding/removing/updating hours, etc. Should I put all WorkingHours-related operations into the UserRepository, or should I treat the WorkingHours model as an aggregate and create a separate WorkingHoursRepository, in which case User would hold a property with the id of its WorkingHours object? Which option should I choose?
My thought is not to make WorkingHours an aggregate, because every set of working hours belongs to a specific user, which (if I am thinking about this right) makes it dependent on User and unable to live without it. My only argument for making it an aggregate with its own repository is cleaner code, i.e. not putting all the CRUD in the same repository, but I suppose that alone is not a reason to separate it. So to me the only option left is to treat WorkingHours as a value object, not an aggregate, and use the UserRepository for it.
You design your Domain Model based on your business requirements and not on how it needs to be saved.
In this scenario, if Working Hours can only be manipulated within the User domain and you think User is the only aggregate required, then Working Hours should not be made an aggregate. That said, it does not stop you from saving your data in a clean manner in your data store. The strategy for storing your data also depends a lot on your type of data store.
For example, if you are using SQL and your data is stored in multiple tables then you can Commit or Rollback the entire transaction. How you implement it is not tied to DDD as long as you are adhering to the concept that the aggregates should only be updated via the root entity.
If you are using a No-SQL database like Cosmos DB you can choose to load or save the entire document. In that case, you would be only dealing with the User repository.
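To make that concrete, here is a sketch of the shape being suggested (all names are assumptions): WorkingHours as an immutable value object owned by the User aggregate, with the UserRepository persisting the whole thing.

using System;
using System.Collections.Generic;

public class WorkingHours   // value object: immutable, no identity of its own
{
    public WorkingHours(DayOfWeek day, TimeSpan from, TimeSpan to)
    {
        if (to <= from) throw new ArgumentException("End must be after start.");
        Day = day;
        From = from;
        To = to;
    }

    public DayOfWeek Day { get; }
    public TimeSpan From { get; }
    public TimeSpan To { get; }
}

public class User
{
    private readonly List<WorkingHours> _workingHours = new List<WorkingHours>();

    public IReadOnlyList<WorkingHours> WorkingHours => _workingHours.AsReadOnly();

    public void SetWorkingHours(DayOfWeek day, TimeSpan from, TimeSpan to)
    {
        _workingHours.RemoveAll(w => w.Day == day);          // replace rather than mutate
        _workingHours.Add(new WorkingHours(day, from, to));
    }
}

Whether this maps to several SQL tables or a single document, the unit the repository deals with remains the User aggregate.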
Hope this helps.
Scenario: I have built an ASP.NET MVC application that manages my cooking recipes. I am using FluentNHibernate to access data from the following tables:
Users
Categories
Recipes
RecipeCategories (many-to-many junction table)
UserBookmarkedRecipes (many-to-many junction table)
UserCookedRecipes (many-to-many junction table)
Question: Is there any way to tell NHibernate to load all data from all tables listed above and store it in memory / in NHibernate's cache so that there do not have to be any additional database requests?
Motivation behind question: The variety of many-to-many relationships poses a performance problem and would benefit greatly from this optimization.
Note regarding data: The overall amount of data is extremely small. We are talking about less than 100 recipes at the moment.
Instead of preloading everything, I would suggest loading once and then holding it as it's accessed.
NHibernate maintains two different caches and does a pretty good job of keeping them in sync with your underlying data store. By default, it uses what is called a "first level" cache on a per-session basis, but I don't think that's what you want. You can read about the differences on the NHibernate FAQ page on caching.
I suspect a second level cache is what you need (this is available throughout your app). You'll need to get a cache provider from NHContrib (download the version that matches your version of NHibernate). The SysCache2 provider will probably be the easiest to set up for your scenario, as long as your app will be the ONLY thing writing to the database. If other processes will be writing, you will need to ensure that all are using the same cache as an intermediary if you want it to stay in sync.
The second level cache is configured with a timeout that you can set to whatever you need. I don't think it can be infinite, but you can set it to long periods if you want (it's probably not a terrible idea to go back to the DB from time to time anyway). If you want to preload everything up front, you can simply access all your entities from your global.asax's Application_Start method, but this shouldn't be necessary.
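If you do decide to warm the cache up front, something along these lines would do it (assumptions: a globally accessible ISessionFactory named SessionFactory, and the query cache enabled so Cacheable() has an effect):

protected void Application_Start()
{
    // Touch each table once; with Cacheable() the results also land in the
    // query cache, so later identical queries skip the database entirely.
    using (var session = SessionFactory.OpenSession())
    {
        session.QueryOver<User>().Cacheable().List();
        session.QueryOver<Category>().Cacheable().List();
        session.QueryOver<Recipe>().Cacheable().List();
    }
}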
You will need to configure your session factory to use the cache. Call the .Cache(...) method when fluently configuring your session factory; it should be relatively self-explanatory.
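Roughly like this (the provider class string assumes the SysCache2 package; adjust it to whichever provider you download):

var sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008.ConnectionString(connectionString))
    .Cache(c => c
        .UseSecondLevelCache()
        .UseQueryCache()
        .ProviderClass("NHibernate.Caches.SysCache2.SysCacheProvider, NHibernate.Caches.SysCache2"))
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<RecipeMap>())
    .BuildSessionFactory();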
You will also need to set Cache.ReadWrite() in both your entity mappings AND your relationship mappings. You can do this by convention or by calling Cache.ReadWrite() in your fluent mappings.
Something like:
public class RecipeMap : ClassMap<Recipe> {
    public RecipeMap() {
        Cache.ReadWrite();                                   // cache the entity itself
        Id(x => x.Id);
        HasManyToMany(x => x.Ingredients).Cache.ReadWrite(); // and the collection
    }
}
On the Cache calls in your mappings you can specify ReadOnly or whatever else you may need. NonStrictReadWrite is an interesting one: it can boost performance significantly, but at an increased risk of reading stale data from the cache.
The documentation on NHibernate cascade settings discusses them in the context of calling the Save(), Update() and Delete() methods, but I can find no discussion of cascade behavior in the context of the implicit update that occurs when one loads, modifies and saves entities in the same session. In this case an explicit call to Update() is not needed, so what happens regarding cascade settings?
This may seem a dumb question, but the reason I bring this up is that I am trying to figure out how NHibernate supports the concept of Aggregate Boundaries in the context of Domain Driven Design. Let me give an example that will illustrate what I am trying to get at.
Suppose I have the canonical invoice application with the entities Invoice, Buyer and LineItem. Invoice is an aggregate root and LineItem is in the same aggregate but Buyer is its own aggregate root.
I want to model this in NHibernate by configuring my mappings such that the cascade from Invoice to LineItem is all-delete-orphan and the one from Invoice to Buyer is none.
Based on the documentation I have read, using my desired cascade settings, if I am working with disconnected entities and I do the following, only the Invoice and LineItems will save:
disconnectedInvoice.ShippedDate = DateTime.Today;
disconnectedInvoice.LineItems[2].Backordered = true;
disconnectedInvoice.Buyer.Address = buyersNewAddress;
session.Update(disconnectedInvoice);
session.Flush();
What I don't see discussed anywhere is what happens when one retrieves the invoice, makes the same updates and flushes the session in a connected manner like so.
var invoice = session.Get<Invoice>(invoiceNumber);
invoice.ShippedDate = DateTime.Today;
invoice.LineItems[2].Backordered = true;
invoice.Buyer.Address = buyersNewAddress;
session.Flush();
The NHibernate documentation says that flush persists the changes for dirty entities associated with the session. Based on this, one would presume that the updates to the Invoice, the Buyer and the LineItems will all be persisted.
However, this would seem to violate the concept behind the cascade rule. It would seem to me that for the purposes of deciding what entities to update upon flush, the session should look at those entities that were directly loaded (only the invoice in this case) and include indirectly loaded entities (the LineItems and the Buyer in this case) only if the cascade setting indicates they should be persisted.
I admit that this example represents bad DDD. If the Buyer isn't part of the aggregate, then it should not be updated at this time, or at least not through the Invoice aggregate. DDD aside, however, the point I am actually more interested in is whether cascade rules are honored for updates in the same-session scenario the same way they are in the disconnected scenario.
The NHibernate documentation says that flush persists the changes for dirty entities associated with the session.
The main issue is the difference between disconnected and connected entities. Cascading behaves the same for both, but it's the implicit updates that are different. For session loaded entities, there is no need to cascade the save to the Buyer since it'd be redundant. For a disconnected entity, you need the cascade since there will be no implicit updates to that Buyer because it was never explicitly merged into the session.
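To make the contrast concrete (variable names assumed, Buyer mapped with cascade "none"):

// Connected: the Buyer was loaded into the session through the Invoice,
// so the session dirty-checks it directly; cascade never comes into play.
var invoice = session.Get<Invoice>(invoiceNumber);
invoice.Buyer.Address = buyersNewAddress;
session.Flush();                      // Buyer row IS updated

// Disconnected: Update() re-attaches only what the cascade reaches,
// so the Buyer stays untracked and its change is silently lost.
detachedInvoice.Buyer.Address = buyersNewAddress;
session.Update(detachedInvoice);      // cascade "none" does not reach the Buyer
session.Flush();                      // Buyer row is NOT updated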
In the comments on Ayende's blog post about auditing in NHibernate there is a mention of the need to use a child session: session.GetSession(EntityMode.Poco).
As far as I understand it, it has something to do with the order of the SQL operations that session.Flush will emit. (For example: if I wanted to perform some delete operation in the pre-insert event, but the session was already done with its delete operations, I would need some way to inject mine in.)
However, I did not find any documentation about this feature and its behavior.
Questions:
Is my understanding of child sessions correct?
How and in which scenarios should I use them?
Are they documented somewhere?
Could they be used for session "scoping"?
(For example: I open a master session which will hold some data, and then I create two child sessions from the master one. I'd expect the two child scopes to be separated, but to share objects from the master session's cache. Is this the case?)
Are they first-class citizens in NHibernate, or are they just a hack to support some edge-case scenarios?
Thanks in advance for any info.
Stefando,
NHibernate has no knowledge of child sessions; you can reuse an existing one or open a new one.
For instance, you will get an exception if you try to load the same entity into two different sessions.
The reason it is mentioned in the blog is that in the PreUpdate and PreInsert events you cannot load more objects into the session: you can change an already loaded instance, but you may not, for instance, navigate to a relationship property.
So in the blog a new session is needed simply because we want to add a new AuditLog entity. In the end it's the transaction (unit of work) that manages the data.
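The pattern from the post looks roughly like this (NHibernate 3.x-era API; the AuditLog entity and the listener's exact shape are assumptions on my part):

public class AuditListener : IPostInsertEventListener
{
    public void OnPostInsert(PostInsertEvent e)
    {
        if (e.Entity is AuditLog) return;            // don't audit the audit log itself

        // The child session shares the parent's connection and transaction
        // but has its own action queue, so saving a new entity here is safe.
        var childSession = e.Session.GetSession(EntityMode.Poco);
        childSession.Save(new AuditLog { /* who, what, when */ });
        childSession.Flush();
    }
}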
Do NHibernate proxies do anything smart to make change tracking efficient, or do they only support what Entity Framework calls snapshot-based change tracking?
It is snapshot-based.
When loading an entity, its state is stored in the session as an object[].
When flushing, the current state is converted to an object[] and compared with the original state to determine which properties are dirty.
This is more efficient for many reasons. One of them is that you don't need a proxy to track changes. Another is that, if you set a property to a different value and then revert it, the entity will be considered not-dirty, thus avoiding an unnecessary DB call.
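A toy illustration of the idea (this is not NHibernate's actual code, just the comparison it performs conceptually):

using System.Collections.Generic;

object[] snapshot = { "Pasta", 4 };          // state captured when the entity was loaded
object[] current  = { "Pasta", 6 };          // state read back from the entity at flush time

var dirtyIndexes = new List<int>();
for (int i = 0; i < snapshot.Length; i++)
{
    if (!Equals(snapshot[i], current[i]))    // property i changed since load
        dirtyIndexes.Add(i);
}
// dirtyIndexes == [1]  ->  only the changed column appears in the UPDATE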
NHibernate and EntityFramework track changes in very different ways. Entity Framework tracks changes in the entity itself. NHibernate tracks changes in the session.
Tracking changes in the entity requires more memory (because you are storing the before values as well as the after values). Entities can retain change tracking even after disconnecting from the ObjectContext.
Tracking changes in the session is more efficient overall, but if you disconnect an entity from the session, you lose the change tracking.