As you know, in Seam there are no problems with LazyInitializationException when reading entities' references to sub-objects. So is there any problem if I favor walking the tree of relations to read the data I need, instead of sending specific queries to the relevant entities' DAOs? Do I break some important guidelines/principles?
Consider that the phrase
"in Seam there are no problems with LazyInitializationException"
is not true as stated.
In Seam there are no problems with LazyInitializationException if you use a pattern where your session is kept open for the duration of a long-running conversation.
This means using a Seam-injected persistence context like:
@In
private EntityManager entityManager;
Or, if you are using stateful EJBs (bound to the conversation scope too):
@PersistenceContext(type = PersistenceContextType.EXTENDED)
EntityManager em;
BTW, once you have understood that, there is no problem navigating the relation tree. You really should do it if you want to bind the data to the interface using JSF.
Consider that you may run into performance problems if you access ManyToOne or OneToMany relations in queries that return more than one result. This is known as the n+1 problem: you basically make one extra round trip to the database for each record returned.
Let's summarize:
single detail object -> navigate the relation tree
list of objects -> make a single query through the DAO using left join fetch (see the sketch below).
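To illustrate the second case, here is a minimal DAO-method sketch; the entity names (Order, Items) and the DAO shape are assumptions for the example, and the left join fetch syntax is plain HQL, identical in Hibernate and NHibernate:
// Hypothetical DAO method: loads the orders and their Items collection
// in one query, avoiding the n+1 round trips described above.
// distinct is needed because a collection fetch duplicates the root rows.
public IList<Order> GetAllWithItems()
{
    return session
        .CreateQuery("select distinct o from Order o left join fetch o.Items")
        .List<Order>();
}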
OK, I read these:
EntityFramework show entities before saving changes
Where added objects are stored in ObjectContext?
I guess there is no clear solution to the problem (although the second post is from 2009), but I think it's an important issue with Entity Framework, so I will ask my question all the same.
Let's say we have code like this:
// Somehow get a UnitOfWork instance, e.g. using a factory
var categoryRepository = new CategoryRepository(unitOfWork);
var newCategory = new Category("Some Category");
categoryRepository.Add(newCategory);
var allCategories = categoryRepository.GetAll();
Debug.Assert(allCategories.Contains(newCategory));
unitOfWork.Commit();
If we use NHibernate, the UnitOfWork implementation will encapsulate an ISession instance, and the given code will behave as we expect: we can get the newly added category back from the repository (i.e. the underlying ISession) before changes are committed.
I was surprised to discover that Entity Framework behaves differently. If our UnitOfWork implementation encapsulates EF's ObjectContext, the assertion fails. Before calling ObjectContext.SaveChanges() (in the unitOfWork.Commit() method), the newly added category is not reachable (via the same ObjectContext). I tried to find some property of ObjectContext that configures this behavior, but didn't succeed.
So my question is: Is it possible to get entities we have just added back from the ObjectContext without calling ObjectContext.SaveChanges() (because we don't want to commit until the business transaction ends)? If the answer is "No", isn't this a violation of the Identity Map design pattern in particular and the Unit of Work pattern in general? And if you use EF, how do you deal with such scenarios?
Thanks in advance.
Sorry for the delay, guys.
It seems that you don't get my point. The question is not "How do I get back this instance I've just added?" After all, I have a reference to it. The question is "How does any newly added (still uncommitted, and possibly never to be committed) entity get considered by any query over the given DbSet?"
If I add a new category and then write something like the following (I'm intentionally not using a repository here but the raw DbContext (or ObjectContext if we are using EF 4.0), to be clearer):
var selectedCategories = context.Categories.Where(c => c.ParentCategory.Name == "Some Category Name");
I want my new category to be returned in the result set if it satisfies the condition. This query will probably be executed in another method (or even another class) that just shares the same context (repository) within a single transaction (unit of work).
I know I can execute the same query over Categories.Local, filter the result to only the newly added entities (since Local contains all entities of the set that are currently being tracked) and combine it with the result returned from the database (sketched below). Don't you think that's terribly ugly? And I'm not even sure I'm not missing something right now. All of this is the ORM's job. It's all about transactional behavior (unit of work), and the ORM should handle it (like NHibernate does).
Now does it make sense to you?
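For reference, here is a minimal sketch of the Local-based workaround described above; it assumes an EF 4.1+ DbContext, and the combining logic is illustrative rather than recommended:
// Illustrative only: merge committed rows with tracked-but-unsaved entities.
// DbSet<T>.Local exposes the entities the context is currently tracking.
var fromDatabase = context.Categories
    .Where(c => c.ParentCategory.Name == "Some Category Name")
    .ToList();
var fromLocal = context.Categories.Local
    .Where(c => c.ParentCategory != null
             && c.ParentCategory.Name == "Some Category Name");
// Union de-duplicates by reference, so entities present in both sequences appear once.
var selectedCategories = fromDatabase.Union(fromLocal).ToList();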
First of all, let me clarify some terms: when I use the word "user", understand "application user", and a "patient" is an "item" from the model layer.
Let's now explain the context:
A client application has "Get patient" and "Update" buttons, a "patient name" text box, and a grid to display the patient returned after a click on the "Get patient" button.
On the server side I've got a WCF method GetPatient(string name) that searches for the requested patient and applies some business logic to a PatientEntity used with NHibernate. That method returns a PatientDto (a mapping of PatientEntity). And I've got an Update(PatientDto patient) method to update the modified patient.
The user can modify the returned PatientDto and click on the "Update" button.
So far I have two ideas to manage a "session" through this scenario:
First idea: I expose an "ID" property in my DTO, so when the user clicks Update, I search, on the server side, for the "patient" with the specified ID using NHibernate's "GetByID()", update the result with the data from the PatientDto, and call NHibernate's "Update()" method.
Second idea: I create manually, on the server side, a CustomSession (I use this name for clarity) class that encapsulates an ISession and exposes the session's unique id, which travels between the client and the server. So, when the client sends the PatientDto and the unique session id to the server, I can get the CustomSession and update the patient with the Update() methods of the ISession.
I don't like these ideas. The first involves a lot of overhead and doesn't use NHibernate's features, and the second requires the developer to manage the CustomSession id between calls himself: it is error prone.
Furthermore, I'm sure NHibernate provides such a mechanism, although I googled and found nothing about it.
Then my questions are:
What mechanism (pattern) should I use? Of course, the mechanism should support an entity's object graph and not just a single entity!
Does NHibernate provide such a mechanism?
Thank you in advance for your help,
I don't think this is an NHibernate issue; in my opinion it's a common misunderstanding. NHibernate is an OR mapper and therefore handles your database objects and provides basic transactional support. That's about it.
The solution for session management in client-server environments is, for example, to use Spring.NET, which provides solutions for your problem (search for OpenSessionInView) and integrates quite well with NHibernate.
The stateless approach you mentioned offers many advantages compared to a session-based solution. For example, think about concurrency. If your commit is stateless, you can simply react to a failed Save() operation on the client side, for example by reloading the view.
Besides your two arguments, a further good reason for using NHibernate is, if done right, protection against SQL injection.
One reason that I usually don't bother with ORM tools/frameworks in client-server programming is that you usually land at your first solution with them anyway. It helps in making the server side more stateless (and thus more scalable) at the expense of some reasonably cheap database calls (a fetch-by-PK is usually very cheap, and if you immediately write it anyway, guess what the database is likely to do first on a write? Grab the old record - so SELECT/UPDATE may be only marginally slower than just UPDATE, because the SELECT seeds the cache).
Yes, you're doing stuff manually that you want to push out to the ORM - such is life. And don't fret over performance until you've measured it - for this particular case, I wonder if you really can measure it.
Here's a summary of what has been said:
An NHibernate session lasts for the duration of the service call, that is, the call to "GetPatient(string name)", no more.
The server works with entities and returns DTOs to the client.
The client displays and updates DTOs, and calls the service "Update(PatientDto patient)".
When the client triggers the service "Update(PatientDto patient)", the mapper gets the patient entity via the ID contained in the DTO, using "GetById(int id)", and updates the properties that need to be updated.
And finally, the server calls NHibernate's "Update()" to persist all the changes (a sketch of this flow follows).
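A minimal sketch of that flow; the hand-written mapping step and the property names are assumptions, while Get(), the transaction calls and automatic dirty checking are standard NHibernate behavior:
// Hypothetical server-side Update: reattach-by-ID flow.
public void Update(PatientDto dto)
{
    using (var session = sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        // Load the persistent entity by the ID carried in the DTO.
        var patient = session.Get<PatientEntity>(dto.Id);
        // Copy the editable properties from the DTO onto the entity.
        patient.Name = dto.Name;
        // ... map the remaining modified properties ...
        // The entity was loaded in this session, so committing the
        // transaction flushes the changes via dirty checking; an
        // explicit Update() call is only needed for detached instances.
        tx.Commit();
    }
}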
I guess this has been asked before here, but I'm still confused about the correct approach to take.
I have a WPF client application which talks to a WCF service to retrieve data.
On the service side, I have a large entity (around 25 properties), and I have three forms in my client app.
On each form, I need the facility to edit certain properties of my domain entity.
I do not want to return the large entity through the service as I need just 3-4 of its properties on each form.
Hence I have created three DTOs (we are using AutoMapper), one for each screen.
The service returns DTOs, and this works fine as far as retrieval goes.
My question is: how do I persist my DTOs?
We are using NHibernate in the service layer.
If I pass my partial DTOs to the service to persist, I would need to reload my large entity every time to perform the update.
Is this the only way to handle this scenario?
What other options do I have if I need to display partial views of one single entity in the UI, besides sending the whole entity over the wire or creating three DTOs?
Thanks.
Using NHibernate in the service layer, it is logical that you will need to either:
a) load the entity during an update operation at the service, modify the required properties and then commit your transaction, or
b) if you have the object already available at the service (but not associated with the NHibernate session) then you can modify the required properties, call session.Update(obj) to reassociate the object with the session and then commit your transaction.
We use the first approach regularly where we have hundreds of different entities in our model. We pass specialised command request objects from client to server, and our service layer is then responsible for performing the work specified in the command requests. A sketch of the first approach follows.
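As a hedged illustration, a minimal sketch of approach (a); ScreenOneDto, LargeEntity and the property names are assumptions, not from the question:
// Sketch of approach (a): load the entity, copy over only the
// fields this screen's DTO carries, and commit.
public void Update(ScreenOneDto dto)
{
    using (var tx = session.BeginTransaction())
    {
        var entity = session.Get<LargeEntity>(dto.Id);
        entity.PropertyA = dto.PropertyA; // only the 3-4 properties
        entity.PropertyB = dto.PropertyB; // this screen can edit
        tx.Commit();                      // dirty checking flushes the changes
    }
}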
Alternatively you could formulate an HQL query as outlined here. But this will quickly get pretty ugly and difficult to maintain.
I am trying to set up a proper domain architecture using Fluent NHibernate and Linq to NHibernate. I have my controllers calling my Repository classes, which do the NHibernate thang under the hood and pass back ICollections of data. This seems to work well because it abstracts the data access and keeps the NHibernate functionality in the "fine print".
However, now I'm finding situations where my controllers need to use the same data calls in a different context. For example, my repo returns a list of Users. That's great when I want to display a list of users, but when I want to start utilizing the child classes to show roles, etc., I run into SELECT N+1 issues. I know how to change that in NHibernate so it uses joins instead, but my specific question is WHERE do I put this logic? I don't want every GetAllUsers() call to return the roles also, but I do want some of them to.
So here are the three options I see:
Change the setting in my mapping so the roles are joined to my query.
Create two Repository calls - GetAllUsers() and GetUsersAndRoles().
Return my IQueryable object from the Repository to the Controller and use the NHibernate Expand method.
Sorry if I didn't explain this very well. I'm just jumping into DDD and a lot of this terminology is still new to me. Thanks!
As lomaxx points out, you need query.Expand.
To prevent your repository from becoming obscured with all kinds of methods for every possible situation, you could create Query Objects which make configurable queries.
I posted some examples using the ICriteria API on my blog. The ICriteria API has FetchMode instead of Expand, but the idea is the same.
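For illustration, here is a minimal query-object-style sketch using ICriteria and SetFetchMode; the User/Roles names follow the question, while the query-object shape is an assumption:
// A configurable query object: eager loading is a switch, not a new method.
public class UserQuery
{
    public bool LoadRoles { get; set; }

    public IList<User> Execute(ISession session)
    {
        var criteria = session.CreateCriteria<User>();
        if (LoadRoles)
            criteria.SetFetchMode("Roles", FetchMode.Join); // join-fetch the collection
        return criteria.List<User>();
    }
}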
I try to keep all the query logic in my repositories and only pass back the ICollection from them.
In your situation, I'd pass in some parameters to determine if you want to eager load roles or not and construct the IQueryable that way. For example:
public IList<Users> GetAllUsers(bool loadRoles)
{
    var query = session.Linq<Users>();
    if (loadRoles)
        query.Expand("Roles"); // configures eager fetching of the Roles association
    return query.ToList();
}
I would choose 2, creating the two repository calls. And perhaps I would consider adding another repository call, GetRoleByUser(User user). That way you could fetch a user's role upon selection change, on a separate thread if required, which would improve performance and avoid loading every user's roles for all of your users, which would require the most resources.
It sounds like you are asking if it is possible to make GetAllUsers() sometimes return just the Users entities and sometimes return the Users and the roles.
I would either make a separate repository method called GetRolesForUser(User user), use lazy loading for Roles, or use the GetAllUsers(bool loadRoles) approach mentioned in lomaxx's answer.
I would lean toward lazy loading roles or a separate method in your repository.
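A minimal sketch of the separate-method option; the HQL and the repository shape are assumptions:
// Hypothetical repository method: loads only one user's roles,
// so GetAllUsers() can stay lightweight.
public IList<Role> GetRolesForUser(User user)
{
    return session
        .CreateQuery("select r from User u join u.Roles r where u.Id = :id")
        .SetParameter("id", user.Id)
        .List<Role>();
}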
Most common ORMs implement persistence by reachability, either as the default object-graph change-tracking mechanism or as an option.
Persistence by reachability means the ORM will walk the aggregate root's object graph and determine whether any objects are reachable (even indirectly) that are not stored inside its identity map (Linq2Sql) or don't have their identity column set (NHibernate).
In NHibernate this corresponds to cascade="save-update"; for Linq2Sql it is the only supported mechanism. Both of them, however, only implement it for the "add" side of things: objects removed from the aggregate root's graph must be marked for deletion explicitly.
In a DDD context one would use a Repository per Aggregate Root. Objects inside an Aggregate Root may only hold references to other Aggregate Roots. Due to persistence by reachability, it is possible that this other root will be inserted into the database even though its corresponding repository wasn't invoked at all!
Consider the following two Aggregate Roots: Contract and Order. Request is part of the Contract Aggregate.
The object graph looks like Contract->Request->Order. Each time a Contractor makes a request, a corresponding order is created. As this involves two different Aggregate Roots, this operation is encapsulated by a Service.
// Unit of Work begins
Request r = ...;
Contract c = ContractRepository.FindSingleByKey(1);
Order o = OrderForRequest(r); // creates a new Order aggregate
r.Order = o;                  // associates the two aggregates
c.Request.Add(r);             // the request becomes part of the Contract aggregate
ContractRepository.SaveOrUpdate(c);
// The Order aggregate is reachable through the graph and will be inserted too
Since this operation happens in a Service, I could still invoke the OrderRepository manually; however, I wouldn't be forced to! Persistence by reachability is a very useful feature inside Aggregate Roots, but I see no way to enforce my Aggregate Boundaries.
Am I overlooking something here? How would you deal with such a scenario?
EDIT: In NHibernate it would indeed be possible to enforce the aggregate root boundary by not marking the aggregate root association with cascade="save-update". I'm stuck with Linq2Sql, however.
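For reference, a minimal sketch of what that would look like in a Fluent NHibernate mapping; the class names follow the example above, while the mapping itself is an assumption:
// Hypothetical mapping: the cross-root reference does not cascade,
// so saving a Contract never implicitly inserts an Order.
public class RequestMap : ClassMap<Request>
{
    public RequestMap()
    {
        Id(x => x.Id);
        References(x => x.Order).Cascade.None();
    }
}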
Honestly, persistence by reachability isn't really an issue with aggregate root boundaries. Remember, aggregate roots can reference each other just fine. Often, I'll use one aggregate root to create another (Customer.CreateOrder for example). The Order is a root as is Customer, and I can still do Customer.Orders[0].Comments = "Foo".
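As a hedged sketch of that factory-style pattern (the class bodies are assumptions, not from the answer):
// One aggregate root creating another: Order is its own root,
// but Customer may still hold references to it.
public class Customer
{
    public IList<Order> Orders { get; } = new List<Order>();

    public Order CreateOrder()
    {
        var order = new Order(this);
        Orders.Add(order);
        return order;
    }
}

public class Order
{
    public Order(Customer customer) { Customer = customer; }
    public Customer Customer { get; }
    public string Comments { get; set; }
}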
I'm not in the habit of changing the domain model and not persisting changes, but letting them evaporate. It's just not a realistic use case IMO.