NHibernate evict by id

Everyone knows that the NHibernate session has a cache.
This cache can generally be cleared by two methods:
Session.Evict
Session.Clear
The second method clears the entire cache, not just a single entry.
I have a business method. It receives the id of a large object (from an aspx site), or sometimes several ids, and performs a native SQL operation in the database (using a SQL query with complex logic, so as not to load all the data into C#). Afterwards I need to invalidate the cache, so that every subsequent load of the object bypasses the cache and goes directly to the database.
Unfortunately, Evict accepts only objects. Its implementation, DefaultEvictEventListener, also has a clear separation in its code path: one branch for proxies and another for non-proxied classes. I tried simply creating an entity, filling in its id manually, and passing it to Evict. This does not work. As I understand it, Evict on a non-proxied class uses GetHashCode to find and remove the object from the cache, so it will not work unless I override GetHashCode. I have a lot of native SQL batch operations, and overriding GetHashCode in every entity class would create a lot of work. I am also not sure whether this approach removes proxies from the cache or not.
Update: as far as I have tried, overriding GetHashCode does not help either. StatefulPersistenceContext.RemoveEntry does not find the entity because it uses RuntimeHelpers.GetHashCode, so this solution is not even possible.
Using the NHibernate sources, I have produced the following solution:
public class NHSessionHelper : DefaultEvictEventListener
{
    // A static class cannot inherit from DefaultEvictEventListener, so keep a
    // private instance around to reach the protected DoEvict method.
    private static readonly NHSessionHelper Listener = new NHSessionHelper();

    public static void RemoveEntityFromCache(ISession session, Type type, object entityId)
    {
        ISessionImplementor sessionImpl = session.GetSessionImplementation();
        IPersistenceContext persistenceContext = sessionImpl.PersistenceContext;
        IEntityPersister persister = sessionImpl.Factory.GetEntityPersister(type.FullName);
        if (persister == null)
        {
            return;
        }

        // Remove both the proxy and the entity itself from the first-level cache.
        EntityKey key = new EntityKey(entityId, persister, sessionImpl.EntityMode);
        persistenceContext.RemoveProxy(key);
        object entity = persistenceContext.RemoveEntity(key);
        if (entity != null)
        {
            EntityEntry e = persistenceContext.RemoveEntry(entity);
            Listener.DoEvict(entity, key, e.Persister, (IEventSource)sessionImpl);
        }
    }
}
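Usage would then look like this (LargeObject and largeObjectId are illustrative names):
NHSessionHelper.RemoveEntityFromCache(session, typeof(LargeObject), largeObjectId);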
It just reuses part of the NHibernate implementation, but duplicating code seems like a bad idea to me. Does anyone have other ideas?

If you are sure that the object is in the cache, Session.Get(id) will not hit the database. It's probably easiest to do that and then Evict the object you get back:
Model m = Session.Get<Model>(id);
Session.Evict(m);
Edit
It is not clear to me whether you are talking about the first-level cache or the second-level cache. The above will evict from the first-level cache. To evict from the second-level cache, use the Evict method on ISessionFactory.
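For instance, a minimal sketch (Model stands in for your entity type):
// Evict every cached Model instance from the second-level cache:
sessionFactory.Evict(typeof(Model));
// Or evict a single instance by id:
sessionFactory.Evict(typeof(Model), id);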
Edit in response to comment
In that case, you might try Session.Load:
Model m = Session.Load<Model>(id);
Session.Evict(m);
If m is in the cache, Session.Load will return that instance, which you can evict. If not, it returns a proxy (no database hit). My tests suggest that Session.Evict will not throw if you try to evict the proxy, so this should work.

It sounds like you could use a stateless session (IStatelessSession) for this and not bother with the cache at all.
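A rough sketch of that idea; the SQL statement and the LargeObject name are illustrative, not from the question:
using (IStatelessSession stateless = sessionFactory.OpenStatelessSession())
{
    // No first-level cache and no change tracking, so there is nothing to invalidate afterwards.
    stateless.CreateSQLQuery("UPDATE LargeObject SET Processed = 1 WHERE Id = :id")
             .SetParameter("id", id)
             .ExecuteUpdate();
}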

Related

Copying NHibernate POCO to DTO without triggering lazy load or eager load

I need to create DTOs from NHibernate POCO objects. The problem is that the POCO objects contain dynamic proxies, which should not be copied to the DTO. I eager-load, in advance, all the collections and references I need to transfer; I don't want NHibernate to start loading referenced collections which I did not load in advance.
Several similar questions on SO received answers which either:
Suggest Session.GetSessionImplementation().PersistenceContext.Unproxy();
Suggest turning off Lazy Loading.
In my case the first suggestion is irrelevant, as according to my understanding it causes eager loading to replace the proxies. In reality, it doesn't even work: it doesn't remove the proxies in my objects. (Any explanation why?)
The second suggestion, turning off lazy loading, seems to cause all references and collections to eager-load, basically loading the entire DB. My expectation was that if lazy loading is off and I have not requested a collection, it will not be loaded. (Am I correct that NHibernate offers no such option?)
I am using NHibernate 3.3.1 with fluent configuration.
To reiterate my main question, I need to create DTOs clean of proxies, copied from POCOs which contain proxies, and I don't want to load the data behind those proxies.
Any helpful suggestion which includes example code and automates the process with ValueInjecter / AutoMapper will be immensely helpful.
Edit #1:
Following Roger Alsing's suggestion to use projections, I realized that what I'm actually looking for is ValueInjecter-like convention-based mapping. Here is why: initially, my DTOs will be defined the same as the model's POCOs, due to a large code base which depends on the existing POCOs being transferred to the client-side project.
Using projections, I will have to specify which subset of fields has to be copied, and this subset may differ in each context (as, ideally, a DTO would). This will mean a lot of new code introduced on the server side, which is why I am looking for a second option.
Using ValueInjecter, I will be able to populate the DTOs by convention in one call, without writing specific projections, or having to maintain those into the future. That is, if I am able to have ValueInjecter ignore Proxy objects.
Given that using projections is a good but not ideal solution in my situation, is there a way to configure something like ValueInjecter to copy POCOs without copying proxies or triggering eager/lazy loads on copy?
I solve this by selecting DTOs as projections, using LINQ or whatever query language the O/R mapper offers.
e.g.
return from c in customers
       select new CustomerDTO
       {
           Name = c.Name,
           Orders = c.Orders.Select(o => new OrderDTO { ... })
       };
This way, you don't need to resort to reflection magic or any other fancy stuff.
And the query fetches exactly what you need in one go; thus, this is usually much more efficient than fetching entities and then transforming them into DTOs in memory.
(It can be less efficient in some cases, if the resulting SQL query contains extra joins for whatever reason.)
I'm using the following ValueResolver with AutoMapper:
/// <summary>
/// ValueResolver that will set NHibernate proxy objects to null, instead of triggering a lazy load of the object
/// </summary>
public class IgnoreNHibernateProxyValueResolver : IValueResolver
{
    public ResolutionResult Resolve(ResolutionResult source)
    {
        var prop = source.Type.GetProperty(source.Context.MemberName).GetValue(source.Value, null);

        // Skip the member if it is an uninitialized NHibernate proxy.
        var proxy = prop as INHibernateProxy;
        if (proxy != null && proxy.HibernateLazyInitializer.IsUninitialized)
        {
            return source.Ignore();
        }
        return source.New(prop);
    }
}
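Wiring the resolver up might look like this (a sketch against the AutoMapper 4.x-era static API; Person, PersonDto and the Address member are assumed names):
Mapper.CreateMap<Person, PersonDto>()
      .ForMember(d => d.Address,
                 o => o.ResolveUsing<IgnoreNHibernateProxyValueResolver>());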
For a ValueInjecter solution, I recommend using SmartConventionInjection (you need to copy the code from the linked page into your solution) and then specifying a convention that won't touch the proxy properties.
Here's a start:
public class MapPoco : SmartConventionInjection
{
    protected override bool Match(SmartConventionInfo c)
    {
        return c.SourceProp.Name == c.TargetProp.Name;
    }
}
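Usage is then a single call per object (PersonDto and person are assumed names):
var dto = new PersonDto();
dto.InjectFrom<MapPoco>(person);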
Take a look at Projections in the Introduction to QueryOver in NH 3.0:
CatSummary summaryDto = null;
IList<CatSummary> catReport =
    session.QueryOver<Cat>()
        .SelectList(list => list
            .SelectGroup(c => c.Name).WithAlias(() => summaryDto.Name)
            .SelectAvg(c => c.Age).WithAlias(() => summaryDto.AverageAge))
        .TransformUsing(Transformers.AliasToBean<CatSummary>())
        .List<CatSummary>();

LINQ functions and DataContext disposal, deferred execution

So I need some advice and insight here. Thanks in advance for your thoughts.
I have developed static functions that return a single record from a LINQ entity, like so:
static FooRecord GetRecord(Guid id)
{
    using (var dc = new FooDataContext())
        return dc.FooRecords.Where(a => a.Id == id).First();
}
This throws an exception because the DataContext is already disposed, which creates problems with deferred execution. This works:
static FooRecord GetRecord(Guid id)
{
    var dc = new FooDataContext();
    return dc.FooRecords.Where(a => a.Id == id).First();
}
I am worried. How quickly will the DataContext be disposed? Obviously if I grab the record immediately this won't cause an issue. However, say I need to grab a record through association:
var record = Data.FooRecord.GetRecord(id);
// Do a bunch of stuff...
// Now we grab the related record from another entity
var barRecord = record.BarRecord;
Is there a risk the DataContext will be gone by this point? Any advice?
You basically do not need to Dispose() your DataContext for the reasons discussed here:
When should I dispose of a data context
http://csharpindepth.com/ViewNote.aspx?NoteID=89
The main reason for implementing IDisposable on a type is to dispose of any unmanaged resources. The only unmanaged resource allocated by the DataContext is the underlying database connection, but the DataContext already takes care of opening and closing the connection as needed.
The main thing you want to avoid is returning an IEnumerable collection and then never enumerating it, as this will cause the connection to remain open indefinitely. However, since you are only returning a single object, you shouldn't have to worry about this.
Also note that if you access any relationship property on the returned object, it may cause the connection to be momentarily reopened so that the property can be lazy-loaded. You can avoid this by using DataLoadOptions.LoadWith() with your DataContext to eager-load any properties you intend to access. See http://msdn.microsoft.com/en-us/library/system.data.linq.dataloadoptions.aspx
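For example, a minimal sketch using the FooRecord/BarRecord association from the question (DataLoadOptions lives in System.Data.Linq):
var dc = new FooDataContext();
var options = new DataLoadOptions();
options.LoadWith<FooRecord>(f => f.BarRecord); // eager-load the association
dc.LoadOptions = options;                      // must be assigned before the first query
var record = dc.FooRecords.First(a => a.Id == id);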
As to the last part of the question, if the returned entities contain properties that can be lazy-loaded, then they will contain internal references back to the DataContext that keep it in memory. Once you have no more references to these entities, the DataContext will of course be garbage-collected just like any other object.

Expected behaviour of a Repository

I'm writing an ORM and am unsure of the expected behaviour of the Repository, or more precisely, the boundary between the Repository and the Unit of Work.
From my understanding, a Repository might look like this:
interface IPersonRepository
{
    IList<Person> Find(Criteria criteria);
    void Add(Person person);
    void Delete(Person person);
}
According to Fowler (PoEAA, page 322):
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. [...] Objects can be added to and removed from the Repository, as they can from a simple collection of objects.
This would imply that the following test should work (assuming that we already have a persisted Person whose last name is Fowler):
var collection = repository.Find(lastnameEqualsFowlerCriteria);
var person = collection[0];
Assert.AreEqual("Fowler", person.LastName);

person.LastName = "Evans";

var newCollection = repository.Find(lastnameEqualsFowlerCriteria);
Assert.IsFalse(newCollection.Contains(person));
That means that when mapping to a database, even if no explicit Save() method has been called anywhere, the Person must have been automatically persisted by the Repository, so that the next query returns the correct collection, no longer containing the original Person.
But, isn't that the role of the Unit Of Work, to decide which model to persist to the database, and when?
In the above implementation, the Repository has to decide to persist the previously retrieved Person when it receives another Find() call, so that the result is consistent with the modification. But if no other Find() call were issued, the model would not have been persisted implicitly at all.
In the context of a Unit of Work, it is not really a problem, because we can start a transaction at the beginning and roll back any insert to the database if needed.
But when used alone, can't this Repository lead to unexpected, unpredictable behaviour?
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. [...] Objects can be added to and removed from the Repository, as they can from a simple collection of objects.
This does not mean you do not need a save method. You still need to explicitly commit your changes to storage.
See The Unit Of Work Pattern And Persistence Ignorance
public interface IUnitOfWork
{
    void MarkDirty(object entity);
    void MarkNew(object entity);
    void MarkDeleted(object entity);
    void Commit();
    void Rollback();
}
In a way, you can think of the Unit of Work as a place to dump all transaction-handling code. The responsibilities of the Unit of Work are to:
Manage transactions.
Order the database inserts, deletes, and updates.
Prevent duplicate updates. Inside a single usage of a Unit of Work object, different parts of the code may mark the same Invoice object as changed, but the Unit of Work class will only issue a single UPDATE command to the database.
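In code, that last point looks roughly like this (an implementation of the interface above is assumed):
unitOfWork.MarkDirty(invoice); // marked in one part of the code
unitOfWork.MarkDirty(invoice); // marked again elsewhere
unitOfWork.Commit();           // still issues a single UPDATE for the invoice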
I think what you're asking about is the following: http://martinfowler.com/eaaCatalog/identityMap.html
The Repository should keep fetched objects in memory, and all subsequent calls for that entity should not hit persistent storage; hence your example should work fine.
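A bare-bones illustration of the Identity Map idea (names are mine, not from the question):
public class IdentityMap<TId, TEntity>
{
    private readonly Dictionary<TId, TEntity> _map = new Dictionary<TId, TEntity>();

    // Returns an already-fetched instance, so repeated finds yield the
    // same object instead of a fresh load from persistent storage.
    public bool TryGet(TId id, out TEntity entity)
    {
        return _map.TryGetValue(id, out entity);
    }

    public void Add(TId id, TEntity entity)
    {
        _map[id] = entity;
    }
}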

How do I "deactivate" an entity instead of deleting it using NHibernate?

I'm not so sure if this is really trivial to do and I'm just over-complicating stuff, but I've been thinking about this for the good part of the past hour or so.
So I have entities. Hence, NHibernate. What I want to do is to only "deactivate" entities whenever I want to "delete" them, instead of physically removing them from the database. (Just because we don't want to actually delete records from our data store.)
All my entities inherit from a BaseEntity class with a BaseEntity.Active property.
What I've got running right now is something like the following in the entity class' mapping file:
<sql-delete>
UPDATE SomeEntityTable SET Active = 0 WHERE Id = ?
</sql-delete>
This works fine, except that I'll have to inject that, customized with the table name, into every single HBM mapping file for every single entity (we're not implementing the BaseEntity inheritance in any subclassing strategy).
As you can see, that can be a bit menial. The coding would be tedious, the maintenance horrendous, and declaring the table name twice in the same mapping file just rubs me the wrong way.
What I was playing around with earlier was whether I could implement an event listener, perhaps OnPreDelete or something, and update the entity's .Active property, like so:
class BaseEventListener : IPreDeleteEventListener
{
    public bool OnPreDelete(PreDeleteEvent @event)
    {
        BaseEntity be = @event.Entity as BaseEntity;
        if (be != null)
            be.Active = false;
        // Returning false means the delete is not vetoed.
        return false;
    }
}
That way, the whole "deactivation" thingy is automated for all entities that support deactivation.
The problem is, I'm thinking that NHibernate would still build a proper DELETE SQL query that will burn my entity from the data store anyway instead of updating the thing, so this'll just be wasted automagic effort.
How should I go about this?
You can use an event listener. You have to add the listener to the configuration as well.
public class SoftDeleteEventListener : DefaultDeleteEventListener
{
    protected override void DeleteEntity(IEventSource session, object entity,
        EntityEntry entityEntry, bool isCascadeDeleteEnabled,
        IEntityPersister persister, ISet transientEntities)
    {
        var softDeletable = entity as BaseEntity;
        if (softDeletable != null)
        {
            // Soft delete: flip the flag; the flush will issue an UPDATE instead of a DELETE.
            softDeletable.Active = false;
        }
        else
        {
            base.DeleteEntity(session, entity, entityEntry, isCascadeDeleteEnabled,
                persister, transientEntities);
        }
    }
}
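Registering it could look like this (a sketch; cfg is your NHibernate Configuration instance):
cfg.EventListeners.DeleteEventListeners =
    new IDeleteEventListener[] { new SoftDeleteEventListener() };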
Since it's pretty clear that you never actually delete your persistent entities (as is the case with most applications), there is no need to use the Delete method just because it's there.
An alternative approach:
Declare a base entity with an Active property
Set it to false for "delete" use cases. You can even add a Delete method to your base entity that does just that (see the sketch below).
You can, additionally, create a filter to avoid loading "deleted" entities
Yes, there is some work involved, but in the long run it's for the best, as you'll still have a maintainable, non-hacky implementation.
Some of the burden can be reduced if you use a code+convention-based mapping approach, like ConfORM or Fluent.
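A minimal sketch of that base entity (names and details are up to you):
public abstract class BaseEntity
{
    protected BaseEntity()
    {
        Active = true; // entities start out active
    }

    public virtual Guid Id { get; set; }
    public virtual bool Active { get; set; }

    // "Deleting" just deactivates the entity; an UPDATE is issued on flush.
    public virtual void Delete()
    {
        Active = false;
    }
}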

NHibernate not persisting collections

I have a rather strange error with NHibernate. I was having an error with an ISession being shared across threads, and resolved it by supplying my own ADO.NET connection, like so:
IDbConnection connection = new SqlConnection(ApplicationConfiguration.ConnectionString);
connection.Open();
ISession session = _sessionFactory.OpenSession(connection);
session.FlushMode = FlushMode.Commit;
return session;
My application now works, but all objects with collections are being persisted in the database without their collections. For example, say a car has a list of tyres. I create a car, then generate a list of tyres based on tyres already in the database. Saving the car object will only save the car, not the list!
Any help on what I am doing wrong? I am using NHibernate 2.0, and I do call Session.Flush() and Transaction.Commit().
Cheers.
Hi, I figured out the reason why the collections were not being persisted. My unit of work was invoking a property which returned an ISession object to persist my objects. However, this property actually returned a new ISession for each call. Once I corrected this to use the same ISession within each unit of work, the objects were persisted properly. Thanks for all your help though.
Check out the cascade attribute on your collection mapping; by default this is set to 'none', meaning child entities need to be explicitly saved. You probably want cascade="all" or cascade="all-delete-orphan".
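If you are mapping with Fluent NHibernate instead of HBM files, the equivalent could be sketched as follows (Car and Tyre are illustrative names):
public class CarMap : ClassMap<Car>
{
    public CarMap()
    {
        Id(x => x.Id);
        // Children are saved, updated and orphan-deleted along with the parent.
        HasMany(x => x.Tyres).Cascade.AllDeleteOrphan();
    }
}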
Are you using NHibernate.ISession.Save(object) on the tyres list before the flush and commit?