Is there any reason I shouldn't cache in NHibernate?

I've just discovered the joy of Cache.ReadWrite() in Fluent NHibernate, and have been analyzing the results with NHProf extensively.
It seems to be quite useful, but that usefulness feels a bit deceptive. Is there any particular reason I wouldn't want to cache a very frequently used object from a query? I mean, I presume I shouldn't just go around decorating every single mapping with a Cache property ... or should I?
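For reference, "decorating a mapping with a Cache property" looks roughly like this (a minimal Fluent NHibernate sketch; Product is a hypothetical entity):

    using FluentNHibernate.Mapping;

    public class ProductMap : ClassMap<Product>
    {
        public ProductMap()
        {
            Id(x => x.Id);
            Map(x => x.Name);
            Cache.ReadWrite();   // enables the second-level cache for this entity
        }
    }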

As usual, it depends :)
If something has the potential to be updated by background processes that don't use the second-level cache, or to be changed directly in the database, caching will cause problems.
Entities that are infrequently accessed may not be good candidates for second level caching either, as they will just take up space.
Also, you may see some weirdness if you have collections mapped as inverse: changes will not be picked up by the second-level cache correctly and you'll need to manually evict the collection.
As sJhonny points out below, if you have a web farm scenario (or anywhere your app is running on several servers) you'll need to use a distributed cache (like memcached) instead of the built-in ASP.NET cache.
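If you do hit the inverse-collection case, eviction goes through the session factory; a minimal sketch (the entity and collection role names here are hypothetical):

    // Evict one entity's cached collection; the role name is
    // "<entity full name>.<collection property name>".
    sessionFactory.EvictCollection("MyApp.Domain.Order.Lines", orderId);

    // Evicting a single cached entity works the same way:
    sessionFactory.Evict(typeof(MyApp.Domain.Order), orderId);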

Related

NHibernate lazy loading uses implicit transaction

This seems to be a pretty common problem: I load an NHibernate object that has a lazily loaded collection.
At some later point, I access the collection to do something.
I still have the NHibernate session open (as it's managed per view or whatever), so the access does work, but the transaction has been closed, so in NHProf I get 'use of implicit transactions is discouraged'.
I understand this message and since I'm using a unit of work implementation, I can fix it simply by creating a new transaction and wrapping the call to the lazy loaded collection within it.
My problem is that this doesn't feel right...
I have this great NHibernate framework that gives me nice lazy loading but I can't use it without wrapping every property access in a transaction.
I've googled this a lot, read plenty of blog posts, questions on SO, etc, but can't seem to find a complete solution.
This is what I've considered:
1) Turn off lazy loading. I think this is silly; it's like getting a full-on sports car and then only ever driving it in eco mode. Eager loading everything would hurt performance, and if I just had IDs instead of references, why bother with NHibernate at all?
2) Keep the transaction open longer. Transactions should not be long-lived, and keeping one open for as long as a view is open is just asking for trouble.
3) Wrap every lazy-loaded property access in a transaction (see the sketch after this list). It works, but it's bloated and error-prone (e.g. if I forget to wrap an accessor, it will still work fine; only NHProf will tell me about the problem).
4) Always load all the data for the properties I might need when I load the initial object. Again, this is error-prone, both in loading data that I don't need (because the later call to access it has been removed at some point) and in not loading data that I do need.
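For reference, option 3 currently looks something like this (a minimal sketch; order.Lines is a hypothetical lazily loaded collection):

    // Wrap the lazy load in an explicit transaction so NHProf stops
    // warning about implicit transactions.
    using (var tx = session.BeginTransaction())
    {
        NHibernateUtil.Initialize(order.Lines);  // forces the collection to load here
        tx.Commit();
    }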
So is there a better way?
Any help/thoughts appreciated.
I had the same feelings when I first encountered this warning in NHProf. In web applications, I think the most popular approach is to keep a transaction (and unit of work) open for the whole duration of the request. For desktop applications, managing transactions (as well as sessions) can be painful. You can use an automatic transaction management framework (e.g. Castle) and mark the service methods that should run within a transaction with attributes. With this approach you can wrap multiple operations into a single transaction depending on your requirements. I have also used a session-per-view approach, with one open session per view and manual transaction management (in that case I simply ignored the profiler warnings about implicit transactions).
As for your considerations: I strongly recommend against 2) and 3). 1) and 4) are worth considering. But the general advice is: think it through, try different approaches, and find the solution that best suits your particular situation.
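For the web case, the transaction-per-request idea can be sketched roughly like this (assumptions: Global.asax events, a static SessionFactory, and an "nh.session" context key; real code would usually hide this behind a unit-of-work abstraction):

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        var session = SessionFactory.OpenSession();   // SessionFactory is assumed to exist
        session.BeginTransaction();
        HttpContext.Current.Items["nh.session"] = session;
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var session = (ISession)HttpContext.Current.Items["nh.session"];
        if (session == null) return;
        try
        {
            if (session.Transaction.IsActive)
                session.Transaction.Commit();   // roll back instead if the request failed
        }
        finally
        {
            session.Dispose();
        }
    }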

JBoss TreeCache vs PojoCache when using invalidation rather than replication

We are setting up a JBoss cluster and are building our own distributed cache solution on top of JBoss Cache (we can't use it as a 2nd-level cache for the ORM layer in our case). We want to use invalidation rather than replication as the cache mode. As far as I can see after (very) little testing, both solutions seem to work: objects are put into the cache and seem to be evicted when they are updated on any of the servers.
This leads me to believe that PojoCache with AOP instrumentation is only needed when using replication, so that you can replicate only updated field values and not whole objects. Am I correct here, or are there other advantages to using PojoCache over TreeCache in our scenario? And if PojoCache does have advantages, do we still need AOP instrumentation and to annotate our entities with @PojoCacheable (yes, we are using JBoss Cache 1.4.1) since we are not using replication?
Regards
Jonas Heineson
PojoCache has the ability, through AOP, to:
replicate only changed fields and not whole objects - this makes a difference if, for example, your person object contains a huge image of the person and you only change the password;
detect changes and thus automatically put them on the list to be replicated.
Plain TreeCache does not need AOP, but consequently cannot replicate individual fields or detect what has changed, so you need to trigger replication yourself.
If you don't replicate, those points are probably irrelevant.
IIRC, you don't need the @PojoCacheable annotation for PojoCache - without it, you just need to specify the classes to be enhanced in a different way.
I have the feeling that if you are not replicating, the plain TreeCache will be enough.

One database, two applications, 2nd-level caching and NHibernate

What do I need to know when setting up caching with NHibernate when I have two applications running on different servers but only one database? Are table dependencies generally sufficient to make sure that weird caching problems don't arise? If so, what sort of poll time should I look at?
Well, in order for NHibernate to check for concurrency issues you can add a version field to your entities. That will cause NHibernate to throw a concurrency exception (StaleObjectStateException) when it tries to update an entity that has been modified by someone else in the meantime.
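A minimal sketch of such a versioned mapping in Fluent NHibernate (Invoice and its properties are hypothetical):

    public class InvoiceMap : ClassMap<Invoice>
    {
        public InvoiceMap()
        {
            Id(x => x.Id);
            Version(x => x.Version);   // NHibernate bumps this on every update and
                                       // throws StaleObjectStateException on conflicts
            Map(x => x.Amount);
        }
    }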
If you want to use the second-level cache with multiple servers, I can recommend a distributed implementation of the NHibernate second-level cache, for example NCache:
http://www.alachisoft.com/ncache/nhibernate_index.html

NHibernate cache expiration

I currently use a custom-developed ORM and am planning to move to NHibernate.
At the moment I use both an L1 (session-level) cache and an L2 (application-level) cache.
Whenever an object is requested from the L2 cache by the L1 cache, it checks the database for modifications since the last load, and reloads the object only if it has been modified.
Can I do this with NHibernate? In short, caching never hurts me this way, because I always get the most recent data while still saving object creation and load time.
IMHO it's pointless to have an L2 cache if it needs to hit the DB anyway. That's precisely the point of caching: avoiding hits to the DB as much as possible.
AFAIK there is no caching strategy implemented like the one you describe, but NHibernate L2 caches are entirely pluggable so you could implement it. However, I wouldn't, for the reasons I mentioned above.
Getting outdated data is only an issue if there are other apps or other DALs hitting the same DB besides NHibernate. If that's the case, you could use the SysCache2 implementation, which internally uses SqlCacheDependencies to invalidate cache regions when data in the underlying table changes.
If it's a single app running in a farm, use the Velocity provider.
If there's only one NHibernate app instance hitting the DB, any cache strategy will do and you don't have to worry about getting outdated data.
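Enabling the second-level cache with the SysCache2 provider can be sketched like this (the assembly-qualified provider name comes from the NHibernate.Caches.SysCache2 package; verify it against the version you use):

    var cfg = new NHibernate.Cfg.Configuration();
    cfg.SetProperty("cache.use_second_level_cache", "true");
    cfg.SetProperty("cache.provider_class",
        "NHibernate.Caches.SysCache2.SysCacheProvider, NHibernate.Caches.SysCache2");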
See also:
NHibernate docs about 2nd-level caching
NHibernate 2nd Level Cache at NHibernate Forge
First and Second Level caching in NHibernate at The NHibernate FAQ
Ayende's posts about NHibernate caching
The built-in level-1 cache in NHibernate is not very sophisticated, as it is standalone and in-process in nature. So you definitely need a second-level cache in order to enhance the performance of an NHibernate app; it reduces expensive trips to the database. There are many third-party integrations available that plug in as an NHibernate second-level cache. NCache is one example where no code change is required. Read more here:
http://www.alachisoft.com/ncache/nhibernate-l2cache-index.html

Strict vs NonStrict NHibernate cache concurrency strategies

This question is about the difference between ReadWrite and NonStrictReadWrite cache concurrency strategies for NHibernate's second level cache.
As I understand it, the difference between these two strategies is relevant when you have a distributed replicated cache - nonstrict won't guarantee that one cache has the exact same value as another cache, while strict read/write should - assuming the cache provider does the appropriate distributed locking.
The part I don't understand is how the strict vs nonstrict distinction is relevant when you have a single cache, or a distributed partitioned (non replicated) cache. Can it be relevant? It seems to me that in non replicated scenarios, the timestamps cache will ensure that stale results are not served. If it can be relevant, I would like to see an example.
What you assume is right: in a single-target, single-threaded environment there's little difference. However, if you look at the cache providers, there is quite a bit going on even in a multi-threaded scenario.
How an object is re-cached after being modified differs between the two. For example, if your object is expensive to reload but you'd like it re-cached right after an update instead of footing the next user with the bill, you'll see different performance with strict versus non-strict. Non-strict simply dumps an object from the cache after an update is performed, so the price is paid on the next access rather than in a post-update step. In the strict model, the re-caching is taken care of automatically. A similar thing happens with inserts: non-strict will do nothing, whereas strict will go behind and load the newly inserted object into the cache.
In non-strict you also have the possibility of a dirty read: since the cache entry isn't locked at the time of the read, you may not see the result of another thread's change to the item. In strict mode, the cache key for that item is locked, so you would be held up but would see the absolute latest result.
So, even in a single target environment, if there is a large amount of concurrent reads/edits on objects then you have a chance to see data that isn't really accurate.
This of course becomes a problem when a save is performed while an edit screen is loading: the person who thinks they're editing the latest version of the object really isn't, and they're in for a nasty surprise when they try to save their edits to the stale data they loaded.
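For completeness, the strategy is chosen per mapping; a minimal Fluent NHibernate sketch (these lines go inside a ClassMap constructor):

    Cache.ReadWrite();          // strict: cache entries are locked around updates
    Cache.NonStrictReadWrite(); // non-strict: entries are simply evicted on update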
I have created a post here explaining the differences. Please have a look and feel free to comment.