Problems with NHibernate FlushMode Never

We're building a large application with NHibernate as ORM layer. We've tried to apply as many best practices as possible, among which setting FlushMode to Never. However, this is giving us pain, for example the following scenario:
There is a table with an end date column. From this table, we delete the last (by end date) record:
The record is deleted;
After the delete, we do a (repository) query for the last record (by end date);
This last record is updated because it's the new active record.
This is a very simple scenario, of which many exist. The problem here is that when we do the query, we get the deleted record back, which of course is not correct. This roughly means that we cannot do queries in business logic that may touch an entity being inserted or deleted, because it is, respectively, not there yet or still there.
How can I work with this scenario? Are there ways to work around this without reverting the FlushMode setting or should I just give up on the FlushMode setting all together?

How can I work with this scenario? Are there ways to work around this
without reverting the FlushMode setting
FlushMode.Never does not prevent you from manually calling Flush() when you want to deal with up-to-date data. I guess that is the way to work with this scenario without changing the FlushMode.
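For illustration, a minimal sketch of that approach applied to the delete-then-query scenario from the question; the repository method and the IsActive property are made up:

using (var tx = session.BeginTransaction())
{
    var last = repository.GetLastRecordByEndDate();   // hypothetical repository query
    session.Delete(last);

    // With FlushMode.Never the delete is only queued in the session,
    // so flush manually before querying again:
    session.Flush();

    var newLast = repository.GetLastRecordByEndDate(); // now returns the new last record
    newLast.IsActive = true;                           // hypothetical property

    session.Flush();                                   // push the update as well; commit alone won't flush
    tx.Commit();
}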
or should I just give up on the FlushMode setting all together?
Could you provide some reference on FlushMode.Never being a good practice in the general case? It seems like FlushMode.Never is a fit when dealing with large, mostly read-only, sets of objects.
http://jroller.com/tfenne/entry/hibernate_understand_flushmode_never

FlushMode.Never is a best practice only when you absolutely require fine-grained control. FlushMode.Auto will cover 99.99% of the cases without a problem. That said, decorating your CUD operations with an ISession.Flush() will not hurt, as it only involves a database round trip if there are any CUD actions in the internal action queue.

Flush mode Never means NHibernate will never flush the session; it's up to you to do that. So, session.Delete() will not actually delete the record from the database, it just marks the object for deletion in the session's cache. You can force a flush by calling session.Flush() after calling session.Delete().
I think Auto is a better option; with Auto, NHibernate will flush the session automatically before querying for data.

Related

Fix inconsistent state right away or lazily when data is requested

Our users go through several steps of a workflow - the further they go, the more objects we create. We also allow users to go back to Step#1 and change one of the existing objects, which may cause inconsistencies, so we must update/delete some of the objects at Step#2. I see 2 options:
Update/delete objects from Step#2 right away. This leads to:
Operation that's supposed to be a simple PATCH of an entity field becomes complicated. And it's a shared object between multiple workflows - so we'll have to add if-statements and do different things depending on the workflow.
Circular dependencies. Operations on Step#1 have to know about objects/operations on Step#2.
On each request in Step#1 we'd have to load data for Step#2 in order to determine whether Step#2 really needs to be updated, which slows down operations on Step#1. So to change 1 record in the DB we'd have to load hundreds (or even thousands) of records for Step#2.
Many actions on Step#1 may need to fix state at Step#2, so we have to ensure we don't forget anything, today and in the future.
Fix Step#2 lazily - when user goes there (our current approach). Step#2 will recognize that objects are inconsistent and fix them. Which leads to just 1 place where we need to care, but:
Until the user opens Step#2, the DB will contain inconsistent objects. This hasn't resulted in any problems so far, but I can imagine it may complicate future SQL migrations.
We update DB state on a GET request. This doesn't seem like that big of a deal since GET stays idempotent anyway, but it still feels awkward.
Does anyone know better approaches? Or maybe improvements to these two?
Update
I haven't found a perfect solution, but eventually we implemented an improved version of #1. When updating state on Step#1 we also set a "need to rebuild Step#2" flag; when the UI opens Step#2 it first checks this flag and issues a PUT to rebuild the state, and only then GETs Step#2.
This still means that the DB state is inconsistent for some period of time, but at least we'll know this for sure from the flag in the DB, and if needed we could write migrations that take this flag into account. This also allows us (if needed in the future) to create an async job to fix the state.
I think it is more flexible to separate the state from the context where the objects are stored. Any creation of a new object at any step is then accompanied by preserving the invariants and the consistency of the context.
There are separate rules for the states - rules for transitioning from one state to another and for which objects may be created - and separate rules for the context - rules for its consistency, which are enforced every time it changes.
What about asynchronous cleanup of dirty data?
1. Whenever the user goes back to Step #1 and changes something, mark all related data as "dirty" (e.g. add links to it in a "DirtyData" table) and be done for now.
2. Have a DataCleanup worker (e.g. a separate thread or similar) that constantly looks for data to be cleaned up.
3. Before editing data for Step #2, check that the data is not dirty.
Depending on your logic, step 3 might result in a user error (e.g. the user would need to repeat Step #2). If the DataCleanup worker has enough resources (i.e. it processes the DirtyData table almost instantaneously), that should only happen on very rare occasions. If that is not OK, you could opt to check for dirty data on each fetch, but that could be expensive.
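As a rough illustration of steps 1 and 2 above, here is a sketch with an in-memory queue standing in for the DirtyData table and a background worker rebuilding Step #2; all names are hypothetical:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class DirtyDataQueue
{
    private readonly ConcurrentQueue<int> _dirtyWorkflowIds = new ConcurrentQueue<int>();

    // Step #1 calls this instead of touching Step #2 directly.
    public void MarkDirty(int workflowId) => _dirtyWorkflowIds.Enqueue(workflowId);

    public bool TryTake(out int workflowId) => _dirtyWorkflowIds.TryDequeue(out workflowId);
}

public class DirtyDataCleanupWorker
{
    private readonly DirtyDataQueue _queue;
    private readonly Action<int> _rebuildStep2;   // hypothetical callback that fixes Step #2

    public DirtyDataCleanupWorker(DirtyDataQueue queue, Action<int> rebuildStep2)
    {
        _queue = queue;
        _rebuildStep2 = rebuildStep2;
    }

    // Runs on a background thread and constantly looks for data to clean up.
    public Task RunAsync(CancellationToken token) => Task.Run(() =>
    {
        while (!token.IsCancellationRequested)
        {
            if (_queue.TryTake(out var workflowId))
                _rebuildStep2(workflowId);
            else
                Thread.Sleep(200);   // nothing dirty right now, back off briefly
        }
    });
}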
It sounds like you're familiar with the HTTP spec regarding GET requests, but for future readers:
Why shouldn't a GET request change data on the server?
Why is using a HTTP GET to update state on the server in a RESTful call incorrect?
For the other bullet under 2, we probably don't need a specification to agree that persisting valid data is preferable to persisting invalid data.
So what can we do for the bullets under 1 to avoid complex branching logic in a particular step and also circular dependencies? My suggestion is an event-driven design. When a step's data changes, it should fire a change event. In this scenario, the step firing the event has no knowledge of the concrete listener(s) who may receive its events, so it remains decoupled from any complex handling logic.
There's probably no way to guarantee you don't forget anything in the future; but if every step in the workflow is defined as a listener, it forces you to consider change events to some extent every time you implement a new step.
One side note on granularity: if a step has many changes, it can batch up its events rather than fire each one individually. You can adjust the size for efficiency.
In summary, I would strongly consider the Observer design pattern.
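A rough sketch of that idea, under the assumption that each step publishes change events and dependent steps subscribe; all type and method names here are made up:

using System.Collections.Generic;

public class StepChangedEvent
{
    public int WorkflowId { get; set; }
    public IReadOnlyList<int> ChangedObjectIds { get; set; }   // batched rather than one event per object
}

public interface IStepChangeListener
{
    void OnStepChanged(StepChangedEvent e);
}

public class Step1Service
{
    private readonly List<IStepChangeListener> _listeners = new List<IStepChangeListener>();

    public void Subscribe(IStepChangeListener listener) => _listeners.Add(listener);

    public void ChangeObject(int workflowId, int objectId)
    {
        // ... apply the change to Step #1's own objects ...

        // Notify listeners (e.g. Step #2, which can rebuild or delete its dependent objects)
        // without Step #1 knowing who they are.
        var evt = new StepChangedEvent { WorkflowId = workflowId, ChangedObjectIds = new[] { objectId } };
        foreach (var listener in _listeners)
            listener.OnStepChanged(evt);
    }
}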

Nested transactions for testing

I was wondering if it's good practice to nest two transactions? For example, wrapping my NHibernate transaction with a TransactionScope for the benefit of the tests (making sure that the DB rolls back all the changes that were made in the test).
The other option is to keep references to the entities that I insert into the DB and delete them at the end of the test.
Which one is better?
First of all, NHibernate doesn't support nested transactions!
TransactionScope, on the other hand, will not create a new transaction if one is already open. If you only use a TransactionScope, it will create a new transaction for the connection.
If you then open a transaction within the scope, this will still work with NHibernate.
Back to your question, it pretty much depends on the number of objects you create within the TransactionScope. If it becomes too many, you will simply spam the transaction log of your database. Apart from that, the concept is perfectly fine, I would say.
One important thing to mention: if you use TransactionScope and create multiple sessions/transactions with NHibernate, the scope might escalate to a distributed transaction, which requires MSDTC to run on the target server; otherwise it will simply fail.
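To illustrate the concept discussed above, a minimal sketch of such a test (NUnit-style); the sessionFactory field and the Order entity are assumed to exist elsewhere:

using System;
using System.Transactions;
using NHibernate;
using NUnit.Framework;

[TestFixture]
public class OrderRepositoryTests
{
    private ISessionFactory sessionFactory;   // assumed to be built in a setup method

    [Test]
    public void SavingAnOrder_IsRolledBackAfterTheTest()
    {
        using (var scope = new TransactionScope())             // ambient transaction for the test
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())   // NHibernate transaction inside the scope
        {
            session.Save(new Order { EndDate = DateTime.Today });
            tx.Commit();   // data is written, so assertions can hit the real database

            // ... assertions against the database go here ...

            // scope.Complete() is deliberately not called: disposing the scope
            // rolls back everything the test inserted.
        }
    }
}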

Repeating a query does not refresh the properties of the returned objects

When doing a criteria query with NHibernate, I want to get fresh results and not old ones from a cache.
The process is basically:
Query persistent objects into the NHibernate application.
Change database entries externally (another program, manual edit in SSMS / MSSQL, etc.).
Query the persistent objects again (with the same query code); the previously loaded objects should be refreshed from the database.
Here's the code (slightly changed object names):
public IOrder GetOrderByOrderId(int orderId)
{
    ...
    IList result;
    var query =
        session.CreateCriteria(typeof(Order))
            .SetFetchMode("Products", FetchMode.Eager)
            .SetFetchMode("Customer", FetchMode.Eager)
            .SetFetchMode("OrderItems", FetchMode.Eager)
            .Add(Restrictions.Eq("OrderId", orderId));
    query.SetCacheMode(CacheMode.Ignore);
    query.SetCacheable(false);
    result = query.List();
    ...
}
The SetCacheMode and SetCacheable have been added by me to disable the cache. Also, the NHibernate factory is set up with config parameter UseQueryCache=false:
Cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "false");
No matter what I do, including Put/Refresh cache modes for the query or the session, NHibernate keeps returning outdated objects the second time the query is called, without the externally committed changes. For info: the outdated value in this case is the value of a Version column (to test whether a stale object state can be detected before saving). But I need fresh query results for multiple reasons!
NHibernate even generates an SQL query, but its results are never used for the values returned.
Keeping the sessions open is necessary to do dynamic updates on dirty columns only (stateless sessions are not a solution either!); I don't want to add Clear(), Evict() or the like everywhere in the code, especially since the query is on a lower level and doesn't know about the objects previously loaded. Pessimistic locking would kill performance (multi-user environment!).
Is there any way to force NHibernate, by configuration, to send queries directly to the DB and get fresh results, not using unwanted caching functions?
First of all: this doesn't have anything to do with second-level caching (which is what SetCacheMode and SetCacheable control). Even if it did, those control caching of the query, not caching of the returned entities.
When an object has already been loaded into the current session (also called the "first-level cache" by some people, although it's not a cache but an Identity Map), querying it again from the DB using any method will never overwrite its values.
This is by design and there are good reasons for it behaving this way.
If you need to update potentially changed values in multiple records with a query, you will have to Evict them previously.
Alternatively, you might want to read about Stateless Sessions.
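For example, a sketch of evicting an already-loaded Order before querying it again, reusing the entities from the question:

// The order is loaded once and now lives in the session's identity map.
var stale = session.Get<Order>(orderId);

// ... the row is changed externally (other program, SSMS, etc.) ...

session.Evict(stale);   // detach it from the session so the next query re-hydrates it

var fresh = session.CreateCriteria(typeof(Order))
    .Add(Restrictions.Eq("OrderId", orderId))
    .UniqueResult<Order>();

// Alternatively, session.Refresh(stale) re-reads the existing instance in place.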
Is this code running in a transaction? Or is that external process running in a transaction? If one of those two is still in a transaction, you will not see any updates.
If that is not the case, you might be able to find the problem in the log messages that NHibernate is creating. These are very informative and will always tell you exactly what it is doing.
Keeping the sessions open is necessary to do dynamic updates on dirty columns only
This is either the problem or it will become a problem in the future. NHibernate is doing all it can to make your life better, but you are trying to do as much as possible to prevent NHibernate from doing its job properly.
If you want NHibernate to update the dirty columns only, you could look at the dynamic-update attribute in your class mapping file.
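In an hbm.xml mapping this is dynamic-update="true" on the class element; as an illustration, the Fluent NHibernate equivalent (the mapping details here are made up) looks roughly like this:

using FluentNHibernate.Mapping;

public class OrderMap : ClassMap<Order>
{
    public OrderMap()
    {
        Table("Orders");
        DynamicUpdate();          // UPDATE statements only include the dirty columns
        Id(x => x.OrderId);
        Map(x => x.EndDate);
        Version(x => x.Version);  // the Version column mentioned in the question
    }
}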

Flushing with Castle ActiveRecord

I saw that I can use SessionScope and have inserts inside the scope of the SessionScope, with a flush at the end of the scope.
My question is whether I can define, in some way, that after, let's say, every 10 insertions/saves of objects, they will automatically be flushed to the DB.
In other words, I want to be able to configure the way I use flush with Castle ActiveRecord.
P.S.: is there any way to configure cascading behavior for objects like in NHibernate?
You could hook up your own IPostInsertEventListener where you keep track of insertion count and flush accordingly. But I recommend against this unless you have some very good reasons to do so.
The relevant attributes have a Cascade property to set cascading behavior. See for example HasMany.
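For example, a hedged sketch with illustrative Order/OrderItem entities, cascading on a HasMany looks like this:

using System.Collections.Generic;
using Castle.ActiveRecord;

[ActiveRecord]
public class Order : ActiveRecordBase<Order>
{
    [PrimaryKey]
    public int Id { get; set; }

    // Saves and deletes cascade from the order to its items.
    [HasMany(typeof(OrderItem), Cascade = ManyRelationCascadeEnum.AllDeleteOrphan)]
    public IList<OrderItem> Items { get; set; }
}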

How to create a read-only session in NHibernate?

Is it possible to create a read-only connection in NHibernate?
Read-only: where NHibernate will not flush any changes to the underlying database, implicitly or explicitly.
When closing an NHibernate connection, it automatically flushes the changes to the persistent objects.
Setting the flush mode to Never is one way, but that is reversible (i.e. some code can reset the flush mode).
I think you've already found the solution: setting the flush mode to Never. Yes, it is changeable, but even if it weren't, code could simply create another session with a different flush mode.
I think the appropriate solution is to suggest read-only with session.FlushMode = FlushMode.Never and enforce it by using a connection to the database that only has SELECT permissions (or whatever is appropriate for your situation). Maintaining separate ISessionFactory factories might help by allowing something like ReadOnlySessionFactory.Create().
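A sketch of that combination; the underlying factory setup (against a SELECT-only connection) is assumed to exist elsewhere:

using NHibernate;

public static class ReadOnlySessionFactory
{
    private static ISessionFactory factory;   // assumed: built once with read-only credentials

    public static ISession Create()
    {
        var session = factory.OpenSession();
        session.FlushMode = FlushMode.Never;   // the session never flushes on its own
        return session;
    }
}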
Take a look at Read Only entities that became available in NHibernate 3.1 GA
https://nhibernate.jira.com/browse/NH-908
There is a newer readonly feature in NHibernate (I don't know which version, but it's in 3.3.0 for sure). You can set the session to read only using this:
session.DefaultReadOnly = true
It stops NHibernate from keeping a snapshot of the old values (used for dirty checking), and therefore improves performance and memory consumption.
There is a chapter about read-only entities in the NHibernate reference documentation.
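For illustration, a minimal sketch (the Order entity and sessionFactory are assumed):

using (ISession session = sessionFactory.OpenSession())
{
    session.DefaultReadOnly = true;   // entities load read-only; no snapshot of old values is kept

    var orders = session.QueryOver<Order>().List();

    // Changes to these objects are not dirty-checked and will not be flushed.
}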
Accumulating updates and just never flushing them seems like a bad solution to me.
I posted a similar question. The solution provided there uses a different approach: all the events are set to empty, and thus ignored. My feeling is that it's a better approach.
I am surprised that this is not easier to do. I like the Entity Framework approach of using the .AsNoTracking() extension method, which ensures that read-only queries remain that way.
How to create an NHibernate read-only session with Fluent NHibernate that doesn't accumulate updates?