NHibernate, Transaction rollback and Entity version

At the moment I'm trying to implement code that handles stale state exceptions (i.e., another user has changed this row, etc.) gracefully when committing a transaction using NHibernate. The idea is, when the exception occurs during the flush, to roll back the transaction, "fix" the entities through different means, and then rerun the whole transaction code again.
My problem is that when the transaction rolls back, the version property has still been incremented on those entities that successfully updated the database, even though the database transaction has been rolled back (this is actually also true for the entity that failed the transaction). This means the second run will never succeed, because the versions are out of sync with the database.
How do I solve this problem?

When an NHibernate exception is thrown, you MUST throw away that session, as its state is no longer considered valid.
That implies re-getting the entities too.
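For example, a minimal retry sketch, assuming a session-per-attempt pattern (the work delegate and its contents are yours to supply; only the NHibernate calls are real API):

using System;
using NHibernate;

public static class RetryHelper
{
    // Re-runs the whole unit of work with a brand new session on each attempt,
    // so every retry re-fetches the entities (and their current versions).
    public static void RunWithRetry(ISessionFactory factory, Action<ISession> work, int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            using (ISession session = factory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                try
                {
                    work(session);  // load fresh entities and apply the changes again
                    tx.Commit();    // flush happens here; may throw StaleObjectStateException
                    return;
                }
                catch (StaleObjectStateException)
                {
                    tx.Rollback();
                    if (attempt == maxAttempts) throw;
                    // The using blocks dispose this session; the next attempt
                    // starts clean, as required after an NHibernate exception.
                }
            }
        }
    }
}

The key point is that the failed session, and every entity loaded through it, is thrown away together; the retry re-fetches everything with current version numbers.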

Related

Wildfly 10 + Eclipselink 2.6.4 - @Observes(during = TransactionPhase.AFTER_SUCCESS) - Get stale entity

I have noticed a strange problem on Wildfly 10 that does not occur in WebLogic.
In summary, on Wildfly I am seeing that an @ApplicationScoped bean that observes an event (@Observes(during = TransactionPhase.AFTER_SUCCESS) SomeEvent event), and that in its observer logic uses an @EJB to open up a fresh, short-lived transaction context, is getting access to a stale entity modified by the original MDB transaction that published the event.
And this "single thread" phenomenon can lead to multiple interesting exceptions.
Type of error 1: optimistic lock exception.
This happens if your AFTER_SUCCESS observer itself wants to run a short transaction that modifies the entity of the primary transaction that published the event.
Well, on the first try you get an optimistic locking exception.
You get it because, even though the MDB ran its transaction on Thread 1 and your serial observer is observing after success on the same Thread 1, the MDB has successfully committed its changes to the DB (you can verify them there), while your observer loads the entity from the entity manager as if it were stale, in the state from before the MDB ran.
So the @Version info is simply stale, and EclipseLink thinks you have an optimistic lock exception.
If you allow the optimistic lock exception to happen and retry the transaction a second time on the same Thread 1, you overcome the problem and manage to get your changes through.
This is sloppy and ugly, but it works.
The second type of problem is more serious.
If your observer on success does a read-only transaction, then nothing blows up and you get no indication at all that you are using stale data.
This is bad, really really bad.
So here is what I tried in my observer, something like:
MyEntity entity = em.find(MyEntity.class, primaryKeyOfEntity); // still returns the stale state
String staleValue = entity.getFieldModifiedByMdbTransaction();
em.refresh(entity); // force a reload from the database
String secondTryToGetTheValue = entity.getFieldModifiedByMdbTransaction();
// now the value is no longer stale; it is exactly what the MDB that published the event committed
Using the same EclipseLink version on WebLogic 12.2.1.2, the pattern of observing AFTER_SUCCESS does not return any stale data.
The changes made in the first MDB transaction are properly visible to the AFTER_SUCCESS transaction that runs after it on the same thread.
Is anyone aware of this?
Is there a way to tell the entity manager that it should use the server session cache and not the local cache?
Could it be that EclipseLink only moves the unit-of-work cache of modified entities to the server session cache after the AFTER_SUCCESS event is handled, and not on commit? Meaning, the server session cache is holding stale entities until AFTER_SUCCESS has finished? This should not be: when the commit is fired to the DB, and before AFTER_SUCCESS gets called, the unit-of-work cache must be published to the session cache.
However, the AFTER_SUCCESS observer literally runs under @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW); when it starts, it should have an empty unit of work.
In addition, I have attempted entityManager.clear(); it has no effect.
The only thing that works in bringing out a non-stale entity is em.refresh().
And an em.refresh() always smells like sloppy code to me.
Has anyone else experienced the same problem on Wildfly?
Any magical EclipseLink persistence unit property to work around this without killing performance?

NHibernate RollbackTransaction and why to dispose of session after rollback?

I am working on a system using NHibernate, and I see a lot of the following two lines in exception catch blocks:
session.Flush();
session.RollbackTransaction();
I am very confused by this logic; it looks like unnecessary work to flush the changes and then roll the transaction back.
I wanted to set up an argument for removing these Flush calls and relying on just the RollbackTransaction method, but then I came across this question. Reading further into the linked documentation, I found the following nugget of information:
If you rollback the transaction you should immediately close and discard the current session to ensure that NHibernate's internal state is consistent.
What does this mean? We currently pair our session lifetime with our web request's begin and end operations, so I am worried that the reason we call Flush and THEN rollback is to keep the session in a valid state.
Any ideas?
NHibernate does object tracking via the session, and all the changes you make to entities are stored there; when you flush, those changes are written to the DB. If you get an exception while doing so, the session state is no longer consistent with the database state, so if you roll back at this stage it will roll back the DB transaction, but the session's values will not be rolled back.
By design, once that happens the session can no longer be used reliably (even Session.Clear() will not help).
If you use session-per-request and you get an error, the best approach is to display an error to the user and ask them to retry the operation. The other option is to create a brand new session and use it to fetch the data needed to display the error.
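A sketch of that second option (only the NHibernate calls are real API; the surrounding structure is illustrative):

using NHibernate;

public void CompleteRequest(ISessionFactory factory, ISession requestSession, ITransaction tx)
{
    try
    {
        tx.Commit();  // pending changes are flushed here and may throw
    }
    catch (HibernateException)
    {
        tx.Rollback();
        requestSession.Dispose();  // the request session is no longer trustworthy

        // If the error page needs data, read it through a fresh session.
        using (ISession errorSession = factory.OpenSession())
        {
            // read-only queries for the error view go here
        }

        throw;  // surface the failure, e.g. as a "please retry" message
    }
}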
This Flush before Rollback is very likely a trick to work around a bug in the application caused by reusing the session after the rollback.
As you have found out yourself, the session must not be used after a rollback. The application is making that mistake, as per your comment.
Without the Flush before the Rollback, the session would still consider the changes pending and would commit them at the next Flush, defeating the purpose of the Rollback. Doing that Flush before the rollback causes the pending changes to be flushed, then rolled back, which prevents the session from flushing them again later.
But the session is still not in a consistent state, so by continuing to use it, the application stays at risk. The session cache still holds the changes that were attempted and then rolled back; the session just no longer considers them pending changes awaiting a flush. If the application's later usages of the session touch those entities, their state will still be the modified one from the rolled-back transaction, although they will not be considered dirty.
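To illustrate that failure mode (the Order entity and its Status property are invented for the example):

using System;
using NHibernate;

public class Order
{
    public virtual Guid Id { get; set; }
    public virtual string Status { get; set; }
}

public static class StaleCacheDemo
{
    public static void Run(ISession session, Guid orderId)
    {
        using (ITransaction tx = session.BeginTransaction())
        {
            var order = session.Get<Order>(orderId);
            order.Status = "Shipped";  // tracked as a pending change

            session.Flush();  // the change is written to the DB...
            tx.Rollback();    // ...and then rolled back in the DB
        }

        // WRONG: continuing to use the same session after the rollback.
        var sameOrder = session.Get<Order>(orderId);
        // sameOrder.Status is still "Shipped": the first-level cache kept the
        // modified object even though the database was rolled back, and the
        // session no longer treats it as dirty.
    }
}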

Can a NHibernate transaction be continued after an exception?

I am using NHibernate to save objects that require that one of the properties on these objects must be unique. The design of the system is such that it is possible that an attempt may be made occasionally to save the same object twice. This of course causes a violation of the uniqueness constraint and NHibernate throws an exception. The exception happens at the time I am attempting to save the object, rather than at Transaction.Commit() time. When this happens I want to simply catch the exception, discard the object and continue on saving other similar objects. However, I have not found a way to allow this to happen. Once that exception has happened I cannot carry on to save other objects and commit the transaction.
The only work-around I have found for this is to always check if the object exists first by running a query on the unique property. That works, but it seems unnecessarily expensive. I would like to avoid that extra hit to the db. Is there a way to do this?
Thanks
The issue you've described must be solved at a higher level than the NHibernate session. Take a look at 9.8. Exception handling; extract:
If the ISession throws an exception you should immediately rollback the transaction, call ISession.Close() and discard the ISession instance. Certain methods of ISession will not leave the session in a consistent state.
So what I would suggest is to wrap the call to your data layer (DL) with some validation, and place the if/try logic outside of the session.
Because even when we use versioning (see 5.1.7. version), a very powerful way to survive concurrency, we are still handed stale state exceptions and have to resolve them outside of the DL.
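For example, one way to keep the try logic outside the session is to give each candidate object its own short session, so a duplicate only costs that one small unit of work (the Thing type is invented; the NHibernate calls are real API):

using System.Collections.Generic;
using NHibernate;

public class Thing
{
    public virtual int Id { get; set; }
    public virtual string UniqueKey { get; set; }  // column with the unique constraint
}

public static class BatchSaver
{
    public static void SaveAllIgnoringDuplicates(ISessionFactory factory, IEnumerable<Thing> things)
    {
        foreach (var thing in things)
        {
            try
            {
                // One short session/transaction per object: if this one violates
                // the unique constraint, only this session is poisoned.
                using (ISession session = factory.OpenSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    session.Save(thing);
                    tx.Commit();
                }
            }
            catch (ADOException)
            {
                // Unique constraint violated: discard this object and carry on.
                // The using blocks already disposed the failed session.
            }
        }
    }
}

This trades the up-front existence query for one extra round trip only when a duplicate actually occurs.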

NHibernate concurrency

I am new to NHibernate. I am curious to know what happens if two processes on different machines pull up the same record at the same time, both of them modify the record, and one of them submits the record before the other. Will the second process roll back the transaction and throw an error message saying that the record has already been updated?
No, not by default. However, using <version> mapping will help you with this.
version-mapping
Optimistic concurrency control
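A minimal sketch of such a mapping (class and property names invented; note that <version> must come directly after <id> in the mapping file):

// C# entity: NHibernate manages the Version property for optimistic locking
public class Invoice
{
    public virtual Guid Id { get; set; }
    public virtual int Version { get; set; }  // incremented by NHibernate on each update
    public virtual decimal Amount { get; set; }
}

<!-- Invoice.hbm.xml (assembly/namespace attributes omitted) -->
<class name="Invoice">
  <id name="Id">
    <generator class="guid.comb" />
  </id>
  <version name="Version" />
  <property name="Amount" />
</class>

With this in place, the process that commits second fails with a StaleObjectStateException instead of silently overwriting the first update, and can roll back and report that the record was already changed.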

NHibernate, ActiveRecord, Transaction database locks and when Commits are flushed

This is a common question, but the explanations found so far and the observed behaviour are some way apart.
We want the following NHibernate strategy in our MVC website:
A SessionScope for the request (to track changes)
An ActiveRecord.TransactionScope to wrap our inserts only (to enable rollback/commit of the batch)
Selects to be outside a Transaction (to reduce extent of locks)
Delayed Flush of inserts (so that our insert/updates occur as a UoW at end of session)
Now currently we:
Don't get the implied transaction from the SessionScope (with FlushAction Auto or Never)
If we use ActiveRecord.TransactionScope there is no delayed flush and any contained selects are also caught up in a long-running transaction.
I'm wondering if it's because we have an old version of NHibernate (it was from trunk, very near 2.0).
We just can't get the expected NHibernate behaviour, and performance sucks (we're using NHProf and SQL Profiler to monitor DB locks).
Here's what we have tried since:
Written our own TransactionScope (inherits from ITransactionScope) that (see the sketch after this list):
Opens an ActiveRecord.TransactionScope on the Commit, not in the ctor (delays the transaction until needed)
Opens a SessionScope in the ctor if none is available (as a guard)
Converted our ids to Guid from identity
This stopped the auto-flush of inserts/updates outside of the Transaction (!)
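Roughly, that wrapper looks like the following sketch (ITransactionScope is our own interface per the above, and the exact Castle ActiveRecord signatures here are from memory, so treat them as assumptions):

using System;
using Castle.ActiveRecord;

// Sketch only: defers the real ActiveRecord TransactionScope until Commit,
// so SELECTs issued before Commit run outside any transaction.
public class LazyTransactionScope : IDisposable
{
    private readonly SessionScope _guardScope;

    public LazyTransactionScope()
    {
        // Guard: make sure a session scope exists for change tracking,
        // but do NOT open a transaction yet.
        if (SessionScope.Current == null)
            _guardScope = new SessionScope(FlushAction.Never);
    }

    public void Commit()
    {
        // The transaction only lives for the duration of the flush, so the
        // batched inserts/updates hit the DB as one short atomic unit.
        using (var tx = new TransactionScope())
        {
            tx.VoteCommit();  // committing the scope flushes the pending changes
        }
    }

    public void Dispose()
    {
        if (_guardScope != null)
            _guardScope.Dispose();
    }
}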
Now we have the following application behaviour:
Request from MVC
SELECTs needed by services are fired, all outside a transaction
Repository.Add calls do not hit the db until scope.Commit is called in our Controllers
All INSERTs / UPDATEs occur wrapped inside a transaction as an atomic unit, with no SELECTs contained.
... but for some reason NHProf now disagrees with SQL Profiler (selects seem to happen in the DB before NHProf reports them).
NOTE
Before I get flamed, I realise the issues here, and know that the SELECTs aren't in the Transaction. That's the design. Some of our operations will contain the SELECTs in serialised transactions (we now have a couple of our own TransactionScope implementations). The vast majority of our code does not need up-to-the-minute live data, and we have serialised workloads with individual operators.
ALSO
If anyone knows how to get an identity column (non-PK) refreshed post-insert without needing to manually refresh the entity, in particular using ActiveRecord markup (I think it's possible in NHibernate mapping files using a 'generated' attribute), please let me know!!
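For reference, the mapping-file route does exist; a sketch of the 'generated' attribute mentioned above (property and column names invented, and I am not sure whether ActiveRecord markup exposes the same option):

<!-- in the class's .hbm.xml mapping -->
<property name="LegacyNumber"
          column="LegacyNumber"
          generated="insert"
          insert="false"
          update="false" />

With generated="insert", NHibernate selects the value back immediately after the INSERT, so the property is populated without a manual Refresh call.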