WildFly 10 + EclipseLink 2.6.4 - @Observes(during = TransactionPhase.AFTER_SUCCESS) gets a stale entity

I have noticed a strange problem on WildFly 10 that does not occur in WebLogic.
In summary, on WildFly an @ApplicationScoped bean that observes an event with @Observes(during = TransactionPhase.AFTER_SUCCESS) SomeEvent event, and that in its observer logic uses an @EJB to open a fresh, short-lived transaction, gets access to a stale copy of an entity modified by the original MDB transaction that published the event.
This single-thread phenomenon can lead to several interesting exceptions.
Error type 1: optimistic lock exception.
Suppose your AFTER_SUCCESS observer runs its own short transaction that modifies the entity touched by the primary transaction that published the event.
On the first try you get an optimistic locking exception.
You get it even though the MDB ran its transaction on Thread 1 and your serial observer is observing after success on that same Thread 1.
The MDB has successfully committed its changes to the DB (you can verify that directly), yet your observer loads the entity from the EntityManager in a stale state, as it was before the MDB ran.
So the @Version field is stale and EclipseLink reports an optimistic lock exception.
If you let the optimistic lock exception happen and retry the transaction a second time on the same thread, you overcome the problem and get your changes through.
This is sloppy and ugly, but it works.
The second type of problem is more serious.
If your AFTER_SUCCESS observer runs a read-only transaction, nothing blows up and you get no indication that you are working with stale data.
This is bad, really really bad.
So in my observer I tried something like this:
MyEntity entity = em.find(MyEntity.class, primaryKeyOfEntity);
String staleValue = entity.getFieldModifiedByMdbTransaction(); // stale value
em.refresh(entity);
String secondTryToGetTheValue = entity.getFieldModifiedByMdbTransaction();
// now the value is no longer stale; it is exactly what the MDB that published the event committed
Using the same EclipseLink version on WebLogic 12.2.1.2, the same AFTER_SUCCESS observer pattern does not return any stale data.
The changes committed by the first MDB transaction are fully visible to the AFTER_SUCCESS transaction that runs after it on the same thread.
Is anyone aware of this?
Is there a way to tell the entity manager that it should use the server session cache and not the local cache?
Could it be that EclipseLink only moves the unit-of-work cache of modified entities into the server session cache after the AFTER_SUCCESS event is handled, and not on commit? In other words, is the server session cache holding stale entities until AFTER_SUCCESS has finished? That should not be: once the commit is fired to the DB, and before AFTER_SUCCESS is called, the unit-of-work cache should already have been merged into the session cache.
However, the AFTER_SUCCESS observer literally runs under @TransactionAttribute(REQUIRES_NEW), so when it starts it should have an empty unit of work.
In addition, I have tried entityManager.clear(); it has no effect.
The only thing that brings back a non-stale entity is em.refresh().
And an em.refresh() to me always smells like sloppy code.
Has anyone else experienced the same problem on WildFly?
Any magical eclipselink persistence unit property to work around this without killing performance?

Related

nhibernate RollbackTransaction and why to dispose of session after rollback?

I am working on a system using nhibernate, and I see a lot of the following two lines in exception catch blocks:
session.Flush();
session.RollbackTransaction();
I am very confused by this logic: it looks like unnecessary work to flush the changes and then roll back the transaction.
I wanted to build an argument for removing these Flush calls and relying on just the RollbackTransaction method, but then I came across this question. Reading further into the linked documentation, I found the following nugget of information:
If you rollback the transaction you should immediately close and discard the current session to ensure that NHibernate's internal state is consistent.
What does this mean? We currently tie our session lifetime to our web request's begin and end operations, so I am worried that the reason we call Flush THEN Rollback is to keep the session in a valid state.
Any ideas?
NHibernate does object tracking via the session, and all the changes you make to entities are stored there; when you Flush, those changes are written to the database. If you get an exception while doing so, the session state is no longer consistent with the database state, so a rollback at this stage will roll back the DB transaction, but the in-session values will not be rolled back.
By design, once that happens the session should no longer be used in any reliable manner (even Session.Clear() will not help).
If you use session-per-request and you get an error, the best approach is to display an error to the user and ask them to retry the operation. Another option is to create a brand-new session and use it for fetching the data needed to display the error.
This Flush before Rollback is very likely a trick to work around an application bug caused by re-using the session after the rollback.
As you have found out yourself, the session must not be used after a rollback. The application is making that mistake, as per your comment.
Without the Flush before the Rollback, the session would still consider the changes pending and would commit them at the next Flush, defeating the purpose of the Rollback. Flushing before the rollback causes the pending changes to be written and then rolled back, which prevents the session from flushing them later.
But the session is still not in a consistent state, so by continuing to use it the application stays at risk. The session cache still holds the changes that were attempted and then rolled back; the session simply no longer treats them as pending changes awaiting a flush. If later usages of the session touch those entities, their state will still be the modified state from the rolled-back transaction, even though they are no longer considered dirty.
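To make that concrete, here is a minimal sketch (not from the original posts) of a session-per-unit-of-work helper that commits or rolls back and then always discards the session; the ISessionFactory dependency and the work delegate are assumed names:

using System;
using NHibernate;

public class UnitOfWorkRunner
{
    private readonly ISessionFactory _sessionFactory;

    public UnitOfWorkRunner(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void Run(Action<ISession> work)
    {
        // One session per unit of work; the session and transaction are both
        // disposed whether we commit or roll back, so a broken session is
        // never reused.
        using (var session = _sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            try
            {
                work(session);
                tx.Commit();   // committing flushes the pending changes
            }
            catch
            {
                tx.Rollback(); // roll back the DB transaction and rethrow;
                throw;         // the using blocks dispose the session
            }
        }
    }
}

With this shape there is nothing left to Flush before the Rollback, because the session dies together with the failed unit of work.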

Usage of NHibernate session after exception on query

We are trying to implement retry logic to recover from transient errors in Azure environment.
We are using long-running sessions to keep track of changes and commit the whole batch at the end of the application transaction (which may span several web requests). Along the way we need to fetch additional data from the database. Our main problem is that we can't easily recover from a DB error, because we can't "replay" all the user actions.
So far we used straightforward recovery algorithm:
Try to perform operation in long-running session
In case of error, close the session, open a new one and merge entities into it
Retry the operation
It's a very expensive approach in terms of time (the merge takes really long for big entity hierarchies). So we'd like to optimize things a little.
We'd like to perform query operations in a separate session (to keep the long-running one untouched and safe) and, on success, merge the results back into the long-running session. Retrying is relatively simple here - we just open a new session and run the query once more. However, with this approach we have an issue with initializing lazy properties/collections:
If we do this in a separate session, we need to merge the results back (a lot of entities), but the merge could fail and break the long-running session
We tried different ways of "moving" the original entity to a different session, loading the details and returning it back, but without success (evict, replicate, etc.)
There is a known statement that the session should be discarded in case of an exception. However, the example shows a write operation. Is it still true for read operations? I mean, if I guarantee that no data is written back to the database, can I reuse the same session to run the query again?
Do you have any other suggestions about retry logic with long-running sessions?
IMO there's no way to solve your issue. It's gonna take a lot of time to commit everything or you will have to do a lot of work to break it up into smaller sessions and handle every error that can occur while merging.
To answer your question about using the session after an exception: you cannot trust ANYTHING anymore inside this session, not even loaded entities.
Read this paragraph from Ayende's article about building a simple todo app with a recovery plan in case of an exception in the session:
Then there is the problem of error handling. If you get an exception (such as StaleObjectStateException, because of concurrency conflict), your session and its loaded entities are toast, because with NHibernate, an exception thrown from a session moves that session into an undefined state. You can no longer use that session or any loaded entities. If you have only a single global session, it means that you probably need to restart the application, which is probably not a good idea.
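As a rough illustration of the "query in a separate session" idea from the question (a sketch only: sessionFactory, the query delegate and the retry count are assumed names, and merging the results back into the long-running session is deliberately left out):

using System;
using System.Collections.Generic;
using NHibernate;

public static class SafeQueries
{
    // Runs a read-only query in a short-lived session so that a transient
    // failure can never poison the long-running session; retrying simply
    // opens a brand-new session.
    public static IList<T> QueryWithRetry<T>(ISessionFactory sessionFactory,
                                             Func<ISession, IList<T>> query,
                                             int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                using (var session = sessionFactory.OpenSession())
                {
                    session.DefaultReadOnly = true; // nothing will be flushed
                    return query(session);
                }
            }
            catch (HibernateException)
            {
                if (attempt >= maxAttempts)
                    throw;
                // the throwaway session is already disposed; just try again
            }
        }
    }
}

As the answer above says, a session that has thrown cannot be trusted, so the throwaway session is kept strictly read-only and discarded; whether its results can then be safely merged into the long-running session remains the hard part of the question.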

nhibernate retry how to manage the session in deadlocks

I have been having deadlock issues and have been working on some retry approaches. My retry code is currently just a 'for' statement that tries 5 times. I understand I need to use the NHibernate 'Evict' method to clear the session. I am using a session factory and use a transaction for each request.
In the example below, if I experience a deadlock on the first retry, will the OrderNote property remain the same on the second retry?
private ActionResult OrderDetails(int id)
{
    var order = _orderRepository.Get(id);
    order.OrderNote = "will this text remain";
    Retry.Times(5).Do(() => _orderRepository.Update(order));
    return View();
}
Edit
1) Finding it hard to trace the cause. I'm getting about 10 locks a day all over my application. I've just set up a profiler. Are there any other useful methods for tracing?
http://msdn.microsoft.com/en-us/library/ms190465.aspx
I think the main issue is that I'm using auto-increment. I'm in the process of moving to HiLo.
2) Using a different transaction mode. I'm not defining any at the moment. What is recommended?
5) Long-running operations. Yes, I do have some. And I think that because I'm using auto-increment, lazy loading is ignored. Does that sound correct?
In my opinion your code is trying to fix the symptoms instead of the cause.
You will be better off doing some of the following things:
Find out why you are getting deadlocks and fix the core issue
Use a different transaction mode to read past locks
Look at delegating the update into a queue structure to be background processed
Understand the update execution plan and perhaps add indexing to speed up queries
Do you have any "long" running operations in your Controller action which is keeping the transaction open for longer than it should be?
Even if the operation did deadlock, why not return a friendly error to the calling page and let the user retry manually?
Update:
1.) Useful methods for tracing
I have used this method for tracing deadlocks which should give you an idea of the resources which are in contention: Tracing Deadlocks
You can also look at the concurrency models available to you: NHibernate Concurrency
2.) Transaction Isolation Levels
Depending on your DB this Question has some useful information: Transaction Isolation Mode
3.) Long Running Operations
I have to use identity columns as my primary keys in NHibernate, and I don't think these are going to be the source of your problem in an update scenario, as the Id/PK is already set by that point. Try to minimise the long-running operations, which will shorten the amount of time your transaction is held open.
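To answer the code question more directly, a safer shape for the retry (a sketch only; _sessionFactory, the Order entity stub and the choice of ADOException for detecting the failed attempt are assumptions, not from the original post) re-applies the change inside a fresh session on every attempt instead of reusing the order instance loaded by the first, possibly broken, session:

using NHibernate;

public class Order // minimal stand-in for the real mapped entity
{
    public virtual int Id { get; set; }
    public virtual string OrderNote { get; set; }
}

public class OrderNoteUpdater
{
    private readonly ISessionFactory _sessionFactory;

    public OrderNoteUpdater(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void UpdateNote(int orderId, string note, int maxAttempts = 5)
    {
        for (var attempt = 1; ; attempt++)
        {
            // fresh session and transaction per attempt, so nothing from a
            // deadlocked attempt leaks into the next one
            using (var session = _sessionFactory.OpenSession())
            using (var tx = session.BeginTransaction())
            {
                try
                {
                    var order = session.Get<Order>(orderId);
                    order.OrderNote = note; // the change is re-applied here
                    tx.Commit();
                    return;
                }
                catch (ADOException) // DB errors such as deadlocks are wrapped in ADOException
                {
                    if (tx.IsActive)
                        tx.Rollback();
                    if (attempt >= maxAttempts)
                        throw;
                }
            }
        }
    }
}

In the original snippet the in-memory order object (including OrderNote = "will this text remain") does survive between attempts, but the session it was loaded from may be unusable after the deadlock, which is why the version above reloads the entity each time.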

How to clear NHibernate Cache on any exceptions

I have an NHibernate web application that uses the SysCache provider. While doing any transaction, the query may fail for any reason, e.g. a communication problem between the DB server and the app server (a network issue), and that is expected. But the issue is that this result, i.e. the exception, is cached for that query, and subsequent executions return the same result.
The worse case is that this happens even on a versioning issue: say that while updating a domain entity, the same row was already updated by another transaction; the query execution gives an exception with the message "Row was updated or deleted by another transaction". This result is cached for the period of the default cache configuration time (5 minutes).
How can I configure it not to cache the result on an exception, or how can I clear the cached result in this scenario?
Thanks for help.
Thanks and Regards,
Vijay Pandurangan
You can catch the exception that is thrown and then, depending on the exception, evict the entity from the cache using ISession.Evict() (there are several overloads, and EvictCollection() exists too). To invalidate the whole cache, you can use ISession.Clear(). If you really don't trust it, I'd probably create a new ISession entirely.
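As a rough sketch of that advice (names such as _sessionFactory, _session and MyEntity are placeholders, not from the original post):

using NHibernate;

public class MyEntity { } // placeholder for the real mapped entity

public class StaleDataHandler
{
    private readonly ISessionFactory _sessionFactory;
    private readonly ISession _session;

    public StaleDataHandler(ISessionFactory sessionFactory, ISession session)
    {
        _sessionFactory = sessionFactory;
        _session = session;
    }

    public void HandleStaleUpdate(object staleInstance, StaleObjectStateException ex)
    {
        // drop the stale instance from the first-level (session) cache
        _session.Evict(staleInstance);

        // drop the corresponding entry from the second-level (SysCache) cache,
        // plus any cached query results that may still reference it
        _sessionFactory.Evict(typeof(MyEntity), ex.Identifier);
        _sessionFactory.EvictQueries();

        // if the failure happened mid-transaction, the safest option is still
        // to discard this session entirely and open a brand-new one
    }
}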

NHibernate, ActiveRecord, Transaction database locks and when Commits are flushed

This is a common question, but the explanations found so far and observed behaviour are some way apart.
We want the following NHibernate strategy in our MVC website:
A SessionScope for the request (to track changes)
An ActiveRecord.TransactionScope to wrap our inserts only (to enable rollback/commit of the batch)
Selects to be outside a Transaction (to reduce extent of locks)
Delayed Flush of inserts (so that our insert/updates occur as a UoW at end of session)
Now currently we:
Don't get the implied transaction from the SessionScope (with FlushAction Auto or Never)
If we use ActiveRecord.TransactionScope there is no delayed flush and any contained selects are also caught up in a long-running transaction.
I'm wondering if it's because we have an old version of nHibernate (it was from trunk very near 2.0).
We just can't get the expected nHibernate behaviour, and performance sucks (using NHProf and SqlProfiler to monitor db locks).
Here's what we have tried since:
Written our own TransactionScope (inherits from ITransactionScope) that:
Opens a ActiveRecord.TransactionScope on the Commit, not in the ctor (delays transaction until needed)
Opens a 'SessionScope' in the ctor if none are available (as a guard)
Converted our ids to Guid from identity
This stopped the auto flush of insert/update outside of the Transaction (!)
Now we have the following application behaviour:
Request from MVC
SELECTs needed by services are fired, all outside a transaction
Repository.Add calls do not hit the db until scope.Commit is called in our Controllers
All INSERTs / UPDATEs occur wrapped inside a transaction as an atomic unit, with no SELECTs contained.
... But for some reason NHProf now != SqlProfiler (selects seem to happen in the db before NHProf reports them).
NOTE
Before I get flamed I realise the issues here, and know that the SELECTs aren't in the Transaction. That's the design. Some of our operations will contain the SELECTs (we now have a couple of our own TransactionScope implementations) in serialised transactions. The vast majority of our code does not need up-to-the-minute live data, and we have serialised workloads with individual operators.
ALSO
If anyone knows how to get an identity column (non-PK) refreshed post-insert without a need to manually refresh the entity, and in particular by using ActiveRecord markup (I think it's possible in nHibernate mapping files using a 'generated' attribute) please let me know!!