I have an NHibernate web application that uses the SysCache provider. During a transaction, any communication problem between the DB server and the app server (e.g., a network problem) will make the query fail, which is expected. The issue is that this result, i.e., the exception, is cached for that query, so subsequent executions return the same result.
The worse case is that this happens even on versioning conflicts. Say that while updating a domain entity, the same row was already updated by another transaction; the query execution throws an exception with the message "Row was updated or deleted by another transaction". This result is then cached for the default cache expiration time (5 minutes).
How can I configure NHibernate not to cache the result on an exception, or how can I clear the cached result in this scenario?
Thanks for the help.
Regards,
Vijay Pandurangan
You can catch the exception that is thrown and then, depending on the exception, evict the entity from the cache using ISession.Evict() (there are several overloads, and EvictCollection() exists too). To invalidate the whole session cache, you can use ISession.Clear(). If you really don't trust it, I'd create a new ISession entirely.
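A minimal sketch of that pattern, assuming the entity has already been loaded and that the version conflict surfaces as NHibernate's StaleObjectStateException (the wrapper class name is made up):

using NHibernate;

public static class SafeUpdate
{
    // Sketch: update an entity and, on a version conflict, evict it so that
    // a stale result is not served from the cache on the next attempt.
    public static void UpdateOrEvict(ISession session, object entity)
    {
        try
        {
            using (var tx = session.BeginTransaction())
            {
                session.Update(entity);
                tx.Commit();
            }
        }
        catch (StaleObjectStateException)
        {
            // Remove the stale instance from the session cache...
            session.Evict(entity);
            // ...or, if you do not trust the session state at all,
            // drop everything it is tracking:
            session.Clear();
            throw;
        }
    }
}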
I have noticed a strange problem on WildFly 10 that does not occur in WebLogic.
In summary, on WildFly, an @ApplicationScoped bean that observes an event with @Observes(during = TransactionPhase.AFTER_SUCCESS) SomeEvent event, and that in its observer logic uses an @EJB to open a fresh, short-lived transaction context, gets access to a stale copy of an entity modified by the original MDB transaction that published the event.
And this "single thread" phenomenon can lead to multiple interesting exceptions.
Type of error 1: Optimistic lock exception:
Suppose your AFTER_SUCCESS observer wants to run a short transaction of its own that modifies the entity touched by the primary transaction that published the event.
Well, on the first try you get an optimistic locking exception.
You get it even though the MDB ran its transaction on Thread 1 and your serial observer runs after success on the same Thread 1.
The problem is that the MDB has successfully committed its changes to the DB (you can verify this by checking the database directly), but your observer loads the entity from the entity manager in a stale state, as it was before the MDB ran.
So the @Version info is simply stale, and EclipseLink thinks you have an optimistic lock exception.
If you let the optimistic lock exception happen and retry the transaction a second time on the same Thread 1, you overcome the problem and get your changes through.
This is sloppy and ugly, but it works.
The second type of problem is more serious.
If your observer on success runs a read-only transaction, then nothing blows up and you get no indication that you are using stale data.
This is bad, really really bad.
So here I try doing something like the following in my observer:
MyEntity entity = em.find(MyEntity.class, primaryKeyOfEntity);
String staleValue = entity.getFieldModifiedByMdbTransaction();
em.refresh(entity);
String secondTryToGetTheValue = entity.getFieldModifiedByMdbTransaction();
// now the value is no longer stale; it is exactly what the MDB that published the event committed
Using the same EclipseLink version on WebLogic 12.2.1.2, the pattern of observing AFTER_SUCCESS does not return any stale data.
The changes made in the first MDB transaction are correctly visible to the AFTER_SUCCESS transaction that runs after it on the same thread.
Is anyone aware of this?
Is there a way to tell the entity manager that it should use the server session cache and not the local cache?
Could it be that EclipseLink only moves the unit-of-work cache of modified entities to the server session cache after the AFTER_SUCCESS event is handled, and not on commit? Meaning the server session cache holds stale entities until AFTER_SUCCESS has finished? This should not be: when the commit is fired to the DB, and before AFTER_SUCCESS is called, the unit-of-work cache must be published to the session cache.
However, the AFTER_SUCCESS observer literally runs with @TransactionAttribute(REQUIRES_NEW); when it starts, it should have an empty unit of work.
In addition, I have tried entityManager.clear(); it has no effect.
The only thing that brings out a non-stale entity is em.refresh().
And an em.refresh() always smells like sloppy code to me.
Has anyone else experienced the same problem on WildFly?
Is there a magical EclipseLink persistence unit property to work around this without killing performance?
So the question is mostly in the title, but after some research I can't really find any deeper information about this. Mostly I want to know: if a deadlock occurs, does Breeze automatically reattempt the commit, or does it just return an error to the front end so the user can try saving again? Any documentation or articles going deeper into this would be appreciated!
To a certain extent this depends on the server backend that you are using, but in general Breeze will NOT attempt to retry after a deadlock failure; it will instead return an exception indicating that a deadlock occurred. You can then retry the save yourself by handling the client-side exception and re-executing the save.
Note that because most Breeze servers automatically toposort the entities in a save request, deadlocks are much less likely than they would be without such a sort. The idea is that by ensuring that multiple instances of a program use the same ordering when updating the same set of tables, we reduce the possibility of a deadlock.
This toposorting is part of any Entity Framework based backend as well as the Breeze Node/Sequelize (MySQL, Postgres) provider, and is likely to be added to the Breeze NHibernate and MongoDB providers in the near future.
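If you do want retries, a rough sketch of a server-side wrapper follows; the save delegate and retry count are assumptions, and the only Breeze-independent fact relied on is that SQL Server reports deadlocks as error number 1205:

using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class DeadlockRetry
{
    // Retries an operation a few times when SQL Server reports a deadlock
    // (error 1205); anything else is rethrown immediately.
    public static async Task<T> RunAsync<T>(Func<Task<T>> save, int attempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await save();
            }
            catch (SqlException ex) when (ex.Number == 1205 && attempt < attempts)
            {
                // Brief backoff before re-executing the save.
                await Task.Delay(TimeSpan.FromMilliseconds(100 * attempt));
            }
        }
    }
}

On the Breeze JavaScript client, the equivalent is roughly catching the rejected save promise and calling saveChanges again.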
We are trying to implement retry logic to recover from transient errors in an Azure environment.
We are using long-running sessions to keep track of changes and commit them all at the end of the application transaction (which may span several web requests). Along the way we need to get additional data from the database. Our main problem is that we can't easily recover from a DB error, because we can't "replay" all the user's actions.
So far we have used a straightforward recovery algorithm (sketched below):
1. Try to perform the operation in the long-running session.
2. On error, close the session, open a new one, and merge the entities into it.
3. Retry the operation.
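A rough sketch of that algorithm, assuming an NHibernate ISessionFactory and a list of tracked root entities (all names here are made up for illustration):

using System;
using System.Collections.Generic;
using NHibernate;

public class LongRunningConversation
{
    private readonly ISessionFactory _factory;
    private ISession _session;
    private readonly List<object> _roots = new List<object>(); // entities kept across requests

    public LongRunningConversation(ISessionFactory factory)
    {
        _factory = factory;
        _session = factory.OpenSession();
    }

    public void PerformWithRecovery(Action<ISession> operation)
    {
        try
        {
            operation(_session); // 1. try in the long-running session
        }
        catch (HibernateException)
        {
            // 2. the session is now unusable: discard it and rebuild
            _session.Close();
            _session = _factory.OpenSession();
            for (int i = 0; i < _roots.Count; i++)
                _roots[i] = _session.Merge(_roots[i]); // slow for big graphs
            // 3. retry the operation
            operation(_session);
        }
    }
}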
This approach is very expensive in terms of time (merging is really slow for big entity hierarchies), so we'd like to optimize things a little.
We'd like to perform query operations in a separate session (keeping the long-running one untouched and safe) and, on success, merge the results back into the long-running session (see the sketch after this list). Retrying is relatively simple here: we just need to open a new session and run the query once more. However, with this approach we have an issue with initializing lazy properties/collections:
- If we do this in the separate session, we need to merge the results back (a lot of entities), but the merge could fail and break the long-running session.
- We tried different ways of "moving" the original entity to a different session, loading the details, and returning it back, but without success (evict, replicate, etc.).
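Roughly, the pattern we are attempting looks like this (the names are invented; the merge at the end is the part that can still break the long-running session):

using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class Order { public virtual int Id { get; set; } } // stand-in entity

public static class SafeQuery
{
    // Run the query in a throwaway session so a failure cannot corrupt the
    // long-running session, then merge the results back into it.
    public static IList<Order> LoadOrders(ISessionFactory factory, ISession longRunning)
    {
        IList<Order> orders;
        using (var temp = factory.OpenSession())
        {
            // Retrying is cheap here: on failure, just open another temp session.
            orders = temp.Query<Order>().ToList();
        }
        var merged = new List<Order>();
        foreach (var order in orders)
            merged.Add(longRunning.Merge(order)); // this merge can still fail
        return merged;
    }
}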
There is a well-known statement that a session should be discarded after an exception. However, the usual example shows a write operation. Is this still true for reads? I mean, if I guarantee that no data is written back to the database, can I reuse the same session to run the query again?
Do you have any other suggestions about retry logic with long-running sessions?
IMO there's no way to solve your issue. It's gonna take a lot of time to commit everything or you will have to do a lot of work to break it up into smaller sessions and handle every error that can occur while merging.
To answer your question about using the session after an exception: you cannot trust ANYTHING inside that session anymore, not even already-loaded entities.
Read this paragraph from Ayende's article about building a simple todo app with a recovery plan in case of an exception in the session:
Then there is the problem of error handling. If you get an exception (such as StaleObjectStateException, because of a concurrency conflict), your session and its loaded entities are toast, because with NHibernate, an exception thrown from a session moves that session into an undefined state. You can no longer use that session or any loaded entities. If you have only a single global session, it means that you probably need to restart the application, which is probably not a good idea.
We're using Sitecore 6.5, and each time we start to publish items, users who are browsing the website get server 500 errors, which end up being:
Transaction (Process ID ##) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
How can we set up SQL Server to give priority to a specific application? We cannot modify any queries or code, so it has to be done via SQL Server (or the connection string).
I've seen "deadlock victim" in transaction, how to change the priority? and looked at http://msdn.microsoft.com/en-us/library/ms186736(v=SQL.105).aspx, but these seem to be per-session, not global.
I don't care if it's a fix/change to SiteCore or a SQL solution.
I don't think you can set the deadlock priority globally; it's a session-only setting. There aren't any connection string settings that I know of. The list of possible SqlConnection string settings can be found here.
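For reference, if code changes were possible, the per-session setting would look something like this sketch (the connection string is a placeholder; SET DEADLOCK_PRIORITY is standard T-SQL):

using System.Data.SqlClient;

public static class DeadlockPriority
{
    // Sketch: mark this session as the preferred deadlock victim (LOW) right
    // after opening the connection; the setting lasts for the session only.
    public static SqlConnection OpenLowPriority(string connectionString)
    {
        var conn = new SqlConnection(connectionString);
        conn.Open();
        using (var cmd = new SqlCommand("SET DEADLOCK_PRIORITY LOW;", conn))
        {
            cmd.ExecuteNonQuery();
        }
        return conn; // run the application's queries on this connection
    }
}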
It sounds to me like you're actually having a problem with the cache: every time you publish, the cache is cleared, and thus you get deadlocks from all of these calls being made at the same time. I haven't seen this sort of thing happen with 6.5, so you might also want to check into your caching. It would help a lot to look at your Sitecore logs and see if this is happening when caches are being created. Either way, check the caching guide on the SDN and see if that helps.
I have been having deadlock issues, and I've been working on some retry approaches. My retry code is currently just a 'for' statement that tries 5 times. I understand I need to use the 'Evict' NHibernate method to clear the session. I am using a session factory and a transaction for each request.
In the example below, if I experience a deadlock on the first attempt, will the OrderNote property remain the same on the second attempt?
public ActionResult OrderDetails(int id)
{
    var order = _orderRepository.Get(id);
    order.OrderNote = "will this text remain";
    Retry.Times(5).Do(() => _orderRepository.Update(order));
    return View();
}
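For context, my Retry helper is roughly equivalent to this sketch (the deadlock check, SQL Server error 1205 surfaced through NHibernate's ADOException, is the part I'm unsure about):

using System;
using System.Data.SqlClient;
using NHibernate;

public class Retry
{
    private readonly int _times;
    private Retry(int times) { _times = times; }

    public static Retry Times(int times) => new Retry(times);

    public void Do(Action action)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                action();
                return;
            }
            catch (ADOException ex) when (ex.InnerException is SqlException sql
                                          && sql.Number == 1205   // deadlock victim
                                          && attempt < _times)
            {
                // swallow the deadlock and go around the loop again
            }
        }
    }
}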
Edit
1) I'm finding it hard to trace the cause. I'm getting about 10 deadlocks a day, all over my application. I've just set up a profiler. Are there any other useful methods for tracing?
http://msdn.microsoft.com/en-us/library/ms190465.aspx
I think the main issue is that I'm using auto-increment IDs. I'm in the process of moving to hilo.
2) Using a different transaction mode: I'm not defining any at the moment. What is recommended?
5) Long-running operations: yes, I do have some. And I think that because I'm using auto-increment, NHibernate's deferred write-behind is bypassed. Does that sound correct?
In my opinion your code is trying to fix the symptoms instead of the cause.
You will be better off doing some of the following things:
- Find out why you are getting deadlocks and fix the core issue
- Use a different transaction mode to read past locks (see the sketch after this list)
- Look at delegating the update to a queue structure to be processed in the background
- Understand the update execution plan and perhaps add indexing to speed up queries
- Do you have any "long" running operations in your controller action that keep the transaction open longer than they should?
Even if the operation did deadlock, why not return a friendly error to the calling page and let the user manually retry?
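On the transaction mode point, NHibernate lets you pass an isolation level when opening the transaction. A sketch (snapshot isolation is only an example here, and it must first be enabled on the SQL Server database):

using System.Data;
using NHibernate;

public static class Transactions
{
    // Sketch: open the transaction with an explicit isolation level.
    // Snapshot stops readers blocking on writers, but it must first be
    // enabled on the SQL Server database.
    public static void UpdateWithIsolation(ISession session, object entity)
    {
        using (var tx = session.BeginTransaction(IsolationLevel.Snapshot))
        {
            session.Update(entity);
            tx.Commit();
        }
    }
}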
Update:
1.) Useful methods for tracing
I have used this method for tracing deadlocks, which should give you an idea of the resources that are in contention: Tracing Deadlocks
You can also look at the concurrency models available to you: NHibernate Concurrency
2.) Transaction Isolation Levels
Depending on your DB, this question has some useful information: Transaction Isolation Mode
3.) Long Running Operations
I have to use identity columns as my primary keys in NHibernate, and I don't think these are going to be the source of your problem in an update scenario, as the ID/PK is already set by that point. Try to minimise the long-running operations; that will shorten the amount of time your transaction is held open.