How to replicate and retry deadlocks in NHibernate

Looking through my logs, I can see that my app is vulnerable to deadlocks. They are occurring in many parts of my application.
1) Is there a way to replicate this issue? I have only ever seen it in the logs.
2) What is the best/simplest way to retry if the transaction is deadlocked?
3) If I wrap the call in a try/catch, what would the exception type be?
There is a lot written about the issue. I concluded that the best option is to try to shorten the transactions as much as possible. Should I change the isolation levels?

Finding Deadlocks
Deadlocks are very hard to find. If you know why they occur, you may be able to reproduce them in integration tests. In real environments you can use SQL Server Profiler to observe deadlocks. It shows a graph which displays how the deadlock formed.
Retry
You should actually throw away the transaction and start again. The NHibernate session is out of sync after any database exception.
We add a delay before restarting, to avoid putting more stress on the database. The delay includes a random component, so that the parallel transactions do not fall back into lockstep and collide again.
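For illustration, a minimal sketch of that retry loop, assuming an ISessionFactory and a caller-supplied work delegate (DeadlockRetry, Execute and the delay constants are my own invention, not the original code); every attempt gets a fresh session and transaction, since the old session is unusable after the exception:

using System;
using System.Threading;
using NHibernate;

public static class DeadlockRetry
{
    private static readonly Random Jitter = new Random();

    public static void Execute(ISessionFactory factory,
                               Action<ISession> work,
                               int maxAttempts = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                // A fresh session and transaction per attempt; the old
                // session is unusable after a database exception.
                using (var session = factory.OpenSession())
                using (var tx = session.BeginTransaction())
                {
                    work(session);
                    tx.Commit();
                    return;
                }
            }
            catch (ADOException) when (attempt < maxAttempts)
            {
                // Base delay plus random jitter so parallel victims do
                // not retry in lockstep and collide again.
                Thread.Sleep(200 * attempt + Jitter.Next(0, 300));
            }
        }
    }
}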
Avoiding Deadlocks
Reducing Lock Time
If you are using SQL Server, it is quite vulnerable to deadlocks because of its pessimistic locking mechanism (in contrast to Oracle databases). The newer Snapshot isolation level is similar to what Oracle does and may fix the problem to some degree, but I haven't used it yet, so I can't say much about it.
NHibernate mitigates the problem as far as possible by caching changes to persistent data and storing them at the end of the transaction. But there are some limits, and some ways to break it.
Using identity ("auto numbers") as primary keys is probably the most famous mistake. It forces NH to insert entities as soon as they are added to the session, which takes a lock on the whole table (in SQL Server).
More complicated to fix is the flushing problem. NH needs to flush changes before executing queries, to ensure consistency. You can get around this by setting FlushMode to Never, which may cause consistency problems, so only do it when you know exactly what you are doing. The best solution is to only use Get or Load, or to navigate to properties of a root entity, instead of performing queries in the middle of a transaction.
By doing all this, NH is able to defer every Insert, Update and Delete command to the database until the end of the transaction. This reduces lock time a lot, and therefore it also reduces the risk of deadlocks.
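A sketch of both fixes, assuming NHibernate's mapping-by-code API and a made-up Order entity; the HiLo generator lets NH assign ids in memory and defer the INSERT until commit, and the FlushMode setting carries the caveat above:

using NHibernate;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class Order
{
    public virtual int Id { get; set; }
    public virtual string Note { get; set; }
}

public class OrderMap : ClassMapping<Order>
{
    public OrderMap()
    {
        // HiLo generates ids client-side, so the INSERT can wait
        // until commit (identity would force an immediate INSERT).
        Id(x => x.Id, m => m.Generator(Generators.HighLow));
        Property(x => x.Note);
    }
}

// Only when you know exactly what you are doing:
// session.FlushMode = FlushMode.Never;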
General Rules To Avoid Deadlock
The general rules to avoid deadlocks also apply when using NHibernate. Most important: lock resources in a consistent order, and lock resources not one by one but all at the beginning. The latter contradicts what I said above about reducing lock time. It would mean that you lock resources at the beginning of a transaction, making other transactions wait until it is finished. This may reduce deadlocks, but it also reduces parallel execution.

This is the solution we opted to use in a legacy system where we could not fix the root cause of these deadlocks, as that would have meant rewriting a lot of existing and poorly documented code. The system was using DataSets and ADO.NET classes, so if you intend to use NHibernate, I fear you would have to research its internals and/or develop your own extension if the functionality is not available out of the box.
1) If the code is prone to deadlocks, they should start appearing under sufficient database load. You need many simultaneous connections working with the same tables through the problematic procedures.
It is difficult to reproduce deadlocks in the exact places you want, but if you just want deadlocks in general, to test your retry procedure, do simultaneous reads/inserts into the same tables from 10+ threads with differing access order (e.g. table A then B in some of them, table B then A in others) with small delays, and you will get one soon.
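A minimal harness along those lines, assuming SQL Server, two existing tables (the names TableA/TableB and their Id/Value columns are hypothetical) and a valid connection string; half the threads lock A then B, the other half B then A, which reliably produces deadlocks under load:

using System;
using System.Data.SqlClient;
using System.Threading;

class DeadlockHarness
{
    const string ConnStr = "..."; // your connection string here

    static void Main()
    {
        for (int i = 0; i < 10; i++)
        {
            bool aFirst = i % 2 == 0; // alternate the access order
            new Thread(() => Run(aFirst)).Start();
        }
    }

    static void Run(bool aFirst)
    {
        try
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    Touch(conn, tx, aFirst ? "TableA" : "TableB");
                    Thread.Sleep(50); // hold the first lock briefly
                    Touch(conn, tx, aFirst ? "TableB" : "TableA");
                    tx.Commit();
                }
            }
        }
        catch (SqlException ex) when (ex.Number == 1205)
        {
            Console.WriteLine("Deadlock victim, as intended.");
        }
    }

    static void Touch(SqlConnection conn, SqlTransaction tx, string table)
    {
        var sql = "UPDATE " + table + " SET Value = Value + 1 WHERE Id = 1";
        using (var cmd = new SqlCommand(sql, conn, tx))
        {
            cmd.ExecuteNonQuery();
        }
    }
}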
2) You need to retry the entire code fragment that works with the transaction, and retry the data initialization too. Meaning, if you are filling DataSets within the transaction, you have to clear them at the beginning of the retryable code block.
3) It is a SqlException with Number = 1205. In general, you can also retry on timeouts and network errors:
switch (sqlEx.Number)
{
    case 1205:
    {
        DebugLog("DEADLOCK!");
        canRetry = true;
        break;
    }
    case -2:
    case -2147217871:
    {
        DebugLog("TIMEOUT!");
        canRetry = true;
        break;
    }
    case 11:
    {
        DebugLog("NETWORK ERROR!");
        canRetry = true;
        break;
    }
    default:
    {
        DebugLog(string.Format("SQL ERROR: {0}", sqlEx.Number));
        break;
    }
}
In my experience, when retrying after a deadlock it is best to discard the connection from the pool with SqlConnection.ClearPool(connection), because it might not be reset properly for the next use.
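Putting the error classification and the pool-clearing together, a rough sketch; ExecuteWithRetry, its parameters and the delays are my assumptions, not code from the original system:

using System;
using System.Data.SqlClient;
using System.Threading;

static void ExecuteWithRetry(string connStr, Action<SqlConnection> work,
                             int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        var connection = new SqlConnection(connStr);
        try
        {
            connection.Open();
            work(connection);
            return;
        }
        catch (SqlException sqlEx) when (attempt < maxAttempts &&
            (sqlEx.Number == 1205 ||        // deadlock victim
             sqlEx.Number == -2 ||          // timeout
             sqlEx.Number == -2147217871 || // timeout/connection error
             sqlEx.Number == 11))           // network error
        {
            // The pooled connection may not be reset properly after
            // the error, so discard the whole pool entry first.
            SqlConnection.ClearPool(connection);
            Thread.Sleep(200 * attempt);
        }
        finally
        {
            connection.Dispose();
        }
    }
}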

Related

NHibernate retry: how to manage the session in deadlocks

I have been having deadlock issues, and I have been working on some retry approaches. My retry code is currently just a 'for' statement that tries 5 times. I understand I need to use the 'Evict' NHibernate method to clear the session. I am using a session factory, and a transaction for each request.
In the example below, if I experience a deadlock on the first attempt, will the OrderNote property remain the same on the second attempt?
private ActionResult OrderDetails(int id)
{
    var order = _orderRepository.Get(id);
    order.OrderNote = "will this text remain";
    Retry.Times(5).Do(() => _orderRepository.Update(order));
    return View();
}
Edit
1) Finding it hard to trace the cause. I'm getting about 10 deadlocks a day, all over my application. I've just set up a profiler. Are there any other useful methods for tracing?
http://msdn.microsoft.com/en-us/library/ms190465.aspx
I think the main issue is that I'm using auto-increment. I'm in the process of moving to HiLo.
2) Using a different transaction mode. I'm not defining any at the moment. What is recommended?
5) Long-running operations. Yes, I do have some. And I think because I'm using auto-increment, lazy loading is ignored. Does that sound correct?
In my opinion your code is trying to fix the symptoms instead of the cause.
You will be better off doing some of the following things:
Find out why you are getting deadlocks and fix the core issue
Use a different transaction mode to read past locks
Look at delegating the update into a queue structure to be background processed
Understand the update execution plan and perhaps add indexing to speed up queries
Do you have any "long" running operations in your Controller action which are keeping the transaction open for longer than they should?
Even if the operation did deadlock, why not return a friendly error back to the calling page and let the user retry manually?
Update:
1.) Useful methods for tracing
I have used this method for tracing deadlocks which should give you an idea of the resources which are in contention: Tracing Deadlocks
You can also look at the concurrency models available to you: NHibernate Concurrency
2.) Transaction Isolation Levels
Depending on your DB this Question has some useful information: Transaction Isolation Mode
3.) Long Running Operations
I have to use identity columns as my primary keys in NHibernate, and I don't think these are going to be the source of your problem in an update scenario, as the Id/PK is already set by that point. Try to minimise the long-running operations, which will shorten the amount of time your transaction is held open.
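On the original question of whether OrderNote would survive a retry: after a deadlock the session is out of sync, so a safer pattern is to open a new session and re-fetch and re-modify the entity on every attempt. A sketch, assuming a hypothetical _sessionFactory field in place of the repository, the Order entity from the question, and NHibernate's ADOException as the failure signal:

private ActionResult OrderDetails(int id)
{
    for (int attempt = 1; attempt <= 5; attempt++)
    {
        try
        {
            using (var session = _sessionFactory.OpenSession())
            using (var tx = session.BeginTransaction())
            {
                // Re-read and re-apply the change on every attempt,
                // instead of reusing an entity from a dead session.
                var order = session.Get<Order>(id);
                order.OrderNote = "will this text remain";
                tx.Commit();
                break;
            }
        }
        catch (ADOException)
        {
            if (attempt == 5) throw;
        }
    }
    return View();
}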

Regarding SQL Server Locking Mechanism

I would like to ask a couple of questions regarding the SQL Server locking mechanism.
1) If I am not using a lock hint in a SQL statement, SQL Server uses the PAGELOCK hint by default. Am I right? If yes, then why? Maybe it is because of the overhead of managing too many locks; that is the only drawback I could come up with, but please let me know if there are others. Also, tell me whether we can change this default behaviour, if it is reasonable to do so.
2) I am writing a server-side application, a sync server (not using the Sync Framework), and I have written the database queries in a C# code file, using an ODBC connection to execute them. The question is: what is the best way to change the default locking from page to row, keeping the drawbacks in mind (e.g. adding lock hints to the queries, which is what I am planning)?
3) What if a SQL query (SELECT/DML) is executed outside the scope of a transaction and the statement contains a lock hint; what kind of lock will be acquired (e.g. shared, update, exclusive)? And within a transaction scope, does the isolation level of the transaction have an impact on the lock type if the ROWLOCK hint is used?
4) Lastly, could someone give me a sample so I could test and experience all the above scenarios myself (e.g. .NET code or a SQL script)?
Thanks
Mubashar
1) No. It locks as it sees fit and escalates locks as needed.
2) Let the DB engine manage it.
3) See point 2.
4) See point 2.
I'd only use lock hints if you want specific and certain behaviours, e.g. queues or non-blocking (dirty) reads.
More generally, why do you think the DB engine can't do what you want by default?
The default locking is row locks, not page locks, although the way the locking mechanism works means you will be placing locks on all the objects within the hierarchy: reading a single row places intent shared locks on the table and the page, and a shared lock on the row itself.
This enables an action requesting an exclusive lock on the table to know that it may not take it yet, since intent locks are present (otherwise it would have to check every page and row for locks).
If an individual query issues too many locks, however, SQL Server performs lock escalation, which reduces the granularity of the lock, so that it is managing fewer locks.
This can be turned off using a trace flag, but I wouldn't consider it.
Until you know you actually have a locking / lock escalation issue, you risk prematurely optimizing a non-existent problem.
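To experience the lock hierarchy first-hand (the sample the question asks for), something like the following sketch should do, assuming a valid connection string, an existing table (MyTable is a placeholder) and permission to read sys.dm_tran_locks; REPEATABLE READ keeps the shared locks held so they can be inspected:

using System;
using System.Data;
using System.Data.SqlClient;

class LockViewer
{
    static void Main()
    {
        using (var conn = new SqlConnection("...")) // your connection string
        {
            conn.Open();
            // REPEATABLE READ keeps shared locks until the transaction
            // ends, so we can inspect them after the read.
            using (var tx = conn.BeginTransaction(IsolationLevel.RepeatableRead))
            {
                using (var read = new SqlCommand(
                    "SELECT TOP 1 * FROM MyTable WITH (ROWLOCK)", conn, tx)
                    .ExecuteReader())
                {
                    read.Read(); // actually touch the row
                }

                var locks = new SqlCommand(
                    "SELECT resource_type, request_mode FROM sys.dm_tran_locks " +
                    "WHERE request_session_id = @@SPID", conn, tx);
                using (var rdr = locks.ExecuteReader())
                {
                    // Expect IS (intent shared) on OBJECT and PAGE,
                    // and S on the KEY or RID itself.
                    while (rdr.Read())
                        Console.WriteLine("{0}: {1}", rdr[0], rdr[1]);
                }

                tx.Rollback();
            }
        }
    }
}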

How can I get dead lock in this situation?

In my client application I have a method like this (in practice it's more complex, but I've kept only the main part):
public void btnUpdate_Click(...)
{
    ...
    dataAdapter.Update(...);
    ...
    dataAdapter.Fill(...); // here I got the exception one time
}
The exception I found in the logs says "Deadlock found when trying to get lock; try restarting transaction". I have met this exception only once, so it hasn't been repeated.
As I understand it, the DataAdapter.Fill() method executes only a SELECT query. I don't create an explicit transaction, and I have autocommit enabled.
So how can I get a deadlock on a simple SELECT query which is not part of a bigger transaction?
As I understand it, to get a deadlock, two transactions should wait for each other. How is that possible with a single SELECT not inside a transaction? Maybe it's a bug in MySQL?
Thank you in advance.
You are right that it takes two transactions to make a deadlock. That is to say, no statement or statements within a single transaction can deadlock with other statements within the same transaction.
But it only takes one transaction to notice a report of a deadlock. How do you know that the transaction you see the deadlock reported in is the only transaction being executed against the database? Isn't there other activity going on in that database?
Also, your statements "I don't make an explicit transaction" and "... which is not a part of a bigger transaction" imply that you may not realize that every SQL statement executed is always in an implicit transaction, even if you do not explicitly start one.
Most databases have reporting mechanisms specifically designed to track, report and/or log instances of deadlocks for diagnostic purposes. In SQL Server there is a trace flag that causes a log entry with much detail about each deadlock that occurs, including details about each of the two transactions involved: what SQL statements were being executed, what objects in the database were being locked, and why the lock could not be obtained. I'd guess MySQL has a similar diagnostic tool. Find out what it is and turn it on, so that the next time this occurs you can look and find out exactly what happened.
You can deadlock a simple SELECT against other statements, like an UPDATE. On my blog I have an example explaining a deadlock between two well-tuned statements: Read/Write deadlock. While the example is SQL Server specific, the principle is generic. I don't have enough knowledge of MySQL to claim this is necessarily the case or not, especially in light of the various engines MySQL can use, but nonetheless a simple SELECT can be the victim of a deadlock.
I haven't looked into how MySQL transactions work, but this is based on how MSSQL transactions work:
If you are not using a transaction, each query runs in a transaction by itself. Otherwise you would get a mess every time an update failed halfway through.
The reason for the deadlock might be lock escalation. The database tries to lock as little as possible for each query, so it starts out by locking only the single rows affected. When most of the rows in a page are locked by the query, it might decide that escalating the lock to the entire page would be better, which may have the side effect of locking rows not otherwise affected by the query.
If a SELECT query and an UPDATE query are trying to escalate locks on the same table, they may cause a deadlock even though only a single table is involved.
I agree that in this particular case this is unlikely to be the issue, but this is supplemental to the other answers in terms of limiting their scope, recorded for posterity in case someone finds it useful.
MySQL can, in rare cases, have single statements periodically deadlock against themselves. This seems to happen particularly on bulk inserts, and the issue is almost certainly a deadlock between different threads involved in the operation. I would expect bulk updates to have the same problem. In the past, when faced with this sort of issue, I have generally just cut down the number of rows being inserted (or updated) in a single statement. In this case you won't usually get a deadlock message when trying to obtain the lock, but other messages instead.
A colleague of mine and I were discussing similar problems in MS SQL Server (so this is not unique to MySQL!), and he pointed out that the solution there is to tell the server not to parallelize the insert or update. The problems here are spinlock-related deadlocks, not logical lock deadlocks in the RDBMS.

ORM Support for Handling Deadlocks

Do you know of any ORM tool that offers deadlock recovery? I know deadlocks are a bad thing, but sometimes any system will suffer from them given the right amount of load. In SQL Server, the deadlock message says "Rerun the transaction", so I would suspect that rerunning a deadlocked statement is a desirable feature in ORMs.
I don't know of any ORM tool with special support for automatically rerunning transactions that failed because of deadlocks. However, I don't think an ORM makes dealing with locking/deadlocking issues very different. First, you should analyze the root cause of your deadlocks, then redesign your transactions and queries so that deadlocks are avoided or at least reduced. There are lots of options for improvement, like choosing the right isolation level for (parts of) your transactions, using lock hints, and so on. This depends much more on your database system than on your ORM. Of course, it helps if your ORM allows you to use stored procedures for some fine-tuned commands.
If this doesn't help to avoid deadlocks completely, or you don't have the time to implement and test the real fix now, you could of course simply place a try/catch around your save/commit/persist (or whatever) call, check whether the caught exception indicates that the failed transaction was a "deadlock victim", and then simply call save/commit/persist again after sleeping for a few seconds. Waiting a few seconds is a good idea, since deadlocks are often an indication of a temporary peak of transactions competing for the same resources, and rerunning the same transaction quickly again and again would probably make things even worse.
For the same reason, you probably want to make sure that you only try to rerun the same transaction once.
In a real-world scenario we once implemented this kind of workaround, and about 80% of the "deadlock victims" succeeded on the second go. But I strongly recommend digging deeper to fix the actual reason for the deadlocking, because these problems usually increase exponentially with the number of users. Hope that helps.
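A rough sketch of that workaround for SQL Server and ADO.NET (the SaveWithOneRetry helper and the 3-second delay are my assumptions); error 1205 is the "deadlock victim" code, and, per the advice above, it reruns the transaction exactly once:

using System;
using System.Data.SqlClient;
using System.Threading;

static void SaveWithOneRetry(Action save)
{
    try
    {
        save();
    }
    catch (SqlException ex) when (ex.Number == 1205) // deadlock victim
    {
        // Let the competing peak pass, then try exactly once more.
        Thread.Sleep(TimeSpan.FromSeconds(3));
        save();
    }
}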
Deadlocks are to be expected, and SQL Server seems to be worse on this front than other database servers. First, you should try to minimize your deadlocks. Try using SQL Server Profiler to figure out why they are happening and what you can do about them. Next, configure your ORM not to read after making an update in the same transaction, if possible. Finally, after you've done that, if you happen to use Spring and Hibernate together, you can put in an interceptor to watch for this situation. Extend MethodInterceptor and place it in your Spring bean under interceptorNames. When the interceptor runs, use invocation.proceed() to execute the transaction, catch any exceptions, and define the number of times you want to retry.
An O/R mapper can't detect this, as the deadlock always occurs inside the DBMS, and could be caused by locks set by other threads or even other applications.
To be sure a piece of code doesn't create a deadlock, always use these rules:
- Do fetching outside the transaction. So first fetch, then perform processing, then perform DML statements like insert, delete and update (see the sketch after this list).
- Every action inside a method, or series of methods, which works with a transaction has to use the same connection to the database. This is required because, for example, write locks are ignored by statements executed over the same connection (as that same connection set the locks ;)).
Often deadlocks occur because code either fetches data inside a transaction, which causes a NEW connection to be opened (which then has to wait for locks), or uses different connections for the statements in a single transaction.
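A small sketch of those two rules in plain ADO.NET (the Products table and its columns are made up): the read happens before the transaction starts, and every DML statement reuses the one connection that owns the transaction:

using System.Data.SqlClient;

static void Process(string connStr)
{
    using (var conn = new SqlConnection(connStr))
    {
        conn.Open();

        // Rule 1: fetch first, outside the transaction.
        int stock;
        using (var cmd = new SqlCommand(
            "SELECT Stock FROM Products WHERE Id = 1", conn))
        {
            stock = (int)cmd.ExecuteScalar();
        }

        int newStock = stock - 1; // processing, still no locks held

        // Rule 2: all DML on the same connection and transaction.
        using (var tx = conn.BeginTransaction())
        {
            using (var cmd = new SqlCommand(
                "UPDATE Products SET Stock = @s WHERE Id = 1", conn, tx))
            {
                cmd.Parameters.AddWithValue("@s", newStock);
                cmd.ExecuteNonQuery();
            }
            tx.Commit();
        }
    }
}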
I had a quick look (no doubt you have too) and couldn't find anything suggesting that Hibernate, at least, offers this. That is probably because ORMs consider it outside the scope of the problem they are trying to solve.
If you are having issues with deadlocks, certainly follow some of the suggestions posted here to try and resolve them. After that, you just need to make sure all your database access code is wrapped in something which can detect a deadlock and retry the transaction.
One system I worked on was based on "commands" that were committed to the database when the user pressed save. It worked like this:
While (true)
    Start a database transaction
    Foreach command to process
        Read the data the command needs into objects
        Update the objects by calling the command.run method
    EndForeach
    Save the objects to the database
    If not deadlock
        Commit the database transaction
        We are done
    Else
        Abort the database transaction
        Log the deadlock and try again
    EndIf
EndWhile
You may be able to do something like this with any ORM; we used an in-house data access system, as ORMs were too new at the time.
We ran the commands outside of a transaction while the user was interacting with the system, then reran them as above (when the user did a "save") to cope with changes other people had made. As we already had a good idea of the rows each command would change, we could even use locking hints or "select for update" to take out all the write locks we needed at the start of the transaction. (We sorted the set of rows to be updated, to reduce the number of deadlocks even more.)

What are the problems of using transactions in a database?

From this post, one obvious problem is scalability/performance. What are the other problems that the use of transactions will provoke?
Could you say there are two sets of problems, one for long-running transactions and one for short-running ones? If yes, how would you define them?
EDIT: Deadlock is another problem, but data inconsistency might be worse, depending on the application domain. Assuming a transaction-worthy domain (banking, to use the canonical example), the possibility of deadlock is more like a cost to pay for ensuring data consistency than a problem with the use of transactions, or would you disagree? If so, what other solutions would you use to ensure data consistency that are deadlock-free?
It depends a lot on the transactional implementation inside your database and may also depend on the transaction isolation level you use. I'm assuming "repeatable read" or higher here. Holding transactions open for a long time (even ones which haven't modified anything) forces the database to hold on to deleted or updated rows of frequently-changing tables (just in case you decide to read them) which could otherwise be thrown away.
Also, rolling back transactions can be really expensive. I know that in MySQL's InnoDB engine, rolling back a big transaction can take FAR longer than committing it (we've seen a rollback take 30 minutes).
Another problem is database connection state. In a distributed, fault-tolerant application, you can't ever really know what state a database connection is in. Stateful database connections can't be maintained easily, as they could fail at any moment (the application needs to remember what it was in the middle of doing and redo it). Stateless ones can just be reconnected and have the (atomic) command re-issued without (in most cases) breaking state.
You can get deadlocks even without using explicit transactions. For one thing, most relational databases will apply an implicit transaction to each statement you execute.
Deadlocks are fundamentally caused by acquiring multiple locks, and any activity that involves acquiring more than one lock can deadlock with any other activity that involves acquiring at least two of the same locks as the first activity. In a database transaction, some of the acquired locks may be held longer than they would otherwise be held -- to the end of the transaction, in fact. The longer locks are held, the greater the chance for a deadlock. This is why a longer-running transaction has a greater chance of deadlock than a shorter one.
One issue with transactions is that it's possible (unlikely, but possible) to get deadlocks in the DB. You do have to understand how your database works (locks, transactions, etc.) in order to debug these interesting/frustrating problems.
-Adam
I think the major issue is at the design level: at what level or levels within my application do I utilise transactions?
For example I could:
Create transactions within stored procedures,
Use the data access API (ADO.NET) to control transactions,
Use some form of implicit rollback higher in the application, or
Use a distributed transaction (via DTC / COM+).
Using more than one of these levels in the same application often seems to create performance and/or data integrity issues.
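As a small illustration of the second option, a sketch using TransactionScope from System.Transactions (the Accounts table and the TransferFunds signature are made up); keeping transaction control at this one level avoids mixing it with transactions opened inside stored procedures:

using System.Data.SqlClient;
using System.Transactions;

static void TransferFunds(string connStr, int fromId, int toId, decimal amount)
{
    // One transaction, controlled entirely at the data access layer.
    using (var scope = new TransactionScope())
    using (var conn = new SqlConnection(connStr))
    {
        conn.Open(); // enlists in the ambient transaction

        using (var cmd = new SqlCommand(
            "UPDATE Accounts SET Balance = Balance - @a WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@a", amount);
            cmd.Parameters.AddWithValue("@id", fromId);
            cmd.ExecuteNonQuery();
        }
        using (var cmd = new SqlCommand(
            "UPDATE Accounts SET Balance = Balance + @a WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@a", amount);
            cmd.Parameters.AddWithValue("@id", toId);
            cmd.ExecuteNonQuery();
        }

        scope.Complete(); // nothing is committed unless we reach this line
    }
}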