We have a web-based application. Some of its database operations (INSERTs and UPDATEs) take a long time to complete, so that part of the flow has been moved into a separate Java Thread so the request does not block while the database work finishes.
My problem is that when more than one user goes through this flow at the same time, PostgreSQL throws the following error:
org.postgresql.util.PSQLException: ERROR: deadlock detected
Detail: Process 13560 waits for ShareLock on transaction 3147316424; blocked by process 13566.
Process 13566 waits for ShareLock on transaction 3147316408; blocked by process 13560.
The above error is consistently thrown in INSERT statements.
Additional Information:
1) I have PRIMARY KEY defined in this table.
2) There are FOREIGN KEY references in this table.
3) Separate database connection is passed to each Java Thread.
Technologies
Web Server: Tomcat v6.0.10
Java v1.6.0
Servlet
Database: PostgreSQL v8.2.3
Connection Management: pgpool II
One way to cope with deadlocks is to have a retry mechanism that waits a random interval and runs the transaction again. The random interval is necessary so that the colliding transactions don't keep bumping into each other indefinitely, causing what is called a livelock - something even nastier to debug. Most complex applications will need such a retry mechanism sooner or later anyway, to handle transaction serialization failures.
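A minimal sketch of such a retry loop in Java/JDBC; the TransactionBody callback and the retry limits are invented for the example, and the SQLSTATE values are PostgreSQL's codes for deadlock_detected (40P01) and serialization_failure (40001):

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Random;

public class DeadlockRetry {

    /** Work to run inside one transaction (the actual INSERTs/UPDATEs). */
    public interface TransactionBody {
        void execute(Connection conn) throws SQLException;
    }

    // Runs the body and commits on success. If PostgreSQL reports a deadlock
    // (40P01) or a serialization failure (40001), rolls back, sleeps a random
    // interval and tries again, so colliding transactions do not retry in lockstep.
    public static void runWithRetry(Connection conn, TransactionBody body) throws SQLException {
        Random random = new Random();
        int maxAttempts = 5;                            // arbitrary limit for this sketch
        for (int attempt = 1; ; attempt++) {
            try {
                conn.setAutoCommit(false);
                body.execute(conn);
                conn.commit();
                return;
            } catch (SQLException e) {
                conn.rollback();
                String state = e.getSQLState();
                boolean retryable = "40P01".equals(state) || "40001".equals(state);
                if (!retryable || attempt == maxAttempts) {
                    throw e;
                }
                try {
                    Thread.sleep(50 + random.nextInt(200));   // random back-off
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
    }
}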
Of course, if you can determine the cause of the deadlock it's usually much better to eliminate it, or it will come back to bite you. In almost all cases, even when the deadlock is rare, the small throughput and coding overhead of acquiring locks in a deterministic order, or taking more coarse-grained locks, is worth it to avoid the occasional large latency hit and the sudden performance cliff when concurrency scales up.
When two INSERT statements are consistently deadlocking, it's most likely a unique index insert-order issue. Try, for example, the following in two psql command windows:
Thread A                    | Thread B
BEGIN;                      | BEGIN;
                            | INSERT uniq=1;
INSERT uniq=2;              |
                            | INSERT uniq=2;
                            |   blocks, waiting for thread A to commit or roll back,
                            |   to see if this is a unique key error
INSERT uniq=1;              |
  blocks, waiting for       |
  thread B  =>  DEADLOCK    |
Usually the best way to resolve this is to figure out the parent objects that guard all such transactions. Most applications have one or two primary entities, such as users or accounts, that are good candidates for this. Then all you need is for every transaction to take a lock on the primary entity it touches via SELECT ... FOR UPDATE. Or, if it touches several, take locks on all of them, but in the same order every time (ordering by primary key is a good choice).
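For example, a JDBC sketch of that idea; the accounts table and the lockAccounts helper are invented for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ParentLockFirst {
    // Locks the parent "accounts" rows this transaction will touch, always in
    // ascending primary-key order, so two transactions touching the same
    // accounts cannot grab them in opposite orders. Call this right after the
    // transaction starts (autocommit off), before the INSERTs/UPDATEs.
    public static void lockAccounts(Connection conn, List<Long> accountIds) throws SQLException {
        List<Long> ordered = new ArrayList<>(accountIds);
        Collections.sort(ordered);                            // deterministic order
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id FROM accounts WHERE id = ? FOR UPDATE")) {
            for (Long id : ordered) {
                ps.setLong(1, id);
                ps.executeQuery();                            // acquires the row lock
            }
        }
    }
}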
What PostgreSQL does here is covered in the documentation on Explicit Locking. The example in the "Deadlocks" section shows what you're probably doing. The part you may not have expected is that an UPDATE acquires a lock on the affected row that is held until the transaction involved ends. If you have multiple clients all updating more than one thing at once, you'll inevitably end up with deadlocks unless you go out of your way to prevent them.
If you have multiple statements that take out implicit locks, like UPDATE, you should wrap the whole sequence in BEGIN/COMMIT transaction blocks and make sure you're consistent about the order in which locks are acquired (even the implicit ones that UPDATE grabs) everywhere. If you need to update something in table A and then table B, and one part of the app does A then B while another does B then A, you're going to deadlock one day. Two UPDATEs against the same table are similarly destined to fail unless you can enforce some ordering of the two that's repeatable among clients. Sorting the set of records to update by primary key and always grabbing the "lower" one first is a common strategy.
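A short sketch of that strategy in JDBC, with an invented accounts table; the rows to update are sorted by primary key so every client grabs the "lower" ones first:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;
import java.util.TreeMap;

public class OrderedUpdates {
    // Applies a batch of balance changes inside one transaction, always in
    // ascending primary-key order, so two clients updating an overlapping set
    // of rows cannot deadlock. (Error handling / rollback omitted for brevity.)
    public static void applyUpdates(Connection conn, Map<Long, Long> deltaById) throws SQLException {
        Map<Long, Long> ordered = new TreeMap<>(deltaById);    // sort by primary key
        conn.setAutoCommit(false);                             // BEGIN
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            for (Map.Entry<Long, Long> e : ordered.entrySet()) {
                ps.setLong(1, e.getValue());
                ps.setLong(2, e.getKey());
                ps.executeUpdate();                            // lowest id first
            }
        }
        conn.commit();                                         // COMMIT
    }
}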
It's less likely that your INSERTs are to blame here; those are much harder to get into a deadlocked situation unless you violate a primary key, as Ants already described.
What you don't want to do is try to duplicate locking in your app, which is going to turn into a giant scalability and reliability mess (and will likely still result in database deadlocks). If you can't work around this within the confines of the standard database locking methods, consider using either the advisory lock facility or an explicit LOCK TABLE to enforce what you need instead. That will save you a world of painful coding compared with trying to push all the locks onto the client side. If you have multiple updates against a table and can't enforce the order in which they happen, you have no choice but to lock the whole table while you execute them; that's the only route that doesn't introduce the potential for deadlock.
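For illustration, a minimal sketch of the advisory-lock route from Java, using PostgreSQL's pg_advisory_lock / pg_advisory_unlock functions with an arbitrary agreed-upon key; the runSerialized wrapper is a made-up helper, not an existing API:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class AdvisoryLockExample {
    // Serializes the deadlock-prone flow across all sessions using a PostgreSQL
    // advisory lock. The key 42 is an arbitrary value the application agrees on
    // for this flow; pg_advisory_lock blocks until the lock is free and
    // pg_advisory_unlock releases it.
    public static void runSerialized(Connection conn, Runnable work) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("SELECT pg_advisory_lock(42)");
            try {
                work.run();                                // the INSERT/UPDATE sequence
            } finally {
                st.execute("SELECT pg_advisory_unlock(42)");
            }
        }
    }
}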
Deadlock explained:
In a nutshell, what is happening is that a particular SQL statement (an INSERT or other) is waiting for another statement to release a lock on a particular part of the database before it can proceed. Until that lock is released, the first statement, call it "statement A", cannot access the part of the database it needs to do its job (a regular lock situation so far). But statement A has also placed a lock on another part of the database, to ensure that no other user of the database can read or modify it (depending on the type of lock). Now the second SQL statement itself needs access to the data locked by statement A. That is a deadlock: both statements will wait for each other, ad infinitum.
The remedy...
This would require knowing the specific SQL statements these various threads are running, and checking whether there is a way to either:
a) remove some of the locks, or change their types.
   For example, maybe the whole table is locked where only a given row, or
   a page thereof, would be necessary.
b) prevent several of these queries from being submitted at the same time.
   This would be done by way of semaphores/locks (aka a mutex) at the level of the
   multi-threading logic.
Beware that the "b)" approach, if not correctly implemented, may just move the deadlock situation from inside SQL to inside the program/thread logic. The key is to create only one mutex, obtained first by any thread that is about to run one of these deadlock-prone queries.
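A minimal sketch of that single-mutex idea in Java; note that a JVM-level lock like this only serialises threads within one application instance:

import java.util.concurrent.locks.ReentrantLock;

public class QueryGate {
    // One mutex shared by every thread that runs the deadlock-prone
    // INSERT/UPDATE sequence. Because there is only a single lock, the threads
    // themselves cannot deadlock on it; they simply queue up behind it.
    private static final ReentrantLock GATE = new ReentrantLock();

    public static void runExclusively(Runnable databaseWork) {
        GATE.lock();
        try {
            databaseWork.run();      // the deadlock-prone database flow goes here
        } finally {
            GATE.unlock();
        }
    }
}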
Your problem is probably that the INSERT command is trying to lock one or both indexes, and those indexes are already locked by the other thread.
One common mistake is locking resources in a different order in each thread. Check the order and try to lock the resources in the same order in all threads.
Related
I want to implement a mutual exclusion system in PostgreSQL where multiple worker processes will temporarily lock resources (rows) from a table (queue) while they work on them. If the worker processes crash, I want the lock to be cleanly released and not have to rely on another process to clean up the leaked locks.
What I have come up with so far is to use a SELECT ... FOR UPDATE SKIP LOCKED query within a transaction, which locks the row it finds and skips any other locked row.
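For reference, that pattern looks roughly like this in JDBC; the jobs table and its columns are invented placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class QueueWorker {
    // Claims one unprocessed row from a hypothetical "jobs" queue table.
    // FOR UPDATE locks the row for this transaction; SKIP LOCKED makes other
    // workers skip it instead of blocking. The lock (and thus the claim) only
    // lasts as long as the transaction stays open.
    public static Long claimJob(Connection conn) throws SQLException {
        conn.setAutoCommit(false);                     // keep the transaction open
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id FROM jobs WHERE done = false " +
                "ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getLong("id") : null;
        }
    }
}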
It works well but one of the issues is that the worker might take a while to do its task and I need to keep the transaction open for the entire duration of its task.
Another problem is that the workers work incrementally and persist their state to the database, so that if they're stopped or crash they can resume quickly where they were. The row being locked makes it impossible to persist their state in the same table (though I think I can work around that by using another table to persist the state).
I've searched on the Web on how to implement a semaphore or a resource borrowing system in SQL/PostgreSQL but I haven't found something that fits my needs. Is there a simple way of achieving this with PostgreSQL?
I have a table that tracks the last update time of another table's partitions so our reconciler need only check the partitions that have been updated since the last reconcile. There are multiple threads updating the partitioned table and therefore updating the same row of the latest update time table several times each. This is obviously causing deadlocks. Is there a way to prevent these deadlocks by only updating once on commit?
I was thinking of maybe using a session local temporary table, but not sure how to transfer the values to the global table on commit.
There is no way to trigger a process on commit so that approach probably won't work.
Potentially, you could have each of the writer processes write to an Oracle Advanced Queue (AQ) and then have another process that de-queues the messages and actually applies them to the current table. That would mean that there would be some lag between the writer session committing and the AQ processor picking up and processing the message but that lag shouldn't be too long. You could do the same thing by having each writer thread insert into a queue-like table and having a separate thread process that table if you don't want to use AQ.
I'm confused, though, by how the process you are describing could cause a deadlock. Are you really talking about a deadlock (i.e. an ORA-00060 error is thrown and a deadlock trace file is generated)? What you are describing should lead to blocking locks, not deadlocks, unless there is more going on than you have told us.
I am new to SQL Server, but I have a fair knowledge of simple things like SELECT/UPDATE/DELETE and other transactions. I am facing a deadlock scenario in my application. As I understand it, many threads are trying to run a set of update operations in parallel. It is not a single update but a set of update operations.
I understand that this cannot be avoided in my application, as many people want to update simultaneously. So I want a manual locking system. First, thread 1 should check whether the manual lock is available and then start the transaction. Meanwhile, if the second thread requests the lock, it should find it busy and therefore wait. Once the first thread has finished, the second should acquire the lock and start its transaction.
This is just the logic I have thought of, but I have no idea how to do this in SQL Server. Are there any examples that can help me? Please let me know if you can give me some sample SQL scripts or links that would be helpful. Thank you for your time and help.
You probably mean "semaphore": something to serialise execution of the DML so that only one process can run at a time.
This is native in SQL Server using sp_getapplock
You can configure subsequent processes to wait or fail when they call sp_getapplock, and it can also be self-releasing in "Transaction" mode.
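A minimal sketch of calling it from JDBC; the resource name and timeout are arbitrary choices for the example, and with @LockOwner = 'Transaction' the lock is released automatically when the transaction ends:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

public class AppLockExample {
    // Serializes the critical update sequence with sp_getapplock. The parameters
    // are bound positionally as @Resource, @LockMode, @LockOwner, @LockTimeout.
    public static void runSerialized(Connection conn, Runnable work) throws SQLException {
        conn.setAutoCommit(false);
        try (CallableStatement cs = conn.prepareCall("{? = call sp_getapplock(?, ?, ?, ?)}")) {
            cs.registerOutParameter(1, Types.INTEGER);
            cs.setString(2, "order-update-flow");    // agreed-upon resource name
            cs.setString(3, "Exclusive");            // lock mode
            cs.setString(4, "Transaction");          // released at commit/rollback
            cs.setInt(5, 60000);                     // wait up to 60 seconds
            cs.execute();
            if (cs.getInt(1) < 0) {                  // negative return = not granted
                throw new SQLException("Could not acquire application lock");
            }
            work.run();                              // the set of UPDATE statements
            conn.commit();                           // also releases the app lock
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}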
You will most likely still end up in the same scenario: a deadlock based around your tailor-made locks. SQL Server internally implements a very robust locking mechanism; you should use it.
The problem you're having is that resources (tables, indexes, etc.) are accessed (or modified) in a conflicting order by different transactions/threads.
If you create your own locking mechanism, you may end up with a deadlock just the same. Example:
1) Thread 1 creates a lock on a Customer record
2) Thread 2 creates a lock on an Order record
3) Thread 1 attempts to create a lock on the Order record (but cannot proceed due to step 2)
4) Thread 2 attempts to create a lock on the Customer record (but cannot proceed due to step 1)
Voila ... deadlock
The solution is to refactor the way resources are accessed, so records are always accessed in the same order and the problem will go away.
1) Thread 1 creates a lock on the Customer record
2) Thread 2 attempts to create a lock on the Customer record (but cannot proceed due to step 1)
3) Thread 1 creates a lock on the Order record
4) Thread 1 completes its transaction and unlocks both the Order and Customer records
5) Thread 2 creates a lock on the Customer record
6) Thread 2 creates a lock on the Order record
Also, have a look here to read how locking can happen on a single table.
Your manual lock system sounds interesting, but you need to be aware that it will sacrifice concurrency, which is quite important for many OLTP applications.
Advanced databases like Oracle and SQL Server are quite good at avoiding deadlocks and give you tools to resolve them, which let you simply kill the session that caused the deadlock and let the other query finish its job first.
Microsoft has documentation, which can be found here:
http://support.microsoft.com/kb/832524
Besides that, there are many other reasons that could lead to a deadlock. You can find some examples here: how to solve deadlock problem?
In my client application I have a method like this (in practice it's more complex, but I've kept only the main part):
public void btnUpdate_Click(...)
{
    ...
    dataAdapter.Update(...);
    ...
    dataAdapter.Fill(...); // here I got exception one time
}
The exception I found in the logs says "Deadlock found when trying to get lock; try restarting transaction". I have seen this exception only once; it hasn't recurred.
As I understand it, the DataAdapter.Fill() method executes only a SELECT query. I don't start an explicit transaction, and I have autocommit enabled.
So how can I get a deadlock on a simple SELECT query that is not part of a bigger transaction?
As I understand it, to get a deadlock, two transactions have to be waiting for each other. How is that possible with a single SELECT that isn't inside a transaction? Maybe it's a bug in MySQL?
Thank you in advance.
You are right that it takes two transactions to make a deadlock. That is to say, no statement within a single transaction can deadlock with other statements in that same transaction.
But it only takes one transaction to see the deadlock reported. How do you know that the transaction in which you see the deadlock reported is the only transaction being executed in the database? Isn't there other activity going on in this database?
Also, your statements "I don't make an explicit transaction" and "... which is not a part of a bigger transaction" imply that you may not realize that every SQL statement always executes inside an implicit transaction, even if you do not explicitly start one.
Most databases have reporting mechanisms specifically designed to track, report and/or log instances of deadlocks for diagnostic purposes. In SQL Server there is a trace flag that causes a detailed log entry for each deadlock that occurs, including details about each of the two transactions involved: what SQL statements were being executed, what objects in the database were being locked, and why the lock could not be obtained. I'd guess MySQL has a similar diagnostic tool. Find out what it is and turn it on, so that the next time this occurs you can see exactly what happened.
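For MySQL/InnoDB, the report of the most recent deadlock is included in the output of SHOW ENGINE INNODB STATUS; a small sketch of pulling it from Java (newer MySQL versions can also log every deadlock to the error log via the innodb_print_all_deadlocks setting):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DeadlockDiagnostics {
    // Prints InnoDB's engine status. The "LATEST DETECTED DEADLOCK" section of
    // the output shows both transactions, their statements and the locks they
    // were waiting for. Requires the PROCESS privilege.
    public static void printInnodbStatus(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW ENGINE INNODB STATUS")) {
            if (rs.next()) {
                System.out.println(rs.getString("Status"));
            }
        }
    }
}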
You can deadlock a simple SELECT against other statements, like an UPDATE. On my blog I have an example explaining a deadlock between two well-tuned statements: Read/Write deadlock. While the example is SQL Server specific, the principle is generic. I don't have enough knowledge of MySQL to say whether this is necessarily the case here, especially given the various storage engines MySQL can use, but nonetheless a simple SELECT can be the victim of a deadlock.
I haven't looked into how MySQL transactions work, but this is based on how MSSQL transactions work:
If you are not using a transaction, each query has a transaction by itself. Otherwise you would get a mess every time an update failed in the middle.
The reason for the deadlock might be lock escalation. The database tries to lock as little as possible for each query, so it starts out locking only the single rows affected. When most of the rows in a page are locked by the query, it may decide that escalating to a lock on the entire page would be better, which can have the side effect of locking rows not otherwise affected by the query.
If a SELECT query and an UPDATE query both try to escalate locks on the same table, they can cause a deadlock even though only a single table is involved.
I agree that in this particular case this is unlikely to be the cause, but it is supplemental to the other answers in terms of limiting their scope, recorded for posterity in case someone finds it useful.
MySQL can, in rare cases, have single statements periodically deadlock against themselves. This seems to happen particularly on bulk inserts, and the issue is almost certainly a deadlock between different threads relating to the operation. I would expect bulk updates to have the same problem. In the past, when faced with this sort of issue, I have generally just cut down on the number of rows being inserted (or updated) in a single statement. In this case you won't usually get a deadlock error when trying to obtain the lock, but other messages instead.
A colleague of mine and I were discussing similar problems in MS SQL Server (so this is not unique to MySQL!) and he pointed out that the solution there is to tell the server not to parallelize the insert or update. The problems here are spinlock-related deadlocks, not logical lock deadlocks in the RDBMS.
From this post: one obvious problem is scalability/performance. What other problems can the use of transactions provoke?
Could you say there are two sets of problems, one for long running transactions and one for short running ones? If yes, how would you define them?
EDIT: Deadlock is another problem, but data inconsistency might be worse, depending on the application domain. Assuming a transaction-worthy domain (banking, to use the canonical example), the possibility of deadlock is more a cost to pay for ensuring data consistency than a problem with using transactions, or would you disagree? If so, what other deadlock-free solutions would you use to ensure data consistency?
It depends a lot on the transactional implementation inside your database and may also depend on the transaction isolation level you use. I'm assuming "repeatable read" or higher here. Holding transactions open for a long time (even ones which haven't modified anything) forces the database to hold on to deleted or updated rows of frequently-changing tables (just in case you decide to read them) which could otherwise be thrown away.
Also, rolling back transactions can be really expensive. I know that in MySQL's InnoDB engine, rolling back a big transaction can take FAR longer than committing it (we've seen a rollback take 30 minutes).
Another problem has to do with database connection state. In a distributed, fault-tolerant application, you can never really know what state a database connection is in. Stateful database connections can't be maintained easily, as they could fail at any moment (the application needs to remember what it was in the middle of doing and redo it). Stateless ones can simply be reconnected and have the (atomic) command re-issued without (in most cases) breaking state.
You can get deadlocks even without using explicit transactions. For one thing, most relational databases will apply an implicit transaction to each statement you execute.
Deadlocks are fundamentally caused by acquiring multiple locks, and any activity that involves acquiring more than one lock can deadlock with any other activity that involves acquiring at least two of the same locks as the first activity. In a database transaction, some of the acquired locks may be held longer than they would otherwise be held -- to the end of the transaction, in fact. The longer locks are held, the greater the chance for a deadlock. This is why a longer-running transaction has a greater chance of deadlock than a shorter one.
One issue with transactions is that it's possible (unlikely, but possible) to get deadlocks in the DB. You do have to understand how your database works, locks, transacts, etc in order to debug these interesting/frustrating problems.
-Adam
I think the major issue is at the design level: at what level or levels within my application do I utilise transactions?
For example I could:
Create transactions within stored procedures
Use the data access API (ADO.NET) to control transactions
Use some form of implicit rollback higher in the application
Use distributed transactions (via DTC / COM+)
Using more than one of these levels in the same application often seems to create performance and/or data integrity issues.
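As a concrete illustration of the second option, controlling the transaction from the data access API, here is a minimal sketch; it uses JDBC rather than ADO.NET to match the other examples here, and the accounts table is invented:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class DataAccessTransaction {
    // Controls the transaction entirely from the data access layer: one explicit
    // begin/commit around the unit of work, with rollback on failure, and no
    // transaction logic in stored procedures or higher application layers.
    public static void transferExample(DataSource ds) throws SQLException {
        try (Connection conn = ds.getConnection()) {
            conn.setAutoCommit(false);                        // begin
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
                st.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }
}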