There is an UPDATE query which causes deadlock errors, and I don't know why. The (ROWLOCK, UPDLOCK) hint is already used in the UPDATE query, yet it still raises deadlock errors.
Sample query:
update MyTable with (rowlock, updlock)
set column1 = @value
This same query is used in several stored procedures which may be called simultaneously. But since a lock hint is specified, should it still cause a deadlock?
A deadlock occurs when two or more tasks permanently block each other, each task holding a lock on a resource that the other tasks are trying to lock.
Since you are explicitly specifying WITH (ROWLOCK, UPDLOCK), the transaction takes update locks on the rows it touches and holds them until it completes. So while one transaction is executing, it holds locks on those rows; if another transaction wants to access the same record, it has to wait until the previous transaction finishes and releases the lock before it can proceed. Note that waiting by itself is just blocking; a deadlock requires two sessions that each hold a lock the other one is waiting for.
Using WITH (NOLOCK) lets you bypass the locks, but for an update that would be too risky, more so since, as you mentioned, several updates can be executed simultaneously.
In your case, it seems that locking is the culprit. However, locks are not the only source of deadlocks; memory or thread-scheduling issues can also cause them. This LINK might help you identify the real cause of your deadlock.
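For illustration, here is a minimal sketch of how two sessions can still deadlock despite those hints (assuming a hypothetical table t with rows id = 1 and id = 2); the hints control the type and granularity of the locks, not the order in which sessions acquire them:

-- Session 1:
BEGIN TRAN
UPDATE t WITH (ROWLOCK, UPDLOCK) SET column1 = 1 WHERE id = 1  -- locks row 1

-- Session 2:
BEGIN TRAN
UPDATE t WITH (ROWLOCK, UPDLOCK) SET column1 = 1 WHERE id = 2  -- locks row 2

-- Session 1:
UPDATE t WITH (ROWLOCK, UPDLOCK) SET column1 = 1 WHERE id = 2  -- blocks, waiting for session 2

-- Session 2:
UPDATE t WITH (ROWLOCK, UPDLOCK) SET column1 = 1 WHERE id = 1  -- closes the cycle; SQL Server picks a victim (error 1205)

Making all the stored procedures touch rows in the same order removes the cycle and therefore the deadlock.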
I am re-writing an old stored procedure which is called by BizTalk. Now this has the potential to have 50-60 messages pushed through at once.
I occasionally have an issue with database locking when they are all trying to update at once.
I can only make changes in SQL (not BizTalk) and I am trying to find the best way to run my SP.
With this in mind, what I have done is to use a SELECT statement for the majority of the procedure to determine whether an UPDATE is needed.
My question is: what is the difference, regarding locking, between an UPDATE statement and a SELECT with NOLOCK against it?
I hope this makes sense - Thank you.
You use NOLOCK when you want to read uncommitted data and want to avoid taking any shared lock on the data, so that other transactions can take exclusive locks for updating/deleting.
You should not use NOLOCK with an UPDATE statement; it is a really bad idea, and MS says that NOLOCK is ignored for the target of UPDATE/INSERT statements.
Support for use of the READUNCOMMITTED and NOLOCK hints in the FROM clause that apply to the target table of an UPDATE or DELETE statement will be removed in a future version of SQL Server. Avoid using these hints in this context in new development work, and plan to modify applications that currently use them.
Source
Regarding your locking problem during multiple updates happening at the same time: this normally happens when you read data with the intention of updating it later, taking just a shared lock. The following UPDATE statement then can't acquire the necessary update locks, because they are blocked by the shared locks acquired in the other session, causing the deadlock.
To resolve this you can select the records using UPDLOCK, like the following (note that the lock is only held until the end of the transaction, so the SELECT and UPDATE must run inside the same one):

BEGIN TRANSACTION

DECLARE @IdToUpdate INT

SELECT @IdToUpdate = ID FROM [Your_Table] WITH (UPDLOCK) WHERE A=B

UPDATE [Your_Table]
SET X=Y
WHERE ID=@IdToUpdate

COMMIT TRANSACTION
This will take the necessary update lock on the record in advance and will stop other sessions from acquiring any lock (shared/exclusive) on the record, preventing deadlocks.
NOLOCK: Specifies that dirty reads are allowed. No shared locks are issued to prevent other transactions from modifying data read by the current transaction, and exclusive locks set by other transactions do not block the current transaction from reading the locked data. NOLOCK is equivalent to READUNCOMMITTED.
Thus, while using NOLOCK you get all rows back, but there is a chance of reading uncommitted (dirty) data. While using READPAST you get only committed data, so there is a chance you won't get the records that are currently being processed and not yet committed.
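A hedged sketch of the difference, assuming a hypothetical table dbo.Orders where another session currently holds exclusive locks on a few rows:

SELECT * FROM dbo.Orders WITH (NOLOCK)    -- returns all rows, possibly including uncommitted (dirty) changes
SELECT * FROM dbo.Orders WITH (READPAST)  -- skips the locked rows and returns only committed data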
For a better understanding, please go through the links below.
https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/
https://www.mssqltips.com/sqlservertip/4468/compare-sql-server-nolock-and-readpast-table-hints/
https://social.technet.microsoft.com/wiki/contents/articles/19112.understanding-nolock-query-hint.aspx
I am building a locking system based on PostgreSQL. I have two methods, acquire and release.
For acquire, it works like this:

BEGIN
while True:
    SELECT id FROM my_locks WHERE locked = false AND id = '<NAME>' FOR UPDATE
    if no rows return:
        continue
    UPDATE my_locks SET locked = true WHERE id = '<NAME>'
    COMMIT
    break
And for release:
BEGIN
UPDATE my_locks SET locked = false WHERE id = '<NAME>'
COMMIT
This looks pretty straightforward, but it doesn't work. The strange part is that I thought
SELECT id FROM my_locks WHERE locked = false AND id = '<NAME>' FOR UPDATE
should only acquire the lock on the target row if that row's locked is false. But in reality it's not like that: somehow, even when no locked = false row exists, it acquires a lock anyway. As a result, I have a deadlock issue. It looks like this:
The release is waiting on the SELECT FOR UPDATE, and the SELECT FOR UPDATE is looping forever while holding a lock for no reason.
To reproduce the issue, I wrote a simple test here
https://gist.github.com/victorlin/d9119dd9dfdd5ac3836b
You can run it with psycopg2 and pytest. Remember to change the database settings, and run
pip install pytest psycopg2
py.test -sv test_lock.py
The test case plays out like this:
Thread-1 runs the SELECT and acquires the record lock.
Thread-2 runs the SELECT and enters the lock's wait queue.
Thread-1 runs the UPDATE / COMMIT and releases the lock.
Thread-2 acquires the lock. Detecting that the record has changed since its SELECT, it rechecks the data against its WHERE condition. The check fails, and the row is filtered out of the result set, but the lock is still held.
This behaviour is mentioned in the FOR UPDATE documentation:
...rows that satisfied the query conditions as of the query snapshot will be locked, although they will not be returned if they were updated after the snapshot and no longer satisfy the query conditions.
This behaviour can have some unpleasant consequences, but a superfluous lock isn't that bad, all things considered.
Probably the simplest workaround is to limit the lock duration by committing after every iteration of acquire. There are various other ways to prevent it from holding this lock (e.g. SELECT ... NOWAIT, running in a REPEATABLE READ or SERIALIZABLE isolation level, SELECT ... SKIP LOCKED in Postgres 9.5).
I think the cleanest implementation using this retry-loop approach would be to skip the SELECT altogether, and just run an UPDATE ... WHERE locked = false, committing each time. You can tell if you acquired the lock by checking cur.rowcount after calling cur.execute(). If there is additional information you need to pull from the lock record, you can use an UPDATE ... RETURNING statement.
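A minimal sketch of that approach, reusing the my_locks table from the question (each attempt runs in its own transaction):

-- acquire: retry until this returns a row
UPDATE my_locks SET locked = true
WHERE id = '<NAME>' AND locked = false
RETURNING id;
COMMIT;

-- release
UPDATE my_locks SET locked = false
WHERE id = '<NAME>' AND locked = true;
COMMIT;

From psycopg2, checking cur.rowcount after executing the acquire statement tells you whether you got the lock.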
But I would have to agree with @Kevin, and say that you'd probably be better off leveraging Postgres' built-in locking support than trying to reinvent it. It would solve a lot of problems for you, e.g.:
Deadlocks are automatically detected
Waiting processes are put to sleep, rather than having to poll the server
Lock requests are queued, preventing starvation
Locks would (generally) not outlive a failed process
The easiest way might be to implement acquire as SELECT FROM my_locks FOR UPDATE, release simply as COMMIT, and let the processes contend for the row lock. If you need more flexibility (e.g. blocking/non-blocking calls, transaction/session/custom scope), advisory locks should prove useful.
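A hedged sketch of both options (the advisory-lock key 42 is an arbitrary example):

-- contend on the row lock itself
BEGIN;
SELECT * FROM my_locks WHERE id = '<NAME>' FOR UPDATE;  -- blocks until the row is free
-- ... critical section ...
COMMIT;  -- releases the lock

-- or use advisory locks
SELECT pg_advisory_lock(42);      -- blocking acquire
SELECT pg_try_advisory_lock(42);  -- non-blocking, returns true/false
SELECT pg_advisory_unlock(42);    -- release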
PostgreSQL normally aborts transactions which deadlock:
The use of explicit locking can increase the likelihood of deadlocks, wherein two (or more) transactions each hold locks that the other wants. For example, if transaction 1 acquires an exclusive lock on table A and then tries to acquire an exclusive lock on table B, while transaction 2 has already exclusive-locked table B and now wants an exclusive lock on table A, then neither one can proceed. PostgreSQL automatically detects deadlock situations and resolves them by aborting one of the transactions involved, allowing the other(s) to complete. (Exactly which transaction will be aborted is difficult to predict and should not be relied upon.)
Looking at your Python code, and at the screenshot you showed, it appears to me that:
Thread 3 is holding the locked=true lock, and is waiting to acquire a row lock.
Thread 1 is also waiting for a row lock, and also the locked=true lock.
The only logical conclusion is that Thread 2 is somehow holding the row lock, and waiting for the locked=true lock (note the short time on that query; it is looping, not blocking).
Since Postgres is not aware of the locked=true lock, it is unable to abort transactions to prevent deadlock in this case.
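To see what Postgres itself knows about, one way is to query the pg_locks view for ungranted lock requests (a sketch):

SELECT pid, locktype, relation::regclass AS relation, mode, granted
FROM pg_locks
WHERE NOT granted;

The application-level locked=true flag never shows up there, which is exactly why the deadlock detector cannot help with it.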
It's not immediately clear to me how T2 acquired the row lock, since all the information I've looked at says it can't do that:
FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This prevents them from being locked, modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt UPDATE, DELETE, SELECT FOR UPDATE, SELECT FOR NO KEY UPDATE, SELECT FOR SHARE or SELECT FOR KEY SHARE of these rows will be blocked until the current transaction ends; conversely, SELECT FOR UPDATE will wait for a concurrent transaction that has run any of those commands on the same row, and will then lock and return the updated row (or no row, if the row was deleted). Within a REPEATABLE READ or SERIALIZABLE transaction, however, an error will be thrown if a row to be locked has changed since the transaction started. For further discussion see Section 13.4.
I was not able to find any evidence of PostgreSQL "magically" upgrading row locks to table locks or anything similar.
But what you're doing is not obviously safe, either. You're acquiring lock A (the row lock), then acquiring lock B (the explicit locked=true lock), then releasing and re-acquiring A, before finally releasing B and A in that order. This does not observe a proper lock hierarchy, since we sometimes acquire A while holding B and sometimes B while holding A. But OTOH, acquiring B while holding A should not fail (I think), so I'm still not sure this is outright wrong.
Quite frankly, it's my opinion that you'd be better off just using the LOCK TABLE statement on an empty table. Postgres is aware of these locks and will detect deadlocks for you. It also saves you the trouble of the SELECT FOR UPDATE finagling.
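A sketch of that idea, assuming a dedicated empty table named lock_table created only for this purpose:

-- acquire
BEGIN;
LOCK TABLE lock_table IN ACCESS EXCLUSIVE MODE;  -- blocks until no other session holds it
-- ... critical section ...
COMMIT;  -- release

Since these are ordinary Postgres locks, any deadlock between sessions contending on them is detected and broken automatically.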
Also, you should add locked = true in the release code:
BEGIN
UPDATE my_locks SET locked = false WHERE id = '<NAME>' AND locked = true
COMMIT
If not, you are updating the record regardless of its locked state (in your case, even when locked = false), which increases the odds of causing a deadlock.
I want to know how to stop the undo process in Oracle. I tried to delete millions of rows of a big table, and in the middle of the process I killed the session, but it started to undo the delete, and for about two hours the database got dramatically slow. I didn't want the undo process to continue. Is there any way to stop it?
You can't stop the process of rolling back the transaction because doing so would leave the database in an inconsistent state.
When you are executing a long-running delete process, Oracle will likely be writing the changed blocks to your data files before you decide whether to commit or rollback the transaction. If you interrupted the process in the middle of executing the transaction, there will be some changed blocks on disk, some changed blocks in memory, and some unchanged blocks. Rolling back the transaction is the only way to return the database to the state it was in before you started executing the DELETE statement.
Row-by-row delete processes can, as you've found, be exceedingly slow. If the deletions are all done in a single transaction, as appears to be the case here, they can become even slower. You might want to consider the following options:
If you're deleting all the rows in the table you might want to consider using the TRUNCATE TABLE statement.
If you're not deleting all the rows in the table you should probably change your procedure to COMMIT after a certain number of rows are deleted (see the sketch below).
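A hedged sketch of that batching approach in PL/SQL (the table name and predicate are hypothetical placeholders):

BEGIN
  LOOP
    DELETE FROM big_table
     WHERE created_at < DATE '2015-01-01'  -- your delete condition
       AND ROWNUM <= 10000;                -- cap each batch at 10,000 rows
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;  -- keeps the undo generated per transaction small
  END LOOP;
  COMMIT;
END;
/

Each COMMIT caps how much work a rollback would ever have to undo if the session were killed again.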
In the meantime you're going to have to wait until that rollback process completes.
Share and enjoy.
And when you try truncating the table while it's still deleting, you'll see an ORA-00054: "resource busy and acquire with NOWAIT specified or timeout expired".
A large SQL Server 2008 table is normally updated in (relatively) small chunks using SNAPSHOT ISOLATION transactions. Snapshot works very well for those updates since the chunks never overlap. These updates aren't a single long-running operation, but many small one-row inserts/updates grouped by transaction.
I would like a lower-priority transaction to update all the rows which aren't currently locked. Does anyone know how I can get this behavior? Will another SNAPSHOT ISOLATION transaction fail as soon as a row clashes, or will it update everything it can before failing?
Could SET DEADLOCK_PRIORITY LOW with a try-catch be of any help? Maybe in a retry loop with a WHERE which targets only rows which haven't been updated?
Snapshot isolation doesn't really work that way; the optimistic locking model means it won't check for locks or conflicts until it's ready to write/commit. You also can't set query 'priority' per se, nor can you use the READPAST hint on an update.
Each update is an implicit atomic transaction, so if 1 update out of 10 fails inside a single explicit transaction, they all roll back.
SET DEADLOCK_PRIORITY only sets a preference for which transaction is rolled back in the event of a deadlock (otherwise the 'cheapest' rollback is selected).
A try-catch is pretty much a requirement if you're expecting regular collisions.
The retry loop would work, as would using a different locking model and the NOWAIT hint to fail immediately on rows that would otherwise block.
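A hedged sketch of that retry pattern (the table and column names are hypothetical):

SET DEADLOCK_PRIORITY LOW;  -- prefer this session as the deadlock victim

DECLARE @retries int = 0;
WHILE @retries < 5
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.BigTable
           SET Processed = 1
         WHERE Processed = 0;
        COMMIT;
        BREAK;  -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK;
        -- 1205 = deadlock victim, 3960 = snapshot update conflict
        IF ERROR_NUMBER() NOT IN (1205, 3960) RETURN;  -- give up on anything else
        SET @retries = @retries + 1;
        WAITFOR DELAY '00:00:01';  -- brief back-off before retrying
    END CATCH
END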
A SNAPSHOT ISOLATION transaction fails as soon as it encounters an update conflict. However, I would use some queue outside the database to prioritize updates.
I use a small transaction which consists of two simple queries: select and update:
SELECT * FROM XYZ WHERE ABC = DEF
and
UPDATE XYZ SET ABC = 123
WHERE ABC = DEF
It quite often happens that the transaction is started by two threads, and depending on the isolation level a deadlock occurs (REPEATABLE READ, SERIALIZABLE). Both transactions try to read and update exactly the same row.
I'm wondering why this is happening. What order of queries leads to the deadlock? I've read a bit about locks (shared, exclusive) and how long locks are held for each isolation level, but I still don't fully understand...
I've even prepared a simple test which always results in a deadlock. I've looked at the results of the test in SSMS and SQL Server Profiler. I started the first query and then immediately the second.
First query:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT ...
WAITFOR DELAY '00:00:04'
UPDATE ...
COMMIT
Second query:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT ...
UPDATE ...
COMMIT
Now I'm not able to show you detailed logs, but it looks more or less like this (I've very likely missed Lock:deadlock etc. somewhere):
(1) SQL:BatchStarting: First query
(2) SQL:BatchStarting: Second query
(3) Lock:timeout for second query
(4) Lock:timeout for first query
(5) Deadlock graph
If I understand locks well, in (1) the first query takes a shared lock (to execute the SELECT), then goes to sleep and keeps the shared lock until the end of the transaction. In (2) the second query also takes a shared lock (SELECT) but cannot take an exclusive lock (UPDATE) while there are shared locks on the same row, which results in Lock:timeout. But I can't explain why the timeout for the second query occurs. Probably I don't understand the whole process well. Can anybody give a good explanation?
I haven't noticed deadlocks using ReadCommitted but I'm afraid they may occur.
What solution do you recommend?
A deadlock occurs when two or more tasks permanently block each other by each task having a lock on a resource which the other tasks are trying to lock
http://msdn.microsoft.com/en-us/library/ms177433.aspx
"But I can't explain why timeout for second query occurs."
Because the first query holds a shared lock. Then the UPDATE in the first query also tries to get an exclusive lock, which makes it wait, since the second query's shared lock on the same row blocks it. So the first and second queries are both sleeping, each waiting for the other to wake up, and this is a deadlock, which results in the timeout :-)
In MySQL it works better: the deadlock is detected immediately and one of the transactions is rolled back (you don't need to wait for a timeout :-)).
Also, in MySQL, you can do the following to prevent the deadlock:
select ... for update
which will take a write lock (i.e. exclusive lock) right at the beginning of the transaction, and this way you avoid the deadlock situation! Perhaps you can do something similar in your database engine.
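In SQL Server, the closest equivalent is an UPDLOCK hint on the SELECT; a hedged sketch using the queries from the question:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT * FROM XYZ WITH (UPDLOCK) WHERE ABC = DEF  -- takes the update lock up front
UPDATE XYZ SET ABC = 123 WHERE ABC = DEF
COMMIT

Only one session at a time can hold the update lock on the row, so the second transaction waits at the SELECT instead of deadlocking at the UPDATE.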
For MSSQL there is a mechanism to prevent deadlocks. What you need here is the WITH (NOLOCK) hint.
In 99.99% of the cases of SELECT statements it's usable, and there is no need to bundle the SELECT with the UPDATE. There is also no need to put the SELECT into a transaction. The only exception is when dirty reads are not allowed.
Changing your queries to this form would solve all your issues:
SELECT ...
FROM yourtable WITH (NOLOCK)
WHERE ...
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
UPDATE ...
COMMIT
It has been a long time since I last dealt with this, but I believe that the SELECT statement takes a read lock, which only prevents the data from being changed; hence multiple queries can hold and share a read lock on the same data. The shared read lock is for read consistency: if you read the same row multiple times in your transaction, read consistency means you should always get the same result.
The UPDATE statement requires an exclusive lock, and hence the UPDATE statement has to wait for the read locks to be released.
Neither of the two transactions will release its locks, so the transactions fail.
Different database implementations have different strategies for how to deal with this. Sybase and MS SQL Server use lock escalation with timeout (escalating from read lock to write lock); Oracle, I believe, (at some point) implemented read consistency through use of the rollback log; and MySQL has yet another strategy.