Deadlock happens on a single table with 2 users doing simple statements - SQL

Final solution
What we actually did was bypass the line-by-line insert by creating a work table keyed by a session ID, and having a stored proc that DELETEs and INSERTs from the work table to the main table all at once. No more deadlocks!
Final process:
1) DELETE FROM TheTable_work (with the session ID, just in case…)
2) INSERT INTO TheTable_work (line by line with ADODB)
3) Call a stored procedure that does:
BEGIN TRANSACTION
DELETE FROM TheTable (with some conditions)
INSERT INTO TheTable SELECT ... FROM TheTable_work (one statement)
COMMIT
4) DELETE FROM TheTable_work (clean up)
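A rough sketch of such a stored procedure (hedged: the procedure name, parameters, and SessionID column are assumptions, not the original schema; only id, field2 and field3 come from the example statements below):

CREATE PROCEDURE dbo.Merge_TheTable_work
    @SessionID uniqueidentifier,
    @Id        int
AS
BEGIN
    SET XACT_ABORT ON;   -- make sure an error rolls back the whole transaction

    BEGIN TRANSACTION;

    -- one DELETE for all the old rows of this id
    DELETE FROM dbo.TheTable
    WHERE id = @Id;

    -- one INSERT copying every staged row of this session
    INSERT INTO dbo.TheTable (id, field2, field3)
    SELECT id, field2, field3
    FROM dbo.TheTable_work
    WHERE SessionID = @SessionID;

    COMMIT TRANSACTION;

    -- clean up the work table for this session
    DELETE FROM dbo.TheTable_work
    WHERE SessionID = @SessionID;
END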
We do think it's because of an index lock, but I am waiting for confirmation from the DBA (explanation here).
Changing isolation levels did not help (we tried every possibility, even turning READ_COMMITTED_SNAPSHOT on and off).
I have an application that causes a deadlock in my database when 2 users "write" to the database at the same moment. They do not work on the same data, because they have different ids, but they work on the same table.
User 1 (id = 1), User 2 (id = 2)
Process
User 1 does: 1 DELETE statement, followed by 3000 INSERTs
User 2 does: 1 DELETE statement, followed by 3000 INSERTs
User 2's DELETE arrives in the middle of User 1's INSERTs (for example, after 1500). The result is a deadlock.
Statements (examples, but they work)
DELETE FROM dbo.TheTable WHERE id = 1 /* Delete about 3000 lines */
INSERT INTO dbo.TheTable (id, field2, field3) VALUES (1, 'value2', 'value3')
I use ADODB (Microsoft Access...), so I do a simple query execute, for example:
ConnectSQLServer.Execute "DELETE or INSERT statement"
Error Message
Run-time error '-2147457259 (80004005)':
Transaction (Process ID 76) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Other information
There is no transaction involved, only simple INSERT or DELETE queries, one by one
There is no RecordSet involved, nor any open
We tried to look for a deadlock trace, but no graph showed up; a DBA is trying to understand why
I'd like to understand why they deadlock each other and don't just "wait" until the other one is finished! How can I perform these operations better? =)
Access: it's an Access front-end with a SQL Server backend. No linked tables; the VBA code pushes queries via an ADODB connection.
Edit: Deadlock graph
On the left we have one (of the 3000) INSERT statements, on the right the DELETE.
UPDATE
I deleted an index on a column that is used in the DELETE statement and I cannot reproduce the deadlock anymore! Not a clean solution, but temporary. We are thinking about INSERTing the 3000 lines into a temp table and then copying from the temp table to TheTable all at once (DELETE and INSERT in a stored procedure).
Thanks,
Séb

DELETE will hold an Update lock on the pages:
Update (U)
Used on resources that can be updated. Prevents a common form of
deadlock that occurs when multiple sessions are reading, locking, and
potentially updating resources later.
And INSERT will hold an intent/exclusive lock on the same pages.
Exclusive (X)
Used for data-modification operations, such as INSERT, UPDATE, or
DELETE. Ensures that multiple updates cannot be made to the same
resource at the same time.
Intent
Used to establish a lock hierarchy. The types of intent locks are:
intent shared (IS), intent exclusive (IX), and shared with intent
exclusive (SIX).
If you wish to run both queries at the same time, each will have to wait for the other at some point.
I suggest you run each insert in a separate transaction if no rollback is needed for the whole set of inserts.
Or try to remove the parallelization, or run these operations at different times.
In short:
1) Try to make sure each INSERT is correctly committed before moving on to the next one.
2) Or try to avoid doing both operations at the same time.
Another guess would be to adjust the Transaction Isolation Level for the sessions that produce the problem. Check the following documentation.
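For example, set the level per session before the batch of statements (a minimal illustration, not a recommendation of one specific level):

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- or REPEATABLE READ, SERIALIZABLE, SNAPSHOT (the latter requires
-- ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON)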
References:
Lock Modes
Transaction Isolation Level

Related

Unexpected behaviour of the Serializable isolation level

Test setup
I have SQL Server 2014 and a simple table MyTable that contains the columns Code (int) and Data (nvarchar(50)); no indexes are created for this table.
I have 4 records in the table, as follows:
1, First
2, Second
3, Third
4, Fourth
Then I run the following query in a transaction:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
DELETE FROM dbo.MyTable
WHERE dbo.MyTable.Code = 2
I have one affected row and I don't issue either Commit or Rollback.
Next I start yet another transaction:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT TOP 10 Code, Data
FROM dbo.MyTable
WHERE Code = 3
At this step the transaction with the SELECT query hangs waiting for completion of the transaction with the DELETE query.
My question
I don't understand why the transaction with the SELECT query is waiting for the transaction with the DELETE query. In my understanding the deleted row (with Code 2) has nothing to do with the selected row (with Code 3), and as far as I understand the specifics of the SERIALIZABLE isolation level, SQL Server shouldn't lock the entire table in this case. Maybe this happens because the minimal locking unit for SERIALIZABLE is a page? Then it could produce inconsistent behaviour when selecting rows from other pages if the table had more rows, say 1,000,000 (some rows on other pages wouldn't be locked then). Please help me figure out why the locking takes place in my case.
Under locking READ COMMITTED, REPEATABLE READ, or SERIALIZABLE a SELECT query must place Shared (S) locks for every row the query plan actually reads. The locks can be placed either at the row-level, page-level, or table-level. Additionally SERIALIZABLE will place locks on ranges, so that no other session could insert a matching row while the lock is held.
And because you have "no indexes created for this table", this query:
SELECT TOP 10 Code, Data
FROM dbo.MyTable
WHERE Code = 3
Has to be executed with a table scan, and it must read all the rows (even those with Code=2) to determine whether they qualify for the SELECT.
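You can observe this from a third session while both transactions are open; a hedged way to do it is to query the lock DMV and look at the S/X locks held and the lock that is being waited on:

SELECT request_session_id, resource_type, resource_description,
       request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();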
The scan under a locking isolation level is one reason why you should almost always use row-versioning, either by setting the database to READ COMMITTED SNAPSHOT, or by coding read-only transactions to use SNAPSHOT isolation.
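A hedged sketch of both row-versioning options (the database name is a placeholder; switching READ_COMMITTED_SNAPSHOT on needs exclusive access to the database, or a WITH ROLLBACK clause, on a busy server):

ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- or keep locking READ COMMITTED as the default and let individual
-- read-only transactions opt in:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
-- then, in the reading session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;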

ACID transactions across multiple technologies

I have an application that uses a local database and a remote database to synchronize to. The local database uses SQLite and for the remote database I'm using postgres. I need to move data from one database to the other database and avoid duplicating information.
Roughly what I do right now:
BEGIN; -- remote database (start transaction)
SELECT * FROM local.queued LIMIT 1; -- local database (select first queued element)
INSERT INTO remote.queued VALUES ( element ); -- remote database (insert that element on the remote database)
BEGIN; -- local database (start transaction)
DELETE FROM local.queued LIMIT 1; -- local database (delete that element from the local database)
END; -- local database (finalize local transaction)
END; -- remote database (finalize remote transaction)
This works relatively well most of the time, but occasionally, after giving the program a hard reset, I've noticed that a data record was duplicated. I believe this has something to do with how the transactions are finalized. Because I'm using two distinct technologies it would be impossible to create a single atomic commit with WAL archiving.
Any ideas how I could improve this concept to avoid duplicate entries?
The canonical way to do that is a distributed transaction using the two-phase commit protocol.
Unfortunately SQLite doesn't seem to support it, but since PostgreSQL does, you can still use it if only two databases are involved:
BEGIN; -- on PostgreSQL
BEGIN; -- on SQLite
/*
* Do work on both databases.
* On error, ROLLBACK both transactions.
*/
PREPARE TRANSACTION 'somename'; -- PostgreSQL
COMMIT; -- SQLite
COMMIT PREPARED 'somename'; -- PostgreSQL
Now if an error happens during the SQLite COMMIT, you run ROLLBACK PREPARED 'somename' on PostgreSQL. The idea is that everything that can fail during commit is done during PREPARE TRANSACTION, and the state of the transaction is persisted so that it stays open, but will still survive a server restart.
This is safe, but there is a caveat. Prepared transactions are dangerous, because they will hold locks and keep VACUUM from cleaning up (like all other transactions), but they are persistent and stick around until you explicitly remove them. So you need some piece of software, a distributed transaction manager, that is crash safe and keeps track of all distributed transactions. This transaction manager can clean up all prepared transactions after some outage.
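A hedged sketch of what that clean-up could look like on the PostgreSQL side: after an outage, list any prepared transactions that are still pending and, for each one, decide (from the transaction manager's own log) whether to commit or roll it back.

SELECT gid, prepared, owner, database
FROM pg_prepared_xacts;

-- for a transaction that was never confirmed on the SQLite side:
ROLLBACK PREPARED 'somename';
-- for one that was confirmed:
COMMIT PREPARED 'somename';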
I think it would make sense to make your DML actions idempotent - that is to say that if you call them multiple times they have the same overall effect. For example, we can make the INSERT a no-op if the data exists:
INSERT INTO x(id, name)
SELECT nu.id, nu.name
FROM
(SELECT 1 as id, 'a' as name) as nu
LEFT JOIN x ON nu.id = x.id
WHERE
x.id IS NULL
You can run this as many times as you like, and it'll only insert one record
https://www.db-fiddle.com/f/nbHmy3PVDQ3RrGMqLni1su/0
You'll need to decide what to do if the record already exists in an altered state - e.g. do you want to leave it alone, or reset it to the incoming values - a question for another time.
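If you decide the incoming values should win, a hedged sketch of an upsert (this syntax works on PostgreSQL 9.5+ and recent SQLite, and assumes a unique constraint or primary key on id):

INSERT INTO x (id, name)
VALUES (1, 'a')
ON CONFLICT (id) DO UPDATE
SET name = EXCLUDED.name;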

SQL Server Prevent Duplication

The SQL database contains a table with a primary key made of two columns: ID and Version. When the application attempts to save to the database it uses a command that looks similar to this:
BEGIN TRANSACTION
INSERT INTO [dbo].[FirstTable] ([ID],[VERSION]) VALUES (41,19)
INSERT INTO [dbo].[SecondTable] ([ID],[VERSION]) VALUES (41,19)
COMMIT TRANSACTION
SecondTable has a foreign key constraint to FirstTable and matching column names.
If two computers execute this statement at the same time, does the first computer lock FirstTable while the second computer waits, and then, when it has finished waiting, does it find that its first insert statement fails, throw an error, and not execute the second statement? Is it possible for the second insert to run successfully on both computers?
What is the safest way to ensure that the second computer does not write anything and returns an error to the calling application?
Since these are transactions, all operations must be successful otherwise everything gets rolled back. Whichever computer completes the first statement of the transaction first will ultimately complete its transaction. The other computer will attempt to insert a duplicate primary key and fail, causing its transaction to roll back.
Ultimately there will be one row added to both tables, via only one of the computers.
Of course, if there are 2 statements then one of them is going to be executed and the other one will throw an error. In order to avoid this, your best bet would be to use either "if exists" or "if not exists".
What this does is basically check whether the data is already present in the table: if not, you insert; otherwise you just select.
Take a look at the flow shown below:
begin tran
if not exists (select * from Table with (updlock, rowlock, holdlock) where ...)
    /* insert */
else
    /* update */
commit /* locks are released here */
The transaction did not roll back automatically when an error was encountered. It needed to have
SET XACT_ABORT ON
Found this in this question:
SQL Server - transactions roll back on error?
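A minimal sketch of the original batch with that setting in place, so that a duplicate-key error on the first INSERT rolls back the whole transaction automatically rather than leaving the second INSERT to run:

SET XACT_ABORT ON;

BEGIN TRANSACTION
INSERT INTO [dbo].[FirstTable] ([ID],[VERSION]) VALUES (41,19)
INSERT INTO [dbo].[SecondTable] ([ID],[VERSION]) VALUES (41,19)
COMMIT TRANSACTION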

exclusive lock and shared lock for select statement - SQL Server

I am not able to understand how a SELECT will behave while it is part of an exclusive transaction. Please consider the following scenarios:
Scenario 1
Step 1.1
create table Tmp(x int)
insert into Tmp values(1)
Step 1.2 – session 1
begin tran
set transaction isolation level serializable
select * from Tmp
Step 1.3 – session 2
select * from Tmp
Even though the first session hasn't finished, session 2 is able to read the Tmp table. I thought Tmp would hold an exclusive lock and that a shared lock could not be granted to the SELECT query in session 2, but that's not what happens. I have made sure that the default isolation level is READ COMMITTED.
Thanks in advance for helping me in understanding this behavior.
EDIT: Why do I need the SELECT inside an exclusive lock?
I have an SP which generates sequential values. The flow is:
read the max value from the table and store it in a variable
update the table, setting value = value + 1
This SP is executed in parallel by several thousand instances. If two instances execute the SP at the same time, they will read the same value and both update it to value + 1, whereas I would like a distinct sequential value for every execution. I think that's possible only if the SELECT is also part of the exclusive lock.
If you want a transaction to be serializable, you have to change that option before you start the outermost transaction. So your first session is incorrect and is still actually running under read committed (or whatever other level was in effect for that session).
But even if you correct the order of statements, it still will not acquire an exclusive lock for a plain SELECT statement.
If you want the plain SELECT to acquire an exclusive lock, you need to ask for it:
select * from Tmp with (XLOCK)
or you need to execute a statement that actually requires an exclusive lock:
update Tmp set x = x
Your first session doesn't need an exclusive lock because it's not changing the data. If your first (serializable) session had run to completion and either rolled back or committed, before your second session was started, that session's results would still be the same because your first session didn't change the data - and so the "serializable" nature of the transaction was correct.
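For the sequence-generating procedure described in the edit, one common pattern (a hedged sketch; the table and column names are assumptions, not the asker's schema) is to fold the read and the increment into a single UPDATE, so both happen under the same exclusive lock and two concurrent executions can never see the same value:

DECLARE @NewValue int;

UPDATE dbo.SequenceTable
SET @NewValue = CurrentValue = CurrentValue + 1   -- read and increment atomically
WHERE SequenceName = 'MySequence';

SELECT @NewValue AS NewValue;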

Design a Lock for SQL Server to help relax the conflict between INSERT and SELECT

The SQL Server is SQL Azure; basically it is SQL Server 2008 for normal purposes.
I have a table called TASK that constantly has new data coming in (new tasks) and data being removed (completed tasks).
For new data coming in, I use INSERT INTO .. SELECT ..., which most of the time takes very long, let's say dozens of minutes.
For old data going out, I first use SELECT (WITH NOLOCK) to get a task, UPDATE to let other threads know this task has started processing, then DELETE once it is finished.
Deadlocks sometimes happen on the SELECT, but most of the time on the UPDATE and DELETE.
This is not a time-critical task, so I can start processing the new data once all the INSERTs have finished. Is there any kind of LOCK to ask the SELECT not to pick up rows before the INSERT has finished? Or any other suggestion to avoid the conflict? I can redesign the table if needed.
Since SQL Server 2005, resolving locking problems is easier.
For the conflict:
1. You can use Service Broker.
2. Use the isolation level.
Run DBCC USEROPTIONS; in the last row you can see that the default isolation level is read_committed. This is the session level.
We can change the level to read_committed_snapshot to reduce the conflict; SQL Server does not really have row-level versioning like Oracle by default, but we can use this method to get similar behaviour.
ALTER DATABASE DBName
SET READ_COMMITTED_SNAPSHOT ON;
To enable this feature, the database must be in single-user mode (no other active connections).
Then you can test it with two sessions, A and B:
A: UPDATE table1 WITH (XLOCK) SET name = 'new' WHERE id = 1
B: can still update other rows and select all the data from the table.
My English is not very good, but I do know locking.
In SQL Server, functionally, there are three kinds of locking:
1. Optimistic locking, controlled with a timestamp (rowversion) column.
2. Pessimistic locking, forcing locks while using the data, with hints such as UPDLOCK, XLOCK, and so on.
3. Application ("virtual") locks, using the sp_getapplock procedure.
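A hedged sketch of option 3 for this queue: take an application lock around the "pick up a task" step so it is serialized across sessions without locking the TASK table itself (the lock name and timeout are arbitrary choices):

BEGIN TRANSACTION;

EXEC sp_getapplock
    @Resource    = 'TASK_dequeue',
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 10000;          -- wait up to 10 seconds

-- SELECT / UPDATE / DELETE the task here

COMMIT TRANSACTION;                -- the application lock is released with the transaction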
If you need a locking scheme for your system architecture, please email me: mjjjj2001#163.com
Consider using Service Broker if this is a processing queue.
There are a number of considerations that affect performance and locking. I surmise that the data is being updated and deleted in a separate session. Which transaction isolation level is in use for the insert session and the delete session?
Have the insert session and all its transactions committed and closed by the time the delete session runs? Are there multiple delete sessions running concurrently? It is very important to have an index on the columns you are using to identify a task for the SELECT/UPDATE/DELETE statements, especially if you move to a higher isolation level such as REPEATABLE READ or SERIALIZABLE.
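A hedged sketch of that index (the column name is an assumption about the TASK table; use whatever column actually identifies a task in your statements):

CREATE INDEX IX_TASK_TaskId ON dbo.TASK (TaskId);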
All of these issues could be solved by moving to Service Broker if it is appropriate.