I use libpqxx to connect to PostgreSQL, and everything was fine until I ran serializable transactions against the same row of one table.
table:
CREATE TABLE t1(id integer primary key);
postgres 9.4.4_x64
pqxx::connection c1(conn_str);
pqxx::connection c2(conn_str);
pqxx::transaction<pqxx::isolation_level::serializable> t1(c1);
t1.exec("INSERT INTO t1 (id) VALUES (25)");
pqxx::transaction<pqxx::isolation_level::serializable> t2(c2);
t2.exec("INSERT INTO t1 (id) VALUES (25)"); //hang here
t2.commit();
t1.commit();
My program hangs forever, inside the PQexec function. Why? I expected one of the transactions to be rolled back, but instead it just hangs.
UPDATE: same result for pure libpq:
c1 = PQconnectdb(conninfo);
c2 = PQconnectdb(conninfo);
res1 = PQexec(c1, "BEGIN");
PQclear(res1);
res1 = PQexec(c1, "INSERT INTO t1 (id) VALUES (104)");
PQclear(res1);
res2 = PQexec(c2, "BEGIN");
PQclear(res2);
res2 = PQexec(c2, "INSERT INTO t1 (id) VALUES (104)");
PQclear(res2);
res2 = PQexec(c2, "END");
PQclear(res2);
res1 = PQexec(c1, "END");
PQclear(res1);
postgresql 9.1 - same hang
The hang has nothing to do with the serializable isolation level.
I'm no libpqxx expert, but your example appears to be running both transactions in a single thread. That's your problem.
t2.exec("INSERT INTO t1 (id) VALUES (25)");
The above statement has to wait for t1 to commit or roll back before completing, but t1.commit() never gets a chance to execute. Deadlock! This is absolutely normal, and will happen regardless of your chosen isolation level. This is just a consequence of trying to run statements from 2 concurrent transactions in the same thread of execution, which is not a good idea.
Try running both transactions on different threads, and your hang will go away.
If only the two transactions are involved, you should get a unique violation error for transaction t2, exactly the same as with the default READ COMMITTED isolation level:
ERROR: duplicate key value violates unique constraint "t1_pkey"
DETAIL: Key (id)=(25) already exists.
t1 tried the insert first and wins regardless of which transaction tries to commit first. All following transactions trying to insert the same key wait for the first. Again, this is valid for READ COMMITTED and SERIALIZABLE alike.
A possible explanation would be that a third transaction is involved, which tried to insert the same key first and is still open. Or several such transactions, artefacts of your tests.
All transactions wait for the first one that tried to insert the same key. If that one commits, all other get a unique violation. If it rolls back, the next in line gets its chance.
To check, look at pg_stat_activity (while connected to the same database):
SELECT * FROM pg_stat_activity;
More specifically:
SELECT * FROM pg_stat_activity WHERE state = 'idle in transaction';
Then commit / rollback the idle transaction. Or brute-force: terminate that connection. Details:
psql: FATAL: too many connections for role
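If you go the brute-force route, a sketch (12345 stands in for the pid shown by pg_stat_activity; on 9.1 that column is named procpid):
-- cancel just the running query of the offending backend
SELECT pg_cancel_backend(12345);
-- or kill the whole connection
SELECT pg_terminate_backend(12345);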
From the Postgres team:
"In concurrent programming, a deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does."
https://en.wikipedia.org/wiki/Deadlock
By that definition it is not a deadlock in the first place. "c1" is not waiting for "c2" to finish and can happily go about its business with id=104. However, your application never gives c1 the next command because it is stubbornly waiting for "c2" to perform its insert, which it cannot do due to the lock held by "c1".
Lock testing can be done within a single thread but deadlock testing implies that both connections need to simultaneously be attempting to execute a command - which a single program cannot accomplish without threading.
David J.
Final solution
What we actually did was bypass the line-by-line insert by creating a work table and a session ID, and having a stored proc that DELETEs and INSERTs from the work table to the main table all at once. No more deadlocks!
Final process:
DELETE FROM TheTable_work (with sessionID, just in case…)
INSERT INTO TheTable_work (line by line with ADODB)
I call a stored procedure that does:
BEGIN TRANSACTION
DELETE FROM TheTable (with some conditions)
INSERT INTO TheTable FROM TheTable_work (1 statement)
COMMIT
DELETE FROM TheTable_work (clean up)
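A minimal sketch of what that stored procedure could look like, purely for illustration (the procedure name, the SessionID column and the @id parameter are assumptions, not the actual code):
CREATE PROCEDURE dbo.SwapFromWorkTable
    @SessionID INT,
    @id INT
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    BEGIN TRANSACTION;
        -- remove the old rows for this id
        DELETE FROM dbo.TheTable WHERE id = @id;

        -- copy the staged rows over in a single statement
        INSERT INTO dbo.TheTable (id, field2, field3)
        SELECT id, field2, field3
        FROM dbo.TheTable_work
        WHERE SessionID = @SessionID;
    COMMIT TRANSACTION;

    -- clean up the staging rows
    DELETE FROM dbo.TheTable_work WHERE SessionID = @SessionID;
END;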
We do think it was because of an index lock, but I am waiting for confirmation from the DBA (explanation here).
Changing isolation levels did not change anything (we tried all the possibilities, even turning READ_COMMITTED_SNAPSHOT on and off).
I have an application that causes a deadlock in my database when 2 users "write" to the database at the same moment. They do not work on the same data, because they have different ids, but they work on the same table.
User 1 (id = 1), User 2 (id = 2)
Process
User 1 does: 1 DELETE statement, followed by 3000 INSERTs
User 2 does: 1 DELETE statement, followed by 3000 INSERTs
User 2's DELETE arrives in the middle of user 1's INSERTs (for example, after 1500 of them). Result: deadlock.
Statements (examples, but they work)
DELETE FROM dbo.TheTable WHERE id = 1 /* Delete about 3000 lines */
INSERT INTO dbo.TheTable (id, field2, field3) VALUES (1, 'value2', 'value3')
I use ADODB (Microsoft Access...), so I do a simple query execute, for example:
ConnectSQLServer.Execute "DELETE or INSERT statement"
Error Message
Run-time error '-2147457259 (80004005)':
Transaction (Process ID76) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Other information:
There is no transaction involved, only simple INSERT or DELETE queries, one by one
There is no RecordSet involved, nor any left open
We tried to look for a deadlock trace but no graph showed up; a DBA is trying to understand why
I'd like to understand why they deadlock each other and not just "wait" until the other one is finished! How can I do these operations better? =)
Access: it's an Access front-end with a SQL Server backend. No linked tables; VBA code pushes queries via an ADODB connection.
Edit : Deadlock graph
On the left we have one (of 3000) INSERT statements; on the right, one DELETE.
UPDATE
I deleted an index on a column that is used in the DELETE statement and I cannot reproduce the deadlock anymore! Not a clean solution, but a temporary one. We are thinking about INSERTing the 3000 lines into a temp table, then copying from the temp table to TheTable all at once (DELETE and INSERT in a stored procedure).
Thanks,
Séb
DELETE will hold an Update lock on the pages:
Update (U)
Used on resources that can be updated. Prevents a common form of
deadlock that occurs when multiple sessions are reading, locking, and
potentially updating resources later.
And INSERT will hold an intent/exclusive lock on the same pages.
Exclusive (X)
Used for data-modification operations, such as INSERT, UPDATE, or
DELETE. Ensures that multiple updates cannot be made to the same
resource at the same time.
Intent
Used to establish a lock hierarchy. The types of intent locks are:
intent shared (IS), intent exclusive (IX), and shared with intent
exclusive (SIX).
If you wish to run both queries at the same time, each will have to wait for the other at some point.
I suggest you run each insert in a separate transaction if no rollback is needed for the whole set of inserts.
Or, try to remove parallelization, or do these operations at different times.
To make it short:
1) Try to make sure each INSERT is correctly committed before going to the next one.
2) Or try to avoid doing both operations at the same time
Another guess would be to adjust the Transaction Isolation Level for the sessions that produce the problem. Check the following documentation.
Reference :
Lock Modes
Transaction Isolation Level
The SQL database contains a table with a primary key made up of two columns: ID and Version. When the application attempts to save to the database, it uses a command that looks similar to this:
BEGIN TRANSACTION
INSERT INTO [dbo].[FirstTable] ([ID],[VERSION]) VALUES (41,19)
INSERT INTO [dbo].[SecondTable] ([ID],[VERSION]) VALUES (41,19)
COMMIT TRANSACTION
SecondTable has a foreign key constraint to FirstTable and matching column names.
If two computers execute this statement at the same time, does the first computer lock FirstTable while the second computer waits, and then, when it has finished waiting, does its first insert fail with an error so that it never executes the second statement? Is it possible for the second insert to run successfully on both computers?
What is the safest way to ensure that the second computer does not write anything and returns an error to the calling application?
Since these are transactions, all operations must be successful otherwise everything gets rolled back. Whichever computer completes the first statement of the transaction first will ultimately complete its transaction. The other computer will attempt to insert a duplicate primary key and fail, causing its transaction to roll back.
Ultimately there will be one row added to both tables, via only one of the computers.
Of course, if there are 2 statements then one of them is going to be executed and the other one will throw an error. In order to avoid this, your best bet would be to use either "if exists" or "if not exists".
What this basically does is check whether the data is already present in the table; if not, you insert, otherwise you just select or update.
Take a look at the flow shown below :
if not exists (select * from Table with (updlock, rowlock, holdlock) where ...)
/* insert */
else
/* update */
commit /* locks are released here */
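Filled in against the tables from the question, a purely illustrative sketch of that flow could be:
BEGIN TRANSACTION;

IF NOT EXISTS (SELECT 1
               FROM [dbo].[FirstTable] WITH (UPDLOCK, ROWLOCK, HOLDLOCK)
               WHERE [ID] = 41 AND [VERSION] = 19)
BEGIN
    INSERT INTO [dbo].[FirstTable]  ([ID], [VERSION]) VALUES (41, 19);
    INSERT INTO [dbo].[SecondTable] ([ID], [VERSION]) VALUES (41, 19);
END
-- else: the key already exists; the second caller does nothing (or raises
-- its own error) instead of hitting a duplicate-key exception

COMMIT TRANSACTION;  -- the locks taken by the hints are released here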
The transaction did not roll back automatically when an error was encountered. It needed to have
set xact_abort on
Found this in this question
SQL Server - transactions roll back on error?
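For reference, a minimal sketch of the batch from the question with that option applied:
SET XACT_ABORT ON;  -- any run-time error now aborts and rolls back the transaction

BEGIN TRANSACTION
INSERT INTO [dbo].[FirstTable] ([ID],[VERSION]) VALUES (41,19)
INSERT INTO [dbo].[SecondTable] ([ID],[VERSION]) VALUES (41,19)
COMMIT TRANSACTION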
I am not able to understand how a SELECT will behave while it's part of an exclusive transaction. Please consider the following scenarios:
Scenario 1
Step 1.1
create table Tmp(x int)
insert into Tmp values(1)
Step 1.2 – session 1
begin tran
set transaction isolation level serializable
select * from Tmp
Step 1.3 – session 2
select * from Tmp
Even though the first session hasn't finished, session 2 is able to read the Tmp table. I thought Tmp would hold an exclusive lock and a shared lock would not be granted to the SELECT query in session 2, but that's not what happens. I have made sure that the default isolation level is READ COMMITTED.
Thanks in advance for helping me in understanding this behavior.
EDIT: Why do I need the SELECT under an exclusive lock?
I have an SP which actually generates sequential values. The flow is:
Read the max value from the table and store it in a variable
Update the table, setting value = value + 1
This SP is executed in parallel by several thousand instances. If two instances execute the SP at the same time, they will read the same value and both update it to value+1, though I would like a distinct sequential value for every execution. I think that's possible only if the SELECT is also part of the exclusive lock.
If you want a transaction to be serializable, you have to change that option before you start the outermost transaction. So your first session is incorrect and is still actually running under read committed (or whatever other level was in effect for that session).
But even if you correct the order of statements, it still will not acquire an exclusive lock for a plain SELECT statement.
If you want the plain SELECT to acquire an exclusive lock, you need to ask for it:
select * from Tmp with (XLOCK)
or you need to execute a statement that actually requires an exclusive lock:
update Tmp set x = x
Your first session doesn't need an exclusive lock because it's not changing the data. If your first (serializable) session had run to completion and either rolled back or committed before your second session started, that second session's results would still be the same, because your first session didn't change the data - and so the "serializable" nature of the transaction was preserved.
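Applied to the sequence-generating flow from the question's edit, a minimal sketch could look like this; note it uses UPDLOCK and HOLDLOCK (a common alternative to XLOCK for read-then-update counters) and assumes Tmp holds a single counter row:
DECLARE @NextValue INT;

BEGIN TRANSACTION;

    -- a second caller blocks on this read until the first one commits
    SELECT @NextValue = x + 1
    FROM Tmp WITH (UPDLOCK, HOLDLOCK);

    UPDATE Tmp
    SET x = @NextValue;

COMMIT TRANSACTION;

SELECT @NextValue AS GeneratedValue;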
I have two transactions: T1 with the SERIALIZABLE isolation level and T2 (I think with the default READ COMMITTED isolation level, but it doesn't matter).
Transaction T1 performs a SELECT, then a WAITFOR of 2 seconds, then another SELECT.
Transaction T2 performs an UPDATE on data which T1 read.
This causes a deadlock. Why doesn't transaction T2 wait for the end of T1?
When T1 has the REPEATABLE READ isolation level everything is OK, i.e. phantom rows occur.
I thought that when I raised the isolation level to SERIALIZABLE, T2 would wait for the end of T1.
This is part of a college exercise. I have to show negative effects in two parallel transactions which have an incorrect isolation level, and the absence of these effects with the correct isolation level.
Here's the code; unfortunately the names of the fields are in Polish.
T1:
USE MR;
SET IMPLICIT_TRANSACTIONS OFF;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
-- query 1
SELECT
www.IdSamochodu, s.Model, s.Marka, s.NrRejestracyjny, o.PESEL, o.Nazwisko, o.Imie, o.NrTelefonu
FROM
WizytyWWarsztacie www
JOIN
Samochody s
ON s.IdSamochodu = www.IdSamochodu
JOIN
Osoby o
ON o.PESEL = s.PESEL
WHERE
www.[Status] = 'gotowy_do_odbioru'
ORDER BY www.IdSamochodu ASC
;
WAITFOR DELAY '00:00:02';
-- query 2
SELECT
u.IdSamochodu, tu.Nazwa, tu.Opis, u.Oplata
FROM
Uslugi u
JOIN
TypyUslug tu
ON tu.IdTypuUslugi = u.IdTypuUslugi
JOIN
WizytyWWarsztacie www
ON www.IdSamochodu = u.IdSamochodu AND
www.DataOd = u.DataOd
WHERE
www.[Status] = 'gotowy_do_odbioru'
ORDER BY u.IdSamochodu ASC, u.Oplata DESC
;
COMMIT;
T2:
USE MR;
SET IMPLICIT_TRANSACTIONS OFF;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
UPDATE
Uslugi
SET
[Status] = 'wykonano'
WHERE
IdUslugi = 2
;
UPDATE
www
SET
www.[Status] = 'gotowy_do_odbioru'
FROM
WizytyWWarsztacie www
WHERE
www.[Status] = 'wykonywanie_usług' AND
EXISTS (
SELECT 1
FROM
Uslugi u
WHERE
u.IdSamochodu = www.IdSamochodu AND
u.DataOd = www.DataOd AND
u.[Status] = 'wykonano'
GROUP BY u.IdSamochodu, u.DataOd
HAVING COUNT(u.IdUslugi) = (
SELECT
COUNT(u2.IdUslugi)
FROM
Uslugi u2
WHERE
u2.IdSamochodu = www.IdSamochodu AND
u2.DataOd = www.DataOd
GROUP BY u2.IdSamochodu, u2.DataOd
)
)
;
COMMIT;
I use SQL Server Management Studio and have each transaction in a different file. I run this by pressing F5 in T1, then quickly switching to the file which contains T2 and pressing F5 again.
I have read about deadlocks and the locking mechanism in MSSQL but apparently I haven't understood this topic yet.
Deadlock issue in SQL Server 2008 R2 (.Net 2.0 Application)
SQL Server deadlocks between select/update or multiple selects
Deadlock on SELECT/UPDATE
http://msdn.microsoft.com/en-us/library/ms173763(v=sql.105).aspx
http://www.sql-server-performance.com/2004/advanced-sql-locking/
edit
I figured out that the first UPDATE statement in T2 causes the problem. Why?
Troubleshooting deadlocks starts with obtaining the deadlock graph. This is an xml document that tells you the relevant bits about the transactions and resources involved. You can get it through Profiler, extended events, or event notifications (I'm sure that there are other methods, but this will do for now). Once you have the graph, examine it to see which type of locks each transaction held on which resources. Where you go from there really depends on what's going on in the graph so I'll stop there. Bottom line: obtain the deadlock graph and mine it for details.
As an aside, to say that one or the other transaction is "causing" the deadlock is somewhat misleading. All transactions involved in the deadlock were necessary to cause the deadlock situation so neither is more at fault.
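If Profiler gives you trouble, one commonly used way to pull recent deadlock graphs from the built-in system_health Extended Events session is a query along these lines (a sketch; it assumes the default ring_buffer target is still enabled):
SELECT XEvent.query('.') AS DeadlockReport
FROM (
    SELECT CAST(target_data AS xml) AS TargetData
    FROM sys.dm_xe_session_targets st
    JOIN sys.dm_xe_sessions s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS Data
CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData (XEvent);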
I had some problems with my SQL Server Management Studio (Profiler didn't work) but finally I obtained the deadlock graph. This article was helpful for me.
To understand this graph I had to learn about the locking mechanism and its symbols.
I think it is explained quite clearly here.
Now that I know about all this stuff, the cause of the deadlock is quite obvious.
I've made sequence diagram for the described situation:
As I wrote earlier, when we get rid of the first UPDATE statement in transaction T2, the deadlock does not occur.
In this situation T2 does not acquire a lock on the pk_uslugi index, so the second SELECT statement in transaction T1 executes successfully and the pk_wizytywwarsztacie index is unlocked. After that, T2 also finishes.
The problem could be this:
The T1 Select S-locks the row
The T2 Update U-locks the row (succeeds)
The T2 Update X-locks the row (waits, lock is queued)
T1 tries to S-lock again, but the S-lock is incompatible with the queued X-lock.
Locks in SQL Server are queued. If the head of the queue waits, everything else behind it also waits.
Actually, I'm not entirely sure that this is the cause because the same problem should occur with REPEATABLE READ. I'm still posting this idea hoping that it helps.
I ran into a similar issue where I was selecting from a list of available items and then inserting those items into a holding queue table. When I had too many concurrent requests, the select statement would return items that were also concurrently selected during another parallel request. When attempting to insert them into the holding queue table, I would receive a Unique Constraint error (because the same item couldn't go into the holding table twice).
I then tried wrapping a SERIALIZABLE transaction around the whole thing but then I ran into DEADLOCK errors because both transactions were holding onto a lock on the UC index (determined by my Deadlock graph).
I was finally able to resolve the issue by using an exclusive Row lock within the select statement.
You could try taking an exclusive row lock on the table/rows in question. This ensures that T1's work on those rows completes before T2 attempts to update the same rows.
EXAMPLE:
SELECT *
FROM Uslugi u WITH (XLOCK, ROWLOCK)
I'm not sure yet of the performance impact of this but while running load testing using multiple threads, it doesn't appear to have a negative impact.
I'm writing a high volume trading system. We receive messages at around 300-500 per second and these messages then need to be saved to the database as quickly as possible. These messages get deposited on a Message Queue and are then read from there.
I've implemented a Competing Consumer pattern, which reads from the queue and allows for multithreaded processing of the messages. However, I'm getting frequent primary key violations while the app is running.
We're running SQL 2008. The sample table structure would be:
CREATE TABLE TableA
(
    MessageSequence INT PRIMARY KEY,
    Data VARCHAR(50)
);
A stored procedure gets invoked to persist this message and looks something like this:
BEGIN TRANSACTION

INSERT TableA (MessageSequence, Data)
SELECT @MessageSequence, @Data
WHERE NOT EXISTS
(
    SELECT TOP 1 MessageSequence FROM TableA WHERE MessageSequence = @MessageSequence
)

IF (@@ROWCOUNT = 0)
BEGIN
    UPDATE TableA
    SET Data = @Data
    WHERE MessageSequence = @MessageSequence
END

COMMIT TRANSACTION
All of this is in a TRY...CATCH block so if there's an error, it rolls back the transaction.
I've tried using table hints, like ROWLOCK, but it hasn't made a difference. Since the Insert is evaluated as a single statement, it seems ludicrous that I'm still getting a 'Primary Key on insert' issue.
Does anyone have an idea why this is happening? And have you got ANY ideas which may point me in the direction of a solution?
Why is this happening?
SELECT TOP 1 MessageSequence FROM TableA WHERE MessageSequence = @MessageSequence
This SELECT will try to locate the row; if it is not found, the EXISTS operator returns FALSE and the INSERT proceeds. However, the decision to INSERT is based on a state that was true at the time of the SELECT, but that is no longer guaranteed to be true at the time of the INSERT. In other words, you have a race condition where two threads can both look up the same @MessageSequence, both find NOT EXISTS and both try to INSERT, while only the first one will succeed; the second one will cause a PK violation.
How do I solve it?
The quickest fix is to add a WITH (UPDLOCK) hint to the SELECT; this forces the lock placed on the @MessageSequence key to be retained and thus makes the INSERT/SELECT behave atomically:
INSERT TableA (MessageSequence, Data)
SELECT @MessageSequence, @Data
WHERE NOT EXISTS (
    SELECT TOP 1 MessageSequence FROM TableA WITH (UPDLOCK) WHERE MessageSequence = @MessageSequence)
To prevent SQL Server from doing fancy stuff like page locks, you can also add the ROWLOCK hint.
However, that is not my recommendation. My recommendation may surprise you, but it is this: do the operation that is most likely to succeed and handle the error if it fails. I.e. if your business case makes it more likely for the @MessageSequence to be new, try an INSERT and handle the PK violation if it fails. This way you avoid the spurious look-ups, and the cost of the catch/retry is amortized over the many cases when it succeeds on the first try.
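A rough sketch of that approach (2627 is the duplicate-key error number; the rest is illustrative, not the original procedure):
BEGIN TRY
    INSERT INTO TableA (MessageSequence, Data)
    VALUES (@MessageSequence, @Data);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 2627   -- PK / unique constraint violation: row already there
        UPDATE TableA
        SET Data = @Data
        WHERE MessageSequence = @MessageSequence;
    ELSE
        RAISERROR('Unexpected error while persisting message', 16, 1);
END CATCH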
Also, it is perhaps worth investigating using the built-in queues that come with SQL Server.
Common problem. Explained here:
Defensive database programming: eliminating IF statements
It might be related to the transaction isolation level. You might need
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
before you start the transaction.
Also, if you have more updates than inserts, you should try the update first, check the rowcount, and do the insert second.
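A sketch of that ordering against the question's table; note it is still subject to the same race for concurrent first-time inserts, so error handling is still needed:
UPDATE TableA
SET Data = @Data
WHERE MessageSequence = @MessageSequence;

IF @@ROWCOUNT = 0
    INSERT INTO TableA (MessageSequence, Data)
    VALUES (@MessageSequence, @Data);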
This is very similar to post 939831. Ultimately you want to use the hints (ROWLOCK, READPAST, UPDLOCK). READPAST tells SQL Server to skip to the next record if the current one is locked. UPDLOCK tells SQL Server that the read lock is going to escalate to an update lock.
When I implemented something similar, I locked the next record by the thread ID:
UPDATE TOP (1)
    foo
SET
    ProcessorID = @PROCID
FROM
    OrderTable foo WITH (ROWLOCK, READPAST, UPDLOCK)
WHERE
    ProcessorID = 0
Then selected the record
SELECT *
FROM foo WITH (NOLOCK)
WHERE ProcessorID = @PROCID
Then marked it as processed
UPDATE foo
SET ProcessorID = -1
WHERE ProcessorID = @PROCID
Later, in off hours, I perform the relatively expensive delete operation to clear the queue of processed records.
The atomicity of the following statement is what you are after:
INSERT TableA (MessageSequence, Data)
SELECT @MessageSequence, @Data
WHERE NOT EXISTS
(
    SELECT TOP 1 MessageSequence FROM TableA WHERE MessageSequence = @MessageSequence
)
According to this person, it depends on the current isolation level.
On a tangent, if you're thinking of a high volume trading system you might want to consider a tick database designed for such data [I'm not exactly sure what "message" you are storing here], such as discussed in this thread for example: http://www.elitetrader.com/vb/showthread.php?threadid=81345.
These are typically in-memory solutions with proprietary query languages. We use kdb+ at our shop.
Not sure what Messaging product you use - but it may be worth looking at the transactions not at the DB level, but at the MQ Level.
Of course, if you are using a TM (transaction manager), the two operations, 1) get from MQ and 2) write to DB, are both 'bracketed' under the same parent commit.
So I am not sure if you are using an implicit or explicit or any TM here (for example, Microsoft's DTC).
MessageSequence is the PK, so could the same message from the MQ be getting processed twice?
When you perform a 'GET' from MQ, make sure the GET is committed (i.e. not a db-commit, but an MQ-commit) - that will ensure the same MessageID cannot be 'popped' by the next thread that writes messages to the DB.