SELECT FOR UPDATE with Subquery results in Deadlock - sql

I have a query that, when executed from different sessions, results in a deadlock.
TAB1 (ID, TARGET, STATE, NEXT), where column ID is the primary key.
SELECT *
FROM TAB1
WHERE NEXT = (SELECT MIN(NEXT) FROM TAB1 WHERE TARGET = ? AND STATE = ?)
AND TARGET = ? AND STATE = ?
FOR UPDATE
In the Oracle trace file, I see the statement:
DEADLOCK DETECTED
Current SQL statement for this session:
SELECT ID, TARGET, NEXT, STATE FROM TAB1
WHERE NEXT=(SELECT MIN(NEXT) FROM TAB1 WHERE (TARGET='$any') AND ( STATE = 0))
AND (TARGET='$any')
AND (STATE = 0) FOR UPDATE
The following deadlock is not an ORACLE error. It is a
deadlock due to user error in the design of an application
or from issuing incorrect ad-hoc SQL. The following
information may aid in determining the deadlock:
Deadlock graph:
---------Blocker(s)-------- ---------Waiter(s)---------
Resource Name process session holds waits process session holds waits
TX-00010012-0102905b 54 474 X 52 256 X
TX-000a0005-00a30961 52 256 X 54 474 X
session 474: DID 0001-0036-00000002 session 256: DID 0001-0034-00000002
session 256: DID 0001-0034-00000002 session 474: DID 0001-0036-00000002
Rows waited on:
Session 256: obj - rowid = 00013181 - AAATGBAAzAABtPTAAI
(dictionary objn - 78209, file - 51, block - 447443, slot - 8)
Session 474: obj - rowid = 00013181 - AAATGBAAzAABtPUAAJ
(dictionary objn - 78209, file - 51, block - 447444, slot - 9)
Information on the OTHER waiting sessions:
Session 256:
pid=52 serial=58842 audsid=43375302 user: 106/B2B_ISINTERNAL
O/S info: user: admwmt, term: spl099wmt04.compucom.local, ospid: , machine: spl099wmt04.compucom.local/10.16.0.41
program: JDBC Connect Client
Current SQL Statement:
SELECT ID, TARGET, NEXT, STATE FROM TAB1
WHERE NEXT=(SELECT MIN(NEXT) FROM TAB1 WHERE (TARGET='$any') AND ( STATE = 0))
AND (TARGET='$any')
AND (STATE = 0) FOR UPDATE
End of information on OTHER waiting sessions.
===================================================
Is there any way to avoid this, for example by rewriting the query or adding an index?

I think the reason may be that you are selecting from the same table twice in a FOR UPDATE statement: once in the main query and once in the subquery.

Update
Instead of trying to guess exactly how Oracle retrieves rows and forcing a plan, it may be easier to use one of the available FOR UPDATE locking features.
NOWAIT or SKIP LOCKED should be able to fix the problem, although with NOWAIT you would probably need to add some application logic to retry after an error.
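A minimal sketch of the SKIP LOCKED variant, using the same columns and bind placeholders as the question (note that skipping a locked row means the statement may simply return no rows while another session holds the current minimum):
SELECT ID, TARGET, NEXT, STATE
FROM TAB1
WHERE NEXT = (SELECT MIN(NEXT) FROM TAB1 WHERE TARGET = ? AND STATE = ?)
AND TARGET = ? AND STATE = ?
FOR UPDATE SKIP LOCKED;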
Since there are bind variables there may be multiple execution plans for the same SQL statement. This is normally a good thing, for example think about a query like this: select * from tab where status = ?. A full table scan would work best for a popular status, and an index scan would work better for a rare status. But if one plan uses an index and one uses a table, the same statement will retrieve resources in a different order, potentially causing a deadlock.
Forcing the statement to always use the same plan will prevent the deadlocks.
First, you will want to confirm that my theory about multiple execution plans is correct. Look for multiple rows returned by this query; specifically, look for different PLAN_HASH_VALUEs for the same SQL_ID.
select child_number, plan_hash_value, gv$sql.*
from gv$sql
where sql_text like '%NEXT=(SELECT%';
Then it's a matter of forcing the statements to always use the same plan. One simple way is to find the outline that fixes a specific plan, and use the same set of hints for both statements. Hopefully the forced plan will still run well for all sets of bind variables.
select *
from table(dbms_xplan.display_cursor(
sql_id => '<SQL_ID from above>',
cursor_child_no => <child_number from above>,
format => '+outline')
);

Related

How to do Update if record exists and insert if not in SQLite?

Here is my statement:
CASE
WHEN EXISTS (
    SELECT 1
    FROM Transactions
    WHERE Hash = 'VLEYCBLTDGGLHVQEWWIQ' AND Time = 1531739096
)
THEN
    UPDATE Transactions
    SET BlockID = 0, PoolHeight = NULL
    WHERE Hash = 'VLEYCBLTDGGLHVQEWWIQ' AND Time = 1531739096
ELSE
    INSERT INTO Transactions (Hash, Time, _fROM, Signature, Mode, BlockID, OutputValue, PoolHeight)
    VALUES ('VLEYCBLTDGGLHVQEWWIQ', 1531739096, 'GENESIS', 'GENESIS', -1, 0, 0, NULL)
END;
The error:
System.Data.SQLite.SQLiteException: 'SQL logic error near "CASE": syntax error'
Basically: if the row exists, I want to update it; if it doesn't exist, I want to insert it.
You will need to do that in your application. The CASE expression cannot be used for control flow; many RDBMSs support an IF statement for that purpose, but SQLite's dialect of SQL does not.
For this problem, you can try using SQLite's INSERT OR REPLACE syntax, but looking at your queries it does not seem to 100% match what you're trying to do. You'd be updating Signature, Mode, BlockID, OutputValue, and PoolHeight on a collision, while it looks like you only want to update BlockID and PoolHeight.
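For illustration, INSERT OR REPLACE would look roughly like the following (this assumes a UNIQUE constraint on (Hash, Time) to trigger the conflict). Because it deletes the conflicting row and inserts this one wholesale, every column takes the new value, not just BlockID and PoolHeight:
INSERT OR REPLACE INTO Transactions (Hash, Time, _fROM, Signature, Mode, BlockID, OutputValue, PoolHeight)
VALUES ('VLEYCBLTDGGLHVQEWWIQ', 1531739096, 'GENESIS', 'GENESIS', -1, 0, 0, NULL);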
You may be able to use an INSERT [...] ON CONFLICT UPDATE statement, but I haven't used the so-called UPSERT clause before. I believe it would look like this:
INSERT INTO Transactions (Hash, Time, _fROM, Signature, Mode, BlockID, OutputValue, PoolHeight)
VALUES ('VLEYCBLTDGGLHVQEWWIQ', 1531739096, 'GENESIS', 'GENESIS', -1, 0, 0, NULL)
ON CONFLICT (Hash, Time) DO UPDATE SET BlockID = 0, PoolHeight = NULL
WHERE Hash = 'VLEYCBLTDGGLHVQEWWIQ' AND Time = 1531739096;
However, I've never used this statement myself on SQLite, so I'm not sure of exactly how it behaves. It does seem to require that the columns specified for CONFLICT are indexed. I'm not entirely sure the WHERE clause is even necessary here. Read the doc and do extensive testing first.
The other option would be to simply run the UPDATE statement every time, and then if you get zero records affected run the INSERT.
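A sketch of that update-then-insert approach (the zero-rows check happens in application code, using the affected-row count your driver reports for the UPDATE):
UPDATE Transactions
SET BlockID = 0, PoolHeight = NULL
WHERE Hash = 'VLEYCBLTDGGLHVQEWWIQ' AND Time = 1531739096;
-- if the driver reports 0 rows affected, fall back to:
INSERT INTO Transactions (Hash, Time, _fROM, Signature, Mode, BlockID, OutputValue, PoolHeight)
VALUES ('VLEYCBLTDGGLHVQEWWIQ', 1531739096, 'GENESIS', 'GENESIS', -1, 0, 0, NULL);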

Multiple success messages from SQL Server Update statement

I have the following query that SHOULD update 716 records:
USE db1
GO
UPDATE SAMP
SET flag1 = 'F', flag2 = 'F'
FROM samp INNER JOIN result ON samp.samp_num = result.samp_num
WHERE result.status != 'X'
AND result.name = 'compound'
AND result.alias = '1313'
AND samp.standard = 'F'
AND samp.flag2 = 'T';
However, when this query is run on a SQL Server 2005 database from a query window in SSMS, I get the following THREE messages:
716 row(s) affected
10814 row(s) affected
716 row(s) affected
So WHY am I getting 3 messages (instead of the normal one for a single update statement) and WHAT does the 10814 likely refer to? This is a production database I need to update so I don't want to commit these changes without knowing the answer :-) Thanks.
This is likely caused by a trigger on the [samp] table. If you go to Query -> Query Options -> Execution -> Advanced and check SET STATISTICS IO, you will see which other tables are being updated when you run the query.
You can also use Object Explorer in SSMS to look for the triggers: expand the Tables node, find the table, expand it, and then expand Triggers. The nice thing about this method is that you can script the trigger to a new query window and see what the trigger is doing.
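A hedged T-SQL sketch of the same check (assuming the table is dbo.samp): list any triggers on the table and pull their definitions from the catalog views instead of clicking through Object Explorer.
SELECT t.name AS trigger_name,
       t.is_disabled,
       OBJECT_DEFINITION(t.object_id) AS trigger_body
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.samp');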
It's probably because you have a trigger on your table.
This command will show you what is happening.
SET STATISTICS IO { ON | OFF }
https://msdn.microsoft.com/en-us/library/ms184361.aspx

Mutating Table in Oracle 11 caused by a function

We've recently upgraded from Oracle 10 to Oracle 11.2. After upgrading, I started seeing a mutating table error caused by a function rather than a trigger (which I've never come across before). It's old code that worked in prior versions of Oracle.
Here's a scenario that will cause the error:
create table mutate (
x NUMBER,
y NUMBER
);
insert into mutate (x, y)
values (1,2);
insert into mutate (x, y)
values (3,4);
I've created two rows. Now, I'll double my rows by calling this statement:
insert into mutate (x, y)
select x + 1, y + 1
from mutate;
This isn't strictly necessary to duplicate the error, but it helps with my demonstration later. So the contents of the table now look like this:
X,Y
1,2
3,4
2,3
4,5
All is well. Now for the fun part:
create or replace function mutate_count
return PLS_INTEGER
is
v_dummy PLS_INTEGER;
begin
select count(*)
into v_dummy
from mutate;
return v_dummy;
end mutate_count;
/
I've created a function to query my table and return a count. Now, I'll combine that with an INSERT statement:
insert into mutate (x, y)
select x + 2, y + 2
from mutate
where mutate_count() = 4;
The result? This error:
ORA-04091: table MUTATE is mutating, trigger/function may not see it
ORA-06512: at "MUTATE_COUNT", line 6
So I know what causes the error, but I am curious as to the why. Isn't Oracle performing the SELECT, retrieving the result set, and then performing a bulk insert of those results? I would only expect a mutating table error if records were already being inserted before the query finished. But if Oracle did that, wouldn't the earlier statement:
insert into mutate (x, y)
select x + 1, y + 1
from mutate;
start an infinite loop?
UPDATE:
Through Jeffrey's link I found this in the Oracle docs:
By default, Oracle guarantees statement-level read consistency. The
set of data returned by a single query is consistent with respect to a
single point in time.
There's also a comment from the author in his post:
One could argue why Oracle doesn't ensure this 'statement-level read
consistency' for repeated function calls that appear inside a SQL
statement. It could be considered a bug as far as I'm concerned. But
this is the way it currently works.
Am I correct in assuming that this behavior has changed between Oracle versions 10 and 11?
Firstly,
insert into mutate (x, y)
select x + 1, y + 1
from mutate;
does not start an infinite loop, because the query will not see the data that was inserted - only data that existed as of the start of the statement. The new rows will only be visible to subsequent statements.
This explains it quite well:
When Oracle steps out of the SQL-engine that's currently executing the
update statement, and invokes the function, then this function -- just
like an after row update trigger would -- sees the intermediate states
of EMP as they exist during execution of the update statement. This
implies that the return value of our function invocations heavily
depend on the order in which the rows happen to be updated.
See also "Statement-Level Read Consistency" and "Transaction-Level Read Consistency".
From the manual:
"If a SELECT list contains a function, then the database applies
statement-level read consistency at the statement level for SQL run
within the PL/SQL function code, rather than at the parent SQL
level. For example, a function could access a table whose data is
changed and committed by another user. For each execution of the
SELECT in the function, a new read consistent snapshot is
established".
Both concepts are explained in the "Oracle® Database Concepts" guide:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/consist.htm#sthref1955
UPDATE (section added after the question was closed)
The rule
The technical rule, linked by Mr Kemp and well explained by Toon Koppelaars here, is stated in "PL/SQL Language Reference - Controlling Side Effects of PL/SQL Subprograms" (your function violates RNDS, "reads no database state"):
When invoked from an INSERT, UPDATE, or DELETE statement, the function
cannot query or modify any database tables modified by that statement.
If a function either queries or modifies a table, and a DML statement
on that table invokes the function, then ORA-04091 (mutating-table
error) occurs.
PL/SQL Functions that SQL Statements Can Invoke
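For illustration only (not part of the original answers), one way to stay within that rule is to take the count in its own statement before the DML, so no function queries the table while the INSERT is modifying it:
DECLARE
  v_cnt PLS_INTEGER;
BEGIN
  -- count first, in a separate statement
  SELECT COUNT(*) INTO v_cnt FROM mutate;

  IF v_cnt = 4 THEN
    INSERT INTO mutate (x, y)
    SELECT x + 2, y + 2
    FROM mutate;
  END IF;
END;
/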

How to lock upon select, not just insert/update/delete

Lets say I've got code like the following:
begin
select ... from T where x = 42; -- (1)
.
.
.
update T ... where x = 42; -- (2)
commit;
end;
Am I correct in saying that by the time (2) executes, whatever has been selected from T in (1) may no longer be in T, if, for example, in another session the following executed:
delete from T where x = 42;
If so, what I would like to happen is the select statement to 'lock' T, so it can not be modified.
I realise I can do this explicitly by doing:
lock table T in exclusive mode;
But what if T is a view? Do I have to look through the definition of Ts view/subviews to find all tables it references and lock them individually, or can I do something like this:
begin
START_TRANSACTION;
select ... from T where x = 42; -- (1)
.
.
.
update T ... where x = 42; -- (2)
commit;
end;
Where START_TRANSACTION ensures locks all tables referred to in all select statements until the transaction is complete?
Or is there another nicer solution to this issue? I'm using Oracle 10g if that is important.
Am I correct in saying that by the time (2) executes, whatever has
been selected from T in (1) may no longer be in T
Yes.
So you can lock the row by doing...
SELECT ...
[INTO ...]
FROM T
WHERE x = 42
FOR UPDATE [NOWAIT];
You can optionally use NOWAIT to make the statement fail if someone else already has the row locked. Without the NOWAIT the statement will pause until it can lock the row.
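A hedged PL/SQL sketch of the NOWAIT-plus-retry idea (assuming T has a numeric column x that identifies a single row; ORA-00054 is the "resource busy" error NOWAIT raises when the row is already locked):
DECLARE
  row_locked EXCEPTION;
  PRAGMA EXCEPTION_INIT(row_locked, -54);
  v_x T.x%TYPE;
BEGIN
  SELECT x INTO v_x FROM T WHERE x = 42 FOR UPDATE NOWAIT;
  UPDATE T SET x = v_x WHERE x = 42;
  COMMIT;
EXCEPTION
  WHEN row_locked THEN
    NULL;  -- another session holds the lock: retry later or report it to the caller
END;
/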
You need to use the select ... for update variation as shown here.
You definitely don't want to lock the entire table, although locking a view is valid. From the Oracle docs:
When a LOCK TABLE statement is issued on a view, the underlying base tables are locked.
However, that may be a performance killer and your DBAs will find you - there's nowhere to hide :-)
I'm pretty sure that transactions are exactly what you want here. They help enforce what's called ACID for a sequence of operations.

Write consistency with nested subquery in Oracle

I've read many of the gory details of write consistency and I understand how it works in the simple cases. What I'm not clear on is what this means for nested sub-queries.
Here's a concrete example:
A table with PK id, and other columns state, temp and date.
UPDATE table SET state = DECODE(state, 'rainy', 'snowy', 'sunny', 'frosty') WHERE id IN (
SELECT id FROM (
SELECT id,state,temp from table WHERE date > 50
) WHERE (state='rainy' OR state='sunny') AND temp < 0
)
The real thing was more convoluted (in the innermost query), but this captures the essence.
If we assume the state column is not nullable, can this update ever fail due to concurrent modification (i.e., the DECODE function finds neither 'rainy' nor 'sunny' and so tries to set state to NULL in a non-nullable column)?
Oracle supports "statement level read and write consistency" (as do all other serious DBMSs).
This means that the statement as a whole will not see any changes to the database that occurred after the statement started.
As your UPDATE is one single statement there shouldn't be a case where the decode returns null.
Btw: the statement can be simplified; you don't need the outer SELECT in the sub-query:
UPDATE table SET state = DECODE(state, 'rainy', 'snowy', 'sunny', 'frosty')
WHERE id IN (
SELECT id
FROM table
WHERE date > 50
AND (state='rainy' OR state='sunny')
AND temp < 0
)
I don't see any reason to be concerned. The subquery explicitly retrieves only IDs of rows with state 'rainy' or 'sunny', and that's what the outer DECODE is going to get. The whole thing is one statement, and is going to be executed within transaction boundaries.
Answering my own question: turns out there is a bug in Oracle which can cause this query to fail. Details confirmed by Tom Kyte, in the discussion starting here.