What will happen in PostgreSQL if a cascading delete is attempted on the parent of a locked row? - sql

I have a table foo_bar and another table spam_eggs with a fb foreign key referencing foo_bar. spam_eggs rows are cascade-deleted when their related foo_bar rows are deleted.
I'm working with PostgreSQL.
In a transaction I have used SELECT ... FOR UPDATE to lock a spam_eggs row. While this transaction is open, another transaction has attempted to DELETE FROM ... the foo_bar row referenced by my locked row. Will this raise an error, or will my locked row cause the query to block until the end of my original transaction?

Try it and see. Open psql and do some setup:
CREATE TABLE foo_bar(id integer primary key);
CREATE TABLE spam_eggs(
foo_bar_id integer not null references foo_bar(id) on delete cascade
);
INSERT INTO foo_bar (id) VALUES (1),(2),(3),(4);
INSERT INTO spam_eggs(foo_bar_id) VALUES (1),(2),(3),(4);
Then open a second psql connection and BEGIN a transaction in both sessions.
In the first (old) session, run SELECT 1 FROM spam_eggs WHERE foo_bar_id = 4 FOR UPDATE;
In the second (new) session, run DELETE FROM foo_bar WHERE id = 4;
You will see that the second statement blocks on the first. That's because the DELETE on foo_bar cascades to spam_eggs and attempts to lock the referencing row so it can delete it. That lock blocks on the lock already held by the SELECT ... FOR UPDATE.
In general, try to test in all these circumstances:
tx's are BEGIN ISOLATION LEVEL READ COMMITTED and first issues a ROLLBACK
tx's are BEGIN ISOLATION LEVEL READ COMMITTED and first issues a COMMIT
tx's are BEGIN ISOLATION LEVEL SERIALIZABLE and first issues a ROLLBACK
tx's are BEGIN ISOLATION LEVEL SERIALIZABLE and first issues a COMMIT
to make sure you know what to expect. It's also good for your learning if you reason through what you expect to happen before testing it.
In this case the READ COMMITTED and SERIALIZABLE isolation levels will behave the same. If you actually do an UPDATE after your SELECT ... FOR UPDATE and then COMMIT then they'll behave differently, though; the READ COMMITTED version will DELETE successfully, while the SERIALIZABLE version will fail with:
regress=# BEGIN ISOLATION LEVEL SERIALIZABLE;
regress=# DELETE FROM foo_bar WHERE id = 4;
ERROR: could not serialize access due to concurrent update
CONTEXT: SQL statement "DELETE FROM ONLY "public"."spam_eggs" WHERE $1 OPERATOR(pg_catalog.=) "foo_bar_id""
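The cascade semantics themselves (though not the cross-session blocking, which needs two concurrent connections) can also be sketched outside psql. Below is a minimal single-connection sketch using Python's stdlib sqlite3 module; note this is an illustration only — SQLite requires `PRAGMA foreign_keys = ON` and has no row-level FOR UPDATE locks, so it shows only that deleting the parent cascades to the child row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # FK enforcement is off by default in SQLite
con.execute("CREATE TABLE foo_bar(id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE spam_eggs(
    foo_bar_id INTEGER NOT NULL REFERENCES foo_bar(id) ON DELETE CASCADE)""")
con.executemany("INSERT INTO foo_bar VALUES (?)", [(i,) for i in (1, 2, 3, 4)])
con.executemany("INSERT INTO spam_eggs VALUES (?)", [(i,) for i in (1, 2, 3, 4)])

con.execute("DELETE FROM foo_bar WHERE id = 4")   # cascades to the spam_eggs child row
print(con.execute("SELECT count(*) FROM spam_eggs").fetchone()[0])  # 3
```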

Related

Drop/Create Triggers in a Transaction Question - Will the trigger run after

I would like to ask about the behavior of a trigger (say, an AFTER UPDATE trigger) that is dropped at the beginning of a transaction and then re-created at the end of the transaction. Specifically, will the trigger run during the commit phase of the transaction (assuming that in the middle of the transaction I executed some scripts that would normally fire the trigger)?
Consider this example.
Start Transaction
Drop trigger
Run scripts that make changes to the table that would normally trigger the (dropped) trigger
Re-create the (dropped) trigger
Commit Transaction
At step 5, when the database commits the entire transaction, will the re-created trigger run or not?
UPDATE
I would like to re-phrase the question in case doing it this way is not possible or not a good idea. Instead of dropping/re-creating, I believe a better solution is to disable/enable the trigger. In that case, once the trigger is re-enabled, will it run at the end of the transaction?
UPDATE 2
As suggested by everyone, for my scenario do this:
Start TX
disable trig, run SQL, enable trig
Commit TX
The trigger will not fire which is what I want.
The trigger will not run if you ENABLE the trigger between the DML statement and the COMMIT. For example, this will not cause the trigger to execute:
BEGIN TRANSACTION;
ALTER TABLE [dbo].[myTable] DISABLE TRIGGER [trg_trgtest]
UPDATE [dbo].[myTable]
SET [language] = 'fr'
WHERE id = 6
ALTER TABLE [dbo].[myTable] ENABLE TRIGGER [trg_trgtest]
COMMIT;
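The single-session drop/re-create case can also be sketched with Python's stdlib sqlite3 module. This is an illustrative assumption, not SQL Server: SQLite also has transactional DDL and statement-time triggers, but none of the schema locking discussed below. Because triggers fire when the DML statement executes, not at COMMIT, a trigger dropped before the UPDATE and re-created before COMMIT never runs for that UPDATE:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None   # autocommit mode; we issue BEGIN/COMMIT ourselves
cur = con.cursor()
cur.execute("CREATE TABLE myTable(id INTEGER, language TEXT)")
cur.execute("CREATE TABLE audit(msg TEXT)")
trigger_ddl = """CREATE TRIGGER trg_trgtest AFTER UPDATE ON myTable
                 BEGIN INSERT INTO audit VALUES ('fired'); END"""
cur.execute(trigger_ddl)
cur.execute("INSERT INTO myTable VALUES (6, 'en')")

cur.execute("BEGIN")
cur.execute("DROP TRIGGER trg_trgtest")
cur.execute("UPDATE myTable SET language = 'fr' WHERE id = 6")  # no trigger exists here
cur.execute(trigger_ddl)     # re-create the trigger before COMMIT
cur.execute("COMMIT")

fired_in_tx = cur.execute("SELECT count(*) FROM audit").fetchone()[0]
print(fired_in_tx)           # 0 -- the trigger never ran for the in-transaction UPDATE

cur.execute("UPDATE myTable SET language = 'de' WHERE id = 6")  # trigger is back now
print(cur.execute("SELECT count(*) FROM audit").fetchone()[0])  # 1
```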
Dropping a trigger places an exclusive metadata lock (Sch-M) on the table, preventing any queries or DML, which all require a Schema Stability Lock (Sch-S), for the duration of the transaction.
EG
use tempdb
go
drop table if exists foo
go
create table foo(id int primary key)
go
create trigger tg_foo on foo after insert
as
begin
select 'tg_foo trigger running' msg
end
go
begin transaction
go
drop trigger tg_foo
go
select o.name, o.type_desc, request_mode
from sys.dm_tran_locks tl
join sys.objects o
on o.object_id = tl.resource_associated_entity_id
where request_session_id = @@spid
and o.is_ms_shipped = 0
outputs
name type_desc request_mode
------------------- ----------- --------------
foo USER_TABLE Sch-M
(1 row affected)
So when you drop a trigger at the beginning of a transaction, all access to that table will be blocked until you commit or rollback the transaction.
And so
will the trigger run during the commit phase of the transaction
(assuming that in the middle of the transaction I executed some
scripts that would normally trigger the trigger)?
So on commit, the blocked statements in other sessions that would normally "trigger the trigger" become unblocked and run, firing the re-created trigger as normal.
If you disable a trigger, then execute and commit a transaction, then re-enable the trigger, the trigger will not "run" for the already completed transaction.

Performance of Anonymous PL/SQL Block versus Native SQL

I have a SQL statement which gives significantly different performance when executed inside PL/SQL block.
The SQL is extremely simple.
INSERT into Existing_Table SELECT col1,col2... from Other_table where rownum < 100000
When this is executed as SQL, it comes back almost immediately.
But when executed inside Anonymous PL/SQL block, it hangs forever:
begin
INSERT into Existing_Table SELECT col1, col2... from Other_table where rownum < 100000;
end;
/
I'm guessing two things:
Your table (Existing_Table) has a constraint on one of the columns you're using in the insert statement.
You forgot to issue a commit in between your execution of the SQL statement and the PL/SQL Anonymous Block.
Executing your statement as plain SQL or inside a PL/SQL anonymous block shouldn't make any difference in performance; both should complete in almost the same amount of time. But because of the constraint and the missing commit, the second execution is blocked waiting on the locked row.
Here's an example.
In Session 1, Create two Tables. One with a Constraint, and one without:
create table Existing_Table
(
existing_column number primary key
);
create table Existing_Table_2
(
existing_column number
);
On the same session, execute the following SQL Statement:
insert into Existing_Table (existing_column) values (1);
Result:
1 row inserted.
In another session (Session 2), execute the following PL/SQL anonymous block:
begin
insert into Existing_Table (existing_column) values (1);
end;
This will hang until you issue a commit in Session 1.
This is because Session 1 has "reserved" the value 1 for existing_column; that value will only be "saved" when Session 1 issues a commit.
Session 2 is merely waiting for Session 1 to Commit or Rollback the Insert.
Now, when I go back to Session 1 and issue a commit, the row will be unlocked.
However, Session 2 will result into an Error because of the Integrity Constraint Violation:
Error starting at line : 1 in command -
begin
insert into Existing_Table (existing_column) values (1);
end;
Error report -
ORA-00001: unique constraint (APPS.SYS_C001730810) violated
ORA-06512: at line 2
00001. 00000 - "unique constraint (%s.%s) violated"
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
For Trusted Oracle configured in DBMS MAC mode, you may see
this message if a duplicate entry exists at a different level.
*Action: Either remove the unique restriction or do not insert the key.
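The final error is an ordinary unique-constraint violation once the blocking session commits. A hedged single-session sketch of just the constraint part, using Python's stdlib sqlite3 module (there is no cross-session lock wait here, only the analogue of ORA-00001):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE existing_table(existing_column INTEGER PRIMARY KEY)")
cur.execute("INSERT INTO existing_table VALUES (1)")
con.commit()   # the value 1 is now saved, as after Session 1's COMMIT

try:           # the analogue of Session 2's blocked insert resuming
    cur.execute("INSERT INTO existing_table VALUES (1)")
    violated = False
except sqlite3.IntegrityError as e:
    violated = True
    print("duplicate key:", e)   # SQLite's analogue of ORA-00001
```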
Now, another example, with a table WITHOUT a constraint. Run the SQL below in Session 3 without a commit:
insert into Existing_Table_2 (existing_column) values (1);
Result:
1 row inserted.
Run the same SQL inside an Anonymous PL/SQL Block in Session 4:
begin
insert into Existing_Table_2 (existing_column) values (1);
end;
Result:
PL/SQL procedure successfully completed.
It inserted fine even without a commit in Session 3 because no constraint was being violated.
Just note that none of the data in Sessions 3 and 4 will actually be saved in the database until you issue a commit.
Check out other articles about Session Blocking here:
Tracking Oracle blocking sessions
Find blocking sessions
I tried to recreate the problem but was unable to. As correctly pointed out by others, the SQL was simple enough that executing it as either plain SQL or an anonymous PL/SQL block should not have made a difference.
The only thing that comes to mind is that I may have failed to notice another session holding uncommitted DML (no COMMIT/ROLLBACK), and that might have caused the hang. So the scenario mentioned by BobC and Migs Isip may be relevant here. Thanks all.

Is it possible to rollback DELETE, DROP and TRUNCATE?

Supposedly we can roll back a DELETE query but not a TRUNCATE or DROP. Yet when I executed these queries, the rollback succeeded for DELETE, DROP and TRUNCATE alike. Why?
In SQL Server you can roll back DELETE, TRUNCATE and DROP, but you must issue BEGIN TRANSACTION before executing the statement.
Here is example:
Create Database Ankit
Create Table Tbl_Ankit(Name varchar(11))
insert into tbl_ankit(name) values('ankit');
insert into tbl_ankit(name) values('ankur');
insert into tbl_ankit(name) values('arti');
Select * From Tbl_Ankit
/*======================For Delete==================*/
Begin Transaction
Delete From Tbl_Ankit where Name='ankit'
Rollback
Select * From Tbl_Ankit
/*======================For Truncate==================*/
Begin Transaction
Truncate Table Tbl_Ankit
Rollback
Select * From Tbl_Ankit
/*======================For Drop==================*/
Begin Transaction
Drop Table Tbl_Ankit
Rollback
Select * From Tbl_Ankit
For MySQL:
13.3.2 Statements That Cannot Be Rolled Back
Some statements cannot be rolled back. In general, these include data definition language (DDL) statements, such as those that create or drop databases, those that create, drop, or alter tables or stored routines.
You should design your transactions not to include such statements. If you issue a statement early in a transaction that cannot be rolled back, and then another statement later fails, the full effect of the transaction cannot be rolled back in such cases by issuing a ROLLBACK statement.
https://dev.mysql.com/doc/refman/8.0/en/cannot-roll-back.html
All three of the statements above can be rolled back in SQL Server because all of them are fully logged in the transaction log. See this SO answer for more information, and this blog for a detailed explanation with examples.
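The same point can be sketched with Python's stdlib sqlite3 module, another engine with transactional DDL (an illustration, not SQL Server itself): both a DELETE and a DROP TABLE issued inside an explicit transaction are undone by ROLLBACK:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None   # manage transactions explicitly
cur = con.cursor()
cur.execute("CREATE TABLE tbl_ankit(name TEXT)")
cur.executemany("INSERT INTO tbl_ankit VALUES (?)",
                [("ankit",), ("ankur",), ("arti",)])

cur.execute("BEGIN")
cur.execute("DELETE FROM tbl_ankit WHERE name = 'ankit'")
cur.execute("ROLLBACK")
print(cur.execute("SELECT count(*) FROM tbl_ankit").fetchone()[0])  # 3 -- delete undone

cur.execute("BEGIN")
cur.execute("DROP TABLE tbl_ankit")
cur.execute("ROLLBACK")
print(cur.execute("SELECT count(*) FROM tbl_ankit").fetchone()[0])  # 3 -- table restored
```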
DELETE (DML)
Deletes rows (tuples) from a table; with a WHERE clause you can delete one or more specific rows.
Examples:
delete from emp;              -- deletes every row in the table
delete from emp where id = 2; -- deletes a single row
DROP (DDL)
Deletes the whole table, structure and data. Because it is so drastic, access to this command is usually restricted to the database administrator (DBA).
Example:
drop table emp1;              -- removes the table and its structure entirely
TRUNCATE (DDL)
Deletes all rows (tuples) of the table.
Example:
truncate table emp1;
Question 1: If both DELETE and TRUNCATE can remove all rows, what is the difference?
DELETE is a DML command and TRUNCATE is DDL. TRUNCATE always removes every row because it cannot take a WHERE clause, while DELETE accepts a WHERE clause and can remove one or more specific rows.
Question 2: Why is TRUNCATE classed as DDL when it operates on data, while DELETE, which also operates on data, is DML?
After TRUNCATE removes the rows you can never ROLLBACK the deleted data, because TRUNCATE performs an implicit COMMIT. With DELETE, the change is written to the log (backend file), so the data can be rolled back if it was deleted by mistake. Remember: with DELETE, ROLLBACK is only possible before COMMIT.

Why should we use rollback in sql explicitly?

I'm using PostgreSQL 9.3
I have one misunderstanding about transactions and how they work. Suppose we wrapped some SQL operator within a transaction like the following:
BEGIN;
insert into tbl (name, val) VALUES('John', 'Doe');
insert into tbl (name, val) VALUES('John', 'Doee');
COMMIT;
If something goes wrong the transaction will automatically be rolled back. Taking that into account, I can't see when we should use ROLLBACK explicitly. Could you give an example of when it's necessary?
In PostgreSQL the transaction is not automatically rolled back on error.
It is set to the aborted state, where further commands will fail with an error until you roll the transaction back.
Observe:
regress=> BEGIN;
BEGIN
regress=> LOCK TABLE nosuchtable;
ERROR: relation "nosuchtable" does not exist
regress=> SELECT 1;
ERROR: current transaction is aborted, commands ignored until end of transaction block
regress=> ROLLBACK;
ROLLBACK
This is important, because it prevents you from accidentally executing half a transaction. Imagine if PostgreSQL automatically rolled back, allowing new implicit transactions to occur, and you tried to run the following sequence of statements:
BEGIN;
INSERT INTO archive_table SELECT * FROM current_tabble;
DELETE FROM current_table;
COMMIT;
PostgreSQL will abort the transaction when it sees the typo current_tabble. So the DELETE will never happen - all statements get ignored after the error, and the COMMIT is treated as a ROLLBACK for an aborted transaction:
regress=> BEGIN;
BEGIN
regress=> SELECT typo;
ERROR: column "typo" does not exist
regress=> COMMIT;
ROLLBACK
If it instead automatically rolled the transaction back, it'd be like you ran:
BEGIN;
INSERT INTO archive_table SELECT * FROM current_tabble;
ROLLBACK; -- automatic
BEGIN; -- automatic
DELETE FROM current_table;
COMMIT; -- automatic
... which, needless to say, would probably make you quite upset.
Other uses for explicit ROLLBACK are manual modification and test cases:
Do some changes to the data (UPDATE, DELETE ...).
Run SELECT statements to check results of data modification.
Do ROLLBACK if results are not as expected.
In PostgreSQL you can do this even with DDL statements (CREATE TABLE, ...).
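The modify-check-rollback pattern above looks like this in a minimal sketch using Python's stdlib sqlite3 module (the table and values are hypothetical, chosen for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None   # explicit BEGIN/COMMIT/ROLLBACK
cur = con.cursor()
cur.execute("CREATE TABLE prices(item TEXT, price REAL)")
cur.execute("INSERT INTO prices VALUES ('widget', 10.0)")

cur.execute("BEGIN")
cur.execute("UPDATE prices SET price = price * 2")   # oops: meant a 10% increase
new_price = cur.execute("SELECT price FROM prices").fetchone()[0]
if new_price > 11.0:        # result not as expected -> undo the change
    cur.execute("ROLLBACK")
else:
    cur.execute("COMMIT")
print(cur.execute("SELECT price FROM prices").fetchone()[0])  # 10.0 -- update rolled back
```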

Do changes made within one transaction "see" each other?

Suppose I do the following set of SQL queries (pseudocode) in a table with only one column CITY:
BEGIN TRANSACTION;
INSERT INTO MyTable VALUES( 'COOLCITY' );
SELECT * FROM MyTable WHERE ALL;
COMMIT TRANSACTION;
is the SELECT guaranteed to return COOLCITY?
Yes.
The INSERT operation takes an exclusive (X) lock on at least the newly added row. This lock is not released until the end of the transaction, preventing a concurrent transaction from deleting or updating the row.
A transaction is not blocked by its own locks so the SELECT would return COOLCITY.
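The same guarantee holds in any engine with transactional semantics; a minimal sketch with Python's stdlib sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None   # explicit BEGIN/COMMIT
cur = con.cursor()
cur.execute("CREATE TABLE MyTable(city TEXT)")

cur.execute("BEGIN")
cur.execute("INSERT INTO MyTable VALUES ('COOLCITY')")
rows = cur.execute("SELECT * FROM MyTable").fetchall()  # same-transaction read
cur.execute("COMMIT")
print(rows)  # [('COOLCITY',)] -- the uncommitted insert is visible to its own transaction
```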