Why should we use ROLLBACK in SQL explicitly?

I'm using PostgreSQL 9.3
I have one misunderstanding about transactions and how they work. Suppose we wrapped some SQL operator within a transaction like the following:
BEGIN;
insert into tbl (name, val) VALUES('John', 'Doe');
insert into tbl (name, val) VALUES('John', 'Doee');
COMMIT;
If something goes wrong, the transaction will automatically be rolled back. Taking that into account, I can't understand when we should use ROLLBACK explicitly. Could you give an example where it's necessary?

In PostgreSQL a transaction is not automatically rolled back on error.
It is set to an aborted state, in which further commands fail with an error until you roll the transaction back.
Observe:
regress=> BEGIN;
BEGIN
regress=> LOCK TABLE nosuchtable;
ERROR: relation "nosuchtable" does not exist
regress=> SELECT 1;
ERROR: current transaction is aborted, commands ignored until end of transaction block
regress=> ROLLBACK;
ROLLBACK
This is important, because it prevents you from accidentally executing half a transaction. Imagine if PostgreSQL automatically rolled back, allowing new implicit transactions to occur, and you tried to run the following sequence of statements:
BEGIN;
INSERT INTO archive_table SELECT * FROM current_tabble;
DELETE FROM current_table;
COMMIT;
PostgreSQL will abort the transaction when it sees the typo current_tabble. So the DELETE will never happen - all statements get ignored after the error, and the COMMIT is treated as a ROLLBACK for an aborted transaction:
regress=> BEGIN;
BEGIN
regress=> SELECT typo;
ERROR: column "typo" does not exist
regress=> COMMIT;
ROLLBACK
If it instead automatically rolled the transaction back, it'd be like you ran:
BEGIN;
INSERT INTO archive_table SELECT * FROM current_tabble;
ROLLBACK; -- automatic
BEGIN; -- automatic
DELETE FROM current_table;
COMMIT; -- automatic
... which, needless to say, would probably make you quite upset.
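To make the archive scenario above concrete, here is a minimal sketch in Python using the standard-library sqlite3 module. SQLite stands in for PostgreSQL here (SQLite does not abort the transaction on error the way PostgreSQL does), but the all-or-nothing outcome of keeping both statements in one transaction with an explicit rollback is the same: the typo raises an exception before the DELETE ever runs, so nothing is half-applied.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE current_table (id INTEGER)")
conn.execute("CREATE TABLE archive_table (id INTEGER)")
conn.execute("INSERT INTO current_table VALUES (1), (2)")
conn.commit()

try:
    # The typo "current_tabble" raises OperationalError, so the
    # DELETE below is never reached
    conn.execute("INSERT INTO archive_table SELECT * FROM current_tabble")
    conn.execute("DELETE FROM current_table")
    conn.commit()
except sqlite3.OperationalError:
    conn.rollback()  # explicit ROLLBACK: nothing archived, nothing deleted

rows = conn.execute("SELECT COUNT(*) FROM current_table").fetchone()[0]
print(rows)  # -> 2: both rows survived the failed transaction
```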

Other uses for explicit ROLLBACK are manual modification and test cases:
Do some changes to the data (UPDATE, DELETE ...).
Run SELECT statements to check results of data modification.
Do ROLLBACK if results are not as expected.
In PostgreSQL you can do this even with DDL statements (CREATE TABLE, ...), since DDL is transactional there.
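The modify-check-rollback workflow in the steps above can be sketched in Python with the standard-library sqlite3 module (the tbl table is borrowed from the question; SQLite stands in for PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (name TEXT, val TEXT)")
conn.execute("INSERT INTO tbl VALUES ('John', 'Doe')")
conn.commit()

# Step 1: change some data inside a transaction
conn.execute("UPDATE tbl SET val = 'Doee' WHERE name = 'John'")

# Step 2: run a SELECT to check the result while the transaction is open
val = conn.execute("SELECT val FROM tbl WHERE name = 'John'").fetchone()[0]

# Step 3: not what we wanted, so roll it back
if val != "Doe":
    conn.rollback()

val_after = conn.execute("SELECT val FROM tbl WHERE name = 'John'").fetchone()[0]
print(val_after)  # -> Doe: the update was undone
```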

How to use multiple transactions in Snowflake Task?

I have two ETL jobs running on a stream I've created on a table. I need to run both on the same stream data, and I read that in order to do so the DML statements (in my case MERGE statements) need to be wrapped in a transaction and committed at the end. I can't seem to be able to do that in a task, though; I think I'm getting a semicolon wrong somewhere. This is what I've tried:
create or replace task my_task as
begin;
merge into my_table1 t using my_stream s on t.id=s.id when matched insert values (id, col1);
merge into my_table2 t using my_stream s on t.id=s.id when matched insert values (id, col2);
commit;
This is just an example, the merge statements do more complex stuff.
The script runs only up to the BEGIN if I use a semicolon, or I get an EOF error if I don't use one, even though there are multiple semicolons later in the script (so it tries to read past the COMMIT).
The task can call a stored procedure that contains the different statements within a transaction:
create procedure ...
as
$$
...
statement1;
BEGIN TRANSACTION;
statement2;
COMMIT;
statement3;
...
$$;
https://docs.snowflake.com/en/sql-reference/transactions.html

Ensure Successful INSERT before triggering DELETE

I am currently working on archiving a database and have come up with a simple approach. However, executing the script can run into errors, leading to a case where the INSERT is not successfully executed but the DELETE is. That means I would delete records from production before inserting them into the archive. Is there a way to ensure the DELETE statement is not executed unless the INSERT runs successfully?
INSERT INTO [archive].[dbo].[Table]
SELECT *
FROM [Production].[dbo].[Table]
WHERE TimeStamp < DATEADD(year,-2,SYSDATETIME())
DELETE FROM [Production].[dbo].[table]
WHERE TimeStamp < DATEADD(year,-2,SYSDATETIME())
As an alternative to an explicit transaction, one can specify an OUTPUT clause on the DELETE to perform the operation as a single autocommit transaction. This will ensure all-or-none behavior.
DELETE [Production].[dbo].[Table]
OUTPUT DELETED.*
INTO [archive].[dbo].[Table]
WHERE TimeStamp < DATEADD(year,-2,SYSDATETIME());
Also, consider an explicit column list instead of *.
Typically, you would create a transaction, making the operation into a single atomic unit of work.
BEGIN TRAN
BEGIN TRY
    INSERT INTO [archive].[dbo].[Table]
    SELECT * FROM [Production].[dbo].[Table]
    WHERE TimeStamp < DATEADD(year,-2,SYSDATETIME())

    DELETE FROM [Production].[dbo].[table]
    WHERE TimeStamp < DATEADD(year,-2,SYSDATETIME())

    COMMIT
END TRY
BEGIN CATCH
    ROLLBACK
END CATCH
Basically, the code says:
BEGIN TRAN - start a group of commands that either all complete or none complete
BEGIN TRY - the commands to try
COMMIT - if we reach here, everything worked OK
BEGIN CATCH / ROLLBACK - if an error occurs, roll back the commands (basically, ignore them)
You should probably add some status to indicate success or failure, and maybe capture the error in the BEGIN CATCH block, but this should give you enough to get started
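The same all-or-nothing archive pattern can be sketched in Python with the standard-library sqlite3 module. SQLite stands in for SQL Server here, and the table names and numeric cutoff are invented stand-ins for the [archive]/[Production] tables and the DATEADD expression:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE production (id INTEGER, ts INTEGER);
    CREATE TABLE archive (id INTEGER, ts INTEGER);
    INSERT INTO production VALUES (1, 100), (2, 200), (3, 900);
""")

CUTOFF = 500  # stand-in for DATEADD(year, -2, SYSDATETIME())

try:
    # Both statements run in one transaction: either both take
    # effect, or (on any error) neither does
    conn.execute("INSERT INTO archive SELECT * FROM production WHERE ts < ?",
                 (CUTOFF,))
    conn.execute("DELETE FROM production WHERE ts < ?", (CUTOFF,))
    conn.commit()
except sqlite3.Error:
    conn.rollback()

archived = conn.execute("SELECT COUNT(*) FROM archive").fetchone()[0]
remaining = conn.execute("SELECT COUNT(*) FROM production").fetchone()[0]
print(archived, remaining)  # -> 2 1
```

Note that computing the cutoff once, as above, also sidesteps the subtle issue of evaluating SYSDATETIME() twice in the original two-statement script.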
A second approach is to modify your DELETE statement a bit:
DELETE FROM [Production].[dbo].[table]
WHERE TimeStamp IN (SELECT TimeStamp FROM [archive].[dbo].[Table])
(Note that this assumes TimeStamp uniquely identifies the rows that were archived.)
Good luck

Oracle PL/SQL How to rollback the newly inserted row

create or replace trigger trig_redeem_coffee
before insert
on buycoffee
for each row
declare
CID int;
customerPoint float;
pointNeeded float;
begin
select customer_id into CID
from purchase
where purchase_id = :new.purchase_id;
select total_points into customerPoint
from customer
where customer_id = CID;
pro_get_redeem_point (:new.coffee_ID, :new.redeem_quantity, pointNeeded);
if pointNeeded>customerPoint
then
rollback;
else
pointNeeded := -1*pointNeeded;
pro_update_point(CID, pointNeeded);
end if;
commit;
end;
/
The trigger can be created successfully, but when I insert into the buycoffee table (meeting the condition pointNeeded > customerPoint), I get an error saying it cannot roll back in a trigger. Is this a proper way to roll back a newly inserted row, or is there a better way to do it? (All procedures are built properly.)
You cannot COMMIT or ROLLBACK inside of a TRIGGER, unless it's an autonomous transaction.
Inside your TRIGGER, you should do whatever logic you wish to apply, but if you reach an error condition you should raise an application error (with RAISE_APPLICATION_ERROR) rather than ROLLBACK. That causes the INSERT statement that fired the TRIGGER to fail with a statement-level rollback, returning your transaction to the state it was in just before you executed the INSERT. At that point, you can evaluate the error and decide whether to roll back the entire transaction, retry the INSERT, or do something else.
More on autonomous transactions:
https://docs.oracle.com/database/121/CNCPT/transact.htm#GUID-C0C61571-5175-400D-AEFC-FDBFE4F87188
More on statement-level rollback:
https://docs.oracle.com/cd/B19306_01/server.102/b14220/transact.htm#i8072

Is it possible to rollback DELETE, DROP and TRUNCATE?

We are told we can roll back a DELETE query but not a TRUNCATE or DROP. Yet when I execute the queries, ROLLBACK succeeds for DELETE, DROP, and TRUNCATE alike. Why?
You can roll back DELETE, TRUNCATE, and DROP,
but BEGIN TRANSACTION must be used before executing the DELETE, DROP, or TRUNCATE query.
Here is example:
Create Database Ankit
Create Table Tbl_Ankit(Name varchar(11))
insert into tbl_ankit(name) values('ankit');
insert into tbl_ankit(name) values('ankur');
insert into tbl_ankit(name) values('arti');
Select * From Tbl_Ankit
/*======================For Delete==================*/
Begin Transaction
Delete From Tbl_Ankit where Name='ankit'
Rollback
Select * From Tbl_Ankit
/*======================For Truncate==================*/
Begin Transaction
Truncate Table Tbl_Ankit
Rollback
Select * From Tbl_Ankit
/*======================For Drop==================*/
Begin Transaction
Drop Table Tbl_Ankit
Rollback
Select * From Tbl_Ankit
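Transactional DDL is not unique to SQL Server; the DROP experiment above can also be reproduced with Python's standard-library sqlite3 module, where a DROP TABLE inside an explicit transaction can likewise be rolled back (table and row names are borrowed from the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_ankit (name TEXT)")
conn.execute("INSERT INTO tbl_ankit VALUES ('ankit')")
conn.commit()

# The sqlite3 module's implicit transaction handling does not cover DDL,
# so switch to autocommit and manage the transaction by hand
conn.isolation_level = None
conn.execute("BEGIN")
conn.execute("DROP TABLE tbl_ankit")
conn.execute("ROLLBACK")

# The table, and its row, survived the rolled-back DROP
rows = conn.execute("SELECT name FROM tbl_ankit").fetchall()
print(rows)  # -> [('ankit',)]
```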
For MySql:
13.3.2 Statements That Cannot Be Rolled Back
Some statements cannot be rolled back. In general, these include data definition language (DDL) statements, such as those that create or drop databases, those that create, drop, or alter tables or stored routines.
You should design your transactions not to include such statements. If you issue a statement early in a transaction that cannot be rolled back, and then another statement later fails, the full effect of the transaction cannot be rolled back in such cases by issuing a ROLLBACK statement.
https://dev.mysql.com/doc/refman/8.0/en/cannot-roll-back.html
All three of the statements above can be rolled back in SQL Server because all of them are fully recorded in the transaction log.
DELETE (DML)
Used to delete rows (tuples) of a table; it can delete one or multiple rows using a WHERE clause.
DELETE FROM emp;
This deletes all rows of the table.
DELETE FROM emp WHERE id = 2;
This deletes one row.

DROP (DDL)
This command deletes the whole structure of the table along with its data, which makes it a crucial command; access to it is typically restricted to the database administrator (DBA).
DROP TABLE emp1;
This removes the entire table, structure and all.

TRUNCATE (DDL)
Used to delete all rows (tuples) of a table.
TRUNCATE TABLE emp1;

Question 1: If we can delete all rows with both DELETE and TRUNCATE, what is the difference?
DELETE is a DML command and TRUNCATE is a DDL command. TRUNCATE always deletes all rows, because a WHERE clause cannot be used with it, while DELETE accepts a WHERE clause and can therefore delete one or more specific rows.

Question 2: Why is TRUNCATE classified as DDL even though it works on data, while DELETE, which also works on data, is DML?
When TRUNCATE deletes all the rows, the deleted data generally cannot be rolled back, because a COMMIT is issued implicitly. With DELETE, the change is written to the log, from which the data can be rolled back if it was deleted by mistake.
Remember: with DELETE, a ROLLBACK is only possible before the COMMIT.

PL/SQL Oracle Stored Procedure loop structure

Just wondering whether the way I put COMMIT in the code block is appropriate or not. Should I put it when the loop has finished, after each INSERT statement, or after the IF/ELSE statement?
FOR VAL1 IN (SELECT A.* FROM TABLE_A A) LOOP
IF VAL1.QTY >= 0 THEN
INSERT INTO TEMP_TABLE VALUES('MORE OR EQUAL THAN 0');
COMMIT; /*<-- Should I put this here?*/
INSERT INTO AUDIT_TABLE VALUES('DATA INSERTED >= 0');
COMMIT; /*<-- Should I put this here too?*/
ELSE
INSERT INTO TEMP_TABLE VALUES ('0');
COMMIT; /*<-- Should I put this here too?*/
INSERT INTO AUDIT_TABLE VALUES('DATA INSERTED IS 0');
COMMIT; /*<-- Should I put this here too?*/
END IF;
/*Or put commit here?*/
END LOOP;
/*Or here??*/
Generally, committing in a loop is not a good idea, especially after every DML statement in that loop. Doing so forces Oracle (LGWR) to write data to the redo log files, and you may find yourself in a situation where other sessions hang because of the log file sync wait event, or you may face ORA-01555 because undo segments are cleared more often.
Divide your DMLs into logical units of work (transactions) and commit when that unit of work is done, not before and not too late or in a middle of a transaction. This will allow you to keep your database in a consistent state. If, for example, two insert statements form a one unit of work(one transaction), it makes sense to commit or rollback them altogether not separately.
So, generally, you should commit as seldom as possible. If you have to commit in a loop, introduce a threshold; for instance, issue a commit after, say, every 150 rows:
declare
l_commit_rows number := 0;
begin
for i in (select * from some_table)
loop
l_commit_rows := l_commit_rows + 1;
insert into some_table(..) values(...);
if mod(l_commit_rows, 150) = 0
then
commit;
end if;
end loop;
-- commit the rest
commit;
end;
/
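The same batching pattern can be sketched in Python with the standard-library sqlite3 module (the table name and row count are invented for illustration); the principle of committing every N rows rather than per row carries over directly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (n INTEGER)")
conn.commit()

COMMIT_EVERY = 150  # same threshold as the PL/SQL example

commit_rows = 0
for n in range(400):  # stand-in for the driving cursor/query
    conn.execute("INSERT INTO target VALUES (?)", (n,))
    commit_rows += 1
    if commit_rows % COMMIT_EVERY == 0:
        conn.commit()  # commit a batch, not every single row

conn.commit()  # commit the remainder
total = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(total)  # -> 400
```

On a disk-backed SQLite database the per-row version would be noticeably slower too, since every commit forces a sync to disk.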
It is rarely appropriate; say your insert into TEMP_TABLE succeeds but your insert into AUDIT_TABLE fails. You then don't know where you are at all. Additionally, commits will increase the amount of time it takes to perform an operation.
It would be more normal to do everything within a single transaction; that is remove the LOOP and perform your inserts in a single statement. This can be done by using a multi-table insert and would look something like this:
insert all
when ( qty >= 0 ) then
into temp_table values ('MORE OR EQUAL THAN 0')
into audit_table values ('DATA INSERTED >= 0')
else
into temp_table values ('0')
into audit_table values ('DATA INSERTED IS 0')
select qty from table_a;
A simple rule is to not commit in the middle of an action; you need to be able to tell exactly where you were if you have to restart an operation. This normally means, go back to the beginning but doesn't have to. For instance, if you were to place your COMMIT inside your loop but outside the IF statement then you know that that has completed. You'd have to write back somewhere to tell you that this operation has been completed though or use your SQL statement to determine whether you need to re-evaluate that row.
If you put a COMMIT after each INSERT statement, the database will commit each row as it is inserted. The same happens if you put the COMMIT after the IF statement ends (so both approaches commit after every inserted row). If the COMMIT comes after the loop, everything is committed once, after all rows have been inserted.
Committing after the loop should also work faster, since it commits the data in bulk; but if your loop encounters an error (say, after 50 rows have been processed), those 50 rows won't be inserted either.
So, according to your requirements, you can commit either after the IF statement or after the loop.