PL/SQL Oracle Stored Procedure loop structure

Just wondering whether the way I put COMMIT in the code block below is appropriate or not. Should I commit when the loop finishes, after each insert statement, or after the IF/ELSE statement?
FOR VAL1 IN (SELECT A.* FROM TABLE_A A) LOOP
    IF VAL1.QTY >= 0 THEN
        INSERT INTO TEMP_TABLE VALUES ('MORE OR EQUAL THAN 0');
        COMMIT; /*<-- Should I put this here?*/
        INSERT INTO AUDIT_TABLE VALUES ('DATA INSERTED >= 0');
        COMMIT; /*<-- Should I put this here too?*/
    ELSE
        INSERT INTO TEMP_TABLE VALUES ('0');
        COMMIT; /*<-- Should I put this here too?*/
        INSERT INTO AUDIT_TABLE VALUES ('DATA INSERTED IS 0');
        COMMIT; /*<-- Should I put this here too?*/
    END IF;
    /*Or put commit here?*/
END LOOP;
/*Or here??*/

Generally, committing inside a loop is not a good idea, especially after every DML statement in that loop. Doing so forces Oracle (LGWR) to write to the redo log files more often, and you may find yourself in a situation where other sessions hang because of the log file sync wait event. You may also face ORA-01555 because undo segments get overwritten more often.
Divide your DML into logical units of work (transactions) and commit when that unit of work is done: not before, not too late, and never in the middle of a transaction. This allows you to keep your database in a consistent state. If, for example, two insert statements form one unit of work (one transaction), it makes sense to commit or roll them back together, not separately.
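Applying that to the question's pair of inserts, a minimal sketch of one unit of work (treating the TEMP_TABLE and AUDIT_TABLE rows as inseparable) might look like this:
BEGIN
    INSERT INTO TEMP_TABLE VALUES ('MORE OR EQUAL THAN 0');
    INSERT INTO AUDIT_TABLE VALUES ('DATA INSERTED >= 0');
    COMMIT;          -- both inserts succeed or neither does
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;    -- undo the whole unit of work together
        RAISE;
END;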
So, generally, you should commit as infrequently as possible. If you have to commit inside a loop, introduce some threshold. For instance, issue a commit after, let's say, every 150 rows:
declare
    l_commit_rows number := 0;
begin
    for i in (select * from some_table)
    loop
        l_commit_rows := l_commit_rows + 1;
        insert into some_table(..) values(...);
        if mod(l_commit_rows, 150) = 0
        then
            commit;
        end if;
    end loop;
    -- commit the rest
    commit;
end;

Committing inside the loop is rarely appropriate; say your insert into TEMP_TABLE succeeds but your insert into AUDIT_TABLE fails. You then don't know where you are at all. Additionally, the extra commits will increase the time the operation takes.
It would be more normal to do everything within a single transaction; that is, remove the LOOP and perform your inserts in a single statement. This can be done with a multi-table insert and would look something like this:
insert all
    when qty >= 0 then
        into temp_table values ('MORE OR EQUAL THAN 0')
        into audit_table values ('DATA INSERTED >= 0')
    else
        into temp_table values ('0')
        into audit_table values ('DATA INSERTED IS 0')
select qty from table_a;
(A conditional multi-table insert must begin with INSERT ALL or INSERT FIRST, and the WHEN conditions refer to columns of the SELECT list, hence qty rather than a.qty.)
A simple rule is to not commit in the middle of an action; you need to be able to tell exactly where you were if you have to restart the operation. That normally means going back to the beginning, but it doesn't have to. For instance, if you were to place your COMMIT inside your loop but outside the IF statement, then you would know that every committed row has been fully handled. You would, however, have to record somewhere that each row has been completed, or use your SQL statement to determine whether a row needs to be re-evaluated; see the sketch below.
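A sketch of that restartable pattern, assuming (hypothetically) that TABLE_A has a PROCESSED flag column and an ID primary key, neither of which appears in the question:
FOR VAL1 IN (SELECT A.* FROM TABLE_A A WHERE A.PROCESSED = 'N') LOOP
    IF VAL1.QTY >= 0 THEN
        INSERT INTO TEMP_TABLE VALUES ('MORE OR EQUAL THAN 0');
        INSERT INTO AUDIT_TABLE VALUES ('DATA INSERTED >= 0');
    ELSE
        INSERT INTO TEMP_TABLE VALUES ('0');
        INSERT INTO AUDIT_TABLE VALUES ('DATA INSERTED IS 0');
    END IF;
    UPDATE TABLE_A SET PROCESSED = 'Y' WHERE ID = VAL1.ID;  -- record completion
    COMMIT;  -- this row is fully handled; a restart safely skips it
END LOOP;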

If you put a commit after each insert statement, the database commits every row as it is inserted. If you put the commit after the IF statement ends, it commits once per loop iteration, i.e. after each pair of inserts. If the commit is placed after the loop, everything is committed once, after all rows have been inserted.
Committing after the loop should also be faster, as it commits the data in bulk; but if your loop encounters an error (say, after 50 rows have been processed), those 50 rows won't be inserted either.
So, depending on your requirements, you can commit either after the IF or after the loop; a sketch of the latter follows.
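For completeness, a minimal sketch of the all-or-nothing variant (commit after the loop), reusing the question's tables:
BEGIN
    FOR VAL1 IN (SELECT A.* FROM TABLE_A A) LOOP
        IF VAL1.QTY >= 0 THEN
            INSERT INTO TEMP_TABLE VALUES ('MORE OR EQUAL THAN 0');
            INSERT INTO AUDIT_TABLE VALUES ('DATA INSERTED >= 0');
        ELSE
            INSERT INTO TEMP_TABLE VALUES ('0');
            INSERT INTO AUDIT_TABLE VALUES ('DATA INSERTED IS 0');
        END IF;
    END LOOP;
    COMMIT;          -- single commit: all rows or none
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;    -- an error mid-loop leaves no partial inserts
        RAISE;
END;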

Related

Rollback under condition in MariaDB

I have a transaction that reduces a money variable in a loop; if the variable drops below 0, the amount should return to its value from before the transaction. How can I appropriately use ROLLBACK in MariaDB in this case?
---edit
I have something like the following, and it doesn't work. Look at the lines in IF(budget<0): if the money goes below 0 after some, but not all, of the iterations have been made and saved to the temp table, the table still shows those rows.
BEGIN
    DECLARE temppesel text;
    DECLARE tempsalary int;
    DECLARE budget int DEFAULT cash;
    DECLARE done bool DEFAULT false;
    DECLARE occ CURSOR FOR (SELECT pesel, pensja FROM pracownicy WHERE zawod=occupation);
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = true;
    START TRANSACTION;
    DROP TABLE IF EXISTS temp;
    CREATE TABLE temp ( Result text );
    OPEN occ;
    occ : LOOP
        FETCH occ INTO temppesel, tempsalary;
        SET budget = budget - tempsalary;
        IF(done) THEN
            LEAVE occ;
        END IF;
        IF(budget<0) THEN
            ROLLBACK;
            LEAVE occ;
        END IF;
        INSERT INTO temp VALUES (concat('********',substr(temppesel,9,3), ', wyplacono'));
    END LOOP;
    CLOSE occ;
    SELECT * FROM temp;
    DROP TABLE temp;
    COMMIT;
END
I believe that the CREATE TABLE statements are causing the transaction to be committed. Here is a list of commands that cause an implicit COMMIT.
As the aforementioned link describes, you can either move the START TRANSACTION statement after the DROP and CREATE commands or use the CREATE TEMPORARY TABLE syntax to create a temporary table:
CREATE TEMPORARY TABLE temp ( Result text );
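Alternatively, a minimal sketch of the first option, reordering the original procedure so the implicit commits from the DDL happen before the transaction starts:
DROP TABLE IF EXISTS temp;           -- causes an implicit COMMIT...
CREATE TABLE temp ( Result text );   -- ...and so does this
START TRANSACTION;                   -- so start the transaction only afterwards
-- cursor loop, inserts, and the conditional ROLLBACK stay as before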
BEGIN;
    do some SQL
Loop:
    do some SQL
    if something is wrong, ROLLBACK and exit the loop and transaction
    do some SQL
    if something, go back to Loop
    do some SQL
COMMIT;
That is, let ROLLBACK undo everything since the BEGIN.
More
Now that the SP is visible...
What engine is temp? If it is MyISAM, then it is not rolled back. Check with SHOW VARIABLES LIKE 'default_storage_engine';.
Please don't use occ for two different things (a cursor and a loop label); it confuses the reader.
Do you want the output to be part of the rows of pracownicy when the budget is blown? Or do you want no rows?
If you have multiple connections doing the same thing, there is a serious problem -- temp is visible to all connections, and they could step on each other. Change to CREATE TEMPORARY TABLE temp ...
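Combining those two points, something like:
CREATE TEMPORARY TABLE temp ( Result text ) ENGINE=InnoDB;  -- transactional, and private to this connection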
However, with the pre-test (below), you can completely avoid the use of temp. First test for its need, then (if needed) simply do a single SELECT for all the rows.
If you want nothing, then a simple test like the following would pre-test whether the budget will be exceeded, thereby obviating the need for testing in the loop:
..
IF ( SELECT SUM(pensja)
       FROM pracownicy
      WHERE zawod = occupation ) > budget THEN
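Fleshed out a little (the SIGNAL error message is illustrative, not from the original answer), the whole procedure body could shrink to something like:
IF ( SELECT SUM(pensja) FROM pracownicy WHERE zawod = occupation ) > budget THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Budget exceeded, nothing paid out';
END IF;
-- budget is sufficient: one SELECT replaces the cursor loop and the temp table
SELECT concat('********', substr(pesel,9,3), ', wyplacono')
  FROM pracownicy
 WHERE zawod = occupation;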

Count(*) not working properly

I created the trigger A1 so that an article with a certain type, namely 'Bert', cannot be added more than once, and it can have only 1 in stock.
However, although I created the trigger, I can still add an article with the type 'Bert'. Somehow the count returns 0, but when I run the same SQL statement on its own, it returns the correct number. It also starts counting properly if I drop the trigger and re-create it. Any ideas what might be going wrong?
CREATE OR REPLACE TRIGGER A1 BEFORE INSERT ON mytable
FOR EACH ROW
DECLARE
    l_count NUMBER;
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    SELECT COUNT(*) INTO l_count FROM mytable WHERE article = :new.article;
    dbms_output.put_line('Count: ' || l_count);
    IF l_count > 0 THEN
        IF (:new.TYPEB = 'Bert') THEN
            dbms_output.put_line('article already exists!');
            ROLLBACK;
        END IF;
    ELSIF (:new.TYPEB = 'Bert' AND :new.stock_count > 1) THEN
        dbms_output.put_line('stock cannot have more than 1 of this article with type Bert');
        ROLLBACK;
    END IF;
END;
This is the insert statement I use:
INSERT INTO mytable VALUES('Chip',1,9,1,'Bert');
A couple of points. First, you are misusing the autonomous transaction pragma. It is meant for separate transactions that you need to commit or roll back independently of the main transaction. You are using it to roll back the main transaction -- and you never commit if there is no error.
And those "unforeseen consequences" someone mentioned? One of them is that your count always returns 0. So remove the pragma, both because it is being misused and so the count will return a proper value.
Another thing: don't put commits or rollbacks inside triggers. Raise an error and let the calling code do what it needs to do. I know the rollbacks were there because of the pragma; just don't forget to remove them when you remove the pragma.
The following trigger works for me:
CREATE OR REPLACE TRIGGER trg_mytable_biu
BEFORE INSERT OR UPDATE ON mytable
FOR EACH ROW
WHEN (NEW.TYPEB = 'Bert') -- Don't even execute unless this is Bert
DECLARE
    L_COUNT NUMBER;
BEGIN
    SELECT COUNT(*) INTO L_COUNT
      FROM MYTABLE
     WHERE ARTICLE = :NEW.ARTICLE
       AND TYPEB = :NEW.TYPEB;
    IF L_COUNT > 0 THEN
        RAISE_APPLICATION_ERROR( -20001, 'Bert already exists!' );
    ELSIF :NEW.STOCK_COUNT > 1 THEN
        RAISE_APPLICATION_ERROR( -20001, 'Can''t insert more than one Bert!' );
    END IF;
END;
However, it's not a good idea for a trigger on a table to separately access that table. Usually the system won't even allow it (the mutating-table error, ORA-04091) -- this trigger won't execute at all if changed to an AFTER trigger. If it is allowed to execute, one can never be sure of the results obtained -- as you already found out. Actually, I'm a little surprised the trigger above works. I would feel uneasy using it in a real database.
The best option when a trigger must access the target table is to hide the table behind a view and write an "instead of" trigger on the view. That trigger can access the table all it wants.
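A rough sketch of that view-plus-trigger approach; the view name, trigger name, and the placeholder columns col2/col3 are invented here, since the question never shows the table's full column list:
CREATE OR REPLACE VIEW mytable_v AS SELECT * FROM mytable;

CREATE OR REPLACE TRIGGER trg_mytable_v_ioi
INSTEAD OF INSERT ON mytable_v
FOR EACH ROW
DECLARE
    l_count NUMBER;
BEGIN
    -- An INSTEAD OF trigger may query the base table freely
    SELECT COUNT(*) INTO l_count
      FROM mytable
     WHERE article = :NEW.article AND typeb = :NEW.typeb;
    IF :NEW.typeb = 'Bert' AND (l_count > 0 OR :NEW.stock_count > 1) THEN
        RAISE_APPLICATION_ERROR( -20001, 'Only one Bert allowed!' );
    END IF;
    INSERT INTO mytable
    VALUES (:NEW.article, :NEW.col2, :NEW.col3, :NEW.stock_count, :NEW.typeb);
    -- col2/col3 stand in for the real column names, which aren't shown in the question
END;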
You need to do an AFTER trigger, not a BEFORE trigger. Doing a count(*) "BEFORE" the insert occurs results in zero rows because the data hasn't been inserted yet.

Why should we use rollback in sql explicitly?

I'm using PostgreSQL 9.3
I have one misunderstanding about transactions and how they work. Suppose we wrapped some SQL operator within a transaction like the following:
BEGIN;
insert into tbl (name, val) VALUES('John', 'Doe');
insert into tbl (name, val) VALUES('John', 'Doee');
COMMIT;
If something goes wrong the transaction will automatically be rolled back. Taking that into account, I can't see when we should use ROLLBACK explicitly. Could you give an example of when it's necessary?
In PostgreSQL the transaction is not automatically rolled back on error.
It is set to the aborted state, where further commands will fail with an error until you roll the transaction back.
Observe:
regress=> BEGIN;
BEGIN
regress=> LOCK TABLE nosuchtable;
ERROR: relation "nosuchtable" does not exist
regress=> SELECT 1;
ERROR: current transaction is aborted, commands ignored until end of transaction block
regress=> ROLLBACK;
ROLLBACK
This is important, because it prevents you from accidentally executing half a transaction. Imagine if PostgreSQL automatically rolled back, allowing new implicit transactions to occur, and you tried to run the following sequence of statements:
BEGIN;
INSERT INTO archive_table SELECT * FROM current_tabble;
DELETE FROM current_table;
COMMIT;
PostgreSQL will abort the transaction when it sees the typo current_tabble. So the DELETE will never happen - all statements get ignored after the error, and the COMMIT is treated as a ROLLBACK for an aborted transaction:
regress=> BEGIN;
BEGIN
regress=> SELECT typo;
ERROR: column "typo" does not exist
regress=> COMMIT;
ROLLBACK
If it instead automatically rolled the transaction back, it'd be like you ran:
BEGIN;
INSERT INTO archive_table SELECT * FROM current_tabble;
ROLLBACK; -- automatic
BEGIN; -- automatic
DELETE FROM current_table;
COMMIT; -- automatic
... which, needless to say, would probably make you quite upset.
Other uses for explicit ROLLBACK are manual modification and test cases:
Do some changes to the data (UPDATE, DELETE ...).
Run SELECT statements to check results of data modification.
Do ROLLBACK if results are not as expected.
In PostgreSQL you can do this even with DDL statements (CREATE TABLE, ...).
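For instance, a small sketch of transactional DDL (the table name is invented):
BEGIN;
CREATE TABLE scratch (id int);    -- DDL inside the transaction
INSERT INTO scratch VALUES (1);
SELECT * FROM scratch;            -- works: one row
ROLLBACK;
-- the table is gone again; SELECT * FROM scratch would now fail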

PLSQL - Fastest way to check for dups before insert

(Using Oracle) I have a table with one column (myCol) which is the primary key.
When doing an insert, is it faster to check with a select statement before doing the insert or just write the insert and let error handling check it?
So would this be faster?
BEGIN
SELECT count(*) INTO v_count FROM myTbl WHERE myCol = v_newVal;
IF v_count = 0 THEN
INSERT INTO myTbl (myCol) VALUES (v_newVal);
END IF;
END;
or this?
BEGIN
INSERT INTO myTbl (myCol) VALUES (v_newVal);
EXCEPTION WHEN DUP_VAL_ON_INDEX THEN
null;
END;
Thanks!
In terms of performance, certainly the second option is better.
But, more importantly, the first option has a bug - it will fail in cases where more than one session tries to insert the same value at the same time - one of them will succeed, the other session will wait for the first to commit and then raise DUP_VAL_ON_INDEX (which you haven't handled).
I think you could try MERGE. Typically you should avoid COUNT(*) for duplicate checking. There is also a hint you can use to ignore duplicates; this page gives a nice example:
http://guyharrison.squarespace.com/blog/2010/1/1/the-11gr2-ignore_row_on_dupkey_index-hint.html
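For reference, a minimal sketch of the MERGE alternative, reusing the question's table and variable names:
MERGE INTO myTbl t
USING (SELECT v_newVal AS myCol FROM dual) s
   ON (t.myCol = s.myCol)
 WHEN NOT MATCHED THEN
      INSERT (myCol) VALUES (s.myCol);  -- inserts only if no matching row exists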

Do changes made within one transaction "see" each other?

Suppose I do the following set of SQL queries (pseudocode) in a table with only one column CITY:
BEGIN TRANSACTION;
INSERT INTO MyTable VALUES( 'COOLCITY' );
SELECT * FROM MyTable WHERE ALL;
COMMIT TRANSACTION;
is the SELECT guaranteed to return COOLCITY?
Yes.
The INSERT operation takes out an exclusive (X) lock on at least the newly added row. This won't be released until the end of the transaction, which prevents a concurrent transaction from deleting or updating this row.
A transaction is not blocked by its own locks so the SELECT would return COOLCITY.