I would like to know whether a rollback is required when a SQL exception (EXCEPTION WHEN OTHERS) is detected:
declare
    cursor c_test is
        select *
          from tesing;
begin
    for rec in c_test loop
        begin
            update test1 set test1.name=rec.name where test1.id=rec.id;
            IF sql%rowcount = 1 THEN
                commit;
            ELSIF sql%rowcount = 0 THEN
                dbms_output.put_line('No Rows Updated');
            else
                dbms_output.put_line('More than 1 row exists');
                rollback;
            END IF;
        exception when others then
            dbms_output.put_line(Exception');
            rollback;
        end;
end;
First, I'm assuming we can ignore the syntax errors (for example, there is no END LOOP, the dbms_output.put_line call is missing the first single quote, etc.)
As to whether it is necessary to roll back changes, it depends.
In general, you would not have interim commits in a loop. That is generally a poor architecture, both because it is much more costly in terms of I/O and elapsed time and because it makes it much harder to write restartable code. What happens, for example, if your SELECT statement selects 10 rows, you issue (and commit) 5 updates, and then the 6th update fails? The only way to be able to restart with the 6th row after you've fixed the exception would be to have a separate table where you stored (and updated) your code's progress. It also creates problems for any code that calls this block, which then has to handle the case that half the work was done (and committed) and the other half was not.
In general, you would only put transaction control statements in the outermost blocks of your code. Since a COMMIT or a ROLLBACK in a procedure commits or rolls back any work done in the session whether or not it was done by the procedure, you want to be very cautious about adding transaction control statements. You generally want to let the caller make the determination about whether to commit or roll back. Of course, that only goes so far-- eventually, you're going to be in the outer-most block that will never be called from some other routine and you need to have appropriate transaction control-- but it's something to be very wary about if you're writing code that might be reused.
In this case, since you have interim commits, the only effect of your ROLLBACK would be that if the first update statement failed, the work that had been done in your session prior to calling this block would be rolled back. The interim commit would commit those previous changes if the first update statement was successful. That's the sort of side effect that people worry about when they talk about why interim commits and transaction control in reusable blocks are problematic.
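To illustrate, here is a minimal sketch of the same loop with transaction control moved to the outermost level (the table and column names are taken from the question; whether the final COMMIT belongs here at all depends on whether this block really is the top of the call stack):

declare
    cursor c_test is
        select *
          from tesing;
begin
    for rec in c_test loop
        update test1
           set test1.name = rec.name
         where test1.id = rec.id;

        if sql%rowcount = 0 then
            dbms_output.put_line('No rows updated for id ' || rec.id);
        elsif sql%rowcount > 1 then
            raise_application_error(-20001,
                'More than one row matched id ' || rec.id);
        end if;
    end loop;

    -- Commit only if this block is the outermost caller;
    -- otherwise leave the decision to the caller.
    commit;
exception
    when others then
        -- Undo this block's work (and any other pending work in the
        -- session), then re-raise so the caller sees the error.
        rollback;
        raise;
end;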
I have a somewhat unusual need for my transaction handling.
I am working on a mutation test framework for SQL Server. For this I need to run my tests inside a transaction, so that the database is always back in the state it started in when the tests finish.
However, I have the problem that users can write code inside the test procedures and may call ROLLBACK TRANSACTION, which may or may not be inside a (nested) transaction (savepoint).
At a high level it looks like this:
start transaction
initialize test
run test with user code
may or may not contain:
- start tran
- start tran savename
- commit tran
- commit tran savename
- rollback tran
- rollback tran savename
output testresults
rollback transaction
Is there a way to make sure I can at least always roll back to the initial state? I have to take into account that users can call stored procedures/triggers that may be nested and can all contain transaction statements. With all my solutions, the moment a user issues ROLLBACK TRAN in their test code they escape the transaction and not everything will be cleaned up.
What I want is that if a user calls rollback only their part of the transaction is rolled back and my transaction that I start before the initialization of the test is still intact.
If possible, I want to avoid forcing my users to use a transaction template that uses savepoints when a transaction already exists.
I do not think this is possible without any communication/rules for the user code. No matter what you do, if the user's code runs as many COMMITs as @@TRANCOUNT stands at that time, the transaction will be committed and there will be nothing you can do about that.
One way you could do this is to check/enforce that the user code, instead of issuing a bare COMMIT, uses IF @@TRANCOUNT >= 2 COMMIT. This makes sure the TRUE data commit can only be done by YOUR COMMIT command. Of course, since you never really want to commit, you just roll back at the end and it's over.
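As a rough sketch of that convention (the table name and the exact test steps here are invented; only the @@TRANCOUNT handling is the point):

BEGIN TRANSACTION;                              -- framework: @@TRANCOUNT is now 1

-- ... initialize test ...

-- User code follows the convention: never commit the outermost transaction.
BEGIN TRANSACTION;                              -- @@TRANCOUNT is now 2
INSERT INTO dbo.SomeTable (Id) VALUES (1);      -- hypothetical table
IF @@TRANCOUNT >= 2 COMMIT;                     -- only decrements @@TRANCOUNT to 1; nothing is durable yet

-- ... output test results ...

ROLLBACK TRANSACTION;                           -- framework: undoes everything, including the user's work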
You mention:
What I want is that if a user calls rollback only their part of the
transaction is rolled back
Please note that nested transactions are kind of a myth. Refer to this excellent article. Long story short: "nested" BEGIN TRANSACTIONs and COMMITs actually do nothing except change the value of the system variable @@TRANCOUNT, so that some organisation can be made across procedures.
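A small demonstration of that point, assuming nothing beyond a scratch temp table:

CREATE TABLE #t (Id int);

BEGIN TRANSACTION;          -- @@TRANCOUNT = 1
INSERT INTO #t VALUES (1);

BEGIN TRANSACTION;          -- "nested": @@TRANCOUNT = 2
INSERT INTO #t VALUES (2);
COMMIT;                     -- only decrements @@TRANCOUNT to 1; nothing is committed yet

ROLLBACK;                   -- rolls back BOTH inserts, not just the "inner" one

SELECT COUNT(*) FROM #t;    -- returns 0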
I don't think it is possible to roll back part of a transaction and keep the rest of it intact. The moment we roll back the transaction, the entire transaction gets rolled back.
As pointed out by several others, nested transactions are not supported in SQL Server. I have decided to minimize the number of state-changing statements after the user's code, and to run that cleanup at the start of a test batch instead. This way it doesn't really matter that I can't roll back my own transaction, since the heavy lifting will be rolled back either by me or by the user.
I also decided to fail the test when the starting @@TRANCOUNT and the ending @@TRANCOUNT don't match up, so that no test can pass when there is something wrong with the user's transactions.
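A sketch of that check (the procedure name is invented; only the @@TRANCOUNT bookkeeping matters):

DECLARE @TranCountBefore int = @@TRANCOUNT;

EXEC dbo.UserTestProcedure;     -- hypothetical user test code

DECLARE @TranCountAfter int = @@TRANCOUNT;
IF @TranCountAfter <> @TranCountBefore
    RAISERROR('Test failed: @@TRANCOUNT changed from %d to %d.',
              16, 1, @TranCountBefore, @TranCountAfter);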
The current system however will still struggle with a user doing a COMMIT TRAN. That is a problem for another time.
I believe the actual need is to write test cases for T-SQL. If that is correct, then I feel you don't have to reinvent the wheel; start with the open source test framework tSQLt: https://tsqlt.org/
I have an autonomous trigger, but it only executes once in the same session and after that it does nothing:
CREATE OR REPLACE TRIGGER tdw_insert_unsus
BEFORE INSERT ON unsuscription_fact FOR EACH ROW
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
v_id_suscription SUSCRIPTION_FACT.ID_SUSCRIPTION%TYPE;
v_id_date_suscription SUSCRIPTION_FACT.ID_DATE_SUSCRIPTION%TYPE;
v_id_date_unsuscription SUSCRIPTION_FACT.ID_DATE_UNSUSCRIPTION%TYPE;
v_suscription DATE;
v_unsuscription DATE;
v_live_time SUSCRIPTION_FACT.LIVE_TIME%TYPE;
BEGIN
SELECT id_suscription, id_date_suscription
INTO v_id_suscription, v_id_date_suscription
FROM(
SELECT id_suscription, id_date_suscription
FROM suscription_fact
WHERE id_mno = :NEW.ID_MNO
AND id_provider = :NEW.ID_PROVIDER
AND ftp_service_id = :NEW.FTP_SERVICE_ID
AND msisdn = :NEW.MSISDN
AND id_date_unsuscription IS NULL
ORDER BY id_date_suscription DESC
)
WHERE ROWNUM = 1;
-- calculate time
v_unsuscription := to_date(:NEW.id_date_unsuscription,'yyyymmdd');
v_suscription := to_date(v_id_date_suscription,'yyyymmdd');
v_live_time := (v_unsuscription - v_suscription);
UPDATE suscription_fact SET id_date_unsuscription = :NEW.id_date_unsuscription,
id_time_unsuscription = :NEW.id_time_unsuscription, live_time = v_live_time
WHERE id_suscription = v_id_suscription;
COMMIT;
EXCEPTION
WHEN NO_DATA_FOUND THEN
ROLLBACK;
END;
/
If I insert values it works well the first or second time, but after that it does not work. However, if I log out of the session and log back in, it again works for the first or second insertion.
What is the problem? I am using Oracle 10g.
You're using an autonomous transaction to work around the fact that a trigger can not query its table itself. You've run into the infamous mutating table error and you have found that declaring the trigger as an autonomous transaction makes the error go away.
No luck for you though, this does not solve the problem at all:
First, any transaction logic is lost. You can't roll back the changes on the suscription_fact table: they are committed, while your main transaction is not and could be rolled back. So you've also lost your data integrity.
Second, the trigger cannot see the new row, because the new row hasn't been committed yet! Since the trigger runs in an independent transaction, it cannot see the uncommitted changes made by the main transaction: you will run into completely wrong results.
This is why you should never do any business logic in autonomous transactions. (There are legitimate applications, but they are almost entirely limited to logging/debugging.)
In your case you should do one of the following:
1. Update your logic so that it does not need to query your table (update suscription_fact only if the new row is more recent than the old value stored in id_date_unsuscription).
2. Forget about putting business logic in triggers, and use a procedure that updates all tables correctly, or use a view, because here we have a clear case of redundant data.
3. Use a workaround that actually works (by Tom Kyte).
I would strongly advise using (2) here. Don't use triggers to code business logic. They are hard to write without bugs and harder still to maintain. Using a procedure guarantees that all the relevant code is grouped in one place (a package or a procedure), easy to read and follow and without unforeseen consequences.
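For illustration, option (2) could look roughly like the sketch below: a procedure owns both changes, so they commit or roll back together with the caller's transaction. The parameter names are invented, the columns are the ones used in the trigger above, and the INSERT into unsuscription_fact (not shown) would live in the same procedure.

CREATE OR REPLACE PROCEDURE unsuscribe (
    p_id_mno                suscription_fact.id_mno%TYPE,
    p_id_provider           suscription_fact.id_provider%TYPE,
    p_ftp_service_id        suscription_fact.ftp_service_id%TYPE,
    p_msisdn                suscription_fact.msisdn%TYPE,
    p_id_date_unsuscription suscription_fact.id_date_unsuscription%TYPE,
    p_id_time_unsuscription suscription_fact.id_time_unsuscription%TYPE
) AS
BEGIN
    -- Close the most recent open subscription and compute live_time
    -- in one statement; no autonomous transaction, no mutating table.
    UPDATE suscription_fact sf
       SET sf.id_date_unsuscription = p_id_date_unsuscription,
           sf.id_time_unsuscription = p_id_time_unsuscription,
           sf.live_time = TO_DATE(p_id_date_unsuscription, 'yyyymmdd')
                          - TO_DATE(sf.id_date_suscription, 'yyyymmdd')
     WHERE sf.id_suscription =
           (SELECT id_suscription
              FROM (SELECT s2.id_suscription
                      FROM suscription_fact s2
                     WHERE s2.id_mno = p_id_mno
                       AND s2.id_provider = p_id_provider
                       AND s2.ftp_service_id = p_ftp_service_id
                       AND s2.msisdn = p_msisdn
                       AND s2.id_date_unsuscription IS NULL
                     ORDER BY s2.id_date_suscription DESC)
             WHERE ROWNUM = 1);

    -- The INSERT INTO unsuscription_fact goes here as well,
    -- and the caller decides when to COMMIT.
END unsuscribe;
/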
I have created an AFTER INSERT trigger.
Now, if an error occurs while executing the trigger, it should not affect the INSERT operation on the triggered table.
In one word: if any error occurs in the trigger, it should be ignored.
As I have used
BEGIN TRY
END TRY
BEGIN CATCH
END CATCH
But it gives the following error message, and the INSERT operation on the triggered table is rolled back:
An error was raised during trigger execution. The batch has been
aborted and the user transaction, if any, has been rolled back.
Interesting problem. By default, triggers are designed so that if they fail, they roll back the command that fired them. So whenever a trigger is executing there is an active transaction, whether or not there was an explicit BEGIN TRANSACTION on the outside. And BEGIN TRY inside the trigger alone will not help. Your best practice would be not to write any code in a trigger that could possibly fail - unless it is desired to also fail the firing statement.
In this situation, to suppress this behavior, there are some workarounds.
Option A (the ugly way):
Since a transaction is active at the beginning of the trigger, you can just COMMIT it and continue with your trigger commands:
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    COMMIT;
    ... do whatever trigger does
END;
Note that if there is an error in the trigger code this will still produce the error message, but the data in the Test1 table is safely inserted.
Option B (also ugly):
You can move your code from the trigger to a stored procedure. Then call that stored procedure from a wrapper SP that implements BEGIN TRY/CATCH, and finally call the wrapper SP from the trigger. It might be a bit tricky to move data from the INSERTED table around if the logic (which is now in the SP) needs it - probably using some temp tables.
SQLFiddle DEMO
You cannot, and any attempt to solve it is snake oil. No amount of TRY/CATCH or @@ERROR checking will work around the fundamental issue.
If you want to use the tight coupling of a trigger then you must buy into the lower availability induced by the coupling.
If you want to preserve the availability (i.e. have the INSERT succeed) then you must give up coupling (remove the trigger). You must do all the processing you were planning to do in the trigger in a separate transaction that starts after your INSERT has committed. A SQL Agent job that polls the table for newly inserted rows, a Service Broker activated procedure, or even an application-layer step are all going to fit the bill.
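As a rough sketch of the polling variant (the watermark table, its columns, and the job itself are all invented here), a SQL Agent job could periodically run something like:

-- Assumes Test1 has an ascending identity column Id and that
-- dbo.Test1Watermark remembers the last Id already processed.
DECLARE @LastProcessedId int, @MaxId int;

SELECT @LastProcessedId = LastProcessedId FROM dbo.Test1Watermark;
SELECT @MaxId = MAX(Id) FROM dbo.Test1;

IF @MaxId > @LastProcessedId
BEGIN
    BEGIN TRANSACTION;

    -- ... do whatever the trigger used to do, for rows with
    --     Id > @LastProcessedId AND Id <= @MaxId ...

    UPDATE dbo.Test1Watermark SET LastProcessedId = @MaxId;

    COMMIT;
END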
The accepted answer's option A gave me the following error: "The transaction ended in the trigger. The batch has been aborted.". I circumvented the problem by using the SQL below.
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    -- Stop errors inside the TRY block from dooming the whole transaction
    SET XACT_ABORT OFF
    BEGIN TRY
        SELECT [Column1] INTO #TableInserted FROM [inserted]
        EXECUTE sp_executesql N'INSERT INTO [Table]([Column1]) SELECT [Column1] FROM #TableInserted'
    END TRY
    BEGIN CATCH
        -- Swallow any error so the firing INSERT is not rolled back
    END CATCH
    SET XACT_ABORT ON
END
I'm pretty new to PL/SQL, although I've got lots of DB experience with other RDBMSs. Here's my current issue.
procedure CreateWorkUnit
is
begin
    update workunit
       set workunitstatus = 2 --workunit loaded
     where SYSDATE between START_DATE and END_DATE
       and workunitstatus = 1; --workunit created

    --commit here?

    loader; --loads records based on status, will have a commit of its own

    update workunit wu
       set workunititemcount = (select count(*) from workunititems wui where wui.wuid = wu.wuid)
     where workunitstatus = 2;
end CreateWorkUnit;
So the behaviour I'm seeing, with or without commit statements, is that I have to execute the procedure twice: once to flip the statuses, and then the loader runs on the second execution. I'd like it all to run in one go.
I'd appreciate any words of oracle wisdom.
Thanks!
When to commit transactions in a batch procedure? It is a good question, although it only seems vaguely related to the problems with the code you post. But let's answer it anyway.
We need to commit when the PL/SQL procedure has completed a unit of work. A unit of work is a business transaction. This would normally be at the end of the program, the last statement before the EXCEPTION section.
Sometimes not even then. The decision to commit or roll back properly lies with the top of the calling stack. If our PL/SQL is being called from a client (maybe a user clicking a button on a screen) then perhaps the client should issue the commit.
But it is not unreasonable for a batch process to manage its own commit (and rollback in the case of errors). The main point is that only the topmost procedure should issue COMMIT. If a procedure calls other procedures, those called programs should not issue commits or rollbacks. Instead they should handle any errors (log them, etc.) and re-raise them to the calling program. Let it decide whether to roll back. Because all the called procedures run in the same session and hence the same transaction, a rollback in a called program will revert all the changes in the batch process. That's not right. The same reasoning applies to commits.
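A minimal sketch of that shape, reusing the workunit table from the question (the procedure names are invented):

CREATE OR REPLACE PROCEDURE load_step AS
    -- A called program: no COMMIT or ROLLBACK here.
BEGIN
    UPDATE workunit
       SET workunitstatus = 2
     WHERE workunitstatus = 1;
EXCEPTION
    WHEN OTHERS THEN
        -- log here if required, then let the caller decide what to do
        RAISE;
END load_step;
/

CREATE OR REPLACE PROCEDURE run_batch AS
    -- The topmost program: it owns the transaction.
BEGIN
    load_step;
    -- ... other steps ...
    COMMIT;          -- one commit per unit of work
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        RAISE;
END run_batch;
/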
You will sometimes read advice on using intermittent commits to break up long running processes into smaller units e.g. every 1000 inserts. This is bad advice for several reasons, not all of them related to transactions. The pertinent ones are:
Issuing a commit frees locks and releases undo that an open cursor in the loop may still need for read consistency. This is the cause of ORA-01555 "snapshot too old" errors.
It also affects read consistency, which only applies at the statement and/or transaction level. This is the cause of ORA-01002 "fetch out of sequence" errors.
It affects re-startability. If the program fails having processed 30% of the records, can we be confident it will only process the remaining 70% when we re-run the batch?
Once we commit records other sessions can see those changes: does it make sense for other users to see a partially changed view of the data?
So, the words of "Oracle wisdom" are: always align the database transaction with the business transaction, with a single commit per unit of work.
Somebody mentioned autonomous transactions as a way of issuing commits in sub-processes. This is usually a bad idea. Changes made in an autonomous transaction are visible to other sessions but not to our own. That very rarely makes sense. It also creates the same problems with re-startability which I discussed earlier.
The only acceptable use for autonomous transactions is recording activity (error log, trace, audit records). We need that data to persist regardless of what happens in the wider transaction. Any other use of the pragma is almost certainly a workaround for a poor design, which actually just makes the problem worse.
You may not need to commit in the PL/SQL procedure at all. The procedures that you call from another procedure run in the same session, so you don't need to commit inside them. And in any case, a procedure's changes are rolled back if its session rolls back or raises an unhandled exception.
I mis-classified my problem. I thought this was a transaction problem when really it was one of my flags not being set as expected: a number field was null when I was expecting 0.
Sorry for that.
Josh Robinson
Which procedure is more performant for an update which affects zero rows?
UPDATE table SET column = value WHERE id = number;
IF SQL%Rowcount > 0 THEN
COMMIT;
END IF;
or
UPDATE table SET column = value WHERE id = number;
COMMIT;
In other words, if an UPDATE affects zero rows and a COMMIT is issued, am I incurring any added expense at all?
I have a system which is being hampered by log file sync waits, and I'm wondering whether issuing a COMMIT against a transaction which affects zero rows will write to the log or not, and thus cause more contention on LGWR.
COMMIT does force a log file sync, so the system will indeed have to wait.
However, ROLLBACK does too and at some time either of them will have to happen.
So if you issue neither COMMIT nor ROLLBACK, you are just staying with an open transaction which sooner or later will cause a log sync wait.
Probably, you want to batch your UPDATE operations rather than committing after the first successful update.
There are risks in this. Technically, while the UPDATE may affect zero rows, it can still fire BEFORE or AFTER UPDATE triggers on the table (statement-level ones, not row-level). Those triggers could potentially "do something" that requires a commit/rollback.
Safer to check whether DBMS_TRANSACTION.LOCAL_TRANSACTION_ID is set.
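Something along these lines, keeping the placeholder names from the question (DBMS_TRANSACTION.LOCAL_TRANSACTION_ID returns NULL when the session has no active transaction):

UPDATE table SET column = value WHERE id = number;

IF dbms_transaction.local_transaction_id IS NOT NULL THEN
    -- Something - the update itself, or a statement-level trigger it
    -- fired - actually started a transaction, so there is work to commit.
    COMMIT;
END IF;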
There are any number of reasons which can underlie waits for log file sync. It seems unlikely that the main culprit is committing SQL statements which have updated zero rows. It is true that issuing too many commits can be the cause of this problem: for instance, if the application is set up to commit after every statement (e.g. by using AUTOCOMMIT=TRUE) instead of designing proper transactions. If this is the cause, then there is not much you can do short of a major rewrite of the application.
If you want to delve deeper into the root causes of your problem I recommend you read this exhaustive (and exhausting) article by Pythian's Riyaj Shamsudeen on Tuning ‘log file sync’ Event Waits.