Does PL/SQL Procedure Automatically Commit When it Exits? [duplicate] - sql

I have 3 tables in an Oracle DB. I am writing a procedure to delete some rows in all 3 tables based on some conditions.
I have used all three delete statements one by one in the procedure. When the stored procedure is executed, does any auto-commit happen at execution time?
Or do I need to code the commit manually at the end?

There is no auto-commit on the database level, but the API that you use could potentially have auto-commit functionality. From Tom Kyte.
That said, I would like to add:
Unless you are doing an autonomous transaction, you should stay away from committing directly in the procedure: From Tom Kyte.
Excerpt:
I wish PLSQL didn't support commit/rollback. I firmly believe
transaction control MUST be done at the topmost, invoker level. That
is the only way you can take these N stored procedures and tie them
together in a transaction.
In addition, it should also be noted that for DDL (doesn't sound like you are doing any DDL in your procedure, based on your question, but just listing this as a potential gotcha), Oracle adds an implicit commit before and after the DDL.
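For reference, an autonomous transaction is the one case where a commit inside the procedure does not touch the caller's transaction. A minimal sketch (the log_message procedure and app_log table are made-up names; logging is the classic use case):
create or replace procedure log_message (p_text in varchar2) as
    pragma autonomous_transaction;
begin
    insert into app_log (log_text) values (p_text);
    commit; -- commits only this autonomous transaction, not the caller's work
end;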

There's no autocommit, but it's possible to put a commit command inside the stored procedure.
Example #1: no commit
create procedure my_proc as
begin
insert into t1(col1) values(1);
end;
When you execute the procedure, you need to call commit yourself:
begin
my_proc;
commit;
end;
Example #2: commit
create procedure my_proc as
begin
insert into t1(col1) values(1);
commit;
end;
When you execute the procedure, you don't need to call commit because the procedure does it:
begin
my_proc;
end;

There is no autocommit within the scope of a stored procedure. However, if you are using SQL*Plus or SQL Developer, autocommit is possible depending on the settings.
You should handle commit and rollback as part of the stored procedure code.
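If you do choose to handle it inside the procedure, a minimal sketch for the three-table delete might look like this (the table and column names are invented for illustration):
create or replace procedure purge_old_rows as
begin
    delete from t1 where created_at < add_months(sysdate, -12);
    delete from t2 where created_at < add_months(sysdate, -12);
    delete from t3 where created_at < add_months(sysdate, -12);
    commit;
exception
    when others then
        rollback; -- undo all three deletes if any of them fails
        raise;
end;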

Related

Trigger to detect whether DELETE or UPDATE is called by stored proc

I have a scenario where certain users must have the rights to update or delete certain records in production. What I want to do is put in a safeguard to make sure they do not accidentally update or delete a large section (or the entirety) of the table, but only a few records as per their need. So I wrote a simple trigger to accomplish this.
CREATE TRIGGER delete_check
ON dbo.My_table
AFTER UPDATE,DELETE AS
BEGIN
IF (SELECT COUNT(*) FROM Deleted) > 15
BEGIN
RAISERROR ('Bulk deletes from this table are not allowed', 16, 1)
ROLLBACK
END
END --end trigger
But here is the problem. There is a stored procedure that can do bulk updates to the table. The users can and should be allowed to call the stored procedure, as its scope is more constrained. So my trigger would unfortunately preclude them from calling the stored proc when they need to.
The only solution I have thought of is to run the stored proc as an impersonated user, then modify the trigger to exclude that user from the logic. But that will bring up other issues in my environment. Not insurmountable, but annoying. Nevertheless, this seems the only viable option.
Am I thinking about this the right way, or is there a better approach?
You can add a check of @@NESTLEVEL in the trigger. The value will be 1 for an ad-hoc statement or 2 when called from the stored procedure.
CREATE TRIGGER delete_check
ON dbo.My_table
AFTER UPDATE,DELETE AS
BEGIN
IF (SELECT COUNT(*) FROM Deleted) > 15
AND @@NESTLEVEL = 1 --ad-hoc delete
BEGIN
RAISERROR ('Bulk deletes from this table are not allowed', 16, 1);
ROLLBACK;
END;
END;
I usually handle this with CONTEXT_INFO(). This gives you better control than @@NESTLEVEL because you can identify the specific stored procedure doing the calling and handle each one individually if required. You do this as follows:
Add the procedure name to CONTEXT_INFO() e.g.
-- START OF STORED PROCEDURE
-- Tell the trigger who we are, and that we can be trusted.
declare @OldContext char(128), @NewContext varbinary(128);
-- Get existing context_info()
set @OldContext = coalesce(convert(char(128), context_info()), '');
-- Add new info to context_info
set @NewContext = convert(varbinary(128), convert(char(128), 'dbo.MyProcedureName'));
-- Store new context info
set context_info @NewContext;
-- STORED PROCEDURE CONTENT
-- END OF STORED PROCEDURE
-- Restore context_info
set @NewContext = convert(varbinary(128), @OldContext);
set context_info @NewContext;
In the trigger return early if the CONTEXT_INFO() is from a trusted source e.g.
-- START OF TRIGGER
declare @NewContext char(128) = coalesce(convert(char(128), context_info()), '');
if @NewContext in ('dbo.MyProcedureName') begin
return;
end;
Another advantage of this approach (for other trigger uses) is that you can avoid carrying out logic in a trigger when it is called from an SP. Often you put logic in a trigger to ensure it happens regardless of how the insert/update/delete happens; but when the work is done in an SP, you can ensure the required logic is carried out within the SP itself, which avoids the need to do it in the trigger. This is especially useful if you end up with performance issues due to too much logic in the trigger.
Note: For SQL Server 2016+ you can use SESSION_CONTEXT() in a similar way.
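A rough sketch of that SESSION_CONTEXT() variant, assuming SQL Server 2016+ (the key name calling_proc and the procedure name are placeholders):
-- At the start of the stored procedure, mark the session as trusted
EXEC sys.sp_set_session_context @key = N'calling_proc', @value = N'dbo.MyProcedureName';
-- In the trigger, return early when called from the trusted procedure
IF CONVERT(nvarchar(128), SESSION_CONTEXT(N'calling_proc')) = N'dbo.MyProcedureName'
    RETURN;
-- At the end of the stored procedure, clear the flag again
EXEC sys.sp_set_session_context @key = N'calling_proc', @value = NULL;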
One way of doing this would be:
Create a stored procedure to perform the desired work
If the criteria varies greatly, create several procedures for each "kind" of work
Grant EXECUTE permissions to the desired users on those procedures ("kind of work") that they are permitted to do. (You are using Database Roles and Domain Groups, right?)
Revoke permissions to make ad hoc data modifications on the table
It can be fussy to set up, but it supports the "Principle of Least Permissions", permitting users to do only what they are supposed to do.
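A minimal sketch of that setup, with hypothetical role, procedure, and domain group names:
-- Role that is only allowed to run the controlled fix-up procedure
CREATE ROLE targeted_data_fixers;
GRANT EXECUTE ON dbo.Fix_Specific_Records TO targeted_data_fixers;
-- Block ad hoc modifications against the table itself
DENY UPDATE, DELETE ON dbo.My_table TO targeted_data_fixers;
-- Membership comes from a domain group (assumes the Windows group already has a server login)
CREATE USER [DOMAIN\DataFixers] FOR LOGIN [DOMAIN\DataFixers];
ALTER ROLE targeted_data_fixers ADD MEMBER [DOMAIN\DataFixers];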

Postgres: Save rows from temp table before rollback

I have a main procedure (p_proc_a) in which I create a temp table for logging (tmp_log). In the main procedure I call some other procedures (p_proc_b, p_proc_c). In each of these procedures I insert data into table tmp_log.
How do I save rows from tmp_log into a physical table (log) in case of exception before rollback?
create procedure p_proc_a()
language plpgsql
as $body$
begin
    create temp table tmp_log (log_message text) on commit drop;
    call p_proc_b();
    call p_proc_c();
    insert into log (log_message)
    select log_message from tmp_log;
exception
    when others then
        declare
            v_message_text text;
        begin
            get stacked diagnostics
                v_message_text = message_text;
            insert into log (log_message)
            values (v_message_text);
        end;
end;
$body$;
What is a workaround to save the logs into a table and roll back the changes from p_proc_b and p_proc_c?
That is not possible in PostgreSQL.
The typical workaround is to use dblink to connect to the database itself and write the logs via dblink.
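A rough sketch of that dblink approach, assuming the dblink extension is installed and a loopback connection is allowed by your authentication setup (you may need to add user/password to the connection string). A nested block like this could go in p_proc_a's exception handler, so the rows written to log survive even if the surrounding transaction is rolled back:
create extension if not exists dblink;
-- inside the exception handler of p_proc_a:
declare
    r record;
begin
    for r in select log_message from tmp_log loop
        -- the insert runs on a separate connection, outside the current transaction
        perform dblink_exec(
            'dbname=' || current_database(),
            format('insert into log (log_message) values (%L)', r.log_message)
        );
    end loop;
end;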
I found three solutions to store data within a transaction (in my case, for debugging purposes) and still be able to see that data after rolling back the transaction.
I have a scenario where I use the following block, so it may not apply to your scenario:
DO $$
BEGIN
...
ROLLBACK;
END;
$$;
The first two solutions were suggested to me in the Postgres Slack; the third one I tried and found after talking with them, a way that had worked in another DB.
Solutions
1 - Using DBLink
I don't remember exactly how it was done, but you install an extension and then connect to another DB (which can even be this same DB); that separate connection is not affected by the current transaction.
2 - Using COPY command
Using the
COPY (SELECT ...) TO PROGRAM 'psql -c "COPY xyz FROM stdin"'
BTW I never used it, and it seems to require superuser permission on the server. I'm also not sure exactly how it is used or how it outputs the data.
3 - Using Sub-Transactions
In this approach, you use a sub-transaction (I'm not sure this is the correct term; it should probably be called an autonomous transaction) to commit the result you want to keep.
In my case the command looks like this:
I used a Temp Table, but it seems (I'm not sure) to work with an actual table as well
CREATE TEMP TABLE
IF NOT EXISTS myZone AS
SELECT * from public."Zone"
LIMIT 0;
DO $$
BEGIN
INSERT INTO public."Zone" (...)VALUES(...);
BEGIN
INSERT INTO myZone
SELECT * from public."Zone";
commit;
END;
Rollback;
END; $$;
SELECT * FROM myZone;
DROP TABLE myZone;
Don't ask what the purpose of doing this is; I'm creating a test scenario and wanted to track what I had done so far. Since this block does not support SELECT (DQL) output, I had to do something else, and I wanted a clean report rather than raised errors.
According to www.techtarget.com:
Autonomous transactions allow a single transaction to be subdivided into multiple commit/rollback transactions, each of which
will be tracked for auditing purposes. When an autonomous transaction
is called, the original transaction (calling transaction) is
temporarily suspended.
(This text was indexed by google and existed on 2022-10-11, and the website was not opened due to an E-mail validation issue)
Also, this name seems to come from Oracle, to which this article relates.
EDITED:
Removing solution 3 as it won't work.
Postgres 11 claims to support autonomous transactions, but it's not what you may expect...
For this functionality Postgres introduced the SAVEPOINT:
SAVEPOINT <name of savepoint>;
<... CODE ...>
<RELEASE|ROLLBACK> SAVEPOINT <name of savepoint>;
Now the issue is:
If you use nested BEGIN, the COMMIT inside the nested code commits everything, and the ROLLBACK in the outer block does nothing (it will not roll back anything that happened before the inner COMMIT).
If you use SAVEPOINT, it is only used to roll back part of the code, and even if you COMMIT (RELEASE) it, the ROLLBACK in the outer block will roll back the savepoint's changes too.
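For completeness, a minimal illustration of the SAVEPOINT mechanics described above (t1 is a placeholder table):
begin;
insert into t1 (col1) values (1);   -- kept
savepoint before_risky_part;
insert into t1 (col1) values (2);   -- undone below
rollback to savepoint before_risky_part;
commit;                             -- only the first insert is persisted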

DB2 stored procedure for clearing the database can't be found

I'm trying to create a DB2 stored procedure that will clear all the data tables and reset the indexes to 0. Creating the procedure is pretty straightforward, but the issue is that DB2 immediately forgets it exists. What am I doing wrong?
Create a simple script:
create procedure CLEARTABLES()
language sql
BEGIN
commit;
truncate TABLE1 immediate;
truncate TABLE2 immediate;
truncate TABLE3 immediate;
END;
Make sure we can execute it:
GRANT EXECUTE ON PROCEDURE CLEARTABLES TO PUBLIC;
And here is where it all breaks down with: No authorized routine named "CLEARTABLES" of type "PROCEDURE" having compatible arguments was found. SQLCODE=-440, SQLSTATE=42884, DRIVER=4.26.14
CALL CLEARTABLES;
I've also tried execute, but this does not appear to do anything.
EXECUTE CLEARTABLES;
And to prove it exists:
SELECT * FROM SYSIBM.SYSROUTINES s
WHERE s.ROUTINETYPE = 'P' AND s.ROUTINENAME = 'CLEARTABLES'
I feel like I'm missing something very obvious here, so I've tried a lot of small things like parentheses, no parentheses, lower/upper case, etc. I'm using DBeaver, and I can see the procedure under Application Objects > Procedures, named CLEARTABLES in all caps with no parameters, yet DB2 somehow can't find it with the way I'm calling it.

T-SQL 2005: combine multiple create/alter procedure calls in one transaction

I want to build a T-SQL change script that rolls out database changes from dev to test to production.
I've split the script into three parts:
DDL statements
changes for stored procedures (create and alter procedure)
data creation and modification
I want all of the changes in those three scripts to be made in a transaction. Either all changes in the script are processed or - upon an error - all changes are rolled back.
I managed to do this for the steps 1 and 3 by using the try/catch and begin transaction statements.
My problem is now to do the same thing for the stored procedures.
A call to "begin transaction" directly before a "create stored procedure" statement results in a syntax error telling me that "alter/create procedure statement must be the first statement inside a query batch".
So I wonder how I could combine multiple create/alter procedure statements in one transaction.
Any help is highly appreciated ;-)
Thanks
You can use dynamic SQL to create your stored procedures.
EXEC ('CREATE PROC dbo.foo AS ....')
This will avoid the error "alter/create procedure statement must be the first statement inside a query batch"
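A rough sketch of wrapping several such dynamic-SQL definitions in one transaction (the procedure names and bodies are placeholders):
BEGIN TRY
    BEGIN TRANSACTION;
    -- each EXEC runs its CREATE PROCEDURE in its own batch
    EXEC ('CREATE PROCEDURE dbo.proc_one AS SELECT 1;');
    EXEC ('CREATE PROCEDURE dbo.proc_two AS SELECT 2;');
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;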
Try this:
begin transaction
go
create procedure foo as begin select 1 end
go
commit transaction
try putting the steps in a job
BEGIN TRANSACTION
BEGIN TRY
-- Do your stuff here
COMMIT TRANSACTION
PRINT 'Successful.'
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() as ErrorNumber,
ERROR_MESSAGE() as ErrorMessage;
ROLLBACK TRANSACTION
END CATCH

How does SQL Server treat statements inside stored procedures with respect to transactions?

Say I have a stored procedure consisting of several separate SELECT, INSERT, UPDATE and DELETE statements. There is no explicit BEGIN TRANS / COMMIT TRANS / ROLLBACK TRANS logic.
How will SQL Server handle this stored procedure transaction-wise? Will there be an implicit connection for each statement? Or will there be one transaction for the stored procedure?
Also, how could I have found this out on my own using T-SQL and / or SQL Server Management Studio?
Thanks!
There will only be one connection; it is what is used to run the procedure, no matter how many SQL commands are within the stored procedure.
Since you have no explicit BEGIN TRANSACTION in the stored procedure, each statement will run on its own, with no ability to roll back any changes if there is an error.
However, if you issue a BEGIN TRANSACTION before you call the stored procedure, then all statements are grouped within a transaction and can either be COMMITted or ROLLBACKed following the stored procedure execution.
From within the stored procedure, you can determine whether you are running within a transaction by checking the value of the system variable @@TRANCOUNT (Transact-SQL). A zero means there is no transaction; anything else shows how many nested levels of transactions you are in. Depending on your SQL Server version, you could use XACT_STATE (Transact-SQL) too.
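For example, a procedure can simply report those values (a minimal sketch; the name is made up):
CREATE PROCEDURE dbo.check_tran_state
AS
BEGIN
    -- @@TRANCOUNT = 0: no caller-controlled transaction, each statement autocommits
    -- @@TRANCOUNT >= 1: running inside an explicit transaction opened by the caller
    SELECT @@TRANCOUNT AS open_transactions,
           XACT_STATE() AS xact_state; -- 1 = committable, -1 = doomed, 0 = none
END;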
If you do the following:
BEGIN TRANSACTION
EXEC my_stored_procedure_with_5_statements_inside @Parma1
COMMIT
everything within the procedure is covered by the transaction, all 6 statements (the EXEC is a statement covered by the transaction, 1+5=6). If you do this:
BEGIN TRANSACTION
EXEC my_stored_procedure_with_5_statements_inside @Parma1
EXEC my_stored_procedure_with_5_statements_inside @Parma1
COMMIT
everything within the two procedure calls is covered by the transaction, all 12 statements (the 2 EXECs are both statements covered by the transaction, 1+5+1+5=12).
You can find out on your own by creating a small stored procedure that does something simple, say insert a record into a test table. Then Begin Tran; run sp_test; rollback; Is the new record there? If so, then the SP ignores the outside transaction. If not, then the SP is just another statement executed inside the transaction (which I am pretty sure is the case).
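A minimal sketch of that experiment, with made-up table and procedure names:
CREATE TABLE dbo.test_table (id int IDENTITY(1,1), note varchar(50));
GO
CREATE PROCEDURE dbo.sp_test
AS
BEGIN
    INSERT INTO dbo.test_table (note) VALUES ('inserted by sp_test');
END;
GO
BEGIN TRAN;
EXEC dbo.sp_test;
ROLLBACK;
-- Returns 0: the insert done inside the procedure was rolled back by the outer transaction
SELECT COUNT(*) AS rows_remaining FROM dbo.test_table;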
You must understand that a transaction is a state of the session. The session is in an explicit transaction state when at least one BEGIN TRANSACTION has been executed in the session, wherever the BEGIN TRANSACTION command was issued (before entering a routine or inside the routine code). Otherwise, the session is in an implicit transaction state. You can have multiple BEGIN TRANSACTIONs, but only the first one changes the behavior of the session; the others only increase the @@TRANCOUNT session variable.
Implicit transaction state means that every SQL order (DDL, DML and DCL commands) will have an invisible integrated transaction scope.