Is a trigger in Oracle part of the request? - sql

I use Spring Repository and an Oracle DB.
I have a table and a trigger that fires on insert/update/delete. If I execute an insert/delete/update against the table and the trigger hits an SQL error (locked resources or something else), will the Repository method get an exception? Or does the Oracle trigger execute as a separate part of the insert/delete/update statement?

I don't know Spring Repository, but I do know that in, for example, Oracle Forms, Application Express, and Reports, a trigger error propagates all the way up. Everything stops in that case.
Suppose that I have a form and enter some data. A database trigger fires, tries to do whatever it does, and fails with raise_application_error. Nothing gets completed, and I see an error in my form, e.g. "ORA-20001: my custom error".
Therefore, I presume that you'd experience the same in Spring Repository. After all, it wouldn't cost much to test it, right? I would if I could, but I can't, so do it yourself.
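If you do want a quick test, a minimal sketch like this should do (the table t and the trigger are hypothetical); any insert routed through your Repository should then fail with the ORA-20001 error:
CREATE TABLE t (id NUMBER);

CREATE OR REPLACE TRIGGER t_bi
BEFORE INSERT ON t
FOR EACH ROW
BEGIN
    -- fail unconditionally, so the caller must see the error
    raise_application_error(-20001, 'my custom error');
END;
/

INSERT INTO t (id) VALUES (1);  -- fails with ORA-20001: my custom error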

Your Spring Repository will error out when you try to insert data into the table. The DML command will not be performed if there is an error in the associated trigger.
Mostly I use DML in my PL/SQL procedures for inserts, and use a GET_LOCKED_TRANSACTION() function to check whether any resource is busy.
FUNCTION GET_LOCKED_TRANSACTION
(
    P_WIP_ENTITY_ID          IN NUMBER,
    P_PRODUCTION_NOTE_NUMBER IN NUMBER
) RETURN BOOLEAN IS
    ROW_LOCKED EXCEPTION;
    PRAGMA EXCEPTION_INIT(ROW_LOCKED, -54);  -- ORA-00054: resource busy
BEGIN
    /* Cursor with the NOWAIT attribute: fails immediately if the rows are locked */
    FOR CC IN (SELECT *
                 FROM myTable
                WHERE WIP_ENTITY_ID = P_WIP_ENTITY_ID
                  FOR UPDATE NOWAIT) LOOP
        NULL;
    END LOOP;
    RETURN FALSE;
EXCEPTION
    WHEN ROW_LOCKED THEN
        RETURN TRUE;
END GET_LOCKED_TRANSACTION;
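As a hedged usage sketch (assuming the function is visible to the caller; the literal key values and the status column on myTable are placeholders), the caller tests the lock before running its DML:
DECLARE
    l_locked BOOLEAN;
BEGIN
    l_locked := GET_LOCKED_TRANSACTION(100, 1);
    IF l_locked THEN
        raise_application_error(-20002, 'Rows are locked, try again later');
    END IF;
    -- safe to proceed: the FOR UPDATE NOWAIT locks are still held by this session
    UPDATE myTable SET status = 'PROCESSED' WHERE WIP_ENTITY_ID = 100;
    COMMIT;
END;
/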

Related

Postgres: Save rows from temp table before rollback

I have a main procedure (p_proc_a) in which I create a temp table for logging (tmp_log). In the main procedure I call some other procedures (p_proc_b, p_proc_c). In each of these procedures I insert data into table tmp_log.
How do I save the rows from tmp_log into a physical table (log) in case of an exception, before the rollback happens?
create procedure p_proc_a()
language plpgsql
as $body$
declare
    v_message_text text;
begin
    create temp table tmp_log (log_message text) on commit drop;
    call p_proc_b();
    call p_proc_c();
    insert into log (log_message)
    select log_message from tmp_log;
exception
    when others then
        get stacked diagnostics
            v_message_text = message_text;
        insert into log (log_message)
        values (v_message_text);
end;
$body$;
What is a workaround to save the logs into a table and still roll back the changes from p_proc_b and p_proc_c?
That is not possible in PostgreSQL.
The typical workaround is to use dblink to connect to the database itself and write the logs via dblink.
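A minimal sketch of that workaround, assuming the dblink extension is installed and that a plain connection back to the current database is permitted by pg_hba.conf (the log table comes from the question):
create extension if not exists dblink;

create or replace procedure p_log_autonomous(p_message text)
language plpgsql
as $body$
begin
    -- dblink opens its own session, so this insert commits independently
    -- of the calling transaction and survives its rollback
    perform dblink_exec(
        'dbname=' || current_database(),
        format('insert into log (log_message) values (%L)', p_message)
    );
end;
$body$;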
I found three solutions to store data within a transaction (in my case, for debugging purposes) and still be able to see that data after rolling back the transaction.
I have a scenario where I use the following block, so it may not apply to your scenario:
DO $$
BEGIN
...
ROLLBACK;
END;
$$;
The first two solutions were suggested to me in the Postgres Slack; the third one I tried and found after talking with them, a way that had worked in another DB.
Solutions
1 - Using dblink
I don't remember exactly how it was done, but you install the dblink extension and then connect to another DB (which can even be this same DB) and write there; that other connection is not affected by your transaction. See the sketch after the first answer above.
2 - Using the COPY command
Using
COPY (SELECT ...) TO PROGRAM 'psql -c "COPY xyz FROM stdin"'
BTW I never used it myself; it seems to require superuser permission, since the program runs on the database server, and I can't say exactly how it outputs data.
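Still, a hedged sketch of how it might look (the tmp_log and log tables and the psql options are assumptions; the inner psql runs in its own session, so its COPY commits independently of the outer transaction):
COPY (SELECT log_message FROM tmp_log)
TO PROGRAM 'psql -d mydb -c "COPY log (log_message) FROM stdin"';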
3 - Using sub-transactions
In this way, you use a sub-transaction (I'm not sure the term is correct; these are usually called autonomous transactions) to commit the result you want to keep.
In my case the command looks like this:
I used a temp table, but it seems (I'm not sure) to work with an actual table as well.
CREATE TEMP TABLE IF NOT EXISTS myZone AS
SELECT * FROM public."Zone"
LIMIT 0;

DO $$
BEGIN
    INSERT INTO public."Zone" (...) VALUES (...);
    BEGIN
        INSERT INTO myZone
        SELECT * FROM public."Zone";
        COMMIT;
    END;
    ROLLBACK;
END; $$;

SELECT * FROM myZone;
DROP TABLE myZone;
Don't ask what the purpose of doing this is; I'm creating a test scenario, and I wished to track what I had done so far. Since a DO block does not support SELECT or other DQL output, I had to do something else, and I wanted a clean report without raising errors.
According to www.techtarget.com:
"Autonomous transactions allow a single transaction to be subdivided into multiple commit/rollback transactions, each of which will be tracked for auditing purposes. When an autonomous transaction is called, the original transaction (calling transaction) is temporarily suspended."
(This text was indexed by Google and existed on 2022-10-11; the website itself would not open due to an e-mail validation issue.)
Also, this term seems to come from Oracle, to which this article relates.
EDITED:
Removing solution 3, as it won't work.
Postgres 11 claims to support autonomous transactions, but it's not what we may expect...
For this functionality Postgres introduced the SAVEPOINT:
SAVEPOINT <name of savepoint>;
<... CODE ...>
RELEASE SAVEPOINT <name of savepoint>;  -- or: ROLLBACK TO SAVEPOINT <name of savepoint>;
Now the issue is:
If you use a nested BEGIN, the COMMIT inside the nested code commits everything, and the ROLLBACK in the outer block does nothing (it does not roll back anything that happened before the inner COMMIT).
If you use SAVEPOINT, it only rolls back part of the code; even if you RELEASE it, a ROLLBACK in the outer block rolls back the savepoint's changes too.
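A small sketch of that second point in plain SQL (the log table is a placeholder): only the row inserted before the savepoint survives the partial rollback, and an outer ROLLBACK would discard everything, released savepoints included:
BEGIN;
INSERT INTO log (log_message) VALUES ('kept if the transaction commits');
SAVEPOINT sp1;
INSERT INTO log (log_message) VALUES ('discarded by the partial rollback');
ROLLBACK TO SAVEPOINT sp1;
COMMIT;  -- a ROLLBACK here would also discard the first row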

Type of update on postgresql

New to Postgres and PL/pgSQL here.
How do I go about writing a PL/pgSQL function that performs different actions based on the type of update (insert, delete, etc.) made to a table/record in a Postgres database?
You seem to be looking for a trigger.
In SQL, triggers are procedures that are called (fired) when a specific event happens on an object, for example when a table is updated, deleted from, or inserted into. Triggers can serve many use cases, such as implementing business integrity rules, cleaning data, auditing, security, ...
In Postgres, you should first define a PL/pgSQL function, and then reference it in the trigger declaration.
CREATE OR REPLACE FUNCTION my_table_function() RETURNS TRIGGER AS $my_table_trigger$
BEGIN
    ...
    RETURN NEW;  -- mandatory; the value is ignored for AFTER ROW triggers
END;
$my_table_trigger$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_trigger
AFTER INSERT OR UPDATE OR DELETE ON mytable
FOR EACH ROW EXECUTE PROCEDURE my_table_function();
From within the trigger code, you have access to a set of special variables, such as:
NEW, OLD: pseudo-records that contain the new/old database rows affected by the query
TG_OP: the operation that fired the trigger (INSERT, UPDATE, DELETE, ...)
Using these variables and other trigger mechanisms, you can analyze or alter the ongoing operation, or even abort it by raising an exception.
I would recommend reading the Postgres documentation for the CREATE TRIGGER statement and Trigger Procedures (the latter gives lots of examples).
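A hedged sketch putting NEW, OLD, and TG_OP together (the audit_log table and its columns are my assumptions, not part of the question):
CREATE OR REPLACE FUNCTION my_table_function() RETURNS TRIGGER AS $my_table_trigger$
BEGIN
    IF TG_OP = 'DELETE' THEN
        -- only OLD is populated for DELETE
        INSERT INTO audit_log (action, row_data) VALUES (TG_OP, to_jsonb(OLD));
    ELSE
        -- INSERT and UPDATE both populate NEW
        INSERT INTO audit_log (action, row_data) VALUES (TG_OP, to_jsonb(NEW));
    END IF;
    RETURN NULL;  -- the return value is ignored for AFTER ROW triggers
END;
$my_table_trigger$ LANGUAGE plpgsql;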

I have an autonomous trigger, but it only executes one time in the same session

I have an autonomous trigger, but it only executes once in the same session, then does nothing:
CREATE OR REPLACE TRIGGER tdw_insert_unsus
BEFORE INSERT ON unsuscription_fact FOR EACH ROW
DECLARE
    PRAGMA AUTONOMOUS_TRANSACTION;
    v_id_suscription        SUSCRIPTION_FACT.ID_SUSCRIPTION%TYPE;
    v_id_date_suscription   SUSCRIPTION_FACT.ID_DATE_SUSCRIPTION%TYPE;
    v_id_date_unsuscription SUSCRIPTION_FACT.ID_DATE_UNSUSCRIPTION%TYPE;
    v_suscription           DATE;
    v_unsuscription         DATE;
    v_live_time             SUSCRIPTION_FACT.LIVE_TIME%TYPE;
BEGIN
    SELECT id_suscription, id_date_suscription
      INTO v_id_suscription, v_id_date_suscription
      FROM (SELECT id_suscription, id_date_suscription
              FROM suscription_fact
             WHERE id_mno = :NEW.ID_MNO
               AND id_provider = :NEW.ID_PROVIDER
               AND ftp_service_id = :NEW.FTP_SERVICE_ID
               AND msisdn = :NEW.MSISDN
               AND id_date_unsuscription IS NULL
             ORDER BY id_date_suscription DESC)
     WHERE ROWNUM = 1;
    -- calculate time
    v_unsuscription := to_date(:NEW.id_date_unsuscription, 'yyyymmdd');
    v_suscription   := to_date(v_id_date_suscription, 'yyyymmdd');
    v_live_time     := (v_unsuscription - v_suscription);
    UPDATE suscription_fact
       SET id_date_unsuscription = :NEW.id_date_unsuscription,
           id_time_unsuscription = :NEW.id_time_unsuscription,
           live_time = v_live_time
     WHERE id_suscription = v_id_suscription;
    COMMIT;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        ROLLBACK;
END;
/
If I insert values, it works well the first or second time, but after that it does nothing; if I log out of the session and log back in, it again works for the first or second insertion.
What is the problem? I use Oracle 10g.
You're using an autonomous transaction to work around the fact that a trigger can not query its table itself. You've run into the infamous mutating table error and you have found that declaring the trigger as an autonomous transaction makes the error go away.
No luck for you though, this does not solve the problem at all:
First, any transaction logic is lost. You can't rollback the changes on the suscription_fact table, they are committed, while your main transaction is not and could be rolled back. So you've also lost your data integrity.
The trigger can not see the new row because the new row hasn't been committed yet! Since the trigger runs in an independent transaction, it can not see the uncommitted changes made by the main transaction: you will run into completely wrong results.
This is why you should never do any business logic in autonomous transactions. (there are legitimate applications but they are almost entirely limited to logging/debugging).
In your case you should either:
Update your logic so that it does not need to query your table (updating suscription_fact only if the new row is more recent than the old value stored in id_date_unsuscription).
Forget about using business logic in triggers and use a procedure that updates all tables correctly or use a view because here we have a clear case of redundant data.
Use a workaround that actually works (by Tom Kyte).
I would strongly advise using (2) here. Don't use triggers to code business logic. They are hard to write without bugs and harder still to maintain. Using a procedure guarantees that all the relevant code is grouped in one place (a package or a procedure), easy to read and follow and without unforeseen consequences.
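For illustration, a sketch of option (2) under assumptions (the procedure name, the %ROWTYPE parameter, and the rule for closing open rows are mine, not the asker's); the insert and the update run in one ordinary transaction, so no trigger and no autonomous transaction are involved:
CREATE OR REPLACE PROCEDURE process_unsuscription(p_row IN unsuscription_fact%ROWTYPE) IS
BEGIN
    INSERT INTO unsuscription_fact VALUES p_row;
    -- close any open subscription rows for the same keys
    -- (the asker's trigger picked only the most recent one)
    UPDATE suscription_fact s
       SET s.id_date_unsuscription = p_row.id_date_unsuscription,
           s.id_time_unsuscription = p_row.id_time_unsuscription,
           s.live_time = to_date(p_row.id_date_unsuscription, 'yyyymmdd')
                       - to_date(s.id_date_suscription, 'yyyymmdd')
     WHERE s.id_mno = p_row.id_mno
       AND s.id_provider = p_row.id_provider
       AND s.ftp_service_id = p_row.ftp_service_id
       AND s.msisdn = p_row.msisdn
       AND s.id_date_unsuscription IS NULL;
    -- the caller commits or rolls back both statements together
END;
/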

How to ignore errors in a trigger and perform the respective operation in MS SQL Server

I have created an AFTER INSERT trigger.
Now, if an error occurs while executing the trigger, it should not affect the insert operation on the triggered table.
In one word: if any error occurs in the trigger, it should be ignored.
I have used
BEGIN TRY
END TRY
BEGIN CATCH
END CATCH
but it gives the following error message and rolls back the insert operation on the triggered table:
An error was raised during trigger execution. The batch has been
aborted and the user transaction, if any, has been rolled back.
Interesting problem. By default, triggers are designed so that if they fail, they roll back the command that fired them. So whenever a trigger is executing there is an active transaction, whether or not there was an explicit BEGIN TRANSACTION on the outside. And BEGIN TRY inside the trigger will not work either. The best practice is not to write any code in a trigger that could possibly fail, unless it is desired to also fail the firing statement.
In this situation, to suppress this behavior, there are some workarounds.
Option A (the ugly way):
Since a transaction is active at the beginning of the trigger, you can just COMMIT it and continue with your trigger commands:
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
COMMIT;
... do whatever trigger does
END;
Note that if there is an error in the trigger code this will still produce the error message, but the data in the Test1 table is safely inserted.
Option B (also ugly):
You can move your code from the trigger to a stored procedure. Then call that stored procedure from a wrapper SP that implements BEGIN TRY/CATCH, and at the end call the wrapper SP from the trigger. It might be a bit tricky to move data from the INSERTED table around if it is needed in the logic (which is in the SP now), probably using some temp tables; a sketch follows.
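A hedged sketch of that wrapper pattern (all object names are made up; as the last answer below shows, XACT_ABORT must be OFF for the CATCH to swallow the error, and errors that doom the transaction still cannot be suppressed):
CREATE PROCEDURE dbo.RealWork AS
BEGIN
    -- the original trigger logic, reading rows from the temp table
    INSERT INTO dbo.Audit (Column1)
    SELECT Column1 FROM #TriggerInserted;
END;
GO
CREATE PROCEDURE dbo.WrapperSP AS
BEGIN
    BEGIN TRY
        EXEC dbo.RealWork;
    END TRY
    BEGIN CATCH
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();  -- swallow, or log @msg elsewhere
    END CATCH;
END;
GO
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    SET XACT_ABORT OFF;
    -- temp tables created here are visible to the nested procedures
    SELECT Column1 INTO #TriggerInserted FROM inserted;
    EXEC dbo.WrapperSP;
END;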
You cannot, and any attempt to solve it is snake oil. No amount of TRY/CATCH or @@ERROR checks will work around the fundamental issue.
If you want the tight coupling of a trigger, then you must buy into the lower availability induced by the coupling.
If you want to preserve availability (i.e. have the INSERT succeed), then you must give up the coupling (remove the trigger). You must do all the processing you were planning to do in the trigger in a separate transaction that starts after your INSERT commits. A SQL Agent job that polls the table for newly inserted rows, a Service Broker-activated procedure, or even an application-layer step would all fit the bill; a sketch of the polling variant follows.
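A minimal sketch of the polling variant, under assumptions (the Processed flag column and the Audit table are inventions for illustration); it would run as a SQL Agent job step in its own transaction, after the original INSERTs have committed:
UPDATE t
   SET t.Processed = 1
OUTPUT inserted.Column1 INTO dbo.Audit (Column1)  -- claim and process rows atomically
  FROM dbo.Test1 AS t
 WHERE t.Processed = 0;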
The accepted answer's option A gave me the following error: "The transaction ended in the trigger. The batch has been aborted." I circumvented the problem by using the SQL below.
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    SET XACT_ABORT OFF
    BEGIN TRY
        SELECT [Column1] INTO #TableInserted FROM [inserted]
        EXECUTE sp_executesql N'INSERT INTO [Table]([Column1]) SELECT [Column1] FROM #TableInserted'
    END TRY
    BEGIN CATCH
    END CATCH
    SET XACT_ABORT ON
END

Log error messages in Oracle stored procedure

We plan to configure a stored procedure to run as a daily batch job using the Oracle DBMS scheduler package. We would like to know the best way to log an error message when an error occurs. Is logging to a temporary table an option, or is there a better option? Thanks in advance.
If you decide to roll your own logging and log into a table, you might go the autonomous transaction route.
An autonomous transaction is a transaction that can be committed independently of the current transaction you are in.
That way you can log and commit all the info you want to your log table independently of the success or failure of your stored procedure or batch process's parent transaction.
CREATE OR REPLACE PROCEDURE "SP_LOG" (
    P_MESSAGE_TEXT VARCHAR2
) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    DBMS_OUTPUT.PUT_LINE(P_MESSAGE_TEXT);
    INSERT INTO PROCESSING_LOG (
        MESSAGE_DATE,
        MESSAGE_TEXT
    ) VALUES (
        SYSDATE,
        P_MESSAGE_TEXT
    );
    COMMIT;
END;
/
Then if you call it like this, you can still get messages committed to your log table even if you have a failure and roll back your transaction:
BEGIN
SP_LOG('Starting task 1 of 2');
... code for task 1 ...
SP_LOG('Starting task 2 of 2');
... code for task 2 ...
SP_LOG('Ending Tasks');
... determine success or failure of process and commit or rollback ...
ROLLBACK;
END;
/
You may want to tidy it up with exceptions that make sense for your code, but that is the general idea, the data written in the calls to SP_LOG persists, but the parent transaction can still be rolled back.
You could use Log4PLSQL (http://log4plsql.sourceforge.net/) and change the choice later through configuration changes, not code changes.
The log4plsql page gives a list of the various places it can log to.
It also depends on how applications and systems are monitored in your environment; if there is a standard way (for example, a business I worked at used IRC for monitoring), then you might want a function that calls out to that.
You say that you don't have a lot of control over the DB environment to install logging packages. If that is the case, then you'll be limited to querying the information in the dba_scheduler_job_run_details and dba_scheduler_job_log system views; you'll be able to see the history of executions there. Unhandled exceptions will show up in the ADDITIONAL_INFO column. If you need notification, you can poll these views and generate email.
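A hedged example of such a poll (the job name and the one-day window are placeholders):
SELECT log_date, status, additional_info
  FROM dba_scheduler_job_run_details
 WHERE job_name = 'MY_BATCH_JOB'   -- placeholder job name
   AND log_date > SYSDATE - 1      -- runs from the last day
 ORDER BY log_date DESC;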
That depends on how you deal with errors: if you just need to be notified, email is the best option; if you need to manually continue processing after the error, the table is the better choice.