How to access full OLD data in SQL Trigger

I have a trigger whose purpose is to fire whenever there is a DELETE on a particular table and insert the deleted data into another table in json format.
The trigger works fine if I am specifying each column explicitly. Is there any way to access the entire table row?
This is my code.
CREATE OR REPLACE TRIGGER TRIGGER1
AFTER DELETE
ON QUESTION
FOR EACH ROW
DECLARE
json_doc CLOB;
BEGIN
select json_arrayagg (
json_object ('code' VALUE :old.id,
'name' VALUE :old.text,
'description' VALUE :old.text) returning clob
) into json_doc
from dual;
PROCEDURE1(json_doc);
END;
This works fine. However, what I want is something like the following: instead of explicitly specifying each column, I want to convert the entire :OLD row.
CREATE OR REPLACE TRIGGER TRIGGER1
AFTER DELETE
ON QUESTION
FOR EACH ROW
DECLARE
json_doc CLOB;
BEGIN
select json_arrayagg (
json_object (:old) returning clob
) into json_doc
from dual;
PROCEDURE1(json_doc);
END;
Any suggestions, please?

The short and correct answer is you can't. We have a few tables in our application where we do this and the developer is responsible for updating the trigger when they add a column: this is enforced with code reviews and is probably the cleanest solution for this scenario.
The long answer is you can get close, but I wouldn't do this in production for several reasons:
Triggers are terrible for performance
Triggers are terrible for code clarity
This requires reading the row again using a flashback query, so:
You aren't getting the values of this row from inside your current transaction: if you update the row in your transaction and then delete it, the JSON will show what the values were BEFORE your update
There is a performance penalty for reading from UNDO
There is potential that UNDO won't be available and your trigger will fail
Your user needs permission to execute flashback queries
Your database needs to meet all the prerequisites to support flashback queries
Deleting a lot of rows will cause the ROWID collection to get large and consume PGA
There are probably more reasons, but in the interest of "can it be done" here you go...
DROP TABLE t1;
DROP TABLE t2;
DROP TRIGGER t1_ad;
CREATE TABLE t1 (
id NUMBER,
name VARCHAR2(100),
description VARCHAR2(100)
);
CREATE TABLE t2 (
dt TIMESTAMP(9),
json_data CLOB
);
INSERT INTO t1 VALUES (1, 'A','aaaa');
INSERT INTO t1 VALUES (2, 'B','bbbb');
INSERT INTO t1 VALUES (3, 'C','cccc');
INSERT INTO t1 VALUES (4, 'D','dddd');
CREATE OR REPLACE TRIGGER t1_ad
FOR DELETE ON t1
COMPOUND TRIGGER
TYPE t_rowid_tab IS TABLE OF ROWID;
v_rowid_tab t_rowid_tab := t_rowid_tab();
AFTER EACH ROW IS
BEGIN
v_rowid_tab.extend;
v_rowid_tab(v_rowid_tab.last) := :old.rowid;
END AFTER EACH ROW;
AFTER STATEMENT IS
v_scn v$database.current_scn%TYPE := dbms_flashback.get_system_change_number;
v_json_data CLOB;
v_sql CLOB;
BEGIN
FOR i IN 1 .. v_rowid_tab.count
LOOP
SELECT 'SELECT json_arrayagg(json_object(' ||
listagg('''' || lower(t.column_name) || ''' VALUE ' ||
lower(t.column_name),
', ') within GROUP(ORDER BY t.column_id) || ') RETURNING CLOB) FROM t1 AS OF SCN :scn WHERE rowid = :r'
INTO v_sql
FROM user_tab_columns t
WHERE t.table_name = 'T1';
EXECUTE IMMEDIATE v_sql
INTO v_json_data
USING v_scn, v_rowid_tab(i);
INSERT INTO t2
VALUES
(current_timestamp,
v_json_data);
END LOOP;
END AFTER STATEMENT;
END t1_ad;
/
UPDATE t1
SET NAME = 'zzzz' -- not captured
WHERE id = 2;
DELETE FROM t1 WHERE id < 3;
SELECT *
FROM t2;
-- 13-NOV-20 01.08.15.955426000 PM [{"id":1,"name":"A","description":"aaaa"}]
-- 13-NOV-20 01.08.15.969755000 PM [{"id":2,"name":"B","description":"bbbb"}]
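If you stick with the explicit-column trigger from the question instead, a helper query along these lines (just a sketch, reusing the same USER_TAB_COLUMNS/LISTAGG idea as the dynamic SQL above) can generate the JSON_OBJECT arguments for you to paste into the trigger whenever a column is added or dropped:
SELECT listagg('''' || lower(column_name) || ''' VALUE :old.' || lower(column_name),
       ',' || chr(10)) WITHIN GROUP (ORDER BY column_id) AS json_object_args
FROM user_tab_columns
WHERE table_name = 'QUESTION';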

Related

Insert value on second trigger with value from first trigger PL/SQL

Hi, I'm trying to insert a value in the second trigger, using the new ID from the first trigger, but only when a condition is fulfilled, and I'm stuck.
table1_trg works:
CREATE TABLE table1 (
id NUMBER(9,0) NOT NULL,
subject VARCHAR2(200) NOT NULL,
url_address VARCHAR2(200) NOT NULL
);
CREATE OR REPLACE TRIGGER table1_trg
BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
SELECT table1_seq.NEXTVAL
INTO :new.id
FROM dual;
END;
/
CREATE OR REPLACE TRIGGER table1_url
BEFORE INSERT ON table1
FOR EACH ROW
WHEN (NEW.subject = 'Task')
BEGIN
INSERT INTO CSB.table1 (url_address)
VALUES ('blabla.com?' || :new.id);
END;
/
I insert only the subject, but after that I receive an exception that subject cannot be null.
INSERT INTO corp_tasks_spec (subject) VALUES ('Task')
Any ideas how to resolve it?
You should not be inserting a new record into the same table, you should be modifying the column values for the row you're already inserting - which the trigger is firing against. You're getting the error because of that second insert - which is only specifying the URL value, not the subject or ID (though the first trigger would fire again and set the ID for that new row as well - so it complains about the subject).
Having two triggers on the same firing point can be difficult in old versions of Oracle as the order they fired wasn't guaranteed - so for instance your second trigger might fire before the first, and ID hasn't been set yet. You can control the order in later versions (from 11g) with FOLLOWS:
CREATE OR REPLACE TRIGGER table1_url
BEFORE INSERT ON table1
FOR EACH ROW
FOLLOWS table1_trg
WHEN (NEW.subject = 'Task')
BEGIN
:NEW.url_address := 'blabla.com?' || :new.id;
END;
/
This now fires after the first trigger, so ID is set, and assigns a value to the URL in this row rather than trying to create another row:
INSERT INTO table1 (subject) VALUES ('Task');
1 row inserted.
SELECT * FROM table1;
ID SUBJECT URL_ADDRESS
---------- ---------- --------------------
2 Task blabla.com?2
But you don't really need two triggers here, you could do:
DROP TRIGGER table1_url;
CREATE OR REPLACE TRIGGER table1_trg
BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
:NEW.id := table1_seq.NEXTVAL; -- no need to select from dual in recent versions
IF :NEW.subject = 'Task' THEN
:NEW.url_address := 'blabla.com?' || :new.id;
END IF;
END;
/
Then that trigger generates the ID and sets the URL:
INSERT INTO table1 (subject) VALUES ('Task');
1 row inserted.
SELECT * FROM table1;
ID SUBJECT URL_ADDRESS
---------- ---------- --------------------
2 Task blabla.com?2
3 Task blabla.com?3
Of course, for anything except Task you'll have to specify the URL as part of the insert, or it will error as that is a not-null column.
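For example (the subject and URL here are made-up values, purely for illustration):
INSERT INTO table1 (subject, url_address) VALUES ('Note', 'example.com/notes');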
Create sequence
CREATE SEQUENCE table1_SEQ
START WITH 1
MAXVALUE 100000
MINVALUE 1
NOCYCLE
NOCACHE
NOORDER;
Create trigger
CREATE OR REPLACE TRIGGER table1_TRG
Before Insert
ON table1 Referencing New As New Old As Old
For Each Row
Declare V_Val Number;
Begin
Select table1_SEQ.NextVal into V_Val From Dual;
If Inserting Then
:New.id:= V_Val;
End If;
End;
/

How to execute sql query on a table whose name taken from another table

I have a table that stores the names of other tables, like:
COL_TAB
--------------
TABLE_NAME
--------------
TAB1
TAB2
TAB3
What I want to do is run a SQL query against the table whose name is stored there, like this:
SELECT * FROM (SELECT TABLE_NAME from COL_TAB WHERE TABLE_NAME = 'TAB1')
Thanks
An Oracle SQL query can use a dynamic table name, using Oracle Data Cartridge and the ANY* types. But before you use those advanced features, take a step back and ask yourself if this is really necessary.
Do you really need a SQL statement to be that dynamic? Normally this is better handled by an application that can submit different types of queries. There are many application programming languages and toolkits that can handle unexpected types. If this is for a database-only operation, then normally the results are stored somewhere, in which case PL/SQL and dynamic SQL are much easier.
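For instance, if you already know which column you want and just need the value inside PL/SQL, a plain EXECUTE IMMEDIATE is enough. A minimal sketch against the COL_TAB/TAB1 setup below (it assumes the target table has a single numeric column A, which only TAB1 has here):
DECLARE
  v_table col_tab.table_name%TYPE;
  v_value NUMBER;
BEGIN
  -- Look up the table name, then query it dynamically.
  SELECT table_name INTO v_table FROM col_tab WHERE id = 1;
  EXECUTE IMMEDIATE 'SELECT a FROM ' || v_table INTO v_value;
  dbms_output.put_line(v_table || ': ' || v_value);
END;
/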
If you're sure you've got one of those rare cases that needs a totally dynamic SQL statement, you'll need something like my open source project Method4. Download and install it and try the below code.
Schema Setup
create table tab1(a number);
create table tab2(b number);
create table tab3(c number);
insert into tab1 values(10);
insert into tab2 values(20);
insert into tab3 values(30);
create table col_tab(table_name varchar2(30), id number);
insert into col_tab values('TAB1', 1);
insert into col_tab values('TAB2', 2);
insert into col_tab values('TAB3', 3);
commit;
Query
select * from table(method4.dynamic_query(
q'[
select 'select * from '||table_name sql
from col_tab
where id = 1
]'));
Result:
A
--
10
You'll quickly discover that queries within queries are incredibly difficult. There's likely a much easier way to do this, but it may require a design change.
I don't have a database at hand to test this, but I think you are looking for something like this:
DECLARE
-- Create a cursor on the table you are looking through.
CURSOR curTable IS
SELECT *
FROM MainTable;
recTable curTable%ROWTYPE;
vcQuery VARCHAR2(100);
dynCur SYS_REFCURSOR; -- ref cursor for the dynamically built query
BEGIN
-- Loop through all rows of MainTable.
OPEN curTable;
LOOP
FETCH curTable INTO recTable;
EXIT WHEN curTable%NOTFOUND;
-- Set up a dynamic query, with a WHERE example.
vcQuery := 'SELECT ColumnA, ColumnB FROM ' || recTable.Table_Name || ' WHERE 1 = 1';
-- Execute the query through the ref cursor.
OPEN dynCur FOR vcQuery;
-- Fetch from dynCur into matching variables here, then close it.
CLOSE dynCur;
END LOOP;
CLOSE curTable;
END;
/
Try this
CREATE OR REPLACE PROCEDURE TEST IS
sql_stmt VARCHAR2(200);
V_NAME VARCHAR2(20);
BEGIN
sql_stmt := 'SELECT * FROM ';
EXECUTE IMMEDIATE sql_stmt|| V_NAME;
END;
Update
A SELECT statement on its own doesn't work in a procedure (it needs an INTO clause or a cursor).
In SQL Server you can try a T-SQL block:
DECLARE @name VARCHAR(100)
SELECT @name = 'SELECT * FROM ' + TABLE_NAME FROM COL_TAB WHERE TABLE_NAME = 'TAB1'
EXEC(@name);
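For completeness, an Oracle sketch of the same idea (illustrative only, not tested): look up the table name, then open a ref cursor for the dynamically built statement.
DECLARE
  v_name VARCHAR2(30);
  v_cur  SYS_REFCURSOR;
BEGIN
  SELECT table_name INTO v_name FROM col_tab WHERE table_name = 'TAB1';
  OPEN v_cur FOR 'SELECT * FROM ' || v_name;
  -- Fetch from v_cur into matching variables here, or hand it back to the caller.
  CLOSE v_cur;
END;
/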

How can I modify this trigger to include column-name, old-value and new-value?

Suppose a trigger keeps track of the AREA table and records the changes in AREA_LOGGING_TABLE.
CREATE TABLE AREA
( AREA_NUMBER NUMBER,
AREA_NAME VARCHAR(20)
)
CREATE TABLE AREA_LOGGING_TABLE
( WHO_MODIFIED VARCHAR(20),
WHEN_MODIFIED DATE,
OLD_VALUE BLOB,
NEW_VALUE BLOB,
COLUMN_NAME VARCHAR(30)
)
I want to record the username, date/time, column name, old data, and new data.
How can I do that?
CREATE OR REPLACE TRIGGER AREA_MODIFY_LOGGER_COLUMN_LVL
AFTER INSERT or UPDATE or DELETE
ON AREA
REFERENCING OLD AS old_data NEW AS new_data
FOR EACH ROW
DECLARE
v_username varchar2(10);
BEGIN
-- Find username of person performing the DELETE on the table
SELECT user INTO v_username
FROM dual;
-- Insert record into audit table
INSERT INTO AREA_LOGGING_TABLE(who_modified, when_modified, old_value, new_value)
VALUES ( v_username, sysdate, :old_data.area_number, :new_data.area_number);
END;
This is not working.
Besides, I don't know how to include column-name here.
The utl_raw.cast_to_raw function can be used to convert your values to RAW, which can then be stored in your BLOB columns.
Regarding the column_name, I think you can hard-code it in each insert statement, just as you are already doing with :NEW and :OLD.
The nvl function is used to handle NULLs in :NEW / :OLD.
CREATE OR REPLACE TRIGGER AREA_MODIFY_LOGGER_COLUMN_LVL
AFTER INSERT or UPDATE or DELETE
ON AREA
REFERENCING OLD AS Old NEW AS New
FOR EACH ROW
DECLARE
v_username varchar2(10);
BEGIN
-- Find username of person performing the DELETE on the table
SELECT user INTO v_username
FROM dual;
if nvl(:old.area_number, -1) <> nvl(:new.area_number, -1) then
-- Insert record into audit table
INSERT INTO AREA_LOGGING_TABLE(who_modified, when_modified, old_value, new_value, column_name)
VALUES ( v_username, sysdate, utl_raw.cast_to_raw(:Old.area_number), utl_raw.cast_to_raw(:New.area_number), 'AREA_NUMBER');
end if;
if nvl(:old.area_name , '-1') <> nvl(:new.area_name, '-1') then
-- Insert record into audit table
INSERT INTO AREA_LOGGING_TABLE(who_modified, when_modified, old_value, new_value, column_name)
VALUES ( v_username, sysdate, utl_raw.cast_to_raw(:Old.AREA_NAME), utl_raw.cast_to_raw(:New.AREA_NAME), 'AREA_NAME');
end if;
END;
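To check what got logged you can convert the BLOBs back to text. A quick sketch (the sample AREA row is made up; DBMS_LOB.SUBSTR pulls the first 2000 bytes as RAW and UTL_RAW.CAST_TO_VARCHAR2 turns that back into a string, which is fine for short values like these):
INSERT INTO area (area_number, area_name) VALUES (1, 'NORTH');
UPDATE area SET area_name = 'NORTH-EAST' WHERE area_number = 1;
SELECT who_modified,
       when_modified,
       column_name,
       utl_raw.cast_to_varchar2(dbms_lob.substr(old_value, 2000, 1)) AS old_text,
       utl_raw.cast_to_varchar2(dbms_lob.substr(new_value, 2000, 1)) AS new_text
FROM area_logging_table;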
Just to give you a brief note to broaden the picture: in SQL Server, triggers get built-in pseudo-tables you can read instead of :OLD and :NEW:
- deleted (i.e. select @empid = d.Emp_ID from deleted d)
- inserted (i.e. select @empid = i.Emp_ID from inserted i) (can be used in insert/update operations)

Extract first line from all subpartitions in all tables in Oracle database

I want to get the first value for all subpartitions of all tables in my Oracle database.
From my result set, I want to update another table with the values returned and insert a comment ('No value', for example) for subpartitions which are empty.
I have written a piece of code to begin with one table.
Can you please tell me if the algorithm I am using is right and why there is no output?
DECLARE
BEGIN
for i in (select table_name, subpartition_name
from dba_tab_subpartitions
where table_name ='my_table'
order by 1)
loop
execute immediate ' insert into bkp_part (partition_name) values('||i.subpartition_name||')';
commit;
execute immediate 'select *
from '|| i.table_name||' subpartition('||i.subpartition_name||')
where rownum < 2';
end loop;
end;
/
There is no output because you aren't creating any. Your dynamic query isn't going to be executed because you don't select the columns into anything; if you really want the entire row you'd need a %rowtype variable to select into, but I'm not sure if you really want the whole row or just one value from it. And having selected into something, you then need to use that variable - putting it into a table from what you've said, or displaying it for debug purposes via dbms_output.
Your insert statement doesn't need to be dynamic; you can just do:
insert into bkp_part (partition_name) values (i.subpartition_name);
To get the first row from each subpartition, you need to establish what you mean by 'first'. In this example I'm using a hash subpartition, so for the sake of argument I'll decide that the 'first' row is the lowest value in that hash bucket, and to keep it simple I'm only interested in that single column.
I've set up a dummy table with:
create table my_table (id number, part_date date)
partition by range (part_date)
subpartition by hash (id)
subpartition template (
subpartition subpart_1,
subpartition subpart_2,
subpartition subpart_3)
(
partition part_1 values less than (date '2015-01-01')
);
insert into my_table values (1, date '2014-12-29');
insert into my_table values (2, date '2014-12-30');
insert into my_table values (4, date '2014-12-31');
Looking at how the data is distributed, I have nothing in the first subpartition, IDs 1 and 4 in the second, and ID 2 in the third.
That means I can do:
set serveroutput on
declare
l_id my_table.id%type;
begin
for i in (
select table_name, subpartition_name
from user_tab_subpartitions
where table_name = 'MY_TABLE'
order by table_name, subpartition_position
) loop
-- skipping because I don't have that table
-- insert into bkp_part (partition_name) values (i.subpartition_name);
execute immediate 'select min(id) from ' || i.table_name
|| ' subpartition(' || i.subpartition_name || ')'
into l_id;
-- just for debugging/demo
dbms_output.put_line('Subpartition ' || i.subpartition_name
|| ' minimum ID is ' || l_id);
-- do something else with the value; update or insert...
-- update bkp_part set min_id = l_id
-- where partition_name = i.subpartition_name;
end loop;
end;
/
anonymous block completed
Subpartition PART_1_SUBPART_1 minimum ID is
Subpartition PART_1_SUBPART_2 minimum ID is 1
Subpartition PART_1_SUBPART_3 minimum ID is 2
You can adapt that to get the columns you're interested in and do something more useful with them (like inserting into or updating your bkp_part table).
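For example, to also cover the 'No value' comment for empty subpartitions from the question, something like this would do (a sketch only; it assumes bkp_part has a second column, here called first_value, to hold the result - adjust the names to your real table):
declare
  l_id my_table.id%type;
begin
  for i in (
    select table_name, subpartition_name
    from user_tab_subpartitions
    where table_name = 'MY_TABLE'
    order by table_name, subpartition_position
  ) loop
    execute immediate 'select min(id) from ' || i.table_name
      || ' subpartition(' || i.subpartition_name || ')'
      into l_id;
    -- 'No value' for empty subpartitions, the minimum ID otherwise
    insert into bkp_part (partition_name, first_value)
    values (i.subpartition_name, nvl(to_char(l_id), 'No value'));
  end loop;
end;
/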

PLSQL Insert into with subquery and returning clause

I can't figure out the correct syntax for the following pseudo-sql:
INSERT INTO some_table
(column1,
column2)
SELECT col1_value,
col2_value
FROM other_table
WHERE ...
RETURNING id
INTO local_var;
I would like to insert something with the values of a subquery.
After inserting I need the new generated id.
Here's what the Oracle documentation says:
Insert Statement
Returning Into
OK, I think it is not possible; the RETURNING clause only works with the VALUES form...
Is there an alternative?
You cannot use RETURNING BULK COLLECT from an INSERT.
This methodology can work with updates and deletes, however:
create table test2(aa number)
/
insert into test2(aa)
select level
from dual
connect by level<100
/
set serveroutput on
declare
TYPE t_Numbers IS TABLE OF test2.aa%TYPE
INDEX BY BINARY_INTEGER;
v_Numbers t_Numbers;
v_count number;
begin
update test2
set aa = aa+1
returning aa bulk collect into v_Numbers;
for v_count in 1..v_Numbers.count loop
dbms_output.put_line('v_Numbers := ' || v_Numbers(v_count));
end loop;
end;
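The same pattern works for a DELETE; a quick sketch against the same test2 table:
declare
  TYPE t_Numbers IS TABLE OF test2.aa%TYPE INDEX BY BINARY_INTEGER;
  v_Numbers t_Numbers;
begin
  delete from test2
  where aa > 90
  returning aa bulk collect into v_Numbers;
  dbms_output.put_line(v_Numbers.count || ' rows deleted');
end;
/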
You can get it to work with a few extra steps (doing a FORALL INSERT utilizing TREAT)
as described in this article:
returning with insert..select
To utilize the example they create and apply it to the test2 test table:
CREATE or replace TYPE ot AS OBJECT
( aa number);
/
CREATE TYPE ntt AS TABLE OF ot;
/
set serveroutput on
DECLARE
nt_passed_in ntt;
nt_to_return ntt;
FUNCTION pretend_parameter RETURN ntt IS
nt ntt;
BEGIN
SELECT ot(level) BULK COLLECT INTO nt
FROM dual
CONNECT BY level <= 5;
RETURN nt;
END pretend_parameter;
BEGIN
nt_passed_in := pretend_parameter();
FORALL i IN 1 .. nt_passed_in.COUNT
INSERT INTO test2(aa)
VALUES
( TREAT(nt_passed_in(i) AS ot).aa
)
RETURNING ot(aa)
BULK COLLECT INTO nt_to_return;
FOR i IN 1 .. nt_to_return.COUNT LOOP
DBMS_OUTPUT.PUT_LINE(
'Sequence value = [' || TO_CHAR(nt_to_return(i).aa) || ']'
);
END LOOP;
END;
/
Unfortunately that's not possible. RETURNING is only available for INSERT...VALUES statements. See this Oracle forum thread for a discussion of this subject.
You can't, BUT at least in Oracle 19c, you can specify a SELECT subquery inside the VALUES clause and so use RETURNING! This can be a good workaround, even if you may have to repeat the WHERE clause for every field:
INSERT INTO some_table
(column1,
column2)
VALUES((SELECT col1_value FROM other_table WHERE ...),
(SELECT col2_value FROM other_table WHERE ...))
RETURNING id
INTO local_var;
Because the insert is based on a select, Oracle is assuming that you are permitting a multiple-row insert with that syntax. In that case, look at the multiple row version of the returning clause document as it demonstrates that you need to use BULK COLLECT to retrieve the value from all inserted rows into a collection of results.
After all, if your insert query creates two rows - which returned value would it put into a single variable?
EDIT - Turns out this doesn't work as I had thought.... darn it!
This isn't as easy as you may think, and certainly not as easy as it is using MySQL. Oracle doesn't keep track of the last inserts in a way that you can ping back the result.
You will need to work out some other way of doing this; you can do it using ROWID - but this has its pitfalls.
This link discussed the issue: http://forums.oracle.com/forums/thread.jspa?threadID=352627