Trigger after insert to check and compare records between tables - SQL

In an Oracle database, I need to create a trigger or procedure to handle this case in the most performant way possible (it is an extremely large amount of data).
I have a table called ORDER_A that receives a full load every day (it is truncated, and all records are inserted again).
I have a table called ORDER_B which is a copy of ORDER_A, containing the same data and some additional control dates.
Each insertion on ORDER_A must trigger a process that looks for a record with the same identifier (primary key: order_id) in table B.
If a record exists with the same order_id and any of the other columns have changed, an update must be performed on table B.
If a record exists with the same order_id and none of the other columns have been modified, nothing should be done; the record must remain unchanged in table B.
If there is no record with the same order_id, it must be inserted into table B.
My tables are like this
CREATE TABLE ORDER_A
(
ORDER_ID NUMBER NOT NULL,
ORDER_CODE VARCHAR2(50),
ORDER_STATUS VARCHAR2(20),
ORDER_USER_ID NUMBER,
ORDER_DATE TIMESTAMP(6),
PRIMARY KEY (ORDER_ID)
);
CREATE TABLE ORDER_B
(
ORDER_ID NUMBER NOT NULL,
ORDER_CODE VARCHAR2(50),
ORDER_STATUS VARCHAR2(20),
ORDER_USER_ID NUMBER,
ORDER_DATE TIMESTAMP(6),
INSERT_AT TIMESTAMP(6),
UPDATED_AT TIMESTAMP(6),
PRIMARY KEY (ORDER_ID)
);
I have no idea how to do this or what the best way is (a trigger, a procedure, using MERGE, etc.).
Can someone give me a direction, please?

Here is some pseudo-code to show you a potential trigger-based solution that does not fall back into slow row-by-row processing.
create or replace trigger mytrg
for insert or update or delete on ordera
compound trigger
pklist sys.odcinumberlist;
before statement is
begin
pklist := sys.odcinumberlist();
end before statement ;
after each row is
begin
pklist.extend;
pklist(pklist.count) := :new.order_id;
end after each row;
after statement is
begin
merge into orderb b
using (
select a.*
from ordera a,
table(pklist) t
where a.order_id = t.column_value
) m
on (b.order_id = m.order_id)
when matched then
update
set b.order_code = m.order_code,
b.order_status = m.order_status,
...
where decode(b.order_code,m.order_code,0,1)=1
or decode(b.order_status,m.order_status,0,1)=1
....
when not matched then
insert (b.order_id,b.order_code,....)
values (m.order_id,m.order_code,....);
end after statement ;
end;
We hold the impacted primary keys, and then build a single MERGE later, with an embedded WHERE clause to minimise unnecessary update activity.
If your application allows updates to primary keys, you'd need some additions, but this should get you started.
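For completeness, since ORDER_A is fully reloaded each day, another option the question mentions is to skip the trigger and run a single MERGE after the load finishes (for example from a scheduled procedure). A rough sketch against the sample tables, stamping the control dates and only touching rows that actually changed (DECODE treats two NULLs as equal, so unchanged NULL columns do not force an update):
merge into order_b b
using order_a a
on (b.order_id = a.order_id)
when matched then
update
set b.order_code = a.order_code,
b.order_status = a.order_status,
b.order_user_id = a.order_user_id,
b.order_date = a.order_date,
b.updated_at = systimestamp
where decode(b.order_code, a.order_code, 0, 1) = 1
or decode(b.order_status, a.order_status, 0, 1) = 1
or decode(b.order_user_id, a.order_user_id, 0, 1) = 1
or decode(b.order_date, a.order_date, 0, 1) = 1
when not matched then
insert (b.order_id, b.order_code, b.order_status, b.order_user_id, b.order_date, b.insert_at)
values (a.order_id, a.order_code, a.order_status, a.order_user_id, a.order_date, systimestamp);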

Related

How to implement AFTER INSERT Trigger in Oracle PL/SQL?

I am trying to implement an AFTER INSERT trigger in PL/SQL. The goal is to check whether there are multiple (>1) rows having a specific status for each client. If so, I'd like to raise an exception and roll the insertion back.
I am struggling to write a warning-free query that causes an error during the insertion. How could I manage this?
Here is my implemented trigger which I guess needs some changes.
CREATE TRIGGER blatrigger
AFTER INSERT
ON BLATABLE
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
DECLARE
exception_name EXCEPTION;
PRAGMA EXCEPTION_INIT (exception_name, -20999);
BEGIN
if (select count(*) as counter from BLATABLE where CLIENTID = :NEW.CLIENTID and STATUS='PENDING').counter > 1
THEN
raise exception_name;
END IF;
END;
Here is the table itself:
create table BLATABLE
(
ID NUMBER(19) not null primary key,
CLIENTID NUMBER(10),
CREATED TIMESTAMP(6),
STATUS VARCHAR2(255 char)
);
The goal is to check if there are multiple (>1) rows having a specific status for each client. If so, I'd like to raise an exception and roll the insertion back.
No need for a trigger. It looks like a simple unique constraint should get the job done here:
create table blatable (
id number(19) not null primary key,
clientid number(10),
created timestamp(6),
status varchar2(255 char),
constraint blaconstraint unique (clientid, status)
);
The unique constraint prevents duplicates on (clientid, status) across the whole table. If a DML operation (insert, update) attempts to generate a duplicate, an error is raised and the operation is rolled back.
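For example, with made-up data, the second insert below would be rejected because the (clientid, status) pair already exists:
insert into blatable (id, clientid, created, status) values (1, 100, systimestamp, 'PENDING');
insert into blatable (id, clientid, created, status) values (2, 100, systimestamp, 'PENDING');
-- the second statement fails with ORA-00001 (unique constraint violated)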
If, on the other hand, you want to allow only one "PENDING" status per client, then you can use a function-based unique index as follows:
create unique index bla_index
on blatable( (case when status = 'PENDING' then clientid end) );
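Only rows with status 'PENDING' produce a non-NULL index key, so any other status can repeat freely. A quick sketch with made-up data (assuming the table was created with this index instead of the composite unique constraint above):
insert into blatable (id, clientid, created, status) values (1, 100, systimestamp, 'DONE');
insert into blatable (id, clientid, created, status) values (2, 100, systimestamp, 'DONE');    -- allowed: non-PENDING rows are not indexed
insert into blatable (id, clientid, created, status) values (3, 100, systimestamp, 'PENDING'); -- allowed: first PENDING for client 100
insert into blatable (id, clientid, created, status) values (4, 100, systimestamp, 'PENDING'); -- fails with ORA-00001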
Use a statement-level trigger rather than a row-level one by removing FOR EACH ROW, and convert your code as below:
CREATE OR REPLACE TRIGGER blatrigger
AFTER INSERT ON BLATABLE
REFERENCING NEW AS NEW OLD AS OLD
DECLARE
counter INT;
exception_name EXCEPTION;
PRAGMA EXCEPTION_INIT(exception_name, -20999);
BEGIN
SELECT MAX(COUNT(*))
INTO counter
FROM BLATABLE
WHERE STATUS = 'PENDING'
GROUP BY CLIENTID;
IF counter > 1 THEN
RAISE exception_name;
END IF;
END;
/
where
the SELECT statement needs to be moved out of the IF .. THEN condition
Most probably, a mutating-table error (ORA-04091) would be raised in the row-level trigger case

PL/SQL: NO DATA FOUND while updating another table based on conditions

So I have a column on my PAYMENT table called Status. The table has a foreign key, Reservation_ID, referencing another table called RESERVATION. The RESERVATION table also has a status column, and it should only get updated when there is a value in the status column of the PAYMENT table. So if my status field in the PAYMENT table has the value "Confirmed", the status for that particular Reservation_ID is supposed to turn to 1; otherwise 22. This is how I made the trigger:
CREATE OR REPLACE TRIGGER stats BEFORE INSERT OR DELETE OR UPDATE ON PAYMENT FOR EACH ROW
DECLARE
V_STATUS VARCHAR2(20);
BEGIN
SELECT Status INTO V_STATUS FROM PAYMENT INNER JOIN RESERVATION ON PAYMENT.Reservation_ID=RESERVATION.Reservation_ID WHERE PAYMENT.Reservation_ID=:NEW.Reservation_ID;
IF INSERTING AND V_STATUS='CONFIRMED' THEN
UPDATE RESERVATION SET status=1 WHERE Reservation_ID=:new.Reservation_ID;
ELSIF UPDATING AND V_STATUS='CONFIRMED' THEN
UPDATE RESERVATION SET status=1 WHERE Reservation_ID=:new.Reservation_ID;
ELSE
UPDATE RESERVATION SET status=22 WHERE Reservation_ID=:new.Reservation_ID;
END IF;
END;
So the trigger compiles, but when I try inserting values into the PAYMENT table, I get the following error:
Error report -
ORA-01403: no data found
ORA-06512: at "ME.STATS", line 4
ORA-04088: error during execution of trigger 'ME.STATS'
CREATE statements for both tables:
CREATE TABLE RESERVATION(
RESERVATION_id NUMBER(10) NOT NULL,
MEMBER_ID NUMBER(10) CONSTRAINT RE_MEM_fk REFERENCES MEMBER(MEMBER_ID) ON DELETE SET NULL,
status NUMBER(10) CONSTRAINT RES_status_fk REFERENCES STATUS(RESERVATION_status_id) ON DELETE SET NULL,
CONSTRAINT PK_BOOK PRIMARY KEY(RESERVATION_id)
);
CREATE TABLE PAYMENT(
Payment_ID NUMBER(10) NOT NULL,
RESERVATION_id NUMBER(10) CONSTRAINT Pay_RES_fk REFERENCES RESERVATION(RESERVATION_id) ON DELETE SET NULL,
TicketPrice NUMBER(10),
ExtraFaciliFees NUMBER(10),
TOTAL_AMOUNT NUMBER(10),
PromotionalCode VARCHAR2(10),
CONSTRAINT PK_PAY PRIMARY KEY(Payment_ID)
);
First :
CREATE TABLE RESERVATION(
Status NUMBER(10));
SELECT Status INTO V_STATUS
IF INSERTING AND V_STATUS='CONFIRMED'
Can you explain how you expect a NUMBER to match a string?
Next (from http://www.dba-oracle.com/sf_ora_01403_no_data_found.htm )
SELECT INTO clauses are standard SQL queries which pull a row or set
of columns from a database, and put the retrieved data into variables
which have been predefined.
If the SELECT INTO statement doesn't return at least one row,
ORA-01403 is thrown.
So this :
SELECT
Status INTO V_STATUS
FROM PAYMENT p
INNER JOIN RESERVATION r
ON p.Reservation_ID = r.Reservation_ID
WHERE p.Reservation_ID = :NEW.Reservation_ID;
Is likely to output no row at all...
I agree with @Blag; the statement below is raising the "no data found" exception.
In general, if you want to know the exact line the error is pointing to, you can look up the object's source via DBA_SOURCE or ALL_SOURCE.
SELECT
Status INTO V_STATUS
FROM PAYMENT p
INNER JOIN RESERVATION r
ON p.Reservation_ID = r.Reservation_ID
WHERE p.Reservation_ID = :NEW.Reservation_ID;
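One way to sidestep both issues is to stop querying PAYMENT from inside its own trigger: the inserted row's values are already available through :NEW. A rough sketch, assuming PAYMENT really does have a Status column (the posted CREATE statement omits it) and leaving DELETE out because :NEW is not populated for deletes:
CREATE OR REPLACE TRIGGER stats
BEFORE INSERT OR UPDATE ON PAYMENT
FOR EACH ROW
BEGIN
-- no SELECT needed: read the new row's status directly (assumed column)
IF UPPER(:NEW.Status) = 'CONFIRMED' THEN
UPDATE RESERVATION SET status = 1 WHERE Reservation_ID = :NEW.Reservation_ID;
ELSE
UPDATE RESERVATION SET status = 22 WHERE Reservation_ID = :NEW.Reservation_ID;
END IF;
END;
/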

Storing date & time and updater/user of row data in Oracle Apex

Looking for an ideal way to store (1) the date & time of an update and (2) the updater/user per row of table data using Oracle APEX. I am thinking of adding 2 extra columns to store the info, and am trying to come up with a good approach as to how changes per row can be tracked.
If you want to create logs of insert, update, and delete on your table, adding 2 columns is not enough. Each new update would erase the previous one, and deletes couldn't be logged at all. So you need to store a log table separately from the data table, and fill it from triggers created on your data table. If you want a sample I can provide one.
Here is a simplified example; of course, in real life the data will be more complex and the trigger would need to be smarter, but this is a simple starting point to create your own. After executing the code below, try to insert, update, and delete records in table TEST_DATA and see what happens in TEST_LOG.
Create data table
create table TEST_DATA (
UNID number,
COL_B varchar2(50)
);
-- Create/Recreate primary, unique and foreign key constraints
alter table TEST_DATA
add constraint PK_TEST_DATA_UNID primary key (UNID);
Create log table for it
create table TEST_LOG (
UNID number,
OPERATION varchar2(1),
COL_OLD varchar2(50),
COL_NEW varchar2(50),
CHNGUSER varchar2(50),
CHNGDATE date
);
and finally create trigger which tracks changes
create or replace trigger TR_LOG_TEST_DATA
after update or insert or delete on TEST_DATA
referencing new as new old as old
for each row
begin
if Inserting then
insert into TEST_LOG
(UNID, OPERATION, COL_OLD, COL_NEW, CHNGUSER, CHNGDATE)
values
(:new.unid, 'I', null, :new.col_b, user, sysdate);
end if;
if Updating then
insert into TEST_LOG
(UNID, OPERATION, COL_OLD, COL_NEW, CHNGUSER, CHNGDATE)
values
(:new.unid, 'U', :old.col_b, :new.col_b, user, sysdate);
end if;
if Deleting then
insert into TEST_LOG
(UNID, OPERATION, COL_OLD, COL_NEW, CHNGUSER, CHNGDATE)
values
(:old.unid, 'D', :old.col_b, null, user, sysdate);
end if;
end;
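For instance, a quick run against the tables above leaves one 'I', one 'U' and one 'D' row in TEST_LOG, each stamped with the current user and date:
insert into TEST_DATA (UNID, COL_B) values (1, 'first value');
update TEST_DATA set COL_B = 'second value' where UNID = 1;
delete from TEST_DATA where UNID = 1;
commit;
-- TEST_LOG now holds the full history of row 1
select UNID, OPERATION, COL_OLD, COL_NEW, CHNGUSER, CHNGDATE
from TEST_LOG
order by CHNGDATE;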

Restoring a Truncated Table from a Backup

I am restoring the data of a truncated table in an Oracle database from an exported CSV file. However, I find that the primary key auto-increments instead of taking the actual primary key values from the backup file.
I intend to do the following:
1. drop the primary key
2. import the table data
3. add primary key constraints on the required column
Is this a good approach? If not, what is recommended? Thanks.
EDIT: After more investigation, I observed there's a trigger to generate nextval on a sequence to be inserted into the primary key column. This is the source of the predicament. Hence, following the procedure above would not solve the problem. It lies in the trigger (and/or sequence) on the table. This is solved!
It is easier to use your .csv as an external table and then:
create table your_table_temp as select * from your_external_table;
examine the data in the new temp table to ensure you know what range of primary keys is present
do a merge from the temp table into your target table
Sample external table and MERGE statements:
CREATE TABLE countries_ext (
country_code VARCHAR2(5),
country_name VARCHAR2(50),
country_language VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY ext_tab_data
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
(
country_code CHAR(5),
country_name CHAR(50),
country_language CHAR(50)
)
)
LOCATION ('Countries1.txt','Countries2.txt')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
and the merge
MERGE INTO employees e
USING hr_records h
ON (e.id = h.emp_id)
WHEN MATCHED THEN
UPDATE SET e.address = h.address
WHEN NOT MATCHED THEN
INSERT (id, address)
VALUES (h.emp_id, h.address);
Edit: after you have merged the data you can drop the temp table and the result is your previous table with the old data and the new data together
Edit: you mention "During imports, the primary key column does not insert from the file, but auto-increments". This can only happen when there is a trigger on the table, likely a BEFORE INSERT ... FOR EACH ROW trigger. Disable the trigger, then do your import, and re-enable the trigger after committing your inserts.
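A minimal sketch of that sequence, assuming the trigger is named MY_PK_TRIGGER (a made-up name; the real one can be looked up in USER_TRIGGERS):
alter trigger MY_PK_TRIGGER disable;
-- run the CSV import / inserts here, then:
commit;
alter trigger MY_PK_TRIGGER enable;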
I used the following procedure to solve it:
drop trigger trigger_name
import the table data into the target table
drop sequence sequence_name
CREATE SEQUENCE SEQ_NAME INCREMENT BY 1 START WITH start_index_for_next_val MAXVALUE max_val MINVALUE 1 NOCYCLE CACHE 20 NOORDER
CREATE OR REPLACE TRIGGER "schema_name"."trigger_name"
before insert on target_table
for each row
begin
select seq_name.nextval
into :new.unique_column_name
from dual;
end;

Tackling nested inserts using functions

Hi people, I need some help deciding on the best way to do an insert into table ‘shop’, which has a serial id field. I also need to insert into tables ‘shopbranch’ and ‘shopproperties’, which both reference shop.id.
In a nutshell, I need to insert one shop record, then two records into each of the tables shopbranch and shopproperties, whose shop id (FK) references the just-created shop.id field.
I saw somewhere that I could wrap the ‘shop’ insert inside a function called, let's say, ‘insert_shop’, which does the ‘shop’ insert and returns its id using a select statement.
Then, inside another function which inserts the shopproperties and shopbranch records, I could make one call to the insert_shop function to get the shop id, which can then be passed in as the shop id for those records.
Can you let me know if I’m looking at this in the correct way as I’m a newbie.
One way to approach this is to create a view on your three tables that shows all columns from all three tables that can be inserted or updated. If you then create an INSTEAD OF INSERT trigger on the view then you can manipulate the view contents as if it were a table. You can do the same with UPDATE and even combine the two into an INSTEAD OF INSERT OR UPDATE trigger. The function that your trigger calls then has three INSERT statements that redirect the insert on the view to the underlying tables:
CREATE TABLE shop (
id serial PRIMARY KEY,
nm text,
...
);
CREATE TABLE shopbranch (
id serial PRIMARY KEY,
shop integer NOT NULL REFERENCES shop,
branchcode text,
loc text,
...
);
CREATE TABLE shopproperties (
id serial PRIMARY KEY,
shop integer NOT NULL REFERENCES shop,
prop1 text,
prop2 text,
...
);
CREATE VIEW shopdetails AS
SELECT s.*, b.*, p.*  -- note: in practice the duplicate id/shop columns must be aliased or left out
FROM shop s, shopbranch b, shopproperties p
WHERE b.shop = s.id AND p.shop = s.id;
CREATE FUNCTION shopdetails_insert() RETURNS trigger AS $$
DECLARE
shopid integer;
BEGIN
INSERT INTO shop (nm, ...) VALUES (NEW.nm, ...) RETURNING id INTO shopid;
IF NOT FOUND THEN
RETURN NULL;
END IF;
INSERT INTO shopbranch (shop, branchcode, loc, ...) VALUES (shopid, NEW.branchcode, NEW.loc, ...);
INSERT INTO shopproperties(shop, prop1, prop2, ...) VALUES (shopid, NEW.prop1, NEW.prop2, ...);
RETURN NEW;
END; $$ LANGUAGE plpgsql;
CREATE TRIGGER shopdetails_trigger_insert
INSTEAD OF INSERT ON shopdetails
FOR EACH ROW EXECUTE PROCEDURE shopdetails_insert();
You could of course play with the view and show only those columns from the three tables that can be inserted or updated (such as excluding primary and foreign keys).
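As a rough usage sketch, assuming the view is trimmed to just the insertable columns (nm, branchcode, loc, prop1, prop2) as suggested, each insert through the view then creates one shop row plus one row in each child table, with the generated shop.id wired up by the trigger:
INSERT INTO shopdetails (nm, branchcode, loc, prop1, prop2)
VALUES ('My shop', 'BR-001', 'Main Street', 'opens early', 'accepts cards');
-- the INSTEAD OF trigger splits this single row into shop, shopbranch and shopproperties rows
The values are made up; the point is that the application only ever talks to shopdetails and never needs to handle shop.id itself.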