Using the updated item in a trigger - sql

I am trying to create an SQL trigger for when the quantity on hand (qoh) of an item in an Oracle 12c database falls below 5. I want to select the description of that item from another table, and have come up with the following query, but I am getting an error when I try to run it:
/*Creates a trigger to notify someone when an item is out of stock*/
CREATE OR REPLACE TRIGGER ItemOutOfStock
AFTER UPDATE OF INV_QOH ON inventory
FOR EACH ROW
WHEN (new.INV_QOH < 5)
BEGIN
SELECT I.ITEM_DESC
FROM ITEMS I
WHERE I.ITEM_ID = new.ITEM_ID;
END;
/
From what I have been able to figure out, I should be able to reference new.ITEM_ID, but that isn't the case. When I update the inventory table and set an item's quantity to less than 5 with just a dbms_output.put_line command in the trigger body, the text appears in the output, so I know the problem is somewhere in the SELECT statement.

In PL/SQL, a SELECT statement has to do something with the data it returns. You could, for example, store the data in a local variable declared in your trigger. Something like:
CREATE OR REPLACE TRIGGER ItemOutOfStock
AFTER UPDATE OF INV_QOH ON inventory
FOR EACH ROW
WHEN (new.INV_QOH < 5)
DECLARE
  l_item_desc items.item_desc%type;
BEGIN
  SELECT i.item_desc
    INTO l_item_desc
    FROM items i
   WHERE i.item_id = :new.item_id;
  -- do something with l_item_desc
END;
/
Note that the :new pseudo-record needs to be prefaced with a colon in the trigger body (but not in the WHEN clause).
Be aware as well that sending an email from a trigger is generally a bad idea, since sending an email is non-transactional. A trigger can fire but the transaction can be rolled back, so the email gets sent even though the update was never committed. A trigger can also be executed multiple times for a single change (with a rollback in between) because of write consistency. To do it correctly, you'd really want the trigger to write to a table that a separate process polls periodically in order to send emails, or to submit a job (using dbms_job) that sends an email if and only if the underlying transaction commits.
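For illustration, here is a minimal sketch of the polling-table approach; the pending_stock_alerts table and its columns are invented for this example:

CREATE TABLE pending_stock_alerts (
  item_id    NUMBER,
  item_desc  VARCHAR2(200),
  created_at DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER ItemOutOfStock
AFTER UPDATE OF INV_QOH ON inventory
FOR EACH ROW
WHEN (new.INV_QOH < 5)
DECLARE
  l_item_desc items.item_desc%type;
BEGIN
  SELECT i.item_desc
    INTO l_item_desc
    FROM items i
   WHERE i.item_id = :new.item_id;
  -- The queued row only becomes visible to the polling process once
  -- the triggering transaction commits, so nothing is sent for updates
  -- that are rolled back.
  INSERT INTO pending_stock_alerts (item_id, item_desc)
  VALUES (:new.item_id, l_item_desc);
END;
/

A separate job (dbms_scheduler, or an external process) would then periodically drain pending_stock_alerts and send the emails outside the transaction.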


Getting newest not handled, updated rows using txid

I have a table in my PostgreSQL database (actually it's multiple tables, but for the sake of simplicity let's assume it's just one) and multiple clients that periodically need to query the table to find changed items. These are updated or inserted items (deleted items are handled by first marking them for deletion and then actually deleting them after a grace period).
Now the obvious solution would be to just keep a "modified" timestamp column for each row, remember it for each client, and then simply fetch the changed ones:
SELECT * FROM the_table WHERE modified > saved_modified_timestamp;
The modified column would then be kept up to date using triggers like:
CREATE FUNCTION update_timestamp()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
  NEW.modified := NOW();
  RETURN NEW;
END;
$$;
CREATE TRIGGER update_timestamp_update
BEFORE UPDATE ON the_table
FOR EACH ROW EXECUTE PROCEDURE update_timestamp();
CREATE TRIGGER update_timestamp_insert
BEFORE INSERT ON the_table
FOR EACH ROW EXECUTE PROCEDURE update_timestamp();
The obvious problem here is that NOW() is the time the transaction started. So it might happen that a transaction is not yet committed while fetching the updated rows, and when it is committed, its timestamp is lower than saved_modified_timestamp, so the update is never registered.
I think I found a solution that would work and I wanted to see if you can find any flaws with this approach.
The basic idea is to use xmin (or rather txid_current()) instead of the timestamp, and then when fetching the changes, wrap them in an explicit transaction with REPEATABLE READ and read txid_current_snapshot() (or rather the three values it contains: txid_snapshot_xmin(), txid_snapshot_xmax(), txid_snapshot_xip()) from the transaction.
If I read the Postgres documentation correctly, then all changes made by transactions that are < txid_snapshot_xmax() and not in txid_snapshot_xip() should be visible in that fetch transaction. This information should then be all that is required to get all the updated rows when fetching again. The select would then look like this, with xmin_version replacing the modified column:
SELECT * FROM the_table
WHERE xmin_version >= last_fetch_txid_snapshot_xmax
   OR xmin_version = ANY (last_fetch_txid_snapshot_xip);
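For concreteness, a sketch of one fetch cycle; :last_xmax and :last_xip are placeholders for the values saved from the previous cycle, not real syntax:

BEGIN ISOLATION LEVEL REPEATABLE READ;

-- Capture this cycle's snapshot bounds; persist them for the next cycle.
SELECT txid_snapshot_xmax(txid_current_snapshot()) AS next_xmax,
       array(SELECT txid_snapshot_xip(txid_current_snapshot())) AS next_xip;

-- Fetch everything that was not yet visible in the previous cycle's snapshot.
SELECT * FROM the_table
WHERE xmin_version >= :last_xmax
   OR xmin_version = ANY (:last_xip);

COMMIT;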
The triggers would then be simply like this:
CREATE FUNCTION update_xmin_version()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
  NEW.xmin_version := txid_current();
  RETURN NEW;
END;
$$;
CREATE TRIGGER update_timestamp_update
BEFORE UPDATE ON the_table
FOR EACH ROW EXECUTE PROCEDURE update_xmin_version();

CREATE TRIGGER update_timestamp_insert
BEFORE INSERT ON the_table
FOR EACH ROW EXECUTE PROCEDURE update_xmin_version();
Would this work? Or am I missing something?
Thank you for the clarification about the 64-bit return from txid_current() and how the epoch rolls over. I am sorry I confused that epoch counter with the time epoch.
I cannot see any flaw in your approach but would verify through experimentation that having multiple client sessions concurrently in repeatable read transactions taking the txid_snapshot_xip() snapshot does not cause any problems.
I would not use this method in practice, because I assume the client code will need to handle duplicate reads of the same change (insert/update/delete) anyway, as well as periodic reconciliation between the database contents and the client's working set to handle drift due to communication failures or client crashes. Once that code is written, using now() in the client tracking table, clock_timestamp() in the triggers, and a grace-interval overlap when the client pulls changesets would work for the use cases I have encountered.
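A minimal sketch of that timestamp-based alternative, reusing the the_table.modified column from the question; the five-minute grace interval and the :last_pull placeholder are invented:

CREATE OR REPLACE FUNCTION update_timestamp()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
  -- clock_timestamp() is the wall-clock time of this statement,
  -- not the transaction start time that now() reports.
  NEW.modified := clock_timestamp();
  RETURN NEW;
END;
$$;

-- Client pull, overlapping the previous pull by a grace interval so rows
-- committed by transactions in flight at that time are not missed
-- (duplicates are then filtered client-side):
SELECT * FROM the_table
WHERE modified > :last_pull - interval '5 minutes';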
If requirements called for stronger real-time integrity than that, then I would recommend a distributed commit strategy.
OK, so I've tested it in depth now and haven't found any flaws so far. I had around 30 clients writing to and reading from the database at the same time, and all of them got consistent updates. So I guess this approach works.

Use new value in after trigger sql statement

I am trying to create a trigger that will dump out a CSV file of data at a certain point in an application. When you create a payment order from a payment proposal, it means the order is ready to be paid and uploaded to the bank. There is a wizard in the ERP that makes the payment order from the payment proposal. There is also a header and a detail table for both proposals and orders. I need the trigger to fire when there is a new row in the payment order table that has new.way_id = 'ACH' and new.institute_id = 'BMO'.
The problem is that the wizard has multiple steps and doesn't insert the detail rows for the order until the last step when you click OK, but the header is created before this, and that executes the trigger. Because of this, when the header is created I am going to pull all of the data from the proposal header and detail, because it is all already there.
When the trigger executes, it builds a SQL statement and then passes it to a stored procedure that will take any SQL query and dump it to a CSV file. For some reason it won't let me use the new reference when I create my query; I get an error saying "new.selected_proposals invalid identifier". I also need it to do a LIKE on this, because you can select multiple proposal header IDs when you create an order, and I only want it to run for the proposals that have a way_id of ACH.
I am guessing I have to add the new pseudo-table, whatever it is, into the join or something, but I am not sure how to do that.
This is Oracle database 11g. Here is the code. The commented out section in the query is what I am trying to fix, just to give an idea of what I am trying to do.
create or replace TRIGGER CREATE_BMO_ACH_FILE
AFTER INSERT ON PAYMENT_ORDER_TAB
FOR EACH ROW
WHEN (new.way_id = 'ACH' and new.institute_id = 'BMO')
DECLARE
  sql_ varchar2(4000);
BEGIN
  sql_ := q'[select pp.company, pp.Proposal_id, pp.CREATION_DATE, pl.identity, pl.payee_identity, pl.ledger_item_id, pl.currency, pl.curr_amount, pl.GROSS_PAYMENT_AMOUNT, pl.PLANED_PAYMENT_DATE, pl.Order_Reference, pl.PAYMENT_REFERENCE
           from payment_proposal pp, PROPOSAL_LEDGER_ITEM pl
           where pp.company = pl.company
           and pp.proposal_id = pl.proposal_id
           and pp.way_id = 'ACH'
           /*and pp.proposal_id like '%' || new.selected_proposals || '%'*/]';
  dump_sql_to_csv(sql_, 'E:\Accounting', 'test.csv');
END;
I think you're just missing a colon in front of the new, i.e. use :new.selected_proposals. Colons aren't required for old and new in the WHEN clause, but they are in the code block.
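Note too that in this code the predicate sits inside a q'[...]' string literal that is only executed later by dump_sql_to_csv, where :new no longer exists; a sketch of one way around that is to concatenate the value into the string as it is built:

sql_ := q'[select pp.company, pp.proposal_id
           from payment_proposal pp, PROPOSAL_LEDGER_ITEM pl
           where pp.company = pl.company
           and pp.proposal_id = pl.proposal_id
           and pp.way_id = 'ACH'
           and pp.proposal_id like '%]' || :new.selected_proposals || q'[%']';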

Trigger that inserts into a table over a dblink

I am trying to write a trigger that selects values from some tables and then inserts them into another table.
So, for now I have this. There are a lot of columns, so I haven't copied them; they are only VARCHAR2 values, and that part works, so I don't think it is useful:
create or replace
TRIGGER TRIGGER_FICHE
AFTER INSERT ON T_AG
BEGIN
  INSERT INTO t_ag_hab@DBLINK_DEV
  ()
  values
  ();
  /*commit;*/
END;
Stored procedure from which the trigger will be fired (again, a lot of parameters, not relevant to copy them):
INSERT INTO T_AG()
VALUES
();
commit work;
The thing is, we cannot commit inside a trigger; I read that, and understand it.
But how can I see an update of my table, with the new value?
When the process is running, there is no error, but I don't see the new line in t_ag_hab.
I know it's not very clear, but I don't know how to explain it another way.
How can I fix this?
Because you're inserting into a remote table via a database link, you have a distributed transaction:
... distributed transaction processing is more complicated because the database must coordinate the committing or rolling back of the changes in a transaction as an atomic unit. The entire transaction must commit or roll back.
When you commit, you're committing both the local insert and the remote insert performed by your trigger, as an atomic unit. You cannot commit one without the other, and you don't have to do anything extra to commit the remote change:
The two-phase commit mechanism is transparent to users who issue distributed transactions. In fact, users need not even know the transaction is distributed. A COMMIT statement denoting the end of a transaction automatically triggers the two-phase commit mechanism. No coding or complex statement syntax is required to include distributed transactions within the body of a database application.
If you can't see the inserted data from the remote database afterwards then something else has deleted it after the commit, or more likely you're looking at the wrong database.
One slight downside (though also a feature) of a database link is that it hides the details of where the work is being done. You can drop and recreate a link to make your code update a different target database without having to modify the code itself. But that means your code doesn't know where the insert is actually going - you'd need to check the data dictionary to see where the link is pointing. And even then you might not know as the link can be using a TNS alias to identify the database, and changes to the tnsnames.ora aren't visible from within the database.
If you can see the data after committing by querying t_ag_hab@dblink_dev from the same database where you ran your procedure, but you can't see it when querying locally from the database you expect the link to be pointing to, then the link isn't pointing where you think it is. The insert is going to one database, and you are performing your query against a different one. Only you can decide which is the 'correct' database though; either redefine the link (or TNS entry, if appropriate), or change where you're doing the query.
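As a quick way to check where the link actually points, you can query the data dictionary (the link name here is the one from the question):

SELECT db_link, username, host
FROM all_db_links
WHERE db_link = 'DBLINK_DEV';

The HOST column shows the connect string or TNS alias; as noted above, an alias still has to be resolved against the server's tnsnames.ora to know the real target.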
I am not able to understand your requirement clearly. For updating records in the main table and inserting the old records into an audit table, we can use the below trigger (MS SQL Server):
CREATE TRIGGER trg_update ON T_AGENT
AFTER UPDATE AS
BEGIN
  UPDATE Tab1
  SET COL1 = I.COL1, COL2 = I.COL2
  FROM INSERTED I INNER JOIN Tab1 ON I.COL3 = Tab1.Col3;
  INSERT Tab1_Audit (COL1, COL2, COL3)
  SELECT COL1, COL2, COL3 FROM DELETED;
  RETURN;
END;
So far what you presented is just an insert trigger. If you want to act on the update action as well, try adding UPDATE as in this example:
CREATE OR REPLACE TRIGGER validate_update
AFTER INSERT OR UPDATE ON T_AGENT
FOR EACH ROW
BEGIN
  IF UPDATING('ACCOUNT_ID') THEN -- do something like this when updating
    DBMS_OUTPUT.put_line('ERROR'); -- add your action here
  ELSIF INSERTING THEN
    INSERT INTO t_ag_hab@DBLINK_DEV () VALUES ();
  END IF;
END;
/

Autonomous transaction trigger only executes once in the same session

I have an autonomous transaction trigger, but it only executes once in the same session; after that it does nothing:
CREATE OR REPLACE TRIGGER tdw_insert_unsus
BEFORE INSERT ON unsuscription_fact FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
  v_id_suscription        SUSCRIPTION_FACT.ID_SUSCRIPTION%TYPE;
  v_id_date_suscription   SUSCRIPTION_FACT.ID_DATE_SUSCRIPTION%TYPE;
  v_id_date_unsuscription SUSCRIPTION_FACT.ID_DATE_UNSUSCRIPTION%TYPE;
  v_suscription           DATE;
  v_unsuscription         DATE;
  v_live_time             SUSCRIPTION_FACT.LIVE_TIME%TYPE;
BEGIN
  SELECT id_suscription, id_date_suscription
    INTO v_id_suscription, v_id_date_suscription
    FROM (
      SELECT id_suscription, id_date_suscription
        FROM suscription_fact
       WHERE id_mno = :NEW.ID_MNO
         AND id_provider = :NEW.ID_PROVIDER
         AND ftp_service_id = :NEW.FTP_SERVICE_ID
         AND msisdn = :NEW.MSISDN
         AND id_date_unsuscription IS NULL
       ORDER BY id_date_suscription DESC
    )
   WHERE ROWNUM = 1;
  -- calculate time
  v_unsuscription := to_date(:NEW.id_date_unsuscription, 'yyyymmdd');
  v_suscription   := to_date(v_id_date_suscription, 'yyyymmdd');
  v_live_time     := (v_unsuscription - v_suscription);
  UPDATE suscription_fact
     SET id_date_unsuscription = :NEW.id_date_unsuscription,
         id_time_unsuscription = :NEW.id_time_unsuscription,
         live_time = v_live_time
   WHERE id_suscription = v_id_suscription;
  COMMIT;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    ROLLBACK;
END;
/
If I insert values it works well the first or second time, but after that it does not work. If I log out of the session and log back in, it works again for the first or second insertion.
What is the problem? I am using Oracle 10g.
You're using an autonomous transaction to work around the fact that a trigger cannot query its own table. You've run into the infamous mutating table error (ORA-04091) and you have found that declaring the trigger as an autonomous transaction makes the error go away.
No luck for you though, this does not solve the problem at all:
First, any transaction logic is lost. You can't roll back the changes on the suscription_fact table: they are committed, while your main transaction is not and could still be rolled back. So you've also lost your data integrity.
Second, the trigger cannot see the new row because the new row hasn't been committed yet! Since the trigger runs in an independent transaction, it cannot see the uncommitted changes made by the main transaction: you will run into completely wrong results.
This is why you should never do any business logic in autonomous transactions. (There are legitimate applications, but they are almost entirely limited to logging/debugging.)
In your case you should either:
1. Update your logic so that it does not need to query your table (updating suscription_fact only if the new row is more recent than the old value stored in id_date_unsuscription).
2. Forget about using business logic in triggers and use a procedure that updates all tables correctly, or use a view, because here we have a clear case of redundant data.
3. Use a workaround that actually works (by Tom Kyte).
I would strongly advise using (2) here. Don't use triggers to code business logic. They are hard to write without bugs and harder still to maintain. Using a procedure guarantees that all the relevant code is grouped in one place (a package or a procedure), easy to read and follow and without unforeseen consequences.
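A minimal sketch of option (2), assuming the column lists shown in the question are the relevant ones; the procedure name and parameter list are invented for illustration. Both statements run in the caller's transaction, so they commit or roll back together:

CREATE OR REPLACE PROCEDURE record_unsuscription (
  p_id_mno         suscription_fact.id_mno%TYPE,
  p_id_provider    suscription_fact.id_provider%TYPE,
  p_ftp_service_id suscription_fact.ftp_service_id%TYPE,
  p_msisdn         suscription_fact.msisdn%TYPE,
  p_id_date_unsus  suscription_fact.id_date_unsuscription%TYPE,
  p_id_time_unsus  suscription_fact.id_time_unsuscription%TYPE
) AS
  v_id_suscription      suscription_fact.id_suscription%TYPE;
  v_id_date_suscription suscription_fact.id_date_suscription%TYPE;
BEGIN
  -- Find the most recent open subscription, as the trigger did.
  SELECT id_suscription, id_date_suscription
    INTO v_id_suscription, v_id_date_suscription
    FROM (SELECT id_suscription, id_date_suscription
            FROM suscription_fact
           WHERE id_mno = p_id_mno
             AND id_provider = p_id_provider
             AND ftp_service_id = p_ftp_service_id
             AND msisdn = p_msisdn
             AND id_date_unsuscription IS NULL
           ORDER BY id_date_suscription DESC)
   WHERE ROWNUM = 1;

  -- Close it and record the lifetime in days.
  UPDATE suscription_fact
     SET id_date_unsuscription = p_id_date_unsus,
         id_time_unsuscription = p_id_time_unsus,
         live_time = to_date(p_id_date_unsus, 'yyyymmdd')
                     - to_date(v_id_date_suscription, 'yyyymmdd')
   WHERE id_suscription = v_id_suscription;

  -- Insert the row that the trigger used to react to.
  INSERT INTO unsuscription_fact (id_mno, id_provider, ftp_service_id,
                                  msisdn, id_date_unsuscription,
                                  id_time_unsuscription)
  VALUES (p_id_mno, p_id_provider, p_ftp_service_id,
          p_msisdn, p_id_date_unsus, p_id_time_unsus);
END;
/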

Solving the mutating table problem in Oracle SQL produces a deadlock

Hey, I'm trying to create a trigger in my Oracle database that sets all other records to 0, except the one that has just been changed and fired the trigger. Because I am updating records in the same table as the one that fired the trigger, I got the mutating table error. To solve this, I put the code in an autonomous transaction; however, this causes a deadlock.
Trigger code:
CREATE OR REPLACE TRIGGER check_thumbnail
AFTER INSERT OR UPDATE OF thumbnail ON photograph
FOR EACH ROW
BEGIN
  IF :new.thumbnail = 1 THEN
    check_thumbnail_set_others(:new.url);
  END IF;
END;
Procedure code:
CREATE OR REPLACE PROCEDURE check_thumbnail_set_others(p_url IN VARCHAR2)
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  UPDATE photograph SET thumbnail = 0 WHERE url <> p_url;
  COMMIT;
END;
I assume I'm causing a deadlock because the trigger is launching itself within itself. Any ideas?
Using an autonomous transaction for this sort of thing is almost certainly a mistake. What happens if the transaction that inserted the new thumbnail needs to roll back? You've already committed the change to the other rows in the table.
If you want the data to be transactionally consistent, you would need multiple triggers and some way of storing state. The simplest option would be to create a package with a collection of photograph.url%type, then create three triggers on the table. A before statement trigger would clear out the collection. A row-level trigger would insert the :new.url value into the collection. An after statement trigger would then read the values from the collection and call the check_thumbnail_set_others procedure (which would not be an autonomous transaction). A sketch of that arrangement follows.
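A hedged sketch of the three-trigger pattern; the package and trigger names are invented, and the update is inlined rather than calling check_thumbnail_set_others, but the shape is the same:

CREATE OR REPLACE PACKAGE thumbnail_state AS
  TYPE url_tab IS TABLE OF photograph.url%TYPE;
  g_urls url_tab := url_tab();
END thumbnail_state;
/

CREATE OR REPLACE TRIGGER thumb_before_stmt
BEFORE INSERT OR UPDATE OF thumbnail ON photograph
BEGIN
  thumbnail_state.g_urls := thumbnail_state.url_tab(); -- clear state
END;
/

CREATE OR REPLACE TRIGGER thumb_each_row
AFTER INSERT OR UPDATE OF thumbnail ON photograph
FOR EACH ROW
BEGIN
  IF :new.thumbnail = 1 THEN
    thumbnail_state.g_urls.EXTEND;
    thumbnail_state.g_urls(thumbnail_state.g_urls.LAST) := :new.url;
  END IF;
END;
/

CREATE OR REPLACE TRIGGER thumb_after_stmt
AFTER INSERT OR UPDATE OF thumbnail ON photograph
DECLARE
  -- Copy the state locally: the update below re-fires the statement
  -- triggers, and the recursive firing clears the package collection.
  l_urls thumbnail_state.url_tab := thumbnail_state.g_urls;
BEGIN
  -- A statement-level trigger may update its own table without raising
  -- the mutating table error, and it runs in the same transaction as
  -- the triggering DML, so everything commits or rolls back together.
  FOR i IN 1 .. l_urls.COUNT LOOP
    UPDATE photograph
       SET thumbnail = 0
     WHERE url <> l_urls(i)
       AND thumbnail <> 0;
  END LOOP;
END;
/

The thumbnail <> 0 filter keeps the recursive firing from doing redundant work: the second round finds no rows with thumbnail = 1 to collect, so the recursion stops after one level.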