I am trying to write a trigger that selects values from some tables and then inserts them into another table.
So far I have this. There are a lot of columns, so I don't copy them here; they are all VARCHAR2 values, and this part works, so I don't think listing them is useful:
create or replace
TRIGGER TRIGGER_FICHE
AFTER INSERT ON T_AG
BEGIN
declare
begin
INSERT INTO t_ag_hab@DBLINK_DEV
(...) -- column list omitted
values
(...);
/*commit;*/
end;
END;
The stored procedure that performs the insert which fires the trigger (again, a lot of parameters, not relevant to copy them):
INSERT INTO T_AG
(...) -- column list omitted
VALUES
(...);
commit work;
The thing is, we cannot commit inside a trigger; I have read that and I understand it.
But how can I see my table updated with the new values?
When the process runs there is no error, but I don't see the new row in t_ag_hab.
I know it's not very clear, but I don't know how else to explain it.
How can I fix this?
Because you're inserting into a remote table via a database link, you have a distributed transaction:
... distributed transaction processing is more complicated because the database must coordinate the committing or rolling back of the changes in a transaction as an atomic unit. The entire transaction must commit or roll back.
When you commit, you're committing both the local insert and the remote insert performed by your trigger, as an atomic unit. You cannot commit one without the other, and you don't have to do anything extra to commit the remote change:
The two-phase commit mechanism is transparent to users who issue distributed transactions. In fact, users need not even know the transaction is distributed. A COMMIT statement denoting the end of a transaction automatically triggers the two-phase commit mechanism. No coding or complex statement syntax is required to include distributed transactions within the body of a database application.
If you can't see the inserted data from the remote database afterwards then something else has deleted it after the commit, or more likely you're looking at the wrong database.
One slight downside (though also a feature) of a database link is that it hides the details of where the work is being done. You can drop and recreate a link to make your code update a different target database without having to modify the code itself. But that means your code doesn't know where the insert is actually going - you'd need to check the data dictionary to see where the link is pointing. And even then you might not know as the link can be using a TNS alias to identify the database, and changes to the tnsnames.ora aren't visible from within the database.
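A quick way to check is to query the data dictionary; for example (assuming you have access to the ALL_DB_LINKS view):
-- shows the connected user and the connect string / TNS alias each link uses
SELECT owner, db_link, username, host
FROM   all_db_links
WHERE  db_link LIKE 'DBLINK_DEV%';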
If you can see the data after committing by querying t_ag_hab@dblink_dev from the same database where you ran your procedure, but you can't see it when querying locally from the database you expect the link to point to, then the link isn't pointing where you think it is. The insert is going to one database, and you are performing your query against a different one. Only you can decide which is the 'correct' database, though; then either redefine the link (or the TNS entry, if appropriate), or change where you're doing the query.
I am not able to understand your requirement clearly. If you want to update records in the main table and insert the old records into an audit table, you can use a trigger like the one below (MS SQL):
Create trigger trg_update ON T_AGENT
AFTER UPDATE AS
BEGIN
    -- apply the new values to the target table
    UPDATE Tab1
    SET COL1 = I.COL1, COL2 = I.COL2
    FROM INSERTED I INNER JOIN Tab1 ON I.COL3 = Tab1.COL3;
    -- keep the old values in the audit table
    INSERT Tab1_Audit (COL1, COL2, COL3)
    SELECT COL1, COL2, COL3 FROM DELETED;
    RETURN;
END;
So far, what you presented only handles the insert. If you also want to act on updates, add UPDATING/INSERTING predicates, as in this example:
SQL> CREATE OR REPLACE TRIGGER validate_update
2 AFTER INSERT OR UPDATE ON T_AGENT
3 FOR EACH ROW
4 BEGIN
5 IF UPDATING('ACCOUNT_ID') THEN -- do something like this when updating
6 DBMS_OUTPUT.put_line ('ERROR'); -- add your action here
7 ELSIF INSERTING THEN
8 INSERT INTO t_ag_hab@DBLINK_DEV (...) VALUES (...);
9 END IF;
10 END;
11 /
Trigger created.
I have a main procedure (p_proc_a) in which I create a temp table for logging (tmp_log). In the main procedure I call some other procedures (p_proc_b, p_proc_c). In each of these procedures I insert data into table tmp_log.
How do I save the rows from tmp_log into a physical table (log) in case of an exception, before the rollback?
create procedure p_proc_a()
language plpgsql
as $body$
declare
    v_message_text text;
begin
    create temp table tmp_log (log_message text) on commit drop;
    call p_proc_b();
    call p_proc_c();
    insert into log (log_message)
    select log_message from tmp_log;
exception
    when others then
        get stacked diagnostics
            v_message_text = message_text;
        insert into log (log_message)
        values (v_message_text);
end;
$body$;
What is a workaround to save the logs into a table and still roll back the changes from p_proc_b and p_proc_c?
That is not possible in PostgreSQL.
The typical workaround is to use dblink to connect to the database itself and write the logs via dblink.
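A minimal sketch of that approach, assuming the dblink extension is installed and the log table from the question exists (depending on your setup, the connection string may also need user/password):
-- runs over a separate connection, in its own transaction,
-- so the insert survives a rollback of the calling transaction
SELECT dblink_exec(
    'dbname=' || current_database(),
    format('INSERT INTO log (log_message) VALUES (%L)', 'message to keep')
);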
I found three solutions to store data within a transaction (in my case, for debugging purposes) and still be able to see that data after rolling back the transaction.
My scenario uses the following kind of block, so these may not apply to your scenario:
DO $$
BEGIN
...
ROLLBACK;
END;
$$;
The first two solutions were suggested to me in the Postgres Slack; the third one I tried and worked out after talking with them, as it is an approach that worked in other databases.
Solutions
1 - Using DBLink
I don't remember exactly how it was done, but you install an extension and then connect to another database (which can even be this same database) and do the work over that connection, which does not seem to be affected by the local transaction.
2 - Using COPY command
Using the command:
COPY (SELECT ...) TO PROGRAM 'psql -c "COPY xyz FROM stdin"'
BTW, I never used it myself; it seems to require superuser permission, and I am not sure exactly how it is used or how it formats its output.
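An untested guess at how a full invocation could look (mydb is a placeholder; log and tmp_log are taken from the question):
-- TO PROGRAM requires superuser; the outer COPY pipes its rows
-- to the program's stdin, where the inner COPY reads them
COPY (SELECT log_message FROM tmp_log)
TO PROGRAM 'psql -d mydb -c "COPY log (log_message) FROM stdin"';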
3 - Using Sub-Transactions
In this approach you use a sub-transaction (I'm not sure that is the correct term; it should probably be called an autonomous transaction) to commit the result you want to keep.
In my case the command looks like this:
I used a temp table, but it seems (I'm not sure) to work with an actual table as well:
CREATE TEMP TABLE
IF NOT EXISTS myZone AS
SELECT * from public."Zone"
LIMIT 0;
DO $$
BEGIN
INSERT INTO public."Zone" (...)VALUES(...);
BEGIN
INSERT INTO myZone
SELECT * from public."Zone";
commit;
END;
Rollback;
END; $$;
SELECT * FROM myZone;
DROP TABLE myZone;
Don't ask what the purpose of doing this is; I'm creating a test scenario, and I wanted to track what I had done up to that point. Since this block does not support SELECT (DQL), I had to do something else, and I wanted a clean report, not raised errors.
According to www.techtarget.com:
Autonomous transactions allow a single transaction to be subdivided into multiple commit/rollback transactions, each of which will be tracked for auditing purposes. When an autonomous transaction is called, the original transaction (calling transaction) is temporarily suspended.
(This text was indexed by Google and existed on 2022-10-11; the website would not open due to an e-mail validation issue.)
Also, this term seems to come from Oracle, to which that article relates.
EDITED:
Removing solution 3, as it won't work.
PostgreSQL 11 claims to support autonomous transactions, but it's not what we might expect...
For this functionality Postgres introduced the SAVEPOINT:
SAVEPOINT <name of savepoint>;
<... CODE ...>
<RELEASE|ROLLBACK> SAVEPOINT <name of savepoint>;
Now the issue is:
If you use a nested BEGIN, the COMMIT inside the nested code commits everything, and the ROLLBACK in the outer block does nothing (it does not roll back anything that happened before the inner COMMIT).
If you use a SAVEPOINT, it only rolls back part of the code; and even if you RELEASE it, a ROLLBACK in the outer block rolls back the savepoint's work too.
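For example (using a log table for illustration):
BEGIN;
INSERT INTO log (log_message) VALUES ('kept only if the outer transaction commits');
SAVEPOINT sp1;
INSERT INTO log (log_message) VALUES ('discarded by the partial rollback below');
ROLLBACK TO SAVEPOINT sp1; -- undoes only the work since sp1
ROLLBACK;                  -- undoes everything, including the first insert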
Is there a way to lock a table such that only inserts with a specific value are blocked?
For example, I have the tables "task" and "subtask". I need an operation such that when a task is closed, all of its subtasks that are still open are closed as well. So I would like to:
Start a transaction
Lock the subtask table, but only to prevent inserts with a given task id
Close all the subtasks
Close the task using optimistic locking (if the task version changed, roll everything back)
Commit the transaction
Is it possible to do what I described in step 2? If not (or if there is a better way), how can I get safe concurrency in this kind of scenario?
The best way is to create the row yourself! You can then delete it at the end of the transaction. Here's how it goes:
CREATE OR REPLACE PROCEDURE .... -- a PROCEDURE rather than a FUNCTION, so that COMMIT is allowed
AS $$
DECLARE
    inserted integer;
BEGIN
    -- claim the id; does nothing if a row with that id already exists
    INSERT INTO subtasks VALUES (pk, ...) ON CONFLICT (id) DO NOTHING;
    GET DIAGNOSTICS inserted = ROW_COUNT;
    IF inserted > 0 THEN
        -- we created the placeholder ourselves, so remove it again
        DELETE FROM subtasks WHERE id = pk;
    END IF;
    COMMIT;
END;
$$ LANGUAGE plpgsql;
Note that in the above code, INSERT ... ON CONFLICT does not actually make any changes to the existing data. GET DIAGNOSTICS returns the number of rows changed; if no rows were changed, the inserted variable will hold zero.
What's happening here is that, having inserted the record within your transaction, any other connection will not be able to save the same record. You can confirm this for yourself by opening two connections to the database with psql. Then do
begin;
insert ...
# just wait here
commit;
and in the other one try the same insert: you will see that it hangs. Whether the insert succeeds or fails depends on whether you commit or roll back in the first psql connection.
You can't lock a table for a specific value, but in your case this solution would apply: create an insert trigger on subtask, and in this trigger check whether the corresponding task is closed; if the task is closed, do not allow the insert (a sketch follows below).
Another trigger on task would close all corresponding subtasks as soon as the task is closed.
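A minimal sketch of that first trigger, assuming task has id and closed columns and subtask has a task_id column (PostgreSQL 11+ syntax):
CREATE OR REPLACE FUNCTION forbid_insert_on_closed_task() RETURNS trigger AS $$
BEGIN
    -- reject the new subtask if its parent task is already closed
    IF EXISTS (SELECT 1 FROM task WHERE id = NEW.task_id AND closed) THEN
        RAISE EXCEPTION 'task % is closed, no new subtasks allowed', NEW.task_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER check_task_open
    BEFORE INSERT ON subtask
    FOR EACH ROW
    EXECUTE FUNCTION forbid_insert_on_closed_task();
To be safe under concurrency, the check should also lock the parent task row (for example with SELECT ... FOR SHARE) so the task cannot be closed between the check and the commit.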
I want to prevent any row with VERSIONID=1 from being deleted in a certain table. I also want to log this in an audit table so we can see when this happens for logging purposes. I'm trying to do this with a trigger:
CREATE TRIGGER TPMDBO.PreventVersionDelete
BEFORE DELETE ON TPM_PROJECTVERSION
FOR EACH ROW
DECLARE
BEGIN
IF( :old.VERSIONID = 1 )
THEN
INSERT INTO TPM_AUDIT VALUES ('Query has attempted to delete root project version!', sysdate);
RAISE_APPLICATION_ERROR( -20001, 'Query has attempted to delete root project version!' );
END IF;
END;
I get the following results:
SQL> delete from TPM_PROJECTVERSION where PROJECTID=70 and VERSIONID=1;
delete from TPM_PROJECTVERSION where PROJECTID=70 and VERSIONID=1
*
ERROR at line 1:
ORA-20001: Query has attempted to delete root project version!
ORA-06512: at "TPMDBO.PREVENTVERSIONDELETE", line 6
ORA-04088: error during execution of trigger 'TPMDBO.PREVENTVERSIONDELETE'
However, the table TPM_AUDIT is empty. Am I doing something wrong?
If your trigger raises an error, the DELETE statement fails and the transaction is rolled back to the implicit savepoint that is created before the statement is run. That means that any changes made by the trigger are rolled back as well.
You can work around this by using autonomous transactions. Something like
CREATE PROCEDURE write_audit
AS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO tpm_audit
VALUES( 'Query has attempted to delete root project version!',
sysdate );
commit;
END;
CREATE TRIGGER TPMDBO.PreventVersionDelete
BEFORE DELETE ON TPM_PROJECTVERSION
FOR EACH ROW
DECLARE
BEGIN
IF( :old.VERSIONID = 1 )
THEN
write_audit;
RAISE_APPLICATION_ERROR( -20001, 'Query has attempted to delete root project version!' );
END IF;
END;
This will put the INSERT into TPM_AUDIT into a separate transaction that can be committed outside the context of the DELETE statement. Be very careful about using autonomous transactions, however:
If you ever find yourself using autonomous transactions for anything other than writing to a log table, you're almost certainly doing something wrong.
Code in a PL/SQL block declared using autonomous transactions is truly autonomous so it cannot see uncommitted changes made by the current session.
Because of write consistency, it is entirely possible that Oracle will partially execute a DELETE statement, firing the row-level trigger a number of times, roll back that work, and then re-execute the DELETE. That silent rollback, however, will not roll back the changes made by the autonomous transaction. So it is entirely possible that a single DELETE of a single row would actually cause the trigger to be fired more than once and, therefore, create multiple rows in TPM_AUDIT.
If you can create a UNIQUE constraint on the TPM_PROJECTVERSION PK columns plus the version column, then you can create a second table that references those rows.
Trying to delete a row in TPM_PROJECTVERSION would then fail because child rows are present. This would at least throw an error in your application and prevent the deletion.
The other table could be automatically populated through an insert trigger on TPM_PROJECTVERSION.
If you revoke the DELETE privilege on that helper table, it would never be possible to remove those rows.
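A sketch of that idea, assuming the UNIQUE constraint on (PROJECTID, VERSIONID) mentioned above is in place (column names are guessed from the DELETE in the question):
-- helper table whose rows 'pin' the protected parent rows
CREATE TABLE tpm_projectversion_guard (
    projectid NUMBER,
    versionid NUMBER,
    CONSTRAINT fk_guard FOREIGN KEY (projectid, versionid)
        REFERENCES tpm_projectversion (projectid, versionid)
);

CREATE OR REPLACE TRIGGER populate_guard
    AFTER INSERT ON tpm_projectversion
    FOR EACH ROW
    WHEN (new.versionid = 1)
BEGIN
    INSERT INTO tpm_projectversion_guard (projectid, versionid)
    VALUES (:new.projectid, :new.versionid);
END;
/
With this in place, deleting a VERSIONID=1 row fails with ORA-02292 (child record found) while its guard row exists.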
I believe you need to COMMIT the INSERT operation before calling RAISE_APPLICATION_ERROR, which rolls back the transaction.
Hey, I'm trying to create a trigger in my Oracle database that, when a record is inserted or updated, sets thumbnail to 0 on all other records in the table. Because I am updating records in the same table that fired the trigger, I got the mutating table error. To solve this, I put the code in an autonomous transaction; however, this causes a deadlock.
Trigger code:
CREATE OR REPLACE TRIGGER check_thumbnail AFTER INSERT OR UPDATE OF thumbnail ON photograph
FOR EACH ROW
BEGIN
IF :new.thumbnail = 1 THEN
check_thumbnail_set_others(:new.url);
END IF;
END;
Procedure code:
CREATE OR REPLACE PROCEDURE check_thumbnail_set_others(p_url IN VARCHAR2)
IS PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
UPDATE photograph SET thumbnail = 0 WHERE url <> p_url;
COMMIT;
END;
I assume I'm causing a deadlock because the trigger is launching itself within itself. Any ideas?
Using an autonomous transaction for this sort of thing is almost certainly a mistake. What happens if the transaction that inserted the new thumbnail needs to roll back? You've already committed the change to the other rows in the table.
If you want the data to be transactionally consistent, you would need multiple triggers and some way of storing state. The simplest option would be to create a package with a collection of photograph.url%TYPE, then create three triggers on the table. A before statement trigger would clear out the collection. A row-level trigger would insert the :new.url value into the collection. An after statement trigger would then read the values from the collection and call the check_thumbnail_set_others procedure (which would not be an autonomous transaction), roughly as sketched below.
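A rough sketch of that approach; the package and trigger names are made up, while photograph and check_thumbnail_set_others come from the question:
CREATE OR REPLACE PACKAGE thumbnail_state AS
    TYPE url_tab IS TABLE OF photograph.url%TYPE;
    g_urls url_tab := url_tab();
END thumbnail_state;
/

CREATE OR REPLACE TRIGGER photograph_thumb_bs
    BEFORE INSERT OR UPDATE OF thumbnail ON photograph
BEGIN
    thumbnail_state.g_urls := thumbnail_state.url_tab(); -- reset the state
END;
/

CREATE OR REPLACE TRIGGER photograph_thumb_row
    AFTER INSERT OR UPDATE OF thumbnail ON photograph
    FOR EACH ROW
BEGIN
    IF :new.thumbnail = 1 THEN
        thumbnail_state.g_urls.EXTEND;
        thumbnail_state.g_urls(thumbnail_state.g_urls.COUNT) := :new.url;
    END IF;
END;
/

CREATE OR REPLACE TRIGGER photograph_thumb_as
    AFTER INSERT OR UPDATE OF thumbnail ON photograph
BEGIN
    -- statement-level trigger: no mutating table error here
    FOR i IN 1 .. thumbnail_state.g_urls.COUNT LOOP
        check_thumbnail_set_others(thumbnail_state.g_urls(i)); -- non-autonomous version
    END LOOP;
END;
/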
I keep finding myself in this situation:
I have an ASP.NET 2.0 app. I have to insert into two tables in SQL Server, and there is a dependency between the tables. I insert a record into the first table inside a transaction, then I move on to the second table; but because the first insert isn't committed yet, the second one throws an error.
You must not be using the same connection when you modify each table. Try using the same connection, with a single transaction around all of your changes to both tables. Inside a transaction you see your own changes, but no one else will see them (unless they force dirty reads) until you COMMIT.
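For example, in T-SQL on a single connection (table and column names here are invented):
BEGIN TRANSACTION;

INSERT INTO ParentTable (Id, Name) VALUES (1, 'parent');

-- same connection, same transaction: this statement already sees the
-- uncommitted parent row, so the foreign key check succeeds
INSERT INTO ChildTable (Id, ParentId) VALUES (10, 1);

COMMIT;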
Why don't you create a stored procedure?
That way, with a single transaction, you invoke the stored procedure, which first inserts into the parent table and then (inside the same transaction) inserts into the child table.
CREATE OR REPLACE PROCEDURE GOOFY
IS
BEGIN
INSERT INTO MY_PARENT_TABLE VALUES (1111, 'ALPHA');
INSERT INTO MY_CHILD_TABLE VALUES (1111, 'BETA');
END;
/
All you have to do is invoke GOOFY from ASP.NET 2.0: the two INSERTs belong to the same transaction, and you can decide to COMMIT or ROLLBACK.