In Postgres, splitting an UPDATE between two tables using rules

attempting to maintain an edit log using rules.
create table t1(
id serial primary key,
c1 text,
... );
create table edit_log(
id int references t1,
editor_id int references users,
edit_ts timestamp default current_timestamp );
With an update, I wish to both update t1 and insert into edit_log:
update t1 set c1='abc', ... where id=456;
insert into edit_log(id, editor_id, edit_ts) values (456, 123, current_timestamp);
This would be pretty straightforward except for the arbitrary number of columns, e.g.:
update t1 set c1='abc', c2='def', editor_id=123 where id=456;
update t1 set c3='xyz', editor_id=123 where id=456;
how to write a rule for that?

I think a trigger will serve you better than a rule. Consider this demo.
Test setup
CREATE TEMP TABLE t1(id int, editor_id int, c1 text);
INSERT INTO t1(id, editor_id) VALUES (1,1),(2,2);
CREATE TEMP TABLE edit_log(id int, editor_id int, edit_ts timestamp);
Create trigger function
CREATE OR REPLACE FUNCTION trg_t1_upaft_log()
RETURNS trigger AS
$BODY$
BEGIN
IF OLD IS DISTINCT FROM NEW THEN -- to avoid empty updates
INSERT INTO edit_log(id, editor_id, edit_ts)
VALUES(NEW.id, NEW.editor_id, now()::timestamp);
END IF;
RETURN NULL; -- the trigger is fired AFTER the update, so the return value is irrelevant.
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
Create trigger
CREATE TRIGGER upaft_log
AFTER UPDATE ON t1
FOR EACH ROW
EXECUTE PROCEDURE trg_t1_upaft_log();
Test
UPDATE t1 SET c1 = 'baz' WHERE id = 1;
SELECT * FROM edit_log; -- 1 new entry
UPDATE t1 SET c1 = 'baz' WHERE id = 1;
SELECT * FROM edit_log; -- no new entry, update changed nothing!
UPDATE t1 SET c1 = 'blarg';
SELECT * FROM edit_log; -- 2 new entries, update changed two rows.
Cleanup
DROP TRIGGER upaft_log ON t1;
DROP FUNCTION trg_t1_upaft_log();
-- Temp. tables will be dropped automatically at end of session.
Comment
It is very hard or plain impossible (depending on the details of your setup) for a rule to figure out which rows are updated.
A trigger AFTER UPDATE can decide after the fact and is the better choice. It is also easy to integrate with (most) additional triggers and/or rules in this scenario.
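Applied to the tables from the question (a rough sketch of mine, assuming t1 has the editor_id column used in the question's UPDATE statements and the trigger above is created on it), the log is written no matter which columns appear in SET:
UPDATE t1 SET c1 = 'abc', c2 = 'def', editor_id = 123 WHERE id = 456;
UPDATE t1 SET c3 = 'xyz', editor_id = 123 WHERE id = 456;
SELECT * FROM edit_log WHERE id = 456; -- one entry per UPDATE that actually changed the row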

Related

Updating another table on conflict postgres

I have a requirement to insert into one table (test01) and update another table (result) whenever a conflict arises. I tried the query below:
insert into test01 as cst (col1,col2)
select col1,col2 from (
select 1 col1,'test' col2) as excluded
on conflict (col1) do
update result as rst set conflictid = excluded.col1, updated_at = now() where rst.conflictid= excluded.col1 ;
but it returns "syntax error at or near result". Can anyone please help me with the right solution?
Basically, your approach is not feasible. The ON CONFLICT ... DO UPDATE clause applies only to the table into which the rows are inserted. See INSERT syntax in the documentation.
A solution requires a bit more work. You should create a trigger for table test01 to get the effect you want.
Example tables (with slightly changed column and table names):
create table test01_conflicts(
conflict_id int primary key,
updated_at timestamp);
create table test01(
id int primary key,
str text);
When the table test01 is updated with the same id, the trigger function inserts or updates a row in the conflict table. Note that in this case the function returns null, so the update on test01 itself will not proceed.
create or replace function before_update_on_test01()
returns trigger language plpgsql as $$
begin
if new.id = old.id then
insert into test01_conflicts(conflict_id, updated_at)
values (new.id, now())
on conflict(conflict_id)
do update set updated_at = now();
return null;
end if;
return new;
end $$;
create trigger before_update_on_test01
before update on test01
for each row execute procedure before_update_on_test01();
The query. On conflict, it updates test01 with the same id only:
insert into test01
select col1, col2
from (
select 1 as col1, 'test' as col2
) as source
on conflict (id) do
update set
id = excluded.id;
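To see the effect (a quick test of mine, assuming the tables, trigger and query above), run the insert twice:
-- first run: no conflict, the row (1, 'test') is inserted into test01
-- second run: conflict on id = 1; the BEFORE UPDATE trigger records it and returns null,
-- so test01 itself stays unchanged
select * from test01;            -- one row: (1, 'test')
select * from test01_conflicts;  -- one row: conflict_id = 1 with updated_at set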

Sqlite running update in insert trigger to execute update trigger

Using SQLite, I want to have a field 'url' generated from the given 'id' using a trigger, both on insert and on update. For example, id: '1' => url: 'test.com/1'.
The table looks like this:
CREATE TABLE t1 (
id TEXT,
url TEXT
);
Since sqlite can't run the same trigger for update and insert, I see two options to accomplish this.
Option A
run a trigger after insert that updates the id to itself, which in turn fires the update trigger
CREATE TRIGGER run_updates_on_insert AFTER INSERT ON t1
BEGIN
UPDATE t1 SET id = NEW.id WHERE id = NEW.id;
END;
CREATE TRIGGER set_url_on_update BEFORE UPDATE on t1
BEGIN
UPDATE t1 SET url = 'test.com/' || NEW.id WHERE id = OLD.id;
END;
Option B
replicating the logic in two separate triggers for update and insert
CREATE TRIGGER set_url_on_insert AFTER INSERT on t1
BEGIN
UPDATE t1 SET url = 'test.com/' || NEW.id WHERE id = NEW.id;
END;
CREATE TRIGGER set_url_on_update BEFORE UPDATE on t1
BEGIN
UPDATE t1 SET url = 'test.com/' || NEW.id WHERE id = OLD.id;
END;
Both of these options give me the desired results. I tend to favor Option A, as I only have to write the update logic once, but I was wondering if there are any other advantages/disadvantages to prefer one over the other?
EDIT: For this particular use case it is better to use a generated column (see forpas's answer below).
Since version 3.31.0 (2020-01-22) of SQLite, you can create generated columns (stored or virtual), so you don't need any triggers:
CREATE TABLE t1 (
id TEXT,
url TEXT GENERATED ALWAYS AS ('test.com/' || id) STORED
);
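For example (a quick check of mine, assuming SQLite 3.31.0 or later):
INSERT INTO t1 (id) VALUES ('1');
SELECT id, url FROM t1; -- 1 | test.com/1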

Change number of Rows Affected by Update

What I am trying to achieve here is basically to override "0 rows updated" when an UPDATE is issued and the actual PK/UK value doesn't exist in the table. This is what I have done:
Actual Table:
CREATE TABLE fdrgiit.vereine(
team numeric(10) primary key,
punkte int not null,
serie int not null
);
Dummy Table:
CREATE TABLE fdrgiit.dummyup
(
id numeric(1) PRIMARY KEY,
datetest timestamp
);
Inserted records in both the tables:
insert into vereine(team,punkte,serie) values(1, 50, 1);
insert into vereine(team,punkte,serie) values(2, 30, 1);
insert into vereine(team,punkte,serie) values(3, 25, 1);
insert into vereine(team,punkte,serie) values(4, 37, 2);
insert into dummyup values(1, now());
Created the following function and trigger:
create or replace function updateover()
returns trigger as
$BODY$
begin
if EXISTS (select 1 FROM vereine WHERE team = new.team ) then
RETURN NEW;
else
UPDATE fdrgiit.dummyup set datetest=now() where id=1;
RETURN NULL;
end if;
end;
$BODY$
LANGUAGE plpgsql;
create trigger update_redundancy
before update on vereine
for each row
execute procedure updateover() ;
But when I execute an UPDATE like this on the table, I still get 0 rows affected:
update vereine set punkte=87 where team=5;
Kindly review and please suggest if this is something that can be done.
You cannot trigger anything with an UPDATE that does not affect any rows, as triggers are only fired for affected rows.
But you could wrap your alternative UPDATE into a function:
CREATE OR REPLACE FUNCTION updateover()
RETURNS int AS
$func$
UPDATE dummyup
SET datetest = now()
WHERE id = 1
RETURNING 2;
$func$ LANGUAGE sql;
... and run your UPDATE nested like this:
WITH upd AS (
UPDATE vereine
SET punkte = 87
WHERE team = 5 -- does not exist!
RETURNING 1
)
SELECT 1 FROM upd
UNION ALL
SELECT updateover()
LIMIT 1;
db<>fiddle here
If no row qualifies for the UPDATE, the first outer SELECT 1 FROM upd returns no row and Postgres keeps processing the second SELECT updateover(). But if at least one row is affected, the final SELECT is never executed. Exactly what you want.
This updates dummyup one time if the UPDATE on vereine does not affect any rows; never several times. But that's ok, since now() is STABLE for the duration of the transaction.
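To make the short circuit visible (a sketch of mine against the setup above), run the same statement for a team that does exist; dummyup is then left untouched:
WITH upd AS (
UPDATE vereine
SET punkte = 99
WHERE team = 1 -- exists
RETURNING 1
)
SELECT 1 FROM upd
UNION ALL
SELECT updateover()
LIMIT 1;
-- returns 1 from the CTE branch; updateover() is not reached
SELECT datetest FROM dummyup WHERE id = 1; -- unchanged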
Related:
Return a value if no record is found

Oracle trigger to prevent inserting the new row upon a condition

I've found a few questions addressing the same problem, but without a good solution.
I need to create an Oracle trigger which will prevent new inserts upon a condition, but silently (without raising an error).
Ex: I need to stop inserting rows with bar='FOO' only. (I can't edit the constraints of the table, can't access the procedure which actually does the insertion, etc., so a trigger is the only option.)
Solutions so far confirm that it isn't possible. One promising suggestion was to create an intermediate table, insert key values into it when bar='FOO', and then delete those records from the original table once the insertion is done, which I guess is not correct.
Any answer will be highly appreciated.
Apparently, it is not possible to use a trigger to stop inserts without raising an exception.
However, if you have access to the schema (and since you are asking about a trigger, this is probably OK), you could think about replacing the table with a view and an INSTEAD OF trigger.
As a minimal mock-up for your current table; myrole is just a stand-in for the privileges granted on the table:
CREATE ROLE myrole;
CREATE TABLE mytable (
bar VARCHAR2(30)
);
GRANT ALL ON mytable TO myrole;
Now you rename the table and make sure nobody can directly access it anymore, and replace it with a view. This view can be protected by an INSTEAD OF trigger:
REVOKE ALL ON mytable FROM myrole;
RENAME mytable TO myrealtable;
CREATE OR REPLACE VIEW mytable AS SELECT * FROM myrealtable;
GRANT ALL ON mytable TO myrole;
CREATE OR REPLACE TRIGGER myioftrigger
INSTEAD OF INSERT ON mytable
FOR EACH ROW
BEGIN
IF :new.bar = 'FOO' THEN
NULL;
ELSE
INSERT INTO myrealtable(bar) VALUES (:new.bar);
END IF;
END;
/
So, if somebody is inserting a normal row into the fake view, the data gets inserted into your real table:
INSERT INTO mytable(bar) VALUES('OK');
1 row inserted.
SELECT * FROM mytable;
OK
But if somebody is inserting the magic value 'FOO', the trigger silently swallows it and nothing gets changed in the real table:
INSERT INTO mytable(bar) VALUES('FOO');
1 row inserted.
SELECT * FROM mytable;
OK
Caution: If you want to protect your table from UPDATEs as well, you'd have to add a second trigger for the updates.
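Such an update trigger might look roughly like this (my sketch, not part of the original answer; since the mock table has no key column, the match is on bar itself):
CREATE OR REPLACE TRIGGER myiofupdtrigger
INSTEAD OF UPDATE ON mytable
FOR EACH ROW
BEGIN
IF :new.bar = 'FOO' THEN
NULL; -- silently ignore updates that would introduce 'FOO'
ELSE
UPDATE myrealtable SET bar = :new.bar WHERE bar = :old.bar;
END IF;
END;
/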
One way would be to hide the row. From 12c this is reasonably easy:
create table demo
( id integer primary key
, bar varchar2(10) );
-- This adds a hidden column and registers the table for in-database archiving:
alter table demo row archival;
-- Set the hidden column to '1' when BAR='FOO', else '0':
create or replace trigger demo_hide_foo_trg
before insert or update on demo
for each row
begin
if :new.bar = 'FOO' then
:new.ora_archive_state := '1';
else
:new.ora_archive_state := '0';
end if;
end demo_hide_foo_trg;
/
-- Enable in-database archiving for the session
-- (probably you could set this in a log-on trigger):
alter session set row archival visibility = active;
insert into demo (id, bar) values (1, 'ABC');
insert into demo (id, bar) values (2, 'FOO');
insert into demo (id, bar) values (3, 'XYZ');
commit;
select * from demo;
ID BAR
-------- --------
1 ABC
3 XYZ
-- If you want to see all rows (e.g. to delete hidden rows):
alter session set row archival visibility = all;
In earlier versions of Oracle, you could achieve the same thing using a security policy.
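For reference, a rough sketch of that security-policy (VPD) route, with names of my own choosing (not from the answer); the policy function returns a predicate that hides the flagged rows on SELECT:
-- predicate function: hide rows where BAR = 'FOO'
create or replace function demo_hide_foo_predicate (
p_schema in varchar2,
p_object in varchar2
) return varchar2
is
begin
return 'bar != ''FOO''';
end;
/
begin
dbms_rls.add_policy(
object_schema   => user,
object_name     => 'DEMO',
policy_name     => 'demo_hide_foo',
function_schema => user,
policy_function => 'DEMO_HIDE_FOO_PREDICATE',
statement_types => 'SELECT');
end;
/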
Another way might be to add a 'required' flag which defaults to 'Y' and set it to 'N' in a trigger when bar = 'FOO', and (assuming you can't change the application to use a view etc.) have a second trigger delete all such rows (or perhaps better, move them to an archive table).
create table demo
( id integer primary key
, bar varchar2(10) );
alter table demo add required_yn varchar2(1) default on null 'Y';
create or replace trigger demo_set_not_required_trg
before insert or update on demo
for each row
begin
if :new.bar = 'FOO' then
:new.required_yn := 'N';
end if;
end demo_set_not_required_trg;
/
create or replace trigger demo_delete_not_required_trg
after insert or update on demo
begin
delete demo where required_yn = 'N';
end demo_delete_not_required_trg;
/

Values of the inserted row in an Oracle trigger

I want a trigger that updates the value of a column, but I just want to update a small set of rows that depends on the values of the inserted row.
My trigger is:
CREATE OR REPLACE TRIGGER example
AFTER INSERT ON table1
FOR EACH ROW
BEGIN
UPDATE table1 t
SET column2 = 3
WHERE t.column1 = :new.column1;
END;
/
But as I am using FOR EACH ROW, I have a problem when I try it: I get the mutating table runtime error.
The other option is not to use FOR EACH ROW, but if I do this, I don't know the inserted "column1" value for the comparison (or I don't know how to get it).
What can I do to UPDATE a set of rows that depends on the last inserted row?
I am using Oracle 9.
You should avoid DML statements on the same table the trigger is defined on. Use a BEFORE trigger to change values in the current row:
create or replace trigger example
before insert on table1
for each row
begin
:new.column2 := 3;
end;
/
You can modify the same table with pragma autonomous_transaction:
create or replace trigger example
after insert on table1 for each row
declare
procedure setValues(key number) is
pragma autonomous_transaction;
begin
update table1 t
set column2 = 3
where t.column1 = key
;
end setValues;
begin
setValues(:new.column1);
end;
/
But I suggest you follow #GordonLinoff's answer to your question - it's a bad idea to modify the same table in the trigger body.
See also here
If you need to update multiple rows in table1 when you are updating one row, then you would seem to have a problem with the data model.
This need suggests that you need a separate table with one row per column1. You can then fetch the value from that table using a join. The trigger will then be updating another table, so there will be no mutating-table problem.
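Concretely, that separate table and join might look like this with the question's names (my sketch, not from the answer):
create table column1_lookup (
column1 integer primary key,
column2_value integer
);
-- the AFTER INSERT trigger on table1 writes to column1_lookup instead of table1,
-- so there is no mutating-table error, and readers pick the value up with a join:
select t.column1, l.column2_value
from table1 t
join column1_lookup l on l.column1 = t.column1;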
create table A
(
a INTEGER,
b CHAR(10)
);
create table B
(
b CHAR (10),
d INTEGER
);
create trigger trig1
AFTER INSERT ON A
REFERENCING NEW AS newROW
FOR EACH ROW
when(newROW.a<=10)
BEGIN
INSERT into B values(:newROW.b,:newROW.a);
END trig1;
/
insert into A values(11,'Gananjay');
insert into A values(5,'Hritik');
select * from A;
select * from B;