Prevent insert if condition is met - sql

I have a table Content like this:
id | text | date | idUser → User | contentType
And another table Answer:
idAnswer → Content | idQuestion → Content | isAccepted
I want to ensure that the Answer's date is later than the Question's date. A question is a Content with contentType = 'QUESTION'.
I tried to solve this with the following trigger, but when I try to insert an Answer there's an error:
ERROR: record "new" has no field "idanswer"
CONTEXT: SQL statement "SELECT (SELECT "Content".date FROM "Content" WHERE "Content".id = NEW.idAnswer) < (SELECT "Content".date FROM "Content" WHERE "Content".id = NEW.idQuestion)"
PL/pgSQL function "check_valid_date_answer" line 2 at IF
Trigger:
CREATE TRIGGER check_valid_answer
AFTER INSERT ON "Answer"
FOR EACH ROW EXECUTE PROCEDURE check_valid_date_answer();
Trigger function:
CREATE FUNCTION check_valid_date_answer() RETURNS trigger
LANGUAGE plpgsql
AS $$BEGIN
IF (SELECT "Content".date FROM "Content"
WHERE "Content".id = NEW.idAnswer)
< (SELECT "Content".date FROM "Content"
WHERE "Content".id = NEW.idQuestion)
THEN
RAISE NOTICE 'This Answer is an invalid date';
END IF;
RETURN NEW;
END;$$;
So, my question is: do I really need to create a trigger for this? I saw that I can't use a CHECK in Answer because I need to compare with an attribute of another table. Is there any other (easier/better) way to do this? If not, why the error and how can I solve it?

Your basic approach is sound. The trigger is a valid solution. It should work except for 3 problems:
1) Your naming convention:
We would need to see your exact table definition to be sure, but the evidence is there. The error message says: has no field "idanswer" - lower case. Doesn't say "idAnswer" - CaMeL case. If you create CaMeL case identifiers in Postgres, you are bound to double-quote them everywhere for the rest of their life.
Are PostgreSQL column names case-sensitive?
2) Abort violating insert
Either raise an EXCEPTION instead of a friendly NOTICE to actually abort the whole transaction.
Or RETURN NULL instead of RETURN NEW to just abort the inserted row silently without raising an exception and without rolling anything back.
I would do the first. This will probably fix the error at hand and work:
CREATE FUNCTION trg_answer_insbef_check()
RETURNS trigger AS
$func$
BEGIN
IF (SELECT c.date FROM "Content" c WHERE c.id = NEW."idAnswer")
< (SELECT c.date FROM "Content" c WHERE c.id = NEW."idQuestion") THEN
RAISE EXCEPTION 'This Answer is an invalid date';
END IF;
RETURN NEW;
END
$func$ LANGUAGE plpgsql;
The proper solution is to use legal, lower case names exclusively and avoid such problems altogether. That includes your unfortunate table names as well as the column name date, which is a reserved word in standard SQL and should not be used as identifier - even if Postgres allows it.
3) Should be a BEFORE trigger
CREATE TRIGGER insbef_check
BEFORE INSERT ON "Answer"
FOR EACH ROW EXECUTE PROCEDURE trg_answer_insbef_check();
You want to abort invalid inserts before you do anything else.
Of course, you will have to make sure that the timestamps in table Content cannot be changed, or you need more triggers to enforce your conditions when Content is updated.
The same goes for the FK columns in Answer.

I would approach this in a different way.
Recommendation:
use a BEFORE INSERT trigger if you want to change data before inserting it
use an AFTER INSERT trigger if you have to do additional work
use a CHECK clause if you have additional data consistency requirements.
So write an SQL function that checks that one date is earlier than the other, and add a CHECK constraint calling it. Yes, you can select from other tables in such a function.
I wrote something similar (complex check) in answer to this question on SO.
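A minimal sketch of that approach, assuming the table and column names from the question (the helper function name is made up):

```sql
-- Hypothetical helper function wrapped in a CHECK constraint.
CREATE FUNCTION answer_after_question(answer_id integer, question_id integer)
RETURNS boolean
LANGUAGE sql STABLE AS
$$
SELECT (SELECT c.date FROM "Content" c WHERE c.id = answer_id)
     > (SELECT c.date FROM "Content" c WHERE c.id = question_id)
$$;

ALTER TABLE "Answer"
ADD CONSTRAINT answer_date_after_question
CHECK (answer_after_question("idAnswer", "idQuestion"));
```

Note that such a constraint is only checked on writes to Answer; later changes to rows in Content can still invalidate it, as the other answer points out.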

Related

How to check if OLD column exist in Postgres Trigger Function

I want to create deletion logs and insert data from the OLD row's columns. The problem is that the columns are not the same for every table: some tables only have transaction_date and other tables only have created_at. So I want to check for transaction_date and use it if present, otherwise use the created_at column. I tried using the coalesce function but it still returns:
ERROR: record "old" has no field "transaction_date"
CONTEXT: SQL statement "INSERT INTO "public"."delete_logs" ("table", "date") VALUES (TG_TABLE_NAME, coalesce(OLD.transaction_date, coalesce(OLD.created_at, now())))"
PL/pgSQL function delete_table() line 2 at SQL statement
here is my function:
CREATE OR REPLACE FUNCTION delete_table() RETURNS trigger AS
$$BEGIN
INSERT INTO "public"."deleted_logs" ("table", "created_at") VALUES (TG_TABLE_NAME, coalesce(OLD.transaction_date, coalesce(OLD.created_at, now())));
RETURN OLD;
END;$$ LANGUAGE plpgsql;
CREATE TRIGGER "testDelete" AFTER DELETE ON "exampletable" FOR EACH ROW EXECUTE PROCEDURE "delete_table"();
Actually, I wanted to create a function for each table, but I think it will be difficult to update the function in the future, so I need to create a single function for all tables.
So I want to check for transaction_date and use it if present, otherwise use the created_at column.
You can avoid the exception you saw by converting the row to json:
CREATE OR REPLACE FUNCTION log_ts_after_delete()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
BEGIN
INSERT INTO public.deleted_logs
(table_name , created_at) -- "table" is a reserved word
VALUES (TG_TABLE_NAME, COALESCE(to_json(OLD)->>'transaction_date', to_json(OLD)->>'created_at')::timestamptz);
RETURN NULL; -- not used in AFTER trigger
END
$func$;
My answer assumes that transaction_date is defined NOT NULL. Otherwise the expression falls back to created_at even when a transaction_date column exists but holds NULL - probably not what you want.
JSON is not as strict as SQL. A reference to a non-existing JSON key results in NULL instead of the exception for the reference to a non-existing table column. So COALESCE just works.
Related:
How to set value of composite variable field using dynamic SQL
If the row is wide, it might be cheaper to convert to JSON only once and save it to a variable, or do it in a subquery or CTE.
Related:
To convert from Python arrays to PostgreSQL quickly?
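For illustration, a sketch of the convert-once variant of the function above, storing the JSON row in a variable:

```sql
CREATE OR REPLACE FUNCTION log_ts_after_delete()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
DECLARE
   _row json := to_json(OLD);  -- convert the whole row once
BEGIN
   INSERT INTO public.deleted_logs (table_name, created_at)
   VALUES (TG_TABLE_NAME
         , COALESCE(_row->>'transaction_date', _row->>'created_at')::timestamptz);
   RETURN NULL;
END
$func$;
```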
If tables never switch the columns in question, passing a parameter in the trigger definition would be much cheaper.
You find out (at trigger creation time) once with:
SELECT attname
FROM pg_attribute
WHERE attrelid = 'public.exampletable'::regclass
AND attname IN ('transaction_date', 'created_at')
AND NOT attisdropped
ORDER BY attname DESC
This returns 'transaction_date' if such a column exists in the table, else 'created_at', else NULL (no row). Related:
PostgreSQL rename a column only if it exists
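A sketch of the parameterized variant: arguments from the trigger definition arrive as text in TG_ARGV[], so each table's trigger can name its own timestamp column (function and trigger names are made up):

```sql
CREATE OR REPLACE FUNCTION log_ts_after_delete_param()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
BEGIN
   INSERT INTO public.deleted_logs (table_name, created_at)
   VALUES (TG_TABLE_NAME, (to_json(OLD)->>TG_ARGV[0])::timestamptz);
   RETURN NULL;
END
$func$;

CREATE TRIGGER log_delete
AFTER DELETE ON exampletable
FOR EACH ROW EXECUTE PROCEDURE log_ts_after_delete_param('transaction_date');
```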
It's still cheapest to have a separate trigger function for each type of trigger. Just two functions instead of one. If the trigger is fired often I would do that.
Avoid exception handling if you can. The manual:
Tip
A block containing an EXCEPTION clause is significantly more
expensive to enter and exit than a block without one. Therefore, don't
use EXCEPTION without need.

PL/pgSQL trigger to stop a river crossing another river

I have to write a trigger to stop a river from crossing another river. I'm really struggling with it and any help would be appreciated. myriver is the table containing all the information on the rivers. So upon insert of a new river, if it crosses an existing river, I should receive an error. Here's what I have:
CREATE FUNCTION river_check() RETURNS TRIGGER AS $river_check$
BEGIN
-- Check that gid is given
IF NEW.gid IS NULL THEN
RAISE EXCEPTION 'river gid cannot be null';
END IF;
NEW.the_geom = (SELECT r.the_geom FROM myriver as r
WHERE ST_CROSSES(NEW.the_geom, r.the_geom));
IF NEW.the_geom THEN
RAISE EXCEPTION 'a river cannot cross another river';
END IF;
RETURN NEW;
END;
$river_check$ LANGUAGE plpgsql;
-- Function river_check is linked to a TRIGGER of same name:
CREATE TRIGGER river_check
BEFORE INSERT OR UPDATE ON myriver
FOR EACH ROW EXECUTE PROCEDURE river_check();
You are using a column of the to-be-inserted/-updated row (NEW.the_geom) as a temporary variable. So you will either overwrite that column (giving the new row a bogus value), or get an irrelevant result from your IF check (because NEW.the_geom already had data in it before the trigger ran).
Note also that Postgres and PL/pgSQL are strictly typed, so you can't use an arbitrary value in an IF statement to see if it is "empty", as you would in a scripting language like PHP.
You need to either add a DECLARE block to give you a proper temporary variable, and check if it IS NULL; Or just use an EXISTS check directly:
IF EXISTS (
SELECT r.the_geom FROM myriver as r
WHERE ST_CROSSES(NEW.the_geom, r.the_geom)
)
THEN
RAISE EXCEPTION 'a river cannot cross another river';
END IF;
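Putting it together, the whole trigger function might look like this (a sketch; the self-exclusion on gid is an assumption that matters for UPDATEs, where the row already exists in the table):

```sql
CREATE OR REPLACE FUNCTION river_check()
RETURNS trigger AS
$river_check$
BEGIN
   -- Check that gid is given
   IF NEW.gid IS NULL THEN
      RAISE EXCEPTION 'river gid cannot be null';
   END IF;

   IF EXISTS (
      SELECT 1 FROM myriver r
      WHERE  ST_Crosses(NEW.the_geom, r.the_geom)
      AND    r.gid <> NEW.gid  -- don't compare the row to itself on UPDATE
      ) THEN
      RAISE EXCEPTION 'a river cannot cross another river';
   END IF;

   RETURN NEW;
END
$river_check$ LANGUAGE plpgsql;
```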

PostgreSQL 1 to many trigger procedure

I wrote this query in PostgreSQL:
CREATE OR REPLACE FUNCTION pippo() RETURNS TRIGGER AS $$
BEGIN
CHECK (NOT EXISTS (SELECT * FROM padre WHERE cod_fis NOT IN (SELECT padre FROM paternita)));
END;
$$ LANGUAGE plpgsql;
It returns:
Syntax error at or near CHECK.
I wrote this code because I have to realize a 1..n link between two tables.
You can't use CHECK here. CHECK is for table and column constraints.
Two further notes:
If this is supposed to be a statement level constraint trigger, I'm guessing you're actually looking for IF ... THEN RAISE EXCEPTION 'message'; END IF;
(If not, you may want to expand and clarify what you're trying to do.)
The function should return NEW, OLD or NULL.
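Assuming this is meant as a statement-level constraint check, the IF ... RAISE rewrite might look like this (the error message is made up):

```sql
CREATE OR REPLACE FUNCTION pippo()
RETURNS trigger AS
$$
BEGIN
   IF EXISTS (SELECT 1 FROM padre
              WHERE cod_fis NOT IN (SELECT padre FROM paternita)) THEN
      RAISE EXCEPTION 'every padre must be referenced in paternita';
   END IF;
   RETURN NULL;  -- return value is ignored in AFTER statement-level triggers
END
$$ LANGUAGE plpgsql;
```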

Within a trigger function, how to get which fields are being updated

Is this possible? I'm interested in finding out which columns were specified in the UPDATE request regardless of the fact that the new value that is being sent may or may not be what is stored in the database already.
The reason I want to do this is because we have a table that can receive updates from multiple sources. Previously, we weren't recording which source the update originated from. Now the table stores which source has performed the most recent update. We can change some of the sources to send an identifier, but that isn't an option for everything. So I'd like to be able to recognize when an UPDATE request doesn't have an identifier so I can substitute in a default value.
If a "source" doesn't "send an identifier", the column will be unchanged. Then you cannot detect whether the current UPDATE was done by the same source as the last one or by a source that did not change the column at all. In other words: this does not work properly.
If the "source" is identifiable by any session information function, you can work with that. Like:
NEW.column := session_user;
Unconditionally for every update.
General Solution
I found a way how to solve the original problem.
Set the column to a default value if it's not targeted in an UPDATE (not in the SET list). The key element is a per-column trigger introduced with PostgreSQL 9.0 - a column-specific trigger using the UPDATE OF column_name clause. The manual:
The trigger will only fire if at least one of the listed columns is
mentioned as a target of the UPDATE command.
That's the only simple way I found to distinguish whether a column was updated with a new value identical to the old, versus not updated at all.
One could also parse the text returned by current_query(). But that seems cumbersome, tricky and unreliable.
Trigger functions
I assume a column source defined NOT NULL.
Step 1: Set source to NULL if unchanged:
CREATE OR REPLACE FUNCTION trg_tbl_upbef_step1()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
BEGIN
IF NEW.source = OLD.source THEN
NEW.source := NULL; -- "impossible" value (source is NOT NULL)
END IF;
RETURN NEW;
END
$func$;
Step 2: Revert to old value. Trigger will only be fired, if the value was actually updated (see below):
CREATE OR REPLACE FUNCTION trg_tbl_upbef_step2()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
BEGIN
IF NEW.source IS NULL THEN
NEW.source := OLD.source;
END IF;
RETURN NEW;
END
$func$;
Step 3: Now we can identify the lacking update and set a default value instead:
CREATE OR REPLACE FUNCTION trg_tbl_upbef_step3()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
BEGIN
IF NEW.source IS NULL THEN
NEW.source := 'UPDATE default source'; -- optionally same as column default
END IF;
RETURN NEW;
END
$func$;
Triggers
The trigger for Step 2 is column-specific: it only fires if source is a target of the UPDATE!
CREATE TRIGGER upbef_step1
BEFORE UPDATE ON tbl
FOR EACH ROW
EXECUTE PROCEDURE trg_tbl_upbef_step1();
CREATE TRIGGER upbef_step2
BEFORE UPDATE OF source ON tbl -- key element!
FOR EACH ROW
EXECUTE PROCEDURE trg_tbl_upbef_step2();
CREATE TRIGGER upbef_step3
BEFORE UPDATE ON tbl
FOR EACH ROW
EXECUTE PROCEDURE trg_tbl_upbef_step3();
db<>fiddle here
Trigger names are relevant, because they are fired in alphabetical order (all being BEFORE UPDATE)!
The procedure could be simplified with something like "per-not-column triggers" or any other way to check the target-list of an UPDATE in a trigger. But I see no handle for this, currently (unchanged as of Postgres 14).
If source can be NULL, use any other "impossible" intermediate value and check for NULL additionally in trigger function 1:
IF OLD.source IS NOT DISTINCT FROM NEW.source THEN
NEW.source := '#impossible_value#';
END IF;
Adapt the rest accordingly.
Another way is to exploit JSON/JSONB functions that come in recent versions of PostgreSQL. It has the advantage of working both with anything that can be converted to a JSON object (rows or any other structured data), and you don't even need to know the record type.
To find the differences between any two rows/records, you can use this little hack:
SELECT pre.key AS columname, pre.value AS prevalue, post.value AS postvalue
FROM jsonb_each(to_jsonb(OLD)) AS pre
CROSS JOIN jsonb_each(to_jsonb(NEW)) AS post
WHERE pre.key = post.key AND pre.value IS DISTINCT FROM post.value
Where OLD and NEW are the built-in records found in trigger functions representing the pre and after state respectively of the changed record. Note that I have used the table aliases pre and post instead of old and new to avoid collision with the OLD and NEW built-in objects. Note also the use of IS DISTINCT FROM instead of a simple != or <> to handle NULL values appropriately.
Of course, this will also work with any ROW constructor such as ROW(1,2,3,...) or its short-hand (1,2,3,...). It will also work with any two JSONB objects that have the same keys.
For example, consider an example with two rows (already converted to JSONB for the purposes of the example):
SELECT pre.key AS columname, pre.value AS prevalue, post.value AS postvalue
FROM jsonb_each('{"col1": "same", "col2": "prediff", "col3": 1, "col4": false}') AS pre
CROSS JOIN jsonb_each('{"col1": "same", "col2": "postdiff", "col3": 1, "col4": true}') AS post
WHERE pre.key = post.key AND pre.value IS DISTINCT FROM post.value
The query will show the columns that have changed values:
columname | prevalue | postvalue
-----------+-----------+------------
col2 | "prediff" | "postdiff"
col4 | false | true
The cool thing about this approach is that it is trivial to filter by column. For example, imagine you ONLY want to detect changes in columns col1 and col2:
SELECT pre.key AS columname, pre.value AS prevalue, post.value AS postvalue
FROM jsonb_each('{"col1": "same", "col2": "prediff", "col3": 1, "col4": false}') AS pre
CROSS JOIN jsonb_each('{"col1": "same", "col2": "postdiff", "col3": 1, "col4": true}') AS post
WHERE pre.key = post.key AND pre.value IS DISTINCT FROM post.value
AND pre.key IN ('col1', 'col2')
The new results will exclude col3 even if its value has changed:
columname | prevalue | postvalue
-----------+-----------+------------
col2 | "prediff" | "postdiff"
It is easy to see how this approach can be extended in many ways. For example, say you want to throw an exception if certain columns are updated. You can achieve this with a universal trigger function, that is, one that can be applied to any/all tables, without having to know the table type:
CREATE OR REPLACE FUNCTION yourschema.yourtriggerfunction()
RETURNS TRIGGER AS
$$
DECLARE
immutable_cols TEXT[] := ARRAY['createdon', 'createdby'];
BEGIN
IF TG_OP = 'UPDATE' AND EXISTS(
SELECT 1
FROM jsonb_each(to_jsonb(OLD)) AS pre, jsonb_each(to_jsonb(NEW)) AS post
WHERE pre.key = post.key AND pre.value IS DISTINCT FROM post.value
AND pre.key = ANY(immutable_cols)
) THEN
RAISE EXCEPTION 'Error 12345 updating table %.%. Cannot alter these immutable cols: %.',
TG_TABLE_SCHEMA, TG_TABLE_NAME, immutable_cols;
END IF;
RETURN NEW; -- a BEFORE trigger must return a row, or control reaches end without RETURN
END
$$
LANGUAGE plpgsql VOLATILE;
You would then register the above trigger function to any and all tables you want to control via:
CREATE TRIGGER yourtriggername
BEFORE UPDATE ON yourschema.yourtable
FOR EACH ROW EXECUTE PROCEDURE yourschema.yourtriggerfunction();
In plpgsql you could do something like this in your trigger function:
IF NEW.column IS NULL THEN
NEW.column = 'default value';
END IF;
I arrived at another solution to a similar problem almost naturally, because my table contained a column with the semantics of 'last update timestamp' (let's call it UPDT).
So I decided that any update must set new values of source and UPDT together (or neither of them). Since UPDT is intended to change on every update, with such a policy the condition new.UPDT = old.UPDT tells you that no source was specified with the current update, and you can substitute the default one.
If your table already has a 'last update timestamp' column, this solution is simpler than creating three triggers. I'm not sure it's a good idea to add UPDT when it isn't otherwise needed. If updates are so frequent that identical timestamps are a risk, a sequence can be used instead of a timestamp.
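A sketch of that policy as a single BEFORE UPDATE trigger function (the column names updt and source and the default value are placeholders):

```sql
CREATE OR REPLACE FUNCTION trg_tbl_upbef_updt()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
BEGIN
   IF NEW.updt = OLD.updt THEN           -- UPDT was not set by this UPDATE ...
      NEW.source := 'default source';    -- ... so no source was specified either
      NEW.updt   := now();
   END IF;
   RETURN NEW;
END
$func$;
```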

Postgres: checking value before conditionally running an update or delete

I've got a fairly simple table which stores the records' authors in a text field as shown here:
CREATE TABLE "public"."test_tbl" (
"index" SERIAL,
"testdate" DATE,
"pfr_author" TEXT DEFAULT "current_user"(),
CONSTRAINT "test_tbl_pkey" PRIMARY KEY("index")
);
The user will never see the index or pfr_author fields, but I'd like them to be able to UPDATE the testdate field or DELETE whole records if they have permission and if they are the author. i.e. if test_tbl.pfr_author = CURRENT_USER THEN permit the UPDATE OR DELETE, but if not then raise an error message such as "Sorry, you do not have permission to edit this record.".
I have not gone down the route of using a trigger, as I figured that even if it executed before the row update, the user-requested update would still take place afterwards regardless.
I've tried doing this through a rule, but end up with infinite recursion as I put an update command inside the rule. Is there some way to do this using rules alone or a combination of a rule and trigger?
Thanks very much for any help!
Use a row level BEFORE trigger on UPDATE and DELETE to do this. Just have it return NULL when the operation is not permitted and the operation will be skipped.
http://www.postgresql.org/docs/9.0/interactive/trigger-definition.html
The trigger function had a problem, resulting in a recursive update loop. You should do it like this:
CREATE OR REPLACE FUNCTION "public"."test_tbl_trig_func" () RETURNS trigger AS $body$
BEGIN
IF NOT (old.pfr_author = "current_user"() OR "current_user"() = 'postgres') THEN
RETURN NULL; -- skip the UPDATE / DELETE silently
END IF;
IF TG_OP = 'DELETE' THEN
RETURN old; -- NEW is null in a DELETE trigger
END IF;
RETURN new;
END;
$body$ LANGUAGE plpgsql;
I tested it like this, and it works:
UPDATE test_tbl SET testdate = CURRENT_DATE WHERE test_tbl."index" = 2;
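For completeness, the trigger itself is not shown in the answer; the wiring would presumably look like this (the trigger name is made up):

```sql
CREATE TRIGGER test_tbl_guard
BEFORE UPDATE OR DELETE ON public.test_tbl
FOR EACH ROW EXECUTE PROCEDURE public.test_tbl_trig_func();
```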