Keeping a column in sync with another column in Postgres - sql

I'm wondering if it's possible to have a column always kept in sync with another column in the same table.
Let this table be an example:
+------+-----------+
| name | name_copy |
+------+-----------+
| John | John      |
+------+-----------+
| Mary | Mary      |
+------+-----------+
I'd like to:
Be able to INSERT into this table providing a value only for the name column; the name_copy column should automatically take the value I used for name.
When UPDATE-ing the name column on a pre-existing row, name_copy should automatically update to match the new value of name.
Some solutions
I could do this in application code, but that would be unreliable: there is no guarantee every write goes through my code (what if someone changes the data through a DB client?).
What would be a safe, reliable, and easy way to tackle this in Postgres?

You can create a trigger. Simple trigger function:
create or replace function trigger_on_example()
returns trigger language plpgsql as $$
begin
    new.name_copy := new.name;
    return new;
end
$$;
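The function by itself does nothing until a trigger attaches it to the table. A minimal sketch, assuming the question's table is named example_tbl (a hypothetical name):
-- hypothetical table name example_tbl; fires before every INSERT or UPDATE
create trigger trg_example_sync
before insert or update on example_tbl
for each row execute procedure trigger_on_example();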
In Postgres 12+ there is a nice alternative in the form of generated columns.
create table my_table(
    id int,
    name text,
    name_copy text generated always as (name) stored);
Note that a generated column cannot be written to directly.
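For illustration, a quick test against my_table as defined above:
-- name_copy is derived automatically:
insert into my_table (id, name) values (1, 'John');
select name, name_copy from my_table;   -- John | John
-- writing to the generated column directly raises an error:
insert into my_table (id, name, name_copy) values (2, 'Mary', 'X');   -- fails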
Test both solutions in db<>fiddle.

Don't put name_copy into the table at all. One method is to expose the copy as a derived column in a view:
create view v_table as
select t.*, name as name_copy
from t;
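For illustration, assuming t has the name column from the question, writes keep targeting the base table and reads go through the view:
insert into t (name) values ('John');
select name, name_copy from v_table;   -- John | John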
That said, I don't really see a use for this.

Related

How to use a trigger to update a column value when a record is inserted into a related table?

I created the following PostgreSQL function and the corresponding trigger. When I insert data into the user_star_raider table, the stars column in the raider table should automatically increase by 1, but with the current function and trigger I do not get the expected result. Can you help me?
raider table:
 raider_id | stars | visits
-----------+-------+--------
     13243 |     0 |   4525
user_star:
 user_id | raider_id | star_date
---------+-----------+-----------
I created this function and trigger:
CREATE OR REPLACE FUNCTION auto_increase_star() RETURNS TRIGGER AS $$
BEGIN
    UPDATE raider SET stars = stars + 1 WHERE raider.raider_id = new.raider_id;
    RETURN new;
END
$$
LANGUAGE plpgsql;

CREATE TRIGGER add_user_star_raider_trigger
AFTER INSERT ON user_star_raider
FOR EACH ROW EXECUTE PROCEDURE auto_increase_star();
When I insert a row ('1900327840#qq.com', '13243', '2020-07-25') into user_star_raider, I want the raider stars column to be increased by 1.
Is there any good solution?
As mentioned in the comment, your code works. Here is a db<>fiddle.
I realize that I made one change in the db<>fiddle to facilitate typing -- and that might be the cause of your problem. I changed the raider_id from a string to a number: fewer pesky quotes to deal with.
That, in turn, means that the values necessarily match; integers have the nice property that what you see is what you get. However, with strings, strange things can happen -- such as hidden characters or look-alikes from different character sets. What may be happening is that everything works, but the raider_id values don't match between the tables. My advice is to set up a foreign key relationship, to be sure that the raider_id in the second table is a valid raider_id based on the first table.
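A sketch of that constraint, using the column names from the question and a hypothetical constraint name (it requires raider.raider_id to be a primary key or at least unique):
-- hypothetical constraint name; raider.raider_id must be PRIMARY KEY or UNIQUE
ALTER TABLE user_star_raider
    ADD CONSTRAINT fk_user_star_raider_raider
    FOREIGN KEY (raider_id) REFERENCES raider (raider_id);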

How to create a trigger that tracks changes to specific columns?

In a PostgreSQL database I have a table called SURVEYS which looks like this:
| ID (uuid)                            | name (varchar) | status (boolean) | update_at (timestamp) |
|--------------------------------------|----------------|------------------|-----------------------|
| 9bef1274-f1ee-4879-a60e-16e94e88df38 | Doom           | 1                | 2019-03-26 00:00:00   |
As you can see, the table has the columns status and update_at.
My task is to create a trigger that will run a function when the user updates the value in the status column to 2 and changes the value in the update_at column. In the function I would use the ID of the entry that was changed. I created such a trigger. Is it correct to check the column values in the trigger, or do I need to check them in the function? I am a little bit confused.
CREATE TRIGGER CHECK_FOR_UPDATES_IN_SURVEYS
BEFORE UPDATE ON SURVEYS
FOR EACH ROW
WHEN
    (OLD.update_at IS DISTINCT FROM NEW.update_at)
    AND
    (OLD.condition IS DISTINCT FROM NEW.condition AND NEW.condition = 2)
EXECUTE PROCEDURE CREATE_SURVEYS_QUESTIONS_RELATIONSHIP(NEW.id);
Your trigger looks just fine.
There is only one slight syntax problem: the whole WHEN clause has to be surrounded by parentheses.
Also, you cannot pass anything but a constant to the trigger function. But you don't have to do that at all: NEW will be available in the trigger function automatically.
So you could write it like this:
CREATE TRIGGER CHECK_FOR_UPDATES_IN_SURVEYS
BEFORE UPDATE ON SURVEYS
FOR EACH ROW
WHEN (OLD.update_at IS DISTINCT FROM NEW.update_at
      AND OLD.condition IS DISTINCT FROM NEW.condition
      AND NEW.condition = 2)
EXECUTE PROCEDURE CREATE_SURVEYS_QUESTIONS_RELATIONSHIP();
It is always preferable to check conditions in the trigger definition, because that will save you unnecessary function calls.

Create a trigger for a logbook in postgresql and get data from 2 distinct tables after a delete

Hello, my problem is the following: I have two tables. The first is called connection and has these columns:
 boxnum (PK) | date | partnum
boxnum is the primary key.
Then there is the market table, which has these fields:
 boxnumm (PK, FK) | entrydate | exitdate | existence (boolean)
What I want is that every time a record is deleted from market, it gets registered in the table called logbook.
Logbook table:
 ID | boxnum | entrydatem | exitdatem | partnum
This is easy using a trigger fired by a DELETE. The problem is that I need the connection boxnum to be joined to the market boxnumm, so I can get the partnum that the removed record had at that moment. What I have is this:
CREATE OR REPLACE FUNCTION insertar_trigger() RETURNS TRIGGER AS $insertar$
BEGIN
    INSERT INTO public.logbook (boxnum, entrydatem, exitdatem, partnum)
    SELECT old.boxnumm, old.entrydate, old.exitdate, partnum
    FROM public.market me
    INNER JOIN public.connection cp ON me.boxnumm = cp.boxnum
    WHERE cp.boxnum = old.boxnumm;
    RETURN NULL;
END;
$insertar$ LANGUAGE plpgsql;

CREATE TRIGGER insertar_bitacora BEFORE DELETE
ON market FOR EACH ROW
EXECUTE PROCEDURE insertar_trigger();
As you can see, I use BEFORE DELETE. The trigger works very well and saves the data I want, but in the market table the record is never actually erased: it appears as deleted, yet if I query the table again the supposedly deleted rows are still there. I then changed BEFORE to AFTER, but that made the WHERE part match nothing. I do not know how to fix it; if you could help me I would appreciate it.
Quote from the manual
Row-level triggers fired BEFORE can return null to signal the trigger manager to skip the rest of the operation for this row (i.e., subsequent triggers are not fired, and the INSERT/UPDATE/DELETE does not occur for this row) [...] Note that NEW is null in DELETE triggers, so returning that is usually not sensible. The usual idiom in DELETE triggers is to return OLD
(emphasis mine)
You are returning NULL from your BEFORE trigger. So your trigger function inserts the row into the logbook table, but the original DELETE is cancelled.
If you change RETURN NULL; to RETURN OLD; it should work.
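For completeness, the question's function with only the return value changed:
CREATE OR REPLACE FUNCTION insertar_trigger() RETURNS TRIGGER AS $insertar$
BEGIN
    INSERT INTO public.logbook (boxnum, entrydatem, exitdatem, partnum)
    SELECT old.boxnumm, old.entrydate, old.exitdate, partnum
    FROM public.market me
    INNER JOIN public.connection cp ON me.boxnumm = cp.boxnum
    WHERE cp.boxnum = old.boxnumm;
    RETURN OLD;   -- returning OLD lets the DELETE go ahead
END;
$insertar$ LANGUAGE plpgsql;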

Postgres trigger to combine many columns into one JSON column

I'd like to create a Postgres trigger on a table to run when a row gets inserted or updated. The table has many columns, and I'd like the trigger to insert that row into another table. But in that other table, all those columns should be combined into one JSON object (JSONB in newer versions of Postgres).
original table
column1 | column2 | column3 |
--------|---------|---------|
A       | B       | C       |
new table
combined_column                        |
---------------------------------------|
{ column1: A, column2: B, column3: C } |
So the table that the trigger is created on would have for example 3 columns, but the table that the trigger inserts into would have only 1 column (a JSON object combining all the columns for the inserted/updated row in the original table).
It would be more efficient to save rows in original form. No transformation needed, occupies less disk space, faster, cleaner.
Just create a log table with identical structure:
CREATE TABLE tbl_log AS TABLE tbl LIMIT 0;
Or use the LIKE keyword to specify more closely what to take from the original with INCLUDING clauses. Example:
CREATE TABLE tbl_log (LIKE tbl INCLUDING STORAGE);
Trigger function:
CREATE OR REPLACE FUNCTION trg_tbl_log()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
BEGIN
INSERT INTO tbl_log VALUES (NEW.*);
RETURN NEW;
END
$func$;
Trigger:
CREATE TRIGGER tbl_log
BEFORE INSERT OR UPDATE ON tbl
FOR EACH ROW EXECUTE PROCEDURE trg_tbl_log();
In Postgres 11 or later, rather use cleaner syntax:
...
FOR EACH ROW EXECUTE FUNCTION trg_tbl_log();
You can easily transform the row into a json value if you need to, with row_to_json(). Or simpler, just to_json(). It might be better to use to_jsonb() and save jsonb instead of json:
...
INSERT INTO generic_js_log (json_column) SELECT to_jsonb(NEW);
...
JSON functions in the manual.
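If you do want the JSONB variant, a minimal sketch, assuming a one-column log table generic_js_log with a jsonb column json_column (the names used in the snippet above); the function and trigger names are hypothetical:
CREATE TABLE generic_js_log (json_column jsonb);

CREATE OR REPLACE FUNCTION trg_tbl_log_jsonb()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
BEGIN
   INSERT INTO generic_js_log (json_column)
   VALUES (to_jsonb(NEW));               -- the whole row as a single jsonb value
   RETURN NEW;
END
$func$;

CREATE TRIGGER tbl_log_jsonb
BEFORE INSERT OR UPDATE ON tbl
FOR EACH ROW EXECUTE PROCEDURE trg_tbl_log_jsonb();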

How to select a table dynamically with HSQLDB and Hibernate?

I have a table with references to other tables. Stored are the table name and the entity id.
Like this:
ref_table
 id | table_name | refId
----+------------+-------
  1 | test       |     6
  2 | test       |     9
  3 | other      |     5
Now I am trying to write an SQL function that returns the correct entity from the correct table. Something like:
SELECT * FROM resolveId(3)
I would expect to get the entity with the id "5" from the table "other". Is this possible? I would guess I can do it with a stored procedure (CREATE FUNCTION). The function would have to inspect the "ref_table" and return the name of the table to use in the SQL statement ... but how exactly?
If you want to use the resulting entities in select statements or joins, you should use CREATE FUNCTION with RETURNS TABLE ( .. ).
There is a limitation in HSQLDB routines which disallows dynamically creating SQL. Therefore the body of the CREATE FUNCTION may include a CASE or IF ELSE block that switches to a pre-defined SELECT statement based on the input value (1, 2, 3, ..).
The details of CREATE FUNCTION are documented here:
http://hsqldb.org/doc/2.0/guide/sqlroutines-chapt.html#N12CC4
There is one example for an SQL function with RETURNS TABLE.
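A rough sketch of such a function, assuming the referenced tables are test and other, each with an integer id and a name column (all hypothetical; the exact syntax should be checked against the linked guide):
-- hypothetical example: dispatch on ref_table.table_name with pre-defined SELECTs
CREATE FUNCTION resolveId(p_id INT)
  RETURNS TABLE (entity_id INT, entity_name VARCHAR(100))
  READS SQL DATA
  BEGIN ATOMIC
    DECLARE v_table VARCHAR(100);
    DECLARE v_ref INT;
    SET v_table = (SELECT table_name FROM ref_table WHERE id = p_id);
    SET v_ref   = (SELECT refId FROM ref_table WHERE id = p_id);
    IF v_table = 'test' THEN
      RETURN TABLE (SELECT t.id, t.name FROM test t WHERE t.id = v_ref);
    ELSE
      RETURN TABLE (SELECT o.id, o.name FROM other o WHERE o.id = v_ref);
    END IF;
  END
Depending on the HSQLDB version, the call site may also need the TABLE() wrapper, i.e. SELECT * FROM TABLE(resolveId(3)).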