I am doing a simple exercise: given a Missioni view, I have to recalculate its data on new insertions, and to do this I need a trigger. This is what I have done so far:
CREATE VIEW Missioni AS
SELECT d.Codice, SUM(v.Chilometri) AS KmTotali, SUM(v.Chilometri) * a.CostoKm AS CostoTotale
FROM Dipendente d join Viaggio v on v.Dipendente = d.Codice join Auto a on a.Targa = v.Auto
GROUP BY d.Codice, a.CostoKm;
CREATE OR REPLACE FUNCTION CALCULATE_MISSION()
RETURNS trigger AS $CALCULATE_MISSION$
BEGIN
DELETE FROM Missioni;
INSERT INTO Missioni SELECT Dipendente, SUM(Chilometri), SUM(Chilometri * CostoKm) FROM Viaggio v JOIN AUTO a ON a.targa = v.auto GROUP BY Dipendente;
END;
$CALCULATE_MISSION$ LANGUAGE plpgsql;
CREATE TRIGGER CalcolaVistaOneShot AFTER INSERT ON Viaggio
FOR EACH STATEMENT
EXECUTE FUNCTION CALCULATE_MISSION();
At the moment I am running this in the pgAdmin 4 query editor and it gives me the following error:
ERROR: cannot delete from view "missioni"
DETAIL: Views containing GROUP BY are not automatically updatable.
HINT: To enable deleting from the view, provide an INSTEAD OF DELETE trigger or an unconditional ON DELETE DO INSTEAD rule.
CONTEXT: SQL statement "DELETE FROM Missioni"
PL/pgSQL function calculate_mission() line 3 at SQL statement
Regardless of which type you want, views cannot be updated directly. With a standard view there is no DML to apply - the view is defined by its query, and that query is run each time the view is referenced. In that case there would be no need for a trigger at all, as there is nothing to do. A materialized view, on the other hand, is refreshed (see the documentation referenced previously) with the REFRESH MATERIALIZED VIEW statement. In that case your trigger consists of a single EXECUTE statement and RETURN NULL:
create or replace function calculate_mission()
returns trigger
language plpgsql
as $$
begin
execute 'refresh materialized view missioni';
return null;
end; $$;
A couple of notes: it seems you are attempting to maintain a live "as of now" value for KmTotali and CostoTotale. If the result set is small, a standard view will likely perform well enough. If the result set is large, then a materialized view likely performs better on SELECT, but DML operations on the tables involved must absorb the entire refresh time; this is especially problematic when DML activity on the underlying tables is heavy.
Additionally, you are refreshing only on INSERT; what happens to the calculated values when rows are updated or deleted? I have put together a small fictitious demo.
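For illustration, a minimal sketch of that setup - the view definition and trigger below reuse the table and column names from the question, but they are my assumptions, not the original demo (the plain view of the same name would need to be dropped first):
CREATE MATERIALIZED VIEW Missioni AS
SELECT d.Codice,
       SUM(v.Chilometri)             AS KmTotali,
       SUM(v.Chilometri * a.CostoKm) AS CostoTotale
FROM Dipendente d
JOIN Viaggio v ON v.Dipendente = d.Codice
JOIN Auto a    ON a.Targa = v.Auto
GROUP BY d.Codice;

CREATE TRIGGER CalcolaVistaOneShot
AFTER INSERT OR UPDATE OR DELETE ON Viaggio  -- refresh on every kind of change
FOR EACH STATEMENT
EXECUTE FUNCTION calculate_mission();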
Finally, spend some time and familiarize yourself with the Postgres Documentation. It is quite complete, although sometimes it can be difficult to navigate to exactly what you need.
Related
I'm trying to sort a table automatically by a specified column each time a new record is added (or removed or updated).
For that, I've created a function:
CREATE FUNCTION pid_cluster_function()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS $$
BEGIN
-- trigger logic
cluster verbose public.pid using pid_idx;
END;
$$
and added a trigger:
CREATE trigger pid_cluster_trigger
after INSERT or update or DELETE on public.pid
FOR EACH row
execute procedure pid_cluster_function();
but when adding a record
INSERT INTO public.pid (pid,pid_name) VALUES ('111','new 111');
I received this error:
SQL Error [55006]: ERROR: cannot CLUSTER "pid" because it is being used by active queries in this session
Where: SQL statement "cluster verbose public.pid using pid_idx"
PL/pgSQL function pid_cluster_function() line 5 at SQL statement
What is the reason for this error?
Or is it possible to achieve sorting by adding or modifying the records in some other way?
OK, thank you everyone in the comments. I see that my idea is not clever =)
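For reference, the usual alternative (a sketch, assuming pid is the intended sort key) is to let an index provide the ordering at query time instead of keeping the heap physically sorted:
-- the existing pid_idx index already supports this; no CLUSTER needed
SELECT * FROM public.pid ORDER BY pid;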
I have a simple PL/pgSQL block in Postgres 9.5 that loops over records in a table and conditionally updates some of the records.
Here's a simplified example:
DO $$
DECLARE
-- Define a cursor to loop through records
my_cool_cursor CURSOR FOR
SELECT
u.id AS user_id,
u.name AS user_name,
u.email AS user_email
FROM users u
;
BEGIN
FOR record IN my_cool_cursor LOOP
-- Simplified example:
-- If user's first name is 'Anjali', set email to NULL
IF record.user_name = 'Anjali' THEN
BEGIN
UPDATE users SET email = NULL WHERE id = record.user_id;
END;
END IF;
END LOOP;
END;
$$ LANGUAGE plpgsql;
I'd like to execute this block directly against my database (from my app, via the console, etc...). I do not want to create a FUNCTION() or stored procedure to do this operation.
The Issue
The issue is that the CURSOR and LOOP create a table-level lock on my users table, since everything between the outer BEGIN...END runs in a transaction. This blocks any other pending queries against it. If users is sufficiently large, this locks it up for several seconds or even minutes.
What I tried
I tried to COMMIT after each UPDATE so that it clears the transaction and the lock periodically. I was surprised to see this error message:
ERROR: cannot begin/end transactions in PL/pgSQL
HINT: Use a BEGIN block with an EXCEPTION clause instead.
I'm not quite sure how this is done. Is it asking me to raise an EXCEPTION to force a COMMIT? I tried reading the documentation on Trapping Errors but it only mentions ROLLBACK, so I don't see any way to COMMIT.
How do I periodically COMMIT a transaction inside the LOOP above?
More generally, is my approach even correct? Is there a better way to loop through records without locking up the table?
1.
You cannot COMMIT within a PostgreSQL function or DO command at all (plpgsql or any other PL). The error message you reported is to the point (as far as Postgres 9.5 is concerned):
ERROR: cannot begin/end transactions in PL/pgSQL
A procedure could do that in Postgres 11 or later. See:
PostgreSQL cannot begin/end transactions in PL/pgSQL
In PostgreSQL, what is the difference between a “Stored Procedure” and other types of functions?
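In Postgres 11 or later, a minimal sketch of what such a procedure could look like - the procedure name and the batch size of 1000 are illustrative assumptions, not part of the question:
CREATE OR REPLACE PROCEDURE null_anjali_emails()
LANGUAGE plpgsql AS
$$
DECLARE
    _rec   record;
    _count bigint := 0;
BEGIN
    FOR _rec IN SELECT id FROM users WHERE name = 'Anjali' AND email IS NOT NULL LOOP
        UPDATE users SET email = NULL WHERE id = _rec.id;
        _count := _count + 1;
        IF _count % 1000 = 0 THEN
            COMMIT;  -- allowed in procedures (Postgres 11+); the loop's cursor becomes holdable
        END IF;
    END LOOP;
END
$$;

CALL null_anjali_emails();  -- must not be called inside an explicit transaction block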
There are limited workarounds to achieve "autonomous transactions" in older versions:
How do I do large non-blocking updates in PostgreSQL?
Does Postgres support nested or autonomous transactions?
Do stored procedures run in database transaction in Postgres?
But you do not need any of this for the presented case.
2.
Use a simple UPDATE instead:
UPDATE users
SET email = NULL
WHERE user_name = 'Anjali'
AND email IS DISTINCT FROM NULL; -- optional improvement
This only locks the rows that are actually updated (with corner-case exceptions). And since it is much faster than a CURSOR over the whole table, the locks are also very brief.
The added AND email IS DISTINCT FROM NULL avoids empty updates. Related:
Update a column of a table with a column of another table in PostgreSQL
How do I (or can I) SELECT DISTINCT on multiple columns?
It's rare that explicit cursors are useful in plpgsql functions.
If you want to avoid locking rows for a long time, you could also define a cursor WITH HOLD, for example using the DECLARE SQL statement.
Such cursors can be used across transaction boundaries, so you could COMMIT after a certain number of updates. The price you are paying is that the cursor has to be materialized on the database server.
Since you cannot use transaction statements in functions, you will either have to use a procedure or commit in your application code.
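A rough sketch of that cursor approach at the SQL level (the batch size and the FETCH/UPDATE loop driven by the application are assumptions):
BEGIN;
DECLARE my_cool_cursor CURSOR WITH HOLD FOR
    SELECT id FROM users WHERE name = 'Anjali';
COMMIT;  -- the WITH HOLD cursor is materialized here and survives the commit

FETCH 1000 FROM my_cool_cursor;  -- read a batch of ids; UPDATE those rows, then COMMIT
-- ... repeat FETCH / UPDATE / COMMIT from the application until no rows remain ...
CLOSE my_cool_cursor;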
New to Postgres and PL/pgSQL here.
How do I go about writing a PL/pgSQL function that performs different actions based on the type of operation (INSERT, DELETE, etc.) made to a table/record in a Postgres database?
You seem to be looking for a trigger.
In SQL, triggers are procedures that are called (fired) when a specific event happens on an object, for example when a table is updated, deleted from or inserted into. Triggers cover many use cases, such as implementing business integrity rules, cleaning data, auditing, security, ...
In Postgres, you should first define a PL/pgSQL function, and then reference it in the trigger declaration.
CREATE OR REPLACE FUNCTION my_table_function() RETURNS TRIGGER AS $my_table_trigger$
BEGIN
...
END
$my_table_trigger$ LANGUAGE plpgsql;
CREATE TRIGGER my_table_trigger
AFTER INSERT OR UPDATE OR DELETE ON mytable
FOR EACH ROW EXECUTE PROCEDURE my_table_function();
From within the trigger code, you have access to a set of special variables such as:
NEW, OLD : pseudo records that contain new/old database records affected by the query
TG_OP : operation that fired the trigger (INSERT, UPDATE, DELETE, ...)
Using these variables and other trigger mechanisms, you can analyze or alter the ongoing operation, or even abort it by raising an exception.
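For example, a minimal sketch of a trigger function that branches on TG_OP (assuming mytable has an id column; the RAISE NOTICE lines stand in for real logic):
CREATE OR REPLACE FUNCTION my_table_function() RETURNS TRIGGER AS $my_table_trigger$
BEGIN
    IF TG_OP = 'INSERT' THEN
        RAISE NOTICE 'inserted row with id %', NEW.id;  -- NEW holds the new row
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        RAISE NOTICE 'updated row with id %', NEW.id;   -- OLD and NEW are both available
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        RAISE NOTICE 'deleted row with id %', OLD.id;   -- only OLD is available
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$my_table_trigger$ LANGUAGE plpgsql;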
I would recommend reading the Postgres documentation for the CREATE TRIGGER statement and Trigger Procedures (the latter gives lots of examples).
I have two triggers on a table. One trigger is executed for each row when there is an insert or update on the table. The second trigger is executed for each row when there is an update on the table. Which trigger gets executed first in Oracle 10g when there is an update statement on a row in the table? Is there any order of execution for triggers in Oracle? If so, how can I set it?
The order in which the triggers will fire is arbitrary and not something that you can control in 10g. I believe, technically, it goes in the order that the triggers happened to be created but that's certainly not something that you'd want to count on.
In 11g, you can control the firing order of triggers. However you are almost always better off replacing the two triggers with one trigger that calls two stored procedures. So rather than
CREATE TRIGGER trg_1
BEFORE UPDATE ON t
FOR EACH ROW
BEGIN
<<do thing 1>>
END;
CREATE TRIGGER trg_2
BEFORE UPDATE ON t
FOR EACH ROW
BEGIN
<<do thing 2>>
END;
you would be much better served with something like
CREATE PROCEDURE p1( <<arguments>> )
AS
BEGIN
<<do thing 1>>
END;
CREATE PROCEDURE p2( <<arguments>> )
AS
BEGIN
<<do thing 2>>
END;
CREATE TRIGGER trg
BEFORE UPDATE ON t
FOR EACH ROW
BEGIN
p1( <<list of arguments>> );
p2( <<list of arguments>> );
END;
For versions before 11g, no, the order is unspecified. From 10g Release 2 docs:
For enabled triggers, Oracle automatically performs the following actions:
Oracle runs triggers of each type in a planned firing sequence when more than one trigger is fired by a single SQL statement. First, statement level triggers are fired, and then row level triggers are fired.
Oracle performs integrity constraint checking at a set point in time with respect to the different types of triggers and guarantees that triggers cannot compromise integrity constraints.
Oracle provides read-consistent views for queries and constraints.
Oracle manages the dependencies among triggers and schema objects referenced in the code of the trigger action
Oracle uses two-phase commit if a trigger updates remote tables in a distributed database.
Oracle fires multiple triggers in an unspecified, random order, if more than one trigger of the same type exists for a given statement; that is, triggers of the same type for the same statement are not guaranteed to fire in any specific order.
No order to trigger firing can be relied upon in 10g beyond the normal before statement, before row, after row, after statement order. In 11g a new FOLLOWS clause was added to the CREATE TRIGGER statement.
In Oracle 10g we cannot control the order of triggers created for the same timing point; they execute in an unspecified order, so we cannot say which trigger fires first. To overcome this problem, Oracle 11g introduced the FOLLOWS clause, which lets us control the execution order.
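A minimal sketch of that FOLLOWS clause (reusing the placeholder names from the example above):
CREATE TRIGGER trg_2
BEFORE UPDATE ON t
FOR EACH ROW
FOLLOWS trg_1
BEGIN
  <<do thing 2>>   -- now guaranteed to fire after trg_1 for the same event
END;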
I have to execute a loop in the database. This is only a one-time requirement.
Right now, after executing the function, I am dropping it.
Is there any good approach for creating temporary/disposable functions?
I needed to know how to do this for repeated use in a script I was writing. It turns out you can create a temporary function using the pg_temp schema. This schema is created on demand for your connection and is where temporary tables are stored; when your connection is closed or expires, the schema is dropped. It also turns out that if you create a function in this schema, the schema is created automatically. Therefore,
create function pg_temp.testfunc() returns text as
$$ select 'hello'::text $$ language sql;
will be a function that will stick around as long as your connection sticks around. No need to call a drop command.
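Calling it is as simple as (the call must be schema-qualified, as the next answer points out):
SELECT pg_temp.testfunc();  -- returns 'hello'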
A couple of additional notes on the smart trick in @crowmagnumb's answer:
The function must be schema-qualified at all times, even if pg_temp is in the search_path (like it is by default), according to Tom Lane to prevent Trojan horses:
CREATE FUNCTION pg_temp.f_inc(int)
RETURNS int AS 'SELECT $1 + 1' LANGUAGE sql IMMUTABLE;
SELECT pg_temp.f_inc(42);
f_inc
-----
43
A function created in the temporary schema is only visible inside the same session (just like temp tables). It's invisible to all other sessions (even for the same role). You could access the function as a different role in the same session after SET ROLE.
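For instance (other_role is a hypothetical role that exists in your cluster):
SET ROLE other_role;
SELECT pg_temp.f_inc(42);  -- still visible: same session, different role
RESET ROLE;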
You could even create a functional index based on this "temp" function:
CREATE INDEX foo_idx ON tbl (pg_temp.f_inc(id));
Thereby you create a plain index using a temporary function on a non-temp table. Such an index is visible to all sessions but still only valid for the creating session. The query planner will not use a functional index where the expression is not repeated in the query. Still a bit of a dirty trick. The index is dropped automatically when the session is closed - as a dependent object. Feels like this should not be allowed at all ...
If you just need to execute a function repeatedly and all you need is SQL, consider a prepared statement instead. It acts much like a temporary SQL function that dies at the end of the session. Not the same thing, though, and can only be used by itself with EXECUTE, not nested inside another query. Example:
PREPARE upd_tbl AS
UPDATE tbl t SET set_name = $2 WHERE tbl_id = $1;
Call:
EXECUTE upd_tbl(123, 'foo_name');
Details:
Split given string and prepare case statement
If you are using version 9.0, you can do this with the new DO statement:
http://www.postgresql.org/docs/current/static/sql-do.html
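A trivial sketch of such a one-off block:
DO $$
BEGIN
    RAISE NOTICE 'running one-off code without creating a function';
END
$$;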
With previous versions, you'll need to create the function, call it, and drop it again.
For ad hoc procedures, cursors aren't too bad. They are too inefficient for production use, however.
They will let you easily loop over SQL results in the database.