I am trying to understand how transactions work in Postgres and what happens when multiple commands try to work on the same table. My question relates to a small experiment I carried out.
Consider a table called experiment with a trigger (experiment_log) on it that is fired after every update, delete, or insert.
Now consider this function.
CREATE OR REPLACE FUNCTION test_func() RETURNS void AS $body$
DECLARE
    _q_txt text;
    version_var integer;
BEGIN
    EXECUTE 'DROP TRIGGER IF EXISTS experiment_log ON experiment';

    SELECT version INTO version_var FROM experiment;
    RAISE NOTICE 'VERSION AFTER DROPPING TRIGGER: %', version_var;

    EXECUTE 'SELECT pg_sleep(20);';

    SELECT version INTO version_var FROM experiment;
    RAISE NOTICE 'VERSION BEFORE RECREATING TRIGGER: %', version_var;

    EXECUTE 'CREATE TRIGGER experiment_log AFTER INSERT OR UPDATE OR DELETE ON experiment FOR EACH ROW EXECUTE PROCEDURE experiment_log_trigger_func();';
END;
$body$
LANGUAGE plpgsql;
So this function drops the trigger and waits 20 seconds before re-creating it. Any UPDATE performed on the experiment table while the function is sleeping blocks; I cannot update the table until test_func has finished executing.
Can anyone explain this behaviour? It seems I am missing something.
That is because DROP TRIGGER places an ACCESS EXCLUSIVE lock on the table, and the lock is held until the transaction ends, that is, for the whole duration of the function call.
If you want to disable a trigger temporarily, use
ALTER TABLE experiment DISABLE TRIGGER experiment_log;
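For example, the function from the question could then look roughly like this (just a sketch; note that ALTER TABLE also takes a lock that is held until the end of the transaction, so the transaction should still be kept short):
CREATE OR REPLACE FUNCTION test_func() RETURNS void AS $body$
BEGIN
    ALTER TABLE experiment DISABLE TRIGGER experiment_log;
    -- ... work that should not fire the trigger ...
    ALTER TABLE experiment ENABLE TRIGGER experiment_log;
END;
$body$
LANGUAGE plpgsql;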
I would like to give you a reference from the documentation, but the lock level of DROP TRIGGER is not documented. However, it is documented that SQL statements take such locks automatically:
Also, most PostgreSQL commands automatically acquire locks of appropriate modes to ensure that referenced tables are not dropped or modified in incompatible ways while the command executes.
There you can also find how long a lock is held:
Once acquired, a lock is normally held until the end of the transaction.
To find the lock taken by DROP TRIGGER, try this simple experiment:
CREATE TABLE t();
CREATE TRIGGER whatever BEFORE UPDATE ON t
FOR EACH ROW EXECUTE FUNCTION suppress_redundant_updates_trigger();
BEGIN; -- start a transaction
DROP TRIGGER whatever ON t;
SELECT mode FROM pg_locks
WHERE pid = pg_backend_pid() -- only locks for the current session
AND relation = 't'::regclass; -- only locks on "t"
mode
═════════════════════
AccessShareLock
AccessExclusiveLock
(2 rows)
COMMIT;
You see that an ACCESS SHARE lock and an ACCESS EXCLUSIVE lock are held on the table.
Related
There is a REFRESH MATERIALIZED VIEW CONCURRENTLY that takes several hours to run. One of the users regularly has to truncate one of the tables that the materialized view uses. This table is also used in multiple projects, so other users run SELECTs against it.
The problem is that the TRUNCATE stays blocked until the refresh finishes, anyone selecting from the table then gets completely stuck behind it, and the database ends up jammed. I've instructed the user to only run this TRUNCATE at a specific time, but he did not listen.
How can I create a trigger that prevents the user from running this TRUNCATE? Something along the lines of:
create trigger stop_truncate before truncate on the_table for each statement execute function stoptruncate();
CREATE OR REPLACE FUNCTION stoptruncate()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
BEGIN
if 'the refresh query is running'  -- pseudocode for the actual check
then raise exception 'cannot run truncate while background refresh is running';
end if;
RETURN NULL;
END;
$function$
;
A trigger comes too late to do anything about that. The trigger function is called after the table lock has been granted.
You could keep the user from waiting forever by setting a low default value for lock_timeout for that user:
ALTER ROLE trunc_user SET lock_timeout = '200ms';
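With that setting, a TRUNCATE that cannot get its lock within 200 ms fails right away instead of queueing behind the refresh and blocking everyone else. Roughly what the user would then see (the table name is just a placeholder):
TRUNCATE the_big_table;
ERROR:  canceling statement due to lock timeout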
I have a main procedure (p_proc_a) in which I create a temp table for logging (tmp_log). In the main procedure I call some other procedures (p_proc_b, p_proc_c). In each of these procedures I insert data into table tmp_log.
How do I save rows from tmp_log into a physical table (log) in case of exception before rollback?
create procedure p_proc_a()
language plpgsql
as $body$
declare
    v_message_text text;
begin
    create temp table tmp_log (log_message text) on commit drop;

    call p_proc_b();
    call p_proc_c();

    insert into log (log_message)
    select log_message from tmp_log;
exception
    when others then
        get stacked diagnostics
            v_message_text = message_text;
        insert into log (log_message)
        values (v_message_text);
end;
$body$;
What is a workaround to save the logs into a table and still roll back the changes from p_proc_b and p_proc_c?
That is not possible in PostgreSQL.
The typical workaround is to use dblink to connect to the database itself and write the logs via dblink.
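A rough sketch of how the exception handler could do that (the connection string is an assumption and may need host/user/password settings, and the dblink extension must be installed):
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE PROCEDURE p_proc_a()
LANGUAGE plpgsql
AS $body$
DECLARE
    v_message_text text;
BEGIN
    CREATE TEMP TABLE tmp_log (log_message text) ON COMMIT DROP;
    CALL p_proc_b();
    CALL p_proc_c();
    INSERT INTO log (log_message)
    SELECT log_message FROM tmp_log;
EXCEPTION
    WHEN OTHERS THEN
        GET STACKED DIAGNOSTICS v_message_text = message_text;
        -- dblink opens a second connection that commits on its own,
        -- so this insert survives the rollback of the calling transaction
        PERFORM dblink_exec(
            'dbname=' || current_database(),
            format('INSERT INTO log (log_message) VALUES (%L)', v_message_text)
        );
        RAISE;   -- re-raise so the caller still sees the error
END;
$body$;
Note that the contents of tmp_log are rolled back as well when the exception is caught, so if the individual log lines written by p_proc_b and p_proc_c must survive, they would have to be written through dblink too.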
I found three solutions to store data within a transaction (in my case, for debugging purposes) and still be able to see that data after rolling back the transaction.
I have a scenario where I use the following kind of block, so this may not apply to your scenario:
DO $$
BEGIN
...
ROLLBACK;
END;
$$;
The first two solutions were suggested to me in the Postgres Slack; the third one I found and tried after talking with them, an approach that works in other databases.
Solutions
1 - Using DBLink
I don't remember exactly how it was done, but you install some extension, connect to another database (which can perhaps even be this same database), and write through that connection, which does not seem to be affected by the current transaction.
2 - Using the COPY command
Using something like:
COPY (SELECT ...) TO PROGRAM 'psql -c "COPY xyz FROM stdin"'
BTW, I never used it myself; it seems to require superuser permission, and I'm not sure how exactly it is invoked or how it outputs the data.
3 - Using Sub-Transactions
Here you use a sub-transaction (I'm not sure that is the correct term; it should probably be called an autonomous transaction) to commit the result you want to keep.
In my case the command looks like the following. I used a temp table, but it seems (I'm not sure) to work with a regular table as well.
CREATE TEMP TABLE IF NOT EXISTS myZone AS
SELECT * FROM public."Zone"
LIMIT 0;

DO $$
BEGIN
    INSERT INTO public."Zone" (...) VALUES (...);
    BEGIN
        INSERT INTO myZone
        SELECT * FROM public."Zone";
        COMMIT;
    END;
    ROLLBACK;
END; $$;

SELECT * FROM myZone;
DROP TABLE myZone;
Don't ask what the purpose of doing this is; I'm creating a test scenario and wanted to keep track of what I had done so far. Since this block does not support returning the result of a SELECT, I had to do something else, and I wanted a clean report rather than raised errors.
According to www.techtarget.com:
"Autonomous transactions allow a single transaction to be subdivided into multiple commit/rollback transactions, each of which will be tracked for auditing purposes. When an autonomous transaction is called, the original transaction (calling transaction) is temporarily suspended."
(This text was taken from Google's cached copy of 2022-10-11; the website itself could not be opened due to an e-mail validation issue.)
Also, this term seems to come from Oracle, as this article relates.
EDITED:
Removing solution 3 as it won't work.
Postgres 11 claims to support autonomous transactions, but it's not what we might expect...
What Postgres offers for this is the SAVEPOINT:
SAVEPOINT <name of savepoint>;
<... CODE ...>
RELEASE SAVEPOINT <name of savepoint>;      -- keep that part of the work
ROLLBACK TO SAVEPOINT <name of savepoint>;  -- or discard it
Now the issue is:
If you use a nested BEGIN, a COMMIT inside the nested code commits everything, and a ROLLBACK in the outer block then does nothing (it cannot roll back anything that happened before the inner COMMIT).
If you use SAVEPOINT, it can only roll back part of the code, and even if you RELEASE it, a ROLLBACK in the outer block rolls back the savepoint's work too.
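A small illustration of that second point (the table name is made up):
BEGIN;
SAVEPOINT sp;
INSERT INTO my_log VALUES ('inner work');
RELEASE SAVEPOINT sp;   -- merges the subtransaction into the outer transaction
ROLLBACK;               -- and the outer rollback still discards the insert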
New to Postgres and PL/pgSQL here.
How do I go about writing a PL/pgSQL function that performs different actions based on the type of change (insert, update, delete, etc.) made to a table/record in a Postgres database?
You seem to be looking for a trigger.
In SQL, triggers are procedures that are called (fired) when a specific event happens on an object, for example when a table is updated, deleted from, or inserted into. Triggers can serve many use cases, such as implementing business integrity rules, cleaning data, auditing, security, ...
In Postgres, you should first define a PL/pgSQL function, and then reference it in the trigger declaration.
CREATE OR REPLACE FUNCTION my_table_function() RETURNS TRIGGER AS $my_table_trigger$
BEGIN
    ...
END;
$my_table_trigger$ LANGUAGE plpgsql;
CREATE TRIGGER my_table_trigger
AFTER INSERT OR UPDATE OR DELETE ON mytable
FOR EACH ROW EXECUTE PROCEDURE my_table_function();
From within the trigger code, you have access to a set of special variables such as:
NEW, OLD: pseudo-records that contain the new/old database row affected by the statement
TG_OP: the operation that fired the trigger (INSERT, UPDATE, DELETE, ...)
Using these variables and other triggers mechanisms, you can analyze or alter the on-going operation, or even abort it by raising an exception.
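For example, a minimal sketch of what could go into the trigger function above, branching on the operation (the id column is an assumption):
CREATE OR REPLACE FUNCTION my_table_function() RETURNS TRIGGER AS $my_table_trigger$
BEGIN
    IF TG_OP = 'INSERT' THEN
        RAISE NOTICE 'inserted row with id %', NEW.id;
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        RAISE NOTICE 'updated row with id %', OLD.id;
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        RAISE NOTICE 'deleted row with id %', OLD.id;
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$my_table_trigger$ LANGUAGE plpgsql;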
I would recommend reading the Postgres documentation for the CREATE TRIGGER statement and for trigger procedures (the latter gives lots of examples).
I want to prevent any row with VERSIONID=1 from being deleted in a certain table. I also want to log such attempts in an audit table so we can see when this happens. I'm trying to do this with a trigger:
CREATE TRIGGER TPMDBO.PreventVersionDelete
BEFORE DELETE ON TPM_PROJECTVERSION
FOR EACH ROW
DECLARE
BEGIN
IF( :old.VERSIONID = 1 )
THEN
INSERT INTO TPM_AUDIT VALUES ('Query has attempted to delete root project version!', sysdate);
RAISE_APPLICATION_ERROR( -20001, 'Query has attempted to delete root project version!' );
END IF;
END;
I get the following results:
SQL> delete from TPM_PROJECTVERSION where PROJECTID=70 and VERSIONID=1;
delete from TPM_PROJECTVERSION where PROJECTID=70 and VERSIONID=1
*
ERROR at line 1:
ORA-20001: Query has attempted to delete root project version!
ORA-06512: at "TPMDBO.PREVENTVERSIONDELETE", line 6
ORA-04088: error during execution of trigger 'TPMDBO.PREVENTVERSIONDELETE'
However, the table TPM_AUDIT is empty. Am I doing something wrong?
If your trigger raises an error, the DELETE statement fails and the transaction is rolled back to the implicit savepoint that is created before the statement is run. That means that any changes made by the trigger are rolled back as well.
You can work around this by using autonomous transactions. Something like
CREATE PROCEDURE write_audit
AS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO tpm_audit
VALUES( 'Query has attempted to delete root project version!',
sysdate );
commit;
END;
CREATE TRIGGER TPMDBO.PreventVersionDelete
BEFORE DELETE ON TPM_PROJECTVERSION
FOR EACH ROW
DECLARE
BEGIN
IF( :old.VERSIONID = 1 )
THEN
write_audit;
RAISE_APPLICATION_ERROR( -20001, 'Query has attempted to delete root project version!' );
END IF;
END;
This will put the INSERT into TPM_AUDIT into a separate transaction that can be committed outside the context of the DELETE statement. Be very careful about using autonomous transactions, however:
If you ever find yourself using autonomous transactions for anything other than writing to a log table, you're almost certainly doing something wrong.
Code in a PL/SQL block declared using autonomous transactions is truly autonomous so it cannot see uncommitted changes made by the current session.
Because of write consistency, it is entirely possible that Oracle will partially execute a DELETE statement, firing the row-level trigger a number of times, roll back that work, and then re-execute the DELETE. That silent rollback, however, will not roll back the changes made by the autonomous transaction. So it is entirely possible that a single DELETE of a single row would actually cause the trigger to be fired more than once and, therefore, create multiple rows in TPM_AUDIT.
If you can create a UNIQUE constraint on the TPM_PROJECTVERSION pk columns + the version column, then you can create a second table that would reference those rows.
Trying to delete a row in TPM_PROJECTVERSION would then fail because child rows are present. This would at least throw an error in your application and prevent the deletion.
The other table could be automatically populated through an insert trigger on TPM_PROJECTVERSION.
If you revoke the DELETE privilege on that helper table, it would never be possible to remove those rows.
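A rough sketch of that approach (the guard table, constraint, and trigger names are made up; PROJECTID and VERSIONID are assumed to identify a version row):
-- skip this if (PROJECTID, VERSIONID) is already the primary key
ALTER TABLE tpm_projectversion
  ADD CONSTRAINT tpm_projectversion_uq UNIQUE (projectid, versionid);

CREATE TABLE tpm_version_guard (
  projectid NUMBER NOT NULL,
  versionid NUMBER NOT NULL,
  CONSTRAINT tpm_version_guard_fk
    FOREIGN KEY (projectid, versionid)
    REFERENCES tpm_projectversion (projectid, versionid)
);

CREATE OR REPLACE TRIGGER tpm_projectversion_guard
AFTER INSERT ON tpm_projectversion
FOR EACH ROW
WHEN (NEW.versionid = 1)
BEGIN
  -- every root version gets a child row that blocks deletion of its parent
  INSERT INTO tpm_version_guard (projectid, versionid)
  VALUES (:NEW.projectid, :NEW.versionid);
END;
A DELETE of a row with VERSIONID = 1 then fails with ORA-02292 (child record found), and revoking DELETE on tpm_version_guard keeps users from clearing the guard rows.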
I believe you need to COMMIT the INSERT operation before calling RAISE_APPLICATION_ERROR, which rolls back the transaction.
Hey, I'm trying to create a trigger in my Oracle database that sets thumbnail to 0 on all records other than the one that has just been changed and fired the trigger. Because I am updating records in the same table as the one that fired the trigger, I got the mutating table error. To solve this, I put the code in an autonomous transaction; however, this causes a deadlock.
Trigger code:
CREATE OR REPLACE TRIGGER check_thumbnail AFTER INSERT OR UPDATE OF thumbnail ON photograph
FOR EACH ROW
BEGIN
IF :new.thumbnail = 1 THEN
check_thumbnail_set_others(:new.url);
END IF;
END;
Procedure code:
CREATE OR REPLACE PROCEDURE check_thumbnail_set_others(p_url IN VARCHAR2)
IS PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
UPDATE photograph SET thumbnail = 0 WHERE url <> p_url;
COMMIT;
END;
I assume I'm causing a deadlock because the trigger is launching itself within itself. Any ideas?
Using an autonomous transaction for this sort of thing is almost certainly a mistake. What happens if the transaction that inserted the new thumbnail needs to rollback? You've already committed the change to the other rows in the table.
If you want the data to be transactionally consistent, you would need multiple triggers and some way of storing state. The simplest option would be to create a package with a collection of thumbnail.url%type then create three triggers on the table. A before statement trigger would clear out the collection. A row-level trigger would insert the :new.url value into the collection. An after statement trigger would then read the values from the collection and call the check_thumbnail_set_others procedure (which would not be an autonomous transaction).
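A rough sketch of that pattern (package and trigger names are made up, and check_thumbnail_set_others would lose its PRAGMA AUTONOMOUS_TRANSACTION and COMMIT):
CREATE OR REPLACE PACKAGE thumbnail_state AS
  TYPE url_tab IS TABLE OF photograph.url%TYPE;
  g_urls url_tab := url_tab();
END thumbnail_state;

-- before the statement: clear the collection
CREATE OR REPLACE TRIGGER check_thumbnail_clear
BEFORE INSERT OR UPDATE OF thumbnail ON photograph
BEGIN
  thumbnail_state.g_urls := thumbnail_state.url_tab();
END;

-- for each affected row: remember the URL of any new thumbnail
CREATE OR REPLACE TRIGGER check_thumbnail_collect
AFTER INSERT OR UPDATE OF thumbnail ON photograph
FOR EACH ROW
WHEN (NEW.thumbnail = 1)
BEGIN
  thumbnail_state.g_urls.EXTEND;
  thumbnail_state.g_urls(thumbnail_state.g_urls.LAST) := :NEW.url;
END;

-- after the statement: the table is no longer mutating, so update the others
CREATE OR REPLACE TRIGGER check_thumbnail_apply
AFTER INSERT OR UPDATE OF thumbnail ON photograph
DECLARE
  -- work on a local copy: the UPDATE issued by the procedure fires these
  -- triggers again, which clears the package collection
  l_urls thumbnail_state.url_tab := thumbnail_state.g_urls;
BEGIN
  FOR i IN 1 .. l_urls.COUNT LOOP
    check_thumbnail_set_others(l_urls(i));
  END LOOP;
END;
The original check_thumbnail row trigger would be dropped in favour of these three, and since everything now runs in the main transaction, rolling back the insert also rolls back the changes to the other rows.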