I know it may sound odd, but is there any way I can call a trigger on a ROLLBACK event on a table? I was going through the PostgreSQL trigger documentation; there are events only for CREATE, UPDATE, DELETE and INSERT on a table.
My requirement: on transaction ROLLBACK, my trigger should select last_id from a table and reset the table's sequence with value = last_id + 1; in short, I want to preserve sequence values on rollback.
Any kind of ideas and feedback will be appreciated, guys!
You can't use a sequence for this. You need a single serialization point through which all inserts have to go - otherwise the "gapless" attribute cannot be guaranteed. You also need to make sure that no rows will ever be deleted from that table.
The serialization also means that only a single transaction can insert rows into that table - all other inserts have to wait until the "previous" insert has been committed or rolled back.
One pattern how this can be implemented is to have a table where the "sequence" numbers are stored. Let's assume we need this for invoice numbers, which have to be gapless for legal reasons.
So we first create the table to hold the "current value":
create table slow_sequence
(
  seq_name      varchar(100) not null primary key,
  current_value integer      not null default 0
);

-- create a "sequence" for invoices
insert into slow_sequence (seq_name) values ('invoice');
Now we need a function that will generate the next number but that guarantees that no two transactions can obtain the next number at the same time.
create or replace function next_number(p_seq_name text)
  returns integer
as
$$
  update slow_sequence
     set current_value = current_value + 1
   where seq_name = p_seq_name
  returning current_value;
$$
language sql;
The function will increment the counter and return the incremented value as a result. Due to the update, the row for the sequence is now locked and no other transaction can update that value. If the calling transaction is rolled back, so is the update to the sequence counter. If it is committed, the new value is persisted.
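To see the effect, here is a hedged sketch calling the function directly (assuming autocommit is turned off):

begin;
select next_number('invoice');  -- returns 1 and locks the 'invoice' row
rollback;                       -- the increment is rolled back as well

select next_number('invoice');  -- returns 1 again: no value was lost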
To ensure that every transaction uses the function, a trigger should be created.
Create the table in question:
create table invoice
(
  invoice_number integer not null primary key,
  customer_id    integer not null,
  due_date       date    not null
);
Now create the trigger function and the trigger:
create or replace function f_invoice_trigger()
  returns trigger
as
$$
begin
  -- the number is assigned unconditionally so that this can't
  -- be prevented by supplying a specific number
  new.invoice_number := next_number('invoice');
  return new;
end;
$$
language plpgsql;

create trigger invoice_trigger
  before insert on invoice
  for each row
  execute procedure f_invoice_trigger();
Now if one transaction does this:
insert into invoice (customer_id, due_date)
values (42, date '2015-12-01');
The new number is generated. A second transaction then needs to wait until the first insert is committed or rolled back.
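A hedged two-session sketch of that interaction:

-- session 1:
begin;
insert into invoice (customer_id, due_date)
values (42, date '2015-12-01');   -- gets invoice_number 1

-- session 2 (blocks on the locked slow_sequence row):
begin;
insert into invoice (customer_id, due_date)
values (43, date '2015-12-02');

-- session 1:
rollback;   -- releases the lock and undoes the increment

-- session 2 resumes; its row gets invoice_number 1, so no gap arises
commit;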
As I said: this solution is not scalable. Not at all. It will slow down your application massively if there are a lot of inserts into that table. But you can't have both: a scalable and a correct implementation of a gapless sequence.
I'm also pretty sure that there are edge cases that are not covered by the above code. So it's pretty likely that you can still wind up with gaps.
Related
I'm moving from MySQL to Postgres, and I noticed that when you delete rows in MySQL, the unique ids for those rows are re-used when you make new ones. With Postgres, if you create rows and delete them, the unique ids are not used again.
Is there a reason for this behaviour in Postgres? Can I make it act more like MySQL in this case?
Sequences have gaps to permit concurrent inserts. Attempting to avoid gaps or to re-use deleted IDs creates horrible performance problems. See the PostgreSQL wiki FAQ.
PostgreSQL SEQUENCEs are used to allocate IDs. These only ever increase, and they're exempt from the usual transaction rollback rules to permit multiple transactions to grab new IDs at the same time. This means that if a transaction rolls back, those IDs are "thrown away"; there's no list of "free" IDs kept, just the current ID counter. Sequences are also usually incremented if the database shuts down uncleanly.
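This is easy to see directly (a hedged sketch using a throwaway sequence):

CREATE SEQUENCE demo_seq;

BEGIN;
SELECT nextval('demo_seq');   -- returns 1
ROLLBACK;

SELECT nextval('demo_seq');   -- returns 2: the rolled-back value is discarded, leaving a gap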
Synthetic keys (IDs) are meaningless anyway. Their order is not significant, their only property of significance is uniqueness. You can't meaningfully measure how "far apart" two IDs are, nor can you meaningfully say if one is greater or less than another. All you can do is say "equal" or "not equal". Anything else is unsafe. You shouldn't care about gaps.
If you need a gapless sequence that re-uses deleted IDs, you can have one, you just have to give up a huge amount of performance for it - in particular, you cannot have any concurrency on INSERTs at all, because you have to scan the table for the lowest free ID, locking the table for write so no other transaction can claim the same ID. Try searching for "postgresql gapless sequence".
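For illustration, a hedged sketch of that scan-and-lock approach (the table thetable and its integer id column are assumptions for the example):

BEGIN;
LOCK TABLE thetable IN EXCLUSIVE MODE;  -- blocks all concurrent writers until COMMIT

SELECT CASE
         WHEN NOT EXISTS (SELECT 1 FROM thetable WHERE id = 1) THEN 1
         ELSE (SELECT min(t.id) + 1
               FROM thetable t
               WHERE NOT EXISTS (SELECT 1 FROM thetable x WHERE x.id = t.id + 1))
       END AS lowest_free_id;

-- INSERT a row with that id, then COMMIT to release the lock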
The simplest approach is to use a counter table and a function that gets the next ID. Here's a generalized version that uses a counter table to generate consecutive gapless IDs; it doesn't re-use IDs, though.
CREATE TABLE thetable_id_counter ( last_id integer not null );
INSERT INTO thetable_id_counter VALUES (0);

CREATE OR REPLACE FUNCTION get_next_id(countertable regclass, countercolumn text)
RETURNS integer AS $$
DECLARE
    next_value integer;
BEGIN
    EXECUTE format('UPDATE %s SET %I = %I + 1 RETURNING %I',
                   countertable, countercolumn, countercolumn, countercolumn)
    INTO next_value;
    RETURN next_value;
END;
$$ LANGUAGE plpgsql;
COMMENT ON FUNCTION get_next_id(regclass, text) IS 'Increment and return value from integer column $2 in table $1';
Usage:
INSERT INTO dummy(id, blah)
VALUES ( get_next_id('thetable_id_counter','last_id'), 42 );
Note that when one open transaction has obtained an ID, all other transactions that try to call get_next_id will block until the first transaction commits or rolls back. This is unavoidable for gapless IDs and is by design.
If you want to store multiple counters for different purposes in a table, just add a parameter to the above function, add a column to the counter table, and add a WHERE clause to the UPDATE that matches the parameter to the added column. That way you can have multiple independently-locked counter rows. Do not just add extra columns for new counters - they would all share the lock of the single row. A sketch of this variant follows below.
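A hedged sketch of that multi-counter variant (the names multi_id_counter, counter_name and get_next_id_for are made up for the example):

-- one row per counter; each row is locked independently
CREATE TABLE multi_id_counter (
    counter_name text PRIMARY KEY,
    last_id integer NOT NULL DEFAULT 0
);
INSERT INTO multi_id_counter (counter_name) VALUES ('invoices'), ('orders');

CREATE OR REPLACE FUNCTION get_next_id_for(countertable regclass, countercolumn text,
                                           namecolumn text, countername text)
RETURNS integer AS $$
DECLARE
    next_value integer;
BEGIN
    EXECUTE format('UPDATE %s SET %I = %I + 1 WHERE %I = %L RETURNING %I',
                   countertable, countercolumn, countercolumn,
                   namecolumn, countername, countercolumn)
    INTO next_value;
    RETURN next_value;
END;
$$ LANGUAGE plpgsql;

-- usage, e.g.:
-- SELECT get_next_id_for('multi_id_counter', 'last_id', 'counter_name', 'invoices');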
This function does not re-use deleted IDs, it just avoids introducing gaps.
To re-use IDs I advise ... not re-using IDs.
If you really must, you can do so by adding an ON INSERT OR UPDATE OR DELETE trigger on the table of interest that adds deleted IDs to a free-list side table and removes them from the free-list table when they're INSERTed. Treat an UPDATE as a DELETE followed by an INSERT. Now modify the ID generation function above so that it does a SELECT free_id INTO next_value FROM free_ids FOR UPDATE LIMIT 1 and, if a row is found, DELETEs that row; IF NOT FOUND, it gets a new ID from the generator table as normal. Here's an untested extension of the prior function to support re-use:
CREATE OR REPLACE FUNCTION get_next_id_reuse(countertable regclass, countercolumn text,
                                             freelisttable regclass, freelistcolumn text)
RETURNS integer AS $$
DECLARE
    next_value integer;
BEGIN
    EXECUTE format('SELECT %I FROM %s FOR UPDATE LIMIT 1', freelistcolumn, freelisttable)
    INTO next_value;
    IF next_value IS NOT NULL THEN
        EXECUTE format('DELETE FROM %s WHERE %I = %L', freelisttable, freelistcolumn, next_value);
    ELSE
        EXECUTE format('UPDATE %s SET %I = %I + 1 RETURNING %I',
                       countertable, countercolumn, countercolumn, countercolumn)
        INTO next_value;
    END IF;
    RETURN next_value;
END;
$$ LANGUAGE plpgsql;
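The row-level trigger that maintains the free list is only described above, not shown. Here is a hedged sketch, assuming the target table is dummy(id, blah) from the usage example and the free-list table is free_ids(free_id):

CREATE TABLE free_ids (free_id integer PRIMARY KEY);

CREATE OR REPLACE FUNCTION maintain_free_ids()
RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('DELETE', 'UPDATE') THEN
        -- the old ID becomes available again
        INSERT INTO free_ids (free_id) VALUES (OLD.id);
    END IF;
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        -- an ID that is (re-)used must leave the free list
        DELETE FROM free_ids WHERE free_id = NEW.id;
    END IF;
    RETURN NULL;  -- the return value is ignored for AFTER ... FOR EACH ROW triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER dummy_free_ids
AFTER INSERT OR UPDATE OR DELETE ON dummy
FOR EACH ROW EXECUTE PROCEDURE maintain_free_ids();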
I have two tables (this is a very simplified model of my use case):
- TableCounter with 2 columns: idEntry, counter
- TableObject with 2 columns: idEntry, seq (with the pair idEntry/seq unique)
I need to be able in 1 transaction to:
- increase counter for idEntry = x
- insert (x,new_counter_value) in the TableObject.
knowing that I must not lose any sequence value, and that this transaction is highly concurrent and called a lot.
How would you write such a transaction in a statement (not for a stored procedure)? Would you lock the row of TableCounter for idEntry = x?
So far, I have this, but I look for a better solution.
BEGIN TRANSACTION;
SELECT counter FROM TableCounter WHERE idEntry = 1 FOR UPDATE;
UPDATE TableCounter SET counter = counter + 1 WHERE idEntry = 1;
INSERT INTO TableObject (idEntry, seq)
SELECT TableCounter.idEntry, TableCounter.counter
FROM TableCounter
WHERE TableCounter.idEntry = 1;
COMMIT TRANSACTION;
Thank you
The select for update is useless if the next thing you do is to update the row anyway (this is true for any DBMS that supports select for update).
For Postgres this can be done in a single statement using a data modifying CTE:
with updated as (
  update tablecounter
     set counter = counter + 1
   where identry = 1
  returning identry, counter
)
insert into tableobject (identry, seq)
select identry, counter
from updated;
The update will lock the row, which means that any concurrent insert/update (for the same identry) will have to wait until the above is committed or rolled back.
If I (really) needed a gapless sequence and I could live with the scalability issues of such a solution (because the requirement is more important than performance or scalability), I would probably put that into a function. Something like the following:
Define the sequence (=counter) table
create table gapless_sequence
(
  entity         text    not null primary key,
  sequence_value integer not null default 0
);

-- "create" a new sequence
insert into gapless_sequence (entity) values ('some_table');
commit;
Now create a function that claims a new value
create function next_value(p_entity text)
  returns integer
as
$$
  update gapless_sequence
     set sequence_value = sequence_value + 1
   where entity = p_entity
  returning sequence_value;
$$
language sql;
Same as above: the transaction that acquires the next sequence for an entity will block all subsequent calls to the function for the same entity, until the first transaction is committed (or rolled back).
Now defining a table that uses the gapless sequence is quite easy:
create table some_table
(
  id          integer primary key default next_value('some_table'),
  some_column text
);
And then you simply do:
insert into some_table (some_column) values ('foo');
A concurrent insert into some_table would wait until the first transaction commits. The update will then see the committed value and return the appropriate next sequence value.
Of course this can also be done without using a default clause in the table definition, but then you would need to call the function explicitly in the insert statement:
insert into some_table
(id, some_column)
values
(next_value('some_table'), 'foo');
However, that has the potential pitfall that nothing forces you to use the correct entity name when calling the function.
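To illustrate (a hedged example with a deliberate typo):

insert into some_table
  (id, some_column)
values
  (next_value('some_tabel'), 'foo');  -- typo: no such entity

The update inside next_value() matches no row, the function returns null, and the insert fails with a not-null violation on the primary key. Worse, a wrong but existing entity name would silently draw numbers from another entity's counter.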
All the examples above assume that autocommit is turned off.
Using Postgres 9.4, I have 2 tables: streams and comment_replies. What I am trying to do is update the streams.comments count each time a new comment_replies row is inserted, to keep track of the number of comments a particular stream has. I am not getting any errors, but when I try to create a new comment it gets ignored.
This is how I am setting up my trigger. stream_id is a foreign key, so every stream_id will correspond to a streams.id, which is the primary key of the streams table. I have been looking at this example: Postgres trigger function, but haven't been able to get it to work.
CREATE TABLE comment_replies (
  id serial NOT NULL PRIMARY KEY,
  created_on timestamp without time zone,
  comments text,
  profile_id integer,
  stream_id integer
);
The trigger function:
CREATE OR REPLACE FUNCTION "Comment_Updates"()
RETURNS trigger AS
$BODY$BEGIN
update streams set streams.comments=streams.comments+1
where streams.id=comment_replies_streamid;
END$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
And the trigger:
CREATE TRIGGER comment_add
BEFORE INSERT OR UPDATE
ON comment_replies
FOR EACH ROW
EXECUTE PROCEDURE "Comment_Updates"();
How can I do this?
There are multiple errors. Try instead:
CREATE OR REPLACE FUNCTION comment_update()
  RETURNS trigger AS
$func$
BEGIN
   UPDATE streams s
   SET    comments = s.comments + 1
   -- SET comments = COALESCE(s.comments, 0) + 1  -- if the column can be NULL
   WHERE  s.id = NEW.stream_id;

   RETURN NEW;
END
$func$ LANGUAGE plpgsql;

CREATE TRIGGER comment_add
BEFORE INSERT OR UPDATE ON comment_replies  -- on UPDATE, too? Really?
FOR EACH ROW EXECUTE PROCEDURE comment_update();
You need to consider DELETE as well, if that is possible, and also UPDATEs that can change stream_id; a hedged sketch covering those cases follows after the list of points below. But why increase the count for every UPDATE? This looks like another error to me.
- It's a syntax error to table-qualify the target column in the SET clause of UPDATE.
- You need to return NEW in a BEFORE trigger unless you want to cancel the INSERT / UPDATE.
- Or you make it an AFTER trigger, which would work for this, too.
- You need to reference NEW for the stream_id of the current row (which is automatically visible inside the trigger function).
- If streams.comments can be NULL, use COALESCE.
- And rather use unquoted, legal, lower-case identifiers.
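As promised, a hedged sketch (not part of the original answer) of an AFTER trigger that also maintains the count for DELETE and for UPDATEs that move a row to another stream:

CREATE OR REPLACE FUNCTION comment_count_update()
  RETURNS trigger AS
$func$
BEGIN
   IF TG_OP IN ('UPDATE', 'DELETE') THEN
      UPDATE streams s
      SET    comments = s.comments - 1
      WHERE  s.id = OLD.stream_id;
   END IF;

   IF TG_OP IN ('INSERT', 'UPDATE') THEN
      UPDATE streams s
      SET    comments = s.comments + 1
      WHERE  s.id = NEW.stream_id;
   END IF;

   RETURN NULL;  -- the return value is ignored in AFTER triggers
END
$func$ LANGUAGE plpgsql;

CREATE TRIGGER comment_count
AFTER INSERT OR UPDATE OF stream_id OR DELETE ON comment_replies
FOR EACH ROW EXECUTE PROCEDURE comment_count_update();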
I have two tables in my project: accounts and transactions (one-to-many relationship). In every transaction I store the balance of the associated account (after the transaction is executed). Additionally in every transaction I store a value of the transaction.
So I needed a trigger fired when someone adds a new transaction. It should check whether the new account balance will be correct (old account balance + transaction value = new account balance stored in the transaction).
So it was suggested that I use a compound trigger, which would:
- in the before each row section: save the row's PK (made of two columns) somewhere,
- in the after statement section: check whether all inserted transactions were correct.
Now I can't find anywhere how I could implement the first point.
What I already have:
CREATE OR REPLACE TRIGGER check_account_balance_is_valid
FOR INSERT
ON Transactions
COMPOUND TRIGGER
TYPE Modified_transactions_T IS TABLE OF Transactions%ROWTYPE;
Modified_transactions Modified_transactions_T;
BEFORE STATEMENT IS BEGIN
Modified_transactions := Modified_transactions_T();
END BEFORE STATEMENT;
BEFORE EACH ROW IS BEGIN
Modified_transactions.extend;
Modified_transactions(Modified_transactions.last) := :NEW;
END BEFORE EACH ROW;
AFTER STATEMENT IS BEGIN
NULL; -- I will write something here later
END AFTER STATEMENT;
END check_account_balance_is_valid;
/
However, I got that:
Warning: execution completed with warning
11/58 PLS-00049: bad bind variable 'NEW'
Could someone tell me how to fix it? Or maybe my whole "compound trigger" idea is wrong and you have better suggestions.
Update 1
Here is my ddl script: http://pastebin.com/MW0Eqf9J
Maybe try this one:
TYPE Modified_transactions_T IS TABLE OF ROWID;
Modified_transactions Modified_transactions_T;

BEFORE STATEMENT IS
BEGIN
   Modified_transactions := Modified_transactions_T();
END BEFORE STATEMENT;

BEFORE EACH ROW IS
BEGIN
   Modified_transactions.extend;
   Modified_transactions(Modified_transactions.last) := :NEW.ROWID;
END BEFORE EACH ROW;
or this
TYPE PrimaryKeyRecType IS RECORD (
Col1 Transactions.PK_COL_1%TYPE, Col2 Transactions.PK_COL_2%TYPE);
TYPE Modified_transactions_T IS TABLE OF PrimaryKeyRecType;
...
Modified_transactions(Modified_transactions.last) := PrimaryKeyRecType(:NEW.PK_COL_1, :NEW.PK_COL_2);
Your immediate problem is that :new is not a real record, so it is not of type Transactions%ROWTYPE. If you're really going to go down this path, you would generally want to declare a collection of the primary key of the table:
TYPE Modified_transactions_T IS TABLE OF Transactions.Primary_Key%TYPE;
and then put just the primary key in the collection
BEFORE EACH ROW IS BEGIN
Modified_transactions.extend;
Modified_transactions(Modified_transactions.last) := :NEW.Primary_Key;
END BEFORE EACH ROW;
The fact that you are trying to work around a mutating table exception in the first place, however, almost always indicates that you have an underlying data modeling problem that you should really be solving. If you need to query other rows in the table in order to figure out what you want to do with the new rows, that's a pretty good indication that you have improperly normalized your data model and that one row has some dependency on another row in the same table rather than being an autonomous fact. Fixing the data model is almost always preferable to working around the mutating table exception.
I want to write a PostgreSQL trigger that will basically find out if a number appears in a column 5 or more times. If it appears a 5th time, I want to throw an exception. Here is how the table looks:
create table tab (
  first  integer not null constraint pk_part_id primary key,
  second integer constraint fk_super_part_id references bom,
  price  integer
);
insert into tab values(1,NULL,100), (2,1,50), (3,1,30), (4,2,20), (5,2,10), (6,3,20);
Above are the original inserts into the table. My trigger will occur upon inserting more values into the table.
Basically if a number appears in the 'second' column more than 4 times after inserting into the table, I want to raise an exception. Here is my attempt at writing the trigger:
create function check() return trigger as '
begin
if(select first, second, price
from tab
where second in (
select second from tab
group by second
having count(second) > 4)
) then
raise exception ''Error, there are more than 5 parts.'';
end if;
return null;
end
'language plpgsql;
create trigger check
after insert or update on tab
for each row execute procedure check();
Could anyone help me out? If so that would be great! Thanks!
CREATE FUNCTION trg_upbef()
  RETURNS trigger AS
$func$
BEGIN
   IF (SELECT count(*)
       FROM   tab
       WHERE  second = NEW.second) > 3 THEN
      RAISE EXCEPTION 'Error: there are more than 5 parts.';
   END IF;

   RETURN NEW;  -- must be NEW for BEFORE trigger
END
$func$ LANGUAGE plpgsql;

CREATE TRIGGER upbef
BEFORE INSERT OR UPDATE ON tab
FOR EACH ROW EXECUTE PROCEDURE trg_upbef();
Major points:
- Keyword is RETURNS, not RETURN.
- Use the special variable NEW to refer to the newly inserted / updated row.
- Use a BEFORE trigger. Better to skip early in case of an exception.
- Don't count everything for your test, just what you need. Much faster.
- Use dollar-quoting. Makes your life easier.
Concurrency:
If you want to be absolutely sure, you'll have to take an exclusive lock on the table before counting. Else, concurrent inserts / updates might outfox each other under heavy concurrent load. While this is rather unlikely, it's possible.
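If you want that absolute guarantee, a hedged sketch of the locking approach (take the lock before the INSERT, not inside the trigger, where the lock upgrade could deadlock between concurrent inserters; EXCLUSIVE mode blocks all other writers to tab until the transaction ends):

BEGIN;
LOCK TABLE tab IN EXCLUSIVE MODE;    -- held until end of transaction
INSERT INTO tab VALUES (7, 3, 40);   -- the trigger now counts without a race
COMMIT;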