Select .. for Update in cluster environment - sql

I am working on a legacy product that uses a single table to generate unique keys (primary keys) for every other table. This table holds the latest ID for each of the other tables. When a row is to be inserted into any other table, the following logic generates the unique ID for that new row.
The key-generator table looks like this:
ID | NEXT_ID | TABLE_NAME
public synchronized long generateKey(Connection con) {
    // select the latest ID value from the table against a row
    // increment the value by 1
    // update the table with this latest value
    // return the latest value
}
In a single-node, multi-threaded environment everything works fine because the method is synchronized. But in a clustered environment there is a chance of a race condition while executing the above logic. To overcome this issue, we thought of having the Java method call a PL/SQL function that does the same job. The code is as follows:
public long generateKey(Connection con) {
    // call PL/SQL function and return the value
}
Following is the skeleton of the PL/SQL function:
FUNCTION GET_NEXT_ID(tablename IN VARCHAR2)
RETURN NUMBER IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  nextID NUMBER;
BEGIN
  SELECT NEXT_ID INTO nextID FROM <Key_generator_table> WHERE TABLE_NAME = tablename FOR UPDATE;
  UPDATE <Key_generator_table> SET NEXT_ID = NEXT_ID + 1 WHERE TABLE_NAME = tablename;
  COMMIT;
  RETURN (nextID);
END;
What I understand about SELECT ... FOR UPDATE is that it locks the row when we retrieve it, so no other transaction can modify it while we are updating the record. So it holds good in a non-clustered environment. My question is: would the same hold good in a clustered environment? Would there be any race conditions with this approach?
Unfortunately, we cannot change the unique ID generation approach due to product constraints.

This will serialize your transactions. Not a good design. How about having a sequence and getting the value from that in your PL/SQL function?
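For what it's worth, a minimal sketch of that suggestion in Oracle syntax follows. The sequence name and the idea of one sequence per generated key (instead of the shared key-generator table) are assumptions, not part of the original design:

CREATE SEQUENCE my_table_id_seq START WITH 1 INCREMENT BY 1;

FUNCTION GET_NEXT_ID RETURN NUMBER IS
  nextID NUMBER;
BEGIN
  -- sequences are incremented atomically by the database,
  -- so concurrent sessions and cluster nodes never block each other
  SELECT my_table_id_seq.NEXTVAL INTO nextID FROM dual;
  RETURN nextID;
END;

The trade-off is that gaps can appear in the generated IDs (for example after a rollback), which is usually acceptable for surrogate keys.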

Increment by 1 in sequence numbering and dynamically partitioned tables

I am using dynamically created table partitions to store event information in a PostgreSQL 13 database. The master table from which the child tables inherit their structure contains an id field with an auto-incrementing sequence. The sequence, master table and trigger for inserts look as follows:
CREATE SEQUENCE event_id_seq
INCREMENT 1
START 1
MINVALUE 1
MAXVALUE 9223372036854775807
CACHE 1;
CREATE TABLE event_master
(
  id bigint NOT NULL DEFAULT nextval('event_id_seq'::regclass),
  event jsonb,
  insert_time timestamp
);
CREATE TRIGGER insert_event_trigger
BEFORE INSERT
ON event_master
FOR EACH ROW
EXECUTE PROCEDURE event_insert_function();
Additionally, the event_insert_function() uses the following code to insert new rows posted to the master table:
EXECUTE format('INSERT INTO %I (event, insert_time) VALUES ($1, $2)', partition_name) USING NEW.event, NEW.insert_time;
When looking at the sequence numbers in the id field, I only get every other number, i.e. 1,3,5,7, ...
Based on some related information I found, I assume this has something to do with PostgreSQL counting the initial insert into the master table and the triggered insert into the child table as two occurrences. So my first question is whether this is correct, and if so, what is the rationale behind it, and why not "pass through" the insert from master to child?
More importantly though, what do I need to do to set up a properly incrementing sequence (i.e. returning 1,2,3,4 ...)?
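A side note on the likely mechanics: if the cause is indeed two nextval() calls per row (one from the master table's DEFAULT, one from the child partition's inherited DEFAULT), the usual fix is to carry the already-assigned NEW.id through to the child insert so the sequence is consulted only once. A sketch, reusing the names above inside event_insert_function() and assuming the standard RETURN NULL at the end of the trigger function:

-- pass along the id that the master table's DEFAULT already assigned,
-- so the partition's own DEFAULT nextval() is never evaluated
EXECUTE format('INSERT INTO %I (id, event, insert_time) VALUES ($1, $2, $3)', partition_name)
  USING NEW.id, NEW.event, NEW.insert_time;
RETURN NULL;  -- suppress the redundant insert into the master table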

Correct usage of Postgres JSON conversion functions

I am working on a Postgres update transaction.
Let's say I have two tables: events and ticket_books, which holds the event booking types. The ticket_books table has a foreign key pointing to events.
I need to update an event stored in the database, including the booking type records from the ticket_books table.
To deal with the cascading update and delete, I decided to build a transaction; in "pseudo-code" it looks like this:
BEGIN;
DELETE FROM
ticket_books
WHERE
event_id = ${req.params.id} AND
id NOT IN (${bookingIds})
FOR booking IN json_to_recordset('${JSON.stringify(book)}') as book(id int, title varchar(200), price int, ...) LOOP
IF bookind.id THEN
UPDATE
ticket_books
SET
title = booking.title, price = booking.price
WHERE
event_id = ${req.params.id};
ELSE
INSERT INTO
ticket_books (title, price, qty_available, qty_per_sale)
VALUES
(booking.title, booking.price, booking.qty_available, booking.qty_per_sale)
RETURNING
id
END IF;
END LOOP;
UPDATE
event
SET
...
WHERE
id = ...
RETURNING
id;
COMMIT;
I currently get the error: syntax error at or near "json_to_recordset". I have never used json_to_recordset or its friends before; I just saw in the documentation that they are available from 9.3 onwards. I am unsure how to get Postgres to understand what I need, though.
I am embedding a JSON array so the final line looks like:
FOR booking IN json_to_record('[{"id":13,"description":"Three day access to the festival","title":"Three Day General Admission","price":260,"qty_available":5000,"qty_per_sale":10},{"id":14,"description":"Single day access to the festival","title":"Single Day General Admission","price":"90.90","qty_available":2000,"qty_per_sale":2},{"title":"Free Admission","price":"0.00","qty_available":0,"qty_per_sale":0}]')
I believe that my JSON array is valid. Apparently, this is not how I should be passing it to Postgres. What should I be doing instead? My goal is to iterate over the array entries: if there is an integer value for booking.id, I want to update the record; otherwise, insert a new one.
You need a query, and a standalone function call usually does not count as a query:
FOR booking IN select * from json_to_recordset(...
Also, you can't use BEGIN to start a transaction in plpgsql. It is only used to start a block. If you are using a procedure rather than a function, then you can COMMIT but then a new transaction starts immediately with no BEGIN token being used.
You are also missing a semicolon between the DELETE and the FOR, but from the error message that seems to be missing from only your post, and not from your actual code.
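To make that concrete, here is a minimal sketch of the loop as a plpgsql procedure (PostgreSQL 11+ for CREATE PROCEDURE). The procedure name, parameter names and column list are illustrative, not taken from the question:

CREATE OR REPLACE PROCEDURE upsert_ticket_books(p_event_id int, p_books json)
LANGUAGE plpgsql AS $$
DECLARE
  booking record;
BEGIN
  -- remove bookings that are no longer in the submitted list
  DELETE FROM ticket_books
   WHERE event_id = p_event_id
     AND id NOT IN (SELECT b.id
                      FROM json_to_recordset(p_books) AS b(id int)
                     WHERE b.id IS NOT NULL);

  -- json_to_recordset must appear inside a query, hence SELECT * FROM ...
  FOR booking IN
    SELECT *
      FROM json_to_recordset(p_books)
        AS b(id int, title varchar(200), price numeric,
             qty_available int, qty_per_sale int)
  LOOP
    IF booking.id IS NOT NULL THEN
      UPDATE ticket_books
         SET title = booking.title, price = booking.price
       WHERE id = booking.id AND event_id = p_event_id;
    ELSE
      INSERT INTO ticket_books (event_id, title, price, qty_available, qty_per_sale)
      VALUES (p_event_id, booking.title, booking.price,
              booking.qty_available, booking.qty_per_sale);
    END IF;
  END LOOP;
END;
$$;

Calling it as CALL upsert_ticket_books(42, '[...]') also means the JSON is passed as a bind parameter instead of being string-interpolated into the SQL text.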

Oracle database - determine what table triggered a trigger

I am working with Oracle Express 12c. One of the tables I created has an associated trigger which prevents one of its columns from being updated directly. But the trigger also fires when another table, which should have this kind of access, tries to do the update.
For example:
I have tables A and B, and B has a foreign key that links it to A. I purposefully added one of the attributes from A to B. One trigger, let's call it UPD_FROM_B, prevents B from updating this attribute. Another, UPD_FROM_A, should update this attribute on B if it is updated on A. Now UPD_FROM_B prevents UPD_FROM_A from doing what it is supposed to.
Or through a working example:
There are two tables: customer and order. Customer can have multiple orders, but one order has only one customer. For the sake of the project, I had to put customer_name on the order, even though every order has customer_id as foreign key.
One trigger, UPD_NAME_ORDER, prevents order from updating customer_name, and the other, UPD_NAME_CUST, updates this column in the appropriate rows of the order table whenever customer_name is updated in customer.
How can I determine which table triggered the action and allow UPDATE for one, but still prevent it from the other?
I think you only need to change your trigger UPD_FROM_B.
First, select the value of the column from table A where the parent key equals the foreign key, then compare that value to the value of the column being written to table B. If the values are equal, the trigger allows the update; otherwise it does not. The code looks like this:
CREATE OR REPLACE TRIGGER UPD_FROM_B
BEFORE UPDATE OF upd_column ON B
FOR EACH ROW
DECLARE
  val A.upd_column%TYPE;
BEGIN
  -- fetch the current value from the parent row in A
  SELECT A.upd_column INTO val
    FROM A
   WHERE A.ID = :new.FKID;
  IF val = :new.upd_column THEN
    NULL;  -- the new value agrees with A, so the update is allowed
  ELSE
    RAISE_APPLICATION_ERROR(-20001, 'upd_column may only be changed through table A');
  END IF;
END;
At face value, the way I know to do this is to use a package variable as a gate key shared between the two triggers. Trigger A sets the state variable before its nested update of B. Trigger B checks whether A set the variable: if so, the update succeeds; if not, B knows A is not the caller and blocks the update.
Also, I assume your intention is to implement an "UPDATE CASCADE" trigger that updates the child records' foreign key values based on the parent update, preserving the relationship while changing the FK value. If so, be careful with this approach: it only works correctly if you disallow multi-row updates.
First a package and state var:
CREATE OR REPLACE PACKAGE IsUpdating IS
  A NUMBER;
END;
At the top of trigger A, do something like the code below. The exception handler resets the package variable if the update fails, so an error cannot leave it stuck in the "updating" state (it plays the role of a "finally" block for the error case):
CREATE OR REPLACE TRIGGER A_UPD_CASCADE
AFTER UPDATE ON A
FOR EACH ROW
BEGIN
  IsUpdating.A := 1;
  UPDATE B SET B.FKID = :new.FKID WHERE B.FKID = :old.FKID;
  IsUpdating.A := 0;
EXCEPTION
  WHEN OTHERS THEN
    IsUpdating.A := 0;
    RAISE;
END;
Inside trigger B do this:
CREATE OR REPLACE TRIGGER B_UPD_CASCADE
BEFORE UPDATE ON B
BEGIN
  IF IsUpdating.A != 1 THEN
    -- disallow the update since it is coming from B alone
    RAISE_APPLICATION_ERROR(-20002, 'Direct update of B is not allowed; update A instead');
  END IF;
END;
The pitfall with CASCADE UPDATE is multi-row parent updates in a single statement: Oracle executes the trigger once for each parent row, which can cause some child rows to be updated multiple times as the before and after values chain.

SQL constraint to prevent updating a column based on its prior value

Can a CHECK constraint (or some other technique) be used to prevent a value from being set that contradicts its prior value when its record is updated?
One example would be a NULL timestamp indicating something happened, like "file_exported". Once a file has been exported and has a non-NULL value, it should never be set to NULL again.
Another example would be a hit counter, where an integer is only permitted to increase, but can never decrease.
If it helps, I'm using PostgreSQL, but I'd like to see solutions that fit any SQL implementation.
Use a trigger. This is a perfect job for a simple PL/PgSQL ON UPDATE ... FOR EACH ROW trigger, which can see both the NEW and OLD values.
See trigger procedures.
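As an illustration of that approach (not part of the original answer), a BEFORE UPDATE trigger for the two examples in the question might look like the sketch below; the table and column names are assumed:

CREATE OR REPLACE FUNCTION enforce_forward_only()
RETURNS trigger AS
$BODY$
BEGIN
  -- once set, file_exported may never go back to NULL
  IF OLD.file_exported IS NOT NULL AND NEW.file_exported IS NULL THEN
    RAISE EXCEPTION 'file_exported cannot be cleared once set';
  END IF;
  -- the hit counter may only increase
  IF NEW.hit_count < OLD.hit_count THEN
    RAISE EXCEPTION 'hit_count may not decrease (% -> %)', OLD.hit_count, NEW.hit_count;
  END IF;
  RETURN NEW;
END
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER enforce_forward_only_trigger
  BEFORE UPDATE ON tablename
  FOR EACH ROW EXECUTE PROCEDURE enforce_forward_only();

Unlike the example in the next answer, which silently restores the old values, this variant rejects the offending UPDATE with an error.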
lfLoop has the best approach to the question. But to continue Craig Ringer's approach using triggers, here is an example. Essentially, you are setting the value of the column back to the original (old) value before you update.
CREATE OR REPLACE FUNCTION example_trigger()
RETURNS trigger AS
$BODY$
BEGIN
new.valuenottochange := old.valuenottochange;
new.valuenottochange2 := old.valuenottochange2;
RETURN new;
END
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
DROP TRIGGER IF EXISTS trigger_name ON tablename;
CREATE TRIGGER trigger_name BEFORE UPDATE ON tablename
FOR EACH ROW EXECUTE PROCEDURE example_trigger();
One example would be a NULL timestamp indicating something happened, like "file_exported". Once a file has been exported and has a non-NULL value, it should never be set to NULL again.
Another example would be a hit counter, where an integer is only permitted to increase, but can never decrease.
In both of these cases, I simply wouldn't record these changes as attributes on the annotated table; the "exported" state and the "hit count" are distinct ideas, representing real-world notions that are related to, but orthogonal from, the objects they describe.
So they would simply be different relations. Since we only want "file_exported" to occur once:
CREATE TABLE thing_file_exported(
  thing_id INTEGER PRIMARY KEY REFERENCES thing(id),
  file_name VARCHAR NOT NULL
);
The hit counter is similarly a different table:
CREATE TABLE thing_hits(
  thing_id INTEGER NOT NULL REFERENCES thing(id),
  hit_date TIMESTAMP NOT NULL,
  PRIMARY KEY (thing_id, hit_date)
);
And you might query with
SELECT thing.col1, thing.col2, tfe.file_name, count(th.thing_id)
FROM thing
LEFT OUTER JOIN thing_file_exported tfe
ON (thing.id = tfe.thing_id)
LEFT OUTER JOIN thing_hits th
ON (thing.id = th.thing_id)
GROUP BY thing.col1, thing.col2, tfe.file_name
Trigger functions in PostgreSQL have access to both the OLD and NEW values, and that code can access arbitrary tables and columns. It's not hard to build simple (crude?) finite state machines in such functions. You can even build table-driven state machines that way.

Is it possible to define a serial datatype which autoincrements when updating a row?

When a new row is added to a table containing a serial column, the next highest integer value is assigned to that column when the row is committed. Can I define a serial datatype which will autoincrement when updating a row that has a previously assigned serial value? In a DataBlade? I'm currently using the following for an integer column: let intcol = select max(intcol) + 1 from table. In my app, when a customer makes an interest payment, the previous ticket number gets updated with the next available ticket number.
From some of your other questions I gather you're using a pretty ancient version of Informix.
Relatively recent versions (10+, possibly slightly earlier) support SEQUENCE, which will do exactly what you're after:
CREATE SEQUENCE mytable_version
INCREMENT BY 1 START WITH 1;
Then in your update statement:
UPDATE mytable
SET (payment, version) = (:pymt_amt, mytable_version.NEXTVAL)
WHERE ...
Every update will cause the version column to be updated with a new sequence number.
If your app has too many different UPDATE statements or access methods you can't control as well as you'd like, you could consider making the UPDATE to version occur as part of an UPDATE trigger.
I think you would need an "AFTER UPDATE" trigger, possibly together with a sequence to avoid the overhead of counting max from the table.
I don't know anything about Informix, and let's say I understand your "let intcol..." statement ;) But, for example, to recreate MySQL's auto-increment functionality with Oracle tools you need code similar to this:
create sequence mytable_seq start with 1 increment by 1;

create or replace trigger mytable_insert
before insert on mytable
for each row
begin
  select mytable_seq.nextval into :new.intcol from dual;
end;
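For the update case the question actually asks about, the same pattern would presumably be a BEFORE UPDATE trigger; the following is only a sketch in the same Oracle syntax (not Informix), reusing the hypothetical mytable/intcol names:

create or replace trigger mytable_update
before update on mytable
for each row
begin
  -- assign a fresh value from the sequence on every update
  select mytable_seq.nextval into :new.intcol from dual;
end;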
For Oracle, no.
Typically done with a pre-insert trigger and sequence.