Correct usage of Postgres JSON conversion functions

I am working on a Postgres update transaction.
Let's say I have two tables: events and ticket_books, which holds the event booking types. The ticket_books table has a foreign key pointing to events.
I need to update an event stored in the database, including booking type records from the ticket_books table.
To deal with the cascading update and delete, I decided to build a transaction; in pseudo-code it looks like:
BEGIN;

DELETE FROM ticket_books
WHERE event_id = ${req.params.id}
  AND id NOT IN (${bookingIds})

FOR booking IN json_to_recordset('${JSON.stringify(book)}')
    AS book(id int, title varchar(200), price int, ...) LOOP
  IF booking.id THEN
    UPDATE ticket_books
    SET title = booking.title, price = booking.price
    WHERE event_id = ${req.params.id};
  ELSE
    INSERT INTO ticket_books (title, price, qty_available, qty_per_sale)
    VALUES (booking.title, booking.price, booking.qty_available, booking.qty_per_sale)
    RETURNING id
  END IF;
END LOOP;

UPDATE events
SET ...
WHERE id = ...
RETURNING id;

COMMIT;
I currently get the error: syntax error at or near "json_to_recordset". I have never used json_to_recordset or its friends before; I just saw in the documentation that they are available from 9.3 onward. I'm unsure how to get Postgres to understand what I need, though.
I am embedding a JSON array so the final line looks like:
FOR booking IN json_to_recordset('[{"id":13,"description":"Three day access to the festival","title":"Three Day General Admission","price":260,"qty_available":5000,"qty_per_sale":10},{"id":14,"description":"Single day access to the festival","title":"Single Day General Admission","price":"90.90","qty_available":2000,"qty_per_sale":2},{"title":"Free Admission","price":"0.00","qty_available":0,"qty_per_sale":0}]')
I believe that my JSON array is valid. Apparently, this is not how I should be passing it to Postgres. What should I be doing instead? My goal is to iterate over the array entries: if there is an integer value for booking.id, I want to update the record, else insert a new one.

You need a query, and a standalone function call usually does not count as a query:
FOR booking IN select * from json_to_recordset(...
Also, you can't use BEGIN to start a transaction in plpgsql; there it only starts a block. If you are using a procedure rather than a function, then you can COMMIT, but a new transaction then starts immediately, with no BEGIN token being used.
You are also missing a semicolon between the DELETE and the FOR, but judging from the error message that semicolon seems to be missing only from your post, not from your actual code.
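Putting those pieces together, here is a rough sketch as a plpgsql procedure (Postgres 11+). The procedure and parameter names are made up, price is widened to numeric because the sample JSON contains values like "90.90", and the DELETE derives the surviving ids from the same JSON instead of a separate ${bookingIds} interpolation:
CREATE OR REPLACE PROCEDURE update_event_bookings(p_event_id int, p_bookings json)
LANGUAGE plpgsql AS
$$
DECLARE
    booking record;
BEGIN
    -- remove bookings that are no longer present in the incoming JSON
    DELETE FROM ticket_books
    WHERE event_id = p_event_id
      AND id NOT IN (SELECT b.id
                     FROM json_to_recordset(p_bookings) AS b(id int)
                     WHERE b.id IS NOT NULL);

    -- the FOR loop iterates over a query, not a bare function call
    FOR booking IN
        SELECT *
        FROM json_to_recordset(p_bookings)
             AS b(id int, title varchar(200), price numeric,
                  qty_available int, qty_per_sale int)
    LOOP
        IF booking.id IS NOT NULL THEN
            UPDATE ticket_books
            SET title = booking.title, price = booking.price
            WHERE event_id = p_event_id AND id = booking.id;
        ELSE
            INSERT INTO ticket_books (event_id, title, price, qty_available, qty_per_sale)
            VALUES (p_event_id, booking.title, booking.price,
                    booking.qty_available, booking.qty_per_sale);
        END IF;
    END LOOP;
END;
$$;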

Related

SQL Server trigger thinks there's a duplicate in the table when there isn't

I'm a new SQL developer. Following recommendations I have altered my trigger (for this task I need to use a trigger, so I can't avoid it). I want it to prevent duplication of the BikeID foreign key within the Rentals table.
This is my code at the moment:
CREATE TRIGGER BikeNotAvailable
ON dbo.SA_Rental
AFTER INSERT
AS
IF EXISTS (SELECT *
           FROM SA_Rental
           INNER JOIN inserted i ON i.BikeID = dbo.SA_Rental.BikeID)
BEGIN
    ROLLBACK
    RAISERROR ('This bike is already being hired', 16, 1);
END
go
But when I enter a BikeID into the Rentals table, even though that BikeID is not present in any row yet, it still raises the error - why? (I have also tested this on an empty table and it still raises the error.)
Just some context on my data: BikeID is the primary key of the 'Bike' table and is used as a foreign key in the Rentals table; I'm not sure if this has anything to do with the error.
Can someone please help me fix this trigger so it works?
Thanks.
Well, as it's an AFTER trigger, it runs after the new record has been added to the table (or at least, after it is visible to your trigger).
Supposing that your table has an automatically generated ID column, you should exclude the inserted row from your check like this:
CREATE TRIGGER BikeNotAvailable ON dbo.SA_Rental
AFTER INSERT
AS
if exists ( select * from SA_Rental
            inner join inserted i on i.BikeID = dbo.SA_Rental.BikeID
            where SA_Rental.RentalID <> i.RentalID)
begin
    rollback
    RAISERROR ('This bike is already being hired', 16, 1);
end
go
A far simpler way to achieve what you are after is to create a unique index:
CREATE UNIQUE INDEX BikeRented ON SA_Rental (BikeID);
This, of course, assumes that you delete the row from your table when the bike is no longer rented (as this is the implied logic in your post). If this is not the case, then we need more detail: what on your table specifies that a rental has completed?
If we assume you have a return date, and the return date is NULL when the bike is yet to be returned, then you would use a filtered index like so:
CREATE UNIQUE INDEX BikeRented ON SA_Rental (BikeID)
WHERE ReturnedDate IS NULL;

Oracle SQL update double-check locking

Suppose we have table A with fields time: date, status: int, playerId: int, serverid: int
We added a unique constraint on time, playerid and serverid (UNQ_TIME_PLAYERID_SERVERID).
At some point we try to update all rows in table A with a new status and date:
update A set status = 1, time = sysdate where serverid = XXX and status != 1 and time > sysdate
The problem is that two separate processes on separate machines can execute the same update at the same sysdate.
And a UNQ_TIME_PLAYERID_SERVERID violation occurs!
Is there any way to force Oracle to re-check the WHERE clause right before the concrete update (once the lock on the row is acquired)?
I do not want to use any 'select for update' things.
If it's really the same update 100% of the time, then just catch the exception and ignore it.
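If you take that route, a minimal sketch (assuming the table is literally called table_a; DUP_VAL_ON_INDEX is Oracle's predefined exception for unique-constraint violations):
BEGIN
    UPDATE table_a
    SET status = 1, time = sysdate
    WHERE serverid = :serverid
      AND status != 1
      AND time > sysdate;
EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
        NULL;  -- the other process already applied the same update
END;
/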
In case you want to prevent the error from occurring in the first place, you need to implement some logic to prevent the second update statement from ever executing.
I could think of a "lock table" just for this purpose. Create a table TABLE_A_LOCK_TB (add columns based on whatever information you want stored there for administrative reasons, e.g. the user who set the lock or a timestamp, ...).
Before you execute an update statement on table A, just insert a row into TABLE_A_LOCK_TB. Once the update has succeeded, delete said row.
Before executing any update statement on table A, first check whether TABLE_A_LOCK_TB contains a row. If it doesn't, your update is good to go; if it does, don't execute the update.
To make this process easier you could just write a package for "locking" and "unlocking" table A by inserting / deleting a row from the TABLE_A_LOCK_TB. Also implement a function to check the "lock status".
If you need this logic for several tables you can also make it dynamic by just having a column holding the table name in TABLE_A_LOCK_TB and checking against that.
You can then handle every update in your application logic like this (pseudocode):
IF your_lock_package.lock_status(table_name) = false THEN
    your_lock_package.set_lock(table_name);
    -- update statement(s)
    your_lock_package.release_lock(table_name);
ELSE
    -- "error" handling / information to user + exit
END IF;

DB2 locking when no record yet exists

I have a table, something like:
create table state (foo int not null, bar int not null, baz varchar(32));
create unique index state_ux on state (foo, bar); -- index name added; DB2 requires one
I'd like to take a lock on a unique record in this table. However, if there's no existing record, I'd like to prevent anyone else from inserting one, but without inserting it myself.
I'd use "FOR UPDATE WITH RS USE AND KEEP EXCLUSIVE LOCKS" but that only seems to work if the record exists.
A) You can let DB2 generate every ID number. Let's say you have defined your Customers table as:
CREATE TABLE Customers
( CustomerID Int NOT NULL
GENERATED ALWAYS AS IDENTITY
PRIMARY KEY
, Name Varchar(50)
, Billing_Type Char(1)
, Balance Dec(9,2) NOT NULL DEFAULT
);
Insert rows without specifying the CustomerID, since DB2 will always produce the value for you.
INSERT INTO Customers
(Name, Billing_Type)
VALUES
(:cname, :billtype);
If you need to know what the last value assigned in your session was, you can then use the IDENTITY_VAL_LOCAL() function.
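For example, to retrieve it right after the insert:
SELECT IDENTITY_VAL_LOCAL() AS last_customer_id
FROM SYSIBM.SYSDUMMY1;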
B) In my environment, I generally specify GENERATED BY DEFAULT. This is in part due to the nature of our principal programming language, ILE RPG-IV, where developers have traditionally allowed the compiler to use the entire record definition. It means I can tell everyone to use a sequence to generate ID values for a given table or set of tables.
You can grant others SELECT only, but if there are users with SECADM or similar privileges, they could still insert.
You can also do something with a trigger: check the current session user, and only if it is your user does the insert proceed.
if (SESSION_USER <> 'Alex') then
    rollback; -- or generate an exception
end if;
It seems that you also want to keep just one row; you can control that in a trigger as well:
select count(0) into value from state;
if (value > 1) then
    rollback; -- or generate an exception
end if;
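For illustration, here is what the session-user check could look like as a complete DB2 trigger; since a trigger cannot issue a rollback directly, it signals an error instead (the trigger name and the user name 'ALEX' are made up):
CREATE TRIGGER state_insert_guard
NO CASCADE BEFORE INSERT ON state
REFERENCING NEW AS n
FOR EACH ROW
WHEN (SESSION_USER <> 'ALEX')
    -- reject the insert for everyone except the designated user
    SIGNAL SQLSTATE '75001'
        SET MESSAGE_TEXT = 'inserts into state are currently reserved';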

SQL constraint to prevent updating a column based on its prior value

Can a check constraint (or some other technique) be used to prevent a value from being set that contradicts its prior value when its record is updated?
One example would be a NULL timestamp indicating something happened, like "file_exported". Once a file has been exported and has a non-NULL value, it should never be set to NULL again.
Another example would be a hit counter, where an integer is only permitted to increase, but can never decrease.
If it helps, I'm using PostgreSQL, but I'd like to see solutions that fit any SQL implementation.
Use a trigger. This is a perfect job for a simple PL/PgSQL ON UPDATE ... FOR EACH ROW trigger, which can see both the NEW and OLD values.
See the documentation on trigger procedures.
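For instance, a minimal sketch for the hit-counter case, assuming a table named things with a hit_count column (both names are made up):
CREATE OR REPLACE FUNCTION guard_hit_count()
RETURNS trigger AS
$BODY$
BEGIN
    -- reject any update that would decrease the counter
    IF NEW.hit_count < OLD.hit_count THEN
        RAISE EXCEPTION 'hit_count may only increase (% -> %)',
            OLD.hit_count, NEW.hit_count;
    END IF;
    RETURN NEW;
END
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER things_hit_count_guard
BEFORE UPDATE ON things
FOR EACH ROW EXECUTE PROCEDURE guard_hit_count();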
lfLoop has the best approach to the question. But to continue Craig Ringer's approach using triggers, here is an example. Essentially, you are setting the value of the column back to the original (old) value before you update.
CREATE OR REPLACE FUNCTION example_trigger()
RETURNS trigger AS
$BODY$
BEGIN
new.valuenottochange := old.valuenottochange;
new.valuenottochange2 := old.valuenottochange2;
RETURN new;
END
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
DROP TRIGGER IF EXISTS trigger_name ON tablename;
CREATE TRIGGER trigger_name BEFORE UPDATE ON tablename
FOR EACH ROW EXECUTE PROCEDURE example_trigger();
One example would be a NULL timestamp indicating something happened,
like "file_exported". Once a file has been exported and has a non-NULL
value, it should never be set to NULL again.
Another example would be a hit counter, where an integer is only
permitted to increase, but can never decrease.
In both of these cases, I simply wouldn't record these changes as attributes on the annotated table; 'exported' and 'hit count' are distinct ideas, representing related but orthogonal real-world notions from the objects they relate to, so they would simply be different relations. Since we only want "file_exported" to occur once:
CREATE TABLE thing_file_exported(
    thing_id INTEGER PRIMARY KEY REFERENCES thing(id),
    file_name VARCHAR NOT NULL
);
The hit counter is similarly a different table:
CREATE TABLE thing_hits(
    thing_id INTEGER NOT NULL REFERENCES thing(id),
    hit_date TIMESTAMP NOT NULL,
    PRIMARY KEY (thing_id, hit_date)
);
And you might query with
SELECT thing.col1, thing.col2, tfe.file_name, count(th.thing_id)
FROM thing
LEFT OUTER JOIN thing_file_exported tfe
ON (thing.id = tfe.thing_id)
LEFT OUTER JOIN thing_hits th
ON (thing.id = th.thing_id)
GROUP BY thing.col1, thing.col2, tfe.file_name
Stored procedures and functions in PostgreSQL have access to both old and new values, and that code can access arbitrary tables and columns. It's not hard to build simple (crude?) finite state machines in stored procedures. You can even build table-driven state machines that way.
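As a sketch of that table-driven idea, assuming a documents table with a status column (all names here are made up):
-- the allowed transitions live in data rather than code
CREATE TABLE status_transitions(
    from_status VARCHAR NOT NULL,
    to_status   VARCHAR NOT NULL,
    PRIMARY KEY (from_status, to_status)
);

CREATE OR REPLACE FUNCTION enforce_status_transition()
RETURNS trigger AS
$BODY$
BEGIN
    IF NEW.status IS DISTINCT FROM OLD.status
       AND NOT EXISTS (SELECT 1
                       FROM status_transitions
                       WHERE from_status = OLD.status
                         AND to_status = NEW.status) THEN
        RAISE EXCEPTION 'illegal status transition: % -> %',
            OLD.status, NEW.status;
    END IF;
    RETURN NEW;
END
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER documents_status_fsm
BEFORE UPDATE ON documents
FOR EACH ROW EXECUTE PROCEDURE enforce_status_transition();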

A trigger to find the sum of one field in a different table and error if it's over a certain value in Oracle

I have two tables:
moduleprogress, which contains the fields:
studentid
modulecode
moduleyear
modules, which contains the fields:
modulecode
credits
I need a trigger to run when the user is attempting to insert or update data in the moduleprogress table.
The trigger needs to:
look at the studentid that the user has input and look at all modules that they have taken in moduleyear "1".
take the modulecode the user input and look at the modules table and find the sum of the credits field for all these modules (each module is worth 10 or 20 credits).
if the value is above 120 (yearly credit limit) then it needs to error; if not, input is ok.
Does this make sense? Is this possible?
@a_horse_with_no_name
This looks like it will work, but I will only be using the database to input data manually, so it needs to error on input. I'm trying to get a trigger similar to this to solve the problem (the trigger below doesn't work). Ignore the "UOS_" prefix on everything; it just helps me organize my database and other functions.
CREATE OR REPLACE TRIGGER "UOS_TESTINGS"
BEFORE UPDATE OR INSERT ON UOS_MODULE_PROGRESS
REFERENCING NEW AS NEW OLD AS OLD
DECLARE
MODULECREDITS INTEGER;
BEGIN
SELECT
m.UOS_CREDITS,
mp.UOS_MODULE_YEAR,
SUM(m.UOS_CREDITS)
INTO MODULECREDITS
FROM UOS_MODULE_PROGRESS mp JOIN UOS_MODULES m
ON m.UOS_MODULE_CODE = mp.UOS_MODULE_CODE
WHERE mp.UOS_MODULE_YEAR = 1;
IF MODULECREDITS >= 120 THEN
RAISE_APPLICATION_ERROR(-20000, 'Students are only allowed to take upto 120 credits per year');
END IF;
END;
I get the error message:
8 23 PL/SQL: ORA-00947: not enough values
4 1 PL/SQL: SQL Statement ignored
I'm not sure I understand your description, but the way I understand it, this can be solved using a materialized view, which might give better transactional behaviour than the trigger:
CREATE MATERIALIZED VIEW LOG
ON moduleprogress WITH ROWID (modulecode, studentid, moduleyear)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG
ON modules WITH ROWID (modulecode, credits)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW mv_module_credits
REFRESH FAST ON COMMIT WITH ROWID
AS
SELECT pr.studentid,
SUM(m.credits) AS total_credits
FROM moduleprogress pr
JOIN modules m ON pr.modulecode = m.modulecode
WHERE pr.moduleyear = 1
GROUP BY pr.studentid;
ALTER TABLE mv_module_credits
ADD CONSTRAINT check_total_credits CHECK (total_credits <= 120);
Depending on the size of the tables, however, this might be slower than a pure trigger-based solution.
The only drawback of this solution is that the error is thrown at commit time, not when the insert happens (because the MV is only refreshed on commit, and the check constraint is evaluated then).
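To illustrate the timing, a hypothetical session (values made up). The insert itself succeeds; the failure only surfaces at commit, as an ORA-12008 refresh error wrapping the ORA-02290 check-constraint violation:
INSERT INTO moduleprogress (studentid, modulecode, moduleyear)
VALUES (42, 'MOD999', 1);  -- succeeds even if it pushes the total past 120

COMMIT;  -- fails here: the MV refreshes on commit and check_total_credits is violated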