Triggers and cursors - sql

I'm new to triggers and I'm trying to create one. The trigger has to raise an error when I insert a date and the difference between this date and the old dates with the same key is less than 1 (hour). So, my code is:
CREATE TRIGGER pickup
BEFORE INSERT ON Pickingup
FOR EACH ROW
DECLARE
substraction INTEGER;
BEGIN
SELECT (EXTRACT(HOUR FROM(:new.date - date))) INTO substraction FROM Pickingup WHERE (:new.Id = Id AND :new.Year = Year);
IF (substraction < 1) THEN
raise_application_error(-20600, :new.date || 'Error');
END IF;
END;
After this, I introduce a new value and I get this error:
exact fetch returns more than requested number of rows
Could someone give me a clue or some help about what I have to do?

When you SELECT INTO a variable, that variable can only hold a single value, so your SELECT must return exactly one row. Your
WHERE (:new.Id = Id AND :new.Year = Year)
is returning more than one row; that is, more than one row satisfies the WHERE condition.

Your trigger has several issues:
SELECT (EXTRACT(HOUR FROM(:new.date - date)))
INTO substraction
FROM Pickingup
WHERE :new.Id = Id AND :new.Year = Year;
The SELECT statement may return more than one row. A SELECT ... INTO ... must return exactly one row - no more, no less.
DATE is a reserved keyword in Oracle; by default you cannot use it as a column name. Choose a different name.
I guess the date column is a DATE data type. The difference of two DATE values is a number (the difference in days). You can use EXTRACT only on DATE, TIMESTAMP or INTERVAL values, not on plain numbers.
Try 24 * (:new.date - date) to get the difference in hours.
Within a row-level trigger you cannot select from the triggering table, i.e. you defined a trigger on table Pickingup, so you cannot select from Pickingup within that trigger.
You will get an ORA-04091: table Pickingup is mutating, trigger/function may not see it error.
Most people feel WHERE Id = :new.Id AND Year = :new.Year is more readable than your code (but that's just cosmetic).
Your actual requirement is not so clear to me; please provide sample data and expected results.
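For what it's worth, combining the points above (aggregate down to a single row, rename the date column, multiply the day difference by 24), the statement could look something like the following sketch. Note that pickup_date is a made-up replacement name and the mutating-table problem described above still applies:
SELECT MIN(24 * (:new.pickup_date - pickup_date))  -- smallest gap in hours to any existing row with the same key
  INTO substraction
  FROM Pickingup
 WHERE Id = :new.Id
   AND Year = :new.Year;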

Find the next free timestamp not in a table yet

I have a table, event, with a column unique_time of type timestamptz. I need each of the values in unique_time to be unique.
Given a timestamptz input, input_time, I need to find the minimum timestamptz value that satisfies the following criteria:
the result must be >= input_time
the result must not already be in unique_time
I cannot merely add one microsecond to the greatest value in unique_time, because I need the minimum value that satisfies the above criteria.
Is there a concise way to compute this as part of an insert or update to the event table?
I suggest a function with a loop:
CREATE OR REPLACE FUNCTION f_next_free(_input_time timestamptz, OUT _next_free timestamptz)
LANGUAGE plpgsql STABLE STRICT AS
$func$
BEGIN
LOOP
SELECT INTO _next_free _input_time
WHERE NOT EXISTS (SELECT FROM event WHERE unique_time = _input_time);
EXIT WHEN FOUND;
_input_time := _input_time + interval '1 us';
END LOOP;
END
$func$;
Call:
SELECT f_next_free('2022-05-17 03:44:22.771741+02');
Be sure to have an index on event(unique_time). If the column is defined UNIQUE or PRIMARY KEY, that index is there implicitly.
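For reference, a minimal table definition along these lines could look like this (a sketch; payload stands in for whatever other columns the table has):
CREATE TABLE event (
    unique_time timestamptz PRIMARY KEY,  -- the PK provides the required index implicitly
    payload     text
);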
Related:
Can I make a plpgsql function return an integer without using a variable?
Select rows which are not present in other table
BREAK statement in PL/pgSQL
Since Postgres timestamps have microsecond resolution, the next free timestamp is at least 1 microsecond (interval '1 us') away. See:
Ignoring time zones altogether in Rails and PostgreSQL
Could also be a recursive CTE, but the overhead is probably bigger.
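A rough sketch of that recursive-CTE variant (not benchmarked; the input timestamp is hard-coded for illustration):
WITH RECURSIVE probe AS (
   SELECT timestamptz '2022-05-17 03:44:22.771741+02' AS ts
   UNION ALL
   SELECT ts + interval '1 us'
   FROM   probe
   WHERE  EXISTS (SELECT FROM event WHERE unique_time = probe.ts)  -- keep stepping while the slot is taken
   )
SELECT ts
FROM   probe
WHERE  NOT EXISTS (SELECT FROM event WHERE unique_time = probe.ts)
ORDER  BY ts
LIMIT  1;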
Concurrency!
Is there a concise way to compute this as part of an INSERT or UPDATE to the event table?
The above is obviously subject to a race condition. Any number of concurrent transactions might find the same free spot. Postgres cannot lock rows that aren't there yet.
Since you want to INSERT (similar for UPDATE), I suggest using INSERT ... ON CONFLICT DO NOTHING directly in a loop instead. Again, we need a UNIQUE or PRIMARY KEY constraint on unique_time:
CREATE OR REPLACE FUNCTION f_next_free(INOUT _input_time timestamptz, _payload text)
LANGUAGE plpgsql AS
$func$
BEGIN
LOOP
INSERT INTO event (unique_time, payload)
VALUES (_input_time, _payload)
ON CONFLICT (unique_time) DO NOTHING;
EXIT WHEN FOUND;
_input_time := _input_time + interval '1 us';
END LOOP;
END
$func$;
Adapt your "payload" accordingly.
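A call could then look like this (with a made-up payload value):
SELECT f_next_free('2022-05-17 03:44:22.771741+02', 'some payload');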
A successful INSERT locks the row. Even if concurrent transactions cannot see the inserted row yet, a UNIQUE index is absolute.
(You could make it work with advisory locks ...)
Ah, I forgot about the approaches from my comment that would try to generate an (infinite) sequence of all microsecond timestamps following $input_time. There's a much simpler query that can generate exactly the timestamp you need:
INSERT INTO event (unique_time, others)
SELECT MIN(candidates.time), $other_values
FROM (
    SELECT $input_time AS "time"
    UNION ALL
    SELECT unique_time + interval '1 microsecond' AS time
    FROM event
    WHERE unique_time >= $input_time
) AS candidates
WHERE NOT EXISTS (
    SELECT *
    FROM event coll
    WHERE coll.unique_time = candidates.time
);
However, I'm not sure how well Postgres can optimise this; the MIN aggregate might load all the timestamps from event that are larger than $input_time, which might be fine if you always append events at the end, but still. A probably better alternative would be:
INSERT INTO event (unique_time, others)
SELECT available.time, $other_values
FROM (
    SELECT *
    FROM (
        SELECT $input_time AS "time"
        UNION ALL
        SELECT unique_time + interval '1 microsecond' AS time
        FROM event
        WHERE unique_time >= $input_time
    ) AS candidates
    WHERE NOT EXISTS (
        SELECT *
        FROM event coll
        WHERE coll.unique_time = candidates.time
    )
    ORDER BY candidates.time ASC
) AS available
ORDER BY available.time ASC
LIMIT 1;
This might (I don't know) still have to evaluate the complex subquery every time you insert something, though, which would be rather inefficient if most of the inserts don't cause a collision. Also, I have no idea how well this works under concurrent load (i.e. multiple transactions running the query at the same time) and whether it has possible race conditions.
Alternatively, just use a WHILE loop (in the client or in PL/pgSQL) that attempts to insert the value until it succeeds, incrementing the timestamp on every iteration - see @Erwin Brandstetter's answer for that.

Create a trigger after insert that update another table

I'm trying to create a trigger that updates another table after an insert, when the state of the swab test changes from positive to negative.
I have created this trigger, but the problem is that every time there is a user with a negative swab, the user id is copied to the table, even if this user has never been positive. Maybe I have to compare dates?
Create or replace trigger trigger_healed
After insert on swab_test
For each row
Begin
if :new.result = 'Negative' then
UPDATE illness_update
SET illness_update.state = 'healed'
WHERE illness_update.id_user = :new.id_user;
end if;
end;
This is the result that I'm trying to get.
SWAB_TEST
id_user  id_swab  swab_result  date
1        test1    'positive'   May-01-2020
1        test1    'negative'   May-08-2020
2        test2    'negative'   May-02-2020

ILLNESS_UPDATE
id_user  state     date
1        'healed'  May-08-2020
What you ask for would require the trigger to look at the existing rows in the table that is being inserted on - which by default cannot be done, since a row-level trigger cannot read the table it fires upon.
Instead of trying to work around that, I would suggest simply creating a view to generate the result that you want. This gives you an always up-to-date perspective on your data without any maintenance cost:
create view illness_update_view(id_user, state, date) as
select id_user, 'healed', date
from (
select
s.*,
lag(swab_result) over(partition by id_user order by date) lag_swab_result
from swab_test s
) s
where lag_swab_result = 'positive' and swab_result = 'negative'
The view uses window function lag() to recover the "previous" result of each row (per user). Rows that represent transitions from a positive to a negative result are retained.
As @GMB indicates, you cannot do what you are asking with a standard before/after row trigger, as it cannot reference swab_test, the table causing the trigger to fire (that would result in an ORA-04091 mutating table error). But you can do this in a compound trigger (or an after-statement trigger). Before getting to that, though, I think your data model has a fatal flaw.
You have established the capability for multiple swab tests. A logical extension of this is that each id_swab tests for a different condition, or is a different test for the same condition. However, the test (id_swab) is not in your illness_update table. This means that if any test goes to a negative result after having a prior positive result, the user is marked healed from ALL tests. To correct this you need to include id_swab in making the healed determination. Since GMB offers the best solution, I'll expand upon that. First drop the table illness_update, then create illness_update as a view. (NOTE: in answer to your question, you DO NOT need a trigger for the view; everything necessary is in swab_test - see the lag() window function.)
create view illness_update(id_user, id_swab, state, swab_date) as
select id_user, id_swab, 'healed' state,swab_date
from (
select
s.*
, lag(swab_result) over(partition by id_user, id_swab
order by id_user, id_swab, swab_date) as lag_swab_result
from swab_test s
) s
where lag_swab_result = 'positive'
and swab_result = 'negative';
Now, as mentioned above, if your assignment requires the use of a trigger then see the fiddle. Note: I do not use date (or any data type name) as a column name; here I use swab_date in all instances.
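If it helps, here is a rough sketch of what such a compound trigger could look like (this is not the fiddle's code; it assumes illness_update is kept as a table with columns id_user, id_swab, state and swab_date):
create or replace trigger trg_swab_healed
for insert on swab_test
compound trigger

  after statement is
  begin
    -- find users/tests whose results show a positive-to-negative transition
    merge into illness_update iu
    using (
      select id_user, id_swab, max(swab_date) as swab_date
      from (
        select s.*,
               lag(swab_result) over (partition by id_user, id_swab
                                      order by swab_date) as prev_result
        from swab_test s
      )
      where prev_result = 'positive'
        and swab_result = 'negative'
      group by id_user, id_swab
    ) healed
    on (iu.id_user = healed.id_user and iu.id_swab = healed.id_swab)
    when matched then
      update set iu.state = 'healed', iu.swab_date = healed.swab_date
    when not matched then
      insert (id_user, id_swab, state, swab_date)
      values (healed.id_user, healed.id_swab, 'healed', healed.swab_date);
  end after statement;

end trg_swab_healed;
/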

How do I create custom sequence in PostgreSQL based on date of row creation?

I am in the process of replacing a legacy order management application for my employer. One of the specs for the new system is that the order numbering system remain in place. Right now, our order numbers are formatted like so:
The first four digits are the current year
The next two digits are the current month
The next (and last) four digits are a counter that increments by one each time an order is placed in that month.
For example, the first order placed in June 2014 would have order number 2014060001. The next order placed would have order number 2014060002 and so on.
This order number will need to be the primary ID in the Orders table. It appears that I need to set up a custom sequence for PostgreSQL to use to assign the primary key; however, the only documentation I can find on creating custom sequences is very basic (how to increment by two instead of one, etc.).
How do I create a custom sequence based on the date as described above?
You can set your manually created sequence to a specific value using the EXTRACT() function:
SELECT setval('my_sequence',
    (EXTRACT(YEAR FROM now())::integer * 1000000) +
    (EXTRACT(MONTH FROM now())::integer * 10000)
);
The next order entered will take the next value in the sequence, i.e. YYYYMM0001 etc.
The trick is when to update the sequence value. You could do it the hard way inside PG and write a BEFORE INSERT trigger on your orders table that checks if this is the first record in a new month:
CREATE FUNCTION before_insert_order() RETURNS trigger AS $$
DECLARE
base_val integer;
BEGIN
-- base_val is the minimal value of the sequence for the current month: YYYYMM0000
base_val := (EXTRACT(YEAR FROM now())::integer * 1000000) +
(EXTRACT(MONTH FROM now())::integer * 10000);
-- So if the sequence is less, then update it
IF (currval('my_sequence') < base_val) THEN
PERFORM setval('my_sequence', base_val);
END IF;
-- Now assign the order id and continue with the insert
NEW.id := nextval('my_sequence');
RETURN NEW;
END; $$ LANGUAGE plpgsql;
CREATE TRIGGER tr_bi_order
BEFORE INSERT ON order_table
FOR EACH ROW EXECUTE PROCEDURE before_insert_order();
Why is this the hard way? Because you check the value of the sequence on every insert. If you have only a few inserts per day and your system is not very busy, this is a viable approach.
If you cannot spare all those CPU cycles you could schedule a cron job to run at 00:00:01 of every first day of the month to execute a PG function via psql to update the sequence and then just use the sequence as a default value for new order records (so no trigger needed).
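A sketch of that cron-driven variant, under the same assumptions as above (a sequence my_sequence and an order_table whose id column should default to the sequence):
-- Run on the first day of each month, e.g. via: psql -c "SELECT reset_order_sequence();"
CREATE OR REPLACE FUNCTION reset_order_sequence() RETURNS void AS $$
BEGIN
    PERFORM setval('my_sequence',
        (EXTRACT(YEAR FROM now())::integer * 1000000) +
        (EXTRACT(MONTH FROM now())::integer * 10000));
END; $$ LANGUAGE plpgsql;

-- The sequence then serves as the column default, so no trigger is needed:
ALTER TABLE order_table ALTER COLUMN id SET DEFAULT nextval('my_sequence');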
Another idea, which I would prefer, is this:
Create a function which generates your id from the timestamp and your invoice number.
Create a regular table with:
a foo_id: a simple sequence (incrementing int)
a ts_created field.
Generate your invoice ids in the query when required.
Here is how it looks. First we create the function to generate an acme_id from a bigint and a timestamp:
CREATE FUNCTION acme_id( seqint bigint, seqts timestamp with time zone )
RETURNS char(10)
AS $$
SELECT format(
    '%s%s%s',
    EXTRACT(year FROM seqts),
    to_char(seqts, 'MM'),        -- zero-padded month
    to_char(seqint, 'FM0000')    -- zero-padded counter
);
$$ LANGUAGE SQL
IMMUTABLE;
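A quick sanity check with made-up values (matching the example from the question):
SELECT acme_id(1, timestamptz '2014-06-05 12:00:00');  -- 2014060001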
And then we create a table.
CREATE TABLE foo (
foo_id int PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
data text,
ts_created timestamp with time zone DEFAULT NOW()
);
CREATE INDEX ON foo(ts_created, foo_id);
Now you can generate what you're looking for with a simple window function.
SELECT acme_id(
ROW_NUMBER() OVER (
PARTITION BY date_trunc('MONTH', ts_created)
ORDER BY ts_created
),
ts_created
), *
FROM foo;
I would build my system such that the foo_id is used internally. So long as you don't have deletions from foo, you'll always be able to render the same invoice id from the row; you just won't have to store it.
You can even cache the rendering and invoice ids with a [materialized] view.
CREATE MATERIALIZED VIEW acme_invoice_view
AS
SELECT acme_id(
ROW_NUMBER() OVER (
PARTITION BY date_trunc('MONTH', ts_created)
ORDER BY ts_created
),
ts_created
), *
FROM foo;
SELECT * FROM acme_invoice_view;
acme_id | foo_id | insert_date | data
------------+--------+-------------+------
2021100001 | 1 | 2021-10-12 | bar
(1 row)
Keep in mind the drawbacks to this approach:
Rows in the invoice table can never be deleted (you could add a bool to deactivate them).
The foo_id and ts_created should be immutable (never updated) or you may get a new invoice ID. Surrogate keys (foo_id) should never change anyway, by definition.
The benefits of this approach:
Storing a real timestamp which is likely very useful on an invoice
A real surrogate key (which I would use in all contexts instead of an invoice ID) simplifies linking to other tables and is more efficient.
Single source of truth for the invoice date
Easy to issue a new invoice-id scheme and to even map it to an older scheme.

Calculating age from birthday with oracle plsql trigger and insert the age in table

I have a table
dates
(dob date,
age number(4)
);
I will insert a date of birth and a trigger will calculate the age and insert that age in the age field.
CREATE OR REPLACE PROCEDURE getage IS
ndob date;
nage number(10);
BEGIN
select dob into ndob from dates;
select (sysdate-to_date(ndob))/365 into nage from dual;
update dates set age=nage;
END;
/
SHOW ERRORS;
This procedure works fine, but only for one row; I need it to work for all the rows via a trigger, but if I call it from a trigger then an error occurs.
CREATE OR REPLACE TRIGGER agec after INSERT OR UPDATE ON dates
FOR EACH ROW
BEGIN
getage;
END;
/
Please help... I really need this...
No, you don't. I'm not sure you'll pay attention; and there's no reason why you should :-) but:
Do not store age in your database; you are absolutely guaranteed to be wrong occasionally. Each person's age changes only once a year, but on any given day it changes for some of the people in your table. This in turn means you need a batch job to run every day and update age. If this fails, or isn't extremely strict and gets run twice, you're in trouble.
You should always calculate the age when you need it. It's a fairly simple query and saves you a lot of pain in the longer run.
select floor(months_between(sysdate,<dob>)/12) from dual
I've set up a little SQL Fiddle to demonstrate
Now, to actually answer your question
this procedure works fine but for only one row ... but for all the rows i need trigger but if i call it from a trigger then the error occurs...
You don't mention the error (please do this in future as it's very helpful), but I suspect you're getting:
ORA-04091: table string.string is mutating, trigger/function may not see it
This is because your procedure is querying the table that is being updated. Oracle does not allow this in order to maintain a read-consistent view of the data. The way to avoid this is to not query the table, which you don't need to do. Change your procedure to a function that returns the correct result given a date of birth:
function get_age (pDOB date) return number is
/* Return the number of full years between
the date given and sysdate.
*/
begin
return floor(months_between(sysdate,pDOB)/12);
end;
Notice once again that I'm using the months_between() function as not all years have 365 days.
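For example, assuming the function has been created as a standalone function, with a made-up date of birth:
select get_age(date '1990-06-15') as age from dual;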
In your trigger you then assign the value directly to the column.
CREATE OR REPLACE TRIGGER agec before INSERT OR UPDATE ON dates
FOR EACH ROW
BEGIN
:new.age := get_age(:new.dob);
END;
The :new.<column> syntax is a reference to the <column> that is being updated. In this case :new.age is the actual value that is going to be put in the table.
This means that your table will automatically be updated, which is the point of a DML trigger.
As you can see there's little point to the function at all; your trigger can become
CREATE OR REPLACE TRIGGER agec before INSERT OR UPDATE ON dates
FOR EACH ROW
BEGIN
:new.age := floor(months_between(sysdate, :new.dob)/12);
END;
However, having said that, if you are going to use this function elsewhere in the database then keep it separate. It's good practice to keep code that is used in multiple places in a function like this so it is always used in the same way. It also ensures that whenever anyone calculates age they'll do it properly.
As a little aside, are you sure you want to allow people to be 9,999 years old? Or 0.000000000001998 (proof)? Numeric precision is based on the number of significant digits; this (according to Oracle) is non-zero digits only. You can easily be caught out by this. The point of a database is to restrict the possible input values to only those that are valid. I'd seriously consider declaring your age column as number(3,0) to ensure that only "possible" values are included.
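For instance, something along these lines (a sketch against the table from the question):
alter table dates modify (age number(3,0));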
If you do it with a trigger, the age is potentially wrong after just one day. This is why saving the age in a database table is bad practice and nobody does that. What you need is a view.
create view person_age as
select person_id,
floor(months_between(trunc(sysdate),birthdate)/12) as age
from birthdays;
Look at the SQL Fiddle
If you want AGE to be available from the database there are two options:
Define a view which contains the age computation, or
Define a virtual column which does the same
To define such a view:
CREATE OR REPLACE VIEW DATES_VIEW
AS SELECT DOB,
FLOOR(MONTHS_BETWEEN(SYSDATE, DOB) / 12) AS AGE
FROM DATES
The problem with this is that you have to remember that you SELECT from DATES_VIEW, but need to update DATES, which is mentally messy. IMO adding a virtual column to the table is cleaner:
CREATE TABLE DATES
(DOB DATE,
AGE GENERATED ALWAYS AS (FLOOR(MONTHS_BETWEEN(SYSDATE, DOB) / 12)) VIRTUAL);
Note that virtual columns cannot be updated.
Either method helps to ensure that AGE will always be consistent.
Share and enjoy.

Prevent inserting overlapping date ranges using a SQL trigger

I have a table that simplified looks like this:
create table Test
(
ValidFrom date not null,
ValidTo date not null,
check (ValidTo > ValidFrom)
)
I would like to write a trigger that prevents inserting values that overlap an existing date range. I've written a trigger that looks like this:
create trigger Trigger_Test
on Test
for insert
as
begin
if exists(
select *
from Test t
join inserted i
on ((i.ValidTo >= t.ValidFrom) and (i.ValidFrom <= t.ValidTo))
)
begin
raiserror (N'Overlapping range.', 16, 1);
rollback transaction;
return
end;
end
But it doesn't work, since my newly inserted record is part of both the Test table and the inserted table while inside the trigger. So the new record in inserted is always joined to itself in Test, and the trigger always rolls back the transaction.
I can't distinguish new records from existing ones. And if I excluded matching date ranges to avoid the self-match, I would then be able to insert multiple exactly-same ranges into the table.
The main question is
Is it possible to write a trigger that would work as expected without adding an additional identity column to my Test table that I could use to exclude newly inserted records from my exists() statement like:
create trigger Trigger_Test
on Test
for insert
as
begin
if exists(
select *
from Test t
join inserted i
on (
i.ID <> t.ID and /* exclude myself out */
i.ValidTo >= t.ValidFrom and i.ValidFrom <=t.ValidTo
)
)
begin
raiserror (N'Overlapping range.', 16, 1);
rollback transaction;
return
end;
end
Important: if "impossible without an identity column" is the only answer, I welcome you to present it along with a reasonable explanation of why.
I know this is already answered, but I tackled this problem recently and came up with something that works (and performs well doing a singleton seek for each inserted row). See the example in this article:
http://michaeljswart.com/2011/06/enforcing-business-rules-vs-avoiding-triggers-which-is-better/
(and it doesn't make use of an identity column)
Two minor changes and everything should work just fine.
First, add a where clause to your trigger to exclude the duplicate records from the join. Then you won't be comparing the inserted records to themselves:
select *
from testdatetrigger t
join inserted i
on ((i.ValidTo >= t.ValidFrom) and (i.ValidFrom <= t.ValidTo))
Where not (i.ValidTo=t.Validto and i.ValidFrom=t.ValidFrom)
Except, this would allow for exact duplicate ranges, so you will have to add a unique constraint across the two columns. Actually, you may want a unique constraint on each column, since any two ranges that start (or finish) on the same day are by default overlapping.
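For example, something along these lines (a sketch; the constraint names are made up):
alter table Test add constraint UQ_Test_ValidFrom unique (ValidFrom);
alter table Test add constraint UQ_Test_ValidTo unique (ValidTo);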