Check value in more than one inserted object - PostgreSQL / SQL

What I want to achieve is to check whether the hours count for a day is higher than 24.
For example:
I've got an entity hour with these fields:
Integer id;
LocalDate date;
Integer hours;
Now I use a POST method to add new objects, for example:
I add the first one:
id - 1
date - 22.08.2018
hours - 10
I add the second one:
id - 2
date - 22.08.2018
hours - 10
I add a third one (the hours count for the day is now higher than 24, so I need an exception to be thrown):
id - 3
date - 22.08.2018
hours - 10
What I already have:
<sql>
ALTER TABLE hours_worked ADD CONSTRAINT more_than24h CHECK (hours >=0 AND hours <= 24)
</sql>
But this one only checks that I'm not adding more than 24 hours in a single row.

You can't use a CHECK constraint to check OTHER rows - you need a trigger here; example for insert:
t=# create table i (id int, h int);
CREATE TABLE
t=# insert into i values(1,10),(1,10),(2,23);
INSERT 0 3
t=# create or replace function fi() returns trigger as $$ begin
if (select sum(h)+NEW.h > 24 from i where id = NEW.id) then
raise exception '%','over 24 for '||NEW.id;
end if;
return NEW; end;
$$ language plpgsql
;
CREATE FUNCTION
t=# create trigger ti before insert ON i for each row EXECUTE PROCEDURE fi();
CREATE TRIGGER
t=# insert into i values(1,1);
INSERT 0 1
t=# insert into i values(1,1);
INSERT 0 1
t=# insert into i values(1,1);
INSERT 0 1
t=# insert into i values(1,1);
INSERT 0 1
t=# insert into i values(1,1);
ERROR: over 24 for 1
CONTEXT: PL/pgSQL function fi() line 3 at RAISE
Of course you need a similar check for UPDATE.
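Adapted to the question's hours_worked table, a minimal sketch covering both INSERT and UPDATE could look like the following. The column name work_date is an assumption (the entity calls the field date); adjust names and the error message to your schema.
create or replace function check_daily_hours() returns trigger as $$
begin
  if (select coalesce(sum(hours), 0)
        from hours_worked
       where work_date = NEW.work_date
         and id is distinct from NEW.id)   -- don't double-count the row being updated
     + NEW.hours > 24 then
    raise exception 'more than 24 hours booked on %', NEW.work_date;
  end if;
  return NEW;
end;
$$ language plpgsql;
create trigger trg_check_daily_hours
  before insert or update on hours_worked
  for each row execute procedure check_daily_hours();
Keep in mind that two concurrent transactions inserting rows for the same day can each pass this check individually, since neither sees the other's uncommitted rows.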

Related

Trigger insert of data into table with result from another select query

I'm taking my first steps in SQL.
I'm trying to insert the last row of a timestamped table into another table.
I've written this trigger:
CREATE TRIGGER update_analysis()
AFTER INSERT ON data
FOR EACH ROW
EXECUTE PROCEDURE insert_to_analysis();
I've defined a function, which I know is wrong, but I don't understand how to write it correctly (the target table has these columns):
CREATE FUNCTION UPDATE_ANALYSIS() RETURNS TABLE
AS
$$ BEGIN
INSERT INTO ANALYSIS (TIME, CYCLE_NUMBER,CAT1,CAT2,CAT3)
SELECT (TIME, CYCLENO , I1 , I2 * 2 ,I3*3)
FROM DATA
ORDER BY TIME DESC
LIMIT 1;)
RETURN
END;
$$ LANGUAGE 'plpgsql';
Thanks in advance
If you are referencing the same data you just inserted into data, you can simply refer to what you inserted via NEW instead of selecting it back, like so:
CREATE FUNCTION UPDATE_ANALYSIS() RETURNS TRIGGER
AS
$$ BEGIN
INSERT INTO ANALYSIS (TIME, CYCLE_NUMBER,CAT1,CAT2,CAT3)
VALUES (NEW.TIME, NEW.CYCLENO , NEW.I1 , NEW.I2 * 2 ,NEW.I3 * 3);
RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';
Since it's a function for a trigger, it should return a trigger.
And it's for each row, so an insert from values is possible.
Example
CREATE FUNCTION FN_INSERT_TO_ANALYSIS() RETURNS TRIGGER
AS $ins_analysis$
BEGIN
INSERT INTO ANALYSIS (TIME, CYCLE_NUMBER, CAT1, CAT2, CAT3)
VALUES (NEW.TIME, NEW.CYCLENO, NEW.I1, NEW.I2 * 2 , NEW.I3 * 3);
RETURN NEW;
END $ins_analysis$ LANGUAGE 'plpgsql';
CREATE TRIGGER trg_data_ins_analysis
AFTER INSERT ON data
FOR EACH ROW
EXECUTE PROCEDURE fn_insert_to_analysis();
insert into data values (current_timestamp,1,1,1,1)
select * from data
time                       | cycleno | i1 | i2 | i3
---------------------------+---------+----+----+----
2021-12-27 14:53:17.822649 |       1 |  1 |  1 |  1
select * from ANALYSIS
time                       | cycle_number | cat1 | cat2 | cat3
---------------------------+--------------+------+------+------
2021-12-27 14:53:17.822649 |            1 |    1 |    2 |    3
Demo on db<>fiddle here

Change number of Rows Affected by Update

What I am trying to achieve here is basically to override "0 rows updated" when an UPDATE is issued and the PK/UK value doesn't actually exist in the table. This is what I have done:
Actual Table:
CREATE TABLE fdrgiit.vereine(
team numeric(10) primary key,
punkte int not null,
serie int not null
);
Dummy Table:
CREATE TABLE fdrgiit.dummyup
(
id numeric(1) PRIMARY KEY,
datetest timestamp
);
Inserted records in both the tables:
insert into vereine(team,punkte,serie) values(1, 50, 1);
insert into vereine(team,punkte,serie) values(2, 30, 1);
insert into vereine(team,punkte,serie) values(3, 25, 1);
insert into vereine(team,punkte,serie) values(4, 37, 2);
insert into dummyup values(1, now());
Created the following function and trigger:
create or replace function updateover()
returns trigger as
$BODY$
begin
if EXISTS (select 1 FROM vereine WHERE team = new.team ) then
RETURN NEW;
else
UPDATE fdrgiit.dummyup set datetest=now() where id=1;
RETURN NULL;
end if;
end;
$BODY$
LANGUAGE plpgsql;
create trigger update_redundancy
before update on vereine
for each row
execute procedure updateover() ;
But when I execute an UPDATE like this, I still get 0 rows affected:
update vereine set punkte=87 where team=5;
Kindly review and please suggest if this is something that can be done.
You cannot trigger anything with an UPDATE that does not affect any rows, as row-level triggers are only fired for affected rows.
But you could wrap your alternative UPDATE into a function:
CREATE OR REPLACE FUNCTION updateover()
RETURNS int AS
$func$
UPDATE dummyup
SET datetest = now()
WHERE id = 1
RETURNING 2;
$func$ LANGUAGE sql;
... and run your UPDATE nested like this:
WITH upd AS (
UPDATE vereine
SET punkte = 87
WHERE team = 5 -- does not exist!
RETURNING 1
)
SELECT 1 FROM upd
UNION ALL
SELECT updateover()
LIMIT 1;
db<>fiddle here
If no row qualifies for the UPDATE, the first SELECT 1 FROM upd returns no row and Postgres keeps processing the second SELECT updateover(). But if at least one row is affected, the final SELECT is never executed. Exactly what you want.
This updates dummyup one time if the UPDATE on vereine does not affect any rows; never several times. But that's ok, since now() is STABLE for the duration of the transaction.
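For contrast, the same nested statement against a team that does exist (using the sample rows from the question) returns from the first branch, so updateover() is never called and dummyup stays untouched:
WITH upd AS (
   UPDATE vereine
   SET    punkte = 87
   WHERE  team = 1   -- exists
   RETURNING 1
)
SELECT 1 FROM upd
UNION ALL
SELECT updateover()
LIMIT 1;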
Related:
Return a value if no record is found

Timetable allocation using SQL

I have these two tables in my database:
Session(startTime,endTime,date)
Allocation(startTime,endTime,date)
There are already existing sessions in the timetable, and I need to allocate a new session in a way that there are no clashes between sessions. I thought about something like:
ALTER TABLE allocation ADD CONSTRAINT timeC
check (startTime not between (select startTime from session)
and (select endTime from session))
The problem is that it's impossible to do so as we can't use the keyword "between" for two sets of values (same thing with the endTime).
How can I manage to add this constraint? (I use Oracle 11g)
That can't be done via a check constraint; a trigger might help, though (as Gordon has already said). Here's an example:
SQL> create table tsession
2 (id number constraint pk_tsess primary key,
3 starttime date,
4 endtime date);
Table created.
SQL>
SQL> create table tallocation
2 (id number constraint fk_all_sess references tsession (id),
3 starttime date,
4 endtime date);
Table created.
SQL>
SQL> create or replace trigger trg_biu_all
2 before insert or update on tallocation
3 for each row
4 declare
5 l_dummy varchar2(1);
6 begin
7 select 'x'
8 into l_dummy
9 from tsession s
10 where s.id = :new.id
11 and :new.starttime between s.starttime and s.endtime;
12
13 raise_application_error(-20001, 'Can not set such a start time as it collides with TSESSION values');
14 exception
15 when no_data_found then
16 -- OK, no problem - no collision
17 null;
18 end;
19 /
Trigger created.
Now, testing:
SQL> -- Insert master record, ID = 1; it'll take whole February
SQL> insert into tsession values (1, date '2018-02-01', date '2018-02-28');
1 row created.
SQL> -- I don't want to allow this date to "jump in" between 2018-02-01 and 2018-02-28
SQL> insert into tallocation (id, starttime) values (1, date '2018-02-13');
insert into tallocation (id, starttime) values (1, date '2018-02-13')
*
ERROR at line 1:
ORA-20001: Can not set such a start time as it collides with TSESSION values
ORA-06512: at "HR.TRG_BIU_ALL", line 10
ORA-04088: error during execution of trigger 'HR.TRG_BIU_ALL'
SQL> -- This one should be OK, as it is in March
SQL> insert into tallocation (id, starttime) values (1, date '2018-03-22');
1 row created.
SQL>
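If endtime has to be considered as well, the SELECT inside the trigger can be turned into a full interval-overlap test. A sketch, assuming both :new.starttime and :new.endtime are always set:
  select 'x'
    into l_dummy
    from tsession s
   where s.id = :new.id
     and :new.starttime <= s.endtime
     and :new.endtime   >= s.starttime;  -- overlaps when neither interval ends before the other starts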

PostgreSQL Trigger/Function to ensure date is in past

I want to put a constraint on a date of birth field for one of my db tables. Essentially I want to ensure pat_dob_dt is at least 16 years ago (from current date). I am using PostgreSQL 8.4.20 and used here for guidance:
CREATE OR REPLACE FUNCTION patient_dob_in_past()
RETURNS TRIGGER AS $$
BEGIN
-- check pat_dob_dt is in past --
IF ( NEW.pat_dob_dt > current_date - interval '16 years' ) THEN
RAISE EXCEPTION '% must be 16 years in past', NEW.pat_dob_dt
END IF;
RETURN NEW;
END;
$$ language 'plpgsql';
CREATE OR REPLACE TRIGGER patient_dob_in_past BEFORE UPDATE OR INSERT
ON patients FOR EACH ROW EXECUTE PROCEDURE patient_dob_in_past();
Unfortunately I am met with the following error
ERROR: syntax error at or near "END" at character 14
QUERY: SELECT $1 END IF
CONTEXT: SQL statement in PL/PgSQL function "patient_dob_in_past" near line 4
LINE 1: SELECT $1 END IF
Not sure where I am going wrong since I am following the psql docs for 8.4
EDIT
The semicolon fixed the function issue. I also get an error for my trigger:
ERROR: syntax error at or near "TRIGGER" at character 19
LINE 1: CREATE OR REPLACE TRIGGER patient_dob_in_past BEFORE UPDATE ...
try:
CREATE OR REPLACE FUNCTION patient_dob_in_past()
RETURNS TRIGGER AS $$
BEGIN
-- check pat_dob_dt is in past --
IF ( NEW.pat_dob_dt > current_date - interval '16 years' ) THEN
RAISE EXCEPTION '% must be 16 years in past', NEW.pat_dob_dt;
END IF;
RETURN NEW;
END;
$$ language 'plpgsql';
also https://www.postgresql.org/docs/current/static/sql-createtrigger.html
CREATE OR REPLACE TRIGGER
will fail, as CREATE TRIGGER does not support OR REPLACE in this PostgreSQL version - use just CREATE TRIGGER instead.
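For reference, that is the question's trigger statement with OR REPLACE dropped:
CREATE TRIGGER patient_dob_in_past BEFORE UPDATE OR INSERT
ON patients FOR EACH ROW EXECUTE PROCEDURE patient_dob_in_past();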
Also, why not a CHECK constraint? E.g.:
t=# create table q2(t timestamptz check (t < now() - '16 years'::interval));
CREATE TABLE
t=# insert into q2 select now();
ERROR: new row for relation "q2" violates check constraint "q2_t_check"
DETAIL: Failing row contains (2017-10-10 11:41:01.062535+00).
t=# insert into q2 select now() - '16 years'::interval;
ERROR: new row for relation "q2" violates check constraint "q2_t_check"
DETAIL: Failing row contains (2001-10-10 11:41:13.031769+00).
t=# insert into q2 select now() - '16 years'::interval -'1 second'::interval;
INSERT 0 1
Update:
In case there are existing rows that would not pass the check constraint, you can add it with NOT VALID, so the existing data is not checked (new rows still are), e.g.:
t=# create table q2(t timestamptz);
CREATE TABLE
t=# insert into q2 select now();
INSERT 0 1
t=# alter table q2 add constraint q2c check (t < (now() - '16 years'::interval)) not valid;
ALTER TABLE
t=# insert into q2 select now();
ERROR: new row for relation "q2" violates check constraint "q2c"
DETAIL: Failing row contains (2017-10-10 11:56:02.705578+00).
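Once the existing rows that violate the check are fixed or removed, the constraint can be validated after the fact:
alter table q2 validate constraint q2c;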
You missed a semicolon at the end of the line:
RAISE EXCEPTION '% must be 16 years in past', NEW.pat_dob_dt;

Different timestamp for different DML queries in single transaction in oracle

I am doing an insert and a delete operation on a table in a single transaction. I have a trigger on this table which updates a log. The log has a primary key sequenceId which always gets incremented on insertion. I do the delete first and then the insert in the transaction.
I have two issues:
The timestamp in the log for the insert and the delete is the same. Can I force it to be different?
The order of operations (insert/delete) in the log is getting reversed. It shows the delete operation coming after the insert operation (according to sequenceId). How can I ensure that the order is consistent in the log (insert after delete)?
Example :
create table address (ID number, COUNTRY char(2));
create table address_log(SEQ_ID number, ID number, COUNTRY char(2), DML_TYPE char(1), CHANGE_DATE timestamp(6));
create sequence seq_id start with 1 increment by 100 nominvalue nomaxvalue cache 20 noorder;
create or replace trigger trg_add
before insert or delete on address
FOR EACH ROW
BEGIN
if inserting then
insert into address_log values(SEQ_ID.nextval, :new.ID, :new.COUNTRY, 'I', sysdate);
else
insert into address_log values(SEQ_ID.nextval, :old.ID, :old.COUNTRY, 'D', sysdate);
end if;
end;
insert into address values(1,'US');
insert into address values(2,'CA');
delete from address where id = 1;
insert into address values(3,'UK');
delete from address where id = 3;
If I commit the last DML statements in a single transaction, then I should see the same order in address_log.
What is the datatype of your timestamp column?
If you use TIMESTAMP with a large enough precision, the order should be preserved.
For example TIMESTAMP(6) (precision to the micro-second) -- which is the default precision:
SQL> CREATE TABLE t_data (ID NUMBER, d VARCHAR2(30));
Table created
SQL> CREATE TABLE t_log (ts TIMESTAMP (6), ID NUMBER, action VARCHAR2(1));
Table created
SQL> CREATE OR REPLACE TRIGGER trg
2 BEFORE INSERT ON t_data
3 FOR EACH ROW
4 BEGIN
5 INSERT INTO t_log VALUES (systimestamp, :NEW.id, 'I');
6 END;
7 /
Trigger created
SQL> INSERT INTO t_data (SELECT ROWNUM, 'x' FROM dual CONNECT BY LEVEL <= 10);
10 rows inserted
SQL> SELECT * FROM t_log ORDER BY ts;
TS ID ACTION
----------------------------- ---------- ------
19/06/13 15:47:51,686192 1 I
19/06/13 15:47:51,686481 2 I
19/06/13 15:47:51,686595 3 I
19/06/13 15:47:51,686699 4 I
19/06/13 15:47:51,686800 5 I
19/06/13 15:47:51,686901 6 I
...
In any case, if you really want to distinguish simultaneous events (concurrent inserts for instance), you can always use a sequence in addition, with the ORDER keyword to guarantee that the rows will be ordered:
CREATE SEQUENCE log_sequence ORDER
This would allow you to have a reliable sort order, even though the events took place at the same time.
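Applied to the address_log example from the question, a sketch could look like this; it keeps the question's sequence options but switches NOORDER to ORDER, and logs systimestamp instead of sysdate so CHANGE_DATE keeps sub-second precision:
-- sequence that hands out values in request order
create sequence seq_id start with 1 increment by 100 nominvalue nomaxvalue cache 20 order;
-- inside the trigger, log systimestamp instead of sysdate
insert into address_log values(SEQ_ID.nextval, :new.ID, :new.COUNTRY, 'I', systimestamp);
-- read the log back in operation order
select * from address_log order by seq_id;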